Tuesday, December 29, 2009

Tuesday, Tuesday, Tuesday!

Matt Weiner and I will be discussing truth and warranted assertion at 10:00. I'm not entirely sure where the talk is supposed to take place. Hope to see you there!

Sunday, December 27, 2009

Justification isn't the right to believe?

I've been reading a forthcoming piece by Jeffrey Glick (too lazy for PQ link) where he argues against the view that having a justification for believing is a matter of having a certain kind of epistemic right. I take the view to be an utter triviality, so I had to take a look. (I can't imagine what a case of having a right to believe without having a justification for believing would look like, and I can't imagine how a successful justification for believing wouldn't show that having the relevant belief is consistent with the satisfaction of epistemic duty.) We disagree about (JustRightEPR). I like it. He doesn't. He writes:
JustRightEPR. A candidate for duties S has a justified belief in a proposition p if and only if
(a) S believes that p
(b) S has no epistemic duty not to believe that p.

If we now allow that the reference to an epistemic right in (JustRight) is to be interpreted as the relatively weak (JustRightEPR), as Wenar concluded, then the path is laid to save the view that justification confers an epistemic right to believe, and this parallel between ethics and epistemology is preserved. For it is surely the case that when one’s total evidence justifies one in believing that p, there is no epistemic duty against believing p ...

But things are not as straightforward as they may first appear. When philosophers talk of having an epistemic right to believe, or being epistemically entitled to believe, or having the epistemic authority to believe, they have in mind something stronger than the mere permissibility which (JustRightEPR) implies. The epistemic privilege right to believe is entailed by having on-balance justification for a belief, but there is more to having on balance justification for p than simply having no epistemic duty not to believe that p. This difference is what undermines the (JustRightEPR) interpretation of (JustRight). [I disagree with this--CL]

Here is an example of an ordinary privilege right. Smith picks up a piece of seaweed floating in international waters. Her doing so is permissible; she has no obligation not to pick up the seaweed. If she had chosen instead to leave it be, she would have done something permissible. Smith has no obligation not to leave it be. If the seaweed in the example is replaced by an object in the water that would have some non-negligible benefit to Smith but would serve no significant purpose to anyone else, a shell which would complete her collection, perhaps, again if Smith picks it up she does something permissible. If she chooses not to pick it up, she also does something permissible. She has no moral obligation to pick up the shell. It certainly is in her interest to pick up the shell. She would benefit if she did so, and the effort to acquire the shell is minimal. It may be prudentially obligatory that Smith should pick up the shell, but that is not the same as moral obligation.

This structure is not reflected in an epistemic right to believe. Suppose a rational adult agent Jones who has a total body of evidence E is considering which doxastic attitude to hold towards some proposition p. Suppose E justifies p: very roughly, were Jones to believe that p on the basis of E, he would be justified in believing that p. But p is false, and so it is false that were Jones to believe that p on the basis of E, then he would know that p. If he does in fact believe that p, he will do what is epistemically permissible. He has no epistemic obligation not to believe p given the facts about his evidence. But if Jones instead does not believe that p, then to maintain parity with the discussion of moral privilege rights, it should often not be the case that he has done something epistemically impermissible. He should often lack an epistemic duty not to disbelieve that p. But it is false that often Jones does something epistemically permissible when he believes that not-p when his total evidence justifies p. Therefore the view that epistemic rights are merely privilege rights is false.

This strikes me as a rather strange argument. I can't see how the argument could succeed unless we were to assume that there was no epistemic duty to refrain from believing without adequate evidence. But isn't there such a duty?

Bracket that. Glick seems to assume that you have a privilege right only when you also have a lot of latitude. But there's nothing in the concept of a privilege right that suggests (in a way that is obvious to me) that such rights are enjoyed only when there's also a lot of latitude in how to exercise them. Yes, there's a difference between picking up seaweed and picking up an attitude, but I don't see why the fact that there's less doxastic latitude entails that (JustRightEPR) is false, since that claim entails nothing about the amount of doxastic latitude we have.

Here's a view on which justified belief is just the belief you have a right to: your only epistemic duty is to refrain from believing what you don't know. I don't see anything above that suggests you couldn't defend this version of (JustRightEPR).

Saturday, December 26, 2009

Reasonable religious disagreement and the private evidence problem

Suppose Feldman is right that reasonable people cannot (in full awareness of each other) draw different conclusions from the same evidence while each regards the other as a peer. He thinks that in such cases, the reasonable thing to do is suspend judgment. He notes that in realistic cases, we won't have the same evidence to support our beliefs:
In any realistic case, the totality of one’s evidence concerning a proposition will be a long and complex story, much of which may be difficult to put into words. This makes it possible that each party to a disagreement has an extra bit of evidence, evidence that has not been shared. You might think that each person’s unshared evidence can justify that person’s beliefs. For example, there is something about the atheist’s total evidence that can justify his belief, and there is something different about the theist’s total evidence that can justify her belief. Of course, not all cases of disagreement need to turn out this way. But perhaps some do, and perhaps this is what the students in my class thought was going on in our class. And, more generally, perhaps this is what people generally think is going on when they conclude that reasonable people can disagree.

Can we say that reasonable disagreement is possible in cases where the parties to the disagreement have 'private evidence'? He says, "It is possible that the private evidence includes the private religious (or nonreligious) experiences one has", but he seems to think that these experiences won't present much of a problem. The idea is that the theist alleges that she has private evidence for her beliefs, and this allows someone with a conciliatory view to say that the theist is reasonable in the face of disagreement.

He says:
This response will not do. To see why, compare a more straightforward case of regular sight, rather than insight. Suppose you and I are standing by the window looking out on the quad. We think we have comparable vision and we know each other to be honest. I seem to see what looks to me like the dean standing out in the middle of the quad. (Assume that this is not something odd. He’s out there a fair amount.) I believe that the dean is standing on the quad. Meanwhile, you seem to see nothing of the kind there. You think that no one, and thus not the dean, is standing in the middle of the quad. We disagree. Prior to our saying anything, each of us believes reasonably. Then I say something about the dean’s being on the quad, and we find out about our situation. In my view, once that happens, each of us should suspend judgment. We each know that something weird is going on, but we have no idea which of us has the problem. Either I am ‘‘seeing things,’’ or you are missing something. I would not be reasonable in thinking that the problem is in your head, nor would you be reasonable in thinking that the problem is in mine.

I don't think this helps with mystical experience, not if the theist alleges that the object of such experiences chooses whom to reveal itself to. It's not as if you can sneak a peek at God, on the theist's view, by hiding behind a bush while God appears to someone else.

Many of Feldman's remarks have to do with feelings of obviousness and insight that the theist and atheist can share, but if these remarks are intended to deal with religious experience, they don't seem to work:
Similarly, I think, even if it is true that the theists and the atheists have private evidence, this does not get us out of the problem. Each may have his or her own special insight or sense of obviousness. But each knows about the other’s insight. Each knows that this insight has evidential force. And now I see no basis for either of them justifying his own belief simply because the one insight happens to occur inside of him. A point about evidence that plays a role here is this: evidence of evidence is evidence. More carefully, evidence that there is evidence for P is evidence for P. Knowing that the other has an insight provides each of them with evidence.

Suppose that evidence that there is evidence for P is evidence for P. Can't P be better supported by the ground level evidence for P than the evidence that such evidence exists? If so, will the atheist really 'share' the theist's evidence when the theist reports a mystical experience?
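Here's a toy model that makes the worry vivid (the numbers are mine, and the model brackets complications about screening off). Let G be the proposition that God exists, E the theist's ground-level evidence, and R the theist's report that she has E. Suppose that for the theist, Pr(G|E) = 0.95. The atheist has only R and suspects the report reflects a non-veridical experience, say Pr(E|R) = 0.2, and suppose Pr(G|~E) = 0.3 (roughly the atheist's prior, since the prior probability of E is low). Assuming R bears on G only by way of E, Pr(G|R) = Pr(G|E)Pr(E|R) + Pr(G|~E)Pr(~E|R) = (0.95)(0.2) + (0.3)(0.8) = 0.43. So the report is some evidence for G (it moves the atheist from roughly 0.3 to 0.43), but the ground-level evidence supports G far better (0.95) than the evidence that such evidence exists.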

Suppose we think of evidence as non-inferential knowledge. The theist claims that they know non-inferentially that God is speaking to them. This allegation, if true, means that their evidence rules out the hypothesis that there's no God. I see no reason to think that the theist's report of such an experience gives the atheist evidence that rules out the hypothesis that there's no God, and no reason to think that the upper limit of evidential support provided by an experience is determined by the degree of support a report of that experience can provide for another. So, I don't think there's anything in these passages that deals with the problem of private evidence understood as a kind of (alleged) mystical experience.

Now, just to be clear, that doesn't mean that the atheist should defer. They shouldn't believe that the kinds of experiences that the theist reports are possible. (Should the theist believe they have the kinds of experiences they do? If 'should believe' is cashed out in the way that I think Feldman wants to cash it out, I cannot say with much confidence that the theist shouldn't believe that they could have the kinds of mystical experiences they report. (If, however, you shouldn't believe p if p is false or p isn't something you know, that's another matter ...)) The problem is that I don't see how Feldman can use the conciliatory view in the way he seems to want to without either begging the question against the theist who claims to know God directly via mystical experience or assuming something questionable about the kind of justificatory support experience provides.

You better watch out!



Thank goodness he's not real!

(Courtesy of Sketchy Santas)

Disagreement and universalism

I've been reading Foley's Intellectual Trust book and while it's filled with interesting stuff, I'm having a hard time figuring out what Foley's views are concerning disagreement (or maybe what they ought to be given what he's said). Foley defends universalism, and the universalist believes:

(U) If you discover that another person believes p, this provides you with a prima facie reason to believe p even if you happen to know nothing about the reliability of this other person.

Foley accepts universalism because he believes:
(i) that we should place trust in ourselves;
(ii) that there is rational pressure to place the same trust in others that we place in ourselves.

His argument for (i) is rather straightforward—self-trust is an essential part of any non-skeptical outlook. His arguments for (ii) are contained in these passages. First:
Our belief systems are saturated with the opinions of others. In our childhoods, we acquire beliefs from parents, siblings, and teachers without much thought. These constitute the backdrop against which we form yet other beliefs, and, often enough, these latter beliefs are also the products of other people’s beliefs. We hear testimony from those we meet, read books and articles, listen to television and radio reports, and then form opinions on the basis of these sources of information. Moreover, our most fundamental concepts and assumptions, the material out of which our opinions are built, are not self-generated but rather are passed down to us from previous generations as part of our intellectual inheritance. We are not intellectual atoms, unaffected by one another. Our views are continuously and thoroughly shaped by others. But then, if we have intellectual trust in ourselves, we are pressured also to have prima facie intellectual trust in others. For, insofar as the opinions of others have shaped our opinions, we would not be reliable unless they were (Foley 2004: 102).

Second:
[U]nless one of us has had an extraordinary upbringing, your opinions have been shaped by an intellectual and physical environment that is broadly similar to the one that has shaped my opinions. Moreover, your cognitive equipment is broadly similar to mine. So, once again, if I trust myself, I am pressured on the threat of inconsistency also to trust you (Foley 2004: 102).


At first, I thought that the universalist would be sympathetic to the conciliatory view. The universalist view is motivated by the thought that epistemic egoism and egotism are incoherent. (Basically, those who adopt these views don't take the fact that others believe p to be a prima facie reason to believe likewise.) But, it isn't clear that this is the view that Foley likes. He writes:
[T]here is an important and common way in which the prima facie credibility of someone else’s opinion can be defeated even when I have no specific knowledge of the individual’s track record, capacities, training, evidence, or background. It is defeated when our opinions conflict, because, by my lights, the person has been unreliable. Whatever credibility would have attached to the person’s opinion as a result of my general attitude of trust toward the opinions of others is defeated by the trust I have in myself. It is trust in myself that creates for me a presumption in favor of other people’s opinions, even if I know little about them. Insofar as I trust myself and insofar as this trust is reasonable, I risk inconsistency if I do not trust others, given that their faculties and environment are broadly similar to mine. But by the same token, when my opinions conflict with a person about whom I know little, the pressure to trust that person is dissipated and, as a result, the presumption of trust is defeated. It is defeated because, with respect to the issue in question, the conflict itself constitutes a relevant dissimilarity between us, thereby undermining the consistency argument that generates the presumption of trust in favor of the person’s opinions about the issue. To be sure, if I have other information indicating that the person is a reliable evaluator of the issue, it might still be rational for me to defer, but in cases of conflict I need special reasons to do so (Foley 2004: 109).

The problem is that last line. Those who defend the conciliatory view aren't committed to any particular view about the proper reaction to the discovery that some schmohawk happens to believe p. Those who defend the view are interested in cases of peer disagreement. Maybe Foley thinks that the default attitude to take in light of the "self-trust radiates outward" arguments is that we treat all we come across as if they are peers but they lose that status when they disagree with us unless we have special reasons for deferring, reasons that we needn't have when we meet someone we take to be a peer up to the moment of discovering that the person we've met disagrees with us.

At any rate, the last line seems out of line with the spirit of the conciliatory view.
On its face, two claims seem in tension. If you have no attitude concerning p and you discover someone believes p, you have a prima facie reason to believe p. If you have an attitude concerning p and you discover someone believes ~p, their belief gives you only a defeated reason to believe ~p whereas the reason provided by your belief remains undefeated. The first claim is motivated by the thought that we're all in roughly the same boat and so there's no rational justification for trusting yourself and not others. That seems to suggest that a kind of deference in the face of disagreement doesn't require a special reason to justify it.

Part of what bugs me about these passages is that the mere discovery that someone disagrees with you, someone you antecedently took to be no more likely to be wrong than you are, seems like an odd defeater for the reason her attitude provides. I can see defending this line with an argument about, say, the problems of the equal weight view or some defense of the right reasons approach, but that's not what we have here. It is as if the argument against the conciliatory view is just an intuition about defeat and disagreement with what you initially took to be a peer.

Tuesday, December 22, 2009

Could 'ought' be objective but shifty?

[Fixed a gaffe]
I think something like this exchange once took place:
LD: You should do something about the kitchen and leave the living room alone.
Me: No, I think I should paint the living room and leave the kitchen alone.
LD: In that case, you should paint the walls brown or grey but not that navy blue you're looking at.

I think there are many contexts in which an advisor will (properly) advise an agent to perform a suboptimal action because she knows that the agent simply will not perform the optimal action. (I don't think this lends any support to actualism.) Nevertheless, I think that the advisor needn't be anything less than perfectly conscientious. What goes for apartment improvement goes for morality as well. I think that an advisor could be perfectly morally conscientious, know that A is better than B, but advise the agent to pursue B upon learning that the advisee won't A.

Zimmerman says this about 'ought' and the conscientious agent:
It is with overall moral obligation that the morally conscientious person is primarily concerned. When one wonders what to do in a particular situation and asks, out of conscientiousness, 'What ought I to do?,' the 'ought' expresses overall moral obligation ... Conscientiousness precludes deliberately doing what one believes to be overall morally wrong (2)

I think that even if it is with overall moral obligation that the morally conscientious advisor is primarily concerned (though it might be the values that ground those obligations that concern the conscientious agent; let that pass), there might be legitimate reasons for the advisor to 'shift' focus to something she knows full well would be a violation of the advisee's obligations (e.g., when the advisee is just dead set on acting in ways that go against obligation but can be steered to act in such a way that she does the next best thing rather than something even worse).

And this raises a question. Assuming that this is so, why can't we say that just as a morally conscientious advisor might sincerely advise someone to do something other than what they really ought to do _and_ yet be primarily concerned with overall moral obligation (e.g., when they have good reason to advise the agent to do the next best thing), the agent herself might have good reason to focus on something other than her overall obligation? She could still be primarily concerned with her overall obligation, but have some good reason to strive for something else.

Here's the basic strategy for blocking the argument for prospectivism. In cases where the agent takes herself to have adequate information, the 'ought' she is primarily concerned with is one that picks out overall moral obligation. In cases where the agent takes herself to lack adequate information to determine what she ought to do all things considered, the conscientious agent might be concerned primarily with that same 'ought', but with that 'ought' out of cognitive reach, she'll aim to bring about the best state of affairs she can work out a strategy for bringing about given her state of ignorance. Provided that the 'ought's on the lips of the conscientious agent in these cases are different, intuitions about the proper use of 'ought' under ignorance are a poor guide to the truth-conditions for the 'ought' that the conscientious agent is primarily concerned with.

Following up on the post from earlier, the conscientious agent will only shift attention away from the 'ought' that picks out overall obligation when she has good moral reason to shift her attention. This requires identifying some good moral reason to set your sights on something other than what there's overall moral reason to do. I think that the desire to minimize a certain kind of risk could be just that reason.

Two cases seem to cause trouble for the objectivist view that says that an agent always ought to do what's best:
Case 2: All the evidence at Jill’s disposal indicates (in keeping with the facts) that giving John Drug B would cure him partially and giving him no drug would render him permanently incurable, but it also indicates (in contrast with the facts) that giving him drug C would cure him completely and giving him drug A would kill him.

Case 3: All the evidence at Jill’s disposal indicates (in keeping with the facts) that giving John Drug B would cure him partially and giving him no drug would render him permanently incurable, but her evidence leaves it completely open whether it is giving him Drug A or Drug C that will kill him or cure him.

Here’s Zimmerman’s version of the objection to the objectivist view:
Put Moore [or any objectivist] in Jill’s place in Case 2. Surely, as a conscientious person, he would decide to act as Jill did and so give John drug C. He could later say, “Unfortunately, it turns out that what I did was wrong. However, since I was trying to do what was best for John, and all the evidence at the time indicated that that was indeed what I was doing, I cannot be blamed for what I did.” But now put Moore in Jill’s place in Case 3. Surely, as a conscientious person, he would once again decide to act as Jill did and so give John drug B. But he could not later say, “Unfortunately, it turns out that what I did was wrong. However, since I was trying to do what was best for John, and all the evidence at the time indicated that that was indeed what I was doing, I cannot be blamed for what I did.” He could not say this precisely because he knew at the time that he was not doing what was best for John. Hence Moore could not justify his action by appealing to the Objective View … On the contrary, since conscientiousness precludes deliberately doing what one believes to be overall morally wrong, his giving drug B would appear to betray the fact that he actually subscribed to something like the Prospective View (Zimmerman 2008: 18).

Case 4: All the evidence at Jill’s disposal indicates (in keeping with the facts) that giving John Drug B would cure him partially and giving him no drug would render him permanently incurable. Jill’s evidence strongly indicates that drug A would cure John completely and that drug C would kill him, but Jill doesn’t realize this because she doesn’t know how to compute the expected values of her options; working them out would require Bayes’ Theorem, which she doesn’t know how to use.
Intuitively, it seems that Jill oughtn’t take the chance and ought to use drug B. But, it also seems that Jill knows that this course of action is not the course of action the Prospective View or the Objective View advises. As Zimmerman stresses, it is hard to know which option maximizes expected value and the innumerate among us know that he’s right on this point. Shouldn’t we sometimes play it safe in cases like case 4? I think this is what the conscientious person would do. From an intuitive point of view, case 4 is a lot like case 3. But, if intuition suggests that this is what Jill should do and the Prospective View says Jill should give drug A, it seems those who defend the Prospective View are in the same boat as those who defend the Objective View.
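To see concretely why the views deliver these verdicts, here is a minimal worked calculation (the cardinal values and probabilities are illustrative assumptions of mine, not Zimmerman's). Let death = 0, partial cure = 0.6, and complete cure = 1, and let the expected value of giving a drug be the probability-weighted sum of these values. In Case 3, the evidence is split 50/50 between A and C, so EV(A) = EV(C) = (0.5)(1) + (0.5)(0) = 0.5, which is less than EV(B) = 0.6, and the Prospective View joins intuition in recommending B. In Case 4, the evidence puts, say, 0.9 on A's curing John, so EV(A) = (0.9)(1) + (0.1)(0) = 0.9, which exceeds EV(B) = 0.6: the Prospective View recommends A even though Jill cannot carry out the computation.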
Do we have to dumb the Prospective View down? That’s one way to go, but I think that those who defend the Prospective View don’t have to go this route. If the conscientious agent in case 4 is thinking about subsidiary obligations (i.e., what to do if she's not going to do what she ought to do), we can save the prospective view from cases like case 4 but it seems the same thing should work for case 3. It will take some work to get the details right. If you ought to A but won't and have some subsidiary obligation to do B, that's because B is second best. Instead, maybe the idea is that the obligation the agent has in mind is the best world available that she can figure out a way to realize. Something like that.

Sunday, December 20, 2009

The prospects of prospectivism?

Another question about prospectivism. Consider two challenge cases to views that say that you ought to do the best you can (as opposed to saying that you ought to do what you believe to be best, what will probably be best, or what will maximize expectable value):

Case 2: All the evidence at Jill's disposal indicates (in keeping with the facts) that giving John Drug B would cure him partially and giving him no drug would render him permanently incurable, but it also indicates (in contrast with the facts) that giving him drug C would cure him completely and giving him drug A would kill him.

Suppose that on the basis of the evidence, Jill gives drug C and kills John. Zimmerman's prospective view implies that Jill did what she ought to have done, but the objective view implies that she did not. Someone like Moore would say that although Jill acted wrongly, she is not to blame for doing so.

Some will say that this response just isn't satisfactory, but making matters worse is this:
Case 3: All the evidence at Jill's disposal indicates (in keeping with the facts) that giving John Drug B would cure him partially and giving him no drug would render him permanently incurable, but her evidence leaves it completely open whether it is giving him Drug A or Drug C that will kill him or cure him.

Zimmerman says (paraphrase) that if we put Moore in Jill's shoes in Case 2, he could say, "Unfortunately, it turns out that what I did was wrong. However, since I was trying to do what was best for John, and all the evidence at the time indicated that that was indeed what I was doing, I cannot be blamed for what I did" (18). He cannot say this in Case 3, however, because the conscientious person would knowingly do what would not be the best.

I think there are two things the objectivist might say in response. First, the objectivist might offer a sort of tu quoque. Zimmerman stresses that it can be exceptionally difficult to determine which actions will maximize expectable value, and so I think he'd acknowledge that someone can have reasonable but mistaken beliefs about which acts will maximize expectable value. I don't see why we cannot construct cases where an agent knows that the action that will maximize expectable value is either A or C, doesn't know which of these options it is, knows that one of A or C (but not which) will be the worst from the point of view of maximizing expectable value, and knows that B is somewhere in between these two options.
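Here's one way such a case might be filled in (the numbers are mine, modeled on the Case 3 schema). Suppose the agent knows that her evidence supports one of two probability assignments but cannot tell which. On the first assignment, EV(A) = 0.9 and EV(C) = 0.1; on the second, EV(A) = 0.1 and EV(C) = 0.9; on both, EV(B) = 0.6. The agent then knows that either A or C maximizes expectable value, doesn't know which, knows that one of them is the worst option by that very standard, and knows that B falls in between, so the prospectivist faces the same structure that Case 3 poses for the objectivist.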

Second, it seems that Zimmerman is assuming that the conscientious person will not do what they believe to be overall wrong. Can't the objectivist deny this? It might seem a desperate maneuver, but if that's a move that everyone has to make, we should all lump it.

Zimmerman anticipates a version of this response, and here's what he says:
One response that might be made on behalf of the Objective View is this. It is true that, if Moore were put in Jill's place in Case 3, as a conscientious person he would choose to give John Drug B. But the choice would be perfectly in keeping with his adherence to the Objective View, for it would simply constitute an attempt on his part to minimize the risk of doing wrong (20)

He says:
This response is unacceptable. I have stipulated ... that the probability that giving John Drug B will cure him only partially is 1. From the perspective of the Objective View, then, the probability that giving him this drug is wrong is 1, whereas, for each of drugs A and C, the probability that giving him the drug is wrong is less than 1. Hence, according to the Objective View, giving John Drug B does not minimize the risk of doing wrong; on the contrary, it is guaranteed to be wrong (20).

Darn, good point. Why can't the objectivist say that the agent will minimize the risk of _harm_ or _negative value_ rather than wrongdoing? I think the idea is that the conscientious agent always acts on judgments about what's right or wrong rather than what's good/bad, but if we're already denying that the conscientious agent never decides to do what he knows he oughtn't, this seems like a good fallback position.

So, a lot of this will depend upon whether there are variants of Cases 2 and 3 that cause trouble for the prospectivist, but if there are, the response I'm imagining the objectivist could use could be used by the prospectivist as well. And that would mean that the force of Cases 2 and 3 has effectively been neutralized. I need to look at Zimmerman's remarks concerning determinate levels of evidence to see if he has a way of dealing with this.

Friday, December 18, 2009

Reprehensible but not responsible?

At last year's Eastern, I picked up a copy of Zimmerman's Living with Uncertainty at the Cambridge sale where I was spotted by someone I knew who confessed to being a bit jealous that I had my mitts on the thing since he had to write a review of it. I told him he could have it (I hope I did!), but I recall he declined saying that he was supposed to get one from the journal he's reviewing it for. Karma is a funny thing. Now I'm to review it and I let the wrong person know that I had a copy already, so no free copy for me. I'm really glad that I get an excuse to read the whole thing carefully and have taken this as an excuse to order Zimmerman's earlier work, The Concept of Moral Obligation. (Among my resolutions for the new year is to stop putting so many books on my credit cards.) In the last chapter of Uncertainty, Zimmerman argues that if you don't know that your behavior is wrong, you are not morally responsible (in the backwards looking sense) for that behavior. The view seems, well, counterintuitive. But, there's an argument for it that we should consider:

(1) Alf did A, A was wrong, but Alf was ignorant of this fact at the time he did A because he did not believe it was wrong [suppose].
(2) One is culpable for ignorant behavior only if one is culpable for the ignorance on or from which it was performed.
(3) So, Alf is culpable for having done A only if he is culpable for the ignorance on or from which he A'd.
(4) However, one is culpable for something only if one was in control of that thing.
(5) Alf is culpable for having done A only if he was in control of the ignorance in which he did A.
(6) One is never directly in control of whether one believes or does not believe something.
(7) Moreover, if one is culpable for something over which one had merely indirect control, then one's culpability for it is itself merely indirect.
(8) Furthermore, one is indirectly culpable for something only if that thing was a consequence of something else for which one is directly culpable.
(9) So, Alf is culpable for having done A only if there was something else, B, for which he is directly culpable and of which the ignorance (the disbelief) in or from which he did A was a consequence.
(10) But, whatever B was, it cannot itself have been an instance of ignorant behavior because then the argument would apply all over again to it.
(11) Thus, Alf is culpable for having done A only if there was some other act or omission, B, for which he is directly culpable and of which his failure to believe that A was wrong was a consequence, and B was such that Alf believed it at the time to be wrong.

What is true of Alf is true of Brenda, Charles, Doris, Edward, Frick and Frack, etc... (176).

What about the Nazis? Zimmerman says that we cannot say they are morally responsible (unless, I guess, we assume that Hitler believed that he was acting wrongly, which I guess we shouldn't assume) but adds, "there are a variety of ways in which a person is open to moral evaluation; attributions of moral responsibility constitute only one such way. Thus, we may indeed say that the beliefs and actions of the youthful Nazi are morally reprehensible, and even that he is morally reprehensible in light of them, without saying that he is morally responsible for them" (179).

I want to get back to the argument, but I first want to try to understand this distinction between moral responsibility in the backwards-looking sense and these other forms of moral evaluation. Can you be blameworthy for something we know you're not morally responsible for? (Oops, that was a mistake.) Can you be morally reprehensible for doing deeds that we know you're not morally responsible for? We can futz around with the assumptions concerning control and control over our attitudes, but my first reaction is to say that our judgments about blame and about what's morally reprehensible (and not just bad or of negative value) are going to serve as the basis of our judgments about moral responsibility. So, to the extent that it's plausible to say that you can be reprehensible for doing something you didn't know you shouldn't do, it's plausible to say that you're responsible for those same things. If we're comfortable with the idea that you can be reprehensible for A-ing when you don't have the sort of control that Zimmerman has in mind, why not say that (4) is false?

Zimmerman says that we can imagine sadists who cannot control their sadistic impulses and who are morally reprehensible but uncontrollably so, and this seems to cause trouble for those who would say that we can have responsibility without control. But I think there's an important difference between cases of moral ignorance and cases of uncontrollable impulses; compare the person who cannot resist an impulse to do something sadistic with the sadist who identifies with the sadistic action but could resist if he tried. I don't see that it's all that bad to say that those who truly cannot control their impulses are not responsible in a backwards-looking sense, blameworthy, reprehensible, etc., whereas those who act on or from moral ignorance without being compelled to act by some irresistible impulse can be responsible for doing what they don't know they oughtn't because they identify with the wrong values. Like our Nazis.

Universalism and Disagreement

I've been reading Foley's _Intellectual Trust_ and his defense of universalism, the view that tells us that when you discover another person believes p, this provides you with a prima facie reason to believe p even if you happen to know nothing about the reliability of this other person. I've been wondering what Foley would say about cases of peer disagreement. At first, I thought he'd favor the conciliatory approach that encourages us to modify our attitudes when we discover that someone we take to be a peer disagrees. But, this passage muddies the waters significantly:
[T]here is an important and common way in which the prima facie credibility of someone else’s opinion can be defeated even when I have no specific knowledge of the individual’s track record, capacities, training, evidence, or background. It is defeated when our opinions conflict, because, by my lights, the person has been unreliable. Whatever credibility would have attached to the person’s opinion as a result of my general attitude of trust toward the opinions of others is defeated by the trust I have in myself. It is trust in myself that creates for me a presumption in favor of other people’s opinions, even if I know little about them. Insofar as I trust myself and insofar as this trust is reasonable, I risk inconsistency if I do not trust others, given that their faculties and environment are broadly similar to mine. But by the same token, when my opinions conflict with a person about whom I know little, the pressure to trust that person is dissipated and, as a result, the presumption of trust is defeated. It is defeated because, with respect to the issue in question, the conflict itself constitutes a relevant dissimilarity between us, thereby undermining the consistency argument that generates the presumption of trust in favor of the person’s opinions about the issue. To be sure, if I have other information indicating that the person is a reliable evaluator of the issue, it might still be rational for me to defer, but in cases of conflict I need special reasons to do so (Foley 2004: 109).

The problem is that last line. Those who defend the conciliatory view aren't committed to any particular view about the proper reaction to the discovery that some schmohawk happens to believe p. Those who defend the view are interested in cases of peer disagreement. Maybe Foley thinks that the default attitude to take in light of the "self-trust radiates outward" arguments is that we treat all we come across as if they are peers but they lose that status when they disagree with us unless we have special reasons for deferring, reasons that we needn't have when we meet someone we take to be a peer up to the moment of discovering that the person we've met disagrees with us. At any rate, the last line seems out of line with the spirit of the conciliatory view.

On its face, two claims seem in tension. If you have no attitude concerning p and you discover someone believes p, you have a prima facie reason to believe p. If you have an attitude concerning p and you discover someone believes ~p, their belief gives you only a defeated reason to believe ~p unless you have special reason for deferring. The first claim is motivated by the thought that we're all in roughly the same boat and so there's no rational justification for trusting yourself and not others. That seems to suggest that a kind of deference in the face of disagreement doesn't require a special reason to justify it.

He could say that there's no tension here because the discovery of disagreement undermines the justification for thinking that we're all in the same boat, epistemically. But, I don't think it's quite that simple. Here's one of the passages where Foley defends universalism:
[U]nless one of us has had an extraordinary upbringing, your opinions have been shaped by an intellectual and physical environment that is broadly similar to the one that has shaped my opinions. Moreover, your cognitive equipment is broadly similar to mine. So, once again, if I trust myself, I am pressured on the threat of inconsistency also to trust you (Foley 2004: 102).

If that's the rationale for treating your opinions as if they were my own, is there really some proposition in the rationale for universalism that gets called into question _simply_ because I've encountered an apparent peer who disagrees with me? Sure, if you believe in Zeus, I'll think we've had very different upbringings, but we're talking about cases where we respond in different ways to the same sort of evidence and have seemed up until this point to be very similar in terms of epistemic ability, intelligence, intellectual virtue, etc.

At any rate, Christensen takes Foley to be defending a view that is at odds with the conciliatory view of disagreement and I can see that. But, I can also see (or I think I can) why a universalist would be attracted to the conciliatory view. So, I'm having a hard time connecting the arguments and positions defended in the book to the literature on disagreement. I'm tired, though, so maybe I'll sort it out later.

Thursday, December 17, 2009

Examined Life



This is a trailer for Examined Life, a documentary film by Astra Taylor. It includes a series of vignettes with Cornel West, Avital Ronell, Peter Singer, Kwame Anthony Appiah, Martha Nussbaum, Michael Hardt, Slavoj Zizek, Judith Butler and Sunaura Taylor. Check it out here.

Tuesday, December 15, 2009

Watching the detectives

Should we trust the experts?

Here's Feser's $.02:
But of course there is another obvious way to interpret the results in question [He's speaking of the results of the Phil Papers survey that revealed that the majority of professional philosophers lean towards or accept atheism whereas the majority of professional philosophers who specialize in philosophy of religion lean towards or accept theism] – as clear evidence that those philosophers who have actually studied the arguments for theism in depth, and thus understand them the best – as philosophers of religion and medieval specialists naturally would – are far more likely to conclude that theism is true, or at least to be less certain that atheism is true, than other philosophers are. And if that’s what the experts on the subject think, then what the “all respondents” data shows is that most academic philosophers have a degree of confidence in atheism that is rationally unwarranted.


There's lots of interesting stuff to think about here. Should the confidence of non-experts reflect the attitudes of experts? Shouldn't this depend, in part, upon the size of the 'knowledge gap' between expert and non-expert? Suppose there's a gap. (Plausible). Is that gap anything like the gap between global warming deniers and climatologists? I don't think so, but that's still perfectly consistent with the idea that non-experts ought to be less confident in their attitudes upon learning what we've learned when the results were released.

Here's something that I think matters but I don't know what to make of it.

*Suppose the majority of the experts agree that a certain argument for the non-existence of X (electrons, phlogiston, fairies, objective moral standards, heaven, a justification for intentionally terminating a pregnancy) fails.
*Suppose that this is based on the widespread conviction that there's some adequate reason or other to believe in X.

This is all perfectly consistent with widespread disagreement amongst experts on two points:
(CP1) what the adequate reasons are for believing in X;
(CP2) what's wrong with the arguments for the non-existence of X.

So, some what-iffing based on next to nothing.

What if the experts were evenly divided in the following ways. We divide the experts into the A team and B team if we look at their attitudes concerning (CP1). We divide the experts into the C team and D team if we look at their attitudes concerning (CP2). The members of the A team thought that the reasons that the members of the B team had for believing in X were inadequate and poor for reasons readily available in the literature. The members of the B team thought that the reasons that the members of the A team had for believing in X were inadequate and poor for reasons readily available in the literature. The members of the C team thought that the members of the D team failed to neutralize the arguments for the non-existence of X because those responses rested on false premises that were shown to be false/unwarranted in the literature. The members of the D team thought that the members of the C team failed to neutralize the arguments for the non-existence of X because those responses rested on false premises that were shown to be false/unwarranted in the literature.

I can imagine some epistemologists saying that if the experts had a high degree of confidence in the hypothesis that X existed, that would be misplaced confidence given some principles about the weight of peer opinion and some evidentialist assumptions (which, admittedly, might be hard to rationally accept as a package, given that the principles about the weight of peer opinion are themselves problematic given contingent facts about what opinions are floating around). I can imagine some epistemologists saying that when expert (or 'expert') opinion is known to be not warranted by the evidence, the gap in confidence between expert and non-expert (which is really the gap between specialist and non-specialist) does not entail that the attitudes of non-specialists/non-experts are unwarranted/unreasonable/epistemically impermissible.

At any rate, I think the issue is a bit more complicated than some people have assumed. Indeed, I fear that I've oversimplified things. You've been good to read this far. Enjoy some Elvis Costello, you deserve a treat:

Friday, December 11, 2009

Fair and Balanced is perfectly consistent with Fair, Balanced, and Biased

From a Fox "News" Poll:
17. What do you think President Obama would like to do with the extra bank bailout money -- save it for an emergency, spend it on government programs that might help him politically in 2010 and 2012, or return it to taxpayers?

Thursday, December 10, 2009

Eastern APA

Had a strange dream about the APA last night. The drive in to NYC took longer than planned, there were notes apologizing that there wasn't enough beer at the talks, NYC was suffering from a severe and prolonged coffee shortage, and I couldn't hear the questions from the audience over the traffic noise. (The talk was outside at an abandoned gas station.) For reasons that weren't entirely clear, the members of the audience would all raise their hands at once. Not to ask questions, it was as if they were voting.

Upon waking, I saw that Matt had sent me his comments. Spooky.

The paper I'm giving is one of a handful of papers where I try to motivate claims about epistemic norms by appeal to claims about non-epistemic norms supported by intuition. There seem to be two responses to the general strategy:

R1: Anyone who buys into epistemic internalism will simply not have the intuition that the deontic status of an action can depend (in part) upon features of the situation that the subject is non-culpably ignorant of.

R2: Anyone who thinks about it will realize that the deontic status of actions depends (in part) upon features of the situation that the subject is non-culpably ignorant of and that the epistemic status of attitudes never depends upon features of the situation but only on the subject's non-factive mental states.

If only we could get the R1 and R2 people in a room. My response to R1 people is (in part) that there are R2 people. R2 people are tough; they seem to require a real response. This isn't a response (yet) so much as some questions and hand waving.

An example:
GIN/PETROL
The first gin and tonic was delicious, so you order a second. You promise to share this one with your partner. The drink you are given looks like a gin and tonic, has the limes you’d expect a gin and tonic to have, but it is in fact petrol and tonic. You give it to your partner to drink and she becomes violently ill as a result.

Someone who is sympathetic to (R2) might say the following:
While you oughtn't give your partner the stuff, it's not the case that you shouldn't say that your partner should drink the stuff. The giving is wrong, but the saying that you should give is not epistemically wrongful.

You can only say this if you believe that faultless wrongdoing is possible. You have to think that you’re obliged to refrain from giving someone a drink containing petrol when you justifiably believe that it’s gin and know that you’ve promised to give them some of your gin drink. You have to believe that there are inaccessible normative reasons not to Φ that not only bear on whether to Φ but can still manage to defeat whatever reasons count in favor of Φ-ing. These inaccessible reasons aren’t diminished in strength just because they are inaccessible and so these reasons can be the ‘winning’ reasons.

I can’t see how this response to GIN/PETROL could be right unless we were to assume:
(FW) There can be cases of faultless wrongdoing, cases where the subject is obliged to refrain from Φ-ing when the subject was nevertheless rational, reasonable, and responsible in Φ-ing.

In defense of the idea that the deontic status of an action and the normative standing of an attitude/assertion are linked, you can say two things. First, you can say that (FW) is false. If (FW) were true, morality would make unreasonable demands on us. Morality is, if anything, reasonable. Here’s what Fantl and McGrath say about the case:
… it is highly plausible that if two subjects have all the same very strong evidence for my glass contains gin, believe that proposition on the basis of this evidence, and then act on the belief in reaching to take a drink, those two subjects are equally justified in their actions and equally justified in treating what they each did as a reason, even if one of them, the unlucky one, has cleverly disguised petrol in his glass rather than gin. Notice that if we asked the unlucky fellow why he did such a thing, he might reply with indignation: ‘well, it was the perfectly rational thing to do; I had every reason to think the glass contained gin; why in the world should I think that someone would be going around putting petrol in cocktail glasses!?’ Here the unlucky subject is not providing an excuse for his action or treating what he did as a reason; he is defending it as the action that made the most sense for him to do and the proposition that made most sense to treat as a reason (forthcoming: 141).

If (FW) is false, the facts that the subject is ignorant of cannot be the facts that oblige the subject to act against her justified judgment about what to do and say. So, cases like GIN/PETROL aren’t a threat to (LINK):

(LINK) If S oughtn't Φ, an advisor epistemically oughtn't advise S to Φ.

Essentially, this is (R1). The problem with (R1) is that it isn't supported by intuition. Indeed, it is counterintuitive.

Suppose instead that (FW) is true and suppose factual ignorance can excuse, but does not obviate the need to justify giving your partner the petrol. Since there was no overriding reason to give your partner the petrol, you shouldn’t have given her that stuff to drink. Even if we assume (FW) is true, it still isn’t obvious why we should think that GIN/PETROL poses a threat to (LINK).

There has to be some explanation as to why the facts that the subject is non-culpably ignorant of adversely affect the normative standing of an action without adversely affecting the normative standing of the assertion that the action is to be performed. Any explanation as to how there could be no epistemic obligation to refrain from asserting that someone should Φ when the relevant agent shouldn't Φ would either focus on the epistemicness of the epistemic obligations or the obligatoriness of epistemic obligations.

You can’t say that it is the obligatoriness of the obligation that provides the explanation. If (FW) is true, there’s nothing about obligation, per se, that requires that the subject knows, is in a position to know, or is in a position to justifiably believe the obligation to be an obligation. This is not quite the same point, but it is related to a point that Gibbons makes which is worth repeating. If we’re going to talk about normative reasons that bear on action and belief, at some level of abstraction we should expect reasons for action and belief to behave in the same way. They are, after all, reasons. What goes for reasons goes for obligations.

Someone could try to explain how it could be that non-normative facts bear on the deontic status of the action but not the assertion by focusing on the epistemicness of the obligations we’re under. This isn’t promising either. Epistemic obligations have to do with the pursuit of truth and avoidance of falsity. Practical obligations have to do with the pursuit of the best. Either you are really into the idea that faultless failures to bring about the best are failures to meet your obligations or you think this makes a joke out of morality. If you think that a faultless failure to produce what is actually deontically best is not a failure to live up to your obligations, you don’t accept (FW) and so won’t try to explain how there could be a moral obligation to act against the advice in GIN/PETROL. If you think that faultless failure to bring your beliefs/assertions in line with the truth is not a failure to meet your epistemic obligations but think that (FW) is true and that a faultless failure to bring about what is deontically best is a failure to meet your moral obligations, you are appealing to some difference between the epistemic and the practical that you haven’t explained.

Wednesday, December 9, 2009

The results are in!

Philosophers of Religion in Target Faculty
God: theism or atheism?
Accept or lean toward: theism 34 / 47 (72.3%)
Accept or lean toward: atheism 9 / 47 (19.1%)
Other 4 / 47 (8.5%)

All Target Faculty
God: theism or atheism?
Accept or lean toward: atheism 678 / 931 (72.8%)
Accept or lean toward: theism 136 / 931 (14.6%)
Other 117 / 931 (12.5%)

There's some discussion of the numbers emerging over at Prosblogion.

Philosophers of Mind in Target Faculty
Mind: physicalism or non-physicalism?
Accept or lean toward: physicalism 117 / 191 (61.2%)
Accept or lean toward: non-physicalism 42 / 191 (21.9%)
Other 32 / 191 (16.7%)

All Target Faculty
Mind: physicalism or non-physicalism?
Accept or lean toward: physicalism 526 / 931 (56.5%)
Accept or lean toward: non-physicalism 252 / 931 (27.1%)
Other 153 / 931 (16.4%)

Among the more interesting claims being floated is this one: just as theists go into philosophy of religion in order to defend theism, there are many atheists going into philosophy of mind in order to defend physicalism. I offered some suggestions as to why atheists/agnostics aren't going into philosophy of religion. Unless people are converting rapidly, there's got to be some reason why this is. So far, I don't think I've hit upon any explanatory factors that have convinced anyone, but (a) there has to be some explanation as to why this is, and (b) the explanation has to be partially contained in what I said because I covered just about all the possible explanations.

On the epistemology front:

Epistemologists in Target Faculty
Epistemic justification: internalism or externalism?
Accept or lean toward: internalism 59 / 160 (36.9%)
Accept or lean toward: externalism 56 / 160 (35%)
Other 45 / 160 (28.1%)

All Target Faculty
Epistemic justification: internalism or externalism?
Accept or lean toward: externalism 398 / 931 (42.7%)
Other 287 / 931 (30.8%)
Accept or lean toward: internalism 246 / 931 (26.4%)

There's some movement away from externalism about epistemic justification among the specialists, but the view is not without its defenders. This data is relevant to something I've had to contend with recently. In the paper I'm giving at the Eastern in a few weeks, I offer some examples that I used to elicit intuitions from undergraduates where I try to see whether they are internalists or externalists about moral permissibility. The intuitions suggest that the folk are externalists about the justification of action. (At the very least, they think you can generate reparative duties by bringing about bad effects when you couldn't have been expected to know that you would bring these effects about at the time of action.) I argue on theoretical grounds that you cannot accommodate these intuitions given the constraints imposed on you by internalism about the epistemic stuff.

Two responses to this. The first was that the undergrad responses are not a good guide to community standards. Here's a response:
* I could point to further data that suggests that community standards are externalist (e.g., Darley and Robinson's work (this is a good place to start) suggests that the dominant view in the community held by the folk is that the degree of punishment appropriate to an offense is partially determined by the effects of an action. When you have two subjects that are mental duplicates that bring about different effects, the community standard appears to be that the punishment appropriate for the agent that brought about the worse effects is greater than the punishment appropriate for the agent that brought about the lesser effects). That's more data, but it's the sort of data that one could offer without challenging the (empirical) claim that epistemic internalists won't share the intuitions I've tried to elicit and that Darley and Robinson have elicited (i.e., that there can be moral differences in the status of an action without mental differences that distinguish actors).

The second response to my argument was that epistemic internalists will simply not have the intuitions that these undergrads had. Three responses to that.
* First, the fact that they respond that way doesn't mean that the response is reasonable.
* Second, it's an empirical question whether they will react that way. (I can think of some prominent internalists who do _not_ react like that. Richard Feldman, for example, is a prominent internalist, and he rejects the view that you are morally justified in acting on your epistemically justified moral judgments precisely because he thinks that the consequences of an action (known or unknown) can bear on the permissibility of the action but can have no bearing on the epistemic standing of judgments about the deontic status of the action that brings those consequences about. Barbara Herman thinks that all moral evaluation is concerned with the quality of the agent's will, and she tries to tell a Kantian story as to why we have what she thinks intuition suggests are duties of reparation to deal with the unforeseen consequences of our action. Officially, they are internalists, but they have intuitions that appear to favor some externalist views.)
* Third, I think that, ceteris paribus, we want theories that are consistent with community standards that govern the application of normative terms. While a philosophical argument could correct these community standards, that would suggest that ceteris isn't paribus, and one of the difficulties that such an argument would face is that it likely would have to be pinned down by intuition at some point. If those intuitions are unique to specialists with philosophical axes to grind, we should worry about theory contamination of intuition defeating their evidential significance.

Wouldn't wrap Fish in it

[W]hile I wouldn’t count myself a fan in the sense of being a supporter, I found it compelling and very well done. My assessment of the book has nothing to do with the accuracy of its accounts. Some news agencies have fact-checkers poring over every sentence, which would be to the point if the book were a biography ... “Going Rogue,” however, is an autobiography, and while autobiographers certainly insist that they are telling the truth, the truth the genre promises is the truth about themselves — the kind of persons they are — and even when they are being mendacious or self-serving (and I don’t mean to imply that Palin is either [Heavens, no]), they are, necessarily, fleshing out that truth. As I remarked in a previous column, autobiographers cannot lie because anything they say will truthfully serve their project, which, again, is not to portray the facts, but to portray themselves.

Gag me.

Does he believe anything written in that book?
It doesn’t matter. What matters is that she does, and that her readers feel they are hearing an authentic voice. I find the voice undeniably authentic (yes, I know the book was written “with the help” of Lynn Vincent, but many books, including my most recent one, are put together by an editor). It is the voice of small-town America, with its folk wisdom, regional pride, common sense, distrust of rhetoric (itself a rhetorical trope), love of country and instinctive (not doctrinal) piety. It says, here are some of the great things that have happened to me, but they are not what makes my life great and American. (“An American life is an extraordinary life.”) It says, don’t you agree with me that family, freedom and the beauties of nature are what sustain us? And it also says, vote for me next time. For it is the voice of a politician, of the little girl who thought she could fly, tried it, scraped her knees, dusted herself off and “kept walking.”

Undeniably authentic, but wholly unbelievable. Holy unbelievable!

Monday, December 7, 2009

The evidence wars continue

Turri's False Evidence.

Weatherson's Evidence and Inference.

Will the truth out? Will truth out? We'll have to wait and see.

Some arguments (that might need some tinkering)

(1) If someone knows that p is part of her evidence, it seems that the question ‘Why is it that p?’ is appropriate/in place/proper/doesn't rest on a mistake, unlike ‘Why do fish weigh less when they die?’, which is inappropriate/out of place/improper/rests on a mistake. The question assumes that we’ll respond by saying either ‘No reason, it’s just a brute fact that p’ or ‘p because q’. Both answers entail p. I can't see how you could explain this unless you assumed that evidence is factive.

(2) If S knows that p is part of her evidence, she knows that p is true. If I know that p is part of S's evidence, it isn't an open question for me as to whether p.

(3) If A asserts that p is part of A’s evidence and B then asserts ~p, it seems that A and B disagree/can't both be right.

(4) If p is part of my evidence and I know that p is part of my evidence, I think I’m in a position to A for the reason that p (when I know that my choice to A is a p-dependent choice). You cannot A for the reason that p if ~p.

(5) It just sounds weird to say, ‘His evidence was that p, but of course ~p’ or 'His evidence was that p, but I don't believe p', but this cannot be weird for Moorean reasons because it's his evidence, not mine.

I've offered other arguments in the Synthese piece and in other posts, but I won't repeat them here.

Sunday, December 6, 2009

For the record

My membership in Sarah Palin's facebook fanclub is not non-ironic. Membership is, however, non-ironically awesome! For example, Friday's installment:
Voters have every right to ask candidates for information if they so choose. I’ve pointed out that it was seemingly fair game during the 2008 election for many on the left to badger my doctor and lawyer for proof that Trig is in fact my child. Conspiracy-minded reporters and voters had a right to ask... which they have repeatedly. But at no point – not during the campaign, and not during recent interviews – have I asked the president to produce his birth certificate or suggested that he was not born in the United States.

That's the combination of batpoo crazy and bitterness we can believe in!

Tuesday, December 1, 2009

'Might's might

Ages ago, I wanted to write a paper called "'Might' made right". That's not going to happen, but I'm still working on epistemic possibility. Ordinarily, I think it would be pedantic to object to the following view in the ways I'm about to, but I have my reasons. First things first. The view:

(EPk) p is epistemically possible for S iff ~p isn’t obviously entailed by something S knows.

Think about cases of inductive knowledge. It seems odd to think that you only have knowledge of future events when it is not epistemically possible that these events do not occur. Myself, I don’t doubt that our beliefs about the future constitute knowledge. I doubt that it would be correct to say that it isn’t epistemically possible that these beliefs are mistaken.

Think about conversations where sceptical hypotheses are introduced. In such contexts, it seems proper to concede that we might be mistaken in just about any belief about the external world. Now, suppose that knowledge is necessary for warranted assertion and that concessions (e.g., ‘It might be that I’m a BIV’) are really assertions. It seems that given these assumptions and (EPk), the propriety of the concession would depend upon whether the speaker knew herself to be ignorant. But, it seems harder to know that you don’t know than it is to know that it’s proper to concede that you might be mistaken. Given (EPk), to assert knowingly that you might be mistaken, you either know that you don’t believe p, that your belief about p is mistaken, that the justification you have for your belief is insufficient, or that you are in some sort of Gettier case. I doubt that you know one of these to be true whenever you know that it’s proper to concede that you might be mistaken. Thus, you either should think that concessions aren’t really assertions, deny that knowledge is the norm of assertion, or say (as I do) that in conceding that you might be mistaken you might only be conceding that you are not completely certain.
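Laid out as a derivation (this is my reconstruction, with 'KNA' as my label for the knowledge norm of assertion):

\begin{align*}
1.\ & \text{It is proper to assert } \phi \text{ only if you know } \phi && \text{(KNA)}\\
2.\ & \text{`It might be that } \neg p \text{' expresses an assertion} && \text{(concessions are assertions)}\\
3.\ & \text{`Might } \neg p \text{' is true for } S \text{ only if } S \text{ does not know } p && \text{(from (EPk), roughly)}\\
4.\ & \text{So a proper concession requires knowing that you don't know } p && \text{(1, 2, 3)}
\end{align*}

Step 4 is where the trouble lives: knowing that you don't know seems harder to come by than knowing that the concession is proper.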

If the example of inductive knowledge shows what I think it does, then we need to revise (EPk) as follows:


(EPx) p is epistemically possible for S iff ~p isn’t obviously entailed by something S knows w/X.

Whatever we put in for 'X', it just has to be something we don't always have when we know. We could put in 'out inference' for 'X' (so the clause reads 'knows w/out inference'), and we get that epistemic necessity is non-inferential knowledge. That gives us the induction case, but not the perception case. We could put in 'infallible grounds for believing', and that gives us the induction case and would show that CKAs can't possibly pose a threat to fallibilism. Given my views about perceptual justification, I don't think that gets the perception cases right. There are many things we know non-inferentially that I think we have infallible grounds for, but these are things we can properly concede we might be mistaken about when skeptical hypotheses are introduced. So, why not just say something like 'certainty' and be done with it? The context determines whether someone knows with certainty because the conversational context can determine whether certain possibilities are significant, and we can say that something is certain for S when S's evidence rules out all the significant possibilities where S is mistaken. Assuming that 'knows' and 'certain' don't sway together, this view wouldn't motivate a contextualist account of 'knows'. 'Might' might have the power to derail conversations, but it doesn't threaten your knowledge or evidence.
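For concreteness, here is the certainty instance written out (the formalization and the symbols are mine, not anything from the literature: $K_S$ for 'S knows', $C_S$ for 'certain for S', $E_S$ for S's evidence, and $\mathrm{Sig}$ for the possibilities the conversational context makes significant):

\begin{align*}
(\mathrm{EP_c})\quad & p \text{ is epistemically possible for } S \iff \neg\exists q\,[\,C_S(q) \wedge (q \Rightarrow_{\mathrm{obv}} \neg p)\,]\\
& C_S(q) \iff K_S(q) \wedge \forall w\,[\,(\mathrm{Sig}(w) \wedge \neg q \text{ at } w) \rightarrow E_S \text{ rules out } w\,]
\end{align*}

On this reading, introducing a skeptical hypothesis expands $\mathrm{Sig}$, so 'I might be mistaken' can become properly assertable without any change to what you know or what your evidence is.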

Monday, November 30, 2009

Is blind review blind?

I think that some sort of blind review in philosophy is the norm, but I think that blind review comes in different forms. Apart from AJP, I don't know of many journals that have blind referees and editors. Actually, I don't think I know that this is AJP's policy, but I seem to remember reading that this is their policy. (Full disclosure: I have fond feelings towards AJP because they've published my work, their editor writes you nice little notes thanking you for refereeing, and the editor will also return submissions to you within ten minutes of submission with a note that says 'I've spotted three typographical errors on the first page alone that you should fix before resubmitting'. Ryan and I only found two after hours of searching, but I suspect that was all part of the fun.) There are always going to be problems with googly eyed referees trying to subvert the blind review process, but I'm really interested in the idea of blinding the editors today. (While I'm on the subject, don't referees know that we know that they're looking? It's one of the reasons I take the opportunity to address a referee's criticisms in posts here after I receive rejections. It's rare that you get the opportunity to stand up for your work when a referee criticizes it, but when you know that refs are checking in, it's nice to be able to take the opportunity to explain why you think your work is more defensible than an unhappy referee suggests or to say how you plan on fixing the paper to address the referee's criticisms. You can't make them accept your work or take back their criticism, but at the very least you hope they can respect the responses. And, yes, fwiw, I know that some refs only look after the review is completed.)

I figured I'd post this because we might see the launch of an exciting new journal, and the discussion thus far has largely focused on open access; there's not been much discussion of the review process. I don't know who the editors for NIP will be apart from one, and I have nothing but good things to say about this person. It seems that I'm not the only one who thinks that NIP could be a good model for other journals, but I'm less concerned about the open access stuff than I am about review practices.

I've heard various rationales for having editors that know the identity of an author, but they seem weak. I've heard that it would be a hassle to blind the papers, but that seems unlikely if we're talking about journals that use online submissions. If TAs responsible for hundreds of students in intro classes can blind student work before grading, I have a hard time believing that editors can't look at a blinded version of a paper while deciding whether to send the paper out and which referee(s) to use. I've heard that the editors benefit from knowledge of the author's identity so they can use that knowledge to decide which referees will be appropriate for the paper. I don't know why we need to do this on the front end. Ask the referees whether they know the identity of the author, and after the refereeing process is completed, the editor can then decide whether the referee's relationship to the author was problematic. I suspect that this will be rare. We all know the work of our friends and enemies rather well, and I can't imagine that we'd deceive some editor by saying that we didn't know who authored a paper in order to help friends or harm enemies. I've heard that editors will use their knowledge of the author's talents and abilities to select referees, but isn't this precisely the sort of thing that should be discouraged?

I've also heard in various blog threads that if you fall out of favor with an editor, you can expect to get bad treatment in the future, and that writing to the editor to address a referee's report will put you in bad standing with an editor. I don't know how much of this to believe, but if there is some truth to it, that is just another reason to blind editors. I don't like annoying people (but I'm sure someone near and dear to me will say that I love annoying people lately), and I can imagine that editors develop a dislike for authors who pester them, but I don't think this should affect how someone's work is handled by a journal. Going back to the grading example, if you have a particularly obnoxious student (it does happen sometimes, admit it), it's probably best that you try to blind student work before grading it. That just seems sort of obvious, so there must be some important difference between grading and the review process that I don't get. Throw in additional stuff about biases that are favorable or unfavorable to an author, and it seems like there's a good case to be made for keeping editors blind. But they aren't. Not to my knowledge, at least. I have to confess that my knowledge is quite limited.

Sunday, November 29, 2009

Wide-Scope: use sparingly or use something else?

Zimmerman (1996: 118) says that you shouldn't try to represent conditional obligation in terms of a material conditional where the 'ought' operator takes wide-scope. At first I thought I got it, but now I'm not so sure.

Consider:
(1) You should vote Obama, but if you don't, you shouldn't vote at all.

Let A: Vote Obama.
Let B: Vote for no one.

Letting the arrow stand for the material conditional, the idea is that we model conditional obligation (e.g., (1)) as follows:

(2) O(~A --> B)

Here's the problem with that. Suppose O(A). O(A) entails O(AvB), which entails O(~A-->B).

Now, let C: Vote for Romney. Suppose O(A). O(A) entails O(AvC), which entails:

(3) O(~A-->C).
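Spelled out, the derivation runs as follows (a sketch; it assumes only that O is closed under obvious entailment and logical equivalence, which is all the argument needs):

\begin{align*}
1.\ & O(A) && \text{supposition}\\
2.\ & A \models (A \vee C) && \text{propositional logic}\\
3.\ & O(A \vee C) && \text{from 1 and 2, closure under entailment}\\
4.\ & (A \vee C) \equiv (\neg A \rightarrow C) && \text{material conditional}\\
5.\ & O(\neg A \rightarrow C) && \text{from 3 and 4, closure under equivalence}
\end{align*}

Run the same steps with B in place of C and you get (2) from O(A) alone.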

At first I thought that his problem with (2) was that (2) was too weak and that there's got to be more to conditional obligation than (2). That still seems right, but now I'm worried. I could be wrong, but doesn't (1) entail the following?

(4) O(A).

Doesn't (4) entail (2) and (3)? Doesn't it seem that (1) is incompatible with (3)?

I've been traveling all afternoon and evening, reading Zimmerman on the plane and then driving back to Austin from San Antonio. It is now 1:14 AM and I've just stepped in the door (can't sleep), so there's a non-zero chance that I'm just missing something obvious, but I can't tell whether the problem is that (2) is false or that it's just too weak. What do we have to give up to block the inference from (4) to (2)? He doesn't deny that O(A) entails O(AvB). (His solution to Ross' paradox is the one that I'd offer and it doesn't require denying that inference.)

Maybe the idea is that (1) is compatible with (2) and (3)? Maybe that's right, and maybe the problem is just that (2) is too weak to capture the conditional obligation stated in (1). (AvB) and (AvC) are logically equivalent to (~A --> B) and (~A --> C) respectively, and to ~(~A & ~B) and ~(~A & ~C). Just as there's no deontically superior world accessible to this one where you neither A nor B on the assumption that the best accessible world is an A-world, there's no deontically superior world accessible to this one where you neither A nor C on the assumption that the best accessible world is an A-world.

So, maybe the idea is that (1)-(4) are all true and that's consistent with the denial of:

(5) You should vote Obama, but if you don't, you should vote for Romney.

Why not? If there's more to (1) than (2), then there's more to (5) than (3), and that extra stuff is what we can use to capture the intuition that there's something wrong with (3), which is really the intuition that (5) can't be true if (1) is.

Thursday, November 26, 2009

FAQ: Reasons as facts and factivity

Suppose you think reasons are facts. I'm thinking of normative or guiding reasons, not motivating or explanatory reasons. Consider the following objections:

On the assumption that reasons are facts, we can have no false reasons and we have no reasons for any of our false beliefs. Yet we know that we have (and have had) many false beliefs (e.g., the belief that there can be justified false beliefs!). Taken at face value, to say that there are no false, justified beliefs is to accept the absurd consequence that none of the false beliefs we hold are held rationally, since they are not capable of any justification whatsoever. Further, since we often cannot tell with any certainty which of our beliefs are true (there being a certain degree of opacity to the matter), we must be sceptics about most, if not all, of our beliefs. Since we can’t be certain that they’re true, we can’t have any justification for them whatsoever, and further there’s no difference between being rational and being irrational (being epistemically responsible and irresponsible) so long as we’re mistaken.

It's true that if reasons are facts, there cannot be false reasons understood as normative reasons, but that does not entail that we have no reasons for any of our false beliefs. This is a point familiar from Williamson. If E = K, what justifies must be true, but it doesn't follow that what is justified must be true.

Of course, I do think that there are no false, justified beliefs. To the extent that a false belief is supported by things we know to be true, it is capable of some degree of justification, but that's perfectly compatible with my claim that it will not be permissibly held/justified. Moreover, to say that there are no false, justified beliefs is _NOT_ to say that no false beliefs are held rationally. The reasonable, the rational, and the responsible are often taken to be distinct from the permissible and the justified. Rationality is a necessary condition for certain kinds of excuses (including mistaken belief excuses for action), but excuses do not provide justification. The rational/reasonable response to reasons is not the same as the justified response.

Suppose we cannot tell with certainty which of our beliefs are true. Okay. Suppose certainty isn't needed for knowledge. If we identify justified beliefs with items of knowledge (as Sutton does), there can be no false, justified beliefs but we can have justified beliefs without certainty.

Also, the claim that there cannot be false, justified beliefs is not incompatible with the claim that there can be defeasibly justified beliefs. Suppose that we identify justified beliefs with items of knowledge. If knowledge can be had on defeasible grounds, the same goes for justified belief. Both, however, would be factive. Now, start subtracting conditions from knowledge while holding fixed the truth requirement for justified belief, and you won't end up with a view on which justifiably believing p requires indefeasible justification.

Tuesday, November 24, 2009

Regret, apology, wrongs, excuses, justifications, etc...

[Previous version of the post written while half asleep at 3 am, current version written while half awake at 9am. I think it's improved.]

Here is something epistemic norms might not be. They might not be the sorts of things we can be properly faulted for violating whenever we violate them. In a recent discussion of the norms of assertion, Lackey appears to deny this:
[T]here is an intimate connection between our assessment of asserters and our assessment of their assertions. In particular, asserters are in violation of a norm of assertion and thereby subject to criticism when their assertions are improper. An analogy with competitive basketball may make this point clear: suppose a player steps over the free throw line when making his foul shot. In such a case, there would be an intimate connection between our assessment of the player and our assessment of the free throw—we would, for instance, say that the player is subject to criticism for making an improper shot.

It is hard to imagine what could excuse a player’s failure to notice that she’s stepped over the line when taking a foul shot without bringing in evil demons, evil geniuses, evil hallucinations, etc… If this is supposed to be an argument for some sort of fault requirement on warranted assertion, I think it provides little support for:

Fault: If S’s assertion that p isn’t warranted, S can be faulted for asserting p.

First, even if it’s hard to imagine how someone can break the foul line rule without being at fault for doing so, this rule is hardly representative of the rules of basketball, the rules in competitive sports, or the norms that govern our behavior. It is not difficult to find rules where there is not an intimate connection between the player and the play. Among the rules in an NBA referee’s book is one that says, “A player in control of a dribble who steps on or outside a boundary line, even though not touching the ball while on or outside that boundary line, shall not be allowed to return inbounds and continue his dribble. He may not even be the first player to touch the ball after he has re-established a position inbounds.” You don’t have to watch much basketball to know that someone can fail to do this without being subject to criticism for this failure. You cannot tell the referee that you had used the greatest skill to avoid stepping out of bounds unless you are hoping to get the referee to smile while (rightly) giving the ball to the opposing team. Second, there seems to be no a priori restriction on how rules of games are formulated. So even if there were no actual examples of rules from competitive sports that someone could faultlessly break, or if the only examples were rules that are broken only by those who can be faulted for breaking them, I think the thing to say is that we get to make up whatever crazy rules we want for the games we invent. I don’t think we just get to make up the rules that govern assertion. You can’t uncover the norms of assertion by seeing which ones look most like the rules of basketball. (If only I were wrong on this point, I could use the referee’s rulebook to refute Fault.)

In a recent defense of a justified belief account of warranted assertion, Kvanvig says something that seems to support Fault:
This point should be self-evident … norms of assertion are norms governing a certain type of human activity, and thus relate to the speech act itself rather than the content of such an act. Notice that when we look at the four conditions for knowledge above [i.e., truth, belief, absence of defeaters, and justification], the only ones regarding which apology or regret for the speech act itself is appropriate are the belief and justification conditions. There is, therefore, a prima facie case that knowledge is not the norm of assertion, but rather justified belief is.

This isn’t self-evident, not to me at least. I can see someone saying that what you ought to apologize for is what you can be faulted for and that what you can be faulted for is what you ought to apologize for. I can see someone saying that it doesn’t follow from the fact that you asserted something false that you ought to apologize. Having said that, I think this doesn’t support Fault because it doesn’t follow from the fact that you did something you shouldn’t have that you can be faulted and ought to apologize. Excusable wrongs are things that oughtn’t to have been done, but it’s not obvious that we should apologize for committing them. Perhaps an explanation is in order, but that’s not the same thing. If I’m wrong on that point, it is self-evident that you can’t be faulted for doing that which you ought to be excused for doing. Either way, I don’t think intuitions about when apologies are in order are a particularly good guide to claims about when someone did something they shouldn’t have. Given that ‘warranted’ is just a technical term for ‘permitted’, intuitions about apologies don’t seem to be a good guide to claims about warranted assertion.
It might help to consider the ‘serial’ view of defenses:
[I]t is best if we commit no wrongs. If we cannot but commit wrongs, it is best if we commit them with justification. Failing justification, it is best if we have an excuse. The worst case is the one in which we must cast doubt on our own responsibility. When I say ‘best’ and ‘worst’ here I mean best and worst for us: for the course of our own lives and for our integrity as people.

Just to be clear, a ‘wrong’ here is a pro tanto wrong, not something that is wrong all things considered. In this scheme, excuses are distinguished from justifications and denials of responsibility. In offering an excuse or in offering a justification, we seek to provide a rational explanation for the agent’s action. The difference between justifications and excuses is that when the former is available, there are reasons available that explain why there was a sufficient case for acting. The reasons for acting are sufficient even if there were reasons to do things differently. Often offering an excuse involves explaining how it could seem from the subject’s point of view that there was a sufficient case for acting when there was not. If we cannot show that there was an appearance of a sufficient case for acting that would convince a reasonable or responsible person to act, the excuse wouldn’t excuse. In offering an excuse we point to factors that show that a reasonable and responsible agent could have engaged in precisely the wrong that the agent did while in full awareness of those facts that the agent was cognizant of, and it is most unclear that when we can pull this off the agent has something to apologize for. Who could demand an apology from someone while acknowledging that the agent’s decision to act as she did was reasonable?

What about regret? Regret is a funny thing. An agent can act with justification but regret that there was no way to avoid committing some wrong even if the agent knows full well that it would have been wrong to refrain. The regret is not the recognition that the agent should have done things differently. The justification didn’t fail. If we can regret doing what is right, it seems that the connection between regret and fault will be far from straightforward. It seems that in the passage above, Kvanvig is saying that intuitions about regret are going to cause trouble for KA. He must be thinking that we don’t regret or don’t properly regret bringing about some state of affairs we didn’t know at the time of acting we would bring about (e.g., we don’t regret asserting something false when we have good reason to think that it’s true, we don’t regret running down the kid when we couldn’t see the kid crossing the street, we don’t regret those romantic relationships that started well but drive us to jump off of a bridge, etc…). That’s far from obvious. If we regret acting against a defeated reason in acting rightly, we regret bringing about a bad state of affairs we did not know how to avoid and knew we could not avoid if we were to do the right thing. Why then should we assume that we cannot regret that we brought about some bad state of affairs that we did not know that we brought about? It can’t be that we don’t regret it because it was unavoidable. Not only can we avoid bringing about bad states of affairs that we don’t know we bring about when we act, we can regret that which we know was unavoidable. My intuitions might differ from his, but I think that Oedipus can rationally regret marrying his mother. I think prosecutors can rationally regret winning a conviction when the accused was innocent. I think I can rationally regret every accident I’ve ever caused while driving even when I exercised due care in trying to avoid them. If regret is the mark of the wrong, there are wrongs without fault and wrongs we commit without awareness. Seems like a prima facie case against RA, not KA. We don’t need cases of factual ignorance to make this point. Take cases of reasonable moral disagreement where someone reasonably but mistakenly judges that they should resolve the disagreement in a particular way and then acts on that judgment. If they discover later that they acted on the weaker reason, there are grounds for regret. By hypothesis, however, the subject acted reasonably because they acted on a reasonable judgment about what to do.

Friday, November 20, 2009

Can't we leave Iraq alone already?



Wasn't this essentially W's policy?

All your epistemic views are belong to him

Good news! My theory of evidence survives what might be the coolest destructive argument in epistemology. See here. Warning: be prepared to kiss your favorite theory of knowledge and evidence goodbye. (I'm not bragging, by the way; my view gets off on a mere technicality. If interested, the view is defended here.)

This should cheer you up.

Wednesday, November 18, 2009

Hot Ziggety!

I just received word that 'Evidence and Armchair Access' has been accepted by Synthese! Thanks to my referees for slogging through the drafts and picking it apart like kind and supportive vultures. I believe it has been vastly improved thanks to your efforts.

Tuesday, November 17, 2009

Tea partying nativists

Video

UT vs. UCF

I went to my first UT game a few weeks ago and John (thanks John and Sherri!) sent me a link to this gigapan shot. Here's the blurb:

Gigapans are extremely large, high-resolution files created by digitally stitching, in these cases, hundreds of photographs together to form a complete whole. The process involves proprietary software, coupled with a robotic camera platform which measures, aligns, and moves a camera in precisely defined steps, and a viewing platform which allows for panning and zooming, all designed by GigaPan Systems.

Very cool. I still can't find us in this shot, but it's amazing/terrifying that the God's eye point of view is at most a decade away.

Monday, November 16, 2009

Death

Running errands tonight, I heard an interview on some sort of public radio station with Bobby Hackney, the bassist and vocalist of Death, a now defunct (pre-?) punk band from Detroit. Waterloo had a copy of For the Whole World to See, and it's really, really good. There are copies to be had here, and you can read their story here. It's a good story; warm the cockles of your heart it will. One of the reasons we've probably never heard of them is that the record executives who funded their recording sessions asked them to change their name to something more commercially viable and their answer was 'No'. That was that. Flash forward a few decades and Bobby's son starts hearing bootlegs of his dad's band at parties. They found the tapes, and the album is finally released.

Sunday, November 15, 2009

Get yer peasoup!

I'm taking the fight to Pea Soup.

(RC, I've added a section to address your concern. The concern is legit and I think The View's prospects for dealing with it are not good.)

Normative Judgment: Short draft

Here's a short draft of a paper I've just written up:

Justified Normative Judgment.

It's short, so please give it a look.

[Update: I've updated it on 11.15.2009.]

Saturday, November 14, 2009

OIC and Justified Normative Judgment

I've been reading Zimmerman's Living with Uncertainty, and some remarks of his concerning the subjective 'ought' inspired this.

Consider the view that obligations are rationally identifiable as such. If there's an obligation to act against one's own reasonable verdictive moral judgments (say, by doing what one rationally judges is worse than what one reasonably judges is best), it's not rationally identifiable as such. Consider a case where a subject reviews what she takes to be her options, A, B, and C, and comes to judge correctly that:

(1) A is uniquely best and B is better than C. D (i.e., do nothing) is worst of all.

As a matter of fact, however, she cannot bring it about that A obtains. [In a vending machine, there's (A) a small child, (B) a small puppy, and (C) a small kitten. Agent has sufficient change to release any of these trapped critters, but the mechanism that would release the small child is broken.] So, if we assume OIC, our subject is obliged to bring it about that either B or C obtains. She knows B is better and that she could bring it about that either B obtains or C obtains. She reasonably but mistakenly believes that she could bring about A, B, or C.

Facts about what can be done are, well, facts. Facts like that don't directly affect what's reasonable to believe about what can be done, will be done, should be done, etc... So, suppose we identify reasonable judgment with justified judgment with permissible judgment.

(2) Agent judges reasonably/justifiably/permissibly that she should bring A about.

From (i) the principle that obligations are rationally identifiable as such and (ii) the observation that she cannot rationally identify any (alleged) obligation to bring it about that B obtains and refrain from trying to bring it about that A obtains, it seems to follow from (2) that:

(3) Agent's obligation cannot be to do other than A.

It follows from OIC that:

(4) Agent's obligation cannot be to do A.

What's Agent to do? You can say that Agent ought to act against her judgment about what ought to be done, but this is just to give up the principle that obligations are rationally identifiable as such.

Now, consider (Link):

Link: If you judge that you ought to A (and oughtn't refrain from so judging), you ought to A.

In other words, you gotta do what you (permissibly) judge you gotta do. It seems that the argument above shows that either the justification for normative judgment is sensitive to objective facts (e.g., facts about what can be done) or Link is incompatible with OIC.
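Schematically, with J(φ) for 'Agent permissibly judges that φ' and C(A) for 'Agent can do A' (the shorthand is mine, not Zimmerman's), the tension can be put in four lines:

\begin{align*}
1.\ & J(O(A)) && \text{the vending machine case, via (2)}\\
2.\ & J(O(A)) \rightarrow O(A) && \text{Link}\\
3.\ & O(A) \rightarrow C(A) && \text{OIC}\\
4.\ & \neg C(A) && \text{the broken mechanism}
\end{align*}

These are jointly inconsistent: 1 and 2 give O(A), 3 then gives C(A), contradicting 4. So something has to give: let facts like 4 bear on the justification of the judgment in 1, give up Link, or give up OIC.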

Friday, November 13, 2009

Never step into a different dish twice

If it works for Wittgenstein and Oliver Sacks, why not?

I have this friend who eats cheerios every morning for breakfast, alternates tuna/pbj for every lunch, and finishes that off with veggie tacos every night for dinner. There's nothing wrong with that, right? Eat some oranges to stave off scurvy, sure, but I can't think of a reason for this guy not to stick with this and use the brain for something better. Like, philosophy or online tetris.

(That online tetris is addicting. I'm hooked on two player battle tetris with the monster map.)

Tuesday, November 10, 2009

Disjunctivism Draft

I've revised a paper I've written on epistemological disjunctivism: here.

Highlights:
* I defend the view that the reasons and evidence provided by veridical experience are better than those provided by hallucination.
* I defend the view that only beliefs in the good case are justified.
* I explain why these views do not require experiential disjunctivism and address McDowell's argument to the contrary.
* I use terribly unfair rhetoric having to do with the Innocence Project to beat up on internalists.

Comments and suggestions would be very much appreciated.

Poxes for both houses

I've been thinking about epistemological and experiential disjunctivism a lot lately, and I wanted to say some sketchy things about disjunctivism and infallibilism.

Some commentators (van Cleve, possibly, but I need to check) think that McDowell is committed to a kind of infallibilism. Because McDowell says that the evidence someone has for her beliefs in the good case is better than what she would have in the bad on the grounds that only subjects in the good case have knowledge, some take him to be committed to the view that among the conditions necessary for knowledge is the possession of evidence or reasons that the subject could have only in the good case. Because of this, some might take McDowell to be saying that it is impossible for the truth of a belief to be the only thing that distinguishes a good case of perceptual knowledge from the bad case. In turn, this suggests that his view is that a perceptual belief constitutes knowledge only if it is based on something that is incompatible with the falsity of that belief.

Does that mean that McDowell subscribes to the infallibilist view that S can know p only if S’s basis for believing p is incompatible with ~p? He might, but epistemological disjunctivism as such does not entail infallibilism. At least, I hope it doesn’t. Infallibilism leads to skepticism. It might not lead to a skepticism concerning perceptual knowledge, but it leads to skepticism concerning induction. (Actually, I think he denies this. But, well, c'mon!) If knowledge is possible only when we have infallible grounds for our beliefs, the external world skeptic might be wrong but I cannot see how the inductive skeptic could be.

Okay, so is McDowell committed to infallibilism? According to fallibilism:
(F) It is possible for a subject to know that p is the case on the basis of evidence or grounds that do not entail that p.

If fallibilism is true, subjects in the good and bad case could have just the same evidence or reasons for believing p, but one of these subjects will be mistaken in believing p. But, then it seems that the difference between the good and bad case will be ‘blankly external’ to the subjects in these cases. So, either there can be differences in epistemic standing that are blankly external to the subjects in the good and bad case or infallibilism is true and knowledge based on non-entailing grounds or evidence is impossible. If the former is true, we do not need experiential disjunctivism to understand how perceptual knowledge is possible. If the latter is true, we trade one skeptical problem for another.

A Response

While I am not entirely convinced that this response is sufficient (or necessary), McDowell could say this. One problem with the objection is that it assumes that p could be blankly external to the subject who knows p. Why would the truth of her belief be blankly external to her? Sure, you might say that the falsity of the mistaken subject’s belief is blankly external to her. Why can’t McDowell acknowledge this as a possibility and say that if someone is in the dark, there will be matters blankly external to her that explain why she believes p without knowing p? How much do you have to know to be ignorant?

From Epistemological to Experiential Disjunctivism
I take it that one of McDowell's arguments for experiential disjunctivism is contained in this passage:
The root idea is that one’s epistemic standing … cannot intelligibly be constituted, even in part, by matters blankly external to how it is with one subjectively. For how could such matters be other than beyond one’s ken? And how could matters beyond one’s ken make any difference to one’s epistemic standing?


If you endorse epistemological disjunctivism but think that experience embraces the same things in perception and hallucination, you end up having to say that facts blankly external to the subject are responsible for the superior epistemic standing of that subject’s beliefs in the good case when compared to the beliefs in the bad.

The response I offered on McDowell’s behalf earlier to the charge that his view led to a kind of infallibilism that came with skeptical consequences should work here if it worked earlier. The difference between the good case and bad will not be blankly external to the subject in the good case. Because she knows p, she knows something that rules out the possibility that she’s the one in the bad case. How could a fact known to her directly on the basis of observation be blankly external to her? How could it matter to her that the difference between her and someone else who is ignorant is a difference lost on the subject who is in the dark about a great many things? If McDowell insists that the difference between the good and bad case cannot be a difference that is blankly external to the subject in the bad case, it seems he is committed to the rather odd view that there is something available to the subject in the bad case that would allow her (in principle, perhaps) to work out her epistemic predicament. If he rests content with the much more modest principle that the difference between the good and bad case cannot be 'blankly external' to the subject in the good case, I guess I'd say that the truth isn't blankly external when you know the truth.