Sunday, October 14, 2012

Draft of Foley Review


Richard Foley, When is True Belief Knowledge? Princeton University Press.

Introduction
The orthodox view is that true belief is sometimes knowledge.  What distinguishes the true beliefs that make for knowledge from those that don’t? Is it that a person is justified in believing a true proposition? No, not if Gettier is right. Is it reliability? Sensitivity? Safety? Aptness? No, not if Foley is right. If Foley is right, it was a mistake to try to find some general differentiating condition that distinguishes knowing p from being justified in believing correctly that p.  In his bold new book, Foley argues that what we need to add to true belief to get knowledge is more true belief.  If you believe p without knowing p, you’re either mistaken about p or there’s some important truth that you’re missing.  If you’re right about p and you have adequate information, you’ll know p.
What does it take to have adequate information?  Foley understands information as true belief.  Adequacy isn’t understood in terms of quantity.  You might have little information concerning p and still have enough to know p.  The adequacy of your information doesn’t supervene upon facts about the true beliefs you have.  Somebody could have the very same true beliefs that you do and not know something you do.  Adequate information seems to be defined by what’s missing.  Your information is adequate if you’re not missing an important truth.  If your belief about p is correct and there’s no important truth that you’re missing, you know that p. If there’s some important truth that you’re missing, you won’t know that p. 
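Schematically, and in notation that is mine rather than Foley’s, the core proposal seems to be:

Kp \leftrightarrow (p \wedge Bp \wedge \neg M_p)

where Kp is ‘you know p’, Bp is ‘you believe p’, and M_p is ‘there is an important truth concerning p that you’re missing’. Everything then turns on how M_p gets filled in.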
What’s an important truth? Foley doesn’t think that there’s much that important truths have in common.  Just as particularists seem to think that right acts have little in common apart from their rightness, Foley seems to think that what important truths have in common is their importance.  He recommends an “ecumenical” approach.  Sometimes an important truth might concern a clue that the subject is missing. Sometimes it might have to do with the reliability of the processes or methods responsible for a subject’s beliefs.  Sometimes differences in practical stakes mean that truths that aren’t important for you will be important to others.  Foley is skeptical of the commonly held view that there’s some general way of characterizing the defects and depravity that undermine knowledge.  If there’s no general account of important truths, how can Foley’s approach shed light on the notion of knowledge?  He thinks we have a knack for finding important truths.  In any of the normal cases where a subject’s true belief doesn’t constitute knowledge, he thinks we’ll find the important truth if we look for it.
It’s not difficult to recommend Foley’s excellent book.  He has offered a genuinely novel approach to the theory of knowledge.  It’s not immediately clear whether his approach improves upon the approaches you’ll already find in the literature, but if you’re dissatisfied with the standard accounts of knowledge, you’ll likely agree that a new approach is called for.  Time will tell whether Foley’s approach will advance the discussion.
When is True Belief Knowledge? is divided into twenty-seven chapters.  In the first seven, Foley outlines the basic contours of his account. In the remaining chapters, he addresses some puzzles, discusses different sources of knowledge, and argues that the theories of knowledge and rationality/justification should be developed independently of one another.  In this review, I’ll identify the features of his view that strike me as most problematic.

Rationality and Knowledge
According to Foley, knowledge doesn’t require rationality or justification.  A virtue of this approach, he says, is that:
It frees the theory of knowledge from the dilemma of either having to insist on an overly intellectual conception of knowledge, according to which one is able to provide an intellectual defense of whatever one knows, or straining to introduce a nontraditional notion of justified belief because the definition of knowledge is thought to require this (126).
If rationality/justification aren’t understood in terms of their relationship with knowledge, how should they be understood?  Foley offers an account of rationality/justification in Chapter 26.  Believing p is epistemically rational, on his view, if it is epistemically rational for you to believe that believing p would acceptably satisfy the epistemic goal of now having accurate and comprehensive belief (148).  Believing p is justified if it is epistemically rational to believe that your procedures with respect to p have been acceptable given your goals and your limitations (132).  Epistemic rationality is, on Foley’s view, the foundational concept in an account of practical rationality.  Whether it would be rational to φ in sense X (e.g., moral, prudential, etc.) depends upon the rationality of believing that φ-ing would do an acceptably good job of satisfying your goals of type X (128).[1]  Perhaps if ‘goal’ is understood broadly enough, this can deliver an account of overall practical rationality.  Some provision should probably be made for cases where agents have adopted confused or unreasonable goals (e.g., it isn’t clear that there’s a rational way to go about trying to count the moon, but perhaps somebody could have that as a goal).
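To fix ideas, the definitions might be schematized as follows; the abbreviations ER (epistemically rational), J (justified), and B (believes) are mine, and the glosses flatten some of Foley’s qualifications:

ER(Bp) \leftrightarrow ER(B[\text{believing } p \text{ would acceptably satisfy the goal of now having accurate and comprehensive beliefs}])
J(Bp) \leftrightarrow ER(B[\text{your procedures with respect to } p \text{ have been acceptable, given your goals and limitations}])
\text{rational to } \varphi \text{ in sense } X \leftrightarrow ER(B[\varphi\text{-ing would acceptably satisfy your goals of type } X])

Note that ER appears on both sides of the first biconditional, so the schema is recursive rather than reductive.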
One area of potential concern has to do with pragmatic encroachment.  At various places Foley expresses some sympathy for the view that knowledge can be harder to attain when the practical stakes are high.  It’s not clear what role, if any, practical significance plays in his account of epistemic rationality, because it’s not at all clear what role the practical stakes can play in determining whether believing p would satisfy your twin epistemic goals. Provided that p isn’t itself about some practical subject matter, the account seems to exclude practical considerations.  Would an account that combines a purist account of epistemic rationality with an impurist account of knowledge be stable? It might be; it isn’t obviously incoherent.  Would it accommodate our intuitions? That’s hard to say.  Much of the intuitive motivation for accepting pragmatic encroachment has to do with intuitions about when it’s rational to proceed on the information you have and when it would be rational to search for additional evidence before making a decision.[2]  In light of this, if Foley is right that what it’s rational to do is determined by rational beliefs about what would do an acceptable job of meeting your goals, it’s hard to see how to square a seemingly purist account of rational belief with the standard intuitions offered in support of pragmatic encroachment about knowledge.
A second area of potential concern has to do with the seriousness of the dilemma Foley wishes to avoid.  There are many plausible accounts of rational/justified belief that preserve the link between knowledge and justification without leading to an overly intellectual conception of either.  (It’s not clear, for example, why Foley’s own theory of rational belief doesn’t dissolve this dilemma, since it’s not clear whether there are cases where you know p but it’s not rational to believe that your belief concerning p would do an acceptably good job of meeting your own epistemic goals.)  Moreover, we have some independent reason to think that knowledge and justification go together.  Suppose you know (p or q) and you justifiably believe ~p without knowing ~p.  You infer q.  It seems that there must be something going for believing q, because you’ve deduced q from premises justifiably believed or known.  We can’t assume that q is known, because it’s not deduced from a set of known premises (and it’s consistent with what’s been said that q is false).  To accommodate the intuition that there’s something good about believing q, we either need to say that the belief is rational/justified or introduce some wholly new term of epistemic approval.  I can’t see any good reason to coin a new term to pick out beliefs that aren’t themselves justified or known but are good in some way because they were deduced from premises justifiably believed or known, so I’d prefer to describe the belief as rational or justified.  This seems to require a link between knowledge and rationality/justification.  Assuming that there is a connection between knowledge and justification helps us make sense of what’s happening in cases with this shape.[3]
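The shape of the case, in shorthand of my own (K for ‘knows’, JB for ‘justifiably believes’):

\text{Given: } K(p \vee q),\ JB(\neg p),\ \neg K(\neg p); \text{ you infer } q \text{ by disjunctive syllogism.}
\text{Since } \neg p \text{ isn't known, it may be false; and if } p \text{ is true, } q \text{ may be false. So } Kq \text{ can't be assumed.}
\text{Yet believing } q \text{ clearly has some positive status, and } JBq \text{ is the natural label for it.}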
Let me mention one final concern.  One cost of severing the connection between rationality and knowledge that’s emerged from the recent literature on epistemic norms is that it becomes difficult to explain why certain combinations of belief and concessions about what you’re not in a position to know strike us as irrational.  If knowing has nothing at all to do with rationality and rationality has nothing at all to do with knowledge, why is it irrational to believe outright, say, that dogs bark while conceding that you don’t know whether they do? This is easily explained on views that treat knowledge as a goal, an aim, or a standard of correctness and use the regulative function of knowledge to explain the standards of rationality.
Is knowledge a mutt?
On Foley’s approach, pedigree doesn’t matter in the way that it does in more familiar accounts of knowledge.  He doesn’t think that reliability, for example, is a necessary condition for knowledge.  He does acknowledge that it will often seem to us that a case of unreliably formed, true belief isn’t a case of knowledge, but he thinks that the reason that the subject doesn’t know is that the subject is missing an important truth.  It’s not unreliability, per se, that undermines the belief’s epistemic standing.
To test this, he thinks we should consult our intuitions about cases involving subjects who have maximally comprehensive and accurate sets of beliefs.  Consider an example:
Imagine that Sally’s beliefs are as accurate and comprehensive as it is humanly possible for them to be. She has true beliefs about the basic laws of the universe, and in terms of these she can explain what has happened, is happening, and will happen. She can explain the origin of the universe, the origin of the earth, the mechanisms by which cells age, and the year in which the sun will die.  She even has a complete and accurate explanation of how it is that she came to have all this information.  Consider a truth p-cells about the aging mechanism in cells.  Sally believes p-cells, and because her beliefs about these mechanisms are maximally accurate and comprehensive, there are few gaps of any sort in her information, much less important ones. Thus, she knows p-cells (33).
It’s consistent with the story that Sally doesn’t meet the conditions on knowledge imposed by a reliabilist account of knowledge.  Let’s stipulate that the processes that produce Sally’s beliefs are unreliable. We can suppose that it was a series of strange processes and unlikely events that led her to believe p-cells. Under these conditions, is Foley right that Sally knows?
I don’t share Foley’s intuition about the case.  If we stipulate that Sally is trapped inside Nozick’s experience machine, I don’t think she knows p-cells.  On this stipulation, I also fear that the case hasn’t been described in suitably neutral terms. Suppose someone believes correctly that the barn burned down because a cow kicked over a lantern.  Suppose, however, that she doesn’t know that the barn burned down, doesn’t know that a cow kicked over a lantern, and doesn’t know that the barn burned down because a cow kicked over a lantern. (Because our subject has been stuffed into Nozick’s experience machine, her beliefs are only accidentally correct.)  Can she explain why the barn burned down?  I don’t think so.  She can explain why barns burn, why cows topple lanterns, etc., but she cannot explain why events she didn’t know about transpired.  Give Sally all the knowledge she needs to be able to explain these things, and I’d probably agree that she knows p-cells. I’m less inclined to do so if you describe the case carefully as one in which most of her beliefs are only accidentally true.
Anticipating this response, Foley tries to motivate his description of the case by noting that “Sally is fully aware that however strange and unlikely this history may be, in her case it led to her having maximally accurate and comprehensive beliefs” (34).  I still have reservations. First, I don’t think he’s entitled to describe the case as one in which Sally is ‘aware’ of these facts. Can you be aware that p if you don’t know that p?  He might argue that Sally is aware of the facts related to p-cells, but that’s a controversial description that needs justification.  Second, Sally’s beliefs about her own strange and unlikely history are among the beliefs that aren’t grounded by reliable processes.  If we think those beliefs don’t constitute knowledge, it’s not clear that they’d help to turn her belief about p-cells into knowledge.

Lotteries
How does Foley’s approach handle lottery propositions?  After the drawing was held, Billy believes that his ticket, #345, lost, but he won’t know that it lost simply on the basis of his correct beliefs about the setup of the lottery and the probability of losing. Foley says that his ignorance is due to some important gap in his information. For example, he doesn’t have this bit of information—ticket #543 was the winner (72).
Is this approach preferable to approaches that impose a sensitivity or safety condition?  That’s not clear.  If the paper announces that #543 is the winner, Billy will learn by reading the paper that he lost. So far, everyone is on the same page. What if the paper didn’t announce the winning number but simply announced that Billy’s ticket lost?  If he reads that, he should know he lost.  If that’s sufficient for knowledge, what important truth did Billy gain from the paper that he was missing before?  The important piece of information he’s missing can’t be that his ticket lost; if information is true belief, that’s information he already had. Maybe the important truth he’s missing is not a truth about the outcome of the lottery but a truth about what it says in the paper.  This would be an odd way to account for the intuition.  You might think that that information only matters because it provides you with information (in some intuitive sense of ‘provides information’ that’s more demanding than the notion Foley works with) about the winners and losers.  A natural explanation as to why reading the paper matters is that it’s only after you’ve read the paper that you can have a sensitive belief or a safe belief.  While it’s not clear that our intuitive verdicts about lotteries are at odds with Foley’s view, it’s not clear that his view has the explanatory resources to account for those intuitions in the straightforward way that rival accounts do.
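For comparison, here is a rough first pass at the rival conditions alluded to above; formulations vary from author to author, and both are usually relativized to the method of belief formation:

\text{Sensitivity: } Kp \rightarrow (\neg p\ \Box\!\!\rightarrow \neg Bp) \quad \text{(had } p \text{ been false, you wouldn't have believed } p\text{)}
\text{Safety: } Kp \rightarrow (Bp\ \Box\!\!\rightarrow p) \quad \text{(you couldn't easily have believed } p \text{ falsely)}

Before reading the paper, Billy’s belief that #345 lost fails both conditions: in the close world where #345 wins, he still believes it lost. Once the belief is based on the paper’s announcement, it satisfies both, which is the rival accounts’ straightforward explanation of why consulting the paper matters.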

Ignorance as a lack
In Chapter 20, Foley discusses cases in which we admit that we’re not in a position to know something.  Some philosophers think that if you appreciate that you’re not in a position to know p, you can’t then rationally believe p.  Foley thinks that there’s nothing at all puzzling about believing what you concede you don’t know.  He’s right, I think, that reports of the form ‘I believe p, but I don’t know it’ are common (101). Still, there are puzzles lurking here. We often say ‘I believe p’ as a way of hedging. It’s a way of expressing that we don’t take on the commitment to the truth of p typical of outright or full belief.  What about cases of full belief in which you concede you don’t know?  Consider, ‘Dogs bark but I don’t know that they do’.  Here, the speaker expresses the belief that dogs bark and concedes that he doesn’t know that they do.  This strikes many of us as irrational.  Can you know the proposition expressed?  To know that dogs bark, there would have to be no important truths that you were missing.  The second conjunct is true iff you don’t know that dogs bark.  Assuming you believe correctly that dogs bark, the second conjunct couldn’t be true unless there’s some important truth that you were missing.  Foley’s account explains why you can’t know both conjuncts.
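Using the schema from the introduction (the notation, again, is mine), the point can be put this way:

Kp \leftrightarrow (p \wedge Bp \wedge \neg M_p)
\text{Given } p \wedge Bp: \ \neg Kp \rightarrow M_p
\text{Knowing the conjunction requires } Kp \wedge K(\neg Kp); \text{ factivity gives } K(\neg Kp) \rightarrow \neg Kp
\text{So knowing both conjuncts requires } Kp \wedge \neg Kp, \text{ a contradiction.}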
Foley’s account nicely handles this sort of case, but I don’t think it can easily handle beliefs expressed by statements of the form, ‘p, but my evidence doesn’t show/establish that p’.  It doesn’t seem that you can know that the proposition this expresses is true.  How can we explain this?  The proposition expressed isn’t necessarily false. If someone believed this without knowing that it’s true, Foley’s account implies that there’s some important truth that the subject is missing.  I can’t think of what that truth might be.
One could try to explain why the proposition can’t be known as follows:
To know the conjunction, you’d have to know both conjuncts. To know p, you’d have to have evidence that establishes p.  If you have that evidence, the second conjunct is false and the conjunction is not known. If you lack that evidence, you don’t know the first conjunct and the conjunction is not known. The conjunction is not knowable.
This explanation isn’t available to Foley because he wouldn’t want to say that knowing p requires having evidence that establishes p.[4] 
One could offer a different style of explanation:
To know the conjunction, you’d have to know both conjuncts.  To know p, you can’t be irrational in believing p.  Believing the second conjunct makes believing the first conjunct irrational.  You can’t know the conjunction without believing the second conjunct.  The conjunction is not knowable.
If he offers this second sort of explanation, he can say that having evidence that shows that p isn’t necessary for knowing; what’s necessary is not believing that one lacks this evidence.  While this seems to be the better route for Foley to take, it faces at least two problems.  First, this explanation assumes that your ignorance is due to a presence, not an absence.  It’s not due to the fact that you’re missing some truth, but due to the presence of a set of attitudes that’s rationally self-defeating.  Second, this explanation is shallow.  If it didn’t matter whether you had evidence that showed that p, why would it matter what view you had on whether you had this evidence?  Some explanation of the irrationality of believing p whilst believing that your evidence doesn’t show p is in order.  Does it fall out of Foley’s account of rationality? It’s not obvious that it does.  Moreover, it’s not clear that his account of rationality will help him explain the relevant data if it’s part of his account of knowledge that knowledge doesn’t require rationality.
How serious are the problems discussed above? Foley might be right that ignorance is typically due to some lack or deficiency. Cases discussed in this section suggest that the gap isn’t always due to some lack of information.  Some conjunctive propositions might be unknowable truths because it would be irrational to believe the conjuncts in combination.  The irrationality precludes knowledge. Add all the true beliefs you like and you’ll not restore the rationality needed for knowledge.

Knowledge Blocks
Foley acknowledges that a pure version of his view might be difficult to defend. Conceding that his account won’t accommodate all of our intuitions, he suggests that a perfectly good fallback position would be one that acknowledges ‘knowledge blocks’. Think of a knowledge block as something that interferes with the normal conditions for knowledge, say, by preventing the subject from meeting some minimum standard of rationality, reliability, tethering of belief to experience, etc.  On the modified view, knowledge is true belief plus adequate information in the absence of knowledge blocks.
To accommodate intuitions, it seems that Foley would need to introduce knowledge blocks. By doing so, it seems he would have to impose general rationality and reliability requirements on knowledge.  Can he do this while maintaining the distinctiveness of his approach?  That remains to be seen.  It depends upon whether the notion of an important truth does any explanatory work once a sufficient set of knowledge blocks is introduced.  
  
References
Adler, J.  2002.  Belief’s Own Ethics. MIT Press.
Fantl, J. and M. McGrath.  2002.  Evidence, Pragmatics, and Justification.  Philosophical Review 111: 67-94.
Williamson, T. 2007. On Being Justified in One’s Head. In M. Timmons, J. Greco, and A. Mele (eds.), Rationality and the Good: Critical Essays on the Ethics and Epistemology of Robert Audi. Oxford University Press.


[1] As stated, the account is sketchy.  There are two areas that could use further discussion. The first is that he provides an account of goal-relative practical rationality, but no account of overall practical rationality.  Given the goal of meeting your moral obligations, it would be practically rational in the moral sense to φ if it is rational to believe that φ-ing would do acceptably well at meeting that goal. Given the goal of looking after your own interests, it would be practically irrational in the prudential sense to φ if it is rational to believe φ-ing would prevent you from meeting that goal.  What about all-things-considered practical rationality?  Is that notion confused?  Can we provide an account of that notion in terms of, say, some overarching goal?  He doesn’t say. The second is that he says nothing about the coherence or intelligibility of the goals. Can’t there be goals that are unintelligible or incoherent?  Are there practically rational ways to go about trying to count the moon?
[2] See Fantl and McGrath (2002) for discussion.
[3] See Williamson (2007) for discussion of this sort of argument.
[4] Adler (2002) argues that reflection on Moore’s paradox reveals that this requirement must be met to know and to satisfy the normative standards governing belief.

Tuesday, October 9, 2012

When is true belief knowledge?

I'm finishing off my review of Foley's new book.  Thought I'd post some initial thoughts here.  My overall impression is that it's a bold attempt to introduce a new way of thinking about knowledge and that Foley's turn might be fruitful. It's really hard to say at this stage because it's difficult to determine the implications of the account he offers.  Here, I raise some problems that I think arise for a version of his view.  It might be that if he modified his views only slightly, none of these problems would arise.  Foley's account is that if your belief about p doesn’t constitute knowledge, it’s either because it doesn’t fit the facts or because there is some important truth that you’re missing.  What’s needed to ‘turn’ a true belief into knowledge is just more true belief.  Knowledge is true belief plus adequate information (where adequate information is understood in terms of true belief).


How does Foley’s approach handle lottery propositions?  If somebody believes correctly that her ticket is a loser, we don’t credit her with knowledge.  What’s missing?  After the drawing was held, Billy believes that his ticket, #345, lost, but he won’t know that it lost simply on the basis of his correct beliefs about the setup of the lottery and the probability of losing. Foley says that his ignorance is due to some important gap in his information. For example, he doesn’t have this bit of information—ticket #543 was the winner (72).

Is this approach preferable to, say, an approach on which there’s a sensitivity condition or a safety condition?  That’s not clear.  The paper announces that #543 is the winner. If Billy reads that and he knows that his ticket is #345, he’ll know his ticket lost. What if the paper didn’t announce the winning number but simply announced that Billy’s ticket lost?  If he reads that, he should know he lost.  If that’s sufficient, what important truth was Billy initially missing?  The important piece of information he’s missing can’t be that his ticket lost.  He’d have that information if he believed the true proposition that his ticket lost.  He has that belief, so he has that information.  Maybe the important truth he’s missing is not a truth about the outcome of the lottery but a truth about what it says in the paper.  If he already has the information that he’d get from the paper, what does the information about what it says on the page add?  What role does the paper play?  One thought might be that the paper is run in such a way that beliefs formed on the basis of that paper are sensitive or safe. The need for sensitive or safe belief would explain the need to consult the paper, but Foley’s account denies that there’s any general sensitivity or safety condition. On these approaches, there’s an explanation as to why Billy needs to look at the paper. On Foley’s, I don’t see why this should be.

Rationality and Justification
On Foley’s account of knowledge, rationality and justification don’t seem to be necessary for knowing p.  A virtue of this approach, he says, is that:
It frees the theory of knowledge from the dilemma of either having to insist on an overly intellectual conception of knowledge, according to which one is able to provide an intellectual defense of whatever one knows, or straining to introduce a nontraditional notion of justified belief because the definition of knowledge is thought to require this (126).
I don’t think that this dilemma is all that serious.  Many plausible accounts of justification have been offered that preserve the link between knowledge and justification without leading to an overly intellectual conception of either knowledge or justification.  It seems we have some independent reason to think that knowledge and justification do go together.  Suppose you know (p or q). Suppose you justifiably believe ~p, but don’t know ~p. Suppose you infer q.  It doesn’t follow that you know q, because q isn’t derived from known premises.  It does seem, however, that there’s something going for your belief about q, because it’s derived from premises either known or justifiably believed.  Why not think of q as justifiably believed?  To accommodate the intuition that there’s something going for the belief, it’s tempting to think of it as justified. To think of it as justified, however, we’d want to say that it came from justified beliefs.  To say that, we’d want to say that you didn’t just know (p or q), but that you justifiably believed it. Assuming that there is a connection between knowledge and justification helps us make sense of what’s happening in cases that have this shape.[1]

Suppose that you take a true-belief pill. The pill induces scores of new true beliefs.  Depending upon which version of the pill you take, you might suffer from one of two side effects.  First, it was found that some users would form, for every true belief they formed as a result of taking the pill, an incompatible false belief.  While they moved towards an accurate and maximally comprehensive set of beliefs, they also acquired a comprehensive set of false beliefs. I don’t think that their new beliefs constitute knowledge. The problem is familiar from attempts to formulate omniscience in terms of knowing all the truths.  There are no important truths that you lack. The problem is that there are too many falsehoods.  Giving you more truths won’t help you dig out.  (Yes, there’s a sense in which you would be aware of which falsehoods were false. If ‘awareness’ is cashed out in terms of true belief, you will truly believe that the falsehoods are false. The trouble is that you will also seem to be aware of the truths as being false.) Second, it was found that some users would form further true beliefs: for each first-order belief formed by taking the pill, the subject believed that that belief was one that the subject could not rationally accept.  It seems that if you correctly believe of your own attitude towards p that it’s irrational for you to have that attitude, you don’t know p.  Adding in further true beliefs about the power of the pill only makes you seem crazier.

To handle these cases, Foley can say that there’s a minimal condition of rationality or consistency required for knowledge.  If it were robust enough to deal with the problem cases, though, it would seem to amount to a familiar sort of rationality or justification requirement on knowledge (e.g., something like an internalist view on which all justifiably held beliefs are backed by internally available grounds).

In Chapter 20, Foley discusses cases in which we admit that we’re not in a position to know something.  Some philosophers think that if you appreciate that you’re not in a position to know p, you can’t then rationally believe p.  Foley thinks that there’s nothing at all puzzling about believing what you concede you don’t know.  He’s right, I think, that reports of the form ‘I believe p, but I don’t know it’ are common (101). Still, there are puzzles lurking here. We often say ‘I believe p’ as a way of hedging. It’s a way of expressing that we don’t take on the commitment to the truth of p typical of outright or full belief.  What about cases of full belief in which you concede you don’t know?  Consider, ‘Dogs bark but I don’t know that they do’.  Here, the speaker expresses the belief that dogs bark and concedes that he doesn’t know that they do.  This strikes many of us as irrational.  Can you know the proposition expressed?  To know that dogs bark, there would have to be no important truths that you were missing.  The second conjunct is true iff you don’t know that dogs bark.  Assuming you believe correctly that dogs bark, the second conjunct couldn’t be true unless there’s some important truth that you were missing.  Foley’s account explains why you can’t know both conjuncts.

Foley’s account nicely handles this sort of case, but what about cases of the form, ‘p, but my evidence doesn’t show/establish that p’?  It doesn’t seem that you can know that the proposition this expresses is true.  Why can’t you know that this is so?  It’s perfectly consistent, so its status as unknowable isn’t down to its being necessarily false.  If it’s not known, it has to be because there’s some important truth that you’re missing.  I can’t think of what truth that might be.  One could argue that this is unknowable on the following grounds:
To know the conjunction, you’d have to know both conjuncts. To know p, you’d have to have evidence that establishes p.  If you have that evidence, the second conjunct is false and the conjunction is not known. If you lack that evidence, you don’t know the first conjunct and the conjunction is not known. The conjunction is not knowable.
I don’t think this explanation is available to Foley because he wouldn’t want to say that knowing p requires having evidence that establishes p.  One could offer a different style of explanation:
To know the conjunction, you’d have to know both conjuncts.  To know p, you can’t be irrational in believing p.  Believing the second conjunct makes believing the first conjunct irrational.  You can’t know the conjunction without believing the second conjunct.  The conjunction is not knowable.
On neither approach to explaining why the conjunction is unknowable does it seem that there is an important truth that you’re missing. On the first, you don’t satisfy an evidential requirement that Foley thinks isn’t required for knowledge and can’t be satisfied simply by having more true beliefs. On the second, your problem has to do with violating a requirement that says, in effect, that knowledge of p requires that you’re not irrational in believing p.  Remedying that defect requires believing less or finding new evidence. It’s not a matter of missing some important truth.


[1] See Williamson (2007) for discussion of this sort of argument.


Foley’s account of knowledge has paradoxical implications.  Take Sartwell’s (1991) view that knowledge is merely true belief and consider the following:
(*) You don’t know (*).
Suppose (*) is false. If it is, you know (*).  You can’t know (*), however, if (*) is false. So, the supposition is false. If you’ve followed the reasoning this far, you’ll be tempted to conclude that (*) must be true. If you believe (*) on the basis of the reasoning just sketched, however, and (*) is true, Sartwell’s account implies that (*) is known.  This contradicts (*).  Either way, on Sartwell’s view, (*) generates a contradiction.  To avoid generating the same contradiction, Foley has to avoid saying that (*) is known.  On his view, your belief about p constitutes knowledge so long as p is true and there’s no important truth that you’re missing.  For the reasons just sketched, you might believe (*), and it might seem that (*) is true.[1]  What important truth might you be missing that explains why you don’t know (*)?  I can’t think of one.  Your problem doesn’t seem to be due to some lack of information.
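For readers who want the derivation spelled out (the schematization is mine):

\text{Sartwell: } Kq \leftrightarrow (q \wedge Bq), \text{ and } (*) \text{ just says } \neg K(*)
\text{If } (*) \text{ is false: } K(*); \text{ but } Kq \rightarrow q, \text{ so } (*) \text{ is true. Contradiction.}
\text{If } (*) \text{ is true and } B(*): \text{ then } (*) \wedge B(*), \text{ so } K(*), \text{ which contradicts } (*) = \neg K(*).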
 


[1] I owe this example to Brian Weatherson. He discusses its significance for various theories of knowledge and for the norms of assertion on his blog, Thoughts, Arguments, and Rants (http://tar.weatherson.org/2009/11/19/your-favourite-theory-of-knowledge-is-wrong/).

Tuesday, September 18, 2012

New Draft: The Unity of Reason

It's taken ages to get this done, but I've finished a draft of my paper on the epistemic norms governing practical reason: The Unity of Reason.  Highlights:
* Argues for the view that what justifies belief justifies acting on that belief;
* Argues that it's important to distinguish justification from reasonableness and rationality;
* Argues that a standard objection to the knowledge account defended by Hawthorne and Stanley fails;
* There's sex in it.
* There's a cow in it.

Sunday, August 26, 2012

On knowledge norms

An objection.  Hawthorne and Stanley, from their JPhil paper:

Consider also how knowledge interacts with conditional orders. Suppose a prison guard is ordered to shoot a prisoner if and only if they are trying to escape. If the guard knows someone is trying to escape and yet does not shoot he will be held accountable. Suppose meanwhile he does not know that someone is trying to escape but shoots them anyway, acting on a belief grounded in a baseless hunch that they were trying to escape. Here again the person will be faulted, even if the person is in fact trying to escape. Our common practice is to require knowledge of the antecedent of a conditional order in order to discharge it.
The principle to take from this seems to be this:

KAct: If you oughtn't X unless C obtains, you oughtn't X unless you know C obtains.
  
Consider two claims about knowledge and warranted assertion:

KAN: You oughtn't assert what you don't know.
KAS: You may assert what you know.

P1. In C1, you know p but aren't in a position to know that you do [~KK]. 
P2. You may assert p in C1[P1, KAS].
P3. You shouldn't assert p in C1 unless you know p in C1 [P1, KAN]. 
P4. You shouldn't assert p in C1 unless you know that you know p in C1 [P3, KAct].
P5. You don't know that you know p in C1 [P1].
P6. You shouldn't assert p in C1 [P4, P5].
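The crucial move is the substitution in P4. Spelled out, with A for 'assert p in C1' (the shorthand is mine):

\text{KAN: oughtn't } A \text{ unless } Kp
\text{KAct with } X = A \text{ and } C = Kp: \text{ oughtn't } A \text{ unless } K(Kp)
\text{P1 gives } \neg K(Kp), \text{ so oughtn't } A; \text{ but KAS and P1 give May}(A). \text{ Contradiction.}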

P6 is incompatible with P2, so something has to give.  I think KAct has to be false, but I also think that principles in the neighborhood of KAct have to be more fundamental than principles that govern assertion.  Since KAS seems much more plausible than KAN or KAct, I'd try to winnow the requirements on warrant, permission, etc. down to something much weaker.  I'd also worry about the coherence of principles in the neighborhood of KAct.  I discuss this worry in the book and in my JPhil paper.  In the literature, everyone seems to be fixated on Gettier cases and false belief cases, but these structural problems strike me as much more interesting.

Tuesday, August 14, 2012

If Ayn Rand and Paul Ryan had a love child, he would be indistinguishable from his parents

First we had Kim Kierkegaard and now we have Paul Rand. Trolling of the highest quality:

Sunday, August 12, 2012

#RomneyRand2012

One of the virtues of insomnia is that you can get the jump on snarky hashtags:


If you want to read up on Rand, you should look at this piece on Rand and William Hickman.

Saturday, August 4, 2012

Evidence and Epistemic Reasons

Some people seem to think that epistemic reasons and evidence come to the same thing. (Call this 'the equivalence thesis'.)  I suspect that some people think that this equation suggests that some sort of evidentialist view must be the correct one. 

Don't think these things! 

Here's the quick and dirty argument (inspired by some things that David Owens said, or probably said, in Reason without Freedom; this is from recollection).  Suppose that you shouldn't believe p unless you have sufficient evidence to believe p.  You might, if you like, think of the evidence you have as epistemic reasons that somehow help to justify believing p.  According to the equivalence thesis, all the epistemic reasons will be evidence that concerns p.  But that cannot be.  If you don't have sufficient evidence to believe p, you oughtn't believe p. If you oughtn't believe p, you have a decisive epistemic reason not to believe p. This reason, however, is not some further bit of evidence you have.  So, the equivalence thesis must be false.  Some epistemic reasons are not further evidence you have.  If they all were, the obligation to refrain from believing without sufficient evidence couldn't be binding on you.
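Laid out as a schema, with E(p) for 'you have sufficient evidence to believe p' (the lettering is mine):

1.\ \neg E(p) \rightarrow O(\neg Bp) \quad [\text{the evidentialist norm}]
2.\ O(\neg Bp) \rightarrow \text{you have a decisive epistemic reason not to believe } p
3.\ \text{The witnessing reason is the fact that } \neg E(p), \text{ which is not itself a further bit of evidence concerning } p
\therefore\ \text{Not every epistemic reason is evidence; the equivalence thesis fails.}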

There's a simple but important point here, and it's undeniable: some epistemic reasons bear on whether you should believe p whether or not you have those reasons in your cognitive possession.  The fact that you don't have sufficient evidence to believe p, for example, constitutes a decisive reason not to believe p even if it's one that you're non-culpably ignorant of.

If the undeniable point is indeed undeniable, it shows that lots and lots of things that people say about justification are mistaken.  I've argued that McDowell misses just this point when he tries to show that we need to reject the traditional view of experience in his epistemological argument for disjunctivism.  I also think that people miss this point when they criticize those who defend externalist epistemic norms.  What's wrong with these norms, people often say, is that they imply that we have decisive reasons not to believe even when we're non-culpably ignorant of those reasons and it is reasonable not to refrain from believing in just the way that these (alleged) reasons tell us to refrain.  Well, that cannot be what's wrong with truth or knowledge norms, not if the undeniable point is correct, for this feature of truth and knowledge norms is a feature that all norms have in common.

Monday, July 23, 2012

Vicious hate crime in Lincoln, NE

This story deserves wider attention.  A woman in Lincoln, NE was tied to a chair, her attackers carved homophobic slurs into her body, and they then set her house on fire.  Luckily, she was able to crawl to safety.  My hope is that the feds will get involved and bring these men to justice soon and that religious leaders in the state will start leading on this issue.  (Unfortunately, a fair number of them are lining up to be on the wrong side of history.  A branch of Focus on the Family recently mobilized their followers to try to block a local ordinance that would protect homosexuals from employment and housing discrimination.)  I've provided a link to Star City Pride.  They are setting up a victim recovery fund. Please consider donating and share this story with others.

Update
It turned out to be a hoax.  It's a terribly sad ending to a horrible story.  Hope that people in Lincoln will be quick to forgive her.

Sunday, July 15, 2012

Romney & Price on backwards causation

On the same day that the Romney campaign introduced us to the notion of a retrospective retirement, Huw Price defends backwards causation on Philosophy Bites.  This can't be a coincidence, can it?  This calls for an explanation. The best explanation is that somebody planned this retrospectively. If backwards causation is possible, maybe Romney did retire in 1999 in 2002.

Friday, July 13, 2012

New page

I've created a homepage.  There are links to work on evidence/epistemic reasons, justification, fallibilism, epistemic norms, and moral obligation.  There are also links to some reviews. 

Wednesday, June 27, 2012

The basing relation and reasons as causes

Consider:
1. Reasons are causes.
2. Propositions are not causes.
3. (Therefore) Reasons are not propositions.
4. Reasons are either propositions or the subject’s mental states.
5. (Therefore) Reasons are the subject’s mental states.

Because people seem to think that Davidson's arguments from "Actions, Reasons, and Causes" support the causalist view (i.e., that rationalizing explanations are a species of causal explanation), people seem to think that the first premise in this argument must be true. Once that premise is in place, it is hard to see how one might reject the argument's conclusion. In Justification and the Truth-Connection, I defend the view that reasons aren't the subject's psychological states. I argue that reasons to believe, reasons to act, the reasons for which we believe/the reasons on the basis of which we believe, and the reasons for which we act/the reasons on the basis of which we act are facts. Specifically, they are the facts that agents have in mind when making up their minds about what to do or believe, not facts about their minds when they make up their minds about what to do or believe. I thought I'd write up a post here to try to defend that view. In so doing, I'm trying to show that there's little that supports the standard view in epistemology which says that our beliefs have to be based on our own psychological states. (In his contribution to the Routledge Companion to Epistemology, Neta observes that psychologism about the basis of belief seems to be the only game in town, and he seems to credit this to Davidson's influence.)

 Remember that there are three ways of reading (1):
1a. The reasons why the subject believes what she does are causes.
1b. The agent’s reasons for believing or doing what she does are causes.
1c. The reasons that bear on whether to believe what the agent believes are causes.

Typically, people think that it is possible to believe for good reasons. That is, they think that it is possible that the reasons for which we believe are good reasons to believe. Thus, I shall assume that the reasons for which we believe belong to the same ontological category as the normative reasons that bear on whether we should believe what we do. Since this is a debate about the ontology of normative reasons, the causal argument for psychologism has to establish (1c). If it does so, it does so indirectly. First, the psychologists argue that explanatory or motivating reasons are causes. Second, they argue that (1c) follows from (1a) or (1b) because it is possible to act and believe for good reasons. Now, if we were feeling generous, we might grant (1a). Explanatory reasons, the reasons why someone acts, need not be motivating reasons, the reasons in light of which they acted. Since (1a) does not entail (1b), we can accept (1a) and remain agnostic as to whether (1b) is true. And if we can accept (1a) while denying that the reasons for which someone acted are psychological states, we can turn the tables on the psychologists: since it must be possible that the reasons we act for are good reasons, neither motivating nor normative reasons are psychological states. Thus, the psychologists have to show that (1b) is true. Typically, psychologists say that Davidson showed that motivating reasons are psychological states. In a later post, I shall explain why arguments for (1b) typically undermine the psychologist’s suggestion that (1b) and (1c) are both true. Here, I shall explain why Davidson’s arguments do not support (1b) and so cannot support (1c).

The argument that Davidson was supposed to provide for (1b) is found in “Actions, Reasons, and Causes”, which opens with these remarks: “What is the relation between a reason and an action when the reason explains the action by giving the agent’s reason for doing what he did? We may call such explanations rationalizations, and say that the reason rationalizes the action. In this paper I want to defend the ancient – and commonsense – position that rationalization is a species of causal explanation.” His aim was to show that the force of the ‘because’ that figures in a rationalization (e.g., “Audrey went outside because she believed Donna was waiting for her”) is the same as the force of the ‘because’ that figures in sentential causal explanations (e.g., “Coop went through the front door because he was pushed”). Davidson’s argument is contained in this passage:
Noting that non-teleological causal explanations do not display the element of justification provided by reasons, some philosophers have concluded that the concept of cause that applies elsewhere cannot apply to the relation between reasons and actions, and that the pattern of justification provides, in the case of reasons, the required explanation. But suppose we grant that reasons alone justify actions in the course of explaining them; it does not follow that the explanation is not also … causal … How about the other claim: that justifying is a kind of explaining, so that the ordinary notion of a cause need not be brought in? Here it is necessary to decide what is being included under justification. It could be taken to cover only … that the agent have certain beliefs and attitudes in the light of which the action is reasonable. But then something essential has certainly been left out, for a person can have a reason for an action, and perform the action, and yet this reason not be the reason why he did it. Central to the relation between a reason and an action it explains is the idea that the agent performed the action because he had the reason.
His point was that if we want to understand the difference between (i) simply having reasons that could potentially justify an action but do not move you to act and (ii) acting for those reasons, we have to say that agents act because they have certain reasons. To say that she acted because she had these reasons is to say more than just that she simply had these reasons or had them in mind, for these reasons could be explanatorily idle (e.g., I might desire to amuse my roommate and annoy my neighbors and believe that tap dancing in my boots to Tupac would be a way of fulfilling both desires. If I start dancing, I might do so in order to amuse my roommate and not to annoy the neighbors or might do so in order to annoy my neighbors.). To distinguish cases where reasons are idle from cases in which the reasons are operative, we need to posit some causal difference between the agent’s desires and actions to decide which reasons are operative. Thus, we cannot understand how rationalizing explanations work unless the force of the ‘because’ in a rationalizing explanation is the same as in a causal explanation.

Suppose Davidson is right and rationalizations are causal explanations. What does this tell us about the relation between reasons and causes? Nothing. I realize that many people believe that it shows that reasons are causes, but this simply does not follow. Since it does not show that motivating reasons are the causes of the agent’s action or attitudes, it cannot support the crucial premise in the causal argument for psychologism. Remember that if the argument for psychologism has any hope of success, we have to assume that facts are not causes. If facts are not causes, then causes belong to a different ontological category than the explanantia that figure in rationalizing explanations. This is so even if rationalizing explanations are causal explanations, because facts are explanantia and we have stipulated that facts are not causes. The Davidsonian thesis that rationalizing explanations are causal explanations is consistent with one of two views. The first identifies motivating reasons with the subject’s mental states and says that motivating reasons are causes rather than the explanantia of successful causal/rationalizing explanations. The second identifies motivating reasons with the explanantia of successful causal/rationalizing explanations and distinguishes them from the agent’s mental states/the causal antecedents of the agent’s actions. Both of these options are consistent with the conclusion of Davidson’s argument, but the second is incompatible with (1b) and incompatible with psychologism. Thus, even if Davidson’s arguments succeed, they do not support (1b) or psychologism.

Thursday, June 21, 2012

Will work for books

Found two new books waiting for me in the office this morning. Two perks of the job: free* books and the time to read them. (Free* books include review copies, free copies sent by friends, and copies I've received as payment for services rendered.) The first, The Philosophy of J.L. Austin, contains a handful of really interesting epistemology pieces. The second, Explaining Explanation (2nd Edition), is a real gem. David Ruben (a.k.a. Baby Ruben) has just published a second edition of EE with Paradigm Publishers. In my early days as a graduate student, I remember looking in vain for a good introduction to explanation. Ruben's book was the book I sought. Too bad I didn't know it at the time. Highly recommended. You can soon purchase copies here or here. (Not available until August, unfortunately.)

Wednesday, June 13, 2012

Justification and the Truth-Connection (CUP) is now in print

I just received some advance copies of my first book, Justification and the Truth-Connection (Cambridge University Press). You can pick up a copy here or wait until the end of the month and grab a copy here. The book is about the internalism/externalism debate in epistemology. Why did we need another book on this topic? Well, it seemed to me that none of the standard arguments for the standard views were decisive. I'm not alone in thinking this. Lots of people think that the debate has reached a kind of stalemate. I look at three ways of trying to advance the discussion and end up defending an unorthodox externalist view. To justifiably believe some proposition, you have to believe for reasons that show that you are right about that proposition. Here are some of the highlights.

* In the first chapter, I survey the standard arguments for the standard views and explain why these arguments don't settle the issue.

* There's been considerable discussion of the value of knowledge, but little discussion of the value of justification. In the second chapter, I offer an account of the value of justification and explain why none of the value-driven arguments for internalism or for externalism are decisive.

* In the third chapter I offer an account of the ontology of epistemic reasons. If the account offered here is sound, it shows that the only way to defend the internalist supervenience thesis (i.e., that all of the facts about justification strongly supervene upon a subject's non-factive mental states) is to embrace external world skepticism.

* In the third chapter, I also defend the view that the reasons for which we believe and act are the facts that we have in mind, not mental states or facts about those states. In the course of defending this view, I evaluate Davidson's arguments for psychologism about motivating reasons. Even if his arguments are sound, they don't support psychologism about motivating reasons. They don't show that reasons are causes but that reasons explanations are causal explanations. You can consistently maintain that reasons explanations are causal explanations without identifying reasons with causes. Instead, you can identify reasons with explanantia.

* Having argued that justifying or normative reasons are facts, I argue in the fourth chapter that justification ascriptions are factive. That is to say, the justification of a belief depends (in part) upon whether that belief fits the facts. This is because belief is governed by a norm that enjoins us to exclude from practical and theoretical deliberation beliefs that would pass off spurious reasons as if they were genuine.

* In the fourth chapter, I explain why my view doesn't commit you to any sort of disjunctivist view. McDowell has tried to show that the account of reasons I've defended does commit you to a disjunctivist account of experience. I explain why he's mistaken.

* In the fourth chapter, I discuss arguments from error that are intended to show that the reasons for which we act or believe aren't the facts we have in mind. These arguments don't support the view that motivating reasons are propositions or mental states. The mistake in the argument is in thinking that we believe or act for reasons in the bad case. To act or believe for a reason, I argue, is to respond to a reason that applies to you and that's not something that happens when you're in the bad case.

* In the fifth and sixth chapters, I defend the view that truth, not knowledge, is the norm of assertion and of practical reason. It's there that I show that the truth norm can account for the data typically offered in support of the knowledge norm (e.g., lottery cases, Moorean absurdities). It's there that I show that knowledge norms generate too many epistemic obligations.

* In the sixth chapter, I argue that we need a factive conception of epistemic justification to make sense of our moral intuitions. This might be the most important part of the book. It's because we're rational creatures that we're under the epistemic and moral obligations that we are. They apply to us categorically. I assume that these obligations don't pit us against ourselves, compelling us to believe that our duty is to do one thing and then compelling us to do something else instead. If this is right, then justified beliefs have to serve as the justified basis for action. And if this is right, only a factive account of justified belief will do. Any non-factive account will either deny that the demands of practical and theoretical reason are unified or will undermine any objectivist account of obligation on which facts about obligation are determined independently from our opinions about them.

* In the seventh chapter, I offer my positive account of justification. To justifiably believe something is to believe for reasons that show that you are right. Normative reasons are facts. They determine what we should feel, think, and do. Beliefs are supposed to provide us with reasons so that we can feel what we should feel, think what we should think, and do what we should do. The justification of a belief depends upon whether the belief in question can do what it's supposed to do, and so it has to be held for reasons that put you in a position to see what reasons apply to you.

Thursday, May 31, 2012

3 options

You know that you have three options, but you don't know what to do. You know so little about them, so God decides to come along to offer some help. God tells you that it would be right (=permissible) for you to choose a and wrong (=impermissible) for you to choose c. She tells you that she won't tell you anything about b. An angel that is very, very reliable but not infallible comes along and tells you that it would be right (=permissible) for you to choose b. She won't tell you about a or c. What should you do? What shouldn't you do? I'm curious to know whether it follows from the story I've told that you _shouldn't_ choose b. Or might we say that the angel speaks the truth and that it's acceptable for you to choose b? I ask because the following seems intuitive to me: no conscientious person would choose b over a. What I don't know is whether _that_ tells us anything about b. I can imagine someone running the following argument: no conscientious agent could choose b over a in the circumstances described; so, it could not be right (=permissible) to choose b over a; so, any agent offered these choices shouldn't choose b. [Fwiw, I'm quite sceptical of this line of argument, but there's an intuition here that's interesting.]