Jessica Brown (forthcoming) identifies a potential problem for Williamson’s (2000) approach to evidence and knowledge. Williamson identifies your evidence with the propositions that you know:
E=K: S’s evidence includes p iff S knows p.
He also accepts this account of evidential probability:
EP: The evidential probability of a proposition p for you is the conditional probability of p on your total evidence.
Taken together, these two claims commit Williamson to a form of infallibilism:
Infallibilism: If S knows p, the evidential probability of p on S’s evidence is 1.
Brown's worry is that, given infallibilism, Williamson faces a dilemma: either inductive inference never yields knowledge (inductive skepticism), or a proposition can be evidence for itself. Why is Williamson forced to choose between inductive skepticism and the possibility of p being evidence for p? Consider a standard approach to evidential support, one that Williamson accepts:
EV: e is part of S’s evidence for h iff S’s evidence includes e and P(h/e) > P(h).
In the case of inductive inference, prior to coming to know p the evidential probability of p is less than 1. After p is added to your evidence, its probability rises to 1. So, in the case of inductive inference that results in knowledge, p is evidence for p, because (i) p is part of your evidence (by E=K and the anti-skeptical assumption) and (ii) P(p/p) > P(p).
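The probabilistic step here can be checked on a toy model. The following sketch (my illustration, not anything in Brown or Williamson; the four-world space and the 0.75 prior are invented for the example) treats propositions as sets of worlds and shows that whenever P(p) < 1, conditioning on p raises its probability to 1, so EV's support condition P(p/p) > P(p) is satisfied:

```python
# Toy finite probability space: propositions are sets of worlds.
def conditional(prob, event, given):
    """P(event | given) over a dict mapping worlds to probabilities."""
    p_given = sum(prob[w] for w in given)
    p_both = sum(prob[w] for w in event & given)
    return p_both / p_given

# Four equiprobable worlds; p is true in three of them (so P(p) < 1).
worlds = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}
p = {"w1", "w2", "w3"}

prior = sum(worlds[w] for w in p)      # P(p) = 0.75
posterior = conditional(worlds, p, p)  # P(p | p) = 1.0

# EV's condition for p being evidence for p holds: P(p|p) > P(p).
assert posterior > prior
```

Nothing here is specific to induction; the point is just the trivial probabilistic fact that P(p/p) = 1 whenever P(p) > 0, which is why the problem bites as soon as a proposition with prior probability below 1 is admitted into the evidence.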
How serious a problem is this for Williamson? Brown thinks it's quite serious. I disagree, but that's for another post.
I have written more about this in a paper that will probably never see the light of day (unless someone can think of a good journal for it):