A Debate on HEE and the Skeptical Argument against Rationality
Recently, I’ve had a little debate on Facebook that has prompted my interlocutor to take to his own blog in order to clarify his views and rebut my argument. Unfortunately, in the process, I believe my own position has been misrepresented, so I thought I would discuss some of the issues here and respond to some of his claims.
Essentially, my interlocutor fails to understand why Hume’s skeptical argument against reason is problematic for his particular epistemology. In fact, he thinks the argument can be ignored precisely because it leads to untoward consequences. And he thinks that I am guilty of special pleading because I do not think this problem affects my own epistemology. But before I respond to these issues, a little background on the debate is needed.
The discussion was initiated when my interlocutor posted this (for the sake of clarity, his texts appear in blue, my responses are in red, and quotes from Hume are in green):
It appears the the [sic] entire project of the apologist is to introduce, outside the standards of material evidence, a realm of logical possibility that can violate all known laws of physical necessity such as causation, embodied minds, temporal limits, and the known mechanism of knowledge acquisition. What extra-scientific standards of evidence can be justifiably admitted when entertaining supernatural entities?
He claims that there is a “standard of material evidence” and it is from this perspective that he rails against people of faith, whom he believes are holding to beliefs in an irrational way. But what are these standards? Based on my experience of dialoguing with him, I find the following claims to be central to his epistemology:
Rationality has its essence in positioning the degree belief [sic] to the corresponding degree of perceived evidence. To be rational is the very act of positioning your degree of belief to the degree of the corresponding perceived evidence. If you disagree, explain how it could be anything other. You’ll quickly discover you will not be able to.
We can have a rational belief in logic based on our inductive experience with logic. We can not have absolute certainty in logic, since logic itself is determined to be reliable only inductively. You will not be able to come up with any other mechanism to warrant confidence in logic/mathematics.
Therefore, you do NOT have any deductive system that is not itself warranted through induction.
You and I both agree that induction works to justify beliefs. I am content to assume that this is all there is…UNTIL you defend your position that there is something more.
No fallible human can rationally hold beliefs with absolute certainty.
So, we can see that some basic points of my interlocutor’s epistemology include the notions that:
- Rationality is positioning the degree of belief to the corresponding degree of evidence.
- All knowledge is gained through induction.
- No fallible human can rationally hold any belief with absolute certainty.
He uses these three planks to attack those who hold to religious faith in a forum I frequent, since he thinks that they cannot sustain their degree of certainty by appealing to “perceived evidence.” This may call to mind a certain Scottish philosopher of the eighteenth century, David Hume, who believed that all knowledge comes from perceptions. In his famous argument against miracles, Hume remarks, “A wise man… proportions his belief to the evidence” (Enquiry Concerning Human Understanding, Section X, Part I).
Given his obvious commitment to what I would call “Humean Empirical Evidentialism” (HEE), I thought I would challenge him by raising a problem that Hume himself perceived with his own philosophy. Hume observes:
In every judgment that we can form about probability, as well as about knowledge, we ought always to correct the first judgment derived from the nature of the object by a second judgment derived from the nature of the understanding. A man of solid sense and long experience certainly should and usually does have more confidence in his opinions than a man who is foolish and ignorant. . . . But even in someone with the best sense and longest experience this confidence is never complete, because such a person must be conscious of many errors in the past, and must still fear making more. So now there arises a new sort of probability to correct and regulate the first, assigning to it its proper level of confidence. Just as demonstration is subject to the control of probability, so also this probability admits of further adjustment through an act of the mind in which we reflect on the nature of our understanding and on the reasoning that took us to the first probability.
Now we have found in every probability the original uncertainty inherent in the subject and also a second uncertainty derived from the weakness of our judgment in arriving at the first probability. When we have put the two together to get a single over-all probability, we are obliged by our reason to add a third doubt derived from the possibility of error at the second stage where we estimated the reliability of our faculties. This third doubt is one that immediately occurs to us, and if we want to track our reason closely we can’t get out of reaching a conclusion about it. But even if this conclusion is favourable to our second judgment, it is itself based only on probability and must weaken still further our first level of confidence. And it must itself be weakened by a fourth doubt of the same kind, and so on ad infinitum; till at last nothing remains of the first probability, however great we may have supposed it to be, and however small the lessening of it by every new uncertainty. Nothing that is finite can survive an infinity of repeated decreases; and even the vastest quantity that we can imagine must in this manner be reduced to nothing. However strong our first belief is, it is bound to perish when it passes through so many new examinations, each of which somewhat lessens its force and vigour. When I reflect on the natural fallibility of my judgment, I have less confidence in my opinions than when I consider only the topic that I am reasoning about; and when I go still further and scrutinize every successive estimation that I make of my faculties, all the rules of logic require a continual lessening and eventually a total extinction of belief and evidentness (Hume, A Treatise of Human Nature, I.4.1).
The argument is sometimes referred to as the skeptical argument against reason. Here is a good article outlining the argument, for those interested in how it can be updated according to our contemporary understanding of probability calculus.
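For those who want a concrete feel for how the regress is supposed to erode confidence, here is a toy model in Python. This is my own illustration, not Hume’s formalism or anything from the original exchange; the numbers are made up, and the simple multiplicative discount is just one way of modeling the argument: an initial confidence in a proposition is discounted by a fixed second-order confidence at every reflective step.

```python
# A toy model of Hume's regress: start with 75% confidence in P,
# then discount by a 95% second-order confidence at every round of
# reflection on one's own assessing faculties. (This multiplicative
# model is my own simplification, offered only for illustration.)

def iterated_confidence(first_order=0.75, second_order=0.95, steps=0):
    """Confidence in P after `steps` rounds of self-assessment."""
    confidence = first_order
    for _ in range(steps):
        confidence *= second_order
    return confidence

for n in (0, 1, 10, 100, 1000):
    print(f"after {n:4d} reflections: {iterated_confidence(steps=n):.6f}")
```

On this simple model the decay is geometric, so any nonzero per-step doubt eventually drives the working confidence arbitrarily close to zero, which is the "total extinction of belief" Hume describes.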
Anyway, I asked my interlocutor:
Are you familiar with Hume’s skeptical argument against reason?
To which he responded:
Yes. If you’d like to lay out an argument, lay it out neat and clean.
I then gave a cursory version of what I take Hume to be saying:
So suppose I agree with you that, given my past experience and familiarity with my own fallibility, I make sure to always proportion my degree of belief to the evidence. Some proposition, call it P, presents itself. I evaluate the evidence for P and decide there is a good amount of evidence in its favor; say, for the sake of argument, that I think it is 75% likely to be true on the evidence. I determine that I have good reason to believe that P. But that determination is, itself, a reasoning process that I have a belief about, namely a belief that my reasoning process has arrived at a proper assessment of the probability of P given the evidence. Call this belief about the likelihood of P “Q.” My experience with my ability to assess probabilities given evidence tells me that I should put 95% confidence in Q. But if that is true, then I need to lower my confidence in P, since Q tells me that I could be wrong about P being 75% likely. My confidence in Q is itself a belief of which I am not certain, and so a further belief, R, says that it is 95% likely that Q is right. But that means that I should be 95% sure that I am 95% sure that I am 75% sure of P. And each time I iterate this, and reflect on my certitude, I must lower my confidence in P until it approaches the point where I cease to be confident that P is true at all.
He replied:
Daniel, I just want to clarify. You believe that every belief is equally irrational, correct?
To which I responded:
No, I don’t accept Hume’s argument. But since he is an empiricist and believes all knowledge must be proportioned to the degree of evidence, he seems very close to your own views on epistemology. I was wondering how you would escape his problem given that you seem to share his premises.
This is significant, because you will see in his follow-up responses, and in his post, that he continually assumes that I share the epistemological framework in which this problem arises.
The following day, my interlocutor wrote a response. I gave a line-by-line response to him, which you may want to read for further context:
“Why would an epistemic agent be obligated to spend eternity recursively assessing his assessments of his initial assessment? Would that be rational? Could that ever be rational?”
It seems to me that it is required under your definition of rationality, since a belief is only rationally held if the degree to which it is held is proportioned to the available evidence. But part of what ought to go into the evidence in assessing the probability that a belief is true is the degree to which the subject finds his own cognitive faculties and processes of belief formation reliable.
“I’m sure, when you employ the terms “rational” and “irrational” to the beliefs of others, you do so before waiting for their infinitely recursive iterations of assessment is complete. So you’re demanding of others what you don’t demand of yourself. Why?”
I don’t hold the same Humean epistemology you do. I think this is an internal problem for your views, not mine.
“To be rational, an epistemic agent merely needs to assign a probability to the immediate system of assessment as immediately perceived; 1) the assessment of the proposition up against 2) the assessment of one’s current ability to assess. Nothing more is needed, and most certainly not an eternity of recursive assessments of assessments. That, in fact, would be irrational.”
You have to give a reason why it should stop after one iteration other than the fact that you want to avoid the untoward consequences. Why should you assess your ability to assess, but you should not assess your ability to assess your ability to assess? If your reason is that it wouldn’t be pragmatic and it would lead to global skepticism, then it seems you don’t actually proportion your beliefs on all the evidence, just an ad hoc grouping of evidence that you are willing to look at.
“This is easily discerned by a simple thought experiment. You know your mental faculties when you are sober and when you are drunk are very different. But with your game of recursive iterations of assessments of assessments, you are forced to say that the final equations in both contexts amount to an equal degree of uncertainty about your mental faculties. You don’t do that. You intuitively know that would be irrational. Why suggest others must?”
Sure, I think intuition can warrant a degree of certitude that one could not find through induction and evidence. But that’s my epistemology. If your epistemology runs counter to this intuition, shouldn’t you follow the evidence where it leads, even if it is surprising and counterintuitive? If the evidence says that you have no more reason to trust your cognitive faculties when sober or when blitzed, then to hell with intuition, thought experiments, and common sense. The world is weird, and your epistemology demands it.
“Rationality is following what works. Your game of recursive iterations of assessments of assessments demonstrably does not work since it requires eternity. What does work, and what you actually do yourself, is to simply assess the proposition and your current ability to assess that proposition.”
I thought rationality is proportioning your beliefs to the available evidence. Falling back on a pragmatic epistemology when evidence undermines your very ability to make any rational assessment at all is just to abandon the very epistemology you use to beat up on religious folks. I agree that your epistemology doesn’t work. To escape the recursion problem, you basically have to arbitrarily fence off certain relevant evidence and say that you are not going to bother considering it.
“So your “method” fails due to 1) the length of time necessary for the assessment (eternity), 2) the absurd convergence of all beliefs to a disbelief asymptotic to zero certainty, and 3) the fact that you don’t practice it yourself.”
I agree that it would stall out assessment, lead to global skepticism, and that I don’t practice or believe it myself. You seem not to understand that I am saying that this is a problem for the sort of epistemology you espouse daily here on Unbelievable.
So, if you want to respond to my challenge, I basically would want to know how you determine which evidence to consider and which evidence to ignore when considering how you ought to proportion beliefs. Can you provide a reason to restrict recursive assessments of belief forming processes other than the fact that not restricting them would lead your epistemology to ruin?
After a few short responses about how science works, with which I agree (note that science is not the same as Humean evidentialism–a point I will return to later), my interlocutor decided to take to his own blog, where he makes the following remarks:
“This demonstrates a lack of understanding of how science works. Let’s walk through this.”
Here, my interlocutor equates his epistemology with “science” itself. Of course, one need not do science as a Humean. In fact, my whole point in raising the problem is that one must either add ad hoc restrictions or insist that, at a certain point, the recursions are 100% certain and so do not degrade the probability of P, though that solution would undercut his epistemological commitments. He goes on to provide an example to help the discussion along:
I determine that, based on the evidence I perceive, there is a 80% probability that Proposition X is true. We can write this as…
When scientists assess the probability of a proposition, they include and assessment of the resolution, biases and accuracy of their instruments in that probability. For example, if a sociologists, based on a survey, assesses the probability of a child born into a Evangelical home to still be Evangelical at age 20 to be 80%, that assessment of 80% already includes all of the limitations of the methodology and instrument of assessment. And the past reliability of the instruments does not affect the assessment (80%); it only affects the confidence in that assessment.
This is called margin of error. The margin of error does not change the statistically determined probability. It only changes the error bars. If the sample size is small, the statistical analysis may yield an 80% probability, yet margin of error will be large. If the sample size is large, the statistical analysis may yield an 80% probability, yet margin of error will be smaller. But the assessment of an 80% probability need not change.
There may be, in addition to small sample sizes, other elements that can affect the margin of error. One could be a sampling bias. Perhaps Evangelicals are more/less likely to respond to surveys than non-Evangelicals. Perhaps the survey was conducted Sunday morning when most evangelicals are not available to respond to surveys. There are many potential weaknesses in the measurement apparatus. These should be identified in determining the degree of confidence in the statistical determination of the 80% probability, but do not change that 80% probability itself. They only change the margin of error, our confidence in our conclusion.
Part of the assessment also includes the scientists [sic] assessment of their track record of reliability. Have they made mistakes in methodology in the past that have resulted in low accuracy of predictions? If so, this does not change the probability they assign to the proposition upon assessment, but only their degree of certainty in that assessment, the error bars.
In light of this, Daniel, if he has even a fundamental understanding of science, will have to admit that an assessment of the tools of assessment, including the mind doing the assessment, does not in any way affect the probabilistic conclusion. Only the margin of error can be affected.
I do not dispute that a scientific analysis can include a margin of error. Of course the margin of error can diminish the likelihood that the conclusion is true. A scientific study stops there not because the researchers are 100% certain that they have calculated the margin of error along with the statistical probability of the conclusion itself, but because they are not interested in engaging in a Humean skeptical analysis of the cognitive reliability of their own minds vis-a-vis the experiment. They will consider the margins of error that can be objectively determined, but they will not take the subjective aspects of their own cognitive processes and conduct an inductive analysis to determine how reliable those are. Nor are they going to explain the nature of their epistemological commitments in the scientific study, as those are issues outside the domain of science, broadly speaking. So, this point is moot. Essentially, my friend is conflating the practices of scientists with a deeper epistemological issue with which they are not concerned.
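For readers unfamiliar with the statistics being invoked, here is a minimal sketch of the standard margin-of-error calculation for a sample proportion, assuming a simple random sample and a 95% confidence level. The numbers are illustrative, not taken from any actual survey: the 80% point estimate is untouched while the error bars shrink as the sample grows, which is the (objectively quantifiable) sense of "margin of error" the quoted passage describes.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with n respondents,
    assuming a simple random sample (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.80  # the 80% point estimate from the hypothetical survey
for n in (50, 500, 5000):
    moe = margin_of_error(p, n)
    print(f"n={n:5d}: estimate {p:.0%} +/- {moe:.1%}")
```

Note that nothing in this calculation touches the surveyor's confidence in his own cognitive faculties; it quantifies sampling variability only, which is exactly why I say the scientific practice does not engage Hume's deeper problem.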
He then suggests:
But perhaps that is what Daniel is actually saying. Perhaps he is saying we don’t have any respectable margin of error in any assessment we make. Let’s take a closer look.
Let’s say our conclusion of P(X).8 is accompanied by a margin of error of 10%. We might write this…
P(X).8 & ME(X).1
Daniel, for some reason, believes we need to include a recursively [sic] to this. This is what we might end up with after 5 recursions.
P(X).8 & ME(ME(ME(ME(ME(X).1).1).1).1).1 = .00001
The error bars would be located at the poles! This would indeed destroy our confidence in our apparatus of assessment!
Yes, that is what I am essentially saying. And yes, I recognize that scientists are not doing this because they are not interested in Hume’s problem (because they do not assume Hume’s epistemology, which my interlocutor apparently thinks just is science’s “epistemology”). He continues:
But is this what scientists do?
Let me list a few reasons, some very obvious.
Here is the meat of his post. This is where he attempts to solve the Skeptical Argument against Rationality. Let’s see if it is successful:
1. It would have destroyed science long ago. If no one had had legitimate confidence in the apparatus of their methodology (including their own minds), science would have never gotten off the ground. But science works! Are we now to trade what works for something that doesn’t?
Indeed, it would destroy science to adopt HEE, but notice that his higher-level epistemology, the epistemology he uses to know how he knows rather than what he knows, isn’t HEE; it’s pragmatism. Science works, and science, my interlocutor would have us believe, requires that we only consider empirical (perceived) evidence to rationally base the level of our beliefs. So on the meta-level, he exempts himself from all of the restrictions he applies elsewhere (and upon all those “irrational” religious folks).
My interlocutor takes such a hard line, as you saw in the earlier quotes, that, like Hume, Quine, and Putnam, he insists that basic logical and mathematical axioms that are fundamental to reasoning are empirically based through induction. Of course, if it destroys science, and HEE is correct, one would think that he ought to nobly follow the evidence wherever it leads. Thus, (1) is not so much an objection as it is an admission that this process leads to untoward and unlivable consequences. What he needs to do is not merely point out the untoward consequences, demanding that he shouldn’t be susceptible to them. Rather, he needs to show why his epistemology doesn’t leave him vulnerable to the skeptical argument in the first place.
2. There is no logical imperative to employ this silly recursive assessment of the assessing apparatus. If there is, I’d like to see it laid out in syllogistic form. It appears that Daniel would like to force this rule on recursion on the scientific method so he can dismiss it as unreliable. This is straw-manning in its most dishonest form.
The only sense in which there is a logical imperative to employ this “silly” recursion argument is because he, earlier on, insisted that everything is known by induction, and that no belief can be rationally held with certainty. It is because he makes these universal claims about beliefs and knowledge that recursion leads to a problematic form of skepticism.
I, myself, adopt a more contextualized and pluralistic epistemology. Some contexts demand that beliefs be proportioned to the available empirical evidence. Other contexts permit pragmatic justifications. Still other beliefs may be properly basic, and others axiomatic and self-evident. I don’t feel the need to adopt a single method for explaining how we know what we know. The more the merrier, so long as they are appropriate. The only problem is that it becomes more difficult for me to turn around and mock other people for not being rational and following the One True Method™. They might be rational, arrive at a conclusion contrary to mine, and do so via an epistemic method that confers rationality on their belief. This means that rational people don’t have to agree, which has been my experience in life! So, (2) isn’t much of a response so much as it is an admission that he doesn’t really understand that the recursion problem is entailed by his view. I laid out the problem for him, and would recommend that he read the paper that I initially linked to, if he requires tighter argumentation.
3. The process of employing this invented rule of infinite recursion of assessments would require eternity. Daniel seems to believe that we need to assess our assessment of our assessment of our assessment…ad infinitum. Daniel presumably is not currently engaged in this assessment of his own assessments. Why impose it on others?
So (3) suggests two things: a) it requires eternity, and b) I am committing some special pleading by exempting myself. To the first point, it doesn’t really require eternity, though he is free to try. One can quickly extrapolate from the problem and realize that it leads to Pyrrhonian skepticism. Hume despaired of this, for he noticed that if you follow reason all the way out, it tells you that you shouldn’t follow reason at all, which means that we seem to be fundamentally irrational. He was soon comforted by the thought that our “Nature” swoops in and rescues us by emotionally compelling us to utilize reason despite those melancholy and philosophical moments when we know we are deluding ourselves. We begin to forget our situation, and adopt a position that we really can know things. But this is not an epistemological solution; it is a psychological solution that reveals that, epistemologically speaking, not only should the Humean admit to utter skepticism, she should also admit to self-deception as well. To the second point, the problem would only apply to me if I were to adopt this epistemology. This seems to be a continuous problem for my interlocutor. I am trying to explain that this is a problem for his views (given the universal claims he makes about the nature of rationality, certainty, and induction) and not a problem for other people who don’t adopt his HEE position on rationality. I cannot emphasize this enough. Though my inductive experience leads me to suspect that my interlocutor will continue to make this mistake no matter how many times I correct him on it.
4. For an epistemic agent to be rational in any given epistemic context, they merely need to position their degree of belief in a proposition X to the degree that the evidence relevant to X warrants. This conclusion is in no way immutable. It may be changed later as more evidence arrives, including evidence relevant to the mental faculties of the scientist.
This is not a rebuttal. It is just my interlocutor’s attempt to reassert the very epistemology in question and insist that it is definitional of rationality. He has not taken the time to show that this epistemology does not lead to Hume’s skeptical problem, only to complain that the recursion leads to the sort of untoward skepticism I had indicated. My interlocutor sums up his case:
In conclusion, it appears that his epistemic recursion is not something done by Daniel, but only something he is imposing on the normal successful epistemology employed in scientific inquiry in an attempt to make it equivalent or inferior to his own epistemology.
Again, scientific inquiry may have something to say about epistemology–of what we can know and how we can know it, but it is not an epistemology in and of itself. My interlocutor equates his epistemology with science itself, and thinks that he has, therefore, embodied the epistemology that is most rational and successful. This is symptomatic of the sort of scientism that lies just under the surface of this debate.
The epistemology employed by science works. Those holding to religious epistemologies are justifiably envious of its success. And this is the probable cause of their failing attempts to dismantle the epistemology of science.
The irony here is that ultimately my interlocutor is a pragmatist about rationality. Science, if it can be reified in any meaningful way, is successful at the range of problems and questions it addresses. But it is unsuccessful as the complete theory of knowledge that my interlocutor wants to construe it as. Hence his epistemology about his epistemology reveals that he is just a pragmatist after all. Science works, so the problem of induction and the skeptical argument can be safely ignored. But what does it mean to “work”? And why can’t religious views “work” too? William James certainly thought that religious views could be pragmatically justified. Ah, but when it comes to religion, we should be Humean skeptics; when we do science, we should adopt HEE and carefully assess our hypotheses and the potential methodological errors we can objectively quantify. But when someone turns HEE inward, and raises potential problems with induction and certainty, then we can fall back on the comfy pillow that is pragmatic epistemology. The issue in our debate is whether HEE works at all. My contention is that it undercuts science and rationality. To turn around and say, “but it works” is merely to beg the question, and to do so by a) stumbling into the problem of induction by circularly justifying the rationality of this all-inductive epistemology through an inductive appeal to past success, and b) conflating or co-opting the success of a relatively epistemologically neutral enterprise, science, with HEE.
So, I will simply conclude by saying that my interlocutor has not yet understood why his epistemology is subject to untoward and unlivable consequences. He thinks that by merely pointing out that the consequences are so dire, he should not be subject to the problem. And he thinks that because other people, who don’t share his HEE views, are not subject to this problem, then he shouldn’t be subject to it either. Unfortunately, if you hold that rationality is determined by proportioning beliefs to empirical evidence, that all knowledge is inductive, and no beliefs are certain, you are subject to Hume’s critique of rationality.
[Update] My interlocutor insists that I am misrepresenting him by referencing knowledge and epistemology rather than sticking strictly to the topic of rationality. He thinks that I am covertly smuggling the issue of truth into the discussion. I do not believe so; I believe I am dealing specifically with the question of rational warrant or justification as it is a part of any theory of knowledge.
He responded on Facebook with:
I don’t believe there exists a coherent theory of knowledge if knowledge must obtain to traditional standards. There is only rationality. What are you not getting?
I responded with:
Do you have knowledge?
To which he replied:
Not the way it is often defined philosophically. But I do have knowledge of various things as the word knowledge is conventionally employed.
It is conventionally employed to mean a high degree of certainty based on the perceived evidence.
Please correct your misrepresentations.
Unfortunately, I don’t think I can correct any misrepresentations, especially given these comments. Notice what he has said. He claims that there is only rationality, and when he speaks of “knowing” something, he just means that he “rationally holds” it. He rejects the idea that knowledge entails truth. I understand that, and I don’t want any of my readers to think that my interlocutor holds that knowledge entails true belief. But note, then, that his “epistemology” just is “rationality.” And that was my point all along. So I think it is fair to treat his theory of rationality as a theory of knowledge, precisely because it explains what my interlocutor means when he says “I know P” and “This is how I know P.” He just means that he has evidence for P, and has proportioned his beliefs accordingly. But this theory doesn’t just reduce knowledge to rationality; it reduces rationality to irrationality, and so one is left with skepticism. This isn’t a new problem!
[NB: I was initially perturbed that my interlocutor took our discussion from a closed group discussion to his public blog. He used my full name and suggested that I had a basic misunderstanding of science (because HEE is science?). I decided that I had to respond in my post to air my side of the discussion. He has since removed my name, but given that we are now cross-linked, we can figure out who the two parties in this discussion are. I am fine with that, but it did force me to blog on this issue, when I really didn’t want to. Now my interlocutor wants me to fix what he believes are misrepresentations. Unfortunately, that is the way the cookie crumbles when you move from private discussions to public blogs. I will let my readers read his blog post, and mine to decide on their own as to whether I have misrepresented him.]