One of the most well-known examples of being led to irrational belief comes from psychologist Solomon Asch’s experiments on conformity, which he ran in the early 1950s as a professor at Swarthmore. The subject of the experiment would sit at a table with several other people who, he was told, were also participating in the experiment; but who, unbeknownst to him, were really confederates working with the experimenter.
Now, it was explained that this was a study of perception, and that they would be shown a series of charts with different length line segments, labelled A, B and C; and they just needed to say which was the same length as a fourth line next to them. Simple enough task.
The table was arranged so that the subject would answer last, after the others said which line they thought was the same size as the comparison line.
The first chart was shown, and all of the confederates answered correctly, and so did the test subject. Same for the next chart. But then, on the third trial the confederates gave what was obviously the wrong answer.
When it got to the test subject’s turn on these critical trials (12 of the 18 in the original study), subjects frequently gave the wrong answer so as not to stick out – a result that has been replicated many times since.
Not conforming with the group is something we fear, and we will do things we know are wrong in order to get along with the majority. What Asch showed is interesting: that we will act in ways contrary to our reasonable beliefs. But the people who gave the wrong answer to fit in knew that it was wrong – right? Well, that’s where it gets interesting.
Interviewed after the fact, most said, “Yeah, I knew it was wrong, but I didn’t want to embarrass myself or mess up the researcher’s data by being different.” But then there were others who, more interestingly, said that because the test was so easy, and because the others answered so quickly and confidently, they started to doubt themselves: “We can’t all be right, because we disagree. Since they all seem to agree with one another, I must be wrong.”
Asch’s study was designed to show what people would do; what the interviews revealed is that conformity reaches deeper, shaping not only what we say, but what we actually believe.
By “rational” we mean “that which we have grounds to show is likely true”. A belief is rational if we have good reason to believe it is at least probably the case in reality.
What we are looking for is those grounds that give us good reason to believe that something is likely to be the case.
Arguments come in two flavours: “inductive” and “deductive”. Now, an argument is deductive if its conclusion is no broader than its premises: that is, if the conclusion only refers to that which is mentioned in the premises. The technical term is to say that deductive inferences are non-ampliative: the conclusion is contained within the content of the premises. So consider, “All men are mortal. Socrates is a man. Therefore Socrates is mortal.” In the premises we have all men being mortal. But then we move to the conclusion: to just one of those men being mortal. Deduction moves from broad to narrow.
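To make the structure vivid, here is a minimal sketch of that syllogism in the Lean proof assistant (the names Person, Man, Mortal, and socrates are illustrative placeholders, not anything from the lecture): the conclusion is obtained purely from what the premises already contain.

```lean
-- The Socrates syllogism, formalised. All names are hypothetical labels.
variable (Person : Type) (Man Mortal : Person → Prop)

example (everyManIsMortal : ∀ x, Man x → Mortal x)     -- premise 1: all men are mortal
    (socrates : Person) (socratesIsMan : Man socrates) -- premise 2: Socrates is a man
    : Mortal socrates :=                               -- conclusion: Socrates is mortal
  everyManIsMortal socrates socratesIsMan
```

Nothing appears in the conclusion that was not already in the premises; the proof simply narrows the universal claim to one instance.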
Inductive arguments are ampliative – that is, they do have a conclusion that is broader than the premises. Just because the sun has risen every morning, there is no guarantee that it will do so again. It probably will, but not definitely. Successful inductive arguments, because they are ampliative, only give us high probability: not absolute certainty. With an inductive argument, because the conclusion lies outside the scope of the premises, there is a risk that even a very good inductive argument might have a false conclusion. Now, this is not to say that we shouldn’t believe it. We should believe that which is probably true. It would be wonderful to achieve absolute certainty – the sort of thing you get from deduction; but in most real-life cases, we are restricted to the high probability of induction.
To evaluate arguments we use two criteria: validity and well-groundedness. An argument is valid if and only if, assuming the truth of the premises for the sake of argument, the conclusion follows from them. The important thing to notice here is that we are assuming the premises are true for the sake of argument. Maybe they are true. Maybe they’re false. We don’t care. Validity does not concern the content of the premises. All we are looking at is whether the premises, if true, will lead you to the conclusion. Validity is not about the content of the argument, but about the form of the argument. Validity looks at the skeleton of the argument and determines whether it is strong enough to bear the weight of the conclusion.
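Because validity is purely a matter of form, it can even be checked mechanically. Here is a minimal Python sketch (the argument forms and variable names are invented for illustration): a propositional form is valid exactly when no assignment of truth values makes all the premises true and the conclusion false.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Valid iff no truth-value assignment makes every premise true
    while the conclusion is false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample
    return True

# Modus ponens: "If P then Q; P; therefore Q" -- valid by form alone.
mp_premises = [lambda e: (not e["P"]) or e["Q"],  # P -> Q
               lambda e: e["P"]]
print(is_valid(mp_premises, lambda e: e["Q"], ["P", "Q"]))  # True

# Affirming the consequent: "If P then Q; Q; therefore P" -- invalid form.
ac_premises = [lambda e: (not e["P"]) or e["Q"],  # P -> Q
               lambda e: e["Q"]]
print(is_valid(ac_premises, lambda e: e["P"], ["P", "Q"]))  # False
```

Notice the checker never asks whether P or Q is actually true in the world; it only inspects the skeleton, which is exactly what validity is.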
Now, an argument that satisfies both of our criteria – that is both valid and well grounded – is what we call “sound”. A sound argument gives us good reason to believe its conclusion. What we want are sound arguments. In order to determine which arguments are sound we need to develop tests – both for validity and for well-groundedness. Validity looks at the structural elements of the argument. Well-groundedness looks at the acceptability of the premises – everything about the argument other than its form.
We want to focus on well-groundedness – that is, the question of the likely truth of the premises. The conclusion rests upon these premises, and we want to see when we have good reason to take seriously the premises and the arguments they make up. We want to know when it is rational to believe something. But we are going to approach this study backwards. We are going to look at the ways arguments can go wrong. It turns out there are standard rhetorical tricks and traps that we regularly fall into. There are common reasoning errors that create unsound arguments, but which are nonetheless attractive to our minds. They sound like good arguments, but in fact they are unsound arguments. We call these reasoning errors “fallacies”. A fallacy, sometimes called “a courtroom trick”, is an identifiable category of argument that does not support its conclusion. Consider, for example, one of the most well-known fallacies: the circular argument.
Does the fact that it is logically flawed mean we have cause to reject the conclusion of a circular argument? If we find a flaw, do we have reason to think its conclusion is likely false? No. Just because somebody makes a bad argument for a conclusion does not give us rational justification for thinking the conclusion is not the case. Remember what an argument does. It gives us good reason to believe in the likely truth of the conclusion. If an argument fails, it means that those specific premises do not give us reason to believe that conclusion. But does it mean that there is no other set of premises that does? Couldn’t there be a good argument for the conclusion, and this one just isn’t it? Of course: every true proposition can be made the conclusion of a terrible argument. We just have to keep in mind that the absence of a good argument is not a refutation.
Circular arguments are one instance of a larger class of fallacies we call “begging the question”. Now, this is a phrase that is often misused. One will frequently hear someone say, “Well, that begs the question that...”, when really what they mean is: that leads one to ask the question. But what begging the question really means is arguing unfairly, in a way that tries to use the conclusion in support of itself. Any time we try to get the conclusion to pull itself up by its logical bootstraps, instead of giving independent reasons for belief – deriving support from propositions distinct from the conclusion itself – we are looking at begging the question.
To understand the term, let’s think about what a question is. So, what’s a question? A question is a request for information. A pseudo-question, on the other hand, seems like a request for information, but really isn’t. “You’re not going to wear that, are you?” looks like a question; sounds like a question. It is not a question. It is, in fact, a combination of the declarative sentence, “You are not wearing that tonight”, and the imperative, “Go change clothes, now”. It’s a pseudo-question. It is a conveyance of an opinion about the outfit, not a request for information about my intention to wear it. It is what we call a leading question. That is, not a question fairly asked to elicit an honest response from the listener, but rather a sentence that looks like a question and is designed to lead the listener to a particular desired response.
So, one way to beg the question is to use questions that are not questions. Another way is to use the connotative power of language to ask those questions unfairly. Now, words have denotation – they pick out certain things – and they have connotation – that is, they lend an emotional weight to those things. “Budget movers” implies that they are cheaper than other companies, but we have no supporting evidence of this. “Speedy motors”, “Premier windows”, “Apex tree removals”, “Acme markets”, “Elite auto-glass”, “Best build homes”. Are they the cream of the crop? We don’t know. But we are led to think so without any evidence. Why? Because of the name. We do this in politics too. Think of the labelling of sides in the abortion debate: “pro-life” and “pro-choice”. Who doesn’t like life? Who doesn’t want choice? Why are these labels used? Because they beg the question – that is, they lead the listener toward one side of the debate or the other, not with rational argumentation, but just with the connotative power of the language used. The words are chosen because of their connotative power to sway the listener.
We have seen our first two linguistic fallacies: circular argument – where the conclusion is included as part of the premises; and begging the question – where the connotative power of language is used to lead the listener to a conclusion in lieu of evidence. Our next fallacy is called “equivocation”. The basis of this fallacy is ambiguity. Words can mean more than one thing. This is not a logical problem in itself – just a feature of language. The fallacy of equivocation occurs when we change the meaning of a word in the middle of an argument. Consider the following argument.
Tables are furniture.
My statistics book has tables in it.
Therefore, there is furniture in my statistics book.
What’s the flaw? Well, I have changed the meaning of the word “tables” here – from “a flat raised surface on which to place things” to “a rectangular array of numbers”. I changed the meaning of the operative term within the argument, and the result is the absurdity of thinking there is furniture inside of a book.
Now, life would be wonderful if all examples of equivocation were this clear cut. Unfortunately, ambiguous terms may have meanings that are related; so the resultant shift from one to another is more subtle and easier to overlook. Take the following example.
She asked me to her cousin’s wedding as a casual friend.
Ties are formal-wear, not casual.
Therefore I will not be wearing a tie to the wedding.
Now, both senses of the word “casual” here refer to relaxed standards; but one sense means a lack of long-term commitment in an interpersonal relationship, whereas the other refers to informal dress. Another example:
We have a right to vote.
One should always do what is right.
Therefore, one should always vote.
In this case, the word “right” is being equivocated upon. In the first premise, “right” means a “legally protected action”; in the second premise it means “an act that is morally required”. But these are two different meanings. They are, however, ones that can be confused if one is not careful.
Another fallacy to consider is “distinction without a difference”.
A distinction is a linguistic separation of two concepts that are different. This fallacy occurs when we try to draw a distinction between two things that are not in fact distinct.
“I am not saying you don’t look good in that outfit. I am just saying it doesn’t bring out your natural beauty in the way the other outfits do.”
“Well, I didn’t steal it. I just didn’t ask before I borrowed it.”
“We never dated. We just went to dinner and dancing a couple of times.”
We will often try to justify our misdeeds or problematic views by trying to distance them from the categories to which we know they really belong.
“I am not sloppy. I just don’t clear up as often as I should.”
“It is not that I am uncaring. I just don’t think it is up to me to help out the less fortunate.”
Notice the difference between “distinction without a difference” and “circular argument”. Distinction without a difference has the form, “It is not A; it’s A”, denying the very thing being affirmed; whereas in circular argument we are saying, “A because A”. Okay, so here is our first set of fallacies we can point out.
Circular argument: trying to use a conclusion as a premise.
Begging the question: using the connotative power of language instead of evidence to convince listeners.
Equivocation: changing the meaning of operative terms in the middle of an argument.
Distinction without a difference: linguistically separating two meanings that are not distinct.
In order to have a good argument from authority, the cited authority needs to be real. But we hear arguments from authority which violate this single criterion all the time. Think about the phrase, “I read somewhere that...”; the idea being that if I read it, it must have been published somewhere, and surely you can trust information that has been published. Now, if this person had read it in some prestigious, renowned professional journal that goes through a strict refereeing process, then having read it would be a successful appeal to authority.
So, for an appeal to authority to be legitimate, the cited authority must:
Have material existence.
Be an expert in the field.
Be impartial, and not profit from my belief or ability to sway it.
A related fallacy is the “appeal to common opinion”. Simply because a lot of people believe something does not make it true. It was widely believed that the Earth was flat, and that slavery was morally acceptable. How many times does a best-selling book, or a top-grossing film, cause disappointment? We learn to think of ourselves as independent-minded, as the captains of our own intellectual ships – but we are deeply influenced by the views of those around us. Humans are – as Aristotle said – “political animals”. That is, we live in communities, and we are influenced by the actions and beliefs of those around us. We don’t like to stick out, and when we believe something that the majority of others around us do not, it often leads to doubt and insecurity. When we surround ourselves with others of like mind, it is comforting, and it leads us to be more sure of our own views than is rationally warranted. In a deep way, we are programmed to commit the fallacy of appeal to common opinion. To be rational, we need to learn to keep this proclivity in check. Now, this does not mean that it is rational always to reject common opinion. Sometimes an appeal to common opinion is a version of an acceptable argument from authority. In some cases it is not wrong to believe that what everyone else believes is also true. But we need an independent reason why everybody else thinks so.
We often use analogies in science and everyday life. Predicting the weather uses computer models – which are a form of analogy. Testing and research on diseases and medicines often uses animals as test subjects before drugs move to experimentation with humans. But a flaw can arise when we exaggerate an effect by linking a system of significantly lesser degree to one of significantly higher degree. An analogy can fail, despite genuine similarity, because of a difference in degree. If the analogy is apt, the argument can be a good one. But we need to make sure that the analogy connects systems that have the requisite structural similarities, and that the degree of those similarities supports the conclusion. If not, we are looking at a faulty analogy.
To recap, we have seen the following reasoning errors:
The fallacy of faulty authority.
The fallacy of an appeal to common opinion.
The fallacy of an appeal to tradition.
The fallacy of novelty.
The fallacy of faulty analogy.
Most people have learnt the old logical dictum that correlation does not entail causation. That is, simply because you find two things occurring together does not mean that you can assert with any certainty that one caused the other. Now, a causal relation may in fact be present. But simply identifying a correlation does not in any way establish that there is a cause and effect relation. We need to understand the mechanism by which event A brings about event B if we want to be justified in asserting that A causes B.
It is certainly true that if we wish to establish that A causes B, then A must precede B in time. Causes precede effects. But just because we saw event A before event B does not itself give us good reason to infer that A caused B. Most logicians refer to this as the “post hoc” fallacy. This is an abbreviation of “post hoc, ergo propter hoc”, which means, “after this, therefore because of this”. Just because it came after does not mean it occurred because of.
In the case of the post hoc fallacy, we wrongly assert a cause and effect relationship based on nothing but time order; but with “neglect of a common cause” we assert a cause and effect relationship because we see two things together, while overlooking a third element which actually causes them both. In both cases we wrongly assert that A caused B, or B caused A.
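A quick simulation makes the trap concrete. In this minimal Python sketch (the scenario and all numbers are invented for illustration), ice-cream sales and sunburns turn out to be strongly correlated, yet neither causes the other – hot weather drives both.

```python
import random
random.seed(1)

n = 10_000
heat = [random.random() for _ in range(n)]            # common cause: hot days
ice_cream = [h + random.gauss(0, 0.1) for h in heat]  # effect 1: ice-cream sales
sunburn = [h + random.gauss(0, 0.1) for h in heat]    # effect 2: sunburns

def corr(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Near-perfect correlation, but no causal arrow between the two effects.
print(f"correlation(ice_cream, sunburn) = {corr(ice_cream, sunburn):.2f}")
```

The correlation is real; the mistake would be reading a cause and effect relation off it while neglecting the heat.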
But there are two other cause and effect reasoning errors that can occur even if there actually is a causal relation between A and B; and these both result from the fact that causes in the real world are often multi-faceted.
So, A is necessary for B if you can’t have B without first having had A; and A is sufficient for B if having A is by itself enough to guarantee B. The fallacy occurs when you confuse one type of condition with the other. Water may be necessary to grow a plant, but it is not sufficient if there is not enough light or there are not enough nutrients. While thinking that a necessary condition is sufficient is the most common version of this fallacy, the converse can be found occasionally as well. Sometimes we take a sufficient condition and wrongly assert it to be necessary. This is often the result of focussing on a particular way of doing something that has become habitual for us, and we allow ourselves to become blinded to other ways of accomplishing the same task. Question: why is that dirty dish sitting in the sink? Answer: because the dishwasher is running. But the dishwasher is not necessary to clean this dish. It could be washed by hand instead.
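The two conditions can be stated precisely as implications. Here is a minimal Python sketch (the predicates and the tiny set of cases are invented for illustration) that checks each one over a toy collection of possible situations, using the plant example above.

```python
# Toy "worlds": each dict records what held in one situation.
cases = [
    {"water": True,  "light": True,  "grows": True},
    {"water": True,  "light": False, "grows": False},  # water alone was not enough
    {"water": False, "light": True,  "grows": False},  # no growth without water
]

def necessary(cond, effect, worlds):
    """cond is necessary for effect: effect never occurs without cond."""
    return all(w[cond] for w in worlds if w[effect])

def sufficient(cond, effect, worlds):
    """cond is sufficient for effect: cond always brings effect with it."""
    return all(w[effect] for w in worlds if w[cond])

print(necessary("water", "grows", cases))   # True: no growth without water
print(sufficient("water", "grows", cases))  # False: water alone does not guarantee growth
```

The fallacy is sliding from the first check to the second: the two implications run in opposite directions.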
So, “causal over-simplification” takes a complex interplay of causes, and ignores all but one, falsely elevating it to the status of the cause; and, “confusion of a necessary and sufficient condition” is the error where we take a sufficient condition wrongly to be necessary, or a necessary condition wrongly to be sufficient.
Our last causal fallacy is perhaps the most famous: the “slippery slope fallacy”, or the “domino fallacy”. Now, there is no doubt that there are causal chains: that is, an event A causes B, which in turn becomes the cause of C, which in turn causes D. While such chains of cause and effect relations exist, the fallacy occurs when one asserts the existence of such a chain without giving a full causal argument for each and every step in the chain. As we have seen, arguing for cause and effect relations can be tricky. We need to show the underlying mechanism at work; and in the complex world of intervening causes, A often would bring about B, but the later steps may fail. When warning people off an act we think is imprudent, we will often neglect to do all of the logical heavy-lifting and simply assert a causal chain, without arguing for the likelihood of each link, and leave it at that. Make the case for each step. If you cannot, the argument fails.
So we have five new fallacies:
Post hoc, ergo propter hoc.
Neglect of a common cause.
Causal over-simplification.
Confusion of necessary and sufficient conditions.
Slippery slope fallacy.
Now, ad hominem attacks – attacks aimed at the arguer rather than the argument – tend to come in three general categories. First is the “You’re a jerk” version. Now, there are miserable, hateful, deceitful, thoughtless, selfish people in this world who do nothing to make this world a better place, and who often pursue their petty desires at the cost of the well-being of others. But if that person makes an argument, we need to analyse the validity of the argument, and assess the likelihood of the premises’ truth. To try to cut off debate with this horrible, immoral being on the grounds that this person is a horrible, immoral being is illegitimate, because even horrible, immoral beings can give us good reason to believe things.
The second version of an ad hominem attack is a sort of “guilt by association”, where we discount an argument, not for objective reasons, but because the person offering it belongs to some identifiable group. “Don’t listen to her – she’s a feminist”; “You can’t take his arguments seriously – he’s a Christian, a Jew, a Muslim, a Hindu, an atheist, a Texan, a Liberal, a Conservative, a Socialist, a Communist; or worst of all, a philosopher.” Just because a philosopher says it does not give us the logical right to automatically discount it. A common variation of this kind of ad hominem attack is to point out that the speaker is not among those who follow the advice the speaker is giving. This is known by its Latin name, “tu quoque” – the “but you do it too” objection; and it is an illegitimate ad hominem attack. Now, it may be true that the person telling you not to drink is an alcoholic, or the person telling you to give up cigarettes is a four-pack-a-day smoker, or the person telling you to keep on the straight and narrow is himself in prison, or the person telling you to stay in school is a drop-out. It doesn’t mean it is not good advice.
The third class of ad hominem attacks is where we focus on the motivations of the speaker. “Well, of course you’d say that: you stand to profit if it is true.” Again, maybe that is correct. Maybe it isn’t. But the argument stands or falls on its own merits, regardless of who makes it, or where, or when.
Another diversionary tactic that we must be on guard against is what we call “attacking a straw man”. Now, the name comes from the fact that it is easier to beat the stuffing out of a scarecrow than it is to take on an actual human being. It is a metaphor for arguments that do not address the actual argument made, but rather a weaker, but easier to refute, version.
So, the first kind of straw man is where the scope of a premise is made broader or narrower, to weaken the argument, while keeping the rest of the premises intact.
The other type of straw man argument is more radical. It is where the interlocutor replaces the premises wholesale. When you hear the phrase, “The real reason...”, you are likely looking at a straw man argument. Why would someone say, “The real reason...”? Because what they are doing is replacing the real reasons – that is, the premises – with new premises; and odds are, these new premises are going to be a whole lot easier to undermine. But in undermining them, the interlocutor has done nothing with regard to the soundness of the original argument, since the original argument is gone. We find this sort of reasoning error when the interlocutor does not contend that the conclusion fails to follow from the premises, or that the premises are false. Instead, the interlocutor substitutes a new set of premises altogether, creating a completely new argument to critique. To the surprise of no-one, the new argument is much easier to attack. Again, contrary to the principle of charity, we are analysing a weaker version of an argument, not the strongest possible one. That is attacking a straw man.
The final fallacy in this discussion is what we call a “red herring”. Where attacking a straw man is the error wherein we replace the premises of an offered argument, a “red herring” is where we replace the conclusion. With a straw man we are still talking about the same thing, but talking about it differently. But when we change the conclusion, we are completely changing the topic of conversation. That is a “red herring” – the ultimate in argumentative diversion.
The name comes from fox hunting. To throw the dogs off the trail, the idea is to take something so strong-smelling that it hides the fox’s scent. So a herring would be smoked until red, and then dragged in front of the dogs in order to pull their attention away from the fox, to something off to the side. That’s the metaphor. Something that distracts us to a side issue, away from that which we were originally pursuing.
To recap:
Ad hominem is where we try to undermine belief in the soundness of the argument by ignoring the argument and instead attacking the arguer: by pointing out undesirable aspects of the arguer, by associating the arguer with undesirable groups, or by pointing out that the arguer doesn’t follow his own advice. But all of these are diversionary fallacies that fail to address the argument at all. Arguments stand or fall on their own merits, regardless of whose mouth they come out of.
We looked at attacking a straw man, which is a violation of the logician’s principle of charity – by which, when we evaluate an argument, we must assess the soundness of the strongest possible version. Attacking a straw man is when we construct a weaker version of an argument, and attack that. There’s the “Oh, so you’re saying...” version, where we focus on one premise and alter its scope to make it easier to attack; and then there’s the “The real reason...” version, where we replace the premises wholesale, proposing a completely different argument for the same conclusion, and attack the argument we invented instead of the one the arguer gave.
Finally, we looked at “red herrings”, where we take an argument and respond to it by moving the conversation to a completely new topic.
Recall that an argument is valid if and only if, assuming the truth of the premises for the sake of argument, the truth or probable truth of the conclusion follows. Validity is a function of the form of the argument – the structure of the argument, the logical integrity of its underlying skeleton. When we look at questions of validity, we are asking what propositions follow from what propositions. Recall also that arguments come in two different kinds: deductive and inductive. A deductive argument is one that is non-ampliative – that is, the scope of the conclusion is no broader than the scope of the premises. In other words, deductive arguments go from broad to narrow: the conclusion does not talk about anything that wasn’t already covered in the premises. Because there is no new information in the conclusion, one nice thing about deductive arguments is that if they are sound – that is, both valid and well grounded – then their conclusions must be true.
Induction, by contrast, lets us grow our stockpile of beliefs beyond what the premises contain. Question: what is the cost of this growing stockpile of beliefs? Answer: certainty. With induction, because we are making a logical leap beyond the content of the premises, there is no guarantee that the results must be true. The best we get from induction is likely truth. If you have a good inductive argument, the conclusion is probably true. Now, probably true is sufficient for rational belief. We should believe that which is probably true. A lack of guarantee is not evidence against its being true. It is evidence – good inductive evidence – and while we should not be certain, we should believe it.
Deductive certainty in all our beliefs would be wonderful. But it is not available to us. We need inductive inferences because, in most life situations, they are all we have, and they are good enough. We make inductive inferences all the time; and we should. High probability is all we have; and for rational belief, it is enough.
While there are many different kinds of inductive arguments, there are three forms that are the most important, because they are the most frequently used. An inductive analogy has the form: because all of the observed P’s have the property A, so will the next P we encounter. It is when we apply what we have learnt from all past instances to one in the present or the future.
Now, a related but stronger inductive inference is the “universal generalisation”. It has the form that because all of the observed P’s have the property A, therefore all P’s have the property A.
While both inductive analogy and universal generalisation have the same premises, notice the difference in the conclusions. One is an analogy, in that it says a future instance will be like all the past instances; the other is a much broader claim, saying something about all members of the population. Universal generalisations are not just predictions about a single upcoming instance.
Our third form of inductive inference is what we call statistical generalisation. Its form is: X percent of all observed P’s have the property A; therefore X percent of all P’s have the property A. Now, like universal generalisation, we are generalising over the entire set of P’s from some limited sample of P’s. But here we are not attributing the property to all of them, but to some percentage – either an explicit percentage, or a vaguer proportion of the population.
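Here is a minimal Python sketch of that form (the population and the 30 percent figure are invented for illustration): we observe the property in a random sample of P’s, and project that proportion onto the whole population.

```python
import random
random.seed(0)

# Hypothetical population: 30% of individuals truly have property A.
population = [random.random() < 0.3 for _ in range(100_000)]

sample = random.sample(population, 500)   # a random sample of observed P's
estimate = sum(sample) / len(sample)      # X percent of observed P's have A

# The generalisation: roughly X percent of ALL P's have A.
print(f"Sample proportion with A: {estimate:.1%}")
```

Because the sample was drawn at random, the sample proportion will sit close to the true 30 percent; the next two requirements explain what happens when it is not drawn that way.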
For deductive arguments, as long as the argument was valid to begin with, adding new premises, no matter what they are, will always keep the argument valid. This is true for all deductive arguments. But it is not true for inductive arguments. For inductive arguments, adding a new premise can turn a completely valid argument into an invalid one. For inductive inferences, one new piece of information can completely destroy an argument in a way that can’t happen for deductive arguments. So for an inductive inference to be valid, we need a guarantee that the evidence given in the premises is all of the relevant observations we have.
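To see how one new premise can wreck an inductive inference, consider this minimal Python sketch (the swan observations are a stock illustration, not data from the lecture):

```python
# Fifty observations, all supporting "all swans are white".
observations = ["white swan"] * 50

# The universal generalisation looks strong on the evidence so far.
print(all(s == "white swan" for s in observations))  # True

observations.append("black swan")  # one new piece of relevant evidence arrives

# The very same inference is now undermined.
print(all(s == "white swan" for s in observations))  # False
```

No added premise could do this to a valid deductive argument; that asymmetry is why induction demands all the relevant evidence.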
So, we have seen three fallacies we need to avoid in making inductive inferences:
First is the fallacy of selective evidence. The requirement of complete information mandates that we cannot accept cherry-picked data. We need to know that the evidence given is all the evidence gathered, and not just hand-selected examples that make a weak case sound stronger than it is.
Second is the fallacy of insufficient sample. In order to make a good inductive argument, you need to start with enough data. One or two instances – what we call anecdotal evidence – is not enough if we want to make any sort of larger assertion.
Third is unrepresentative data. Samples must not only be large enough; they must also resemble the population over which we are generalising, as the sketch below illustrates.
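Here is a minimal Python sketch of that third fallacy (the city/rural scenario and every percentage in it are invented for illustration): the sample is large, but because it is drawn from only one part of the population, the generalisation it licenses is badly wrong.

```python
import random
random.seed(2)

# Hypothetical population: 20% of city dwellers own a car; 90% of rural residents do.
city = [random.random() < 0.2 for _ in range(80_000)]
rural = [random.random() < 0.9 for _ in range(20_000)]
population = city + rural

true_rate = sum(population) / len(population)

biased_sample = random.sample(rural, 5_000)   # big sample, but rural-only
biased_rate = sum(biased_sample) / len(biased_sample)

print(f"True ownership rate:  {true_rate:.1%}")    # about 34%
print(f"Rural-only estimate:  {biased_rate:.1%}")  # about 90% -- badly off
```

Five thousand data points did not help: size cannot compensate for a sample that fails to mirror the population.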
Inductive inferences do not give us the certainty of deduction; but in the messiness of the real world, they are the inferences we most often make. We know that we learn from experience; but what logical form does that learning take? The answer is: inductive inferences. Because their conclusions out-run the content of their premises, inductions are incapable of giving us complete confidence in their conclusions; but they give us high probability, and for reasonable belief, that’s good enough.