The following is extracted from the lectures of Dr. Steven Gimbel's An Introduction to Formal Logic, course number 4215, The Great Courses:

Lecture 1

One of the most well-known examples of being led to irrational belief comes from psychologist Solomon Asch’s experiments on conformity in the early 1950s. Asch was a professor at Swarthmore who ran a series of experiments. The subject of the experiment would sit at a table with several other people who, he was told, were also participating in the experiment; but who, unbeknownst to him, were really confederates working with the experimenter.

Now, it was explained that this was a study of perception, and that they would be shown a series of charts with different length line segments, labelled A, B and C; and they just needed to say which was the same length as a fourth line next to them. Simple enough task.

The table was arranged so that the subject would answer last, after the others said which line they thought was the same size as the comparison line.

The first chart was shown, and all of the confederates answered correctly, and so did the test subject. Same for the next chart. But then, on the third trial the confederates gave what was obviously the wrong answer.

When it got to the test subject’s turn, 12 out of 18 times in the original study (and it has been reconfirmed many times since) the test subject gave the wrong answer so as not to stick out.

Not conforming with the group is something we fear, and we do things we know are wrong in order to get along with the majority. What Asch showed is interesting: that we will act in such a way as to do something contrary to our reasonable beliefs. But the people who gave the wrong answer to fit in knew that it was wrong – right? Well, that’s where it gets interesting.

Interviewed after the fact, most said, “Yeah, I knew it was wrong, but I didn’t want to embarrass myself or mess up the researcher’s data by being different.” But then there were others who, more interestingly, said that because the test was so easy, and because the others answered so quickly and confidently, they started to doubt themselves: “We can’t all be right, because we disagree. Since they all seem to agree with one another, I must be wrong.”

Asch’s study was about what people would do: whether they would be willing to act in a way they knew was wrong. But what came out was that, in some cases, it was not just about their actions, but about their thoughts: about the way in which humans will change what we believe in order to fit in with those in our environment.

We may not be rational animals, but we certainly are rationalising animals. We suffer greatly from what psychologists call “confirmation bias” – that is, when we hold a particular belief, we search out that which we believe supports the belief, and explain away or outright ignore that which undermines rational belief in the proposition. When faced with a small amount of supporting evidence and a huge amount of disconfirming evidence, we focus on that which backs us up and use it to swamp the overwhelming evidence against us. Especially when the belief is core to our world view, we will do incredible intellectual gymnastics to save it from falsification. We will do whatever we need to do to save our pre-existing beliefs.

Emile Durkheim, one of the founding fathers of sociology, discussed what he called “social facts”, which we acquire from being part of a society. A social fact is a way of thinking or acting that originates outside the individual, is enforced by the society, and becomes a part of the individual.

Lecture 2

By “rational” we mean “that which we have grounds to show is likely true”. A belief is rational if we have good reason to believe it is at least probably the case in reality.

What we are looking for are those grounds that give us good reason to believe that something is likely to be the case.

“Argument”, for us, is not a disagreement. For us, “argument” is a technical term: it is a linguistic thing. An argument is a set of sentences such that one sentence (what we call the conclusion) is claimed to follow from the other sentences (what we call the premises). Arguments have two parts: a conclusion and premises. The conclusion is the point of the argument; it is the thing being argued for: that of which we are trying to convince ourselves, or others. We give arguments in order to provide legitimate reasons to believe the conclusion. The premises are those reasons. The premises are the grounds that are being proposed to support rational belief in the conclusion.

Arguments come in two flavours: “inductive” and “deductive”. Now, an argument is deductive if its conclusion is no broader than its premises: that is, if the conclusion only refers to that which is mentioned in the premises. The technical term we use is to say that deductive inferences are non-ampliative: that is, the conclusion is contained within the content of the premises. So consider, “All men are mortal. Socrates is a man. Therefore Socrates is mortal.” Now, in the premises we have all men being mortal. But then we move, in the conclusion, to just one of the guys being mortal. Deduction moves from broad to narrow.
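
The non-ampliative character of deduction can be pictured as simple set containment. Here is a minimal sketch in Python (the sets and names are my own illustration, not from the lecture):

  # The syllogism as set membership: the conclusion uses nothing
  # beyond what the premises already contain.
  men = {"socrates", "plato"}
  mortals = men | {"fido"}       # premise 1: all men are mortal (men is a subset of mortals)
  assert men <= mortals          # premise 1 restated as a subset check
  assert "socrates" in men       # premise 2: Socrates is a man
  assert "socrates" in mortals   # conclusion: Socrates is mortal - from broad to narrow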

Inductive arguments are ampliative – that is, they do have a conclusion that is broader than the premises. Just because the sun has risen every morning, there is no guarantee that it will do so again. It probably will, but not definitely. Successful inductive arguments, because they are ampliative, only give us high probability, not absolute certainty. With an inductive argument, because the conclusion lies outside the scope of the premises, there is a risk that even a very good inductive argument might have a false conclusion. Now, this is not to say that we shouldn’t believe it. We should believe that which is probably true. It is wonderful to achieve absolute certainty – the sort of thing you get from deduction; but in most real-life cases, we are restricted to the high probability of induction.

To evaluate arguments we use two criteria: validity and well-groundedness. An argument is valid if and only if, assuming the truth of the premises for the sake of argument, the conclusion follows from them. The important thing to notice here is that we are assuming the premises are true for the sake of argument. Maybe they are true. Maybe they’re false. We don’t care. Validity does not concern the content of the premises. All we are looking at is whether the premises, if true, will lead you to the conclusion. Validity is not about the content of the argument, but about the form of the argument. Validity looks at the skeleton of the argument and determines whether it is strong enough to bear the weight of the conclusion.
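
Because validity is a matter of form alone, a propositional argument form can be checked mechanically: a form is valid just in case no assignment of truth values makes every premise true and the conclusion false. Here is a small Python sketch of that test (the helper valid and both example forms are my own illustration, not from the lecture):

  from itertools import product

  def valid(premises, conclusion, n_vars):
      # Valid iff no truth assignment makes every premise true
      # while making the conclusion false.
      for vals in product([True, False], repeat=n_vars):
          if all(p(*vals) for p in premises) and not conclusion(*vals):
              return False
      return True

  # Modus ponens: "If P then Q; P; therefore Q" - valid.
  print(valid([lambda p, q: (not p) or q, lambda p, q: p],
              lambda p, q: q, 2))   # True

  # Affirming the consequent: "If P then Q; Q; therefore P" - invalid.
  print(valid([lambda p, q: (not p) or q, lambda p, q: q],
              lambda p, q: p, 2))   # False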

An argument is well-grounded if and only if all of its premises are true. Well-grounded arguments have true premises. Maybe the conclusion is true. Maybe the conclusion is false. But what is important for us, in looking at the well-groundedness of an argument, is just the truth or falsity of the premises. If the premises are not true, we have no good reason to believe the truth of the conclusion.

Now, an argument that satisfies both of our criteria – that is both valid and well-grounded – is what we call “sound”. A sound argument gives us good reason to believe its conclusion. What we want are sound arguments. In order to determine which arguments are sound, we need to develop tests both for validity and for well-groundedness. Validity looks at the structural elements of the argument. Well-groundedness looks at the acceptability of the content of the argument, as opposed to its form.

Lecture 3

We want to focus on well-groundedness – that is, the question of the likely truth of the premises. The conclusion rests upon these premises, and we want to see when we have good reason to take seriously the premises and the arguments they make up. We want to know when it is rational to believe something. But we are going to approach this study backwards. We are going to look at the ways arguments can go wrong. It turns out there are standard rhetorical tricks and traps that we regularly fall into. There are common reasoning errors that create unsound arguments, but which are nonetheless attractive to our minds. They sound like good arguments, but in fact they are unsound arguments. We call these reasoning errors “fallacies”. A fallacy, sometimes called “a courtroom trick”, is an identifiable category of argument that does not support its conclusion. Consider, for example, one of the most well-known fallacies: the circular argument.

A circular argument is one in which the conclusion is identical to the premises. Consider the argument, “My name is Steve. Therefore my name is Steve”. Is it valid? If we assume the truth of the premises, are we also led to accept the truth of the conclusion? Well, yeah. It is a perfectly valid argument. But is it a good argument? Does the premise give us independent warrant for rational belief in the conclusion? No. The point of an argument is to provide independent support for the conclusion, because the conclusion is in doubt. But if the conclusion is in doubt, then so is the premise, because they are one and the same proposition. If we don’t have independent reason to think the premise is true, then the well-groundedness of the argument is in doubt, the argument cannot be said to be sound, and we don’t have good reason to believe the conclusion. It’s a bad argument logically. Now, unfortunately, it is an effective argument psychologically. If we say something clearly, slowly, loudly, or forcefully enough, people will believe it. Repeat it and it seems more likely to be the case. Circular argument is a fallacy, but it’s effective.

Does the fact that it is logically flawed mean we have cause to reject the conclusion of a circular argument? If we find a flaw, do we have reason to think its conclusion is likely false? No. Just because somebody makes a bad argument for a conclusion does not give us rational justification for thinking the conclusion is not the case. Remember what an argument does. It gives us good reason to believe in the likely truth of the conclusion. If an argument fails, it means that those specific premises do not give us reason to believe that conclusion. But does it mean that there is no other set of premises that does? Couldn’t there be a good argument for the conclusion, and this one just isn’t it? Of course: every true proposition can be made the conclusion of a terrible argument. We just have to keep in mind that a lack of a good argument is not a refutation.

Circular arguments are one instance of a larger class of fallacies we call “begging the question”. Now, this is a phrase that is often misused. One will frequently hear someone say, “Well, that begs the question that...”, when really what they mean is that it leads one to ask the question. But what begging the question really means is arguing unfairly, in a way that tries to use the conclusion in support of itself. Any time we try to get the conclusion to pull itself up by its logical bootstraps – instead of giving independent reasons for belief, support from propositions distinct from itself – we are looking at “begging the question”.

To understand the term, let’s think about what a question is. So, what’s a question? A question is a request for information. A pseudo-question, on the other hand, seems like a request for information, but really isn’t. “You’re not going to wear that, are you?” looks like a question; sounds like a question. It is not a question. It is, in fact, a combination of the declarative sentence, “You are not wearing that tonight”, and the imperative, “Go change clothes, now”. It’s a pseudo-question. It is a conveyance of information about my intention to wear that outfit, not a request for information about it. It is what we call a leading question: that is, not a question fairly asked to elicit an honest response from the listener, but rather a sentence that looks like a question yet is designed to lead the listener to a particular desired response.

So, one way to beg the question is to use questions that are not questions. Another way is to use the connotative power of language to ask those questions unfairly. Now, words have denotation – they pick out certain things – and they have connotation – that is, they lend an emotional weight to those things. “Budget movers” implies that they are cheaper than other companies, but we have no supporting evidence of this. “Speedy motors”, “Premier windows”, “Apex tree removals”, “Acme markets”, “Elite auto-glass”, “Best build homes”. Are they the cream of the crop? We don’t know. But we are led to think so without any evidence. Why? Because of the name. We do this in politics too. Think of the labelling of sides in the abortion debate: “pro-life” and “pro-choice”. Who doesn’t like life? Who doesn’t want choice? Why are these labels used? Because they beg the question – that is, they lead the listener toward one side of the debate or the other, not with rational argumentation, but just with the connotative power of the language used. The words are chosen because of their connotative power to sway the listener.

We have seen our first two linguistic fallacies: circular argument, where the conclusion is included as part of the premises; and begging the question, where the connotative power of language is used to lead the listener to a conclusion in lieu of evidence. Our next fallacy is called “equivocation”. The basis of this fallacy is ambiguity. Words can mean more than one thing. This is not a logical problem in itself – just a feature of language. The fallacy of equivocation occurs when we change the meaning of a word in the middle of an argument. Consider the following argument.

  1. Tables are furniture.

  2. My statistics book has tables in it.

  3. Therefore, there is furniture in my statistics book.

What’s the flaw? Well, I have changed the meaning of the word “tables” here – from “a flat raised surface on which to place things” to “a rectangular array of numbers”. I changed the meaning of the operative term within the argument, and the result is the absurdity of thinking there is furniture inside of a book.

Now, life would be wonderful if all examples of equivocation were this clear cut. Unfortunately, ambiguous terms may have meanings that are related; so the resultant shift from one to another is more subtle and easier to overlook. Take the following example.

  1. She asked me to her cousin’s wedding as a casual friend.

  2. Ties are formal-wear, not casual.

  3. Therefore I will not be wearing a tie to the wedding.

Now, both senses of the word “casual” here refer to relaxed standards; but one sense means a lack of long-term commitment in an inter-personal relationship, whereas the other refers to informal dress. Another example:

  1. We have a right to vote.

  2. One should always do what is right.

  3. Therefore, one should always vote.

In this case, the word “right” is being equivocated upon. In the first premise, “right” means “a legally protected action”; in the second, it means “an act that is morally necessary”. But these are two different meanings. They are, however, ones that can be confused if one is not careful.

Another fallacy to consider is “distinction without a difference”.

A distinction is a linguistic separation of two concepts that are different. This fallacy occurs when we try to draw a distinction between two things that are not in fact distinct.

“I am not saying you don’t look good in that outfit. I am just saying it doesn’t bring out your natural beauty in the way the other outfits do.”

“Well, I didn’t steal it. I just didn’t ask before I borrowed it.”

“We never dated. We just went to dinner and dancing a couple of times.”

We will often try to justify our misdeeds or problematic views by trying to distance them from the categories to which we know they really belong.

“I am not sloppy. I just don’t clear up as often as I should.”

“It is not that I am uncaring. I just don’t think it is up to me to help out the less fortunate.”

Notice the difference between “distinction without a difference” and “circular argument”. Distinction without a difference has the form, “It is not A. It’s A.”; whereas in a circular argument we are saying, “A because A.”. Okay, so here is our first set of fallacies we can point out.

  • Circular argument: trying to use a conclusion as a premise.

  • Begging the question: using the connotative power of language instead of evidence to convince listeners.

  • Equivocation: changing the meaning of operative terms in the middle of an argument.

  • Distinction without a difference: linguistically separating two meanings that are not distinct.

Lecture 4

When we want to know something we don’t know, it is perfectly rational to ask someone who does. We call this “an appeal to authority”; and arguing from authority is a legitimate means of reasoning. We do it all the time – and we should. If you are sick, do you see the doctor, or call your uncle Murray the dry-cleaner? Hopefully it is the physician you seek out, because he or she is the expert you need. Because your doctor is credentialed, you know he or she has successfully completed a course of study at an accredited medical school, and has been gaining experience in the years since by treating other patients – a number of whom have probably had what you have; and your doctor has seen the success of courses of treatment with them. Your doctor is an authority on your health, and we have confidence that further referrals may be made to another doctor who is even more specialised in the field if need be.

In order to have a good argument from authority, the cited authority needs to be real. But we hear arguments from authority which violate this single criterion all the time. Think about the phrase, “I read somewhere that...”; the idea being that if I read it, it must have been published somewhere, and surely you can trust information that has been published. Now, if this person had read it in some renowned, refereed journal that goes through a strict reviewing process, then having read it would be a successful appeal to authority. But for all we know, the person making such a claim may be making it up. So we need confirmation about the actual existence of an authority; and this authority has to be an expert. What makes someone an expert? Is their expertise general or specialist? Does this person have the requisite background to be an authority? This expert must also be objective or disinterested: they must not have a vested interest in my believing them one way or another. So a legitimate authority must:

  1. Have material existence.

  2. Be an expert in the field.

  3. Be impartial, and not profit from my belief or ability to sway it.

A related fallacy is “an appeal to common opinion”. Simply because a lot of people believe something does not make it true. It was widely believed that the Earth was flat, and that slavery was morally acceptable. How many times does a best-selling book, or a top-grossing film, cause disappointment? We learn to think of ourselves as independent-minded, as the captains of our own intellectual ships – but we are deeply influenced by the views of those around us. Humans are – as Aristotle said – “political animals”. That is, we live in communities, and we are influenced by the actions and beliefs of those around us. We don’t like to stick out, and when we believe something that the majority of others around us do not, it often leads to doubt and insecurity. Surrounding ourselves with others of like mind is comforting, leading us to be more sure of our own views than is rationally warranted. In a deep way, we are programmed to commit the fallacy of “an appeal to common opinion”. To be rational, we need to learn to keep this proclivity in check. Now, this does not mean that it is rational to always reject common opinion. Sometimes an appeal to common opinion is a version of an acceptable argument from authority. In some cases it is not wrong to believe that what everyone else believes is also true. But we need independent reason why everybody else thinks so.

A problematic justification often cited is what we call, “an appeal to tradition”. Some traditions should be kept; others shouldn’t. It is certainly true that an apprentice can learn a lot from somebody with more experience. But to prove that a tradition is good requires evidence. Intellectual inertia is not justification. Just because it has always been believed, or always done that way, does not make that belief or that method justified. We need independent evidence. Merely appealing to tradition is not sufficient.

The converse of an appeal to tradition is an appeal to novelty. Saying “new and improved” does not mean that, just because something is new, it is improved. Another way in which we find the fallacy of novelty used in marketing is in words like “modern”, “latest” or “cutting edge”, which not only tell us that this is new, but also imply, by contrast, that competing products are less effective than their newer counterparts. Now, this may or may not be true, and it requires evidence, not just assertions about the product’s status as the most recent addition. This is the fallacy of novelty.

We often use analogies in science and everyday life. Predicting the weather uses computer models – which are a form of analogy. Testing and research of diseases and medicines often uses animals as test subjects before drugs are used in experimentation with humans. But a flaw may be made by exaggerating the effect: linking a system of significantly lesser degree to one of significantly higher degree. An analogy can fail, despite the similarity, by exaggerating the difference in degree. If the analogy is apt, the argument can be a good one. But we need to make sure that the analogy connects systems that have the requisite structural similarities, and that the degree of such similarity supports the conclusion drawn. If not, we are looking at a faulty analogy.

To recap, we have seen the following reasoning errors:

  1. The fallacy of faulty authority.

  2. The fallacy of an appeal to common opinion.

  3. The fallacy of an appeal to tradition.

  4. The fallacy of novelty.

  5. The fallacy of faulty analogy.

Lecture 5

Most people have learnt the old logical dictum that correlation does not entail causation. That is, simply because you can find two things occurring together does not mean that we can assert with any certainty that one caused the other. Now, that may be the case. But simply identifying a correlation does not in any way establish that there is a cause and effect relation. We need to understand the mechanism by which event A brings about event B if we want to be justified in asserting that A causes B.

It is certainly true that if we wish to establish that A causes B, then A must precede B in time. Causes precede effects. But just because we saw event A before event B does not itself give us good reason to infer that A caused B. Most logicians refer to this as the, “post hoc” fallacy. This is an abbreviation of, “post hoc, ergo propter hoc” which means, “after this, therefore because of it”. Just because it came after does not mean it occurred because of.

When something odd occurs, we try to determine why, and see if there is something different in the antecedent context. If there is something else unusual beforehand, we may jump to the conclusion that whatever is different before must be the cause of what is different after. But that prior difference may be completely unrelated to the posterior difference. If this sequence of events is repeated, seeing the repetition will only strengthen our belief in the causation. But this correlation does not necessarily mean causation. To infer causation between A and B we need the mechanism; and by taking the correlation to imply causation we may have committed the fallacy we call “neglect of a common cause”. There may be a third thing, C, that causes both A and B.

In the case of the post hoc fallacy, we wrongly assert a cause and effect relationship based on nothing but time order; but with “neglect of a common cause” we assert a cause and effect relationship because we see them together, overlooking a third element which actually causes them both. In both these cases we wrongly assert that A caused B, or B caused A.
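
A toy simulation can make the common-cause pattern vivid. In this Python sketch (entirely my own illustration, not from the lecture), a hidden factor C drives both A and B, so A and B show up together far more often than chance would predict, even though neither causes the other:

  import random

  random.seed(0)
  trials = 100_000
  both = a_count = b_count = 0
  for _ in range(trials):
      c = random.random() < 0.5           # the hidden common cause C
      a = c and random.random() < 0.9     # A happens mostly when C does
      b = c and random.random() < 0.9     # B likewise; A never influences B
      a_count += a
      b_count += b
      both += a and b

  p_a, p_b, p_ab = a_count / trials, b_count / trials, both / trials
  # p_ab is far above p_a * p_b: strong correlation, yet no A-to-B causation.
  print(p_ab, p_a * p_b)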

But there are two other cause and effect reasoning errors that can occur even if there actually is a causal relation between A and B; and these both result from the fact that causes in the real world are often multi-faceted. Real-world phenomena are complex, and often brought about by a convergence of a multiplicity of factors. By misconstruing or ignoring this complexity, we can commit one of two cause and effect reasoning errors. The first is what we call “causal over-simplification”. To pick one of these factors, and then to elevate it, and it alone, to be the cause, is the error of “causal over-simplification”. We see this fallacy committed often when we are dealing with complex social issues.

A related error is to confuse a necessary with a sufficient condition. A condition A is necessary for B if you cannot have B without first having had A. In other words, A is necessary for B if A is required for even the possibility of B. A doesn’t bring about B by itself, but if there is no A there is no B. Oxygen, for example, is necessary for fire. If there is no oxygen there can be no fire. This doesn’t mean that everywhere there is oxygen there will be fire. But take away the oxygen and you remove the chance for fire.

Now, a condition A is sufficient for B if A by itself is enough to bring about B. Winning a high-stakes lottery, for example, is sufficient for becoming a millionaire. If you have the right numbers, they hand you the over-sized cheque for a huge amount of money. But while it is sufficient – that is, by itself enough to make you a millionaire – it is not necessary. There are other ways of becoming a millionaire. You could start a tech company that gets bought by Google. You could be born into such wealth, or marry into it. So winning the lottery is sufficient, but not necessary.

So, A is necessary for B if you can’t have B without first having had A; and A is sufficient for B if having A is by itself enough to guarantee B. The fallacy is when you confuse one type of condition with the other. Water may be necessary to grow a plant, but it is not sufficient if there is not enough light or there are not enough nutrients. While thinking that a necessary condition is sufficient is the most common version of this fallacy, the converse can be found occasionally as well. Sometimes we take a sufficient condition and wrongly assert it to be necessary. This will often be the result of focussing on a particular way of doing something that has become habitual for us, and we allow ourselves to become blinded to other ways of accomplishing the same task. Question: why is that dirty dish sitting in the sink? Answer: because the dish-washer is running. But the dishwasher is not necessary to clean this dish. It could be washed by hand instead.

So, “causal over-simplification” takes a complex interplay of causes, and ignores all but one, falsely elevating it to the status of the cause; and, “confusion of a necessary and sufficient condition” is the error where we take a sufficient condition wrongly to be necessary, or a necessary condition wrongly to be sufficient.
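
The two directions can be captured as implications. In this minimal Python sketch (my own encoding, not the lecture’s), “A is necessary for B” is read as “B implies A”, and “A is sufficient for B” as “A implies B”; the fallacy is reading the implication in the wrong direction:

  def implies(x, y):
      # Material implication: "x implies y" fails only when x is true and y is false.
      return (not x) or y

  # Situations consistent with the oxygen/fire example: (oxygen, fire).
  cases = [
      (True, True),    # oxygen and fire
      (True, False),   # oxygen but no fire - perfectly possible
      (False, False),  # no oxygen, no fire
  ]

  # Oxygen is necessary for fire: every fire case has oxygen.
  print(all(implies(fire, oxygen) for oxygen, fire in cases))   # True
  # But oxygen is not sufficient for fire: oxygen alone guarantees nothing.
  print(all(implies(oxygen, fire) for oxygen, fire in cases))   # False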

Our last causal fallacy is perhaps the most famous: the “slippery slope fallacy”, or the “domino fallacy”. Now, there is no doubt that there are causal chains: that is, an event A causes B, which in turn becomes the cause of C, which in turn causes D. While such chains of cause and effect relations exist, the fallacy occurs where one asserts the existence of such a chain without giving full causal arguments for each and every step in the chain. As we have seen, arguing for cause and effect relations can be tricky. We need to show the underlying mechanism at work; and in a complex world of intervening causes, A may well bring about B, but the other steps may fail. When warning people of an act we think is imprudent, we will often neglect to do all of the logical heavy-lifting and simply assert a causal chain, and leave it at that. Make the case for each step. If you cannot, the argument fails.

So we have five new fallacies:

  1. Post hoc, ergo propter hoc.

  2. Neglect of a common cause.

  3. Causal over-simplification.

  4. Confusion of necessary and sufficient conditions.

  5. Slippery slope fallacy.

Lecture 6

A “fallacy of irrelevance” is a reasoning error that serves to distract our attention from what we are supposed to be arguing about. One of the most common diversionary fallacies is where, instead of attacking the argument, we focus on attacking the arguer. This fallacy is known by its Latin name, “ad hominem” – which translates as, “to the man”. The idea is that we are focussed on the person instead of the case the person is making. Arguments are acceptable if they are sound – that is, if their form is acceptable and their premises are true. An argument may be sound regardless of whose mouth it comes out of – whether this is Abraham Lincoln, or Adolf Hitler. Arguments stand on their own merits; an argument is valid because of its logical structure, and sound because, in addition, its premises are true. The identity, background, or motivation of the speaker has nothing to do with the satisfaction of either criterion. To try to claim otherwise – to argue that we should not accept, or even consider, the argument because of its source – is to commit an ad hominem fallacy.

Now, ad hominem attacks tend to come in three general categories. First is the “You’re a jerk” version. Now, there are miserable, hateful, deceitful, thoughtless, selfish people in this world who do nothing to make this world a better place, and who often pursue their petty desires at the cost of the well-being of others. But if such a person makes an argument, we need to analyse the validity of the argument, and assess the likelihood of the premises’ truth. To try to cut off debate with this horrible immoral being, on the grounds that this person is a horrible immoral being, is illegitimate, because even horrible immoral beings can give us good reason to believe things.

The second version of an ad hominem attack is a sort of “guilt by association”, where we discount an argument, not for objective reasons, but because the person offering it belongs to some identifiable group. “Don’t listen to her – she’s a feminist”; “You can’t take his arguments seriously – he’s a Christian, a Jew, a Muslim, a Hindu, an atheist, a Texan, a Liberal, a Conservative, a Socialist, a Communist; or worst of all, a philosopher.” Just because a philosopher says it does not give us the logical right to automatically discount it. A common variation of this kind of ad hominem attack is to point out that the speaker is not among those who follow the advice the speaker is giving. This is known by its Latin name “tu quoque” – the “but you do it too” objection; but it is an illegitimate ad hominem attack. Now, it may be true that the person telling you not to drink is an alcoholic, or the person telling you to give up cigarettes is a four-pack-a-day smoker, or the person telling you to keep on the straight and narrow is himself in prison, or the person telling you to stay in school is a drop-out. It doesn’t mean it is not good advice.

The third class of ad hominem attacks is where we focus on the motivations of the speaker. “Well, of course you’d say that: you stand to profit if it is true.”. Again, maybe that is correct. Maybe that isn’t. But the argument stands or falls on its own merits, regardless of who, where, or when, the argument is made.

Some may be thinking back to a discussion from a few lectures ago, where I seemed to say the opposite when setting out the fallacy of faulty authority. There we said that someone had to be impartial to be an authority – that is, have no stake in getting us to believe one way or another. Yet here we are saying it doesn’t matter whether one has a stake or not. Let’s be clear why both are correct. Now, in the case of questionable authority, we are dealing specifically and solely with arguments from authority – that is, arguments of the form, “You should believe X, because an expert in the field believes X.” What is under consideration in such a case is a particular statement of fact, and the entire argument hinges upon the authority’s credibility. But in the case of ad hominem, on the other hand, what is being discounted is not a single claim of purported fact, but rather the entire argument. That argument may be flawed, but it is up to us to find the flaw. We cannot simply argue that a problem with the argument exists because the speaker would benefit from there not being one. So, for an individual statement of fact we are being asked to believe because of someone’s authority, the impartiality of the speaker matters. For entire arguments we are given to consider, the impartiality of the speaker does not matter. If an inmate on death row produces an argument against the death penalty, or someone from the National Rifle Association produces an argument against gun-control legislation, or somebody from an industry advocacy group gives an argument against regulations, that argument needs to be taken seriously and evaluated. It cannot be rejected out of hand because of the interests of the speaker.

Another diversionary tactic that we must be on guard against is what we call “attacking a straw man”. Now, the name comes from the fact that it is easier to beat the stuffing out of a scarecrow than it is to take on an actual human being. It is a metaphor for arguments that do not address the actual argument made, but rather a weaker, easier to refute, version. Logicians have what we call the “principle of charity”, according to which, when one analyses an argument, one must assess the strongest possible version of that argument. To defeat a weaker version does nothing in terms of demonstrating the given argument to be unsound. It can only be rejected as not providing legitimate grounds for belief in its conclusion if the strongest version – the best understanding – is seen to be flawed. To take on a weaker version, and then assert that we have accomplished anything: that is to attack a straw man.

Now, there are two main varieties of attacking a straw man. One version is to alter the scope of the premises offered, making them broader or narrower than the ones that were offered. The hint that this is what you are hearing is often the phrase, “Oh, so what you’re saying is...”. Suppose, for example, the interlocutor broadens the claim made in one of the premises, putting words in the original arguer’s mouth that the original arguer did not say. In doing this, the interlocutor has weakened the argument. Attacking it only undermines the new, weaker version of the argument – not the original version, and certainly not the strongest version we are required to consider because of the principle of charity.

So, the first kind of straw man is where the scope of a premise is made broader or narrower, to weaken the argument, while keeping the rest of the premises intact.

The other type of straw man argument is more radical. It is where the interlocutor replaces all of the premises wholesale. When you hear the phrase, “The real reason...”, you are likely looking at a straw man argument. Why would someone say, “The real reason...”? Because what they are doing is replacing the real reasons – that is, the premises – with new premises; and odds are, these new premises are going to be a whole lot easier to undermine. But in undermining them, the interlocutor has done nothing with regard to the soundness of the original argument, since the original argument is gone. We find this sort of reasoning error when the interlocutor does not contend that the conclusion fails to follow from the premises, or that the premises are false. Instead, the interlocutor substitutes a new set of premises altogether, creating a completely new argument to critique. To the surprise of no-one, the new argument is much easier to attack. Again, contrary to the principle of charity, we are analysing a weaker version of an argument, not the strongest possible one. That is attacking a straw man.

The final fallacy in this discussion is what we call a “red herring”. Where attacking a straw man is the error wherein we replace the premises of an offered argument, a “red herring” is where we replace the conclusion. With a straw man we are still talking about the same thing, but talking about it differently. But where we change the conclusion, we are completely changing the topic of conversation. That is a “red herring” – the ultimate in argumentative diversion.

The name comes from fox hunting. To throw the dogs off the trail, the idea is to take something so strong-smelling that it hides the fox’s scent. So a herring would be baked until red, and then dragged in front of the dogs in order to pull their attention away from the fox, to something on the side. That’s the metaphor: something that distracts us to a side issue, away from that which we were originally pursuing.

To recap:

  1. Ad hominem is where we try to undermine belief in the soundness of an argument by ignoring the argument and instead attacking the arguer: either by pointing out undesirable aspects of the arguer, by associating the arguer with undesirable groups, or by pointing out that the arguer doesn’t follow his own advice. But all of these are diversionary fallacies that fail to address the argument at all. Arguments stand or fall on their own merits, regardless of whose mouth they come out of.

  2. We looked at attacking a straw man, which is a violation of the logician’s principle of charity – by which, when we evaluate an argument, we must assess the soundness of the strongest possible version. Attacking a straw man is when we construct a weaker version of an argument, and attack that. There’s the “Oh, so you’re saying...” version, where we focus on one premise and alter its scope to make it easier to attack; and then there’s “The real reason...” version, where we replace the premises wholesale, proposing a completely different argument for the same conclusion, and attack the one we gave instead of the one the arguer gave.

  3. Finally, we looked at “red herrings”, where we take an argument and respond to it by moving the conversation to a completely new topic.

Lecture 7

Recall that an argument is valid if and only if, assuming the truth of the premises for the sake of argument, the probable truth of the conclusion follows. Validity is a function of the form of the argument – the structure of the argument, the logical integrity of the underlying skeleton of the argument. When we look at questions of validity, we are asking what propositions follow from what propositions. Recall also that arguments come in two different kinds: deductive and inductive. A deductive argument is one which is non-ampliative – that is, the scope of the conclusion is no broader than the scope of the premises. In other words, deductive arguments go from broad to narrow: the conclusion does not talk about anything that wasn’t already covered in the premises. Because there is no new information in the conclusion, one nice thing about deductive arguments is that if they are sound – that is, valid and well-grounded – then their conclusions must be true.

Inductive arguments, on the other hand, are ampliative – that is, their conclusions do move beyond the premises to give us rational belief about something we have not yet observed. Induction is ampliative in that it amplifies our rational beliefs: it takes us from narrow to broad. Inductive arguments are wonderful because they give us new knowledge about the world. They take what we already know, and give us logical permission to believe new things that we did not know before. Deduction only arranges our previous knowledge into new forms we may or may not have considered; but induction actually generates completely novel beliefs about the world.

Question: what is the cost of this growing stockpile of beliefs? Answer: certainty. With induction, because we are making a logical leap beyond the content of the premises, there is no guarantee that the results must be true. The best we get from induction is likely truth. If you have a good inductive argument, the conclusion is probably true. Now, probably true is sufficient for rational belief. We should believe that which is probably true. A lack of guarantee is not evidence against its being true. It is evidence – good inductive evidence – and while we should not be certain, we should believe it.

Deductive certainty in all our beliefs would be wonderful. But it is not available to us. We need inductive inferences because, in most life situations, they are all we have, and they are good enough. We make inductive inferences all the time; and we should. High probability is all we have for rational belief.

While there are many different kinds of inductive arguments, there are three forms that are the most important, because they are the most frequently used. An inductive analogy is of the form: because all of the observed P’s have the property A, so will the next P that comes up. It is when we apply what we have learnt from all observed instances to one in the present or the future.

Now, a related but stronger inductive inference is the “universal generalisation”. It has the form: because all of the observed P’s have the property A, all P’s have the property A.

While both inductive analogy and universal generalisation have the same premises, notice the difference in the conclusions. One is an analogy, in that it says a future instance will be like all the past instances; whereas the other makes a much broader claim, saying something about all members of the population. Universal generalisations are not just predictions about a single upcoming instance.

Our third form of inductive inference is what we call statistical generalisation. Its form is: X percent of all observed P’s have had the property A; therefore X percent of all P’s have the property A. Now, as with universal generalisation, we are generalising over the entire set of P’s from some limited sample of P’s. But here we are not attributing the property to all of them, but to some percentage – either an explicit percentage, or a vaguer proportion of the population.
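
To keep the three forms straight, here is a toy Python sketch (the swan sample and all of the names are my own illustration, not from the lecture), drawing all three kinds of conclusion from one set of observations:

  # One observed sample; three different inductive conclusions drawn from it.
  observed = ["white", "white", "white", "black"]   # the P's we have seen so far

  share_white = observed.count("white") / len(observed)

  # Inductive analogy: the next P will be like the observed P's.
  next_prediction = "white" if share_white > 0.5 else "black"

  # Universal generalisation: ALL P's have property A - only plausible
  # if every observed P had it (false for this sample).
  universal_claim = all(s == "white" for s in observed)

  # Statistical generalisation: X percent of observed P's have A,
  # therefore roughly X percent of all P's have A.
  print(next_prediction, universal_claim, f"{share_white:.0%}")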

For deductive arguments, as long as the argument was valid to begin with, adding new premises, no matter what they are, will always keep the argument valid. This is true for all deductive arguments. But it is not true for inductive arguments. For inductive arguments, adding a new premise can turn a completely valid argument into an invalid one. For inductive inferences, one new piece of information can completely destroy an argument in a way that can’t happen for deductive arguments. So for an inductive inference to be valid, we need a guarantee that the evidence given in the premises is all of the relevant observations we have. We need “complete information”. To fail to provide it is what we call cherry-picking. This is a common error: when someone wants to support a point, they’ll present lots and lots of evidence – so much so that we cannot help but believe the conclusion it leads to. But if the person was careful to select only evidence which supports his or her own position, and exclude the counter-evidence, we will be led to believe something that is not well-supported.

The key to avoiding this error is in the selection procedure for the sample we use to collect our information. In order to have good reason to believe our conclusions, we need to base our conclusions on a sample of sufficient size. But the size of the sample isn’t the only thing we need to be concerned about. Bad samples can be quite large. To make good inductions, samples have to be representative of the population over which we are making the inference. The sample needs to look like the population in miniature, because there may be aspects of different subgroups within the population that affect the distribution of the property we are examining. If you have a poorly distributed sample, you commit the fallacy of unrepresentative data. Even if the sample is large enough, it may only account for a smaller homogeneous portion of a heterogeneous population. Good samples model the heterogeneity of the population.

This assumes, of course, that you know beforehand which subgroups are relevant, and their proportion of the general population. But sometimes you don’t. In those cases the key is a random sample. The idea is that if we pull enough individuals out of a population, without a bias toward or away from any subgroup, then the relevant subgroups will show up in the sample in roughly the same proportion they occupy in the population as a whole. The keys to the random sample, then, are making sure your sample is large enough that small subgroups will appear, and making sure that your selection procedure does not accidentally bias your selection toward or away from such subgroups.
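
A small Python simulation can illustrate that last point (the population figures and the clustering story are my own illustration): a large but badly selected sample misleads, while a random sample of the same size lands near the truth.

  import random

  random.seed(1)
  # 90% of the population has the property, but suppose the cases lacking it
  # all cluster in one place (one neighbourhood, say).
  population = [True] * 9000 + [False] * 1000

  biased_sample = population[-500:]                # drawn only from the cluster
  random_sample = random.sample(population, 500)   # drawn without bias

  print(sum(biased_sample) / len(biased_sample))   # 0.0 - large, but unrepresentative
  print(sum(random_sample) / len(random_sample))   # ~0.9 - close to the truth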

So, we have seen three fallacies we need to avoid in making inductive inferences:

  1. First is the fallacy of selective evidence. The requirement of complete information mandates that we cannot accept cherry-picked data. We need to know that the evidence given is all the evidence gathered, and not just hand-selected examples that will make a weak case sound stronger than it is.

  2. Second is the fallacy of insufficient sample. In order to make a good inductive argument, you need to start with enough data. One or two instances – what we call anecdotal evidence – are not enough if we want to make any sort of larger assertion.

  3. Third is unrepresentative data. Samples must not only be large enough. They must also resemble the population over which we are generalising.

Our last fallacy is one in which we try to make an inductive inference from data which bear no cause and effect relationship to each other – where there is no probabilistic relation between the members of the sample and the property being observed. We call this – attributing probabilistic relevance to probabilistically irrelevant events – “the gambler’s fallacy”.
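
The classic case is the feeling that a run of heads makes tails “due”. A quick Python check (my own illustration) shows that a fair coin has no memory:

  import random

  random.seed(2)
  flips = [random.random() < 0.5 for _ in range(1_000_000)]   # True = heads

  # Look only at flips that immediately follow five heads in a row.
  after_streak = [flips[i] for i in range(5, len(flips)) if all(flips[i - 5:i])]
  print(sum(after_streak) / len(after_streak))   # ~0.5 - the streak is irrelevant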

Inductive inferences do not give us the certainty of deduction; but in the messiness of the real world, they are the inferences we most often make. We know that we learn from experience, but what logical form does that learning take? The answer is: inductive inferences. Because their conclusions outrun the content of their premises, inductions are incapable of giving us complete confidence in their conclusions; but they give us high probability, and for reasonable belief, that’s good enough.