Probabilistic arguments

In this section we will cover:

  • Argument patterns
  • Kinds of soft generalisation
  • Extended probabilistic arguments
  • A note on the use of “Probably”

But first, some general comments on probabilistic arguments.

Probabilistic arguments are arguments in which the likelihood of the conclusion, given the premises, can be clearly established. They are the type of non-deductive argument that works most like a deductive argument.

Consider this argument:

P1. All sheep in New Zealand live on farms.
P2. Alice is a sheep in New Zealand.
                                       
C. Alice lives on a farm.

Suppose for a moment that Alice is a New Zealand sheep (that is, suppose that P2 is true). The argument is valid. However, it cannot be sound. The first premise is a “hard” generalisation – it does not allow of any exceptions. As a hard generalisation about all sheep in New Zealand, P1 is false. There are undoubtedly some rogue sheep. There will be a few who have escaped into the bush, and there are probably a few sheep kept as pets who don’t live on farms. So although it is valid, this argument is unsound.

We could change P1 to a “soft” generalisation that has a better chance of being true. A soft generalisation makes a general claim about a group, but allows that there are some exceptions. So if P1 used the statement

Nearly all sheep in New Zealand live on farms

then it would be true.

But the argument would no longer be valid:

P1. Nearly all sheep in New Zealand live on farms.
P2. Alice is a sheep in New Zealand.
                                       
C. Alice lives on a farm.

In this argument the premises do not guarantee the conclusion. It is possible for the premises to be true but the conclusion false, because it is possible that Alice is one of the few rogue bush-sheep, or a pet.

This sort of argument isn’t valid, but it can be very useful. Although the premises fall short of guaranteeing the conclusion, they do provide strong support for it: the truth of the premises is sufficient to show that the conclusion is probably true.

This sort of argument isn’t a failed deductive argument: it isn’t intended to show that its conclusion follows with certainty. We can mark this in the argument frame by including the word “Probably” before the conclusion, like so:

P1. Nearly all sheep in New Zealand live on farms.
P2. Alice is a sheep in New Zealand.
                                       
[Probably] C. Alice lives on a farm.

Arguments of this sort will be stronger or weaker, depending on how probable the premises make the conclusion. Some such arguments have premises that make their conclusions very probable, and so are very strong.

P1. There are 99 black marbles in this bag and one white marble.
P2. In my fist is a marble randomly selected from the bag.
                                                    
[Probably] C. The marble in my fist is black.

Here we can see that it is 99% probable that the marble in my fist is black. That makes this a very strong argument.

It’s important to note that the statement “The marble in my fist is black” is still either true or false. It cannot be 99% true. It is either 100% true or 100% false. The 99% applies to the probability that it is true, and not to truth itself.

We can change the probabilities in such arguments by changing the proportions of marbles:

P1. There are 75 black marbles in this bag and 25 white marbles.
P2. In my fist is a marble randomly selected from the bag.
                                                    
[Probably] C. The marble in my fist is black.

This conclusion is still probable: given the premises, there is a 75% chance that it is true. This non-deductive argument is weaker than the previous one, but it is still strong enough to be a useful argument.
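
If it helps to see where these figures come from, the short Python sketch below works out the chance of drawing a black marble from each of the two bags described above, and checks the answer with a quick simulation. The marble counts come straight from the premises; the function names and the number of simulated draws are just illustrative choices.

    import random

    def chance_of_black(black, white):
        # The probability of drawing a black marble is the proportion
        # of marbles in the bag that are black.
        return black / (black + white)

    def estimate_by_simulation(black, white, draws=100_000):
        # Check the calculation by drawing a marble at random many times.
        bag = ["black"] * black + ["white"] * white
        hits = sum(random.choice(bag) == "black" for _ in range(draws))
        return hits / draws

    # The two bags above: 99 black with 1 white, then 75 black with 25 white.
    for black, white in [(99, 1), (75, 25)]:
        print(black, "black,", white, "white:",
              round(chance_of_black(black, white), 2), "calculated;",
              round(estimate_by_simulation(black, white), 2), "simulated")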

With the marbles example it is easy to accurately measure the degree of probability of the conclusion. Most ordinary probabilistic arguments lack this degree of precision.

P1. Most university students do not have children.
P2. Betty is a university student.
                                               
[Probably] C. Betty does not have children.

Here the conclusion is probable, but we can’t assign a precise degree of probability to the conclusion.

Argument patterns

The same sorts of argument patterns can occur in probabilistic non-deductive arguments as occur in deductive arguments.

The argument “No mammals lay eggs. Perry is a mammal. So Perry does not lay eggs” is a valid argument. It follows the general pattern of Modus ponens. (If you can’t see why, try converting the generalisation expressed in the first premise into a conditional. Now the first premise reads “If something is a mammal then it doesn’t lay eggs.”) But the first premise of this argument is false. There are a few species of mammal that lay eggs: the best known is the platypus. So, we can soften the generalisation in the first premise:

P1. Hardly any mammals lay eggs.
P2. Perry is a mammal.
                                    
[Probably] C. Perry doesn’t lay eggs.

This follows the Modus ponens pattern, except that it uses a soft generalisation instead of a hard generalisation in the first premise. It is a non-deductively strong argument.

It’s important to remember that a fallacious argument pattern cannot be improved by weakening the generalisation being made. Here’s an example to make the point clearer.

Consider this argument:

P1. All geese are birds.
P2. Borka is a bird.
                              
C. Borka is a goose.

The basic pattern of this argument is the fallacy of affirming the consequent (using a generalisation instead of a conditional). There are plenty of birds which are not geese, and Borka could be one of those.

This argument cannot be improved by weakening the generalisation in P1. That would give us an argument like this:

P1. Most geese are birds.
P2. Borka is a bird.
                              
[Probably] C. Borka is a goose.

This is a weak argument. Borka is not likely to be a goose in virtue of being a bird. Once again, Borka could be some other type of bird. So this argument commits a non-deductive version of the fallacy of affirming the consequent. That basic problem with the structure of the argument cannot be avoided by one of the premises being a soft generalisation instead of a hard one.

This might seem obvious, but you do see people give versions of this type of argument. Consider this one, which appeared in the media not that long ago.

P1. Nearly all terrorists are Muslim.
P2. The person sitting next to me on the plane is Muslim.
                                                               
[Probably] C. The person sitting next to me on the plane is a terrorist.

When such arguments are criticised, the people giving them sometimes respond by saying “I wasn’t saying all terrorists are Muslim, only most of them”, or by saying “I’m not saying I’m certain that they’re a terrorist, just that it’s likely”. But these things are not the problem with the argument. Even if it were true that nearly all terrorists are Muslim (which it isn’t), the conclusion would not be likely to be true. That’s because this argument commits the fallacy of affirming the consequent: the generalisation tells us about the proportion of terrorists who are Muslim, not about the proportion of Muslims who are terrorists, and it is the latter that would be needed to support the conclusion. The premises cannot make it likely that the person sitting next to me is a terrorist. The argument patterns we learnt in the previous chapter remain relevant for probabilistic arguments using soft generalisations.

Kinds of soft generalisation

Any statement which makes a claim about a group or category of things can be said to be a generalisation. Hard generalisations use words such as “All”, “None”, “Always” and “Never”. Soft generalisations use words such as “Almost all”, “Almost none”, “Many”, “Most” and “Some”. Some soft generalisations are useful in probabilistic arguments and some are not.

Remember that the aim is to show that the conclusion is probable: that is, to have any strength at all, the argument has to show that the conclusion is more likely to be true than not.

Consider this argument:

P1. The majority of men beat their wives.
P2. Aristotle is a man.
                                     
[Probably] C. Aristotle beats his wife.

This argument has some strength, although not much. If the premises were true, then the conclusion would be more likely to be true than not. However, P1 is clearly false. So the argument cannot be cogent.

We might try to improve the argument by softening the generalisation in P1 further, in an attempt to make it more likely to be true. An attempt to do so might give us something like this:

P1. Many men beat their wives.
P2. Aristotle is a man.
                                      
[Probably] C. Aristotle beats his wife.

P1 is now more likely to be true, but this argument is weak: the premises don’t provide good reason for accepting the conclusion. “Many” doesn’t tell us what proportion of all men beat their wives, and without a proportion it cannot make the conclusion probable. It just means that a good number of men do so, which is compatible with the vast majority not doing so. It’s important to think about whether the generalisation is sufficiently robust to make the conclusion probable.

Extended probabilistic arguments

Just as deductive arguments can occur as extended arguments, so can non-deductive ones. We need to watch how any probability which occurs in the argument affects the probability of the final conclusion.

In an extended argument with a single soft generalisation, the probability of the conclusion will reflect the degree of probability in the soft generalisation:

P1. Nearly all university students write assignments on computers.
P2. Betty is a university student.
                                               
[Probably] C1. Betty writes her university assignments on a computer.
P3. Everyone who writes assignments on a computer can read.
                                               
[Probably] C. Betty can read.

This is a strong argument. The probability of C is the same as that of C1, because P3 is a hard generalisation, and the degree of probability of C1 comes from the soft generalisation in P1.
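
To put rough numbers on this, suppose “nearly all” is read as about 95% (an illustrative figure only; the premise does not fix a precise proportion). A hard generalisation contributes a probability of 1, so it leaves that figure untouched:

    # Illustrative figures only: read "nearly all" in P1 as roughly 95%.
    p_c1 = 0.95    # support that P1 and P2 give to C1
    p_step2 = 1.0  # P3 is a hard generalisation: no exceptions, so probability 1

    p_c = p_c1 * p_step2
    print(p_c)     # 0.95: the hard generalisation does not dilute the support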

But each soft generalisation in an extended argument will further dilute the probability of the final conclusion.

Consider this argument:

P1. Most university students hand in their assignments.
P2. Conrad is a university student.
                                             
[Probably] C1. Conrad hands in his assignments.
P3. Most students who hand in their assignments pass their courses.
                                             
[Probably] C. Conrad passes his courses.

Here, the inference from P1 and P2 to C1 is not particularly strong. It is further weakened by the soft generalisation at P3. By the time the final conclusion is reached, the probability given to the conclusion by the premises is low. This argument is not a strong argument.
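
The arithmetic behind the dilution can be sketched in the same way. Reading each “most” as roughly 70% (again, an illustrative figure, since “most” does not fix a precise proportion), chaining the two soft generalisations multiplies the two probabilities together, and the result drops below even odds:

    # Illustrative figures only: read each "most" as roughly 70%.
    p_hands_in = 0.70           # support that P1 and P2 give to C1
    p_pass_if_handed_in = 0.70  # support added by the soft generalisation in P3

    p_passes = p_hands_in * p_pass_if_handed_in
    print(round(p_passes, 2))   # 0.49, below even odds, so the argument is weak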

If you can’t see from that example why the dilution causes a problem, consider this one, where the problem is more obvious:

P1. Most of those currently in the university library are university students.
P2. Conrad is currently in the university library.
                                             
[Probably] C1. Conrad is a university student.
P3. Most university students drink in the evenings.
P4. It is evening.
                                             
[Probably] C. Conrad is drinking.

It’s unlikely (although not impossible) that Conrad is drinking in the library. Even if all the premises of this argument were true, the conclusion is not likely, because the university students who are in the library in the evening are likely to be different people from the ones who are out drinking.

Sometimes the generalisations in an extended argument will be strong enough to make the final conclusion still probable, and sometimes they will not. There is no precise way to determine the probability of the conclusion when imprecise quantifiers such as “nearly all” and “few” are used. Instead, watch the number and type of generalisations made, and make a judgement call about whether the probability of the conclusion has been diluted too much.
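
One rough way to see how quickly the dilution accumulates is to treat every soft step in a chain as equally reliable and watch what happens as the chain gets longer. The 90% figure below is only an illustrative reading of a quantifier like “nearly all”:

    # Illustrative only: treat each soft generalisation as roughly 90% reliable.
    step = 0.90
    for length in range(1, 8):
        print(length, round(step ** length, 2))
    # One step gives 0.9, four steps give about 0.66, and by seven steps
    # the figure is about 0.48, below even odds, even though every
    # individual generalisation was "nearly all".

This is why it is the number of soft generalisations, as well as how soft each one is, that needs watching.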

A note on the use of “Probably”

When putting non-deductive arguments into standard form, we often insert “[Probably]” before the conclusion to indicate that the argument is intended to be non-deductive. It’s put in square brackets to indicate that it is not part of the conclusion, or of the argument, itself: it merely indicates the type of argument being used. This is often helpful, but it’s important to note that it does not indicate anything about the success of the argument. Nor can inserting “Probably” improve a poor argument. Consider this argument:

Nearly all dogs have four legs. Fido is a dog. So Fido has four legs.

This is a strong argument. It is strong regardless of whether “Probably” is placed before the conclusion or not. So

P1. Nearly all dogs have four legs.
P2. Fido is a dog.
                                             
C. Fido has four legs.

is a strong argument.

Further, inserting “Probably” before the conclusion does not show that an argument is strong, and will not improve a poor argument. Consider this argument:

P1. Nearly all dogs have four legs.
P2. Fido has four legs.
                                             
[Probably] C. Fido is a dog.

This is not a strong argument. (It affirms the consequent.) The presence of “Probably” cannot change this. You should think of the “Probably” as a useful way to indicate that an argument is non-deductive. But it does not tell you anything about the success of the argument.

Licence

Probabilistic arguments Copyright © 2024 by Stephanie Gibbons and Justine Kingsbury is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
