# 18 Differences between respondent groups

**Learning Objectives**

By the end of this chapter, students must be able to:

- understand the difference between an independent samples t-test and ANOVA
- interpret the output from these tests

The following material is derived from Curtin University [1] and is used under a Creative Commons Attribution-ShareAlike 4.0 Licence.

**One sample t-test**

A **one-sample t-test** is used to test whether the sample mean of a continuous variable is significantly different to a ‘test value’ (some hypothesised value). For example, you would use it if you had a sample of student final marks and you wanted to test whether they came from a population where the mean final mark was equal to a previous year’s mean of 70. In this case, the hypotheses would be:

H0: The sample comes from a population with a mean final mark of 70 (μ<sub>final mark</sub> = 70)

HA: The sample does not come from a population with a mean final mark of 70 (μ<sub>final mark</sub> ≠ 70)

Before conducting a one-sample t-test you need to check that the following assumptions are valid:

**Assumption 1:** The sample is a random sample that is representative of the population.

**Assumption 2:** The observations are independent, i.e. measurements for one subject have no bearing on any other subject’s measurements.

**Assumption 3:** The variable is normally distributed, or the sample size is large enough to ensure normality of the sampling distribution.

If the last assumption of normality is violated, or if you have an ordinal variable rather than a continuous one (such as final grades of F, 5, 6, 7, 8, 9, 10), the **one-sample Wilcoxon signed-rank test** should be used instead.

Assuming the assumptions for the one-sample t-test are met, and the test is conducted using statistical software (e.g. SPSS, as in this example), the results should look something like the following:

Note that the first of these tables displays the descriptive statistics, which you should examine first to get an idea of what is happening in the sample. For example, the sample mean is 73.125 compared to the test value of 70, giving a difference of 3.125 (this value can be calculated, and it is also displayed in the second table). Testing whether or not this difference is statistically significant requires the second table, in particular the p-value (listed in this table as ‘Sig. (2-tailed)’) and the confidence interval for the difference. In terms of the p-value:

- If p⩽.05 we reject H0, meaning the sample has come from a population with a mean significantly different to the test value.
- If p>.05 we do not reject H0, meaning the sample has come from a population with a mean that is not significantly different to the test value.

In this case, our p-value of .03 shows that the difference is statistically significant.

This is confirmed by the confidence interval of (.3217, 5.9283) for the difference between the population mean and our test value (70). Because this confidence interval does not contain zero, it again shows that the difference is statistically significant. In fact, we are 95% confident that the true population mean is between .3217 and 5.9283 points higher than our test value.

Note that while the test statistic (t) and degrees of freedom (df) should both generally be reported as part of your results, you do not need to interpret these when assessing the significance of the difference.
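For readers working in Python rather than SPSS, the same test can be sketched with SciPy. The marks below are invented values, chosen only so that the sample mean is 73.125 as in the example above; the t-statistic, p-value and confidence interval will not reproduce the SPSS output described in the text:

```python
import math
from scipy import stats

# Invented sample of final marks (chosen so the mean is 73.125;
# other results will NOT match the SPSS tables described above)
marks = [68, 75, 80, 71, 77, 66, 79, 69]
test_value = 70

# H0: the sample comes from a population with a mean final mark of 70
t_stat, p_value = stats.ttest_1samp(marks, popmean=test_value)

# 95% confidence interval for the difference between the mean and 70
n = len(marks)
mean = sum(marks) / n
se = stats.tstd(marks) / math.sqrt(n)    # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)    # critical t value, df = n - 1
ci = (mean - test_value - t_crit * se,
      mean - test_value + t_crit * se)

print(f"mean = {mean}, t = {t_stat:.3f}, p = {p_value:.3f}")
print(f"95% CI for the difference: ({ci[0]:.3f}, {ci[1]:.3f})")
```

Note that with these invented marks the p-value is above .05 and the confidence interval for the difference contains zero, so the two ways of judging significance agree, just as they do in the SPSS example.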


**Independent samples t-test**

An **independent samples t-test** is used to test whether there is a significant difference in sample means for a continuous variable for two independent groups. For example, you would use it if you collected data on hours spent watching TV each week, and you wanted to test if there was a significant difference in the mean hours for males and females. In this case, the hypotheses would be:

H0: There is no significant difference in TV hours per week for males and females (μ<sub>males</sub> = μ<sub>females</sub>, or μ<sub>males</sub> − μ<sub>females</sub> = 0)

HA: There is a significant difference in TV hours per week for males and females (μ<sub>males</sub> ≠ μ<sub>females</sub>, or μ<sub>males</sub> − μ<sub>females</sub> ≠ 0)

Before conducting an independent samples t-test you need to check that the following assumptions are valid:

**Assumption 1:** The sample is a random sample that is representative of the population.

**Assumption 2:** The observations are independent, i.e. measurements for one subject have no bearing on any other subject’s measurements.

**Assumption 3:** The variable is normally distributed for both groups, or the sample size is large enough to ensure the normality of the sampling distribution.

If the last assumption of normality is violated, or if you have an ordinal variable rather than a continuous one (such as hours recorded in ranges), the **Mann-Whitney U test** should be used instead.
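As a minimal illustration for Python users, the Mann-Whitney U test is available in SciPy. The range-coded hours below (1–5, standing for bands such as 0–5 hours, 6–10 hours, and so on) are invented purely for the sake of the example:

```python
from scipy import stats

# Invented ordinal data: weekly TV hours recorded in ranges,
# coded 1-5, so the Mann-Whitney U test is used instead of a t-test
males   = [2, 3, 1, 4, 2, 3, 2, 1]
females = [3, 4, 2, 5, 4, 3, 4, 5]

u_stat, p_value = stats.mannwhitneyu(males, females)
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```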

Assuming the assumptions for the independent samples t-test are met, and the test is conducted using statistical software (e.g. SPSS, as in this example), the results should look something like the following:

Note that the first of these tables displays the descriptive statistics, which you should examine first to get an idea of what is happening in the sample. For example, the difference between the sample means for males and females is −1.70833 (this value can be calculated from the means in the first table, and is also displayed in the second table; the fact that it is negative is simply because females watched more TV than males).

Next, note that there are actually three p-values (and two confidence intervals) in the second table:

- one is used when the variances for the two groups are approximately equal (the value in the ‘Sig. (2-tailed)’ column in the top row of the table)
- one is used when the variances for the two groups are not approximately equal (the value in the ‘Sig. (2-tailed)’ column in the bottom row of the table)
- one is used to determine which of these two situations applies (the first value in the table, in the ‘Sig.’ column)

We need to interpret the last of these first; it is the p-value for Levene’s Test for Equality of Variances, for which the null hypothesis is that the variances are equal. Hence, if p⩽.05 we reject this null hypothesis and assume unequal variances, while if p>.05 we fail to reject it and assume equal variances. The outcome determines which of the other p-values (and confidence intervals) you should interpret. In this case, the p-value of .966 means we can assume equal variances and should interpret the p-value in the top row of the remainder of the table. Again, the interpretation of this p-value is that:

- If p⩽.05 we reject H0, meaning the means of the two groups are significantly different.
- If p>.05 we do not reject H0, meaning the means of the two groups are not significantly different.

In this case, our p-value of .564 (in the top row of the remainder of the table) shows that the difference between the means is not statistically significant.

This is confirmed by the confidence interval of (−7.654, 4.238) for the difference between the means. Because this confidence interval contains zero it again means the difference is not statistically significant. We are 95% confident that the difference in mean hours spent watching TV each week for males and females is between −7.654 and 4.238 hours.

Note that while the test statistic (t) and degrees of freedom (df) should both generally be reported as part of your results, you do not need to interpret these when assessing the significance of the difference.
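For Python users, the two-step procedure described above (Levene's test first, then the appropriate version of the t-test) can be sketched with SciPy. The TV-hours data below are invented and will not reproduce the SPSS tables described in the text:

```python
from scipy import stats

# Invented weekly TV-hours data for two independent groups; these
# values do not reproduce the SPSS output described in the text
males   = [10, 12, 8, 15, 11, 9, 14, 10]
females = [13, 11, 16, 12, 14, 15, 10, 13]

# Step 1: Levene's test (H0: the two groups have equal variances)
lev_stat, lev_p = stats.levene(males, females)
equal_var = lev_p > 0.05  # p > .05 -> fail to reject H0, assume equal variances

# Step 2: independent samples t-test, using the result of step 1
t_stat, p_value = stats.ttest_ind(males, females, equal_var=equal_var)

print(f"Levene p = {lev_p:.3f} (equal variances assumed: {bool(equal_var)})")
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

Passing the Levene result to `equal_var` mirrors choosing between the top and bottom rows of the SPSS table: `equal_var=True` gives the classic pooled-variance t-test, while `equal_var=False` gives Welch's t-test.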


**One-way ANOVA**

**One-way ANOVA** is similar to the independent samples t-test but is used when three or more groups are compared. For example, comparing the mean weights of people who do no exercise, who do moderate exercise, and who do lots of exercise.

The null hypothesis for a one-way ANOVA states that all the population means are equal, while the alternative hypothesis states that at least one of them is different.

Before conducting a one-way ANOVA you need to check that the following assumptions are valid:

**Assumption 1:** The sample is a random sample that is representative of the population.

**Assumption 2:** The observations are independent, i.e. measurements for one subject have no bearing on any other subject’s measurements.

**Assumption 3:** The variable is normally distributed for each of the groups, or the sample size is large enough to ensure normality of the sampling distribution.

**Assumption 4:** The populations being compared have equal variances.

If a one-way ANOVA is conducted and it turns out that at least one of the means is different, you will need to investigate further to determine where the difference lies using **post hoc tests**, for example Tukey’s HSD.
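A minimal Python sketch of this workflow, using SciPy with invented weight data for the three exercise groups from the example above (note that `scipy.stats.tukey_hsd` requires a reasonably recent SciPy release):

```python
from scipy import stats

# Invented weights (kg) for three exercise groups
no_exercise = [82, 90, 78, 85, 88]
moderate    = [75, 80, 72, 78, 76]
lots        = [68, 70, 65, 72, 69]

# H0: all three population means are equal
f_stat, p_value = stats.f_oneway(no_exercise, moderate, lots)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")

if p_value <= 0.05:
    # At least one mean differs, so run Tukey's HSD post hoc test
    # to locate which pairs of group means differ
    result = stats.tukey_hsd(no_exercise, moderate, lots)
    print(result)
```

The Tukey HSD result reports a p-value and confidence interval for every pair of groups, which is exactly the "where does the difference lie" question the post hoc step is meant to answer.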

- Curtin University 2021, *Inferential statistics*, viewed 11 May 2022, <https://libguides.library.curtin.edu.au/uniskills/numeracy-skills/statistics/inferential#s-lg-box-wrapper-25242003>