
Hypothesis Testing

MPA 6010

Ani Ruhil

1 / 53

Agenda

  1. The Logic of Hypothesis Testing

  2. One-tailed versus Two-tailed hypotheses

  3. One-Sample t-tests

  4. Two-group t-tests

  5. Paired t-tests

  6. Assumptions of t-tests

2 / 53

The Logic of Hypothesis Testing

3 / 53

Hypothesis testing is an inferential procedure that uses sample data to evaluate the credibility of a specific belief about a population parameter. The process involves ...

  1. Stating a hypothesis: an assumption that can neither be fully proven nor fully disproven. For example,

    • Not more than 5% of GM trucks break down in under 10,000 miles

    • Heights of North American adult males are distributed with \(\mu = 72\) inches

    • Mean year-round temperature in Athens (OH) is \(\ge 62\)

    • 10% of Ohio teachers are Accomplished

    • Mean county unemployment rate in Ohio is 4.1%

  2. Drawing a sample to test the hypothesis

  3. Conducting the test itself to see if the hypothesis should be rejected

4 / 53

The Null and the Alternative Hypotheses

  • Null Hypothesis \((H_{0})\): the assumption believed to be true

  • Alternative Hypothesis \((H_{1})\): the statement believed to be true if \(H_{0}\) is rejected

    \(H_{0}\): \(\mu > 72\) inches; \(H_{1}\): \(\mu \leq 72\)

    \(H_{0}\): \(\mu < 72\) inches; \(H_{1}\): \(\mu \geq 72\)

    \(H_{0}\): \(\mu \leq 72\) inches; \(H_{1}\): \(\mu > 72\)

    \(H_{0}\): \(\mu \geq 72\) inches; \(H_{1}\): \(\mu < 72\)

    \(H_{0}\): \(\mu = 72\) inches; \(H_{1}\): \(\mu \neq 72\)

\(H_{0}\) and \(H_{1}\) are mutually exclusive and mutually exhaustive

  • Mutually Exclusive: Either \(H_{0}\) or \(H_{1}\) is True \(\cdots\) both cannot be true at the same time

  • Mutually Exhaustive: \(H_{0}\) and \(H_{1}\) exhaust the Sample Space \(\cdots\) there are no other possibilities that we are unaware of

5 / 53

Type I and Type II Errors

Decision based on Sample    \(H_0\) is True               \(H_0\) is False
Reject \(H_0\)              Type I error \((\alpha)\)     No error \((1 - \beta)\)
Do not reject \(H_0\)       No error \((1 - \alpha)\)     Type II error \((\beta)\)

Type I Error: Rejecting the Null hypothesis \(H_0\) when \(H_0\) is true

i.e., we should not have rejected the Null hypothesis

Type II Error: Failing to reject the Null hypothesis \(H_0\) when \(H_0\) is false

i.e., we should have rejected the Null hypothesis

The probability of committing a Type I error \(= \text{ Level of Significance }= \alpha\)

We have to decide how often we are okay with committing a Type I error (i.e., falsely rejecting \(H_0\)). Conventionally, the Type I error rate is set to one of the following values: \(\alpha=0.05 \text{ or } \alpha=0.01\)

Note the very cautious language ... Reject \(H_{0}\) versus Do Not Reject \(H_{0}\)

6 / 53

The Process of Hypothesis Testing: An Example

Historically, the standard pediatric vaccination schedule (diphtheria–tetanus–pertussis (DTaP), polio, Haemophilus influenzae type b (Hib), measles–mumps–rubella (MMR), pneumococcal, and hepatitis B) became widely accepted in the late 20th century as a standard requirement for children to attend public schools.

During the COVID-19 pandemic, folks began wondering whether vaccine hesitancy would grow to such an extent that more parents would start asking for vaccine exemptions for their children. Has this happened in Ohio? Are more children attending kindergarten indeed exempt from vaccines? Regardless of what we believe, perhaps the Governor would like us to carry out a test and see if exemptions have grown. How would we test this?

The first thing we need is a starting point -- what was the exemption rate in 2018-19? Records show the rate was 2.9%. Now we can write our hypotheses.

$$H_0: \text{Exemption rates are the same or lower } (\mu \leq 2.9)$$

Since we are expected to test whether exemption rates have grown, the alternative hypothesis would be:

$$H_1: \text{Exemption rates have increased } (\mu > 2.9)$$

7 / 53

The Sampling Distribution of \(\bar{x}\)

We know from the theory of sampling distributions that the distribution of sample means, for all samples of size \(n\), will be normally distributed (as shown below)

Most samples would be in the middle of the distribution, but by sheer chance we could end up with a sample mean in the tails. This will happen with very small probability, but it could happen!

8 / 53

For example, we could get a sample mean that, when converted into its \(t\)-score via \(t = \dfrac{\bar{x} - \mu}{s_{\bar{x}}}\), where the standard error is given by \(s_{\bar{x}} = \dfrac{s}{\sqrt{n}}\), could in principle assume any value between \(-\infty\) and \(+\infty\). Here are some \(t\) values and the probabilities of seeing them with a sample size of 300 (i.e., \(n = 300\)) and hence \(df = n - 1 = 300 - 1 = 299\).
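The original slides plot these probabilities as shaded regions. As a minimal sketch (Python with scipy assumed, since the deck itself names no software), the tail probabilities come straight from the \(t\) distribution with \(df = 299\):

```python
from scipy import stats

df = 299  # n - 1 = 300 - 1

# Upper-tail probability P(T >= t) for a few illustrative t values
for t in (1.0, 1.65, 1.96, 2.58):
    print(f"t = {t:.2f}: P(T >= t) = {stats.t.sf(t, df):.4f}")
```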

Calculate the probabilities of each shaded region.

9 / 53

Calculate the probabilities of each shaded region.

10 / 53

Calculate the probabilities of each shaded region.

11 / 53

If sample means can fall anywhere in the distribution with varying probabilities, we have to establish some probability cutoff such that, if the probability of drawing our sample mean falls at or below that cutoff, we can say there is a very low probability of this occurring if \(H_0\) is true, and hence it must be that exemptions have increased.

Conventionally we set this cutoff to a probability of \(0.05\) or \(0.01\). These are the areas shaded in green below:

12 / 53

We run an anonymous survey of public schools and out of a total of 300 responses \((n = 300)\), we find the average exemption rate to be 4.7% with a standard deviation of \(s = 0.212\).

What is the \(t\)-score here?

First, the standard error \(s_{\bar{x}} = \dfrac{s}{\sqrt{n}} = \dfrac{0.212}{\sqrt{300}} = 0.0122\)

Second, the \(t\)-score is calculated as \(t = \dfrac{\bar{x} - \mu_0}{s_{\bar{x}}} = \dfrac{4.7 - 2.9}{0.0122} = \dfrac{1.8}{0.0122} = 147.541\)

If nothing has changed and, in reality, only 2.9% of kindergarteners in the population are exempt, the probability of ending up with \(t = 147.541\) is practically 0 ... aka, highly unlikely to occur by chance.

Hence the only logical conclusion is to say, well, we reject the null hypothesis that exemption rates are 2.9% or less.

Formally, we set the following decision rules:

  • Reject the Null hypothesis if the probability of your calculated \(t\) is \(\leq \alpha\)
  • Do not reject the Null hypothesis if the probability of your calculated \(t\) is \(> \alpha\)

\(\alpha = 0.05\) or \(\alpha = 0.01\)

13 / 53

Critical Region vs Region of "Acceptance"

(figure: the middle of the distribution is the region of "acceptance"; the tails beyond the critical values form the critical region)

14 / 53

But what if we had no specific guidance and wanted to simply test whether exemption rates have changed? Now we would have to allow our calculated \(t\)-score to be positive or negative.

$$H_0: \text{Exemption rates are 2.9% } (\mu = 2.9)$$ $$H_1: \text{Exemption rates are not 2.9% } (\mu \neq 2.9)$$

15 / 53

The process revisited ...

  1. State the hypotheses

    • If we want to test whether something has changed then \(H_0\) must specify that nothing has changed \(\ldots H_0:\mu = \mu_{0}; H_1: \mu \neq \mu_{0} \ldots\) two-tailed
    • If we want to test whether something is different then \(H_0\) must specify that nothing is different \(\ldots H_0:\mu = \mu_{0}; H_1: \mu \neq \mu_{0}\ldots\) two-tailed
    • If we want to test whether something had an impact then \(H_0\) must specify that it had no impact \(\ldots H_0:\mu = \mu_{0}; H_1: \mu \neq \mu_{0}\ldots\) two-tailed
    • If we want to test whether something has increased then \(H_0\) must specify that it has not increased \(\ldots H_0:\mu \leq \mu_{0}; H_1: \mu > \mu_{0}\ldots\) one-tailed
    • If we want to test whether something has decreased then \(H_0\) must specify that it has not decreased \(\ldots H_0:\mu \geq \mu_{0}; H_1: \mu < \mu_{0}\ldots\) one-tailed
  2. Collect the sample and set \(\alpha=0.05\) or \(\alpha=0.01\)

  3. Calculate \(s_{\bar{x}} = \frac{s}{\sqrt{n}}\), \(\bar{x}\), \(df=n-1\), and then \(t\)
  4. Reject \(H_0\) if calculated \(t\) falls in the critical region; do not reject \(H_0\) otherwise. This is the same as saying Reject \(H_0\) if p-value is \(\leq \alpha\); do not reject \(H_0\) if p-value \(> \alpha\)
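Putting the four steps together, here is a minimal sketch in Python (scipy assumed; the deck itself names no software), using the exemption-survey numbers from earlier:

```python
import math
from scipy import stats

# Step 2: sample summary statistics and alpha
n, xbar, s = 300, 4.7, 0.212   # survey of exemption rates
mu0, alpha = 2.9, 0.05         # hypothesized value and significance level

# Step 3: standard error, degrees of freedom, and the test statistic
se = s / math.sqrt(n)
df = n - 1
t = (xbar - mu0) / se

# Step 4: one-tailed (upper) p-value for H1: mu > mu0
p = stats.t.sf(t, df)
print(f"t = {t:.3f}, p = {p:.3g}")
print("Reject H0" if p <= alpha else "Do not reject H0")
```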
16 / 53

Problem 1

Last year, the motor pool of Normal (IL) maintained the city's fleet of vehicles at an average cost of $346 per car. This year Jack's Crash Shop is doing the maintenance. The City notices that in a random sample of 36 cars fixed by Jack the mean repair cost is $330 with a standard deviation of $120. Is Jack's Crash Shop saving the City money?

\(H_0: \mu \geq 346\) and \(H_1: \mu < 346\). Let us choose \(\alpha = 0.05\). Note \(df=n-1=36-1=35\)

\(s_{\bar{x}} = \dfrac{s}{\sqrt{n}} = \dfrac{120}{\sqrt{36}} = 20\), and hence \(t = \dfrac{\bar{x} - \mu_{0}}{s_{\bar{x}}} = \dfrac{330-346}{20} = \dfrac{-16}{20} = -0.80\)

Fail to reject \(H_0\); the data do not suggest that Jack's Crash Shop is saving the City money

17 / 53

Problem 2

Kramer's (TX) Police Chief learns that her staff clear 46.2% of all burglaries in the city. She wants to benchmark their performance, and to do this she samples 10 other similar cities in Texas. She finds their numbers to be as follows:

Rate Rate
44.2 32.1
40.3 32.9
36.4 29.0
49.4 46.4
51.7 41.0

Is Kramer's clearance rate significantly different from those of other similar Texas cities?

\(H_0: \mu = 46.2\) and \(H_1: \mu \neq 46.2\). Set \(\alpha = 0.05\)

Note \(df=n-1=10-1=9\), \(\bar{x}=40.34\), and \(s_{\bar{x}} = 2.4279\)

\(t = \dfrac{ \bar{x} - \mu_{0} }{ s_{\bar{x}} } = \dfrac{40.34-46.2}{2.4279} = -2.414\)
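Since the raw rates are listed above, a minimal sketch (scipy assumed) reproduces this two-tailed test directly:

```python
from scipy import stats

rates = [44.2, 40.3, 36.4, 49.4, 51.7, 32.1, 32.9, 29.0, 46.4, 41.0]

# Two-tailed one-sample t-test against the hypothesized mean of 46.2
t, p = stats.ttest_1samp(rates, popmean=46.2)
print(f"t = {t:.3f}, p = {p:.4f}")  # t matches the -2.414 computed above
```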

18 / 53

Reject \(H_0\); the data suggest that Kramer's clearance rate does not conform to that of other similar Texas cities.

19 / 53

Problem 3: Philanthropy

The Director of Philanthropy at the Fleckman Institute of the Arts is curious to assess the impact of this year's changes in federal tax laws on donations. Last year the average donation was $580. A random sample of 50 donors yields an average donation of $623.64 with a standard deviation of $84.27. Did the change in federal tax laws have an impact on donations? \(\ldots\) solve this on your own

20 / 53

Problem 4: Volunteering

Springdale University is concerned that student volunteer activity has decreased. Last year their students volunteered an average of 7.3 hours of community service per month. This year, a random sample of 75 student volunteers reveals an average of 7.07 hours per month with a standard deviation of 1.29 hours. Should the University be concerned? \(\ldots\) solve this on your own

21 / 53

Overlap Between Hypothesis Tests and Confidence Intervals

  • Calculate the 95% confidence interval for a sample mean \(\bar{x}\)

  • Note that in this confidence interval, \(\alpha = 0.05\); \(\alpha/2=0.025\)

  • Use the Test Statistic with \(\alpha = 0.05\) to make a decision

  • Note the similarity?

22 / 53

Assumptions Underlying the \(t\)-test

  1. The data are independent within the sample and identically distributed—no clustering, serial dependence, or design quirks that tie observations together
  2. The \(t\)-statistic is approximately normally distributed
  3. The model is correctly specified (i.e., observations come from a population with a constant mean (the parameter you test), finite variance, and measurements are unbiased and on a continuous scale)

Testing Assumptions?

  1. Check your data sampling plan, the study design, and measurement
  2. Visual checks for Normality (Histograms, boxplots, QQ-plots)
  3. Formal tests of Normality
    • Small n: lean on Shapiro–Wilk (\(H_0:\) Data come from a Normally distributed population) and QQ Plots; be wary of outliers.
    • Medium n (30–200): CLT helps; emphasize QQ Plots and tail fit.
    • Large n: tiny deviations will always “fail” tests; trust visual diagnostics and robustness.

If you see curved Q–Q lines, that is skew; S-shaped is heavy/light tails. Fix with transformations only if they clarify interpretation; otherwise prefer robust inference.

23 / 53

QQ Plots that hint at Non-Normality

24 / 53

QQ Plots that hint at Normality

25 / 53

Shapiro-Wilk Test and QQ Plot for Kramer (TX)
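The slide's figure is not reproduced here, but a minimal sketch (scipy assumed) of the Shapiro-Wilk part of that check, on the Kramer benchmark rates from Problem 2:

```python
from scipy import stats

rates = [44.2, 40.3, 36.4, 49.4, 51.7, 32.1, 32.9, 29.0, 46.4, 41.0]

# Shapiro-Wilk: H0 is that the data come from a Normally distributed population
w, p = stats.shapiro(rates)
print(f"W = {w:.4f}, p = {p:.4f}")  # a large p-value gives no grounds to reject Normality
```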

26 / 53

Beware of Normality Tests

p-value \(= 0.00008903\); Reject \(H_0\)

p-value \(= 0.2015\); Do not Reject \(H_0\)

27 / 53

What should I do if my data are Non-Normal?

  1. Consider eliminating outliers and then redo tests

  2. Consider transforming your data, for example (see the sketch after this list):

    • Right‑skewed, positive data: try square root, log, or Box–Cox, in that order. Add a small constant only when zeros exist.

    • Proportions near 0 or 1: use logit on bounded rates, or arcsine‑sqrt for binomial proportions; stabilize variance before modeling.

    • Left‑skewed: reflect then apply a right‑skew fix (e.g., transform \(y = \max(x) + 1 - x\), then take the log or the square root of \(y\)).

    • Heavy tails: consider rank‑based or robust methods instead of forcing normality; Huberization or winsorizing only if defensible.

    • Multiplicative errors: log typically linearizes relationships and equalizes spread.

  3. Switch to non-parametric statistical tests
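A minimal sketch of options 2 and 3 (numpy and scipy assumed; the data vector is hypothetical, not from the slides):

```python
import numpy as np
from scipy import stats

# Hypothetical right-skewed, positive data
x = np.array([1.2, 0.8, 3.5, 0.4, 9.7, 2.1, 0.6, 15.3])

# Option 2: transformations for right skew (data must be positive)
x_sqrt = np.sqrt(x)   # milder fix
x_log = np.log(x)     # stronger fix

# Option 3: a non-parametric alternative to the one-sample t-test,
# here testing a hypothetical median of 2.0 via the Wilcoxon signed-rank test
w, p = stats.wilcoxon(x - 2.0)
print(f"Wilcoxon statistic = {w}, p = {p:.4f}")
```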

28 / 53

Comparing Two Means

29 / 53

Comparisons of Means from Common Parent Population

(figure: two sample means drawn from a common parent population)

30 / 53

Comparisons of Means from Different Parent Populations

(figure: two sample means drawn from separate parent populations)

31 / 53

We often need to compare sample means across two groups. For example, are average earnings the same for men and women in a specific occupation? Perhaps we suspect (a) that women are underpaid or, more generally, (b) that their salaries differ from those of men.

Let the population and sample means be \(\mu_m, \mu_w\) and \(\bar{x}_m, \bar{x}_w\), respectively

(a) \(H_0: \mu_m \leq \mu_w\) and \(H_1: \mu_m > \mu_w\), \(\therefore H_0: \mu_m - \mu_w \leq 0\) and \(H_1: \mu_m - \mu_w > 0\)

(b) \(H_0: \mu_m = \mu_w\) and \(H_1: \mu_m \neq \mu_w\), \(\therefore H_0: \mu_m - \mu_w = 0\) and \(H_1: \mu_m - \mu_w \neq 0\)

Standard Error of the difference in means: \(s_{\bar{x}_m - \bar{x}_w} = \sqrt{\dfrac{s^{2}_{m}}{n_m} + \dfrac{s^{2}_w}{n_w}}\)

Confidence Interval estimate: \(\left(\bar{x}_m - \bar{x}_w\right) \pm t_{\alpha/2}\left(s_{\bar{x}_m - \bar{x}_w}\right)\)

The Test Statistic: \(t = \dfrac{\left(\bar{x}_m - \bar{x}_w \right) - \left(\mu_m - \mu_w \right)}{\sqrt{\dfrac{s^{2}_{m}}{n_m} + \dfrac{s^{2}_w}{n_w}}} = \dfrac{\left(\bar{x}_m - \bar{x}_w \right) - D_0}{\sqrt{\dfrac{s^{2}_{m}}{n_m} + \dfrac{s^{2}_w}{n_w}}}\)

32 / 53

The degrees of freedom for this test: \(df = \dfrac{\left(\dfrac{s^{2}_m}{n_m} + \dfrac{s^{2}_w}{n_w} \right)^2}{\dfrac{1}{(n_m -1)}\left(\dfrac{s^{2}_m}{n_m}\right)^2 + \dfrac{1}{(n_w -1)}\left(\dfrac{s^{2}_w}{n_w}\right)^2}\)

Note: We usually round down the \(df\) to the nearest integer

We have two ways of calculating the estimated standard error \((s_{\bar{x}_1 - \bar{x}_2})\) and the degrees of freedom \(df\)

(1) When the population variances are assumed unequal

(2) When the population variances are assumed equal

33 / 53

(1) Unequal Population Variances

Standard Error will be: \(\left(s_{\bar{x}_1 - \bar{x}_2}\right) = \sqrt{\dfrac{s^{2}_{1}}{n_1} + \dfrac{s^{2}_2}{n_2}}\)

Degrees of Freedom will be: \(df = \dfrac{\left(\dfrac{s^{2}_1}{n_1} + \dfrac{s^{2}_2}{n_2} \right)^2}{\dfrac{1}{(n_1 -1)}\left(\dfrac{s^{2}_1}{n_1}\right)^2 + \dfrac{1}{(n_2 -1)}\left(\dfrac{s^{2}_2}{n_2}\right)^2}\)

Rule of thumb ...

  • Use this when \(n_1\) or \(n_2\) is \(< 30\) and

  • Either sample has a standard deviation at least twice that of the other sample

34 / 53

(2) Equal Population Variances

Standard Error will be: \(\left(s_{\bar{x}_1 - \bar{x}_2}\right) = \sqrt{\dfrac{n_1 + n_2}{n_1 \times n_2}}\sqrt{\dfrac{(n_1 -1)s^{2}_{x_{1}} + (n_2 -1)s^{2}_{x_2}}{\left(n_1 + n_2\right) -2}}\)

Degrees of Freedom will be: \(df = \left( n_1 + n_2\right) -2\)

Rule of thumb ...

  • Use this when the standard deviations are roughly equal, and

  • \(n_1\) and \(n_2\) \(\geq 30\)
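A minimal sketch (plain Python) of both recipes, computing the standard error and degrees of freedom from summary statistics under each assumption:

```python
import math

def welch_se_df(s1, n1, s2, n2):
    """Unequal-variance (Welch) standard error and degrees of freedom."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    se = math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return se, math.floor(df)  # round df down, per the earlier note

def pooled_se_df(s1, n1, s2, n2):
    """Equal-variance (pooled) standard error and degrees of freedom."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return se, n1 + n2 - 2

print(welch_se_df(6.0, 20, 2.5, 25))   # hypothetical inputs
print(pooled_se_df(125, 50, 115, 50))  # the bookmobile numbers used in Example 1 below
```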

35 / 53

Assumptions and Rules-of-thumb

Assumptions:

(1) Random samples

(2) Variables are drawn from normally distributed Populations

Rules-of-thumb:

  • Draw larger samples if you suspect the Population(s) may be skewed

  • Go with assumption of equal variances if both the following are met:
    (a) Assumption theoretically justified, standard deviations fairly close, &
    (b) \(n_1 \geq 30\) and \(n_2 \geq 30\)

  • Go with assumption of unequal variances if both the following are met:
    (a) One standard deviation is at least twice the other standard deviation, &
    (b) \(n_1 < 30\) or \(n_2 < 30\)

Of course, some statistical software packages (SPSS, for instance) will run the test under both assumptions so you can choose on the basis of the results (a bad idea in some eyes)

36 / 53

Testing Variances: Levene's Test for Homogeneity of Variances

  • Assumes roughly symmetric frequency distributions within all groups
  • Robust to violations of assumption
  • Can be used with 2 or more groups

\(H_0: \sigma^{2}_{1} = \sigma^{2}_{2} = \sigma^{2}_{3} = \cdots \sigma^{2}_{k}\) and \(H_A:\) For at least one pair of \((i,j)\) we have \(\sigma^{2}_{i} \neq \sigma^{2}_{j}\)

Test Statistic: \(W = \dfrac{ (N-k)\displaystyle\sum^{k}_{i=1}n_{i}\left( \bar{Z}_{i} - \bar{Z} \right)^{2} }{(k-1)\displaystyle\sum^{k}_{i=1}\sum^{n_i}_{j=1}\left( Z_{ij} - \bar{Z}_{i}\right)^{2}}\)

\(Z_{ij} = |{Y_{ij} - \bar{Y}_i}|\); \(\bar{Z}_i\) is the mean for all \(Y\) in the \(i^{th}\) group; \(\bar{Z}\) is the mean for all \(Y\) in the study; \(k\) is the number of groups in the study; and \(n_i\) is the sample size for group \(i\)

If you opt for the more robust version that uses the Median, then, \(Z_{ij} = |Y_{ij} - \tilde{Y}_{i}|\) where \(\tilde{Y}_{i}\) is median of \(i^{th}\) group

\(W \sim F_{\alpha,\, k-1,\, N-k}\)
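scipy implements this test directly; a minimal sketch with two hypothetical groups (center='median' gives the robust, median-based variant noted above):

```python
from scipy import stats

group1 = [12.1, 14.3, 11.8, 15.2, 13.7, 12.9]  # hypothetical data
group2 = [10.4, 18.9, 9.1, 21.3, 8.8, 16.0]    # visibly more spread out

# Levene's test for homogeneity of variances (Brown-Forsythe when center='median')
w, p = stats.levene(group1, group2, center='median')
print(f"W = {w:.3f}, p = {p:.4f}")  # a small p-value rejects equal variances
```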

37 / 53

Example 1

The Athens County Public Library is trying to keep its bookmobile alive since it reaches readers who otherwise may not use the library. One of the library employees decides to conduct an experiment, running advertisements in 50 areas served by the bookmobile and not running advertisements in 50 other areas also served by the bookmobile. After one month, circulation counts of books are calculated and mean circulation counts are found to be 526 books for the advertisement group with a standard deviation of 125 books and 475 books for the non-advertisement group with a standard deviation of 115 books. Is there a statistically significant difference in mean book circulation between the two groups?

Since we are being asked to test for a "difference" it is a two-tailed test, with hypotheses given by:

$$\begin{array}{l} H_0: \text{ There is no difference in average circulation counts } (\mu_1 = \mu_2) \\ H_1: \text{ There is a difference in average circulation counts } (\mu_1 \neq \mu_2) \end{array}$$

38 / 53

Since both groups have sample sizes that exceed 30 we can proceed with the assumption of equal variances and calculate the standard error and the degrees of freedom. The degrees of freedom are easy: \(df = n_1 + n_2 - 2 = 50 + 50 - 2 = 98\). The standard error is \(s_{\bar{x}_1 - \bar{x}_2} = \sqrt{\dfrac{n_1 + n_2}{n_1n_2}}\sqrt{\dfrac{(n_1 -1)s^{2}_{x_{1}} + (n_2 -1)s^{2}_{x_2}}{\left(n_1 + n_2\right) -2}}\) and plugging in the values we have

$$s_{\bar{x}_1 - \bar{x}_2} = \sqrt{\dfrac{50 + 50}{2500}} \sqrt{\dfrac{(50 -1)(125^2) + (50 -1)(115^2)}{\left(50 + 50\right) -2}} = (0.2)(120.1041) = 24.02082$$

The test statistic is

$$t = \dfrac{\left( \bar{x}_1 - \bar{x}_2 \right) - \left( \mu_1 - \mu_2 \right) }{s_{\bar{x}_1 - \bar{x}_2}} = \dfrac{(526 - 475) - 0}{24.02082} = \dfrac{51}{24.02082} = 2.123158$$
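A minimal sketch (scipy assumed) that reproduces this test straight from the summary statistics:

```python
from scipy import stats

# Advertisement group vs no-advertisement group, equal variances assumed
t, p = stats.ttest_ind_from_stats(
    mean1=526, std1=125, nobs1=50,
    mean2=475, std2=115, nobs2=50,
    equal_var=True,
)
print(f"t = {t:.4f}, two-tailed p = {p:.4f}")  # t = 2.1232, p = 0.0363
```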

39 / 53

Since no \(\alpha\) is given let us use the conventional starting point of \(\alpha = 0.05\)

With \(df=98\) and \(\alpha = 0.05\), two-tailed, the critical \(t\) value would be \(\pm 1.98446745\)

Since our calculated \(t = 2.1231\) exceeds the critical \(t = 1.9844\), we can easily reject the null hypothesis of no difference

Conclusion: These data suggest there is a difference in average circulation counts between the advertisement and no advertisement groups

We could have also used the p-value approach, rejecting the null hypothesis of no difference if the p-value was \(\leq \alpha\)

The p-value of our calculated \(t\) turns out to be 0.0363 and so we can reject the null hypothesis

Note, in passing, that had we used \(\alpha = 0.01\) we would have been unable to reject the null hypothesis because \(0.0363\) is \(> 0.01\)

40 / 53

The 95% confidence interval is given by

$$(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2}(s_{\bar{x}_1 - \bar{x}_2}) = 51 \pm 1.9844(24.02082) = 51 \pm 47.66692 = 3.33308 \text{ and } 98.66692$$

We can be about 95% confident that the true difference between the groups lies in this interval

Had we used the 99% interval for a test with \(\alpha = 0.01\) the interval would be

$$51 \pm 2.627(24.02082) = -12.10269 \text{ and } 114.1027$$

subsuming the null hypothesis difference of \(0\) and leaving us unable to reject the null hypothesis.
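A minimal sketch (scipy assumed) of both intervals, pulling the critical values from the \(t\) distribution with \(df = 98\):

```python
from scipy import stats

diff, se, df = 51, 24.02082, 98  # difference in means, standard error, degrees of freedom

for level in (0.95, 0.99):
    tcrit = stats.t.ppf(1 - (1 - level) / 2, df)    # two-tailed critical value
    lo, hi = diff - tcrit * se, diff + tcrit * se
    print(f"{level:.0%} CI: ({lo:.4f}, {hi:.4f})")  # the 99% interval straddles 0
```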

41 / 53

Example 2

Say we have a large data-set with a variety of information about several cars, gathered in 1974. One of the questions we have been tasked with testing is whether the miles per gallon yield of manual transmission cars in 1974 was greater than that of automatic transmission cars. Assume they want us to use \(\alpha = 0.05\).

We have 13 manual transmission cars and 19 automatic transmission cars, and the means and standard deviations are 24.3923 and 6.1665 for manual cars, and 17.1473 and 3.8339 for automatic cars, respectively. The hypotheses are:

$$\begin{array}{l} H_0: \text{Mean mpg of manual cars is at most the mean mpg of automatic cars } (\mu_{manual} \leq \mu_{automatic}) \\ H_1: \text{Mean mpg of manual cars is greater than mean mpg of automatic cars } (\mu_{manual} > \mu_{automatic}) \end{array}$$

42 / 53

The calculated \(t\) is 4.1061 and the p-value is 0.0001425, allowing us to reject the null hypothesis

Conclusion: These data suggest that the average mpg of manual cars is greater than that of automatic cars

Note a couple of things here:

(i) We have a one-tailed hypothesis test, and
(ii) we are assuming equal variances since the two conditions for assuming unequal variances are not both met

In addition, note that the confidence interval is found to be \((3.6415, 10.8483)\), indicating that we can be 95% confident that the average manual mpg is higher than average automatic mpg by anywhere between 3.64 mpg and 10.84 mpg
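A minimal sketch of this one-tailed, equal-variance test from the summary statistics (scipy \(\geq\) 1.6 assumed for the alternative= argument):

```python
from scipy import stats

# Manual (group 1) vs automatic (group 2) mpg; H1: mean1 > mean2
t, p = stats.ttest_ind_from_stats(
    mean1=24.3923, std1=6.1665, nobs1=13,
    mean2=17.1473, std2=3.8339, nobs2=19,
    equal_var=True, alternative='greater',
)
print(f"t = {t:.4f}, one-tailed p = {p:.7f}")  # t = 4.1061, p = 0.0001425
```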

43 / 53

Comparing Matched or Paired Means

44 / 53

Matched (aka Dependent or Paired) Samples

Sometimes you may have two sets of measures on the same units. Now

  • \(H_0: \mu_d = 0; H_1: \mu_d \neq 0\) or
  • \(H_0: \mu_d \geq 0; H_1: \mu_d < 0\) or
  • \(H_0: \mu_d \leq 0; H_1: \mu_d > 0\)

\(\bar{d} = \dfrac{\sum{d_i}}{n}\)

\(s_d = \sqrt{\dfrac{\sum(d_i - \bar{d})^2}{n-1}}\) and \(s_{\bar{d}} = \dfrac{s_{d}}{\sqrt{n}}\)

Test Statistic: \(t = \dfrac{\bar{d} - \mu_{d}}{ s_{\bar{d}} }\)

With a normally distributed population, \(df = n-1\)

Interval estimate: \(\bar{d} \pm t_{\alpha/2} \left( s_{\bar{d}} \right)\)

45 / 53

The Testing Protocol

Let us see how the test is carried out with reference to a small data-set wherein we have six pre-school children's scores on a vocabulary test before a reading program is introduced into the pre-school \((x_1)\) and then again after the reading program has been in place for a few months \((x_2)\).

Vocabulary Scores pre- and post-intervention
Child ID Pre-intervention score Post-intervention score Difference = Pre - Post
1 6.0 5.4 0.6
2 5.0 5.2 -0.2
3 7.0 6.5 0.5
4 6.2 5.9 0.3
5 6.0 6.0 0.0
6 6.4 5.8 0.6
46 / 53

Note the column \(d_i\) has the difference of the scores such that for Child 1, \(6.0 - 5.4 = 0.6\), for Child 2, \(5.0 - 5.2 = -0.2\), and so on.

Note also that the mean, variance and standard deviation of \(d\) are calculated as follows:

\begin{eqnarray*} d_{i} &=& x_{1} - x_{2} \\ \bar{d} &=& \dfrac{\sum{d_i}}{n} \\ s^{2}_{d} &=& \dfrac{\sum(d_i - \bar{d})^2}{n-1} \\ s_d &=& \sqrt{\dfrac{\sum(d_i - \bar{d})^2}{n-1}} \end{eqnarray*}

47 / 53

Say we have no idea what to expect from the program. In that case, our hypotheses would be:

$$\begin{array}{l} H_0: \mu_d = 0 \\ H_1: \mu_d \neq 0 \end{array}$$

The test statistic is given by \(t = \dfrac{\bar{d} - \mu_d}{s_d/\sqrt{n}}; df=n-1\) and the interval estimate calculated as \(\bar{d} \pm t_{\alpha/2}\left(\dfrac{s_d}{\sqrt{n}}\right)\).

Once we have specified our hypotheses, selected \(\alpha\), and calculated the test statistic, the usual decision rules apply: ...

  • Reject the null hypothesis if the calculated p-value is \(\leq \alpha\)
  • Do not reject the null hypothesis if the calculated p-value is \(> \alpha\)

In this particular example, it turns out that \(\bar{d}=0.30\); \(s_d=0.335\); \(t = 2.1958\), \(df = 5\), p-value \(= 0.07952\) and 95% CI: \(0.3 \pm 0.35 = (-0.0512, 0.6512)\).

Given the large p-value we fail to reject \(H_0\) and conclude that these data do not suggest a statistically significant impact of the reading program.
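A minimal sketch (scipy assumed) reproducing these numbers from the raw scores in the table:

```python
from scipy import stats

pre  = [6.0, 5.0, 7.0, 6.2, 6.0, 6.4]  # pre-intervention vocabulary scores
post = [5.4, 5.2, 6.5, 5.9, 6.0, 5.8]  # post-intervention scores

# Paired t-test on the pre - post differences, two-tailed by default
t, p = stats.ttest_rel(pre, post)
print(f"t = {t:.4f}, p = {p:.5f}")  # t = 2.1958, p = 0.07952
```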

48 / 53

Example 3

Over the last decade, has poverty worsened in Ohio's public school districts? One way to test worsening poverty would be to compare the percent of children living below the poverty line in each school district across two time points. For the sake of convenience I will use two American Community Survey (ACS) data sets that measure Children Characteristics (Table S0901), one from the 2011-2015 ACS and the other from the 2006-2010 ACS. While a small snippet of the data is shown below for the 35 school districts with data for both years, you can download the full dataset from here.

Percent of Children in Poverty
District 2006-2010 2011-2015
Akron City School District, Ohio 35.3 41.0
Brunswick City School District, Ohio 6.8 7.6
Canton City School District, Ohio 44.1 49.6
Centerville City School District, Ohio 10.5 5.4
Cincinnati City School District, Ohio 39.5 43.0
Cleveland Municipal School District, Ohio 45.8 53.3
49 / 53

$$\begin{array}{l} H_0: \text{ Poverty has not worsened } (\mu_d \leq 0) \\ H_1: \text{ Poverty has worsened } (\mu_d > 0) \end{array}$$

Subtracting the 2006-2010 poverty rate from the 2011-2015 poverty rate for each district and then calculating the average difference \((\bar{d})\) yields \(\bar{d} = 4.328571\) and \(s_{d} = 3.876746\)

With \(n=35\) we have a standard error \(s_{\bar{d}} = \dfrac{s_d}{\sqrt{n}} = \dfrac{3.876746}{\sqrt{35}} = 0.6552897\)

The test statistic is \(t = \dfrac{\bar{d}}{s_{\bar{d}}} = \dfrac{4.328571}{0.6552897} = 6.605584\) and has a p-value \(= 0.0000001424\), allowing us to easily reject the null hypothesis

... These data suggest that school district poverty has indeed worsened over the intervening time period. The 95% confidence interval is \((2.9968 \text{ and } 5.6602)\)

50 / 53

Example 4

A large urban school district in a Midwestern state implemented a reading intervention to boost the district's scores on the state's English Language Arts test. The intervention was motivated by poor performance of the district's \(4^{th}\) grade cohort. Three years had passed before that cohort was tested in the \(8^{th}\) grade. Did the intervention boost ELA scores, on average?

English Language Arts: Scaled scores, grades Three and Eight
Student ID Grade Scaled Score
AA0000001 3 583
AA0000002 3 583
AA0000003 3 583
AA0000004 3 668
AA0000005 3 627
AA0000006 3 617
51 / 53

$$\begin{array}{l} H_0: \text{ Intervention did not boost ELA scores } (\mu_d \leq 0) \\ H_1: \text{ Intervention did boost ELA scores } (\mu_d > 0) \end{array}$$

We have \(\bar{d} = 14.62594\), \(s_d = 66.27296\), \(df = 12955\) and the standard error is \(0.5822609\)

The test statistic then is \(t = \dfrac{\bar{d}}{s_{\bar{d}}} = \dfrac{14.62594}{0.5822609} = 25.11922\) and has a p-value that is very close to \(0\) and obviously way smaller than \(\alpha = 0.05\) and \(\alpha = 0.01\)

Hence we can reject the null hypothesis; these data suggest that the reading intervention did indeed boost English Language Arts scores on average.

52 / 53

Testing Options and the Protocol

  • Data are coming from a paired design -- Use the two-sample \(t\)-test for paired samples

  • Data are coming from two unpaired groups -- Use the two-sample \(t\)-test with

    • the assumption of equal variances if \(n_1 \geq 30\) and \(n_2 \geq 30\) and \(s_1 \approx s_2\)
    • the assumption of unequal variances if \(n_1 < 30\) or \(n_2 < 30\) and \(s_i \geq 2\left(s_j\right)\)
  • Use Levene's test for homogeneity of variances to check the equal-variance assumption; it is robust even when the assumption of normality is not supported

  • Normality is not as big a deal since these tests are robust to small violations of normality
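As a summary, a minimal sketch (plain Python; the thresholds are the rules of thumb above) of this decision protocol:

```python
def choose_test(paired: bool, n1: int, n2: int, s1: float, s2: float) -> str:
    """Pick a t-test variant using the rules of thumb above."""
    if paired:
        return "paired t-test"
    if (n1 < 30 or n2 < 30) and max(s1, s2) >= 2 * min(s1, s2):
        return "two-sample t-test, unequal variances"
    return "two-sample t-test, equal variances"

# Example 2's cars: small n but similar spreads, so equal variances
print(choose_test(paired=False, n1=13, n2=19, s1=6.1665, s2=3.8339))
```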

53 / 53
