The Logic of Hypothesis Testing
One-tailed versus Two-tailed hypotheses
One-Sample t-tests
Two-group t-tests
Paired t-tests
Assumptions of t-tests
Hypothesis testing is an inferential procedure that uses sample data to evaluate the credibility of a hypothesis about a population parameter. The process involves:
Stating a hypothesis: an assumption that can neither be fully proven nor fully disproven. For example:
Not more than 5% of GM trucks break down in under 10,000 miles
Heights of North American adult males are distributed with μ=72 inches
Mean year-round temperature in Athens (OH) is >62
10% of Ohio teachers are Accomplished
Mean county unemployment rate is 12.1%
Drawing a sample to test the hypothesis
Conducting the test itself to see if the hypothesis should be rejected
The Null and the Alternative Hypotheses
Null Hypothesis (H0): the assumption believed to be true
Alternative Hypothesis (H1): the statement believed to be true if H0 is rejected
H0: μ>72 inches; H1: μ≤72
H0: μ<72 inches; H1: μ≥72
H0: μ≤72 inches; H1: μ>72
H0: μ≥72 inches; H1: μ<72
H0: μ=72 inches; H1: μ≠72
H0 and H1 are mutually exclusive and mutually exhaustive
Mutually Exclusive: Either H0 or H1 is true; both cannot be true at the same time
Mutually Exhaustive: H0 and H1 exhaust the sample space; there are no other possibilities unknown to us
Type I and Type II Errors
Decision based on Sample | Null is true | Null is false |
---|---|---|
Reject the Null | Type I error | No error |
Do not reject the Null | No error | Type II error |
Type I Error: Rejecting the Null hypothesis H0 when H0 is true
i.e., we should not have rejected the Null hypothesis
Decision | H0 is True | H0 is False |
---|---|---|
Reject H0 | Type I error (α) | No error (1−β) |
Do not reject H0 | No error (1−α) | Type II error (β) |
Type II Error: Failing to reject the Null hypothesis H0 when H0 is false
i.e., we should have rejected the Null hypothesis
The probability of committing a Type I error = Level of Significance =α
We have to decide how often we want to make a Type I error (i.e., falsely Reject H0).
Conventionally we set this rate to one of the following α values: α=0.05 or α=0.01
Note the very cautious language ... Reject H0 versus Do Not Reject H0
Assume we want to know whether the roundabout on SR682 has had an impact on traffic accidents in Athens. We have historical data on the number of accidents in years past. Say the average per day used to be 6 (i.e., μ0=6).
To see if the roundabout has had an impact we could gather accident data for a random sample of 100 days (n=100) from the period after the roundabout was built.
Before we do that though, we will need to specify our hypotheses. What do we think might be the impact?
Let us say the City Engineer argues that the roundabout should have decreased accidents.
If he is correct then the sample mean x̄ should be less than the pre-roundabout population mean μ0, i.e., x̄ < μ0
If he is wrong then the sample mean x̄ should be at least as much as the pre-roundabout population mean μ0, i.e., x̄ ≥ μ0
We know from the theory of sampling distributions that the distribution of sample means, for all samples of size n, will be normally distributed (as shown below)
Most samples would be in the middle of the distribution but by sheer chance we could end up with a sample mean in the tails. This will happen with a very small probability but it could happen!!
If we believe the City Engineer, we would set up the hypotheses as follows: H0: μ ≥ 6; H1: μ < 6
If the area in the lower tail beyond our calculated t is very small then we can conclude that the roundabout must have worked to reduce accidents
How should we define very small? By setting α either to 0.05 or to 0.01
We Reject H0 if P(calculated t) ≤ α; the data provide sufficient evidence to conclude that the roundabout has reduced accidents
If P(calculated t) > α then we Fail to reject H0; the data provide insufficient evidence to conclude that the roundabout has reduced accidents
Reject H0 if the calculated t falls in the green region (i.e., calculated t ≤ −1.6603)
Do Not Reject H0 if the calculated t falls in the grey region (i.e., calculated t > −1.6603)
If, on the other hand, we merely suspect the roundabout has had some impact, without committing to a direction, we would set up the hypotheses as follows: H0: μ = 6; H1: μ ≠ 6
If the combined area in the two tails beyond our calculated t is very small then we can conclude that the roundabout must have had an impact on accidents
How should we define very small? By setting α either to 0.05 or to 0.01
We can then Reject H0 if P(±calculated t) ≤ α; the data provide sufficient evidence to conclude that the roundabout has had an impact on accidents
If P(±calculated t) > α then we will Fail to reject H0; the data provide insufficient evidence to conclude that the roundabout has had an impact on accidents
Reject H0 if the calculated t falls in the green region (i.e., calculated t ≤ −1.98 or calculated t ≥ 1.98)
Do Not Reject H0 if the calculated t falls in the grey region (i.e., −1.98 < calculated t < 1.98)
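Where do −1.6603 and ±1.98 come from? They are quantiles of the t distribution with df = n − 1 = 99. A minimal check, sketched in Python with scipy (my choice of tool, not one the slides prescribe):

```python
from scipy import stats

df = 99  # degrees of freedom: n - 1 = 100 - 1 for the sample of 100 days

# One-tailed test at alpha = 0.05: all 5% of the rejection region in the lower tail
print(stats.t.ppf(0.05, df))   # about -1.6604

# Two-tailed test at alpha = 0.05: 2.5% in each tail
print(stats.t.ppf(0.975, df))  # about 1.9842, i.e., reject when |t| >= 1.98
```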
State the hypotheses
If the research question asks whether something has ... | then H0 must specify ... | Hypotheses | Test |
---|---|---|---|
changed | that nothing has changed | H0: μ=μ0; H1: μ≠μ0 | two-tailed |
is different | that nothing is different | H0: μ=μ0; H1: μ≠μ0 | two-tailed |
had an impact | that it had no impact | H0: μ=μ0; H1: μ≠μ0 | two-tailed |
increased | that it has not increased | H0: μ≤μ0; H1: μ>μ0 | one-tailed |
decreased | that it has not decreased | H0: μ≥μ0; H1: μ<μ0 | one-tailed |
Collect the sample and set α=0.05 or α=0.01
Last year, the motor pool of Normal (IL) maintained the city's fleet of vehicles at an average cost of $346 per car. This year Jack's Crash Shop is doing the maintenance. The City notices that in a random sample of 36 cars fixed by Jack, the mean repair cost is $330 with a standard deviation of $120. Is Jack saving the City money?
H0: μ ≥ 346 and H1: μ < 346. Let us choose α = 0.05. Note df = n − 1 = 36 − 1 = 35
$s_{\bar{x}} = \dfrac{s}{\sqrt{n}} = \dfrac{120}{\sqrt{36}} = 20$, and hence $t = \dfrac{\bar{x} - \mu_0}{s_{\bar{x}}} = \dfrac{330 - 346}{20} = \dfrac{-16}{20} = -0.80$
Fail to reject H0; the data provide insufficient evidence to conclude that Jack is saving the City money
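As a check on the hand calculation, here is a short sketch in Python (scipy assumed) that reproduces t and the one-tailed p-value from the summary statistics:

```python
from math import sqrt
from scipy import stats

n, xbar, s, mu0 = 36, 330.0, 120.0, 346.0  # summary statistics from the example

se = s / sqrt(n)               # standard error = 120/6 = 20
t = (xbar - mu0) / se          # t = -0.80
p = stats.t.cdf(t, df=n - 1)   # lower-tail p-value for H1: mu < 346

print(t, p)  # -0.8, ~0.215; p > 0.05, so we fail to reject H0
```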
Kramer's (TX) Police Chief learns that her staff clear 46.2% of all burglaries in the city. She wants to benchmark this performance, so she samples 10 other similar cities in Texas. She finds their numbers to be as follows:
Clearance rate (%) | Clearance rate (%) |
---|---|
44.2 | 32.1 |
40.3 | 32.9 |
36.4 | 29.0 |
49.4 | 46.4 |
51.7 | 41.0 |
Is Kramer's clearance rate significantly different from those of other similar Texas cities?
H0: μ = 46.2 and H1: μ ≠ 46.2. Set α = 0.05
Note $df = n - 1 = 10 - 1 = 9$, $\bar{x} = 40.34$, and $s_{\bar{x}} = 2.4279$
$t = \dfrac{\bar{x} - \mu_0}{s_{\bar{x}}} = \dfrac{40.34 - 46.2}{2.4279} = -2.414$
Reject H0; the data suggest that Kramer's clearance rate differs from those of other similar Texas cities.
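With the raw rates in hand, the whole test is one call in Python's scipy (assumed here as the tool):

```python
from scipy import stats

# Clearance rates (%) for the 10 comparison cities, from the table above
rates = [44.2, 40.3, 36.4, 49.4, 51.7, 32.1, 32.9, 29.0, 46.4, 41.0]

t, p = stats.ttest_1samp(rates, popmean=46.2)  # two-tailed by default
print(t, p)  # t ~ -2.414, p ~ 0.039 < 0.05, so we reject H0
```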
The Director of Philanthropy at the Fleckman Institute of the Arts is curious to assess the impact of this year's changes in federal tax laws on donations. Last year the average donation was $580. A random sample of 50 donors yields an average donation of $623.64 with a standard deviation of $84.27. Did the change in federal tax laws have an impact on donations? … solve this on your own
Springdale University is concerned that student volunteer activity has decreased. Last year their students volunteered an average of 7.3 hours of community service per month. This year, a random sample of 75 student volunteers reveals an average of 7.07 hours per month with a standard deviation of 1.29 hours. Should the University be concerned? … solve this on your own
Calculate the 95% confidence interval for a sample mean x̄
Note that in this confidence interval, α=0.05; α/2=0.025
Use the Test Statistic with α=0.05 to make a decision
Note the similarity?
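The similarity is no accident: a two-tailed test at α = 0.05 rejects H0 exactly when μ0 falls outside the 95% confidence interval. A quick sketch, reusing the clearance-rate numbers from above:

```python
from scipy import stats

xbar, se, df, mu0 = 40.34, 2.4279, 9, 46.2  # Kramer example, from above

t_crit = stats.t.ppf(0.975, df)                    # about 2.262
ci = (xbar - t_crit * se, xbar + t_crit * se)
print(ci)  # about (34.85, 45.83): mu0 = 46.2 lies outside, so the test rejects H0
```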
Common Parent Population
Separate Parent Populations
We often need to compare sample means across two groups. For example, are average earnings the same for men and women in a specific occupation? Perhaps we suspect (a) that women are underpaid, or, more generally, (b) that their salaries differ from those of men.
Let the population and sample means be μm, μw and x̄m, x̄w, respectively
(a) H0:μm≤μw and H1:μm>μw, ∴H0:μm−μw≤0 and H1:μm−μw>0
(b) H0:μm=μw and H1:μm≠μw, ∴H0:μm−μw=0 and H1:μm−μw≠0
Standard Error of the difference in means: $s_{\bar{x}_m - \bar{x}_w} = \sqrt{\dfrac{s^2_m}{n_m} + \dfrac{s^2_w}{n_w}}$
Confidence Interval estimate: $(\bar{x}_m - \bar{x}_w) \pm t_{\alpha/2}\,(s_{\bar{x}_m - \bar{x}_w})$
The Test Statistic: $t = \dfrac{(\bar{x}_m - \bar{x}_w) - (\mu_m - \mu_w)}{\sqrt{\dfrac{s^2_m}{n_m} + \dfrac{s^2_w}{n_w}}} = \dfrac{(\bar{x}_m - \bar{x}_w) - D_0}{\sqrt{\dfrac{s^2_m}{n_m} + \dfrac{s^2_w}{n_w}}}$
The degrees of freedom for this test: $df = \dfrac{\left(\dfrac{s^2_m}{n_m} + \dfrac{s^2_w}{n_w}\right)^2}{\dfrac{1}{n_m-1}\left(\dfrac{s^2_m}{n_m}\right)^2 + \dfrac{1}{n_w-1}\left(\dfrac{s^2_w}{n_w}\right)^2}$
Note: We usually round down the df to the nearest integer
We have two ways of calculating the estimated standard error $s_{\bar{x}_1 - \bar{x}_2}$ and the degrees of freedom df (a code sketch follows the rules of thumb below)
(1) When the population variances are assumed unequal
(2) When the population variances are assumed equal
Standard Error will be: $s_{\bar{x}_1 - \bar{x}_2} = \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$ (the sample variances estimate the unequal population variances)
Degrees of Freedom will be: $df = \dfrac{\left(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\right)^2}{\dfrac{1}{n_1-1}\left(\dfrac{s^2_1}{n_1}\right)^2 + \dfrac{1}{n_2-1}\left(\dfrac{s^2_2}{n_2}\right)^2}$
Rule of thumb ...
Use this when n1 or n2 is < 30, and
Either sample has a standard deviation at least twice that of the other sample
Standard Error will be: $s_{\bar{x}_1 - \bar{x}_2} = \sqrt{\dfrac{n_1 + n_2}{n_1 n_2}}\sqrt{\dfrac{(n_1-1)s^2_{x_1} + (n_2-1)s^2_{x_2}}{(n_1 + n_2) - 2}}$
Degrees of Freedom will be: $df = (n_1 + n_2) - 2$
Rule of thumb ...
Use this when the standard deviations are roughly equal, and
n1 and n2 ≥30
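Most statistical software implements both versions. In Python's scipy, for instance, the choice is a single flag; a minimal sketch with made-up salary data (all numbers below are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
men = rng.normal(52000, 8000, 40)    # hypothetical salaries, group 1
women = rng.normal(48000, 7500, 35)  # hypothetical salaries, group 2

# Pooled (equal-variance) test: df = n1 + n2 - 2
print(stats.ttest_ind(men, women, equal_var=True))

# Welch (unequal-variance) test: Satterthwaite df as in the formula above
print(stats.ttest_ind(men, women, equal_var=False))
```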
Assumptions:
(1) Random samples
(2) Variables are drawn from normally distributed Populations
Rules-of-thumb:
Draw larger samples if you suspect the Population(s) may be skewed
Go with the assumption of equal variances if both of the following are met:
(a) Assumption theoretically justified, standard deviations fairly close, &
(b) n1≥30 and n2≥30
Go with the assumption of unequal variances if both of the following are met:
(a) One standard deviation is at least twice the other standard deviation, &
(b) n1<30 or n2<30
Of course, some statistical software packages (SPSS, for instance) will run the test under both assumptions so you can choose on the basis of the results (a bad idea in some eyes)
Levene's test for the homogeneity of variances tests $H_0: \sigma^2_1 = \sigma^2_2 = \sigma^2_3 = \cdots = \sigma^2_k$ against $H_A$: for at least one pair $(i,j)$ we have $\sigma^2_i \neq \sigma^2_j$
Test Statistic: $W = \dfrac{(N-k)}{(k-1)} \cdot \dfrac{\sum_{i=1}^{k} n_i(\bar{Z}_i - \bar{Z})^2}{\sum_{i=1}^{k}\sum_{j=1}^{n_i}(Z_{ij} - \bar{Z}_i)^2}$
$Z_{ij} = |Y_{ij} - \bar{Y}_i|$; $\bar{Z}_i$ is the mean of the $Z_{ij}$ in the ith group; $\bar{Z}$ is the mean of all the $Z_{ij}$ in the study; $k$ is the number of groups in the study; and $n_i$ is the sample size for group $i$
If you opt for the more robust version that uses the median, then $Z_{ij} = |Y_{ij} - \tilde{Y}_i|$, where $\tilde{Y}_i$ is the median of the ith group
$W \sim F_{\alpha,\, k-1,\, N-k}$
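scipy exposes this test directly, including the robust median-centered version; a minimal sketch with three made-up groups:

```python
from scipy import stats

# Hypothetical measurements for k = 3 groups
y1 = [12.1, 14.3, 11.8, 13.5, 12.9]
y2 = [15.2, 16.8, 14.9, 15.5, 17.1]
y3 = [11.0, 12.4, 10.8, 11.9, 12.2]

W, p = stats.levene(y1, y2, y3, center='median')  # robust (median-based) version
print(W, p)  # reject H0 of equal variances when p <= alpha
```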
The Athens County Public Library is trying to keep its bookmobile alive since it reaches readers who otherwise may not use the library. One of the library employees decides to conduct an experiment, running advertisements in 50 areas served by the bookmobile and not running advertisements in 50 other areas also served by the bookmobile. After one month, circulation counts of books are calculated and mean circulation counts are found to be 526 books for the advertisement group with a standard deviation of 125 books and 475 books for the non-advertisement group with a standard deviation of 115 books. Is there a statistically significant difference in mean book circulation between the two groups?
Since we are being asked to test for a "difference" it is a two-tailed test, with hypotheses given by:
H0: There is no difference in average circulation counts (μ1 = μ2)
H1: There is a difference in average circulation counts (μ1 ≠ μ2)
Since both groups have sample sizes that exceed 30 we can proceed with the assumption of equal variances and calculate the standard error and the degrees of freedom. The degrees of freedom are easy: df = n1 + n2 − 2 = 50 + 50 − 2 = 98. The standard error is $s_{\bar{x}_1 - \bar{x}_2} = \sqrt{\dfrac{n_1 + n_2}{n_1 n_2}}\sqrt{\dfrac{(n_1-1)s^2_{x_1} + (n_2-1)s^2_{x_2}}{(n_1 + n_2) - 2}}$ and plugging in the values we have
$s_{\bar{x}_1 - \bar{x}_2} = \sqrt{\dfrac{50+50}{2500}}\sqrt{\dfrac{(50-1)(125^2) + (50-1)(115^2)}{(50+50)-2}} = (0.2)(120.1041) = 24.02082$
The test statistic is
$t = \dfrac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{s_{\bar{x}_1 - \bar{x}_2}} = \dfrac{(526 - 475) - 0}{24.02082} = \dfrac{51}{24.02082} = 2.123158$
Since no α is given let us use the conventional starting point of α=0.05
With df=98 and α=0.05, two-tailed, the critical t value would be ±1.98446745
Since our calculated t=2.1231 exceeds the critical t=1.9844, we can easily reject the null hypothesis of no difference
Conclusion: These data suggest there is a difference in average circulation counts between the advertisement and no advertisement groups
We could have also used the p-value approach, rejecting the null hypothesis of no difference if the p-value were ≤ α
The p-value of our calculated t turns out to be 0.0363 and so we can reject the null hypothesis
Note, in passing, that had we used α=0.01 we would have been unable to reject the null hypothesis because 0.0363 is > 0.01
The 95% confidence interval is given by
$(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2}(s_{\bar{x}_1 - \bar{x}_2}) = 51 \pm 1.9844(24.02082) = 51 \pm 47.66692 = (3.33308,\ 98.66692)$
We can be about 95% confident that the true difference between the groups lies in this interval
Had we used the 99% interval for a test with α=0.01 the interval would be
$51 \pm 2.627(24.02082) = (-12.10269,\ 114.1027)$
subsuming the null hypothesis difference of 0 and leaving us unable to reject the null hypothesis.
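Because only summary statistics are given, scipy's ttest_ind_from_stats is a convenient check (assuming, as above, equal variances):

```python
from scipy import stats

t, p = stats.ttest_ind_from_stats(
    mean1=526, std1=125, nobs1=50,   # advertisement group
    mean2=475, std2=115, nobs2=50,   # no-advertisement group
    equal_var=True)                  # pooled SE, df = 98

print(t, p)  # t ~ 2.123, p ~ 0.036 < 0.05, so we reject H0
```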
Say we have a large data-set with a variety of information about several cars, gathered in 1974. One of the questions we have been tasked with testing is whether the miles per gallon yield of manual transmission cars in 1974 was greater than that of automatic transmission cars. Assume they want us to use α=0.05.
We have 13 manual transmission cars and 19 automatic transmission cars; the means and standard deviations are 24.3923 and 6.1665 for manual cars, and 17.1473 and 3.8339 for automatic cars, respectively. The hypotheses are:
H0: Mean mpg of manual cars is at most the mean mpg of automatic cars $(\mu_{manual} \leq \mu_{automatic})$
H1: Mean mpg of manual cars is greater than the mean mpg of automatic cars $(\mu_{manual} > \mu_{automatic})$
The calculated t is 4.1061 and the p-value is 0.0001425, allowing us to reject the null hypothesis
Conclusion: These data suggest that the average mpg of manual cars is greater than that of automatic cars
Note a couple of things here:
(i) We have a one-tailed hypothesis test, and
(ii) we are assuming equal variances, since the two conditions for assuming unequal variances are not both met
In addition, note that the confidence interval is found to be (3.6415,10.8483), indicating that we can be 95% confident that the average manual mpg is higher than average automatic mpg by anywhere between 3.64 mpg and 10.84 mpg
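The same summary-statistics route verifies these numbers; since the test is one-tailed and t is positive (in the direction of H1), we halve the two-sided p-value:

```python
from scipy import stats

t, p_two_sided = stats.ttest_ind_from_stats(
    mean1=24.3923, std1=6.1665, nobs1=13,   # manual transmission
    mean2=17.1473, std2=3.8339, nobs2=19,   # automatic transmission
    equal_var=True)                         # df = 13 + 19 - 2 = 30

print(t, p_two_sided / 2)  # t ~ 4.106, one-tailed p ~ 0.00014
```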
Sometimes you may have two sets of measures on the same units, for example, before and after some intervention. Now we work with d, the difference between the paired measures, where:
$\bar{d} = \dfrac{\sum d_i}{n}$
$s_d = \sqrt{\dfrac{\sum (d_i - \bar{d})^2}{n-1}}$ and $s_{\bar{d}} = \dfrac{s_d}{\sqrt{n}}$
Test Statistic: $t = \dfrac{\bar{d} - \mu_d}{s_{\bar{d}}}$
With a normally distributed population, df = n − 1
Interval estimate: $\bar{d} \pm t_{\alpha/2}(s_{\bar{d}})$
Let us see how the test is carried out with reference to a small data-set wherein we have six pre-school children's scores on a vocabulary test before a reading program is introduced into the pre-school (x1) and then again after the reading program has been in place for a few months (x2).
Child ID | Pre-intervention score | Post-intervention score | Difference = Pre - Post |
---|---|---|---|
1 | 6.0 | 5.4 | 0.6 |
2 | 5.0 | 5.2 | -0.2 |
3 | 7.0 | 6.5 | 0.5 |
4 | 6.2 | 5.9 | 0.3 |
5 | 6.0 | 6.0 | 0.0 |
6 | 6.4 | 5.8 | 0.6 |
Note that the Difference column has the difference of the scores, di, such that for Child 1, 6.0 − 5.4 = 0.6, for Child 2, 5.0 − 5.2 = −0.2, and so on.
Note also that the mean, variance and standard deviation of d are calculated as follows:
$d_i = x_1 - x_2$; $\bar{d} = \dfrac{\sum d_i}{n}$; $s^2_d = \dfrac{\sum (d_i - \bar{d})^2}{n-1}$; $s_d = \sqrt{\dfrac{\sum (d_i - \bar{d})^2}{n-1}}$
Say we have no idea what to expect from the program. In that case, our hypotheses would be:
H0: μd = 0
H1: μd ≠ 0
The test statistic is given by $t = \dfrac{\bar{d} - \mu_d}{s_d/\sqrt{n}}$ with $df = n - 1$, and the interval estimate is calculated as $\bar{d} \pm t_{\alpha/2}\left(\dfrac{s_d}{\sqrt{n}}\right)$.
Once we have specified our hypotheses, selected α, and calculated the test statistic, the usual decision rules apply: ...
In this particular example, it turns out that $\bar{d} = 0.30$; $s_d = 0.335$; $t = 2.1958$, $df = 5$, p-value = 0.07952, and the 95% CI: $0.30 \pm 0.3512 = (-0.0512,\ 0.6512)$.
Given the large p−value we fail to reject H0 and conclude that these data do not suggest a statistically significant impact of the reading program.
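These numbers fall straight out of scipy's paired-samples test; a minimal sketch with the six children's scores from the table:

```python
from scipy import stats

pre  = [6.0, 5.0, 7.0, 6.2, 6.0, 6.4]   # pre-intervention scores
post = [5.4, 5.2, 6.5, 5.9, 6.0, 5.8]   # post-intervention scores

t, p = stats.ttest_rel(pre, post)  # paired t-test on the differences
print(t, p)  # t ~ 2.196, p ~ 0.0795 > 0.05, so we fail to reject H0
```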
Over the last decade, has poverty worsened in Ohio's public school districts? One way to test for worsening poverty would be to compare the percent of children living below the poverty line in each school district across two time points. For the sake of convenience I will use two American Community Survey (ACS) data sets that measure Children Characteristics (Table S0901), one from the 2011-2015 ACS and the other from the 2006-2010 ACS. While a small snippet of the data is shown below for the 35 school districts with data for both years, you can download the full dataset from here.
District | 2006-2010 | 2011-2015 |
---|---|---|
Akron City School District, Ohio | 35.3 | 41.0 |
Brunswick City School District, Ohio | 6.8 | 7.6 |
Canton City School District, Ohio | 44.1 | 49.6 |
Centerville City School District, Ohio | 10.5 | 5.4 |
Cincinnati City School District, Ohio | 39.5 | 43.0 |
Cleveland Municipal School District, Ohio | 45.8 | 53.3 |
H0: Poverty has not worsened (μd ≤ 0)
H1: Poverty has worsened (μd > 0)
Subtracting the 2006-2010 poverty rate from the 2011-2015 poverty rate for each district and then calculating the average difference yields $\bar{d} = 4.328571$ and $s_d = 3.876746$
With n = 35 we have a standard error $s_{\bar{d}} = \dfrac{s_d}{\sqrt{n}} = \dfrac{3.876746}{\sqrt{35}} = 0.6552897$
The test statistic is $t = \dfrac{\bar{d}}{s_{\bar{d}}} = \dfrac{4.328571}{0.6552897} = 6.605584$ and has a p-value = 0.0000001424, allowing us to easily reject the null hypothesis
These data suggest that school district poverty has indeed worsened over the intervening time period. The 95% confidence interval is (2.9968, 5.6602)
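Without the full data file we can still reproduce the test from the summary statistics; a sketch using the upper-tail probability for the one-tailed p-value:

```python
from math import sqrt
from scipy import stats

n, dbar, sd = 35, 4.328571, 3.876746  # summary statistics from above

se = sd / sqrt(n)                # about 0.65529
t = dbar / se                    # about 6.6056
p = stats.t.sf(t, df=n - 1)      # upper-tail p-value for H1: mu_d > 0

print(t, p)  # t ~ 6.61, p ~ 1.4e-07, so we reject H0
```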
A large urban school district in a Midwestern state implemented a reading intervention to boost the district's scores on the state's English Language Arts test. The intervention was motivated by poor performance of the district's 4th grade cohort. Three years had passed before that cohort was tested in the 8th grade. Did the intervention boost ELA scores, on average?
Student ID | Grade | Scaled Score |
---|---|---|
AA0000001 | 3 | 583 |
AA0000002 | 3 | 583 |
AA0000003 | 3 | 583 |
AA0000004 | 3 | 668 |
AA0000005 | 3 | 627 |
AA0000006 | 3 | 617 |
H0: Intervention did not boost ELA scores (μd ≤ 0)
H1: Intervention did boost ELA scores (μd > 0)
We have $\bar{d} = 14.62594$, $s_d = 66.27296$, df = 12955, and the standard error is 0.5822609
The test statistic then is $t = \dfrac{\bar{d}}{s_{\bar{d}}} = \dfrac{14.62594}{0.5822609} = 25.11922$ and has a p-value that is very close to 0, clearly far smaller than either α = 0.05 or α = 0.01
Hence we can reject the null hypothesis; these data suggest that the reading intervention did indeed boost English Language Arts scores on average.
Data come from a paired design: use the t-test for paired samples
Data come from two unpaired groups: use the two-sample t-test, with equal or unequal variances assumed as appropriate
Use Levene's test for homogeneity of variances if the assumption of normality is not supported
Normality is not as big a deal since these tests are robust to small violations of normality