Random samples of contestants who have appeared on two popular game shows (Wheel of Fortune and Jeopardy) are selected. The Wheel of Fortune contestants in the sample won an average of $5700 more than the Jeopardy contestants in the sample. A 95% confidence interval for the difference in the mean amount of money won by contestants on the two game shows is calculated to be (−1900, 13300). We would like to conduct a hypothesis test to determine whether the mean amounts won on the two game shows differ. Assume the appropriate normality assumptions are satisfied. We would:
(A) reject H0 at the 5% level of significance since the value 0 is contained in the 95% confidence interval.
(B) fail to reject H0 at the 10% level of significance since the value 0 is contained in the 95% confidence interval.
(C) fail to reject H0 at the 5% level of significance since the value 5700 is contained in the 95% confidence interval.
(D) reject H0 at the 10% level of significance since the value 5700 is contained in the 95% confidence interval.
(E) fail to reject H0 at the 5% level of significance since the value 0 is contained in the 95% confidence interval.
I know the answer is (E), but I want to know why.
To test whether the means differ, we set up the hypotheses:

\(H_0: \mu_{\text{Wheel of Fortune}} - \mu_{\text{Jeopardy}} = 0\)
\(H_a: \mu_{\text{Wheel of Fortune}} - \mu_{\text{Jeopardy}} \ne 0\)

The null hypothesis says the two means do not differ, i.e., the true difference is 0. The 95% confidence interval (−1900, 13300) contains 0, so 0 is a plausible value for the difference. The data are therefore consistent with the null hypothesis, and we fail to reject it. (Failing to reject is not the same as proving the means are equal; it just means the data give no convincing evidence of a difference.)

The 5% significance level comes from the duality between confidence intervals and two-sided tests: a 95% confidence interval corresponds to \(\alpha = 1 - 0.95 = 0.05\), so any value inside the interval would not be rejected at the 5% level. That is exactly what choice (E) says.
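As a rough sanity check, you can back out an approximate test statistic and p-value from the interval itself. The sketch below assumes the interval was built as estimate ± z* × SE with z* ≈ 1.96; the actual problem would use a t critical value with degrees of freedom we are not given, so the numbers are illustrative, not exact.

```python
from scipy import stats

# 95% CI for the difference in means (Wheel of Fortune - Jeopardy)
lower, upper = -1900, 13300

# Point estimate is the midpoint; margin of error is half the width
estimate = (lower + upper) / 2        # 5700
margin = (upper - lower) / 2          # 7600

# Assume a z-based interval: margin = z* * SE with z* ~ 1.96 (illustrative)
z_star = stats.norm.ppf(0.975)
se = margin / z_star                  # approximate standard error

# Test H0: difference = 0 against a two-sided alternative
z_stat = (estimate - 0) / se
p_value = 2 * stats.norm.sf(abs(z_stat))

print(f"estimate = {estimate}, SE ~ {se:.1f}")
print(f"z ~ {z_stat:.2f}, two-sided p-value ~ {p_value:.3f}")
# z ~ 1.47, p-value ~ 0.14 > 0.05: fail to reject H0 at the 5% level,
# matching the fact that 0 lies inside the 95% interval.
```

Note how the two views agree: the interval contains 0 exactly when the two-sided p-value exceeds 0.05.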