OpenStudy (anonymous):

In layman's terms, what is the significance of a sample variance? It seems exactly the same as the standard deviation, just square-rooted.

OpenStudy (mathmale):

I'd prefer to state, "the std. dev. is the square root of the variance." The standard deviation (note the correct spelling, please) is a measure of how much a given data point varies from the mean of all the data. You might get some clues regarding the meaning/significance of the std. deviation by examining the formula for it: \[S_{x}=\sqrt{\frac{\sum_{i=1}^{n}(x_{i}-\bar{x})^{2}}{n-1}}\]

OpenStudy (mathmale):

\(\bar{x}\) represents the mean of your data. Each \((x_{i}-\bar{x})\) represents how far the ith data point is from the mean of the data set. We square every such distance/difference so that we'll have only positive quantities under the radical sign. Note how closely this resembles the formula for the mean of a set of data: \[\bar{x}=\frac{\text{sum of all the data points}}{\text{number of data points}}=\frac{\sum_{i=1}^{n}x_{i}}{n}\] The particular formula I've typed in here is for the standard deviation of a SAMPLE. There's a slightly different formula for the standard deviation of a POPULATION.
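The formula above translates almost line for line into code. A minimal sketch in Python (the data set here is a made-up example), checked against the standard library's `statistics.stdev`, which also uses the n − 1 sample formula:

```python
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical sample

# sample mean: sum of the data points over the number of points
n = len(data)
xbar = sum(data) / n

# sample std. dev.: square root of the sum of squared
# deviations from the mean, divided by n - 1
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))

# statistics.stdev uses the same n - 1 (sample) formula;
# statistics.pstdev would give the POPULATION version (divide by n)
assert math.isclose(s, statistics.stdev(data))
print(round(s, 4))
```

Squaring each deviation is what makes every term under the radical non-negative, exactly as described above.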

OpenStudy (mathmale):

In practical terms, Cathy, a small standard deviation indicates that most data points are quite close to the mean, or, in other words, they don't vary much from the mean. A large std. dev. means the opposite: the data points vary quite a bit from the mean of the data.

OpenStudy (anonymous):

Thank you mathmale. Maybe I can understand it better with an example. Let's say I have an n of 10. My std. dev is 2.644319 and my sample variance is 6.992424. I know that the std. dev. looks pretty good. But is the sample variance low or high? Is low variance better than high?

OpenStudy (mathmale):

So, Cathy: Why do we square each \[(x_{i}-\bar{x})\]?

OpenStudy (anonymous):

Because it's only supposed to be positive numbers, right?

OpenStudy (mathmale):

I wouldn't personally say the std. dev. "looks pretty good." The std. dev. is a descriptor of your data. I'm fat, you're slim: both are descriptors (but note that I'd much rather be slim like you instead of fat like I already am). If the std. dev. is "small," we conclude that there's not much variation in x.

OpenStudy (mathmale):

If the std. dev. is "large," we conclude that there's a lot of variation about the mean of the data set.

OpenStudy (mathmale):

So, again: std. dev. describes how much variation there is about the mean of the data set in question. If we want uniformity on the production line, we'd hope for small std. deviations in product size.

OpenStudy (anonymous):

Maybe I can understand it better with an example. Let's say I have an n of 10. My std. dev is 2.644319 and my sample variance is 6.992424. I know that the std. dev. represents low variation. But is the sample variance low or high? Is low variance better than high?
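The numbers quoted above actually answer part of the question themselves: the sample variance is just the square of the sample standard deviation, so neither is "better" than the other; they describe the same spread on different scales. A quick check in Python using those figures:

```python
import math

s = 2.644319     # the std. dev. quoted above
var = 6.992424   # the sample variance quoted above

# variance = (std. dev.)^2 -- the two carry the same information
assert math.isclose(s ** 2, var, rel_tol=1e-5)
```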

OpenStudy (anonymous):

If you compare your s.d. with your mean, using s.d./mean, you have a measure of the relative size of your uncertainty. If your mean is 100 lb and your standard deviation is 10 lb, then your uncertainty is about 10%. Note that the units for mean and s.d. are the same. Units for s.d.^2 = variance would be lb^2, not very informative.
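The s.d./mean ratio described above is often called the coefficient of variation, and it is unit-free because the units cancel. A minimal sketch with the hypothetical 100 lb / 10 lb figures:

```python
mean = 100.0   # hypothetical mean, in lb
sd = 10.0      # hypothetical std. dev., in lb

# s.d./mean: a unit-free measure of relative uncertainty
cv = sd / mean
print(f"relative uncertainty: {cv:.0%}")

# the variance has squared units (lb^2 here), which is why
# the s.d. is usually the more interpretable descriptor
variance = sd ** 2
```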
