
eMarketer Gave Me A Shout Out! Guess My Science Degrees Have Come In Handy.

October 21, 2009

I read an article from eMarketer regarding smartphone usage. They seemed to be drawing conclusions that were unsupported by their raw data. From my science days, it was beaten into me that two numbers are not different unless they are statistically significantly different from each other, and that there are varying degrees of statistical significance. Here is a quick cut and paste from Wikipedia:

In statistics, a result is called statistically significant if it is unlikely to have occurred by chance. The phrase test of significance was coined by Ronald Fisher.

The use of the word significance in statistics is different from the standard one, which suggests that something is important or meaningful. For example, a study that included tens of thousands of participants might be able to say with very great confidence that people of one race are more intelligent than people of another race by 1/20th of an IQ point. This result would be statistically significant, but the difference is small enough to be utterly unimportant. Many researchers urge that tests of significance should always be accompanied by effect size statistics, which approximate the size and thus the practical importance of the difference.

The amount of evidence required to accept that an event is unlikely to have arisen by chance is known as the significance level or critical p-value: in traditional Fisherian statistical hypothesis testing, the p-value is the probability, conditional on the null hypothesis, of the observed data or more extreme data. If the obtained p-value is small then it can be said either that the null hypothesis is false or that an unusual event has occurred. It is worth stressing that p-values do not have any repeat sampling interpretation.

An alternative statistical hypothesis testing framework is the Neyman-Pearson frequentist school, which requires that both a null and an alternative hypothesis be defined, and which investigates the repeat sampling properties of the procedure: the probability that a decision to reject the null hypothesis will be made when it is in fact true and should not have been rejected (a "false positive" or Type I error), and the probability that a decision will be made to accept the null hypothesis when it is in fact false (a "false negative" or Type II error).

More typically, the significance level of a test is such that the probability of mistakenly rejecting the null hypothesis is no more than the stated probability. This allows the test to be performed using non-significant statistics, which has the advantage of reducing the computational burden while wasting some information.

It is worth stressing that Fisherian p-values are not Neyman-Pearson Type I errors. This confusion is unfortunately propagated by many statistics textbooks.
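The IQ-point example in that quote is the part that stuck with me: with a big enough sample, a tiny difference can be "statistically significant" while the effect size stays trivial. Here is a rough sketch of that idea in Python (my own toy numbers, nothing to do with the eMarketer data): a two-sample z test on hypothetical group means 0.5 IQ points apart, with SD 15 and 50,000 people per group.

```python
from math import erf, sqrt

def ztest_and_effect_size(mean_a, mean_b, sd, n_per_group):
    """Two-sample z test for a difference in means, plus Cohen's d effect size."""
    diff = mean_a - mean_b
    se = sd * sqrt(2 / n_per_group)                        # standard error of the difference
    z = diff / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value from the normal CDF
    cohens_d = diff / sd                                   # effect size does not depend on n
    return z, p_value, cohens_d

# Hypothetical numbers: two groups 0.5 IQ points apart, SD 15, 50,000 people per group.
z, p, d = ztest_and_effect_size(100.5, 100.0, 15, 50_000)
print(f"z = {z:.1f}, p = {p:.1e}, Cohen's d = {d:.3f}")
# The p-value is tiny (statistically significant), yet d is about 0.03 -- a negligible effect.
```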

So when I saw the data, I asked them about this on Twitter.
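For the curious, this is the sort of back-of-the-envelope check I had in mind (the survey counts below are made up for illustration, not eMarketer's actual numbers): a two-proportion z test, done with nothing but the Python standard library.

```python
from math import erf, sqrt

def two_proportion_ztest(hits_a, n_a, hits_b, n_b):
    """z statistic and two-sided p-value for H0: the two proportions are equal."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)                # pooled proportion under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error under the null
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail
    return z, p_value

# Made-up example: 31% vs. 28% smartphone usage in two samples of 1,000 respondents each.
z, p = two_proportion_ztest(310, 1000, 280, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
# Here p is about 0.14, so you cannot call the two percentages different at the usual 0.05 level.
```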

And interestingly, they actually used my tweet in one of their articles. Cool!
