The New York Times has a policy on anonymous sources. Great! But do they have a policy on statistics? They certainly need one. Just take a look at the graphic from an article on women and smartphones:
Sure, the numbers on the right-hand side could be used to support the caption “women are more concerned with price, size and design” (and keyboards, by the same reasoning), but the hell if I’m going to believe that without a reasonable estimate of the margin of error in the survey being reported on. Without that, the graphic has absolutely no content (and if the margin of error is plus or minus three points, you’re still damn safe ignoring the results). How can a major media outlet, one that is constantly bashed as intellectually elitist, get away with this kind of garbage? (Oh, right: they’re entertainment, and numbers are only entertaining if you don’t have to actually think about what they mean.)
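For reference, the back-of-the-envelope margin of error for a survey proportion is so easy to compute that omitting it is inexcusable. Here’s a minimal Python sketch; the 26% figure and the sample sizes are purely hypothetical, since the article reports neither:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a ~95% confidence interval for a proportion.

    p: observed proportion (e.g., 0.26 for 26%)
    n: sample size
    z: z-score for the confidence level (1.96 for ~95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample sizes -- the article reports none.
for n in (100, 400, 1000):
    moe = margin_of_error(0.26, n)
    print(f"n = {n:>4}: 26% +/- {100 * moe:.1f} points")
```

Even at n = 1000 the uncertainty is close to three points, which is exactly the regime where small gender differences in a graphic like this stop meaning much.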
Yep. There’s not much in the way of serious mainstream media anymore. It’s not that there’s a ‘liberal bias’; it’s that there’s a bias toward making money, and that means entertaining instead of enlightening.
The sum of the numbers in the graphic is 78% for the women and 76% for the men. Which suggests that (a) they probably asked “which factor is most important,” ignoring that most respondents consider more than one factor important, and (b) for both genders, more respondents chose “none of the above” than chose any of the listed options. So in addition to your concerns about the margin of error, there is the problem that the survey was badly designed to begin with. I’ve seen more informative “statistical” graphics than this one on the pages of The Onion.
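To spell out the arithmetic (only the totals come from the graphic; the grouping is mine):

```python
# Totals of the listed options, as read off the graphic.
listed_total = {"women": 0.78, "men": 0.76}

for group, total in listed_total.items():
    residual = 1.0 - total  # respondents who picked none of the listed options
    print(f"{group}: listed options sum to {total:.0%}, "
          f"so {residual:.0%} picked none of them")
```

If no single listed option reaches 22% (for women) or 24% (for men), then “none of the above” beats every option the graphic bothers to show.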
I am constantly running into this problem trying to read the news… what is the methodology? I recently heard an interesting statistic quoted from The Omnivore’s Dilemma: that 33% of Americans eat at least one fast food meal a day. In trying to parse that, I was struck by so many questions that the statistic’s interest quickly faded. What counts as fast food? What is the margin of error? Are weekends counted? Is each participant in the study (of unknown size) tracked and found to eat fast food all seven days of the week, or is it that on any given day, E[number of Americans eating fast food today] / (number of Americans) = 0.33?
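Those two readings really are different claims, even though both yield the same “33% per day” headline. A quick simulation makes the gap concrete (all numbers hypothetical; the book doesn’t spell out the methodology):

```python
import random

N_PEOPLE = 100_000
DAYS = 7
P_DAILY = 0.33

random.seed(0)

# Reading 1: a fixed 33% of people eat fast food every single day.
# The daily fraction and the weekly fraction are the same people.
frac_today_1 = P_DAILY
frac_week_1 = P_DAILY

# Reading 2: each person independently eats fast food on any given
# day with probability 0.33. The daily fraction is still ~33%, but
# almost everyone eats fast food at least once during the week.
ate_this_week = sum(
    1 for _ in range(N_PEOPLE)
    if any(random.random() < P_DAILY for _ in range(DAYS))
)
frac_week_2 = ate_this_week / N_PEOPLE

print(f"Reading 1: {frac_today_1:.0%} today, {frac_week_1:.0%} this week")
print(f"Reading 2: ~{P_DAILY:.0%} today, {frac_week_2:.0%} this week")
```

Under the second reading, roughly 94% of people end up eating fast food at least once a week, which is a much stronger claim than the first reading supports.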
I guess learning some rudimentary statistics means you can never see factoids the same way again. I think that’s a good thing…
In Israel, the law stipulates that the sample size and standard error must accompany every published statistic. I’m surprised you don’t have an equivalent law in the States.
Come on. You’re a quantum scientist. Use your quantum consciousness to quantumly manifest a quantum margin of error. Well, quantumly speaking.
Unfortunately, JohnQ, “quantum” is not entirely analogous to “bullshit” even if *you* can’t tell the difference.
Great print journalism is still around. Unfortunately, it seems as though it is getting harder and harder to find. Thankfully, we now have the internet to compensate.
Sorry, my poor attempt at humor was misunderstood. I was poking fun at how often the word quantum gets picked up by the public (“Quantum Healing,” “Quantum Wellness,” “Quantum Consciousness,” etc.). I in no way meant to equate quantum with bullshit. (I certainly do not think that.) And I certainly didn’t mean to disparage anyone in the field.
Of course women are more concerned about price – they earn less.
I was far more concerned with my phone’s wifi capabilities than the price, Kea.
Yeah, that lot of stats is pretty bunk.
“Improving Data Displays: Ours and the Media’s,” from Howard Wainer’s Visual Revelations column, Chance (Summer 2007).
[Chance is a publication of the American Statistical Association (ASA), a scientific and educational society founded in 1839 with the following mission: to promote excellence in the application of statistical science across the wealth of human endeavor.]
“In the transactions between scientists and the media, influence flows in both directions. About 25 years ago I wrote an oft-cited article with the ironic title: ‘How to Display Data Badly.’ In it I chose a dozen or so examples of flawed displays and suggested paths toward improvement. Two major newspapers, The New York Times and The Washington Post, were the source of most of my examples, which were drawn over a remarkably short period of time. It wasn’t hard to find examples of bad graphs.”
“Happily in the intervening years, the same papers have become increasingly aware of the canons of good practice and improved their data displays profoundly. Indeed, when one considers both the complexity of the data often displayed and the short time intervals permitted for their preparation, the results are frequently remarkable…. Graphical practices in scientific journals have not evolved as fast as the mass media. It is time we learned from their example….”
Damn right. That’s why I don’t believe any numbers unless they have a p-value next to them.