
Statistical cognition, information literacy

Diqiucun_Cunmin
Posts: 2,710
8/5/2015 4:19:56 AM
If you're a long-time follower of Language Log like me, you've probably read quite a few articles by Mark Liberman criticising the methodology employed and conclusions drawn by researchers who make use of statistics, as well as journalistic misinterpretations of them.

It is hardly news that researchers these days like to take tiny (but statistically significant) correlations and present them as clear links. The media are then eager to capitalise on these studies, writing headline after headline.

This is the case with a recent study in which the music you listen to was found to be related to the way you think. I first read about it on a French site, Slate.fr, and later found that the story had been repeated elsewhere as well. (https://www.google.com.hk...) I'm always sceptical of such claims, and it came as no surprise when Liberman criticised the study on Language Log.

http://languagelog.ldc.upenn.edu...

Even the strongest correlation in their study was very weak, at -0.14. Liberman constructed a set of data matching that -0.14 correlation and plotted it on a graph. The points were widely dispersed, with little visible evidence of any relationship.
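You can reproduce that exercise yourself. This is my own quick numpy sketch (not Liberman's actual code): it generates data with a true correlation of -0.14 and shows how little of the variance such a correlation actually explains.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
r = -0.14  # the strongest correlation reported in the study

# Construct y so that corr(x, y) is r in expectation
x = rng.standard_normal(n)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)

emp_r = np.corrcoef(x, y)[0, 1]
print(f"empirical correlation: {emp_r:.3f}")
print(f"variance explained (r^2): {emp_r**2:.3f}")
```

An r of -0.14 means r-squared of about 0.02: knowing someone's score on one variable explains roughly 2% of the variance in the other. Scatter-plot x against y and it looks like a shapeless cloud, which is exactly Liberman's point.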

Why is this happening these days? Whose responsibility is it? Is it the researchers' fault for seeking out 'fascinating' topics, conducting research on them and drawing misleading conclusions from tiny correlations, as someone in the comments suggested? Is it the media's fault for sounding the trumpets so loudly without considering the accuracy of the study's conclusions? (Or are they purposefully ignoring the stats, preferring attractive headlines with little regard for accuracy?) Above all, what does this say about information literacy trends in society? Are we becoming a more gullible society?
The thing is, I hate relativism. I hate relativism more than I hate everything else, excepting, maybe, fibreglass powerboats... What it overlooks, to put it briefly and crudely, is the fixed structure of human nature. - Jerry Fodor

Don't be a stat cynic:
http://www.debate.org...

Response to conservative views on deforestation:
http://www.debate.org...

Topics I'd like to debate (not debating ATM): http://tinyurl.com...
Diqiucun_Cunmin
Posts: 2,710
8/5/2015 4:24:19 AM
BTW, Liberman didn't criticise the music study itself, but rather the media hype about it. However, there are other cases where he believed the researchers were also in error:

http://languagelog.ldc.upenn.edu...
medv4380
Posts: 200
8/6/2015 12:44:49 AM
At 8/5/2015 4:19:56 AM, Diqiucun_Cunmin wrote:
Why is this happening these days?

A good book to read on the subject, if you haven't already, is "How to Lie with Statistics" by Darrell Huff.

First, this isn't anything new. Scientists have been abusing statistics for a long time now. You can't entirely blame them, since the scientific method was never rewritten after many of the lessons of statistics became obvious. The scientific method as taught suggests that all you need is one control and one test and science works. Statistics says that a sample of one is far too small, and that 30 controls and 30 tests would be a much better starting point, though still not perfect. This difference in teaching is why anti-vaxxers believe a cherry-picked sample of 12 proves that vaccines are unsafe, while anyone with a little stats knowledge knows it was BS the moment they read the size of the N.
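To make the sample-size point concrete, here's a small simulation of my own (the 12 vs. 30 figures echo the post above; the population is an assumed standard normal). It shows how much more a sample mean bounces around at n = 12 than at n = 30:

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 5_000

def spread_of_means(n):
    # Draw `trials` independent samples of size n from the same
    # population and measure how much the sample mean varies.
    samples = rng.standard_normal((trials, n))
    return samples.mean(axis=1).std()

se12 = spread_of_means(12)
se30 = spread_of_means(30)
print(f"spread of the sample mean, n=12: {se12:.3f}")
print(f"spread of the sample mean, n=30: {se30:.3f}")
```

The spread shrinks like 1/sqrt(n), so a sample of 12 gives estimates roughly 60% noisier than a sample of 30 — plenty of room for a cherry-picked dozen to land wherever you want it to.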

Also, take into consideration that statistics is only now being integrated into math classes with Common Core. A few years ago, statistics was mostly an optional part of the math path, offered for elective credit only. Who wants to take a math class and not get math credit for it?

Here is a more deceptive study.
http://news.utexas.edu...
It seems like a standard enough study, with an N of 117. However, take into consideration that it's making a claim about hormones and cheating. Hormone levels differ by gender, so you have to split the sample by gender: a little fewer than 60 per group. Then each group splits again into cheaters and non-cheaters, leaving a little fewer than 30 per cell. And that's the best case, assuming everything split 50/50, which is unlikely.
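The arithmetic above can be sketched in a few lines (my own back-of-the-envelope calculation, assuming the generous 50/50 splits described):

```python
# Rough cell-size arithmetic for the N = 117 hormone/cheating study,
# assuming (generously) that every split is an even 50/50.
n = 117
per_gender = n / 2         # split by gender first
per_cell = per_gender / 2  # then split cheaters vs. non-cheaters
print(f"per gender: {per_gender:.1f}, per cell: {per_cell:.1f}")
```

Even under the most favourable assumptions, the headline N of 117 dissolves into cells of under 30 — below the rule-of-thumb starting point mentioned above — and any uneven split makes the smallest cell smaller still.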

Then there is the sampling: it's almost certainly just college students who were required to attend for intro-to-psych credit, meaning the study is heavily biased towards college students. With an N that small, it's entirely possible that a few testosterone-hyped frat boys, or a number of morally upstanding women, skewed the conclusions.

Are they right or wrong? Unknown; the sample is just too small, and it's hardly a proper randomized study. But give that study to someone who wants to say testosterone makes people cheat, and you have the perfect man-hater story, defended by Science with a 'Study'.