Total Posts: 9 | Showing Posts: 1-9

An Interesting Demonstration of Bias

Chaosism
Posts: 2,649
4/19/2016 4:06:10 PM
This is just an anecdotal account, so in case such a thing doesn't interest you, here's your advance notice. :) I'm putting it in this forum because it directly pertains to psychology; specifically, cognitive bias.

Background: One of the people that I frequently visit in my "volunteer" efforts is a ~65yo woman who suffers from significant mental impairment. She has a childlike capacity in most respects, though she is capable of living on her own (although she is generally managed and supervised by a health organization). She is, however, quite creative and imaginative. She's a very good person with a particular liking for animals.

She called me the other day and, in the conversation, mentioned that she got a new calendar from the humane society, and that it featured pictures of abused animals. Later, when I arrived, she showed me the calendar she was talking about, but on top of it was a separate pamphlet from the humane society which had one of those typical emotionally appealing photos of an abused dog. The calendar, however, was not from the humane society, but was, in fact, a regular calendar of cute dog photos. Since they happened to be in the mail together, she associated the calendar with the first thing she saw (the pamphlet) and preemptively drew the conclusion that the calendar would feature abused animals. I know her well enough that I actually anticipated, beforehand, that this might be the case.

As she opened the first pages to the array of happy and cute photos of dogs, you might think that she'd realize her error. On the contrary, I watched her eyes search through the photos and rest on a photo of a dog curled up in a playful manner with its head up, looking at the camera; she pointed at it and explained to me that the dog had been beaten to the point that its neck was broken. I pointed to another photo of a dog with a ball in his mouth and said, "that dog looks happy", to which she retorted, "he's starving, so he's eating that for food". She continued to skim through the photos, pointing out and explaining only those which she could creatively rationalize to fit her preconceived conclusion, while totally ignoring all others. Another explanation of hers, pertaining to an outdoor photo, was, "they keep him outside all the time, even in the rain and snow. Poor little guy."

Her biggest one (her exclamation) was a photo of a husky rolling in the snow. She explained, "he was left out in the snow, and was practically frozen to death. How could someone do that to him?". I commented, "he looks happy, to me", to which she countered, "No, his eyes are closed. I don't think he made it; I think he's dead. I don't see him blinking. He's not moving, either. Yeah, he's dead." Keep in mind that we were looking at a photograph, here...

The conclusion is that this is such a clear-cut (albeit exaggerated) demonstration of Confirmation Bias, in that she adopted her conclusion and then sought out evidence to confirm it. Whatever didn't fit, she either ignored or rationalized and twisted to fit her conclusion. Her tendency to rationalize in an effort to keep her conclusion from being wrong is pretty strong. She demonstrates so many psychological biases and tendencies so plainly and openly that I could use her as a textbook example for so much! I find it extremely interesting, personally. And as a side note, I never have, and never will, treat her as some kind of test subject.
dee-em
Posts: 6,446
4/20/2016 1:52:32 AM
I suspect that this goes deeper than simple confirmation bias. She might have an obsession with looking for the worst rather than the best in a given situation. The pamphlet might have triggered it, but it takes quite a bit of self-deception to turn cute dog pictures into ones of animal abuse.
RoyLatham
Posts: 4,488
4/20/2016 3:51:26 PM
Confirmation bias works just as well with scientists. If data shows no effect, they will apply conditions to exclude some of it, and will use different statistical methods until they get the result they were looking for. Another trick is to keep repeating an experiment with slightly different parameters until, by chance, one of the random outcomes shows statistical significance. Twenty random experiments with meaningless results will on average produce one that is significant at the 95% confidence level. Recent studies show that more than half of published science is wrong. It's in part a product of science being a profession that demands publication as a career objective, rather than truth-seeking. Peer review doesn't work because being contrary is bad for a career. This wasn't much of a problem in the past, when science was done for the sake of finding the truth rather than building a career.
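An aside on the arithmetic in that post: under a true null hypothesis, p-values are uniformly distributed, so at the 5% significance level (95% confidence) a batch of twenty meaningless experiments yields roughly one spurious "significant" result on average. A minimal Monte Carlo sketch of this multiple-comparisons effect (a hypothetical illustration, not anything from the thread):

```python
import random

random.seed(42)

def false_positives(n_experiments=20, alpha=0.05):
    # Under a true null hypothesis, p-values are uniform on [0, 1],
    # so each experiment is "significant" with probability alpha.
    return sum(random.random() < alpha for _ in range(n_experiments))

# Average number of spurious "discoveries" per batch of 20 null experiments.
batches = 10_000
avg = sum(false_positives() for _ in range(batches)) / batches
print(f"average false positives per batch: {avg:.2f}")  # close to 20 * 0.05 = 1
```

Running more experiments, or retrying with tweaked parameters until one "works", inflates this count further - which is exactly the trick described above.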
Chaosism
Posts: 2,649
4/20/2016 3:58:15 PM
At 4/20/2016 1:52:32 AM, dee-em wrote:
I suspect that this goes deeper than simple confirmation bias. She might have an obsession with looking for the worst rather than the best in a given situation. The pamphlet might have triggered it, but it takes quite a bit of self-deception to turn cute dog pictures into ones of animal abuse.

Yes, that is a strong possibility. However, I did omit from the background that she is quite a happy person by nature, tends to see things in a very positive light, and accepts positive explanations over negative ones in generally neutral circumstances. Perhaps this could be a result of a "love to hate" towards people who abuse animals.
Chaosism
Posts: 2,649
4/20/2016 4:10:46 PM
At 4/20/2016 3:51:26 PM, RoyLatham wrote:
Confirmation bias works just as well with scientists. If data shows no effect, they will apply conditions to exclude some of it, and will use different statistical methods until they get the result they were looking for. Another trick is to keep repeating an experiment with slightly different parameters until, by chance, one of the random outcomes shows statistical significance. Twenty random experiments with meaningless results will on average produce one that is significant at the 95% confidence level. Recent studies show that more than half of published science is wrong. It's in part a product of science being a profession that demands publication as a career objective, rather than truth-seeking. Peer review doesn't work because being contrary is bad for a career. This wasn't much of a problem in the past, when science was done for the sake of finding the truth rather than building a career.

That's why constant criticism is necessary. The very fact that a poll is issued regarding this issue reflects an effort to combat this. No one claims the scientific process is infallible, but it is currently our best method of obtaining knowledge about the observable world. Even if the above statistics are true (which, if gathered scientifically, stand a good chance of being wrong by their own results ;P), it is not justified to apply this flaw to the process as a whole.
RoyLatham
Posts: 4,488
4/21/2016 2:25:42 AM
At 4/20/2016 4:10:46 PM, Chaosism wrote:
At 4/20/2016 3:51:26 PM, RoyLatham wrote:
... Recent studies show that more than half of published science is wrong. It's in part a product of science being a profession that demands publication as a career objective, rather than truth seeking. Peer review doesn't work because being contrary is bad for a career. This wasn't much of problem in the past, when science was done for the sake of finding the truth rather than building a career.

That's why constant criticism is necessary. The very fact that a poll is issued regarding this issue reflects an effort to combat this. No one claims the scientific process is infallible, but it is currently our best method of obtaining knowledge about the observable world. Even if the above statistics are true (which, if gathered scientifically, stand a good chance of being wrong by their own results ;P), it is not justified to apply this flaw to the process as a whole.

There was no poll involved, and the scientific method is not being challenged. As far as I know, everyone agrees that the method of forming a hypothesis and then testing it is sound. The problem is that current scientists are incompetent in carrying out the method.

A group of scientists selected 67 published papers from prestigious journals and attempted to replicate the reported results; 75% of the results could not be replicated. Another similar effort found that 89% of the results could not be replicated. In a separate test, a paper deliberately modified to contain eight major errors in experimental procedures was submitted for peer review to 221 scientists. None caught all the errors, and the average caught was fewer than two. Only 30% of the reviewers recommended that the intentionally flawed paper not be published.

I'm summarizing the recent article by William A Wilson, http://www.firstthings.com...
Chaosism
Posts: 2,649
4/21/2016 6:42:42 PM
At 4/21/2016 2:25:42 AM, RoyLatham wrote:
At 4/20/2016 4:10:46 PM, Chaosism wrote:
At 4/20/2016 3:51:26 PM, RoyLatham wrote:
... Recent studies show that more than half of published science is wrong. It's in part a product of science being a profession that demands publication as a career objective, rather than truth seeking. Peer review doesn't work because being contrary is bad for a career. This wasn't much of problem in the past, when science was done for the sake of finding the truth rather than building a career.

That's why constant criticism is necessary. The very fact that a poll is issued regarding this issue reflects an effort to combat this. No one claims the scientific process is infallible, but it is currently our best method of obtaining knowledge about the observable world. Even if the above statistics are true (which, if gathered scientifically, stand a good chance of being wrong by their own results ;P), it is not justified to apply this flaw to the process as a whole.

There was no poll involved, and the scientific method is not being challenged. As far as I know, everyone agrees that the method of forming a hypothesis and then testing it is sound. The problem is that current scientists are incompetent in carrying out the method.

A group of scientists selected 67 published papers from prestigious journals and attempted to replicate the reported results; 75% of the results could not be replicated. Another similar effort found that 89% of the results could not be replicated. In a separate test, a paper deliberately modified to contain eight major errors in experimental procedures was submitted for peer review to 221 scientists. None caught all the errors, and the average caught was fewer than two. Only 30% of the reviewers recommended that the intentionally flawed paper not be published.

I'm summarizing the recent article by William A Wilson, http://www.firstthings.com...

Sorry, Roy - I said "poll" but that was a mistake. That's interesting, and I'll look into that info.
Riwaaz_Ras
Posts: 1,046
4/22/2016 5:39:36 AM
At 4/20/2016 3:51:26 PM, RoyLatham wrote:
Confirmation bias works just as well with scientists. If data shows no effect, they will apply conditions to exclude some of it, and will use different statistical methods until they get the result they were looking for. Another trick is to keep repeating an experiment with slightly different parameters until, by chance, one of the random outcomes shows statistical significance. Twenty random experiments with meaningless results will on average produce one that is significant at the 95% confidence level. Recent studies show that more than half of published science is wrong. It's in part a product of science being a profession that demands publication as a career objective, rather than truth-seeking. Peer review doesn't work because being contrary is bad for a career. This wasn't much of a problem in the past, when science was done for the sake of finding the truth rather than building a career.

Well said.
(This is not a goodbye message. I may or may not come back after ten years.)
keithprosser
Posts: 1,925
4/22/2016 6:35:19 AM
I think there is a difference between confirmation bias - which is largely unconscious - and deliberately cherry-picking and fudging research data to publish a paper. The former is self-deception; the other is scientific fraud.
Thanks, Roy, for bringing it to our attention, but its relation to the OP is tenuous!