I Trusted The Science, And All I Got Was This Lousy T-Shirt!

For more than a decade, I told incoming members of the D.C. Bar, as part of their mandatory ethics training, that sessions like mine were essential to making their ethics alarms ring. To support that thesis, I related the findings of research performed by behavioral economist Dan Ariely when he was at M.I.T. Ariely devised the experiment that became the most publicized part of his best-selling book “Predictably Irrational”: he gave Harvard Business School students a test with an obvious way to cheat built into it and offered a cash reward to the students who got the highest scores. He tracked how many students, with that incentive to be unethical, cheated. He also varied the experiment by asking some students to do simple tasks before they took the test: name five baseball teams, or state capitals, or U.S. Presidents.

None of these pre-test questions had any effect on the students’ likelihood of cheating, except for one, which had a dramatic effect. He discovered that students who were asked to recite a few of the Ten Commandments, unlike any of the other groups, never cheated at all. Never. None of them. Ariely told an NPR interviewer that he had periodically repeated the experiment elsewhere, with the same results. No individual who was asked to search his memory for a few of the Ten Commandments has ever cheated on Ariely’s test, though the percentage of cheaters among the rest of the test-takers is consistently in double figures. This result has held true, he said, regardless of the individual’s faith, ethnic background, or even whether they could name a single Commandment correctly.

The classic moral rules, he concluded, reminded the students to consider right and wrong. It wasn’t the content of the Commandments that affected them, but what they represented: being good, or one culture’s formula for doing good. The phenomenon is called priming, and Ariely’s research eventually led me to start “The Ethics Scoreboard” and, later, this ethics blog.

Priming is a superb way to make sure your ethics alarms are turned on and in working order. All of us go through life focused on what ethicists call “non-ethical considerations,” the human motivations, emotions, needs and desires that drive us in everything we do: love, lust, greed, ambition, fear, ego, anger, passion…wanting that promotion, the new car, the compliment, fame, power. Good people do bad things because at the moment they act unethically, they aren’t thinking about ethics. If they were, they wouldn’t engage in the misconduct, because they would be “primed” and their ethics alarms would sound in time to stop them.

I still believe that the priming theory is sound, but it looks like Ariely’s alleged proof of the phenomenon can’t be trusted, because he can’t be trusted. Last week, an in-depth statistical analysis showed that a data set from one of the professor’s 2012 papers was fraudulent. That study had apparently demonstrated that people were more honest about how much mileage was on their cars if they had to sign a statement pledging that the number was accurate before they reported the mileage, rather than signing at the bottom of the page. Oops! No such study had ever been done, and the “data” was produced using a random number generator.
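
How does a statistical analysis catch something like that? I won’t pretend to know the investigators’ actual methods, but the basic idea is easy to sketch: a column of genuinely reported odometer readings is lumpy and skewed, while numbers spat out by a random number generator are suspiciously flat. Here is a minimal, purely illustrative sketch in Python; the 0–50,000 mile range and the simulated numbers are my own assumptions, not the real data.

    # Illustrative only: real odometer readings cluster and skew; a random
    # number generator produces a flat (uniform) spread. A Kolmogorov-Smirnov
    # test against the uniform distribution flags the latter. The mileage
    # range and simulated data below are assumptions for this sketch.
    import numpy as np
    from scipy import stats

    def looks_uniform(miles, low=0, high=50_000, alpha=0.05):
        """True if reported mileages are indistinguishable from uniform noise."""
        scaled = (np.asarray(miles, dtype=float) - low) / (high - low)
        result = stats.kstest(scaled, "uniform")
        return result.pvalue > alpha  # large p-value: looks like pure noise

    rng = np.random.default_rng(0)

    # "Data" like the kind alleged here: straight from a random number generator.
    fabricated = rng.uniform(0, 50_000, size=2_000)

    # More human-looking data: skewed, and rounded to the nearest thousand miles.
    plausible = np.round(np.clip(rng.gamma(2.0, 8_000, size=2_000), 0, 50_000), -3)

    print(looks_uniform(fabricated))  # True  -> red flag
    print(looks_uniform(plausible))   # False -> consistent with real reporting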

Ariely responded to the claims in “humina humina” style. He said that an auto insurance company had collected the data, so it wasn’t his fault, though it was convenient that the faked results lined up perfectly with his theory. Since he never checked the data, the professor’s defense amounts to a claim that he was merely incompetent and negligent, not dishonest.

Oh. I don’t care. His employer, Duke University, won’t reveal any of the details of the investigation it claims to have made into the matter. The study, which has been cited over 400 times by other scientists, is to be retracted.

I should have been suspicious of Ariely’s research earlier. Here, for example, I caught him pushing a dubious proposition on NPR: in 2010 he told an interviewer a “fact” about how often dentists agree on whether a tooth has a cavity (he said it was only 50% of the time). His stated source, the insurer Delta Dental, denied his claims. And a few months ago, an Ariely paper from 2004 was branded with a special editorial “expression of concern” because there were over a dozen statistical impossibilities in the reported numbers. These couldn’t be checked, Ariely said, because he had lost the original data file.
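
For the curious: the journal did not spell out which checks exposed those impossibilities, so the sketch below is only an example of the genre, a GRIM-style consistency check. If a study reports a mean of whole-number responses, the mean times the sample size has to land on (or round to) a whole number; when it can’t, the figure on the page is arithmetically impossible. The example values are invented.

    # A GRIM-style consistency check (illustrative; not necessarily what the
    # journal's reviewers ran). If n people each gave an integer response,
    # then mean * n must land on, or round to, a whole number.
    def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
        """Could a mean reported to `decimals` places arise from n integer responses?"""
        total = reported_mean * n
        for candidate in (int(total), int(total) + 1):  # nearest whole-number totals
            if round(candidate / n, decimals) == round(reported_mean, decimals):
                return True
        return False

    print(grim_consistent(3.48, 25))  # True:  87 total points / 25 people = 3.48
    print(grim_consistent(3.51, 25))  # False: no integer total yields a mean of 3.51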

I have had similar disappointments with behavioral science studies and their researchers. I used social scientist Philip Zimbardo’s 20 rules for resisting unethical influences in organizations and groups in my seminars for nearly a decade, and blogged about them here. Again, as with some of Ariely’s conclusions, I have found them useful teaching tools. However, Zimbardo himself was exposed as unethical, specifically regarding his Stanford Prison Experiment, which made him a national figure and launched his career in the field of analyzing culture-based misconduct. As detailed in a Medium article, the experiment was manipulated research, designed by Zimbardo to reach a desired result: a lie, in short. Worse, he has been covering up the lie for decades while still riding the wave of fame it provided him.

When people tell us we should “follow the science” blindly because scientists are experts, those people are showing us that they…

1. …don’t understand science, which is supposed to be about nullius in verba, or “take nobody’s word for it.” Everything is supposed to be readily verifiable. Theories based on unverifiable claims are not trustworthy, no matter who is peddling them. Or…

2. …don’t understand scientists. Scientists are human beings, and no less corruptible, subject to bias, and governed by personal agendas and non-ethical considerations than the rest of us. Or…

3. …are deliberately citing scientific theories and pronouncements knowing that they are not as definitive as they claim.

10 thoughts on “I Trusted The Science, And All I Got Was This Lousy T-Shirt!”

    • Never heard of this: https://gizmodo.com/i-fooled-millions-into-thinking-chocolate-helps-weight-1707251800 but . . .
      Back in the days when schools had nurses (yes, Virginia, they did, and what’s more, you couldn’t get away with saying you were sick if you weren’t, but if you were hiding from a bully, you could hang out for a while), ours had a list of her personal “facts” on health and nutrition she gave out to parents. I only remember the one because my parents laughed so hard over it, and shook their heads, and agreed that it was True: “If you have to have a second helping of anything, it’s probably bad for you.” It was taped to the refrigerator door, and triggered the first theoretical argument I ever won from grown-ups, over the word “probably” re chocolate-covered Brussels sprouts.

  1. Maybe none of this can really be studied. Either the observation affects the experiment or the experiment has to be so non-analogous to real life or artificially primed (like the Stanford Experiment) that results are non-informative.

    • I took an introductory Sociology course early on in college and completed the semester but then ran away screaming. It just doesn’t seem to make sense that you can scientifically categorize and therefore understand and predict individual human behavior. Just doesn’t compute. I thought I could better understand human behavior by taking English courses.

    • Wow, a most illuminating article, Mr. Schlecht.
      I guess there are solid essential reasons for long term studies regarding the covid vaccines after all.
      Oh wait; that has already begun with the global population and the jab. The results should be most interesting especially if objective people can analyze the raw uncensored data collected over the next several years.

  2. Fascinating. The takeaway is that human beings are human, they are biased, and scientific research studies should always be treated with suspicion.

    And that is as it should be. The judgement (not to mention ethics) of the researchers you mention is clearly deplorable, but typical of people in general. Yes, we should be able to expect more from people supposedly dedicated to science, but my cynical nature finds it unsurprising that they consistently disappoint us.

    A further comment to your excellent 3-part list — never trust a “consensus” of anything. Very often, everyone involved in said consensus, even if they can legitimately claim expert status in the field in which they are offering their opinion, rejects science.

    Wikipedia, for all its faults, has a good description of the scientific method:

    It involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction, based on such observations; experimental and measurement-based testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings.

    How many ways, besides falsifying data, did the researchers you mention violate this principle?

    1. Careful observation — known false data makes this impossible.
    2. Distortion of observation by cognitive assumptions — appears to be present in both.
    3. Formulation of hypothesis — Probably done, but corrupted due to 1. and 2.
    4. Experimental measurement — Fake data makes this meaningless.
    5. Refinement or elimination — In both cases, the research seemingly was designed to prove a hypothesis rather than refine or eliminate it.

    What they did is not science; it is opinion manipulation using a false imprimatur of science.

    Finally, rigorous skepticism seems to be missing everywhere these days. We accept too many suspicious things as truth without regard for the source, allow self-described experts to validate suspicious or facially flawed claims, allow our cognitive assumptions to override reason, and apply emotional validation to highly doubtful propositions.

    Recipe for disaster? You betcha.

  3. Is it the fault of “science” or poor education, or the hunger of We, The People for the readers’-digestible popular press?

  4. Without epistemology, “science” is just an appeal to authority. “Science says such and such.” “Trust the science.” “We’re ‘science,’ and you aren’t.”

    No, “science” isn’t made up of facts about the universe. It’s a bunch of experiments, the data from those experiments, and the hypotheses that we think the data support. It’s the results of applying the scientific method and science mindset, nothing more or less.

    Perhaps the stupidest type of argument humans can have is arguing about “facts” which are actually inferences or opinions based on not only the raw data, but also the things that they desire and fear.

    For example, if anyone says, “it’s a scientific fact that X is medically safe and effective,” then for most values of X they’re either ignorant or delusional, because human biology is inconveniently variable. Variable biology is part of how evolution works, but it also means that not every treatment will be good for every human. There’s always a chance someone will have a negative response to something otherwise harmless, which is why every commercial says, “ask your doctor about X,” so they can safely test your reaction to X. When a person says, “X is proven safe and effective,” what’s actually going on in their brain is that they think the risk of a bad reaction is small enough for them to accept, and they might think other people should accept that risk as well. They’re just not conscientious enough to acknowledge that’s what they mean.

    What you accept as “true” for the purposes of decision-making is based on what consequences you’re willing to deal with if you’re wrong. (Most humans forget that bit because it slows down their thinking, but sometimes they need to slow down before they crash into a wall–in this case, a wall of people who aren’t willing to accept the same risks.) The most they can honestly say is, “based on the experimental data and my relative willingness to accept different types of costs (known negatives) and risks (unknown negatives), I think I/we should do this.”

    The worst part is that if people simply acknowledged that this is how “facts” work, they could take steps to effectively negotiate with each other and build more confidence in whatever solution they reach together. Since they don’t do that, they end up missing out on those opportunities for reconciliation.
