Quality of science and science communication

Applying a skeptical eye to science reporting

One of my constant bugbears is the low quality of science reporting we see in the mainstream media, particularly in disciplines that are interesting but very prone to irreproducible or flawed results, such as health and nutrition.

One of the major risks of these studies is the use of non-random samples. A classic kind of non-random sample is known as WEIRD: Western, Educated, Industrialized, Rich, and Democratic. In other words, the kinds of people typically found in Australian, UK, and American universities, and “coincidentally” the easiest source of test subjects for professors and PhD students at these universities.

Of course, the population of the world is so diverse that it’s really, really hard to get a genuinely random sample. Just try to imagine the logistics of running an experiment where 500 out of 7 billion people needed to be picked with equal probability. Aside from anything else, how would you identify them? Birth certificate? Mobile phone? Passport number? Even if there were hypothetically a massive database available to researchers containing this information with 100% accuracy, it still wouldn’t cover everyone, leading to skewed results.
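
To make the sampling-frame problem concrete, here is a minimal, purely illustrative simulation (all of the numbers are hypothetical, not drawn from any real study). Even a perfectly random draw of 500 people will be skewed if it can only be drawn from a register, such as passport holders, whenever the people in that register differ systematically from everyone else.

    import random

    # Purely illustrative: estimate the average of some trait across a
    # hypothetical "world" of 100,000 people, when the only available
    # sampling frame is the subset who appear in a register.
    random.seed(42)

    population = []
    for _ in range(100_000):
        in_register = random.random() < 0.30          # 30% are in the register
        # People in the register score systematically higher on the trait.
        trait = random.gauss(60 if in_register else 45, 10)
        population.append((in_register, trait))

    true_mean = sum(t for _, t in population) / len(population)

    # A perfectly random draw of 500, but only from the incomplete frame.
    frame = [t for in_register, t in population if in_register]
    sample_mean = sum(random.sample(frame, 500)) / 500

    print(f"True population mean:   {true_mean:.1f}")
    print(f"Frame-only sample mean: {sample_mean:.1f}")   # biased upwards

The bias here has nothing to do with how carefully the 500 were chosen; it comes entirely from who could never be chosen in the first place.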

Since this means that a non-random sample is almost guaranteed, it is critical for scientists to disclose their sampling method and to appropriately qualify how broadly their results can be applied. Unfortunately, there are many incentives to skirt around this issue when publishing. A headline of “44% of Harvard students had higher blood sugar levels after eating three cannoli in an hour” is way less exciting to the media than “Dessert leads to diabetes and death!”

Science is excellent (when done right) at answering specific questions, but that doesn’t mean that the systemic outcomes of a specific experimental result can be predicted. For example, there is strong evidence that heating cooking oils produces carcinogenic substances. But there is not yet evidence that these carcinogens are actually absorbed by the body when eaten. The experiment isn’t flawed per se. The problem comes, however, when we make a logical leap from A to Z without checking whether all the intervening steps also hold true.

The media is complicit too, often choosing not to seek out the qualifications that rightly apply to scientific results. (Shout out to Not Exactly Rocket Science for being an awesome science blog that really does do the necessary work in this area.) Most famously, the journalist John Bohannon ran and published the results of a clinical trial which “proved” that chocolate aids in weight loss. The supposed authority garnered from the study led to the media credulously picking up and circulating the findings widely, with only minimal interest in correcting the story once they found out they had been had. If we’re being generous, journalists simply don’t have the time or expertise to check such stories out properly. But a more realistic assessment would seem to be that they know sensational headlines generate clicks, and therefore the incentive to properly validate a story is low.

Information professionals can’t be smug either. The field of information technology often doesn’t achieve any kind of scientific validation. Self-sponsored studies are often used to “prove” technology benefits, which are then picked up and reported by technology magazines without any further probing.

This misuse and poor explanation of scientific “studies” leads to a skepticism among the general public that is very dangerous to our modern world. As people twig to the fact that science can be gamed by vested interests, they return to gut instinct and reliance on trusted networks for decision-making (which themselves are becoming dangerously self-referential in the social media echo chamber).

In fact, it is not that much of a stretch to link the abuse of nutrition and environmental “science” reporting to the skepticism and denialism arising in everything from climate change to domestic violence statistics. Further, the sometimes defensive response of the scientists and journalists involved is just making things worse.

We need an honest conversation about the incentives that exist for science and the media to be sensational rather than cautious with what they publish. This problem is actively hurting the ability of our societies to understand and adopt new and valuable knowledge. I personally have no doubt that it is costing all of us both money and lives. But on the other hand, I don’t have a study to prove it.

Source: The Conversation, io9


Stephen Bounds

Stephen Bounds is an Information and Knowledge Management Specialist with a wide range of experience across the government and private sectors. As founding editor of RealKM and Executive, Information Management at Cordelta, Stephen provides clear strategic thinking along with a hands-on approach to help organisations successfully develop and implement modern information systems.


One Comment

  1. A timely addition to this debate from Neuroskeptic:

    One might be forgiven for feeling that the odds are stacked against theories these days. It seems like if there’s not enough evidence for your claim, people won’t believe it, but if there’s lots of evidence, no-one will believe it either.

    In truth, I think what this “Catch 22” reveals is a growing lack of confidence in the published literature in the field, and specifically in the publication process which incentivizes publication bias and p-hacking. It’s the spectre of these biases that makes people skeptical of “too much” evidence.
