Beware of Scientific “Claims” in the Media
Sir David Spiegelhalter knows a thing or two about statistics. At the University of Cambridge, one of his areas of study is how the concepts of risk and statistical evidence are discussed in society, and he is also the current president of the Royal Statistical Society.
In our current “post-truth” era (which I’ve written about before), where “alternative facts” and “fake news” appear in headlines daily, it is important to consider how reporting on science fits in. If news reports on political matters can be biased, isn’t it possible that reports on health information, scientific discoveries, and the like can be biased as well?
To many scientists, the idea of “p-hacking” is not a new concept. (If you’re unfamiliar and want to try your hand at it, there is a great article and interactive graphic over at FiveThirtyEight.) Briefly, “p-hacking” means that a scientist collects data on a particular project intending to prove or disprove her or his hypothesis. Sometimes the result is not crystal clear, but if you manipulate the data a bit (maybe you look at a ratio of the values instead of the raw values themselves, maybe you look at specific subgroups only, and so on), a trend emerges. Does that mean the data are unreliable? Not necessarily, but there is an obsession in science with the p-value as a way to measure whether or not an experiment was “significant.” Of course, “significant” may mean something quite different to a statistician than it does to an oncologist (perhaps a good idea for a future article here on Science Daily Dose!).
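To see why hunting through many subgroups is a problem, here is a minimal simulation sketch (my own illustration, not taken from FiveThirtyEight or Spiegelhalter). Every data point below is pure noise, so any “significant” result is spurious; yet when a single study runs 20 subgroup comparisons at the usual p < 0.05 threshold, most studies will turn up at least one. The z-test and the parameter choices (20 subgroups, 30 samples each) are assumptions for illustration.

```python
# Illustration: why testing many subgroups inflates false positives,
# the mechanism behind "p-hacking". All data are pure noise drawn from
# the same distribution, so every "significant" result is spurious.
import math
import random

def z_test_p(a, b):
    """Two-sided p-value for equal means, assuming known unit variance."""
    z = (sum(a) / len(a) - sum(b) / len(b)) / math.sqrt(1 / len(a) + 1 / len(b))
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
N_STUDIES, N_SUBGROUPS, N = 2000, 20, 30

hits = 0
for _ in range(N_STUDIES):
    # One "study": 20 different subgroup comparisons, all pure noise.
    ps = [z_test_p([random.gauss(0, 1) for _ in range(N)],
                   [random.gauss(0, 1) for _ in range(N)])
          for _ in range(N_SUBGROUPS)]
    if min(ps) < 0.05:  # did ANY comparison look "significant"?
        hits += 1

frac = hits / N_STUDIES
# Theory predicts about 1 - 0.95**20, roughly 64% of studies.
print(f"At least one p < 0.05 in {frac:.0%} of noise-only studies")
```

Each individual test behaves exactly as advertised (a 5% false-positive rate), but the freedom to pick the best-looking comparison after the fact is what turns honest noise into a publishable “trend.”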
Back to Sir David Spiegelhalter. He recently made an address as President of the Royal Statistical Society where he said the following about scientific data and reporting by the media in today’s society:
These two topics have a close connection: both are associated with claims of a decrease in trust in expertise, and both concern the use of numbers and scientific evidence.
In other words, we (as scientists and science communicators) should be careful not to overstate claims from scientific reports. According to Spiegelhalter, these kinds of “exaggerations” can erode the public’s trust in science and scientists.
What might this kind of exaggeration look like? I have a personal example from my research that I can share. During my PhD, I developed nanoparticles to use as a new method to deliver anti-cancer drugs (this was the topic of a paper I wrote for Frontiers for Young Minds). My main motivation for entering academic research was that I wanted to be on the “front line” of developing new technologies that could make a difference for patients one day in the future. As a doctor, you can prescribe a treatment for patients that has been through the rigorous protocols necessary for safe health practices. As a researcher, I can develop new ideas for disease treatment that one day might replace the existing therapies because they are more efficient, or have fewer side effects, or some other benefit. Over my decade-long research career, this has been my driving motivation.
My first “big” scientific result was published in the Journal of the American Chemical Society in 2010. I was beyond enthusiastic when the Daily Bruin, UCLA’s student newspaper, wanted to interview me for an article they were writing (an article that would become their front-page story).
I ran around campus the day that the paper was published, picking up as many copies as I could find. None of my family lived in or near Los Angeles, so I wanted to collect copies to give them. I read the first page and flipped inside to continue the story. It was so cool to see my research story published there! And then I saw the closing remarks of the article:
“We are delivering drugs and saving lives.” My heart sank. While I don’t remember the exact words I used, I knew immediately that I would never have said that. I am always careful not to overstate my own research. Of course the ultimate goal would be to save lives, but even seven years ago I knew that wasn’t what my research was doing. It was developing something new that might one day turn into a treatment for patients, but it likely would be modified many times over before making it into the clinic, and it would almost certainly be used as a new method of delivering chemotherapy, not as a cure. I never wrote to the author of the article – the newspaper was already printed and the misquoted line wasn’t so bad, but every time I look at it I cringe a little. It reminds me that when communicating science with the public, it’s so important to be honest.
Enthusiasm is always good, but honesty must take precedence. As a researcher I am used to the dynamic landscape of science (and particularly cancer) research. Every few weeks or months, it seems, a new study comes out in one area or another, filling in gaps of knowledge and clarifying other bits. But communicating to the public in breathless terms creates an expectation that can easily fatigue readers over time. So how can science communicators better combat this? One way is to be honest in reporting. Catchy, “click-bait” headlines are good for traffic, but they are often misleading. Another way is to put the discovery into an appropriate context. In what way is it interesting? What impact is it making now, or could it make in the future? Does or could it have any impact in another area?
If there is always a groundbreaking (read: earth-shattering, monumental, never-before-seen) discovery, but it is not making any change in the daily life or thinking of the public, what does it matter? Context and honesty are vital, because otherwise, fatigue (and mistrust) can set in, and “groundbreaking” loses its meaning.