Is Science Doomed?

The recent buzz around Ariely and Gino got me thinking: how sturdy is the foundation of science when the occasional shaky finding rattles it? This isn’t just coffee-table talk; it’s a question worth diving into, preferably with some data to splash around.

Eager to shed some light on this conundrum, I embarked on a DIY adventure in the world of simulations, which you can peek at on my GitHub page. My model simulates the growth of the scientific literature over time, with newer papers citing earlier works at random. The model diverges from reality in two notable ways. First, real citation networks are scale-free, not random. Second, not all citations directly support a paper’s findings; many merely point to related work or background information, so citing an inaccurate paper wouldn’t necessarily compromise the citing paper’s conclusions, whereas my model treats every citation as load-bearing. It’s a bit of a simplification, admittedly, but hey, we’re exploring here, not splitting atoms!
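
For the curious, here’s roughly what such a model boils down to. This is a minimal illustrative sketch, not the exact code from my GitHub page; the function names and parameter values are placeholders I picked for the example:

```python
import random

def grow_network(n_seed=10, n_generations=5, papers_per_gen=100, cites_per_paper=3):
    """Grow a toy citation network: each new paper cites a few random earlier papers.

    Returns a dict mapping paper id -> list of cited paper ids.
    """
    citations = {i: [] for i in range(n_seed)}  # seed papers cite nothing
    for _ in range(n_generations):
        existing = list(citations)  # snapshot: papers only cite earlier generations
        for _ in range(papers_per_gen):
            new_id = len(citations)
            citations[new_id] = random.sample(existing, min(cites_per_paper, len(existing)))
    return citations
```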

[Figure: a simulated citation network]

I played around with marking a fraction of these papers as “scientifically inaccurate” (dramatically highlighted in red) to see how the incorrectness cascades through the citation network: once a paper is tainted, every paper citing it is tainted too. Intrigued by how the seeds of inaccuracy might sprout in the scientific field, I ran two distinct simulations. For the first, I let randomness take the wheel, distributing the “scientifically inaccurate” label across the network without bias. For the second, I took a more calculated approach and singled out the quieter voices in the citation chorus, papers cited less often than average, for the “inaccuracy” tag. A sketch of both seeding strategies, plus the cascade itself, follows below.
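
Continuing the toy model above, here’s roughly how the two pieces might look; again, these are illustrative names and parameters, not the exact GitHub code:

```python
import random

def seed_inaccurate(citations, fraction, strategy="random"):
    """Choose the initially "scientifically inaccurate" papers.

    strategy="random": unbiased labelling (first simulation).
    strategy="low_cited": only papers cited less often than average (second simulation).
    """
    if strategy == "random":
        pool = list(citations)
    else:
        counts = {p: 0 for p in citations}  # incoming-citation counts
        for refs in citations.values():
            for r in refs:
                counts[r] += 1
        avg = sum(counts.values()) / len(counts)
        pool = [p for p, c in counts.items() if c < avg]
    k = min(len(pool), round(fraction * len(citations)))
    return set(random.sample(pool, k))

def propagate(citations, seeded_bad):
    """Mark a paper tainted if it was seeded bad or cites any tainted paper.

    Paper ids increase with time and papers only cite earlier ones,
    so a single pass in id order settles every paper.
    """
    tainted = set(seeded_bad)
    for paper in sorted(citations):
        if paper not in tainted and any(ref in tainted for ref in citations[paper]):
            tainted.add(paper)
    return tainted
```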

A quick search suggests that between 8% and 17% of scientific findings are incorrect. With this range as our compass, I navigated the simulations to see what patterns would emerge, curious what they could tell us about the robustness and resilience of science in the face of potential misinformation.
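
Tying the toy pieces together, a sweep over that range might look something like this (the sample fractions here are just illustrative grid points):

```python
# Sweep the 8%-17% range and report how many papers stay untainted.
for fraction in (0.08, 0.11, 0.14, 0.17):
    net = grow_network()
    bad = propagate(net, seed_inaccurate(net, fraction, strategy="random"))
    print(f"{fraction:.0%} seeded wrong -> {1 - len(bad) / len(net):.0%} of papers untainted")
```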

First Simulation

[Figure: impact of wrong papers, first simulation]

During the simulation, one observation emerged: as the starting number of papers grew, the curve’s smoothness improved, hinting at increasingly predictable model behavior. The standout finding, though, lies in the percentages: within our specified range of inaccuracy, only 20% to 40% of papers managed to dodge the inaccuracies after five generations of growth. This outcome underscores the dynamic interplay between growth and accuracy in science, and the essential role of rigorous scrutiny and correction in maintaining the integrity of the scientific corpus.

Second Simulation

[Figure: impact of wrong papers, second simulation]

When I selectively targeted the less popular papers for our “inaccuracy” label, the picture brightened significantly, with 80% to 90% correctness prevailing. It seems that our scientific garden is more resilient when misinformation stems from the shadows rather than the spotlight.

There’s undoubtedly a fascinating mathematical story behind these visuals, though that’s a tale for another day. What’s evident is the clear pattern emerging from our simulations, hinting at underlying principles waiting to be deciphered.

Conclusion

Our reality likely sits somewhere between these two scenarios, which means it isn’t as dire as one might fear. With a mix of vigilance, self-correction, and the inherent robustness of the scientific method, it’s safe to say science is not doomed! Our exploration, albeit simplified, offers a glimpse of hope and resilience in the scientific endeavor. So, let’s keep the curiosity alive and the dialogue open, as we continue to chart this ever-expanding universe of knowledge.