Since I wrote the following, circumstances have made it clear that we are now in a war of method paradigms. Let’s call one paradigm Science Hierarchies and the other Science Networks. As you read this, reflect on the dogmas of COVID and public health. Reflect on the dogmas of climate science. Consider how truth-seekers operating in information networks are increasingly able to outperform well-paid experts anointed by bureaucrats. And make no mistake: it is a war. The censorship campaigns tell us as much.
Science is undergoing a wrenching evolutionary change.
In fact, much of what is carried out in the name of science is dubious at best, flat wrong at worst. It appears we’re putting too much faith in science, particularly the kind whose claims depend on reproducibility yet routinely fail to reproduce.
In a University of Virginia meta-study, half of 100 psychology study results could not be reproduced.
Experts making social science prognostications turned out to be mostly wrong, according to political scientist Philip Tetlock’s decades-long review of expert forecasts.
But there is perhaps no more egregious example of bad expert advice than in the area of health and nutrition. As I wrote for Future Frontiers:
For most of our lives, we’ve been taught some variation on the food pyramid. The advice? Eat mostly breads and cereals, then fruits and vegetables, and very little fat and protein. Do so and you’ll be thinner and healthier. Animal fat and butter were considered unhealthy. Certain carbohydrate-rich foods were good for you as long as they were whole grain. Most of us anchored our understanding about food to that idea.
“Measures used to lower the plasma lipids in patients with hyperlipidemia will lead to reductions in new events of coronary heart disease,” said the National Institutes of Health (NIH) in 1971. (“How Networks Bring Down Experts (The Paleo Example),” March 12, 2015)
The so-called “lipid theory” had the support of the US surgeon general. Doctors everywhere fell in line behind the advice. Saturated fats like butter and bacon became public enemy number one. People flocked to the supermarket to buy up “heart healthy” margarines.
And yet, Americans were getting fatter.
But early in the 21st century, something interesting happened: people began to go against the grain (no pun intended) and to talk about their small experiments with eating saturated fat. By 2010, the lipid hypothesis, not to mention the USDA food pyramid, was dead. Forty years of nutrition orthodoxy had been upended.
Now, of course, the experts are joining the chorus from the rear.
The Problem Goes Deeper
But the problem doesn’t just affect the soft sciences, according to science writer Ron Bailey:
The Stanford statistician John Ioannidis sounded the alarm about our science crisis 10 years ago. “Most published research findings are false,” Ioannidis boldly declared in a seminal 2005 PLOS Medicine article. What’s worse, he found that in most fields of research, including biomedicine, genetics, and epidemiology, the research community has been terrible at weeding out the shoddy work largely due to perfunctory peer review and a paucity of attempts at experimental replication.
Richard Horton, editor of The Lancet, writes, “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue.” And according to Julia Belluz and Steven Hoffman, writing in Vox:
Another review found that researchers at Amgen were unable to reproduce 89 percent of landmark cancer research findings for potential drug targets. (The problem even inspired a satirical publication called the Journal of Irreproducible Results.)
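Ioannidis’s headline claim is not just rhetoric; it falls out of simple arithmetic about pre-study odds, statistical power, and significance thresholds. Here is a minimal sketch of that arithmetic (my own illustration using the standard positive-predictive-value formula his paper builds on, with made-up scenario numbers):

```python
# Positive predictive value of a "significant" finding, following the simple
# framework Ioannidis builds on (bias term omitted): prior_odds is the
# pre-study odds that a tested relationship is true, alpha the significance
# threshold, and power the chance a real effect is detected.
def ppv(prior_odds, alpha=0.05, power=0.8):
    true_positives = power * prior_odds
    false_positives = alpha  # per unit of false relationships tested
    return true_positives / (true_positives + false_positives)

# Illustrative, made-up scenarios (not figures from the 2005 paper):
scenarios = [
    ("well-motivated hypothesis, adequate power", 1 / 10, 0.8),
    ("well-motivated hypothesis, low power", 1 / 10, 0.2),
    ("exploratory long shot, adequate power", 1 / 50, 0.8),
]
for label, odds, power in scenarios:
    print(f"{label}: chance the finding is true ~ {ppv(odds, power=power):.0%}")
```

Under plausible assumptions about how often tested hypotheses are actually true, and how well-powered the studies are, a majority of “positive” findings can indeed be false before any bias or fraud even enters the picture.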
Contrast the progress of science in these areas with that of applied sciences such as computer science and engineering, where more market feedback mechanisms are in place. It’s the difference between Moore’s Law and Murphy’s Law.
So what’s happening?
Science’s Evolution
Three major catalysts are responsible for the current upheaval in the sciences. First, a few intrepid experts have started looking around to see whether studies in their respective fields are holding up. Second, competition among scientists to grab headlines is becoming more intense. Third, informal networks of checkers — “amateurs” — have started questioning expert opinion and talking to each other. And the real action is in this third catalyst, creating as it does a kind of evolutionary fitness landscape for scientific claims.
In other words, for the first time, the cost of checking science is going down as the price of being wrong is going up.
Now, let’s be clear. Experts don’t like having their expertise checked and rechecked, because their dogmas get called into question. When dogmas are challenged, fame, funding, and cushy jobs are at stake. Most will fight tooth and nail to stay on the gravy train, which can translate into coming under the sway of certain biases. It could mean they’re more likely to cherry-pick their data, exaggerate their results, or ignore counterexamples.
Far more rarely, it can mean they’re motivated to engage in outright fraud.
Method and Madness
Not all of the fault for scientific error lies with scientists. Some of it lies with methodologies and assumptions most of us have taken for granted for years. Social and research scientists have far too much faith in data aggregation, a process that can drop the important circumstances of time and place. Many researchers make inappropriate inferences and predictions based on a narrow band of observed data points that are plucked from wider phenomena in a complex system. And, of course, scientists are notoriously good at getting statistics to paint a picture that looks like their pet theories.
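To make that last point concrete, here is a toy simulation (purely illustrative, not drawn from any particular study): the data contain no real effect at all, but slicing them twenty different ways will usually turn up at least one “significant” result.

```python
import math
import random

random.seed(1)

def z_stat(a, b):
    """Approximate two-sample z statistic for the difference in means."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

def finds_spurious_effect(subgroups=20, n=50):
    """Treatment and control are drawn from the same distribution, so there is
    no real effect. Return True if any subgroup comparison looks 'significant'."""
    for _ in range(subgroups):
        treatment = [random.gauss(0, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        if abs(z_stat(treatment, control)) > 1.96:  # roughly p < 0.05, two-sided
            return True
    return False

trials = 1000
hits = sum(finds_spurious_effect() for _ in range(trials))
print(f"At least one 'significant' result in {hits / trials:.0%} of pure-noise studies")
# Expected: roughly 1 - 0.95**20, i.e. about 64% of studies.
```

Nothing in that simulation is fraudulent; the “findings” fall out of testing noise enough different ways and reporting only the slices that clear the bar.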
Some sciences even have their own holy scriptures, like psychology’s Diagnostic and Statistical Manual of Mental Disorders (DSM). These guidelines, when married with government funding, lobbyist influence, or insurance payouts, can protect incomes but corrupt practice.
Perhaps the most significant methodological problem with science is overreliance on the peer-review process. Peer review can perpetuate groupthink, the cartelization of knowledge, and the compounding of biases.
The Problem with Expert Opinion
The problem with expert opinion is that it is often cloistered and restrictive. When science starts to seem like a walled system built around a small group of elites, many of whom share ideas only with each other, hubris can take hold. No amount of training or smarts can keep up with an expansive network of people who have a bigger stake in finding the truth than in shoring up the walls of a guild or cartel.
It’s true that, to some degree, we have to rely on experts and scientists. It’s a perfectly natural part of specialization and division of labor that some people will know more about some things than you and that you are likely to need their help at some point. (I try to avoid accounting, and I am probably not very good at brain surgery, either.) But that doesn’t mean that we shouldn’t question authority, even when the authority knows more about their field than we do.
Alexander Bard’s Information-Tech Epochs
1. Spoken (Mouth)
a. Language
b. Dialogs
c. Stories
2. Written (Page)
a. Pictograms
b. Tablet
c. Paper
3. Printed (Press)
a. Printed
b. Produced
c. Mass Produced
4. Networked (Internet)
a. Telegraphed and telephoned
b. Hyperlinked
c. Distributed Ledgers
5. Synthesized (AI)
a. AI
b. Swarm Intelligence
c. Agentic AI (?)
(Note: Bard’s version has AI as a subset of 4. I pull it into 5.)
The Power of Networks
But when you get an army of networked people, amateurs among them, thinking, talking, tinkering, and toying with ideas, you can hasten a proverbial paradigm shift. And this is exactly what we are seeing.
It’s becoming harder for experts to count on the obscurity and density of their disciplines to protect their authority. And it’s in the cross-disciplinary pollination of the network that so many different good ideas can sprout and be tested.
The best thing that can happen to science is that it opens itself up to everyone, even people who are not credentialed experts. Then, let the checkers start to talk to each other. Leaders, influencers, and force-multipliers will emerge. You might think of them as communications hubs or bigger nodes in a network. Some will be cranks and hacks. But the best will emerge, and the cranks will be worked out of the system in time.
The network might include a million amateurs willing to give a pair of eyes or a different perspective. Most in this army of experimenters get results and share their experiences with others in the network. What follows is a wisdom-of-crowds phenomenon. Millions of people not only share results but challenge the orthodoxy.
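The statistical core of that wisdom-of-crowds claim is easy to sketch. In this toy model (the numbers are arbitrary), each person’s guess about some quantity is noisy, but the errors are independent, so the crowd’s average lands far closer to the truth than the typical individual:

```python
import random
import statistics

random.seed(7)

TRUE_VALUE = 100.0    # the quantity everyone is trying to estimate
CROWD_SIZE = 100_000  # independent, noisy guessers

# Each guess is the true value plus independent random error.
guesses = [TRUE_VALUE + random.gauss(0, 25) for _ in range(CROWD_SIZE)]

crowd_error = abs(statistics.fmean(guesses) - TRUE_VALUE)
typical_individual_error = statistics.median(abs(g - TRUE_VALUE) for g in guesses)

print(f"crowd average is off by:      {crowd_error:.2f}")
print(f"typical individual is off by: {typical_individual_error:.2f}")
```

The catch is independence: if everyone simply repeats the same anointed expert, averaging buys you nothing.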
How Networks Contribute to the Republic of Science
In his legendary 1962 essay, “The Republic of Science,” scientist and philosopher Michael Polanyi wrote the following passage. It beautifully illustrates the problems of science and of society, and it explains how they will be solved in the peer-to-peer age:
Imagine that we are given the pieces of a very large jigsaw puzzle, and suppose that for some reason it is important that our giant puzzle be put together in the shortest possible time. We would naturally try to speed this up by engaging a number of helpers; the question is in what manner these could be best employed.
Polanyi says you could progress through multiple parallel-but-individual processes. But the way to cooperate more effectively is to let them work on putting the puzzle together in the sight of the others so that every time one helper fits in a piece of it, all the others will immediately watch out for the next step that becomes possible as a consequence.
Under this system, each helper will act on his own initiative, by responding to the latest achievements of the others, and the completion of their joint task will be greatly accelerated. In a nutshell, we have here the way in which a series of independent initiatives are organized to a joint achievement by mutually adjusting themselves at every successive stage to the situation created by all the others acting likewise.
Just imagine if Polanyi had lived to see the Internet.
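Polanyi’s contrast between duplicated solo effort and mutual adjustment can even be made concrete with a toy simulation (mine, not his; the grid size and helper count are arbitrary). Pieces can only be fitted next to pieces already placed, and the question is how many rounds the team needs under each scheme:

```python
import random

def neighbors(piece):
    x, y = piece
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def shared_rounds(width, height, helpers):
    """All helpers work on one puzzle in full view of one another. Each round,
    every helper may fit one piece that adjoins the already-assembled region."""
    placed = {(0, 0)}
    remaining = {(x, y) for x in range(width) for y in range(height)} - placed
    rounds = 0
    while remaining:
        rounds += 1
        frontier = [p for p in remaining if any(n in placed for n in neighbors(p))]
        random.shuffle(frontier)
        for piece in frontier[:helpers]:  # at most one piece per helper per round
            placed.add(piece)
            remaining.discard(piece)
    return rounds

def isolated_rounds(width, height, helpers):
    """Each helper assembles a private duplicate alone, one piece per round.
    Extra helpers add nothing: the team finishes when any one copy is done."""
    return width * height  # independent of `helpers`

if __name__ == "__main__":
    random.seed(0)
    width, height, helpers = 20, 20, 10  # arbitrary: a 400-piece puzzle, 10 helpers
    print("duplicated solo effort:", isolated_rounds(width, height, helpers), "rounds")
    print("mutual adjustment:     ", shared_rounds(width, height, helpers), "rounds")
```

With shared visibility, the ten helpers finish in a fraction of the rounds, and the entire speedup comes from each helper seeing, and responding to, what the others have just done.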
This is the Republic of Science. This is how smart people with different interests and skill sets can help put together life’s great puzzles.
In the Republic of Science, there is certainly room for experts. But they are hubs among nodes. In this network, leadership is earned not by sitting atop an institutional hierarchy with the plumage of a postdoc, but by contributing, experimenting, communicating, and learning with the rest of a larger hive mind.
This is science in the peer-to-peer age.
Max Borders is Senior Advisor to The Advocates. You can read more from him at Underthrow.