We live in a world with unprecedented access to information. At the same time, we’re witnessing more attempted information control than ever before. Have you noticed the irony? It’s gotten so strange and ubiquitous that we even have new terms entering the vernacular, like “freedom of reach,” which are just euphemisms for soft censorship.
What Is Freedom of Reach?
Many online platforms claim to protect free speech in principle, but they fail to do so in practice. They say users have the right to openly express their thoughts and opinions in these spaces. Then they systematically curtail who can see what users and creators have posted.
On online platforms, your “reach” refers to how far your message travels. It’s a measure of how many people a platform’s algorithm actually serves your message to.
These online spaces are touted as digital town halls, yet they actively shadowban or reach-limit certain types of content or ideas. They often do this without letting users know which ideas are being throttled.
This practice renders certain types of content practically invisible. Sure, users technically have the freedom to say what they want, but platforms use algorithms to pick and choose which messages are served to other users.
The censored content isn’t removed, but it also isn’t served to your own followers, even though they opted in to receive your content. It isn’t highlighted in relevant keyword searches. It doesn’t have the opportunity to organically filter upward. Limiting factors, implemented by the platforms themselves, guarantee that just about no one is seeing what you’ve said.
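The mechanics described above can be sketched as a simplified, hypothetical ranking model. No platform publishes its actual algorithm, so every name, keyword, and weight below is invented purely for illustration:

```python
# Hypothetical sketch of reach-limiting in a feed-ranking algorithm.
# All names, weights, and keyword lists here are invented for illustration;
# real platform ranking systems are unpublished black boxes.

SUPPRESSED_KEYWORDS = {"example-topic-a", "example-topic-b"}  # assumed, never disclosed to users
SUPPRESSION_FACTOR = 0.01  # throttling, not removal

def rank_score(post_text: str, base_engagement: float) -> float:
    """Return a visibility score. Throttled posts stay on the platform
    but rank so low they are effectively never served to anyone."""
    score = base_engagement
    if any(kw in post_text.lower() for kw in SUPPRESSED_KEYWORDS):
        score *= SUPPRESSION_FACTOR  # silently down-ranked
    return score

# A viral-quality post on a throttled topic ends up scoring below a
# mediocre post on an unthrottled one, so it never reaches followers
# or surfaces in search.
viral_but_throttled = rank_score("thoughts on example-topic-a", 950.0)
mediocre_but_clear = rank_score("what I had for lunch", 40.0)
assert viral_but_throttled < mediocre_but_clear  # 9.5 < 40.0
```

The point of the sketch is the asymmetry: nothing is deleted, and the author sees their post as published, yet a single opaque multiplier decides whether anyone else ever sees it.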
Different standards are applied to different users. There’s no transparency about what those standards are, how they function, or which topics and keywords the platform would prefer users avoid. Users of the same exact service are treated unequally.
Even good content, the kind that could accrue large amounts of organic interaction or even go viral, gets hidden, remaining visible to only a few. The amplification that would have made these ideas pop to the top of the algorithm is almost completely stifled.
Some platforms own the practice. Others deny it. One of its more insidious features is plausible deniability: the way it is done makes it invisible and easily disavowed. And that means there is no recourse for the user or content creator.
They’re Gatekeeping Ideas from You
By choosing which keywords to boost or suppress, platforms make unilateral decisions that propel or gatekeep ideas and concepts. This is not trivial. The propagation or suppression of certain messages over others can alter the trajectory of our entire culture.
The entire marketplace of ideas is being altered according to mysterious criteria that are rarely made available to the public. The algorithms the platforms use are opaque black boxes. There is no way for a user or content creator to track the magical message filtration and suppression occurring inside that black box. There is no accountability or transparency here.
If an idea is stifled through forced suppression, it never has the opportunity to gain momentum organically over time. And if the censored information happens to be important or of societal value, beneficial progress could be halted as a result. This has a direct impact on the marketplace of ideas.
Protecting Real Diversity of Thought
This suppression limits the breadth and depth of the marketplace of ideas. Instead of opening up discourse and allowing the people themselves to choose which concepts fare best, viewpoints outside a preapproved range simply never get seen. The platform decides, top-down, which thoughts are permissible and which are not.
In a real marketplace of ideas, truth emerges from open competition among thoughts, ideas, and opinions in a free and open environment. Ideally, within this environment, we openly share, debate, promote, or dismiss viewpoints as they come under scrutiny. We hone our opinions accordingly. Then our ideas pass or fail society’s open evaluation test.
In a worst-case scenario, suppression leads to terrible ideas going underground to proliferate unquestioned. In an environment where certain topics feel verboten, people become less likely to vocalize their thoughts. They hide their true perspectives out of fear. They end up forming opinions that go unchallenged because they are never put to the test through open exchange. Without sounding boards, they build additional opinions on false premises.
Open dialogue prevents this. When challenged, people are able to see where their opinions have holes, fallacies, or illogical conclusions. This gives everyone the opportunity to learn, pivot, and grow when their ideas do not stand up. This iterative growth leads toward a shared wisdom that cannot otherwise emerge.
Reverse Engineering the Overton Window
The Overton window refers to the range of political ideas considered socially acceptable and publicly discussable at a given time. Absent artificial manipulation, it is an emergent organic construct. It forms when ideas within a society are put to the test over time and then adopted or rejected by the mainstream. Ideas inside the window are accepted. Ideas outside the window are considered fringe.
But when the mainstream is not allowed to hear certain points of view at all, the Overton window ceases to be organic. This ordinarily useful, naturally occurring phenomenon gets hijacked. It is then twisted to manufacture consent or dissent by making certain ideas seem more popular and accepted while suppressing others. People are thus manipulated into accepting or rejecting certain notions over others through a prescriptive dialogue that serves an agenda.
For instance, an idea—even one that has significant grassroots support—can, through manipulation of the Overton window, be prevented from reaching critical mass. The algorithm throttles the idea based on its settings, rejecting it before it can gain any traction.
What You Can Do About It
True freedom requires open and honest communication on all “sides.” And communication cannot occur without listening, feedback, and response. If we’re allowing the communication process to be overtaken, railroaded, curtailed, or otherwise manipulated, how can we expect to protect the marketplace of ideas? How can we find the ideas with the most merit? How can we implement them for the overall betterment of society?
Question why you’re receiving certain messages and not others. Audit your feed by tracking what you are and are not being shown. Use this information to notice any gaps forming.
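One simple way to run such an audit is sketched below. The data is invented for illustration; in practice you would log it by hand or through whatever export tools a given platform actually offers:

```python
# A minimal feed-audit sketch: tally what you're shown and spot gaps.
# The accounts and counts here are invented examples; real audit data
# would have to be logged manually while browsing.
from collections import Counter

# Accounts you follow and have opted in to see.
followed = {"alice", "bob", "carol", "dan"}

# Accounts whose posts actually appeared in your feed this week (hand-logged).
shown = ["alice", "alice", "bob", "alice", "bob"]

counts = Counter(shown)
never_shown = followed - counts.keys()

print("Posts served per followed account:", dict(counts))
print("Followed but never shown:", sorted(never_shown))
# If carol and dan post regularly yet never appear in your feed,
# that gap is exactly the kind of thing worth questioning.
```

Even a crude tally like this makes an otherwise invisible pattern concrete: you cannot notice a gap in your feed until you count what is and is not arriving.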
Deliberately seek out information that you haven’t been exposed to by searching for perspectives you wouldn’t ordinarily see. Visit sites that you know the algorithm would never show you.
Be aware that you may encounter a barrier to entry in this effort. It’s difficult to look up what you don’t know you should be searching for if you don’t even know it exists. So ask around. Encourage friends to do the same thing so you can all become more informationally well-rounded together.
Demand that the sites you interact with be transparent about how content is ranked and suppressed. Ask to see how it is determined that certain people receive certain messaging while others do not. Ask for clear disclosures of site policies.
Choose whether you’d like to opt out of certain platforms that are not transparent about their practices. Check the policies of the sites you use, and favor frequenting platforms that maintain better policies.
Focus on the restoration of a shared reality—one in which we encounter a true free marketplace of ideas with informed consent. Favor the formation of organic Overton windows that provide open access to a full range of available information without algorithmic siloing.
Informed consent is crucial to maintaining freedom and independence. Without questioning and understanding how the information we’re consuming reaches us, and why we were chosen to receive certain messages instead of others, our ability to self-govern becomes crippled at best. Taking steps every day to ensure that our choices reflect our values can help us restore true freedom of speech to the digital landscape.