Fixing the Feed
In the first post, I described the broken information supply chain — the collapse of local journalism, algorithms optimized for engagement over truth, eroded institutional trust, and industrialized disinformation. In the thesis post, I argued that fixing it requires structural intervention on three layers: the supply, the filter, and the infrastructure.
This post takes on the first layer: the supply. What actually works to fix the information people are reasoning from?
The answer is not a single policy or platform feature. It is a set of interlocking interventions — some boring, some counterintuitive, all supported by evidence. None is sufficient alone. Together, they form the outline of a functioning information ecosystem.
The Collapse, by the Numbers
Before we talk about solutions, we need to be precise about the problem.
Between 2004 and 2020, the United States lost more than 2,100 newspapers. Roughly 1,800 communities became "news deserts" — places with no local news coverage at all (Abernathy, 2020, University of North Carolina Hussman School of Journalism, News Deserts and Ghost Newspapers).
This is not just a cultural loss. It is a measurable civic failure. When local newspapers close, municipal borrowing costs increase by 5 to 11 basis points — because the oversight function disappears, governance deteriorates, and investors price in the increased risk of mismanagement (Gao, Lee & Murphy, 2020, Journal of Financial Economics). On a $100 million bond issue, that is an extra $50,000 to $110,000 in interest every year, paid by taxpayers who no longer have a reporter at the council meeting. Reduced local news coverage also leads to lower political engagement and reduced government accountability (Hayes & Lawless, 2018, Information, Communication & Society).
The communities left behind did not stop consuming information. They started consuming whatever filled the vacuum: partisan media, social media, and algorithmically curated content optimized not for accuracy but for engagement. The supply chain didn't just break. It was replaced by something worse.
Public Broadcasting: The Boring Solution That Works
The single most consistent finding in comparative media research is this: countries with well-funded public broadcasters have higher news trust, more substantive coverage, and less sensationalism.
The Reuters Institute for the Study of Journalism at Oxford publishes an annual Digital News Report surveying news consumption and trust across dozens of countries. The 2022 report found that trust in news was 69% in Finland, 65% in Denmark, and 26% in the United States. The pattern holds year after year. Countries with strong public media systems — Finland (YLE), Denmark (DR), the UK (BBC), Germany (ARD and ZDF), Japan (NHK) — consistently outperform countries that rely primarily on commercial media.
Rodney Benson (2018, The International Journal of Press/Politics) compared public media systems across 18 democracies and found that countries with greater public funding of media produced more hard news coverage and less sensationalism. This is not a coincidence. It is a structural effect: when the business model is public funding rather than advertising revenue, the incentive shifts from capturing attention to serving the public interest.
Daniel Hallin and Paolo Mancini (2004, Comparing Media Systems, Cambridge University Press) identified three media system models: Liberal (US, UK), Democratic Corporatist (Northern Europe), and Polarized Pluralist (Mediterranean). The Democratic Corporatist model — characterized by strong public broadcasting, organized civil society, and professional journalistic norms — is associated with higher political knowledge and participation among citizens.
Victor Pickard (2020, Democracy Without Journalism?, Oxford University Press) argues that the collapse of local journalism in the United States is a market failure — not a natural outcome but a predictable consequence of an advertising-dependent business model that ceased to function when advertising revenue migrated to platforms. The fix, he argues, requires public intervention: direct funding of journalism as civic infrastructure, the way we fund roads, courts, and schools.
This is not a radical proposition. It is the norm in most functioning democracies. The United States is the outlier — not because public media is impractical, but because of a specific political history that has conflated public funding with government control. The two are not the same. The BBC is publicly funded. It is not state propaganda. The distinction matters.
Nonprofit and Membership Journalism
Public broadcasting is not the only structural alternative to advertising-dependent media. The last two decades have seen the emergence of nonprofit and membership-funded journalism models that produce high-impact work without the engagement incentive.
ProPublica, founded in 2007 as a nonprofit investigative newsroom, has won six Pulitzer Prizes — demonstrating that philanthropically funded journalism can compete at the highest level of quality and impact. The Guardian's membership model has sustained one of the world's major newsrooms without a paywall and without the advertising incentive structure that degrades coverage quality. De Correspondent, a Dutch membership-funded platform, built a model around "unbreaking news" — slow, substantive journalism that explicitly rejects the engagement optimization of commercial media.
The American Journalism Project, launched in 2019 as a venture philanthropy fund for local news, had invested over $50 million in local nonprofit newsrooms by 2023. This is a structural intervention: redirecting capital toward journalism that serves communities rather than advertisers.
These models are not perfect. They depend on philanthropic and reader support, which can be fickle. They do not solve the scale problem — most news deserts cannot be filled by a single nonprofit. But they demonstrate that the business model, not the demand for quality journalism, is what broke. When the incentive structure changes, the journalism improves.
Platform Design: Friction, Nudges, and the Accuracy Prompt
The most counterintuitive finding in recent misinformation research is also one of the most replicated: you can significantly reduce misinformation sharing with absurdly simple interventions.
Pennycook, Epstein, Mosleh, Arechar, Eckles, and Rand (2021, Nature) found that simply asking people to consider the accuracy of a headline — before they decided whether to share it — reduced sharing of misinformation by approximately 50%. Not a media literacy course. Not a fact-check. Not a warning label. A single question: "How accurate is this headline?"
The mechanism is straightforward. On social media, people are not primarily thinking about accuracy. They are thinking about engagement — what will get reactions, what expresses their identity, what is funny or outrageous. The accuracy prompt redirects attention from "Will this get a reaction?" to "Is this true?" — a shift that most people, regardless of political affiliation, are willing to make when prompted.
Mosleh, Pennycook, and Rand (2021, Journal of Experimental Psychology: General) found that adding friction — a prompt asking "Are you sure you want to share this?" — reduced sharing of false content. Twitter's experiment with prompting users to read an article before retweeting it (implemented in 2020) produced a 40% increase in article opens and a 33% decrease in retweets without reading, according to Twitter's internal reporting.
These are design interventions. They do not require users to become more educated, more disciplined, or more intelligent. They require platforms to add a moment of friction where currently there is none — a speed bump between the impulse and the action. The fact that such a small change produces such a large effect tells you something important about the architecture of the problem: much of misinformation sharing is not driven by belief. It is driven by thoughtlessness. People are not deliberately spreading falsehoods. They are sharing without thinking, because the platform is designed to make sharing effortless and thinking optional.
This is fixable. It is a design choice. The current design optimizes for speed and volume. An alternative design could optimize for accuracy — not by censoring content, but by inserting the cognitive pause that the platform currently removes.
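To make that concrete, here is a minimal sketch of what such a speed bump could look like inside a share flow. Everything in it is hypothetical: the function names, the sampling rate, the prompt wording. It is not any platform's real API, only an illustration of the structural point that the intervention pauses the action without ever blocking it.

```python
import random

# A minimal sketch of an accuracy-prompt "speed bump" in a share flow,
# in the spirit of Pennycook et al. (2021). Every name and parameter
# here is hypothetical; this is not any platform's real API.

ACCURACY_PROMPT = "Before you share: how accurate is this headline?"
SCALE = ("not at all", "slightly", "fairly", "very")

def show_prompt(post):
    """Stand-in for a UI dialog. A real client would render the prompt
    and wait for the user's rating; here we simulate a response."""
    print(f"{ACCURACY_PROMPT}\n  > {post['headline']}")
    return random.choice(SCALE)

def share(post, feed, sample_rate=0.1):
    """Complete a share, interrupting a random fraction of shares.

    The key design property: nothing is censored and nothing is blocked.
    The prompt only redirects attention from "will this get a reaction?"
    to "is this true?" before the action goes through.
    """
    if random.random() < sample_rate:
        rating = show_prompt(post)  # the moment of friction
        print(f"(accuracy rating logged: {rating!r})")  # research signal, not moderation
    feed.append(post)  # the share always completes

feed = []
share({"headline": "Scientists hate this one weird trick"}, feed, sample_rate=1.0)
print(f"{len(feed)} post(s) shared")
```

Note what the sketch does not do: it never evaluates the content, never blocks the share, and never needs to know what is true. The friction itself is the intervention.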
The EU Experiment: Regulation at Scale
The European Union's Digital Services Act (DSA), which entered into force in November 2022 and became fully applicable to all platforms by February 2024, represents the most comprehensive regulatory attempt to address the structural incentives of the attention economy.
For Very Large Online Platforms — those with 45 million or more monthly EU users — the DSA requires mandatory systemic risk assessments, algorithmic transparency and auditability, researcher access to data, and a ban on dark patterns (deceptive design elements that manipulate user behavior).
The "systemic risk" framework is novel. It requires platforms to proactively identify and mitigate risks that their systems pose to public discourse — not just remove individual pieces of bad content, but assess whether their algorithmic design systematically amplifies harmful content. This shifts the regulatory focus from content moderation (a reactive, whack-a-mole approach) to system design (a structural intervention).
Daphne Keller of the Stanford Cyber Policy Center has noted that the framework's success depends entirely on enforcement capacity — the EU Commission must be able to audit platforms meaningfully, which requires technical expertise and political will. Natali Helberger (2020, European Journal of Communication) argues that algorithmic transparency alone is insufficient without meaningful accountability mechanisms. Transparency without consequences is a performance, not a reform.
The DSA is an experiment. It is too early to measure its long-term effects. But it represents a structural recognition that the information environment is a designed system — and that the design choices made by a handful of private companies have public consequences that justify public oversight.
Community Notes: Crowdsourced Correction
Twitter's Community Notes program (launched as Birdwatch in January 2021, rebranded in 2022) represents an alternative to top-down fact-checking: crowdsourced correction with a structural twist.
The key innovation is the "bridging algorithm." A note is only shown publicly if it is rated as helpful by users who typically disagree with each other — users from different political orientations. This means a note cannot be weaponized by a single partisan group. It must satisfy people on both sides of the aisle. Allen, Arechar, Pennycook, and Rand (2021) found in an early evaluation that notes were indeed rated as helpful across partisan lines, suggesting the bridging mechanism works.
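The production system is more sophisticated than this (it fits a matrix factorization model to the full rating history and infers viewpoint clusters from it), but the core rule fits in a few lines. The toy version below, with invented data and a made-up threshold, shows the bridging idea: a note is displayed only if every rater cluster finds it helpful, not just the majority.

```python
from statistics import mean

# Toy illustration of the bridging rule, not the production algorithm.
# Groups here are a stand-in for the viewpoint clusters the real system
# infers from each rater's history; the data and threshold are invented.

def show_note(ratings, threshold=0.6):
    """Display a note only if every viewpoint cluster rates it helpful.

    `ratings` is a list of (cluster, helpful) pairs, helpful being 0 or 1.
    A note backed by only one cluster never appears, which is what keeps
    a single partisan bloc from weaponizing the system.
    """
    clusters = {}
    for cluster, helpful in ratings:
        clusters.setdefault(cluster, []).append(helpful)
    return all(mean(votes) >= threshold for votes in clusters.values())

bridging = [("left", 1), ("left", 1), ("left", 0),
            ("right", 1), ("right", 1), ("right", 1)]
partisan = [("left", 1), ("left", 1), ("left", 1),
            ("right", 0), ("right", 0)]

print(show_note(bridging))  # True: both clusters find the note helpful
print(show_note(partisan))  # False: only one side does
```

The all-clusters rule is what distinguishes bridging from simple majority voting: a note with overwhelming support from one side and none from the other scores worse than a note with moderate support from both.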
The limitation is speed. Misinformation spreads fastest in its first hours. Community Notes, driven by volunteer effort, takes hours to days to produce and rate notes. By the time a note appears, most of the virality window has passed. This is the fundamental tension: crowdsourced quality takes time, and the platform rewards speed.
This connects to a deeper finding. Vosoughi, Roy, and Aral (2018, Science) analyzed the spread of verified true and false news on Twitter and found that false news spreads approximately six times faster than true news. The asymmetry is structural: false claims are typically more novel, more emotionally arousing, and more shareable than accurate corrections. Any system that relies on post-hoc correction will always be playing catch-up against a platform architecture that rewards speed over accuracy.
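A toy model makes the arithmetic of that catch-up problem concrete. The parameters below are invented rather than taken from the Vosoughi data; only the shape of the curve matters. Assume a cascade whose sharing follows a logistic curve peaking around hour six, and ask how much of the sharing is already finished when a correction arrives.

```python
import math

# Toy model of the virality window. Parameters are invented for
# illustration; only the shape of the curve is the point.

def fraction_shared(t_hours, midpoint=6.0, rate=0.8):
    """Fraction of a cascade's total shares that happen by hour t,
    assuming logistic growth that peaks around `midpoint` hours."""
    return 1 / (1 + math.exp(-rate * (t_hours - midpoint)))

for delay in (2, 6, 12, 24):
    print(f"correction at {delay:>2}h: "
          f"{fraction_shared(delay):.0%} of the sharing already happened")
```

Under these made-up numbers, a correction posted at hour twelve arrives after 99% of the sharing has already happened. The specific percentages are illustrative; the structural conclusion is not.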
Community Notes is a valuable experiment. But it is a patch on a system whose fundamental incentive structure remains unchanged.
Prebunking: Inoculation Before Exposure
If debunking — correcting misinformation after exposure — is always fighting from behind, the research suggests a more promising alternative: prebunking — building resistance before exposure, the way a vaccine builds immunity before infection.
The theoretical foundation is inoculation theory, first developed by William McGuire in the early 1960s. The insight is simple: just as a weakened virus triggers the immune system to build defenses, exposing people to weakened forms of manipulation techniques triggers cognitive defenses against those techniques when encountered at full strength.
Sander van der Linden and colleagues at Cambridge have been the leading researchers applying this framework to misinformation. Roozenbeek and van der Linden (2019, Palgrave Communications) developed the "Bad News" game — an online browser game that teaches players to recognize misinformation techniques: emotional manipulation, polarization, conspiracy thinking, trolling, impersonation, and discrediting. Players who completed the game showed improved ability to identify misinformation and rated fake headlines as less reliable. The effect held across the political spectrum — this was not a partisan intervention.
In 2022, van der Linden's team scaled the approach. Roozenbeek, van der Linden, and colleagues (2022, Science Advances) conducted a large preregistered study (N > 5,000) showing that short prebunking videos — approximately 90 seconds each — teaching recognition of manipulation techniques (emotional language, false dichotomies, scapegoating) improved participants' ability to identify manipulative content by 5 to 10 percentage points. In collaboration with Google's Jigsaw unit, the team then ran the videos as pre-roll ads on YouTube in Poland, the Czech Republic, and Slovakia — regions targeted by anti-refugee disinformation. The campaign reached over one million viewers and produced measurable improvement in manipulation technique recognition.
Maertens, Roozenbeek, Basol, and van der Linden (2021, Journal of Experimental Psychology: Applied) found that the inoculation effect from the Bad News game lasted at least three months, though with some decay.
The advantage of prebunking over debunking is structural. Debunking is reactive and topic-specific — you correct one false claim at a time, always after the damage is done. Prebunking is proactive and technique-based — you teach people to recognize the methods of manipulation, regardless of topic. This makes it scalable in a way that fact-checking never can be.
The limitation is effect size. The improvements are real but modest — 5 to 10 percentage points. Prebunking is not a silver bullet. It is one layer of defense in a system that needs many.
Lewandowsky, Cook, Ecker, and colleagues documented a complementary challenge in The Debunking Handbook 2020: even after correction, people continue to rely on misinformation — the "continued influence effect." Walter, Cohen, Holbert, and Morag (2020, Political Communication) conducted a meta-analysis of 65 studies on fact-checking and correction, finding that corrections generally reduce belief in misinformation (average effect size d = 0.29, small by conventional benchmarks), but the effects shrink for politically concordant misinformation — that is, misinformation that aligns with the reader's political identity.
The implication: prebunking works better than debunking, but neither is sufficient. Both are demand-side interventions — they improve how individuals process information. They do not change the information supply itself.
The Asymmetry Problem
There is one finding in the research that is important and uncomfortable, and I am going to state it plainly because the data demands it.
Yochai Benkler, Robert Faris, and Hal Roberts of the Harvard Berkman Klein Center analyzed over four million stories and 1.25 million Facebook shares during and after the 2016 US election. Their findings, published in Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (Oxford University Press, 2018), are clear: the misinformation problem in American media is asymmetric.
The right-wing media ecosystem operated with Breitbart as its center, largely disconnected from professional journalistic norms and fact-checking practices. The center-left ecosystem remained anchored to professional outlets — the New York Times, the Washington Post, CNN — that, whatever their flaws, operate within norms of editorial accountability, correction policies, and source verification.
This does not mean the left is immune to misinformation. It is not. It does not mean centrist or left-leaning outlets never get things wrong. They do. It means that the structural relationship to journalistic norms differs between the two ecosystems in a way that is empirically measurable and directionally consistent.
I state this not to score a political point but because any honest analysis of the information supply chain must account for this asymmetry. A "both sides are equally misinformed" framing is not supported by the data. The structural problems are different, and the solutions must account for that difference. Pretending otherwise in the name of balance is itself a form of misinformation — the kind that sacrifices accuracy for the appearance of fairness.
What's Missing
Everything in this post — public broadcasting, nonprofit journalism, platform friction, regulation, prebunking, Community Notes — addresses the supply side. These interventions improve the quality and accessibility of reliable information. They make it easier to encounter truth and harder to spread falsehood without thinking.
But they share a common limitation: they assume that if you put good information in front of people, people will use it.
They won't. Not reliably. Not without help. Because the filter — the cognitive bias system described in the second post — is still running. You can fix the feed and people will still scroll past the correction to share the outrage. You can fund the best journalism in the world and people will still read only the sources that confirm what they already believe.
The supply side is necessary. It is not sufficient. The next post turns inward — to the cognitive filter itself, and what the evidence says about fixing it.
The information supply chain is broken, but not beyond repair. The tools exist — public media, platform redesign, prebunking, structural reform. The question is not whether they work. The evidence says they do. The question is whether we will build them.
Next: Debiasing the Reasoner — on what actually works to fix the cognitive filter, and why "just think harder" is not on the list.