Key Takeaways
1. The Pageview Economy Corrupts Journalism and Amplifies Fake News.
The news, whether it’s found online or in print, is just the content that successfully navigated the media’s filters.
Financial incentives. The shift from print subscriptions to online ad revenue has fundamentally reshaped journalism, prioritizing web traffic (pageviews) over truth. Blogs and online newspapers, often financially strapped, are driven to produce content rapidly, leading to rushed, sloppy writing and a lack of fact-checking. This creates a "perfect storm" where misleading articles with sensational headlines often generate more clicks and, consequently, more revenue than authentic journalism.
Upward propagation. This pageview-driven model enables a dangerous "upward vertical flow" of misinformation. Small, underfunded blogs, lacking resources and incentives for verification, publish unverified stories. Mid-tier blogs, desperate for content, pick these up, often citing the original blog as a source without independent verification. Eventually, national news sites may cover the "virality" of the story itself, lending it legitimacy despite its dubious origins.
Historical parallels. This phenomenon echoes the "yellow press" of the late 19th century, where sensational headlines trumped factual content. While the 20th-century subscription model incentivized truthful journalism, the 21st century's digital landscape, with its countless competing sites and detailed pageview data, has created a "yellow press on digital steroids," where quantitative metrics drive a constructed reality. The decline of local newspapers has further exacerbated this, creating a vacuum filled by deceptive, politically motivated "pink slime" news networks.
2. AI Now Generates Convincing Fake Content, From Photos to Full Articles.
Thanks to deepfake technology, trying to find the source of a potentially fake profile picture is like searching for a needle in a haystack, except now the needle may not exist.
Synthetic personas. Artificial intelligence can now create highly realistic, untraceable profile photos of people who do not exist, commonly grouped under the "deepfake" label. These synthetic images are increasingly used to fabricate journalists and personas, as with "Oliver Taylor" or Russian government campaigns, to spread disinformation. Because the photos correspond to no real original, traditional reverse image searches are ineffective, posing a significant challenge to verifying online identities.
Automated writing. Beyond images, AI can now generate entire articles and headlines. Microsoft's MSN replaced human curators with AI for headline writing and content optimization, leading to errors like misidentifying individuals in stories about racism. OpenAI's GPT-3, a massive text generation model, can extend prompts into full articles so convincingly that human readers struggle to distinguish them from human-written content, raising concerns about its potential for large-scale disinformation campaigns.
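To make this prompt-to-article mechanism concrete, here is a minimal sketch using the open Hugging Face transformers library with GPT-2 (a smaller, publicly released relative of GPT-3, which is accessible only through an API); the prompt and generation settings are arbitrary illustration choices.

```python
# Minimal sketch: extending a short prompt into article-like text.
# GPT-2 stands in for GPT-3 here; the prompt below is invented.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Local officials confirmed today that"  # hypothetical prompt
outputs = generator(
    prompt,
    max_length=200,          # extend the prompt to roughly 200 tokens
    num_return_sequences=3,  # draft several candidate "articles" at once
    do_sample=True,          # sample rather than pick the single likeliest word
    temperature=0.9,         # higher temperature -> more varied, less repetitive text
)

for i, out in enumerate(outputs, 1):
    print(f"--- Draft {i} ---")
    print(out["generated_text"])
```

Even this small model produces fluent-sounding paragraphs in seconds, which is the scaling concern the examples below illustrate.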
Real-world impact. The ease of AI content generation has already been demonstrated:
- A college student used GPT-3 to create a fake blog that topped Hacker News.
- A Reddit bot, powered by Philosopher AI (derived from GPT-3), posted lengthy, human-like replies at an impossible rate.
- GPT-3 can generate harmful, radicalizing text, from white supremacist manifestos to QAnon narratives, simply from short prompts.
This highlights the growing threat of AI in rapidly scaling the production of deceptive content.
3. Deepfake Videos Threaten Trust in Visual Evidence and Fuel Political Instability.
Not only may fake videos be passed off as real, but real information can be passed off as fake. This is called the liar’s dividend, in which people with a propensity to deceive are given the benefit of an environment in which it is increasingly difficult for the public to determine what is true.
Manipulating reality. Deepfake technology is built on generative adversarial networks (GANs), a deep learning technique in which a generator network and a discriminator network are trained against each other until the generator's fakes can no longer be told apart from real footage. The result is video in which individuals appear to say or do things they never did. This creates a dual threat: the spread of fabricated events and the erosion of trust in authentic videos, since any damning footage can be dismissed as a "deepfake." This "liar's dividend" empowers malicious actors and makes discerning truth increasingly difficult.
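To make the adversarial setup concrete, here is a toy GAN skeleton in PyTorch; the layer sizes, optimizers, and flattened-image setup are simplifying assumptions, and real deepfake pipelines are vastly more elaborate.

```python
# Toy GAN skeleton illustrating the generator-vs-discriminator training loop.
# Greatly simplified: real deepfake systems use face-specific architectures
# and huge image/video datasets; all dimensions here are arbitrary.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 100, 64 * 64  # random-noise size, flattened image size

generator = nn.Sequential(           # noise -> fake image
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(       # image -> probability it is real
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images):
    """One adversarial update; real_images is a (batch, IMG_DIM) tensor."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    fake_images = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into answering "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```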
Shallowfakes vs. deepfakes. Simple video manipulations, or "shallowfakes," have a long history in politics, from Abraham Lincoln's altered portrait to slowed-down videos of Nancy Pelosi. These often involve basic editing like splicing or mislabeling. Deepfakes, however, use advanced AI to create seamless, convincing alterations, making detection far more challenging. While shallowfakes still sow confusion, deepfakes represent a new frontier of deception.
Political ramifications. Deepfakes have already impacted politics globally:
- A pornographic deepfake of Indian journalist Rana Ayyub led to severe harassment and health issues.
- A suspected deepfake video of Gabon's President Ali Bongo fueled a coup attempt.
- Brief deepfake conspiracies surrounded Donald Trump's COVID-19 video and a Myanmar military confession.
- Indian political campaigns used lip-synced deepfakes of candidates, blurring the line between voter outreach and deception.
While some deepfakes are satirical, their malicious use is growing exponentially, doubling every six months, with significant implications for elections and public trust.
4. YouTube's Algorithms Drive Viewers Towards Extremism and Conspiracy Theories.
YouTube’s powerful recommendation algorithm, which pushes its two billion monthly users to videos it thinks they will watch, has fueled the platform’s ascent to become the new TV for many across the world.
Algorithmic amplification. YouTube's recommendation algorithm, responsible for roughly 70% of watch time on the platform, has been criticized for systematically amplifying divisive, sensationalist, and false videos. Optimizing first for "views," then for "watch time," and later incorporating deep reinforcement learning, the algorithm learned long-term strategies for keeping users engaged. In practice this often meant pushing viewers down "rabbit holes" of increasingly provocative content, even when that content was misinformation or extremist propaganda.
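The incentive problem can be seen in a toy ranking function: if candidate videos are scored by predicted watch time alone, accuracy never enters the objective. This is a hypothetical sketch, not YouTube's actual system; every field and number below is invented.

```python
# Toy illustration of engagement-first ranking (not YouTube's real system).
# Videos are ranked purely by predicted watch time; truthfulness never enters
# the score, so sensational or false content can easily win the top slots.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # hypothetical output of an engagement model
    fact_checked: bool              # known to the platform, ignored by the ranker

candidates = [
    Video("Calm, accurate local news recap", 2.1, fact_checked=True),
    Video("SHOCKING conspiracy THEY don't want you to see", 9.4, fact_checked=False),
    Video("Explainer: how vaccines are tested", 3.0, fact_checked=True),
]

def rank(videos):
    # Engagement-only objective: sort by predicted watch time, descending.
    return sorted(videos, key=lambda v: v.predicted_watch_minutes, reverse=True)

for v in rank(candidates):
    print(f"{v.predicted_watch_minutes:5.1f} min  {v.title}")
```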
Radicalization pathways. In Brazil, YouTube's algorithm played a significant role in the rise of far-right politicians like Jair Bolsonaro, frequently recommending conspiracy-filled channels. Studies suggest that the algorithm inadvertently favored far-right content because emotions like fear, doubt, and anger, often leveraged by extremists, drive high watch times. This accidental synergy, amplified by deep learning, led to users being exposed to and potentially radicalized by increasingly extreme views.
Persistent challenges. Despite YouTube's claims of reducing harmful recommendations, external studies show mixed results. While some conspiracy content like Flat Earth videos saw declines, others like climate change denial persisted. The algorithm's personalization, based on detailed user history, makes external auditing difficult. Furthermore, YouTube's internal research revealed that its "recommendation systems grow the problem" and "exploit the human brain’s attraction to divisiveness," yet efforts to recalibrate were often dismissed for fear of throttling "engagement."
5. AI-Powered Lie Detectors Are Unreliable and Perpetuate Bias, Not Truth.
There is no lie detector, neither man nor machine.
Pseudoscience persists. The traditional polygraph, or "lie detector," has a century-long history of scientific skepticism, failing the "Frye standard" for scientific evidence in court. Despite this, it remains widely used for public sector employment screening, generating high rates of false positives and exhibiting racial bias. This legacy of unproven technology is now being reinvented with AI, but the fundamental flaws remain.
Algorithmic flaws. AI-powered lie detectors, such as EyeDetect (analyzing eye movements) and Silent Talker (micro-gestures), claim high accuracy but lack independent scientific validation. Their proprietary, black-box algorithms are trained on limited, potentially biased data, leading to:
- Inconsistent performance across different populations (e.g., less effective on less-educated individuals).
- Adjustable "sensitivity" settings that can embed historical biases into the system.
- Pernicious data-driven feedback loops, where algorithmic bias exacerbates existing societal inequalities.
Dangerous applications. These unreliable AI lie detectors are being trialed in sensitive areas like airport security (iBorderCtrl, Avatar) and used for employment screening and fraud detection. Despite their flaws, their low cost and automated operation allow for unprecedented scaling, raising concerns about widespread, unfair misclassification. The promise of "mind-reading" through technology is a lucrative but dangerous illusion, creating more fake news about truth detection than it prevents.
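A small synthetic calculation illustrates how an adjustable "sensitivity" threshold, applied to deception scores that run higher for one group than another, mechanically produces unequal false-accusation rates; all distributions and numbers below are invented for illustration and come from no real lie-detector product.

```python
# Toy illustration: a tunable decision threshold on a "deception score"
# produces false positives, and biased scores shift that burden by group.
import random

random.seed(0)

def deception_scores(n, mean):
    """Hypothetical scores for truthful people; a higher mean means the model
    is more suspicious of this group even when everyone is telling the truth."""
    return [random.gauss(mean, 0.1) for _ in range(n)]

# Suppose the model systematically scores truthful people in group B higher
# (e.g., because it was trained mostly on group A).
group_a = deception_scores(10_000, mean=0.45)
group_b = deception_scores(10_000, mean=0.55)

for threshold in (0.5, 0.6, 0.7):  # the vendor-adjustable "sensitivity" setting
    fp_a = sum(s > threshold for s in group_a) / len(group_a)
    fp_b = sum(s > threshold for s in group_b) / len(group_b)
    print(f"threshold={threshold:.1f}  "
          f"false-positive rate A={fp_a:.1%}  B={fp_b:.1%}")
```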
6. Google's Algorithms Inadvertently Spread Misinformation and Racism Across Its Products.
Search engines have come to play a central role in corralling and controlling the ever-growing sea of information that is available to us, and yet they are trusted more readily than they ought to be.
Ubiquitous influence. Google, as the primary gateway to online information for billions, wields immense power in shaping public perception. Its algorithms, across products like Search, Maps, Images, and Autocomplete, can inadvertently amplify misinformation and perpetuate harmful stereotypes, often by reflecting existing societal biases without adequate contextual understanding. This makes Google an "object of faith" that can distort reality.
Racism in algorithms. Google's algorithms have repeatedly demonstrated racial bias:
- Searching Google Maps for a racist slur directed users to the White House.
- Google Images associated "unprofessional hairstyles" with Black women and "ugly woman" with Black/Brown women.
- Google Photos tagged Black individuals as "gorillas," leading to the removal of the tag rather than a fix.
- Google Autocomplete suggested offensive and racist phrases like "why do black people hate jews" or "black lives matter is a hate group."
These incidents highlight how algorithms, by naively absorbing and reflecting real-world data, can amplify societal racism on a massive scale.
Misinformation amplification. Google Search and Autocomplete have also propagated fake news:
- In 2016, a fake election result claiming Trump won the popular vote topped Google Search.
- Featured Snippets provided false answers to questions like "Is Obama planning a coup?"
- Autocomplete suggested phrases like "civil war is coming" or "coronavirus is not that serious," even when less popular than factual queries.
Google's efforts to "elevate quality journalism" involve human evaluators training algorithms, but the system remains "brittle," sometimes producing shockingly bad outputs and struggling to define or consistently remove misinformation.
7. Algorithmic Advertising Funds Fake News and Enables Discrimination.
One of the incentives for a good portion of fake news is money.
Funding disinformation. Google, the world's largest advertising company, inadvertently funds the fake news industry through its algorithmic ad distribution system. By placing ads on third-party websites in its Google Display Network, Google provides a crucial revenue stream for misinformation publishers. In 2019, Google was estimated to be responsible for nearly 40% ($87 million) of the fake news industry's quarter-billion-dollar revenue, despite public declarations to curb this.
Hidden profits. Google's system allows ad-hosting websites to remain anonymous, preventing advertisers from knowing where their ads are placed. This anonymity disproportionately benefits hyperpartisan and fake news sites, which generate significantly more revenue per site. Google's public statements about restricting ads on misleading sites have often been contradicted by its continued practice, suggesting a prioritization of substantial profits over ethical concerns.
Discriminatory advertising. Facebook's algorithmic ad distribution system has a documented history of enabling discrimination:
- Its ad tools auto-generated offensive targeting categories such as "Jew hater" and suggested gun-related interests as a way to enlarge that audience.
- It allowed advertisers to illegally exclude users by "ethnic affinity" (race) from housing, employment, and credit ads, violating federal law.
- Studies showed Facebook's ad-delivery algorithms perpetuated societal biases, showing job ads for doctors disproportionately to white men and job ads for janitors to far fewer of them, even without any explicit targeting by the advertiser.
Despite legal action and public pressure, these issues highlight how algorithms, by optimizing for engagement based on biased data, can amplify discrimination and reinforce societal inequalities.
8. Social Media Algorithms Amplify Misinformation, Outpacing Moderation Efforts.
At a moment of rampant disinformation and conspiracy theories juiced by algorithms, we can no longer turn a blind eye to a theory of technology that says all engagement is good engagement.
Engagement over truth. Social media algorithms, designed to maximize user engagement (likes, shares, comments, time spent), inadvertently amplify misinformation. If a harmful conspiracy theory generates high engagement, algorithms promote it more prominently, broadcasting it to a wider audience. This was evident with the Epoch Times, which leveraged Facebook's algorithms and bots to rapidly grow its following by peddling pro-Trump fake news.
Challenges of moderation. Moderating misinformation on social media is a complex, constantly evolving challenge:
- QAnon's adaptability: Diffuse, meta-narrative conspiracy theories like QAnon constantly adapt their messaging to evade detection, often re-framing their ideology to reach wider audiences.
- Algorithmic limitations: Facebook's initial attempts to curb QAnon through algorithmic adjustments were insufficient, as groups simply changed names or shifted content focus.
- Scale and speed: The sheer volume of content and the speed at which misinformation spreads (lies spread faster and deeper than truth on Twitter) make real-time, comprehensive moderation incredibly difficult.
Internal conflicts and external pressure. Internal Facebook research revealed that its algorithms "exploit the human brain’s attraction to divisiveness" and that "64% of all extremist group joins are due to our recommendation tools." However, proposals to address these issues were often dismissed by senior leadership as "antigrowth" or politically sensitive. External pressure, including advertising boycotts and whistleblower accounts like Sophie Zhang's memo on bot problems, has forced some changes, but self-regulation remains insufficient.
9. Fact-Checking Tools Leverage AI to Combat Disinformation, But Human Oversight Remains Crucial.
Humans aren’t going anywhere anytime soon—and nor would we want them to be.
AI-assisted fact-checking. Publicly available tools like Full Fact and Logically demonstrate a hybrid approach to combating fake news, in which machine learning significantly assists human fact-checkers. These tools use AI to perform several steps, sketched in code after this list:
- Identify key "claims" within articles or speeches.
- Numerically encode text (using models like Google's BERT) for contextual understanding.
- Match new claims against databases of previously fact-checked information.
- Prioritize urgent claims for human review based on current events and virality.
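Here is a minimal sketch of the claim-matching step, using the open sentence-transformers library as a stand-in for the BERT-style encoders these tools rely on; the model name, example claims, and similarity cutoff are all assumptions for illustration.

```python
# Minimal claim-matching sketch: encode a new claim and compare it against
# previously fact-checked claims by embedding similarity.
# Model name, claims, and threshold are illustrative, not from any real tool.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# A tiny stand-in for a database of already fact-checked claims.
fact_checked = [
    ("5G towers spread the coronavirus", "False"),
    ("Turnout in the 2020 US election was the highest in a century", "True"),
]

new_claim = "Scientists say 5G networks are causing COVID-19 infections."

claim_embeddings = model.encode([c for c, _ in fact_checked], convert_to_tensor=True)
new_embedding = model.encode(new_claim, convert_to_tensor=True)

# Cosine similarity between the new claim and each stored claim.
scores = util.cos_sim(new_embedding, claim_embeddings)[0]

best = int(scores.argmax())
if float(scores[best]) > 0.6:  # similarity cutoff chosen arbitrarily here
    matched, verdict = fact_checked[best]
    print(f"Likely match: '{matched}' (verdict: {verdict}, score {float(scores[best]):.2f})")
else:
    print("No close match; route the claim to a human fact-checker.")
```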
Beyond content analysis. Some advanced detection algorithms go beyond just analyzing text. They combine:
- Content-based features: Sentiment scores, linguistic patterns (e.g., Logically, Microsoft Research).
- Network-based features: How content propagates through social networks (e.g., Fabula AI, Microsoft Research).
- Metadata-based features: User profiles, account activity, origin of content (e.g., Microsoft Research).
These multi-faceted approaches aim to detect fake news early in its spread, but often require country-specific fine-tuning and struggle with rapidly evolving misinformation tactics.
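As a rough illustration of how the three feature families above could be combined, here is a toy classifier that concatenates content, network, and metadata features into a single input vector; the features, numbers, and model choice are placeholders, not any vendor's actual pipeline.

```python
# Toy sketch: combining content, network, and metadata features into one
# fake-news classifier. Feature values and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [sentiment_score, exclamation_rate,    # content-based
#            share_velocity, retweet_depth,        # network-based
#            account_age_days, follower_count]     # metadata-based
X = np.array([
    [-0.8, 0.30, 950.0, 7.0,   20.0,   120.0],  # angry, fast-spreading, new account
    [ 0.1, 0.02,  12.0, 2.0, 3650.0, 54000.0],  # neutral, slow, established outlet
    [-0.6, 0.25, 700.0, 6.0,   45.0,   300.0],
    [ 0.2, 0.01,   8.0, 1.0, 2900.0, 80000.0],
])
y = np.array([1, 0, 1, 0])  # 1 = flagged as likely fake, 0 = likely genuine

clf = LogisticRegression(max_iter=1000).fit(X, y)

new_story = np.array([[-0.7, 0.28, 820.0, 6.5, 30.0, 200.0]])
print("Probability the new story is fake:", clf.predict_proba(new_story)[0, 1])
```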
The ongoing arms race. While AI offers powerful tools for detection and moderation, it's an ongoing "arms race" against sophisticated disinformation campaigns. Social media companies face immense pressure to balance free speech, user engagement, and content moderation. Laws like Section 230, which shield platforms from liability for user-generated content, further complicate this. Ultimately, effective solutions require a combination of advanced AI, robust human oversight, transparency from tech giants, and potentially new regulatory frameworks to ensure accountability and protect democratic discourse.