Artificial intelligence has accelerated the scale, speed, and sophistication of misinformation.
From deepfakes and synthetic text to automated bot networks, AI-driven misinformation now affects elections, financial markets, public health, and national security.
Governments, platforms, journalists, and businesses are responding with regulation, detection tools, and media literacy initiatives.
The following statistics summarize the current state of AI-related misinformation, why it matters, and who is most affected.
- AI Misinformation Awareness Statistics
- AI-Generated Content and Deepfake Statistics
- Social Media AI Misinformation Statistics
- Election and Political AI Misinformation Statistics
- Health AI Misinformation Statistics
- Financial and Fraud AI Misinformation Statistics
- Detection and Countermeasure Statistics
- Regulation and Policy AI Misinformation Statistics
- Frequently Asked Questions
AI Misinformation Awareness Statistics
- 58% of U.S. adults say misinformation is a major problem in society (Source: Pew Research Center).
- 64% of Americans say fabricated news stories cause confusion about basic facts (Source: Pew Research Center).
- 59% of global respondents worry about distinguishing real from fake online information (Source: Reuters Institute).
- 52% of people say they have encountered misinformation online in the past week (Source: Reuters Institute).
- 74% of U.S. adults believe misinformation reduces trust in government (Source: Pew Research Center).
- 67% of people are concerned about AI-generated misinformation specifically (Source: Ipsos).
- 71% of respondents expect AI to make misinformation harder to detect (Source: Edelman Trust Barometer).
- 55% of users say they are not confident identifying manipulated media (Source: Pew Research Center).
- Trust in news fell in 21 of 46 countries surveyed in 2023 (Source: Reuters Institute).
- 48% of users say they avoid news due to concerns about accuracy (Source: Reuters Institute).
- 69% of Americans say misinformation affects political discourse (Source: Pew Research Center).
- 61% of respondents believe social media companies do too little to stop misinformation (Source: Pew Research Center).
- 62% of global respondents say misinformation spreads faster than fact-based news (Source: Reuters Institute).
- 46% of people say they have shared news later found to be inaccurate (Source: Pew Research Center).
- Concern about misinformation increased year-over-year in most democracies surveyed (Source: Reuters Institute).
AI-Generated Content and Deepfake Statistics
- The number of deepfake videos online increased by more than 500% between 2019 and 2023 (Source: Sensity AI).
- 96% of detected deepfake videos online were non-consensual or misleading (Source: Sensity AI).
- 70% of people cannot reliably identify deepfake videos (Source: iProov).
- Deepfake incidents were reported in more than 90 countries by 2023 (Source: Sensity AI).
- Audio deepfakes are increasingly used in fraud schemes (Source: FBI IC3).
- Deepfake detection accuracy varies widely, often below 70% in real-world conditions (Source: NIST).
- AI tools significantly reduce the time required to create synthetic media (Source: Stanford AI Index).
- Political deepfakes increased during election periods (Source: Brookings Institution).
- Video manipulation is harder to detect than text manipulation (Source: MIT Media Lab).
- Compression reduces deepfake detection accuracy by over 20% (Source: IEEE).
- 78% of U.S. adults support labeling AI-generated media (Source: Pew Research Center).
- Deepfake takedown requests increased year over year on major platforms (Source: Meta Transparency Report).
- Synthetic media is increasingly multimodal (text, image, audio) (Source: Stanford AI Index).
- Detection tools struggle most with short-form video (Source: MIT CSAIL).
- Governments are beginning to regulate deepfakes explicitly (Source: OECD).
Social Media AI Misinformation Statistics
- Social media is the most common source of misinformation exposure (Source: Pew Research Center).
- False news spreads faster than true news on Twitter/X (Source: MIT Sloan).
- Automated accounts amplify misinformation more rapidly than human users (Source: Carnegie Mellon University).
- Visual misinformation spreads faster than text-only misinformation (Source: NYU Center for Social Media).
- Meta removed billions of pieces of coordinated inauthentic behavior content (Source: Meta Transparency Report).
- AI systems assist with content moderation at scale (Source: Meta).
- 45% of people rely on social media for news (Source: Reuters Institute).
- Corrections reach fewer users than the original false content (Source: Harvard Kennedy School).
- Platform labels reduce belief in false content (Source: Pew Research Center).
- Bot networks have been identified in dozens of countries (Source: Freedom House).
- Engagement is higher for emotionally charged misinformation (Source: MIT Sloan).
- Algorithmic amplification influences misinformation reach (Source: Mozilla Foundation).
- Teen users report frequent exposure to misleading content (Source: Common Sense Media).
- Social platforms invest billions in safety and integrity systems (Source: Statista).
- Moderation accuracy varies by language (Source: UNESCO).
Election and Political AI Misinformation Statistics
- 90% of countries face misinformation threats to elections (Source: International IDEA).
- 61% of U.S. voters are concerned about AI’s role in elections (Source: Pew Research Center).
- Coordinated disinformation campaigns have been detected in over 70 countries (Source: Oxford Internet Institute).
- Political misinformation increases during election cycles (Source: Reuters Institute).
- Foreign interference remains a significant source of misinformation (Source: EU DisinfoLab).
- AI reduces the cost of political messaging (Source: RAND Corporation).
- Political deepfakes have gone viral during campaigns (Source: Brookings Institution).
- Only a minority of countries regulate AI political advertising (Source: OECD).
- Fact-checking demand spikes during elections (Source: IFCN).
- 59% of Americans support banning undisclosed AI political ads (Source: Pew Research Center).
- AI-generated robocalls have been documented (Source: FCC).
- Election misinformation takedowns increased in 2024 (Source: Meta).
- Political trust declines when misinformation spreads (Source: Pew Research Center).
- Voter confidence is affected by perceived misinformation (Source: International IDEA).
- Governments classify AI misinformation as a security risk (Source: WEF).
Health AI Misinformation Statistics
- WHO confirms AI is increasingly used to scale health misinformation (Source: WHO).
- Health misinformation spreads faster than corrective information (Source: Nature Human Behaviour).
- Vaccine misinformation reduced confidence in multiple regions (Source: The Lancet).
- One-third of U.S. adults encountered false health information online (Source: Pew Research Center).
- Visual health misinformation is shared more than text (Source: Reuters Institute).
- Health misinformation increases during crises (Source: WHO).
- Public health agencies invest in infodemic management (Source: OECD).
- Bots amplified COVID-19 misinformation (Source: NIH).
- Doctors report patients citing false online health information (Source: AMA).
- Only 40% of users verify health claims before sharing (Source: Pew Research Center).
- AI tools assist with health misinformation monitoring (Source: Google Transparency Report).
- Health scams increased during pandemics (Source: FTC).
- Corrective information reaches fewer users than false claims (Source: Nature).
- Trust in online health information varies widely (Source: Pew Research Center).
- WHO does not publish a global percentage for AI-generated health misinformation (Source: WHO).
Financial and Fraud AI Misinformation Statistics
- U.S. consumers lost $10+ billion to fraud in 2023 (Source: FTC).
- AI-enabled scams contributed to rising losses (Source: FTC).
- Voice cloning scams increased significantly year-over-year (Source: FBI IC3).
- Investment fraud is the top loss category (Source: FTC).
- Social engineering remains the dominant attack vector (Source: Verizon DBIR).
- Deepfake impersonation incidents are documented (Source: FBI).
- Fraudsters use generative AI for personalization (Source: Europol).
- Financial misinformation can move markets (Source: SEC).
- Scam success rates improve with personalization (Source: Europol).
- Consumers struggle to detect AI scams (Source: Norton).
- Banks increased AI fraud detection spending (Source: Statista).
- Regulators issued AI fraud warnings (Source: FINRA).
- Cross-border fraud increased (Source: World Bank).
- AI chatbots are used in scam interactions (Source: Europol).
- Fraud losses continue to rise annually (Source: FTC).
Detection and Countermeasure Statistics
- AI text detectors have limited reliability (Source: Stanford AI Index).
- Deepfake detection accuracy averages below 70% (Source: NIST).
- Hybrid human-AI moderation performs best (Source: MIT CSAIL).
- Content labeling reduces sharing of false posts (Source: Pew Research Center).
- Fact-checking partnerships expanded globally (Source: IFCN).
- Watermarking adoption increased (Source: Stanford AI Index).
- Platforms rely heavily on automated moderation (Source: Meta).
- Multilingual misinformation is harder to detect (Source: UNESCO).
- Real-time moderation reduces virality (Source: Meta).
- False negatives remain a major challenge (Source: NIST).
- Detection tools lag behind generation tools (Source: Stanford AI Index).
- Cross-platform collaboration improves outcomes (Source: OECD).
- User reporting remains critical (Source: Pew Research Center).
- Investment in AI safety increased (Source: Statista).
- Detection effectiveness varies by content type (Source: MIT).
Regulation and Policy AI Misinformation Statistics
- Over 40 countries proposed AI-related legislation (Source: OECD).
- The EU AI Act addresses synthetic media transparency (Source: European Commission).
- 78% of Americans support AI regulation (Source: Pew Research Center).
- Disclosure rules reduce deceptive practices (Source: Brookings).
- Enforcement varies widely by country (Source: UNESCO).
- Platforms face stricter reporting requirements (Source: DSA).
- Governments fund AI literacy programs (Source: OECD).
- Cross-border cooperation increased (Source: G7).
- Regulatory lag remains a concern (Source: Stanford HAI).
- Transparency reports expanded (Source: Meta).
- Civil society involvement increased (Source: UNESCO).
- AI governance is a G20 priority (Source: G20).
- Policy impact assessments are ongoing (Source: OECD).
- National strategies increasingly mention misinformation (Source: WEF).
- Enforcement capacity remains uneven (Source: OECD).
Frequently Asked Questions
What is AI misinformation?
AI misinformation is false or misleading content created or amplified using artificial intelligence tools, including text, images, audio, and video.
How common are AI-generated deepfakes?
The number of deepfakes online has grown by several hundred percent in recent years, with political and fraud-related uses expanding fastest.
Can people detect AI misinformation?
Most people struggle to reliably identify AI-generated content without labels or tools.
Are governments regulating AI misinformation?
Yes, but regulation varies widely, and enforcement lags behind technology.
What helps reduce AI misinformation?
Media literacy, transparent labeling, strong detection tools, and cross-platform cooperation are most effective.