The Next Tobacco? Social Media Giants Face a Historic Legal Reckoning, and the Potential Implications for Insurers Could Soon Become Very Real
A legal storm is gathering in the United States that could redefine how corporations are held accountable for digital harm — and reshape the way insurers and risk managers assess emerging technology exposure. Over 4,000 lawsuits have now been filed against Meta (Facebook and Instagram), Snap (Snapchat), ByteDance (TikTok), and Alphabet (YouTube), accusing them of knowingly designing platforms to addict users, particularly young people, with devastating consequences: depression, anxiety, eating disorders, self-harm, and suicide.
The Litigation Landscape
The cases, now consolidated into two massive multi-jurisdictional proceedings (one state, one federal), represent one of the largest coordinated actions since the opioid and tobacco settlements that changed corporate liability history. At the centre of the claims is a fundamental argument: that these social media platforms are not merely hosts of third-party content, but deliberately engineered products that exploit psychological vulnerabilities for profit. Courts have largely rejected dismissal attempts under Section 230's immunity blanket, opening the door to the first trials in 2026, beginning with a Los Angeles case brought by a 19-year-old woman who claims she became addicted to social media at age nine, leading to severe anxiety and body dysmorphia.
“The Same Kind of Corporate Misconduct”
Matthew Bergman, founder of the Social Media Victims Law Center in Seattle, has become one of the most prominent figures driving the lawsuits. His commentary in Bloomberg’s upcoming documentary Can’t Look Away: The Case Against Social Media draws direct parallels with tobacco litigation. “In the case of Facebook, you have internal documents saying ‘tweens are herd animals,’ ‘kids have an addict’s narrative,’ and ‘our products make girls feel worse about themselves.’ You have the same kind of corporate misconduct,” Bergman says in the film. Bergman’s firm was the first to file user-harm cases against social media platforms in 2022, following the explosive whistleblower revelations by Frances Haugen, a former Meta product manager.
Haugen released a trove of internal documents showing that Meta's leadership knew its products were exacerbating mental health issues among teens, particularly girls, yet continued to prioritise engagement metrics over wellbeing.
Inside the Evidence
The pretrial discovery process has already unearthed staggering volumes of material: more than six million internal company documents, 150 executive depositions, and testimony from over 100 psychologists, neuroscientists, and behavioural experts. Executives including Mark Zuckerberg (Meta) and Evan Spiegel (Snap) have been deposed, with evidence expected to show internal awareness of the psychological toll of social media engagement features — such as infinite scrolling, ‘likes,’ and algorithmic reinforcement loops that keep users compulsively checking their feeds.
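To make those engagement mechanics concrete, below is a minimal illustrative sketch in Python. It is entirely hypothetical, not drawn from any platform's code or from the court record, and every name in it (predicted_engagement, infinite_feed, base_rate) is invented for illustration. It shows the general shape of an algorithmic reinforcement loop: candidate posts are scored by predicted engagement, served in an endless stream, and each interaction is fed back into the ranking.

```python
import random

def predicted_engagement(post, history):
    """Hypothetical stand-in for a learned engagement model."""
    # Favour content similar to what the user has already engaged with.
    affinity = sum(1 for seen in history if seen["topic"] == post["topic"])
    # The random jitter makes rewards variable, the intermittent-reinforcement
    # pattern associated with compulsive checking.
    return post["base_rate"] + 0.1 * affinity + random.uniform(0.0, 0.05)

def infinite_feed(candidates, history):
    """Endless generator: the stream has no natural stopping point."""
    while True:
        scores = [predicted_engagement(p, history) for p in candidates]
        post = random.choices(candidates, weights=scores, k=1)[0]
        yield post
        history.append(post)  # each view nudges the next ranking further

# Example: five items from a feed that never ends.
feed = infinite_feed(
    candidates=[{"topic": "fitness", "base_rate": 0.3},
                {"topic": "news", "base_rate": 0.2}],
    history=[],
)
for _ in range(5):
    print(next(feed)["topic"])
```

The feedback loop is the point: the more a user engages, the more the ranking rewards similar content, which is why plaintiffs characterise features like these as deliberate design choices rather than neutral conduits for third-party content.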
The Defence
The tech giants are pushing back hard. Meta says it disagrees with the allegations and points to new safety features limiting what teens can see and who can contact them. YouTube argues it’s “a streaming service, not a social network,” where people mainly watch content on TV screens rather than interact. Snap and TikTok have declined to comment publicly but are expected to challenge causation — arguing that social media use cannot be directly linked to individual mental health outcomes.
The Broader Risk Landscape
For insurers and risk managers, this litigation should ring alarm bells. The allegations of ‘addictive design’ open an entirely new frontier of corporate risk: behavioural harm through algorithmic engineering. Potential exposures span multiple insurance lines: Directors & Officers (D&O), Professional Indemnity / Errors & Omissions, Cyber Liability, and ESG-related coverages. There are also parallel claims from over 1,000 U.S. school districts seeking reimbursement for counselling and intervention programs linked to social media-induced student mental health crises.
What This Means for Risk and Governance
The lawsuits mark a cultural inflection point. Insurers, boards, and regulators will need to reconsider how they define ‘foreseeable harm’ in the age of AI-driven engagement. If an algorithm is programmed to maximise time spent online — and executives know that can lead to self-harm or eating disorders — is that negligence? Does a corporate duty of care extend to protecting users from behavioural manipulation?
The Road Ahead
Late 2025: Los Angeles Superior Court Judge Carolyn Kuhl is expected to rule on summary judgment motions.
Mid-2026: The first federal trial begins, led by a Kentucky school district.
2026–27: Jury verdicts or landmark settlements could follow, redefining corporate responsibility for online harm.
The ReSure View
At ReSure, we see this litigation as part of a wider global shift: from data protection to human protection. A new class of risk is emerging: psychological and behavioural harm driven by digital systems.
For risk professionals, the implications are immediate. Expect new exclusions and product redefinitions in tech, cyber, and D&O policies. Anticipate regulatory guidance on digital product safety and child protection. Prepare for cross-border exposure, as Australian, UK, and EU regulators track these proceedings closely.