Threats & Risks to Children Online

Understand the full spectrum of online harms facing children today — from exploitation and grooming to cyberbullying, harmful content, and emerging AI-generated risks.

8 Articles · All Audiences · Awareness

Why Awareness Matters

Understanding threats is the first step to preventing harm. These articles provide evidence-based overviews of each risk category, designed to inform without sensationalizing.

Critical · All audiences

Understanding Online Child Sexual Exploitation and Abuse (OCSEA)

An evidence-based overview of online child sexual exploitation and abuse — the most severe category of child online harm. Documents the alarming scale: 88 million+ CSAM files reported to NCMEC in 2022 alone, representing a 329% increase over five years. Covers all major categories — image and video-based abuse, live-streaming exploitation (a growing threat across Southeast Asia and beyond), AI-generated CSAM (now a recognized and rapidly expanding category), and financial sextortion targeting teenage boys. Explains the technological ecosystem enabling these crimes, the role of encrypted messaging and dark web platforms, and how industry tools including PhotoDNA, Thorn's Safer, and IWF hash lists form the first line of technological defense.

Parents · Educators

Cyberbullying and Online Harassment

A data-rich examination of online harassment affecting children — from direct threats and impersonation to pile-ons, image-based abuse, and coordinated exclusion campaigns. Draws on Ofcom research showing 1 in 4 UK children aged 8-17 have experienced online bullying, and UNICEF data indicating cyberbullying victims are 2-9x more likely to consider self-harm. Analyzes platform design features that enable harassment (anonymous accounts, viral amplification, weak reporting flows) and those that mitigate it — including Instagram's Restrict feature and YouTube's comment filtering. Includes behavioral indicators for parents and teachers, and guidance on documenting incidents for both school and law enforcement reporting.

Parents · Tech companies

Exposure to Harmful Content

Algorithmic content recommendation has transformed childhood exposure to harmful material from occasional to systematic. This article maps the categories — graphic violence, suicide and self-harm promotion (linked to the "Werther effect" in clinical research), pro-eating-disorder content (accessed by 13% of teenage girls in Ofcom studies), online pornography (first exposure now averaging age 11 in the UK), and hate speech — against the developmental evidence on their impact on children at different ages. Examines how EU DSA Article 34 risk assessments and Ofcom's harmful content duties on VSPs are shifting legal responsibility for algorithmic amplification from users to platforms, and what "systems-level" mitigation now means in regulatory practice.

Parents · Tech companies

Data Exploitation and Privacy Violations

Children are among the most heavily profiled demographic groups on the internet, yet have the fewest legal mechanisms to understand or contest how their data is used. This article exposes the data ecosystem: the tracking pixel networks embedded in children's apps, the data broker industry that aggregates behavioral profiles from infancy, and the behavioral advertising auction systems that serve targeted ads to children based on inferred emotional states. Covers major enforcement actions — FTC's $5.7M settlement with Musical.ly/TikTok (2019), EU DPC investigations into Meta, and ICO's Children's Code audits — and explains what the prohibition on profiling children for commercial purposes means in operational terms under GDPR and UK AADC Standard 8.

Parents · Educators

Online Grooming — Tactics and Warning Signs

Online grooming — the process by which adults build trust with children to facilitate exploitation — has migrated rapidly from social media to gaming platforms, live streaming, and encrypted messaging apps. This article maps the grooming process (target identification, access, trust-building, desensitization, isolation, exploitation) against specific platform features that predators exploit: private messaging in games, gifting systems that create obligation, and disappearing content that leaves no record. Drawing on CEOP, NCA, and FBI data, it identifies the highest-risk platform types and age groups, provides a comprehensive warning signs checklist for parents and educators, and explains the statutory obligations on platforms to detect and report grooming behaviour under UK OSA s.101 and US PROTECT Act provisions.

Emerging · All audiences

AI-Generated Risks — Deepfakes, Generative AI, and Children

Artificial intelligence is reshaping the threat landscape for child protection faster than legal frameworks can respond. This article examines four converging AI-related risks: (1) AI-generated CSAM — synthetic imagery that courts in multiple jurisdictions are now treating as criminal material regardless of whether a real child was involved; (2) generative AI chatbots that can simulate grooming interactions at scale with no human predator; (3) deepfake technology used for non-consensual intimate imagery (NCII) against teenage girls; and (4) voice cloning used in scams targeting children and parents. References the 2024 ITU/UNICEF/UN Committee joint statement on AI and children's rights, and assesses how the EU AI Act's high-risk classification system and proposed US algorithmic accountability frameworks are beginning to address these emerging harms.

Parents · Educators

Excessive Screen Time and Digital Wellbeing

The relationship between children's digital media use and wellbeing is both contested and consequential. This article cuts through the polarized debate to present the best available evidence: the robust findings (smartphone use is associated with reduced sleep quality in children aged 10-14; heavy social media use correlates with lower self-reported wellbeing in teenage girls; notification systems are designed to create compulsive checking habits in developing brains) alongside the genuine uncertainties. Covers age-specific WHO and AAP recommendations, the evidence for and against "screen time" as a useful metric, the emerging "digital diet" framing, and practical strategies for families and schools — including how the UK OSA's harmful design provisions are beginning to place legal obligations on platforms to limit features that foster compulsive use in children.

Policymakers · Parents

Online Radicalization of Youth

Online radicalization of young people has become one of law enforcement's most pressing concerns, with gaming platforms and fringe internet communities now operating as primary recruitment environments for far-right, jihadist, and incel ideologies. This article explains the psychological mechanics of online radicalization (identity vulnerability, grievance amplification, in-group belonging) and the specific digital pathways — YouTube rabbit holes, Discord servers, Telegram channels, and 4chan-adjacent communities — through which young people are progressively exposed to more extreme content. Draws on ISD Global research, EU RAN (Radicalisation Awareness Network) data, and GIFCT (Global Internet Forum to Counter Terrorism) initiatives. Covers preventative Counter Violent Extremism (CVE) programmes validated in the UK, Denmark, and Germany that schools and parents can draw on, and explains platforms' current legal obligations under DSA Article 34 systemic-risk provisions and the EU Terrorist Content Online Regulation.