The Policy Imperative
By 2026, more than 50 countries had enacted, or were actively developing, mandatory child online protection frameworks. The shift from voluntary industry codes to enforceable legal requirements is now the global norm, and the WePROTECT Global Threat Assessment documents why that urgency is warranted.
A strategic planning guide for governments at any stage of child online protection strategy development, from countries with no existing framework to those seeking to advance from a basic to an integrated response. It is structured around the six domains of WePROTECT's Model National Response, with specific tools for each domain, including: in Legislation & Policy, a legislative gap analysis against ECPAT's country-by-country legal mapping and WePROTECT's seven minimum legislative requirements; in Law Enforcement, guidance on establishing dedicated ICAC units, INTERPOL cooperation arrangements, and digital forensics capacity; in Private Sector Collaboration, industry self-regulation frameworks, government-industry information sharing, and mandatory reporting legal frameworks; and in Victim Support, survivor-centred service design and cross-government coordination. The guide covers the ITU National COP Assessment methodology as a diagnostic starting point and WePROTECT's four-phase Maturity Model (Building → Enhancement → Integration → Maturity) as the benchmark for measuring progress. Used by 42 governments as their primary strategic planning reference.
A rigorous comparative analysis of what the world's most advanced child online protection frameworks have achieved, and where significant gaps remain. UK: the Online Safety Act 2023 shifted from voluntary codes to mandatory duties of care, but Ofcom's risk register revealed that fewer than 5% of in-scope services had adequate child safety measures when the Act came into force, demonstrating the scale of the implementation challenge even in a well-resourced regulatory system. Australia: the eSafety Commissioner's Safety by Design framework produced measurable improvements in platform cooperation, but the first compliance assessments under the under-16 social media ban show significant variation in the quality of platform responses. EU: the DSA's risk-based approach produced unprecedented VLOP transparency, but Digital Services Coordinators remain significantly under-resourced. COPPA case study: 26 years of FTC enforcement produced a functional framework for under-13s but demonstrably failed 13-17s, the gap that KOSA, COPPA 2.0, and state AADC laws now aim to close. Also covers positive lessons from emerging economies: Kenya's Data Protection Act child provisions, Indonesia's IEEE 2089 mandate, and Brazil's LGPD enforcement.
A policy-level analysis of the most technically and legally contested question in current child online protection: how to verify user age accurately enough to protect children without creating identity surveillance infrastructure. Covers the full policy design space: mandatory hard verification (Australia's social media ban approach, with penalties of up to AUD 49.5M for non-compliance); mandatory age estimation (the UK OSA's "highly effective" standard, which accepts both verification and estimation methods); technology-neutral mandates (the California AADC's minimum-standard-but-method-neutral approach); and voluntary certification (the IEEE 2089.1-2024 AVID scheme). Draws on the OECD's 2025 global landscape mapping of age assurance regulatory approaches across 40+ countries. Constitutional considerations: the US First Amendment sets a higher bar for mandatory identification than ECHR Article 10 does, which explains the divergent US-European trajectories. Includes a policy option comparison matrix (effectiveness, privacy impact, inclusivity, constitutional risk, implementation cost) for use in legislative design processes.
Artificial intelligence is creating child safety risks faster than any previous technology shift, and existing regulatory frameworks were not designed to address them. This policy briefing maps the threat landscape and nascent regulatory responses. The four primary AI risks: AI-generated CSAM — synthetic imagery already classified as criminal material in the UK, Australia, and EU; generative AI chatbots capable of simulating grooming relationships at scale; deepfake technology enabling non-consensual intimate imagery against children with no physical contact; and AI-driven recommendation systems that amplify self-harm and radicalization content with alarming precision. Regulatory responses: the EU AI Act's high-risk classification for AI systems affecting children in educational and care contexts; the 2024 joint statement by ITU, UNICEF, and the UN Committee on the Rights of the Child calling for mandatory child rights impact assessments for all AI systems affecting children; and NCMEC/IWF's joint working group on AI-CSAM detection standards. Examines the international coordination gap: AI systems are trained, deployed, and generate harm globally — making unilateral national regulation insufficient.
Key Policy Resources
WePROTECT Global Threat Assessment
Annual threat assessment used by 42+ countries as their baseline reference. Covers all six MNR domains with country-level data.
ITU National COP Assessment Reports
Country-specific assessments of child online protection status across legislation, policy, law enforcement, and industry — available in the six UN languages.
OECD How's Life for Children in the Digital Age (2025)
Evidence-based research on cyberbullying, social media use, screen time, and wellbeing outcomes for children across OECD member countries.