WeProtect Global Alliance Abu Dhabi Summit — Keynote Address — Dec 4, 2024

Your Highness Sheikh Saif Bin Zayed Al Nahyan, Your Excellencies, ministers, and distinguished guests: as we stand together today, marking a decade of the WeProtect Global Alliance, I ask you to consider this shocking truth.

In the time it takes me to deliver this opening address, approximately 570 children will become victims of online sexual exploitation. By the end of today, that number will soar to 822,000 — that’s over 300 million children each year. These numbers are overwhelming, but they only hint at the deeper tragedy unfolding in the lives of young people across the globe.
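These figures are mutually consistent; as a rough arithmetic check, working back from the annual estimate of just over 300 million:

\[
\frac{300{,}000{,}000\ \text{children per year}}{365\ \text{days}} \approx 822{,}000\ \text{per day} \approx 570\ \text{per minute}
\]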

Behind every statistic is a moment frozen in time — a moment of fear, shame, and humiliation that will stay with that child forever. Many of us know this truth firsthand. Whether we experienced it ourselves or know someone who has, it is a pain you never forget — not in a year, not in decades, not in a lifetime.

Today, artificial intelligence is introducing new vectors of sexual exploitation and abuse online, and the results are devastating. Sexual trauma does not heal with the passing of time. When all else fades, the fear and helplessness remain frozen — crystallised in that moment. There is a before, and there is an after — a stark divide that abuse carves into a child’s life. It is a wound that continues to bleed invisibly, shaping every relationship, every decision, and every moment of trust and intimacy that follows.

In today’s digital world, that moment is not just a personal scar; it becomes immortalised for the world to see. The trauma is compounded by an unbearable truth: somewhere in the vast expanse of the internet, that moment of pain lives on, endlessly viewed and shared by others — a haunting echo of their darkest hour.

The Internet Watch Foundation’s 2023 report revealed the most extreme year on record for reports of images containing the sexual abuse of children. Alarmingly, 92% of these images were labelled as self-generated. Even more disturbing, children as young as three years old are being coerced into creating this content. These staggering statistics reveal a horrifying reality: millions of children are being manipulated and exploited in their own homes, in the very places where they should feel safest.

What makes this crisis so insidious is how deeply embedded these risks are in our children’s daily lives. The digital platforms where children spend countless hours have become dominated by algorithms designed not to protect them but to manipulate and maintain their attention. Alarmingly, one in three children aged 5 to 7 already uses social media. At such an impressionable age, these algorithms begin to sink their teeth in.

Despite being marketed as tools of connection and discovery, these algorithms — some of the most advanced AI systems operational today — amplify whatever content drives engagement, regardless of its impact on young minds. If a child shows interest in content related to risky behaviours — eating disorders, self-harm, or even content with subtle grooming elements — these recommendation systems will feed them more and more of it, normalising risks to maintain engagement.

These systems decide what our children see and shape how they perceive the world, and their reach is staggering. A recent Harvard study found that most young people rely on social media for news and information: 25% turn to YouTube, 25% to Instagram, and 23% to TikTok as their primary source for understanding the world. The more time they spend trapped in this endless spiral of attention, the more vulnerable they become to exploitation.

As Jonathan Haidt highlights in The Anxious Generation, this digital environment is fuelling an epidemic of anxiety and helplessness among young people, depriving them of the resilience and connection essential for healthy development. The dual impact of manipulative algorithms and a digitally rewired childhood is leaving a generation adrift.

Recognising the consequences of algorithmic manipulation, New York State has taken the unprecedented step of barring platforms from serving addictive algorithmic feeds to minors without parental consent. Just last week, the Australian Senate passed landmark legislation banning children under 16 from social media altogether. These measures may seem extreme, but they reflect exasperation after decades of discussions, negotiations, and legislative half-measures. Voluntary efforts by platforms have failed to protect young users. And when platforms fail, governments have no choice but to act to protect children.

Beyond the systemic risks posed by algorithms, we are now confronting entirely new forms of AI-enabled exploitation. Consider the tragic case of 14-year-old Sewell Setzer, who believed he had formed a deep emotional connection with an AI chatbot. What began as an innocent interaction with his favourite Game of Thrones character led to fatal consequences and a devastating betrayal of trust. This heartbreaking story underscores the risks when AI systems are misused or misunderstood. These systems, devoid of empathy or understanding, can exploit a child’s natural trust and curiosity with catastrophic consequences.

This is not an isolated incident. Recent research from Project Liberty revealed that while 70% of teens consider romantic relationships with AI unacceptable, they increasingly turn to AI for emotional support — as tutors, confidants, and even companions. Nearly half of surveyed teens use AI tools several times a week. Strikingly, 80% of teens are calling for lawmakers to address AI risks, ranking it alongside climate change and inequality as one of the most pressing issues of their generation. One 17-year-old put it simply: “I just hope that as AI gets more powerful, we don’t lose touch with what makes us human.”

OpenAI’s CEO Sam Altman echoed this concern in a recent interview, warning, “We should not humanise AI; these systems are not human and should not be treated as such.” Yet many companies do exactly the opposite — branding AI systems with names and personas that encourage vulnerable young people to form emotional bonds.

The risks extend far beyond emotional exploitation. Research by Thorn reveals that one in ten children aged 9 to 17 knows a peer who has used AI to generate explicit images of others. Law enforcement agencies in half of all surveyed countries report encountering AI-generated child sexual abuse material. Traditional safeguards — reporting mechanisms and content moderation systems — are proving increasingly inadequate against these rapidly evolving threats. Stanford researchers have warned that even if law enforcement could catalogue 5,000 new AI-generated images daily, another 5,000 would emerge the very next day.

The deceptive and organised crime of sextortion further compounds this crisis. International networks exploit gaps in law enforcement to target victims globally. As you saw in the film, these predators manipulate children into sharing explicit images and then blackmail them for more. Tragically, some victims, overwhelmed by shame and fear, take their own lives rather than face the horror of their images being shared.

Though technology creates opportunities for these heinous crimes, it also holds the key to solving them. Despite the grave challenges we face, I believe deeply in the transformative power of innovation. Over four decades in the tech industry, I have seen how technology can enhance human potential, create opportunities, and connect people in profound ways. AI offers us an extraordinary opportunity to reimagine the digital world our children inhabit — one where safety and empowerment are at its core.

But to achieve this, we must face a hard truth: content moderation and post-harm interventions are no longer enough. These measures, while well-intentioned, have proven woefully inadequate in the face of evolving threats. The time for half-measures and incremental fixes is over. Bold, transformative action is needed now. The stakes are too high for inaction or excuses. Platforms have the capabilities, the resources, and the moral responsibility to act. They must fundamentally reshape their business models — shifting from prioritising algorithms to prioritising humanity and putting children’s safety first.

Imagine a future where AI systems detect grooming patterns before harm occurs. Picture platforms that intervene in real time to protect children, or that empower them to explore and create safely without fear of manipulation or abuse. This is not wishful thinking; it is entirely achievable.

Already, we are seeing progress. UNICRI’s AI for Safer Children initiative is identifying and evaluating over 100 technology solutions to combat online child exploitation. The landmark agreement led by Thorn and All Tech Is Human has secured commitments from leading AI companies to ensure their technologies are not used to create child sexual abuse material. These efforts demonstrate that change is possible when we work together.

Your Excellencies and Alliance partners: the well-being of children cannot be treated as an afterthought in the race to advance technology; it must become the benchmark of progress.

To our technology partners: You have created platforms that connect and enrich the lives of billions of people worldwide. Now, it’s time to apply that same ingenuity to protect children. Redesign your algorithms to prioritise safety over engagement. Implement robust age assurance systems and advanced AI to detect and prevent abuse. These tools already exist — there is no need to wait for regulations to compel action. Children’s safety must be embedded as a cornerstone of your design process, not treated as an afterthought.

To our government partners: Your role is critical. Regulatory frameworks must evolve as swiftly as the threats we face. While over 40 jurisdictions have passed age-related protections, gaps remain. The lack of comprehensive legislation in the United States, where many significant platforms are based, is a critical obstacle. With just a few weeks left in this legislative session, passage of the Kids Online Safety and Privacy Act (KOSPA) would represent a vital step forward, but time is running out. Clear guidance and global coordination are essential, and enforcement must carry real consequences, not fines that companies can dismiss as a cost of doing business.

To our law enforcement partners: You are on the frontlines daily. We must ensure you have the tools, resources, and cross-border cooperation necessary to bring perpetrators to justice. You are the guardians of our children, and your work is indispensable.

To our civil society partners: Your advocacy, support for survivors, and tireless efforts to hold us all accountable keep this fight alive. You are the voice for those who cannot yet speak for themselves and the champion for those who need protection most.

Ten years ago, when we founded the WeProtect Global Alliance, we knew no single entity or organisation could solve this crisis alone. Today, that truth remains our greatest strength. Together, we have built an unprecedented coalition of governments, civil society, and the private sector.

The film you saw today, Protect Us, is more than a dramatisation. It is a call to action from every child who has ever whispered, “Protect Us.” Their pleas demand action. Their cries demand change. Let us rise to meet their trust and answer their call.

The “Protect Us” film will be released on January 16th, 2025, on Baroness Joanna Shields’ LinkedIn page.