Hello everyone. It's a privilege to speak to you on the fifth anniversary of UNICRI's Centre for Artificial Intelligence and Robotics.
In an incredibly short timeframe, the progress the Centre has made in driving forward the development and deployment of responsible AI globally has been phenomenal. I wish I could be with you in The Hague today to celebrate this important milestone, but rest assured, I am with you in spirit, united in our shared goal of advancing AI for the benefit of all.
For over 35 years, I have been building tech platforms and products, and I've seen first-hand how new technologies can propel us forward and shape human lives for the better. AI is no exception, as we use it today to tackle a myriad of global challenges, from improving the accuracy of cancer screening to combating human trafficking and preventing online child exploitation.
This Centre identified early on that if we develop and deploy AI responsibly, it will play a pivotal role in international security and crime prevention - helping protect people everywhere and build a safer, more secure world. The world we need.
But doing so requires political will and bold action – the kind of action the AI Centre of UNICRI has fostered as a global convener and connector of law enforcement, industry and government, united in a shared purpose.
For example, the 'AI for Safer Children Global Hub' gives law enforcement access to 60 different AI tools from 35 technology providers, building law enforcement capacity and improving access to the technology that supports the fight against child sexual exploitation and abuse online.
The work of the Centre has been fundamental in fostering a cross-jurisdiction dialogue amongst law enforcement professionals and, at long last, establishing collaboration between the tech industry and law enforcement - bridging the digital divide that has long hampered progress on keeping users safe and secure.
AI is evolving, and each development introduces new and ever more complex vulnerabilities and security threats that have the potential to cause significant harm when placed in the wrong hands. The AI Centre at UNICRI is setting a fantastic example, leveraging AI for the good of children everywhere, but the fight is far from over.
I have always believed that technology can be a force for good and endeavoured to put this into practice in my life and work, and I see the same sense of purpose in Irakli and the team at UNICRI's Centre for AI.
When I served in the UK government as the first Minister for Internet Safety and Security, I was regularly confronted with the darker sides of technology. It was clear even ten years ago that we had lost our grip on the internet. From mobile to encrypted apps to the dark web – these spaces were becoming playgrounds for criminality and exploitation on previously unimaginable levels.
Our primary efforts at the time were concentrated on countering the spread of dangerous extremist ideologies and protecting children from online abusers. I saw the power of AI deployed first-hand when, in response to extremists engaging in highly sophisticated strategies to radicalise online and disseminate dangerous propaganda, we developed tools and models to detect patterns in language and imagery that indicate nefarious activity. We created sophisticated AI classifiers to identify eight different types of extremist speech and ran our AI models across the open web to rapidly identify nefarious content meant to incite violence and automatically notify the tech platforms to take it down.
On the child protection front, with the full force of the UK government behind me, I founded WeProtect, a multi-stakeholder organisation that, over the past nine years, has grown to over 100 country members, 64 tech companies, 83 civil society organisations and 9 intergovernmental organisations, each one bringing their ideas, unique experiences and energies to protect children from online sexual abuse and exploitation on a global level. By way of example, Thorn, one of our members, is equipping industry and government with its 'Safer' tool, which uses machine learning algorithms to identify, remove, and report child sexual abuse material at scale.
Since my time in government, AI's profound capacity to impact human progress has become more apparent, and so have its risks. New and powerful large language models such as the recently released GPT-3 and ChatGPT are dazzling not just technologists but ordinary users worldwide with their ability to create anything with a language structure – answering questions, drafting essays, or even writing songs in a near-human and convincing way.
These models provide a window into how AI could change the world whilst introducing new and profound issues and risks - from algorithmic bias and privacy violations to ethical challenges and a lack of operational transparency.
The ability of these models to generate radicalising texts in real time could have dire consequences for our safety and security. Large language models can produce polemics parroting conspiracy theorists and extremists, and they could be used to groom children en masse by speaking their language and gaining their trust.
Now is the time to build knowledge, awareness and resilience in our response mechanisms to ensure we can protect all in society from harm.
It will be some time before AI reaches human levels of intelligence, but democratic nations must tackle AI's ethical challenges and governance right now. That is why organisations like the Global Partnership on AI, which I have co-chaired for the past two years, are so important. GPAI is a multi-stakeholder organisation of over 25 countries and expert researchers, academics and business leaders working to ensure AI is grounded in human rights, inclusion, diversity, innovation, and economic growth. These are the same principles that underlie the UN Sustainable Development Goals, and they are what make the AI Centre at UNICRI such a powerful force.
The AI Centre of UNICRI is a true model for multi-stakeholder collaboration, bringing together institutions as wide-reaching as Interpol, the International Telecommunication Union, the Institute of Electrical and Electronics Engineers, the World Economic Forum, and many more, all laser-focused on the critical mission at hand: to ensure that AI harms no one and benefits all in society.
Today, we reflect on what the Centre for AI and Robotics has accomplished over the past five years and focus our efforts on the challenges ahead. Let's ensure that today's proceedings continue to advance global cooperation and action to protect human rights, democracy and our very way of life. And there is no better place than the city of peace and justice to do so.