UK’s AI Safety Summit kicks off with the “Bletchley Declaration”

PA via Reuters: US Vice-President Kamala Harris gives a speech on artificial intelligence at the U.S. Embassy in London before attending the AI Safety Summit, the first global summit on the safe use of artificial intelligence. Wednesday, November 1, 2023.

Around 100 political leaders, technology moguls, and experts in artificial intelligence (AI) gathered in the United Kingdom to inaugurate the first-ever global summit focusing on the potential hazards of AI, following the rapid ascent of this technology.

The ongoing technological revolution brings both hopes and concerns, which will take centre stage at the gathering at Bletchley Park, the historic World War II code-breaking hub in central England. The summit’s agenda covers the potential risks posed by next-generation artificial intelligence, including systems such as ChatGPT. Political figures attending include Ursula von der Leyen, President of the European Commission; António Guterres, Secretary-General of the UN; Kamala Harris, the US Vice-President; and Giorgia Meloni, the Prime Minister of Italy and the only G7 leader attending besides the host. Despite concerns about industrial espionage, China will also be represented, although the level of representation remains undisclosed.

Well-known Silicon Valley entrepreneurs such as Sam Altman and Elon Musk are also participating. In March, Musk, along with numerous international experts, called for a “pause” in research on advanced artificial intelligence systems such as GPT-4, arguing that the world needs to regulate AI first. Generative artificial intelligence, capable of swiftly producing text, sound, and images on a simple request, has made substantial advances, with next-generation models anticipated in the coming months.

Nonetheless, in an open letter to the UK’s Prime Minister Rishi Sunak, one hundred international organisations, experts, and activists criticised the summit’s closed-door nature, its domination by major tech corporations, and its limited access for civil society.

In the absence of standardised policies, the United Kingdom aims to use the summit to foster international collaboration on the most consequential questions raised by AI. The organisers secured an initial international declaration on the nature of AI risks and hope to establish an expert group on artificial intelligence modelled on the Intergovernmental Panel on Climate Change.

On Wednesday, Britain unveiled the “Bletchley Declaration,” a collaborative agreement with nations like the United States and China, with the goal of enhancing international cooperation in the realm of AI safety. This declaration, agreed to by 28 countries and the European Union, was released on the inaugural day of the AI Safety Summit held at Bletchley Park.

“The Declaration fulfils key summit objectives in establishing shared agreement and responsibility on the risks, opportunities and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration,” read Britain’s accompanying statement to the declaration.

While holding tremendous potential for fields such as medicine and education, advances in AI also raise concerns that the technology could pose an existential threat by destabilising societies, facilitating the manufacture of weapons, or eluding human control, as highlighted in a report released by the British government last week. The risks stemming from advanced artificial intelligence are substantial, the report found, and the summit provides an opportunity to gather the appropriate experts to share insights and discuss strategies for reducing those risks.

The key challenge lies in establishing safeguards without stifling the innovation pursued by AI labs and tech giants. The European Union and the United States have opted for a regulatory approach, and US President Joe Biden introduced a set of rules and principles on Monday aimed at setting a precedent for international compliance. Recently, several companies, including OpenAI, Meta (Facebook), and Google DeepMind, agreed to disclose some of their AI security protocols at the United Kingdom’s request.
