Words reading “Artificial intelligence”, miniature of robot and toy hand are pictured in this illustration taken December 14, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

SEOUL, May 21 (Reuters) – Sixteen companies at the forefront of developing artificial intelligence pledged on Tuesday at a global meeting to develop the technology safely at a time when regulators are scrambling to keep up with rapid innovation and emerging risks.

The companies included U.S. leaders Google (GOOGL.O), Meta (META.O), Microsoft (MSFT.O) and OpenAI, as well as firms from China, South Korea and the United Arab Emirates.

They were backed by a broader declaration from the Group of Seven (G7) major economies, the EU, Singapore, Australia and South Korea at a virtual meeting hosted by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol.

South Korea’s presidential office said nations had agreed to prioritise AI safety, innovation and inclusivity.

“We must ensure the safety of AI to … protect the wellbeing and democracy of our society,” Yoon said, noting concerns over risks such as deepfakes.

Participants noted the importance of interoperability between governance frameworks, plans for a network of safety institutes, and engagement with international bodies to build on the agreement reached at a first meeting to better address risks.

Companies also committing to safety included a firm backed by China’s Alibaba (9988.HK), Tencent (0700.HK), Meituan (3690.HK) and Xiaomi (1810.HK), as well as the UAE’s Technology Innovation Institute, Amazon (AMZN.O), IBM (IBM.N) and Samsung Electronics (005930.KS).

They committed to publishing safety frameworks for measuring risks, to avoiding models whose risks could not be sufficiently mitigated, and to ensuring governance and transparency.

“It’s vital to get international agreement on the ‘red lines’ where AI development would become unacceptably dangerous to public safety,” said Beth Barnes, founder of METR, a group promoting AI model safety, in response to the declaration.

Computer scientist Yoshua Bengio, known as a “godfather of AI”, welcomed the commitments but noted that voluntary commitments would have to be accompanied by regulation.

Since November, discussion on AI regulation has shifted from longer-term doomsday scenarios to more practical concerns, such as how to use AI in areas like medicine or finance, said Aidan Gomez, co-founder of large language model firm Cohere, on the sidelines of the summit.

China, which co-signed the “Bletchley Agreement” on collectively managing AI risks during the first November meeting, did not attend Tuesday’s session but will attend an in-person ministerial session on Wednesday, a South Korean presidential official said.

Tesla’s Elon Musk, former CEO of Google Eric Schmidt, Samsung Electronics’ Chairman Jay Y. Lee and other AI industry leaders participated in the meeting.

The next meeting is to be in France, officials said.


Reporting by Joyce Lee; Editing by Ed Davies and Christian Schmollinger

Our Standards: The Thomson Reuters Trust Principles.


Second global AI summit secures safety commitments from companies
