Who are the people urging the world to stop AI and climate “catastrophe”?

Among them are billionaire Richard Branson, former U.N. Secretary-General Ban Ki-moon and Charles Oppenheimer, J. Robert Oppenheimer’s grandson.

FILE PHOTO: ChatGPT logo and AI Artificial Intelligence words are seen in this illustration taken, May 4, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

Dozens of high-profile figures in business and politics are calling on world leaders to address the existential risks of artificial intelligence and the climate crisis. Virgin Group founder Richard Branson, along with former United Nations Secretary-General Ban Ki-moon, and Charles Oppenheimer, the grandson of American physicist J. Robert Oppenheimer, signed an open letter urging action against the escalating dangers of the climate crisis, pandemics, nuclear weapons and ungoverned AI.

The letter underlined the dangers of the climate crisis and uncontrolled AI

The letter urges leaders to adopt a proactive approach rather than merely managing these issues. It emphasizes the need for a long-term strategy and evidence-based decision-making that considers the perspectives of all affected parties.

Expressing concern over the world’s perilous state, the letter contends that leaders are not responding with the necessary wisdom and urgency. It highlights the observable impacts of these threats, such as a rapidly changing climate, a pandemic causing millions of deaths and significant economic costs, and the explicit consideration of nuclear weapons in conflicts. The signatories stress that these threats have the potential to endanger life on Earth even further.

The letter was released by The Elders

The letter, published on Thursday by The Elders, a non-governmental organization initiated by Nelson Mandela and Richard Branson to address global human rights issues, was shared with global governments and calls for immediate multilateral action. The proposed measures include financing the transition away from fossil fuels, establishing an equitable pandemic treaty, resuming nuclear arms negotiations, and building global governance structures to ensure that AI is employed for positive purposes. Additionally, support for the message comes from the Future of Life Institute, a non-profit organization founded by MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, which aims to guide transformative technologies like AI toward enhancing life while mitigating large-scale risks.

Tegmark said The Elders and his organization wanted to convey that, while not in and of itself “evil,” the technology remains a tool that could lead to some dire consequences, if it is left to advance rapidly in the hands of the wrong people. “The old strategy for steering toward good uses, when it comes to new technology, has always been learning from mistakes,” Tegmark declared. “We invented fire, then later we invented the fire extinguisher. We invented the car, then we learned from our mistakes and invented the seatbelt and the traffic lights and speed limits. But when the power of the technology crosses a threshold, the learning-from-mistakes strategy becomes awful,” Tegmark added.

The letter will be discussed during the Munich Security Conference

The letter was issued ahead of the Munich Security Conference, where government officials, military leaders and diplomats will discuss international security amid escalating global armed conflicts, including the Ukraine War and the Gaza conflict. Tegmark will be attending the event to advocate the message of the letter.

The Future of Life Institute last year also released an open letter backed by leading figures including Elon Musk and Apple co-founder Steve Wozniak, which called on AI labs like OpenAI to pause work on training AI models more powerful than GPT-4, currently the most advanced model from Sam Altman's OpenAI. The technologists called for such a pause to avoid a "loss of control" of civilization, which they warned could result in a mass wipeout of jobs and computers outsmarting humans.
