OpenAI’s former superalignment leader blasts company: ‘safety culture and processes have taken a backseat’

Former OpenAI superalignment co-leader Jan Leike beside a crossed-out OpenAI pinwheel logo



Earlier this week, the two co-leaders of OpenAI’s superalignment team, former chief scientist Ilya Sutskever and researcher Jan Leike, announced within hours of each other that they were resigning from the company.

This was notable not only given their seniority at OpenAI (Sutskever was a co-founder), but also because of what they were working on: superalignment refers to the development of systems and processes to control superintelligent AI models, ones that exceed human intelligence.

But following the departures of the two superalignment co-leads, OpenAI’s superalignment team has reportedly been disbanded, according to a new article from Wired (where my wife works as editor-in-chief).

Today, Leike took to his personal account on X to post a lengthy thread of messages excoriating OpenAI and its leadership for neglecting “safety” in favor of “shiny products.”


As he put it in one message of his thread on X: “over the past years, safety culture and processes have taken a backseat to shiny products.”

But over the past years, safety culture and processes have taken a backseat to shiny products.

— Jan Leike (@janleike) May 17, 2024

Leike, who joined the company in early 2021, also stated openly that he had clashed with OpenAI’s leadership, presumably CEO Sam Altman (whom Leike’s direct colleague and superalignment co-lead Sutskever had moved to oust late last year) and/or president Greg Brockman, chief technology officer Mira Murati, or others at the top of the masthead.

Leike stated in one post: “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

I joined because I thought OpenAI would be the best place in the world to do this research.

However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.

— Jan Leike (@janleike) May 17, 2024

He also stated: “we urgently need to figure out how to steer and control AI systems much smarter than us.”

Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us.

— Jan Leike (@janleike) May 17, 2024

OpenAI pledged a little less than a year ago, in July 2023, to dedicate 20% of its total computational resources (aka “compute”), namely its expensive Nvidia GPU (graphics processing unit) clusters used to train AI models, toward this superalignment effort.

All of this was supposedly part of OpenAI’s quest to responsibly develop artificial general intelligence (AGI), which it has defined in its company charter as “highly autonomous systems that outperform humans at most economically valuable work.”

Leike said that, despite this pledge, “my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”

Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.

— Jan Leike (@janleike) May 17, 2024

Read Leike’s full thread on X.

The news is likely to be a major black eye for OpenAI amid its rollout of the new GPT-4o multimodal foundation model and ChatGPT desktop Mac app announced on Monday, as well as a headache for its big investor and ally Microsoft, which is preparing for its large Build conference next week.

We’ve reached out to OpenAI for a statement on Leike’s remarks and will update when we hear back.



