OpenAI CEO Sam Altman Is Leaving Company’s Safety Committee, Here’s All You Need To Know


by rajtamil


OpenAI CEO Sam Altman is leaving the company's Safety and Security Committee, the internal group that oversees 'critical' safety decisions related to OpenAI's projects and operations. Altman’s departure comes shortly after five US senators sent him a letter questioning OpenAI’s safety policies.

"The Safety and Security Committee will become an independent Board oversight committee focused on safety and security, to be chaired by Zico Kolter, Director of the Machine Learning Department with the School of Computer Science at Carnegie Mellon University, and including Adam D’Angelo, Quora co-founder and CEO, retired US Army General Paul Nakasone and Nicole Seligman, former EVP and General Counsel of Sony Corporation," the company said in a blog post.

All are existing members of OpenAI’s board of directors. The committee will oversee the safety and security processes guiding OpenAI’s model development and deployment.

The safety and security committee will be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed.

OpenAI said that the committee reviewed its latest AI model, o1, after Altman's departure. The group will continue to receive regular updates from OpenAI’s safety teams and retains the authority to delay model releases over safety issues.

“As part of its work, the Safety and Security Committee will continue to receive regular reports on technical assessments for current and future models, as well as reports of ongoing post-release monitoring,” OpenAI wrote in the post.

“We are building upon our model launch processes and practices to establish an integrated safety and security framework with clearly defined success criteria for model launches,” the ChatGPT maker said.

According to TechCrunch, nearly half of the OpenAI staff who once focused on AI’s long-term risks have left, and former OpenAI researchers have accused Altman of opposing 'real' AI regulation in favour of policies that advance OpenAI’s corporate aims.

The report also claimed that OpenAI has increased its expenditures on federal lobbying, budgeting $800,000 for the first six months of 2024 versus $260,000 for all of last year.

Altman also earlier this spring joined the US Department of Homeland Security’s Artificial Intelligence Safety and Security Board, which provides recommendations for the development and deployment of AI throughout US critical infrastructure.
