OpenAI Says It’s ‘Dedicated’ to Making Sure Its AI Tools Don’t Cause Harm in Letter to US Lawmakers

OpenAI, responding to questions from US lawmakers, said it’s dedicated to making sure its powerful AI tools don’t cause harm, and that employees have ways to raise concerns about safety practices.

The startup sought to reassure lawmakers of its commitment to safety after five senators, including Senator Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI's policies in a letter addressed to Chief Executive Officer Sam Altman.

“Our mission is to ensure artificial intelligence benefits all of humanity, and we are dedicated to implementing rigorous safety protocols at every stage of our process,” Chief Strategy Officer Jason Kwon said Wednesday in a letter to the lawmakers.

Specifically, OpenAI said it will continue to uphold its promise to allocate 20 percent of its computing resources toward safety-related research over multiple years. The company, in its letter, also pledged that it won’t enforce non-disparagement agreements for current and former employees, except in specific cases of a mutual non-disparagement agreement. OpenAI’s former limits on employees who left the company have come under scrutiny for being unusually restrictive. OpenAI has since said it has changed its policies.

Altman later elaborated on the company's strategy on social media.

“Our team has been working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations,” he wrote on X.

Kwon, in his letter, also cited the recent creation of a safety and security committee, which is currently undergoing a review of OpenAI’s processes and policies.

In recent months, OpenAI has faced a series of controversies over its commitment to safety and over employees' ability to speak out on the topic. Several key members of its safety-related teams have resigned, including co-founder and former chief scientist Ilya Sutskever and Jan Leike, a leader of the company's team devoted to assessing long-term safety risks, who publicly shared concerns that the company was prioritizing product development over safety.

© 2024 Bloomberg LP

