OpenAI whistleblowers describe a reckless and secretive culture

A group of OpenAI insiders is calling out what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful AI systems ever created.

The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company hasn't done enough to prevent its artificial intelligence systems from becoming dangerous.

Members say OpenAI, which started as a nonprofit research lab and burst into public view with the release of ChatGPT in 2022, is prioritizing profits and growth in an effort to build artificial general intelligence, or AGI, the industry term for a computer program capable of doing everything a human can do.

They also allege that OpenAI used harsh tactics to prevent workers from voicing their concerns about the technology, including restrictive non-disparagement agreements that departing employees were asked to sign.

“OpenAI is really excited about building AGI and is recklessly racing to be first,” said Daniel Kokotajlo, a former researcher in OpenAI's governance division and one of the group's organizers.

The group published an open letter on Tuesday calling on major artificial intelligence companies, including OpenAI, to establish greater transparency and greater protection for whistleblowers.

Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company, Kokotajlo said. One current and one former employee of Google DeepMind, Google's central artificial intelligence laboratory, also signed.

A spokeswoman for OpenAI, Lindsey Held, said in a statement: “We are proud of our track record of delivering the most capable and safe AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is critical given the importance of this technology, and we will continue to engage with governments, civil society and other communities around the world.”

A Google spokesperson declined to comment.

The campaign comes at a difficult time for OpenAI. It is still reeling from last year's attempted coup, when members of the company's board of directors voted to fire Sam Altman, the chief executive, over concerns about his sincerity. Mr. Altman was brought back days later and the board was recast with new members.

The company also faces legal battles with content creators who have accused it of stealing copyrighted works to train its models. (The New York Times sued OpenAI and its partner, Microsoft, for copyright infringement last year.) And the recent unveiling of a hyper-realistic voice assistant was marred by a public spat with the actress Scarlett Johansson, who claimed that OpenAI had imitated her voice without permission.

But nothing has stuck like the charge that OpenAI has been too cavalier about safety.

Last month, two senior AI researchers, Ilya Sutskever and Jan Leike, left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI's board of directors and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful artificial intelligence systems. His departure was seen by some safety-conscious employees as a setback.

So was the departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI's “superalignment” team, focused on managing the risks of powerful AI models. In a series of public posts announcing his departure, Dr. Leike said he believed “safety culture and processes have taken a back seat to brilliant products”.

Neither Dr. Sutskever nor Dr. Leike signed the open letter written by the former employees. But their exit prompted other former OpenAI employees to speak out.

“When I signed up for OpenAI, I did not sign up for this attitude of, 'Let's put things out into the world and see what happens and fix them later,'” Saunders said.

Some of the former employees have ties to Effective Altruism, a utilitarian-inspired movement that has been concerned in recent years with preventing existential threats from artificial intelligence. Critics have accused the movement of promoting doomsday scenarios about the technology, such as the idea that an out-of-control artificial intelligence system could take over and wipe out humanity.

Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to predict AI advances. He was not, to put it mildly, optimistic.

In his previous job at an AI safety organization, he predicted that AGI would arrive in 2050. But after seeing how quickly AI was improving, he shortened his timelines. He now believes there is a 50% chance that AGI will arrive by 2027, in just three years.

He also believes that the probability that advanced AI will destroy or catastrophically harm humanity – a grim statistic often shortened to “p(doom)” in AI circles – is 70%.

At OpenAI, Kokotajlo noted that although the company had safety protocols in place – including a joint effort with Microsoft known as a “deployment safety board,” which was supposed to review new models for major risks before they were released publicly – those protocols rarely seemed to slow anything down.

For example, he said, in 2022 Microsoft began quietly testing in India a new version of its Bing search engine that some OpenAI employees believed contained a then-unreleased version of GPT-4, OpenAI's cutting-edge large language model. Mr. Kokotajlo said he had been told that Microsoft had not obtained the safety board's approval before testing the new model, and that after the board learned of the tests – through a series of reports that Bing was behaving strangely toward users – it did nothing to stop Microsoft from rolling the model out more widely.

A Microsoft spokesman, Frank Shaw, disputed those claims. He said the tests in India had not used GPT-4 or any OpenAI model. The first time Microsoft released GPT-4-based technology was in early 2023, he said, and it was reviewed and approved by a predecessor of the safety board.

Eventually, Kokotajlo said, he became so concerned that, last year, he told Mr. Altman that the company should pivot toward safety, dedicating more time and resources to guarding against AI risks rather than pushing forward to improve its models. He said that Mr. Altman had claimed to agree with him, but that not much had changed.

He left in April. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.

“The world is not ready, and we are not ready,” Kokotajlo wrote. “And I am concerned that we are moving forward regardless and rationalizing our actions.”

OpenAI said last week that it had begun training a new flagship AI model and was forming a new safety and security committee to explore risks associated with the new model and other future technologies.

As he left, Mr. Kokotajlo declined to sign OpenAI's standard paperwork for departing employees, which included a strict non-disparagement clause barring them from saying negative things about the company, at the risk of having their vested equity clawed back.

Many employees could lose millions of dollars by refusing to sign. Mr. Kokotajlo's vested equity was worth about $1.7 million, he said, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.

(A small firestorm erupted last month after Vox reported the news of these agreements. In response, OpenAI said it had never clawed back vested equity from former employees, and would not do so. Mr. Altman said he was “genuinely embarrassed” not to have known about the agreements, and the company said it would remove non-disparagement clauses from its standard paperwork and release former employees from their agreements.)

In their open letter, Mr. Kokotajlo and other former OpenAI employees call for an end to the use of non-disparagement and non-disclosure agreements at OpenAI and other AI companies.

“Broad confidentiality agreements prevent us from voicing our concerns except to the very companies that may be failing to address these issues,” they write.

They also ask AI companies to “support a culture of open criticism” and establish a reporting process for employees to anonymously raise safety concerns.

The group has retained a pro bono lawyer, Lawrence Lessig, a prominent legal scholar and activist. Mr. Lessig also advised Frances Haugen, a former Facebook employee who became a whistleblower and accused the company of putting profits before safety.

In an interview, Lessig said that while traditional whistleblower protections typically apply to reports of illegal activity, it is important that employees of AI companies be able to discuss risks and potential harms freely, given the importance of the technology.

“Employees represent an important line of security defense, and if they cannot speak freely without retaliation, that channel will be closed,” he said.

Ms. Held, a spokeswoman for OpenAI, said the company has “avenues for employees to express their concerns,” including an anonymous integrity hotline.

Kokotajlo and his fellow signatories are skeptical that self-regulation alone will be enough to prepare for a world with more powerful AI systems, so they are also asking lawmakers to regulate the industry.

“There needs to be some sort of transparent, democratically accountable governance structure responsible for this process,” Kokotajlo said. “Instead of a couple of different private companies competing with each other and keeping everything secret.”
