OpenAI was a research lab – now it’s just another tech company

The thing about asking investors for money is that they want to see returns.

OpenAI started with a famously altruistic mission: to help humanity by developing artificial general intelligence. But over time, it became one of the best-funded companies in Silicon Valley. Now the tension between these two facts is coming to a head.

Weeks after releasing a new model that it claims can “reason,” OpenAI is moving toward abandoning its nonprofit status, some of its most senior employees are leaving the company, and CEO Sam Altman – once briefly ousted over apparent trust concerns – is cementing his position as one of the most powerful people in tech.

On Wednesday, OpenAI’s longtime chief technology officer Mira Murati announced she was leaving the company “to create time and space for my own explorations.” On the same day, research director Bob McGrew and Barret Zoph, vice president of post-training, said they were also leaving. Altman called the leadership changes “a natural part of business” in an X post following Murati’s announcement.

“I’m not saying it’s normal for it to happen so abruptly, of course, but we are not a normal company,” Altman wrote.

But it follows a trend of departures that intensified last year after the board’s failed attempt to fire Altman. OpenAI co-founder and chief scientist Ilya Sutskever, who delivered the news of the firing to Altman before publicly walking back his criticism, left OpenAI in May. Jan Leike, a key OpenAI researcher, resigned just days later, saying that “safety culture and processes have taken a back seat to shiny products.” Nearly all of the board members in place at the time of the ouster, with the exception of Quora CEO Adam D’Angelo, have since resigned, and Altman secured a board seat of his own.

The company that once fired Altman because he was “not consistently candid in his communications” has since been rebuilt around him.

No longer just a “donation”

OpenAI began as a nonprofit research lab and later created a for-profit subsidiary, OpenAI LP. The for-profit arm is allowed to raise funds in pursuit of building artificial general intelligence (AGI), while the nonprofit’s mission is to ensure that AGI benefits humanity.

In a bright pink box on a webpage about OpenAI’s board structure, the company emphasizes that “it would be prudent” to view any investment in OpenAI “in the spirit of a donation” and that investors “may not see a return.”

Investors’ profits are capped at 100x, with any excess returns going to the nonprofit, which is meant to prioritize social benefit over financial gain. And if the for-profit side strays from that mission, the nonprofit side is supposed to step in.

OpenAI is now reportedly approaching a valuation of $150 billion – about 37.5 times its reported revenue – with no path to profitability in sight. The company is looking to raise funds from investors such as Thrive Capital, Apple, and a UAE-backed investment firm, with a minimum investment of a quarter of a billion dollars.

OpenAI doesn’t have the deep pockets of incumbents like Google or Meta, both of which are building competing models (though it’s worth noting that these are publicly traded companies with their own responsibilities to Wall Street). Competition from rivals founded by former OpenAI researchers comes on the heels of OpenAI attempting to raise $40 billion in new funding. We are far beyond the “spirit of a donation” here.

OpenAI’s nonprofit-controlled for-profit structure puts the company at a disadvantage when courting investors. So it made perfect sense when Altman reportedly told employees earlier this month that OpenAI would be reorganized as a for-profit company next year. This week, Bloomberg reported that the company is considering becoming a benefit corporation (like Anthropic) and that investors plan to give Altman a 7 percent equity stake. (Altman almost immediately denied the latter in a staff meeting, calling it “ridiculous.”)

And crucially, OpenAI’s nonprofit parent company would reportedly lose control as part of these changes. Just a few weeks after this news broke, Murati and Co. were out.

Both Altman and Murati claim that the timing is coincidental, and that the CTO simply wanted to leave while the company is on an “upswing.” Murati (through representatives) declined to speak with The Verge about the sudden move. Wojciech Zaremba, one of OpenAI’s last remaining co-founders, compared the departures to “the hardships faced by parents in the Middle Ages, when six out of eight children died.”

Whatever the reason, the moves amount to a near-complete turnover of OpenAI’s leadership since last year. Besides Altman himself, the last remaining leader featured on Wired’s September 2023 cover is president and co-founder Greg Brockman, who backed Altman during the coup. But he has been on a personal leave of absence since August and is not expected to return until next year. The same month he stepped away, another co-founder and key leader, John Schulman, left the company to work for Anthropic.

When asked for comment, OpenAI spokesperson Lindsay McCallum Rémy pointed The Verge to previous comments made to CNBC.

And no longer just a “research laboratory”

As Leike suggested in his farewell message about “shiny products,” turning the research lab into a for-profit company puts many of its longtime employees in an awkward position. Many presumably joined to focus on AI research, not to build and sell products. And while OpenAI’s nonprofit structure hasn’t been dismantled yet, it’s not hard to guess how a fully for-profit version would operate.

Research labs operate on longer timelines than revenue-chasing companies. They can delay product releases when necessary, with less pressure to get to market and scale quickly. Perhaps most importantly, they can be more conservative about safety.

There is already evidence that OpenAI is prioritizing speedy launches over cautious ones: according to a July report in The Washington Post, the company threw a launch party for GPT-4o “prior to knowing if it was safe to launch.” The Wall Street Journal reported on Friday that safety staffers worked 20-hour days and had no time to double-check their work. Initial test results showed GPT-4o wasn’t safe enough to deploy, but it was deployed anyway.

Meanwhile, OpenAI researchers continue to work on developing what they believe are the next steps toward human-level artificial intelligence. o1, OpenAI’s first “reasoning” model, is the start of a new series that the company hopes will power intelligent automated “agents.” The company consistently introduces features that are just ahead of the competition – this week it rolled out Advanced Voice Mode to all users, just days before Meta announced a similar product at Connect.

So what will OpenAI become? All signs point to a conventional tech company under the control of one powerful executive – the exact structure it was designed to avoid.

“I think this will hopefully be a great transition for everyone involved, and I hope OpenAI will be stronger for it, as we are for all of our transitions,” Altman said onstage at Italian Tech Week, shortly after Murati’s departure was announced.