December 9, 2024


It’s not every day that the world’s most famous company sets itself on fire. Yet it feels like that’s what happened last Friday, when OpenAI’s board announced that it had fired its chief executive, Sam Altman, because he was “not consistently candid in his communications with the board.” In corporate-speak, those are fighting words: the board was all but saying that Altman lied.

The dismissal set off a baffling sequence of events that kept the tech industry glued to its social feeds all weekend: It wiped $48 billion off the valuation of OpenAI’s largest partner, Microsoft. Speculation of wrongdoing ran high, but as employees, Silicon Valley veterans and investors rallied around Altman, talks were underway the very next day to bring him back. Rather than any single egregious act, reporting indicated that this was primarily a dispute over whether Altman was building and selling AI responsibly. By Monday, negotiations had failed, most OpenAI employees were threatening to resign, and Altman announced he was joining Microsoft.

All the while, something else has gone up in flames: the notion that anything other than the profit motive is going to govern how AI is developed and deployed. Concerns about “AI safety” will be run over every time the tech giants stand to capitalize on a new revenue stream.

It’s hard to overstate how wild this whole saga is. In a year when artificial intelligence has swept the business world, OpenAI, with its ubiquitous ChatGPT and DALL-E products, has been the center of the universe. And Altman has been its world-renowned frontman, and arguably the most prominent spokesperson for AI, period.

For a high-flying company’s own board to remove a CEO of such stature on a Friday afternoon, with no warning or prior indication that anything serious was amiss (Altman had just taken center stage at a much-hyped developer conference to announce the launch of OpenAI’s app store), is almost unheard of. (Many have compared the events to Apple’s famous canning of Steve Jobs in 1985, but even that came after the Lisa and the Macintosh failed to meet sales expectations, not at the height of the Apple II’s success.)

So what exactly is happening?

Well, the first thing that’s important to know is that OpenAI’s board is, by design, constituted differently than that of most corporations: it is the board of a nonprofit created to safeguard the development of AI rather than to maximize profit, and it sits atop a for-profit operation. Most boards are tasked with ensuring that their CEO is serving the company’s financial interests; OpenAI’s board is tasked with ensuring that its CEO is not being reckless with the development of artificial intelligence and is acting in the best interests of “humanity.” That nonprofit board controls the for-profit company OpenAI.

Got it?

As Fortune’s Jeremy Kahn has explained, OpenAI’s structure was designed to let the company raise the tens or even hundreds of billions of dollars it would need to succeed in its mission of building artificial general intelligence (AGI), while preventing capitalist forces, and especially any single tech giant, from controlling AGI. And yet, Kahn notes, as soon as Altman struck a $1-billion deal with Microsoft in 2019, “the structure was basically a time bomb.” The ticking only grew louder when Microsoft sank another $10 billion into OpenAI this past January.

We still don’t know exactly what the board meant when it said Altman “was not consistently candid in his communications.” But reporting has focused on a growing rift between the company’s scientific arm, led by co-founder, chief scientist and board member Ilya Sutskever, and its commercial arm, led by Altman.

We do know that Altman has been in expansion mode lately: seeking billions in new investment from Middle Eastern sovereign wealth funds to start a chip company to rival AI chipmaker Nvidia, and another billion from SoftBank for a venture with former Apple design chief Jony Ive to develop AI-focused hardware. That’s on top of launching the aforementioned OpenAI app store for third-party developers, which will let anyone create custom AIs and sell them on the company’s marketplace.

The working narrative now seems to be that Altman’s expansionist mindset and his drive to commercialize AI (and there is probably plenty we don’t yet know on this front) clashed with the Sutskever faction, which had grown concerned that the company it helped found was moving too fast. At least two of the board’s members are associated with the so-called effective altruism movement, which sees AI as a potentially catastrophic force that could destroy humanity.

The board decided that Altman’s conduct violated its mandate. But it also (somehow, wildly) failed to anticipate how big a backlash it would face for firing him. And that backlash came with hurricane force: OpenAI employees and Silicon Valley power players like Airbnb’s Brian Chesky and Eric Schmidt spent the weekend “I am Spartacus”-ing Altman.

It’s not hard to see why. OpenAI had been in talks to sell shares to investors at an $86-billion valuation. Microsoft, which has invested more than $11 billion in OpenAI and now uses the company’s technology across its platforms, was reportedly informed of the board’s decision to fire Altman just five minutes before the wider world. Its leadership was furious, and appears to have led the effort to have Altman reinstated.

But hanging over all of this is the question of whether there should be any safeguards at all on the model of AI development championed by Silicon Valley’s prime movers; whether a board should be able to remove a founder it believes is not acting in the best interests of humanity (which, again, is its stated mission), or whether the company must simply keep expanding and scaling, no matter what.

Look, even if the OpenAI board turns out to be the villain of this story, as venture capital analyst Eric Newcomer points out, we should probably take its decision seriously. Firing Altman was probably not a call made lightly, and just because the board is scrambling now that the decision has turned into a financial threat to the company doesn’t mean its concerns were unfounded. Far from it.

In fact, whatever else it has done, the episode has already succeeded in underscoring just how aggressively Altman has been pursuing business interests. For most tech giants, that would be unremarkable, but Altman has carefully cultivated the aura of a reluctant guru warning the world of great disruptive change. Remember those puppy-dog eyes at the congressional hearing a few months ago, where he pleaded with lawmakers to regulate the industry lest it become too powerful? Altman’s whole pitch is that he is a weary messenger trying to lay the groundwork for responsible uses of AI that benefit humanity, yet here he is roaming the globe courting investors wherever he can, doing everything possible to capitalize on this moment of intense interest in AI.

To those watching closely, it always seemed like an act: a few weeks after those hearings, after all, Altman fought real-world rules that the EU was seeking to impose on AI deployment. And lest we forget, OpenAI was originally founded as a nonprofit that claimed it would operate with radical transparency, before Altman turned it into a for-profit company that keeps its models secret.

Now, I don’t believe for a second that AI is on the verge of becoming powerful enough to destroy the human race. I think that idea owes more to Silicon Valley’s science-fictional sense of self-importance (a sense shared by OpenAI’s new interim CEO, Emmett Shear) and to a uniquely clever marketing strategy. But I do believe AI can cause plenty of harm and danger in the shorter term. And watching AI safety concerns get swept aside so thoroughly at the hands of the Valley’s power players is no cause for joy.

You’d like to believe that executives at companies building AI, who themselves say there is a significant risk of global catastrophe here, couldn’t be sidelined simply because Microsoft lost some stock value. But here we are.

Sam Altman is first and foremost a pitchman for the year’s biggest tech products. No one is sure how useful or interesting most of those products will be in the long run, and they aren’t making much money at the moment, so most of the value is tied up in the pitchman himself. Investors, OpenAI employees and partners like Microsoft need Altman jetting around the world telling everyone how AI is going to eclipse human intelligence any day now far more than they need, say, a merely high-functioning chatbot.

That’s why, more than anything, this looks like a coup for Microsoft. It now has Altman in-house, where he can cheerlead for AI and make deals to his heart’s content. It still holds the license to OpenAI’s technology, and OpenAI will need Microsoft more than ever.

Now, it may still turn out that this was nothing more than a power struggle among board members, a coup attempt that backfired. But if it turns out the board had genuine concerns and raised them with Altman to no avail, then no matter how you feel about AI safety, we should be worried about this outcome: a further consolidation of power at one of the biggest tech companies, and even less accountability for the product than before.

If anyone still believed that a company could pursue the development of a product like AI without increasingly taking orders from Big Tech, I hope the Altman debacle has disabused them of that fantasy. The reality is that no matter what other inputs go into the company behind ChatGPT, the output will be the same: money talks.

Source: www.latimes.com
