It was a research lab, but now it is a tech company
On the Demise of OpenAI, a Company That Formerly Worked Overtime to Educate and Protect Scientists
Over the past year, a steady stream of departures has been building, following the board's failed attempt to fire Altman. OpenAI cofounder and chief scientist Ilya Sutskever, who delivered Altman the news of his firing before publicly walking back his criticism, left OpenAI in May. Jan Leike, a key OpenAI safety researcher, quit just days later, saying that "safety culture and processes have taken a backseat to shiny products." Most of the OpenAI board members from that era are gone as well; only Adam D'Angelo kept his seat.
In a bright pink box on a webpage about OpenAI’s board structure, the company emphasizes that “it would be wise” to view any investment in OpenAI “in the spirit of a donation” and that investors could “not see any return.”
There's already evidence that OpenAI is prioritizing fast launches over cautious ones: a source told The Washington Post in July that the company threw a launch party for GPT-4o "prior to knowing if it was safe to launch." The Wall Street Journal reported on Friday that safety staffers worked 20-hour days and didn't have time to double-check their work. Initial tests showed that GPT-4o wasn't safe to deploy, but it was deployed anyway.
OpenAI, Anthropic, and the For-Profit Side of Charity: A Case Study in Crowdfunding an AI-Aided Startup
Research labs work on longer timelines than companies do. They can delay product releases when necessary, with less pressure to launch quickly and scale up. They can be a little more conservative about safety.
And crucially, as part of these changes, OpenAI's nonprofit parent would reportedly lose control. Only a few weeks after this news was reported, Murati and company were out.
OpenAI doesn't have the deep pockets or established businesses of Google or Meta, which are both building competing models (though it's worth noting that these are public companies with their own responsibilities to Wall Street). Fellow AI startup Anthropic, founded by former OpenAI researchers, is nipping at OpenAI's heels while looking to raise new funds at a $40 billion valuation. We're way past the "spirit of a donation" here.
Investor profits are capped at 100x; any excess returns flow to the nonprofit, which is supposed to focus on societal benefit over financial gain. The nonprofit side can intervene if the for-profit side strays from the mission.
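To make the arithmetic of that cap concrete, here is a minimal sketch in Python. The 100x multiple is the only figure taken from the structure described above; the dollar amounts and the `split_returns` helper are hypothetical, purely for illustration.

```python
def split_returns(investment: float, gross_return: float,
                  cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split a payout under a capped-profit structure: investors keep
    returns up to cap_multiple times what they put in; anything beyond
    the cap flows to the nonprofit parent."""
    cap = investment * cap_multiple
    investor_share = min(gross_return, cap)
    nonprofit_share = max(gross_return - cap, 0.0)
    return investor_share, nonprofit_share

# Hypothetical numbers: a $10M stake that somehow returns $1.5B gross.
investor, nonprofit = split_returns(10e6, 1.5e9)
print(f"Investor keeps ${investor:,.0f}; nonprofit gets ${nonprofit:,.0f}")
# -> Investor keeps $1,000,000,000; nonprofit gets $500,000,000
```

In other words, the cap only bites on truly enormous returns, which is why the structure was pitched as compatible with the mission while still attracting capital.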
Are Wars Still Killing Us? Or Will Artificial Intelligence Crack the Big Bads? A Conversation with Sam Altman at Burning Man
I'm also wary of the supposed bonanza that will come when all of our big problems are solved. Let's concede that AI might actually crack humanity's biggest conundrums. We humans would still have to implement those solutions, and that's where we've failed time and again. We don't need a large language model to tell us war is hell and we shouldn't kill each other. Yet wars continue.
He's a fan of universal basic income, which he believes will help cushion the blow of lost wages. Artificial intelligence might indeed generate the wealth to make such a plan feasible, but there's little evidence that the people who amass fortunes (or even those who still eke out a modest living) will be inclined to embrace the concept. Altman might have had a great experience at Burning Man, but some kind souls of the Playa seem to be up in arms about a proposal, affecting only people worth over $100 million, to tax some of their unrealized capital gains. It's a dubious premise that such people, or others who become super rich working at AI companies, will crack open their coffers to fund leisure time for the masses. One of the US's major political parties can't stand Medicaid, so one can only imagine how populist demagogues will regard UBI.
We don't know what life will be like when most of our current jobs go the way of lamplighters. This week, Altman appeared on a new show that asks tech leaders and celebrities to share their Spotify playlists. Explaining why he chose the tune "Underwater" by Rüfüs du Sol, Altman said it was a tribute to Burning Man, which he has attended several times. He says the festival is a good example of what a post-AGI world can look like, with people focused on doing stuff for each other and making amazing gifts for each other.
Altman correctly notes that the march of technology has brought what were once luxuries to everyday people, including some comforts unavailable to pharaohs and lords. Charlemagne never enjoyed air-conditioning. Working-class people, some on public assistance and some on their own, have access to large-screen TVs and delivery services that bring pumpkin lattes and pet food to their doors. But Altman is not acknowledging the whole story. Despite massive wealth, not everyone is thriving; many are homeless or severely impoverished. To paraphrase William Gibson, paradise is not evenly distributed. That's not because technology has failed; we have. I suspect the same will be true if AGI arrives, especially since so many jobs will be automated.
Source: No, Sam Altman, AI Won’t Solve All of Humanity’s Problems
Sam Altman, the Strawberry Shortcut, or How Artificial Intelligence Is Ushering in a Golden Age: An Answer to Altman
No matter what you think of Sam Altman, it's indisputable that this is his truth: artificial general intelligence (AI that matches and then exceeds human capabilities) is going to obliterate the problems plaguing humanity and usher in a golden age. I suggest that this concept be dubbed the Strawberry Shortcut, in honor of OpenAI's recent breakthrough in artificial reasoning. Like the shortcake, the premise looks appetizing but is less substantial in the eating.
Maybe he published this to dispute a train of thought that dismisses the apparent gains of large language models as something of an illusion. His answer: nuh-uh. In the interview he said that deep learning works and mocked those who dismiss programs like GPT-4o as simply stupid engines. "Once it can start to prove unproven mathematical theorems, do we really still want to debate: 'Oh, but it's just predicting the next token?'" he said.