The Stink Is on OpenAI
This May End Quickly
What I like to call “the stink” has settled over OpenAI. Its Large Language Model technology is being buried in the press. CEO Sam Altman is being treated like he just lost a playoff game. (Image from ChatGPT.)
The stink is an attitude about tech companies that lose their way. I’ve seen it dozens of times. Leaders who seemed mighty suddenly develop feet of clay. Companies can fail quickly.
It happened to An Wang and Wang Labs. It happened to Jim Clark and Silicon Graphics, and to Jeff Immelt at General Electric. I’m sure you can cite several other examples.
Size is no protection against the stink. The product doesn’t work, it’s aimed at the wrong market, there’s something better out there. The leader is too rigid, the company can’t adjust, the stock is down, the stock is out.
It’s true the stink isn’t always fatal.
The stink got on IBM a generation ago and it survived. It got on AT&T a decade ago and it survived. The stink was on Larry Ellison of Oracle for years, but he finally got a clue on the cloud and then on AI.
But the comebacks are always tentative, partial. Oracle is still worth “just” $500 billion. Sounds like a lot. Larry’s fortune is top five. But it’s a pittance next to the Cloud Czars, who went with open source and put up the $1 billion a quarter it took to play the cloud game in 2010. They’re worth trillions.
Once the stink gets on you, it doesn’t wash off. Even changing CEOs doesn’t work. You were known for one thing, that one thing is no longer a thing, so you’re no longer a thing, either. If you’re lucky you get bought, but then you’re a subsidiary and you’re gradually digested. You’re fish food.
What Happened?
Gary Marcus didn’t kill OpenAI, although lazy people in the media will claim he did. You might as well say I did it, and I’m just a reporter. The stink has been clear to me for months, maybe for years, because Large Language Models don’t lead to Artificial General Intelligence, which was OpenAI’s claim all along.
LLMs are look-up models. You train a database with pointers; the software finds them in response to a query; and there are a variety of methods for getting things in and out. That’s not intelligence. It’s a librarian.
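To see what that librarian behavior looks like, here’s a minimal sketch. It’s a hypothetical word-overlap index, not anything from OpenAI’s stack, and not how a transformer is actually built; it just shows the behavior I’m describing: find the best match on the shelves, and come back empty-handed when the shelves are bare.

```python
# Toy "librarian" look-up, illustrating the analogy above.
# Hypothetical example only: a tiny index of snippets, scored by word overlap.
# This is NOT how a transformer works; it is the behavior being described.

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of words."""
    return set(text.lower().split())

# A tiny "database" of facts the librarian has on its shelves.
INDEX = [
    "Wang Labs made word processors",
    "Silicon Graphics built 3D workstations",
    "Oracle sells databases and cloud services",
]

def lookup(query: str) -> str | None:
    """Return the snippet sharing the most words with the query, or None."""
    q = tokenize(query)
    best, best_score = None, 0
    for doc in INDEX:
        score = len(q & tokenize(doc))
        if score > best_score:
            best, best_score = doc, score
    return best

print(lookup("who built 3D workstations?"))  # found on the shelf
print(lookup("what is quantum gravity?"))    # not in the database -> None
```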
LLMs are very useful. But they have no creativity. They don’t ask questions, and questions are at the heart of human intelligence. If the answers are already in the database, the software will find them. If they’re not, you’re Shit Out of Luck. What’s worse, of course, is that LLMs lie. They make things up. They have no morals and no grounding in the real world.
We all know this now. I knew it two years ago. It just took time for momentum and money to get the message. It took DeepMind time to match the capability, Anthropic time to figure out the marketing, and Google time to catch up.
Suddenly it’s a commodity.
Where From Here?
When I wrote about cauterizing OpenAI, I was describing a process that’s now underway: pretending it’s an isolated case. Some will say it isn’t marketed well, as Anthropic is. Some will say it doesn’t have the user base Google has. Others will just blame Sam Altman.
AI will go on. Small models that don’t seem to work now will be tweaked and will get better. Productivity will continue to grow and will justify spending. It just won’t be what it was.
Because we caught on to Altman, some will breathe a sigh of relief, as they did almost 30 years ago when Netscape started failing. That wasn’t “the big one.”
But the big one is coming.