Microsoft won’t let them fail; it would be too embarrassing.
lol that might already be the main factor keeping them alive
Now that capital has integrated them into their system they will not be allowed to fail. At least for now.
Or Microsoft and Meta will make sure there’s less competition in the future for their own LLMs?
It seems like MS could really fuck them up if they stopped using OpenAI for all their Azure stuff. As of now I don’t think MS relies on their own LLM for anything?
MS abandons basically anything new that doesn’t make them even more absurdly rich instantly these days.
Game pass and Xbox aren’t profitable but maybe they’re trying to change that now.
Yea but the Xbox division is like 2 decades old so they’re probably afraid of shitcanning the whole thing. It is likely that Spencer goes down with the ship though after failing to achieve anything beyond securing Xbox’s position in third place so firmly and distantly that the only remaining option is Microsoft just buying Sony completely to get the market the hard way. They’ll wait until Kamala replaces Lina Khan at the FTC first though.
They are choosing to create as many layers of separation as possible while integrating the systems directly. This is just how global capital is currently run.
The iron law of “nothing ever happens” necessitates this
Nah but for real how much life can this bubble still have left?
The iron law of “nothing ever happens”
There are decades where nothing ever happens, and there are weeks where we are so back
Almost like quantitative changes turn into qualitative changes or something
They rupture even
A lot, because nothing ever happens.
Inshallah
and nothing of value is at risk of being lost
Good, please take the entire fake industry with you
No offense to the AI researchers here (actually maybe only one person lol), but the people who lead/make profit off of/fundraise off of your efforts now are demons
I do think that if OpenAI goes bust that’s gonna trigger a market panic that’s gonna end the hype cycle.
Inshallah, I am fed up with dealing with these charlatans at work
A solution in search of a problem
I just know the AI hype guys in my dept are gonna get promoted and I’ll be the one answering why our Azure costs are astronomical while we have not changed our portfolio size at all lol
You need to stand up an AI team to analyse the issue and identify efficiencies (firing the AI hype guys and moving to an AI hype outsourcing model)
AI hype guys? yeah sorry AI can do that for us now
My guess for the dynamics: OpenAI investors panic and force the company to cut costs and increase pricing, other AI company investors panic with the same result, AI becomes prohibitively expensive for a lot of use cases, ending the hype cycle.
yeah I think that’s very plausible
I think that’s the best argument for why the tech industry won’t let that happen. All of the big tech stocks are getting a boost from this massive grift.
Worst case scenario one of the tech giants buys them. Then they pare back the expenses and hide it in their balance sheet, and keep everyone thinking AGI is just around the corner.
It’s certainly possible, but I don’t think any of the tech giants are in a position to do that today. Google, Microsoft, and Amazon are in a cost cutting cycle, and Meta’s csuite is probably on a short leash after the metaverse boondoggle. Apple is the most likely one because they’re generally behind everyone else across all ML products, especially LLMs, but afaik they’re bracing for their first drop in sales in 15 years, so buying OpenAI might be a tough pitch.
I believe that Microsoft owns a huge portion of OpenAI, like just short of a majority stake
As far as “AI” goes, it’s here to stay. As for OpenAI they will probably be bought off by one of the big ones, as is usually the case with these companies.
I agree that this tech has lots of legitimate uses, and it’s actually good for the hype cycle to end early so people can get back to figuring out how to apply this stuff where it makes sense. LLMs also managed to suck up all the air in the room, but I expect the real value is going to come from using them as a component in larger systems utilizing different techniques.
Yeah but integrating LLMs with other systems is already happening.
The most recent case is out of DeepMind, where they managed to get a silver-medal score at the International Mathematical Olympiad (IMO) by combining an LLM with a formal verification language (Lean), plus synthetic data and reinforcement learning. Although I think they had to manually formalize the problems before feeding them to the algorithm, and it took several days to solve most of the problems (except for one that took minutes), so there’s still a lot of room for improvement.
Sure, but you can do a lot more than that. You could combine LLMs as part of a bigger system of different kinds of agents, each specializing in different things. Similarly to the way different parts of the brain focus on solving different types of problems. Sort of along the lines of what this article is describing https://archive.ph/odeBU
It’s kind of like how graphics cards are used to optimize specific repeated computations but not used for general computation
Good analogy, it’s a tool for solving a fairly narrow problem in a particular domain.
I think they’re owned by Microsoft in some murky way designed to slip past monopoly concerns
Have they tried replacing their workers with AI to save money?
Nature is healing.
Only losing $5 billion a year, so you could run it off Nvidia’s market cap growth since 2022 for another 500 years?
I like how it mentions Nvidia and Microsoft as if this shit is an anomaly and it’s actually profitable for the other guys and won’t collapse we promise
Nvidia is in sell the shovels business, they’ll be fine even if stock craters
With how hyped their stock price is right now, I’m not entirely convinced they would be fine if the bubble pops. Like, it’s all made up Monopoly-money anyway, but if 90% of your valuation disappears overnight I feel like that might actually just rip all institutional money out of your company in an economic panic attack.
people use GPUs for real (not LLMs or making shrimp jesus pictures) shit, worst case scenario is there will be a firesale for gamers and unrelated machine learning companies to enjoy. Nvidia’s stock will take a hit, but it won’t kill them like it will kill hundreds of companies that went a little too all-in on LLMs.
Yeah, like Lehman Bros. The only bad guys in the industry.
Is this because AI LLMs don’t do anything good or useful? They get very simple questions wrong, will fabricate nonsense out of thin air, and even at their most useful they’re a conversational version of a Google search. I haven’t seen a single thing they do that a person would need or want.
Maybe it could be neat in some kind of procedurally generated video game? But even that would be worse than something written by human writers. What is an LLM even for?
They have places they can be used, and I think that some of the smaller models might find their way into more niches as time goes on.
But there just aren’t enough uses for OpenAI to make back their investment. The hope that LLMs would turn into a general AI is pretty much dead, and the results are in from the early adopters: AI more often increases workloads than decreases them.
I’ve been thinking AI generated dialogue in Animal Crossing would be an improvement over the 2020 game.
To clarify I’m not wanting the writers at the animal crossing factory to be replaced with ChatGPT. Having conversations that are generated in real time in addition to the animals’ normal dialogue just sounds like fun. Also I want them to be catty again because I like drama.
Make the villagers petty assholes like the original game and RETVRN the crabby personality type and it would be an improvement.
Nah, something about AI dialogue is just soulless and dull. Instantly uninteresting. Same reason I don’t read the AI slop being published in ebooks. It has no authorial intent and no personality. It isn’t even trying to entertain me. It’s worse than reading marketing emails because at least those have a purpose.
It depends on the training data. Once you use all data available, you get the most average output possible. If you limit your training data you can partially avoid the soullessness, but it’s more unhinged and buggy.
Cory Doctorow has a good write-up on the reverse centaur problem and why there’s no foreseeable way that LLMs could be profitable. Because they’re error-prone, LLMs are really only suited to low-stakes uses, and there are lots of low-stakes, low-value uses people have found for them. But they need high-value use-cases to be profitable, and all of the high-value use-cases anyone has identified for them are also high-stakes.
Thank you. This is a good article. Are there any good book length things I could read on this topic?
I do not know. Perhaps Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell.
Is there a single LLM you can’t game into apologizing for saying something factual then correcting itself with a completely made up result lol
I think there are legitimate uses for this tech, but they’re pretty niche and difficult to monetize in practice. For most jobs, correctness matters, and if the system can’t be guaranteed to produce reasonably correct results then it’s not really improving productivity in a meaningful way.
I find this stuff is great in cases where you already have domain knowledge, and maybe you want to bounce ideas off and the output it generates can stimulate an idea in your head. Whether it understands what it’s outputting really doesn’t matter in this scenario. It also works reasonably well as a coding assistant, where it can generate code that points you in the right direction, and it can be faster to do that than googling.
We’ll probably see some niches where LLMs can be pretty helpful, but their capabilities are incredibly oversold at the moment.
We might eventually get to a point where LLMs are a useful conversational user interface for systems that are actually intrinsically useful, like expert systems, but it will still be hard to justify their energy cost for such a trivial benefit.
The costs of operation aren’t intrinsic though. There is a lot of progress in bringing computational costs down already, and I imagine we’ll see a lot more of that happening going forward. Here’s one example of a new technique resulting in cost reductions of over 85% https://lmsys.org/blog/2024-07-01-routellm/
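For a rough idea of how routing cuts costs, here’s a toy sketch. Everything in it is made up for illustration: the model names, the per-query prices, and the “long query = hard query” heuristic (RouteLLM uses a trained classifier, not query length).

```python
# Toy LLM routing sketch: send easy queries to a cheap model,
# hard queries to an expensive one, then compare total cost
# against sending everything to the expensive model.

COST = {"cheap-model": 0.001, "big-model": 0.03}  # made-up $/query

def route(query: str) -> str:
    # Stand-in difficulty heuristic; a real router scores the
    # query with a learned classifier instead of word count.
    return "big-model" if len(query.split()) > 20 else "cheap-model"

def batch_cost(queries, model_for=route):
    return sum(COST[model_for(q)] for q in queries)

# 90 easy queries and 10 long/hard ones
queries = ["what's 2+2"] * 90 + ["summarize this contract " + "x " * 30] * 10

all_big = batch_cost(queries, lambda q: "big-model")  # no routing
routed = batch_cost(queries)                          # with routing
print(f"savings: {1 - routed / all_big:.0%}")         # → savings: 87%
```

With this particular made-up mix of queries and prices you land in the same ballpark as the figure in the post, but the real number obviously depends on your workload and how good the router is at not sending hard queries to the cheap model.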
AI is great for asking questions, not answering them
The LLM characters will send you on a quest, and then you’ll go do it, and then you’ll come back and they won’t know you did it and won’t be able to give you a reward, because the game doesn’t know the LLM made up a quest, and doesn’t have a way to detect that you completed the thing that was made up.
1 trillion more parameters just a trillion more parameters bro i swear we’ll be profitable then bro
Startups having 12 months of runway before insolvency is pretty normal. OpenAI’s valuation and burn rate might be a problem since they’ll need to do a bigger round, but I doubt it. They are basically the hottest startup on the planet right now. I think this article is interesting but ultimately doesn’t mean anything.
Wow so the tech industry overhyped something to boost stocks. Unbelievable
unprecedented!