I’ve seen reports and studies showing that products advertised as including or involving AI are off-putting to consumers. And this matches what almost every person I hear irl or online says. Regardless of whether they think that in the long term AI will be useful, problematic, or apocalyptic, nobody is impressed by Spotify offering an “AI DJ” or by “AI coffee machines”.
I understand that AI tech companies might want to promote their own AI products if they think there’s a market for them. And they might even try to create a market by hyping the possibilities of “AI”. But rebranding your existing service or algorithms as being AI seems like a super dumb move: obviously stupid to tech-literate people and off-putting or scary for others. Have they just completely misjudged the world’s enthusiasm for this buzzword? Or is there some other reason?
The things that make a company successful are not the same as the things that make executives successful
It was super cool for like three weeks. Now it’s the gambler’s fallacy they’re hanging on to.
Ohhh, you fn nailed it. Nice comparison.
Attracts investors.
When people are evaluating companies and see a company missing out on the current trend, how is that going to factor into their valuation of the stock?
Because the boss thinks it sounds cool and doesn’t want to be the only kid on the block without an AI product to sell.
This as much as anything else.
I work at a company that’s looking to integrate AI into their product. The VP who started us down the AI road has made it clear that he doesn’t see it being the reason that anyone would choose our product over a competitor’s, but we also don’t want to be seen as the only ones who don’t have it.
businesses are hoping they can use ai to higher cheaper labor. they want to be able to higher someone who does not know how to use the relevant program but can coax a machine to produce it or can get enough information from a machine to do the job. also they hope to eliminate human trainers and such.
FYI
~~higher~~ hire

No no. I meant they can hire cheaper labor and get them to a higher potential and… yeah, I typoed.
No worries mate, we all po it.
They want to create some hype and look cool by using AI chatbots. And most normies don’t care about privacy and the dangers of AI in the future, they only care about “wow I can use AI for bla… bla…”
But they have no idea that one day AI could take over their jobs… and rich people like Sam Altman are getting richer, and he only pays you with ~~UBI money~~ some pieces of computing: https://x.com/tsarnick/status/1789107043825262706
Also, AI companies aim for government contracts and medium / big corpos.
Just dumb. In the case of the company I work for, at least.
A lot of people have come to realize that LLMs and generative AI aren’t what they thought they were. They’re not electric brains that are reasonable replacements for humans. They get really annoyed at the idea of a company trying to use them that way.
Some companies are just dumb and want to do it anyway because they misread their customers.
Some companies know their customers hate it but their research shows that they’ll still make more money doing it.
Many people that are actually working with AI realize that AI is great for a much larger set of problems than chatbots. Many of those problems are worth a ton of money (e.g. monitoring biometric data to predict health risks earlier, natural disaster prediction, and fraud detection).
None of those are LLMs though, or particularly new.
You’re right. They’re not LLMs and they’re not particularly new.
The main new part is that new techniques in AI and better hardware mean that we can get better answers than we used to be able to get. Many people also realize that there’s a lot of potential to develop systems that are much better at answering those questions.
So when people ask, “Why are companies investing in AI when customers hate AI?”, part of the answer is that they’re investing in something different than what most people think of when they hear “AI”.
Two main reasons:
Attracting investors
Attracting talented workers by signaling they are doing technical research
Also, people working in the industry might not even use those products. They want a cool job, not a cool product.
As a tech worker, it’s more towards attracting investors.
I was discussing this with a friend. We came to the conclusion that “entrepreneur” means “unskilled, uneducated and unable to work” and that the harder a product is marketed, the more worthless it is.
It hypes investors. Investors are the customers.
My understanding is that a lot of venture capitalist funding is driven by gut feel and personal connection. Like, they’ll tell you that they’re the vanguard of the future with a vision, but most of the time they’re just cliquey bros going “dude, sick” and burning money.
There’s an anecdote in the book “The Cold Start Problem” about how Zoom got funding even though the guys funding it thought video was a solved problem and that a new video company wouldn’t go anywhere, but the Zoom guy was their bro, so they gave him millions of dollars.
I feel like it’s possible some future will look back at this the way we look at feudalism. Just like, that’s such a bad system, why did people put up with it?
Just like, that’s such a bad system, why did people put up with it?
Because hindsight is 20/20 and people had preconceptions back then that filled in the gaps, as they do right now.
The gaps are, and were, actually full of nonsense like “he’s my buddy, I’ll give him money”. But people expect the process to be a lot more reliable and solid, because they think they’d be more careful with that kind of money, not realising that to some people millions are pocket change (and nobody is careful with pocket change), and that others gamble with other people’s money and are thus a lot more cavalier.
If my workplace is in any way representative, it’s because decisions are made by close-to-retirement, out-of-touch old geezers who want to signal very hard that they are not out-of-touch old geezers. So they push the “new thing” for lack of any actually innovative ideas of their own. Then, when the younger team members who do have some rough knowledge of the “new thing” try to explain why it might be a bad idea, they call them afraid of progress and double down on the “new thing” even harder.
It’s a “don’t want to miss the ship” thing, where companies have to invest in whatever’s trending in case it becomes successful and gives them an advantage. If they wait until it’s proven, they might miss a competitive advantage (having to start learning after others). In the case of AI it’s even more important, since the promise sounds actually useful (the summarize-anything-quickly bit, at least), unlike, say, NFTs. At least that’s kind of how it got explained to me at one of my jobs.
AI is normalising data collection.