As the AI market continues to balloon, experts are warning that its VC-driven rise is eerily similar to that of the dot-com bubble.

  • Lettuce eat lettuce
    link
    fedilink
    English
    10
    2 years ago

    AI will follow the same path as VR IMO. Everybody will freak out about it for a while, and tons of companies will try getting into the market.

    And after a few years, nobody will really care about it anymore. The people that use it will like it. It will become integrated in subtle and mundane ways, like how VR is used in TikTok filters, smartphone camera settings, etc.

    I don’t think it will become anything like general intelligence.

    • magic_lobster_party
      link
      fedilink
      4
      2 years ago

      The problem with VR is the cost of a headset. It’s a high cost of entry. Few want to buy another expensive device unless it’s really worth it.

      Generative AI has a small cost of entry for the consumer. Just log in to a site, maybe pay some subscription fee, and start prompting. I’ve used it to quickly generate Excel formulas, for example. Instead of looking for a particular answer on some website full of SEO garbage, I can get an answer immediately.
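
      For what it’s worth, that Excel use case really is a one-prompt job. Here’s a minimal sketch, assuming the `openai` Python package and an API key in the environment; the model name and prompt are illustrative, not a recommendation.

      ```python
      # Ask a hosted LLM for an Excel formula instead of digging through
      # SEO-cluttered search results. Model choice here is arbitrary.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative model name
          messages=[
              {"role": "system",
               "content": "Reply with a single Excel formula and nothing else."},
              {"role": "user",
               "content": "Sum column B where column A equals 'Widgets'."},
          ],
      )
      print(resp.choices[0].message.content)
      # Expected to be along the lines of: =SUMIF(A:A,"Widgets",B:B)
      ```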

    • @[email protected]
      link
      fedilink
      English
      10
      2 years ago

      That’s exactly what was said about the internet in 1990. We have no idea what the next step will be.

    • R0cket_M00se
      link
      fedilink
      English
      6
      2 years ago

      Nah, this ain’t it.

      So here’s the thing about AI: every company desperately wants their employees to be using it, because it’ll increase their productivity and eventually allow upper management to fire more people and pass the savings on to the C-suite. Just like with computerization.

      The problem is that you can’t just send all of your spreadsheets of personal and financial data to OpenAI/Bing, because from a security perspective that’s a huge black hole of data exfiltration that will make your company more vulnerable. How do we solve the problem?

      In the next five to ten years you will see everyone from Microsoft/Google to smaller, more niche groups begin to offer on-premises or cloud-based AI models that are trained on a standardized set of information by the manufacturer/distributor, and then further trained on company data by internal engineers or a new type of IT role focused entirely on AI (just like how we have automation and cloud engineering positions today). See the sketch at the end of this comment for what querying such a model could look like.

      The data worker of the future will have a virtual assistant that emulates everything we thought Google Assistant and Cortana were going to be, and it will replace most data entry positions. Programmers will probably be fewer and farther between, and the people who keep their jobs in general will be the ones who can multiply and automate their workload with the ASSISTANCE of AI.

      It’s not going to replace us anytime soon, but it’s going to change the working environment just as much as the invention of the PC did.
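
      Here’s that sketch, assuming a self-hosted server that speaks the OpenAI-compatible API (as vLLM and llama.cpp’s server do); the hostname and model name are made up for illustration.

      ```python
      # Point a standard client at an on-premises model server instead of a
      # public API, so company data never leaves the internal network.
      from openai import OpenAI

      client = OpenAI(
          base_url="https://llm.internal.example.com/v1",  # hypothetical internal endpoint
          api_key="unused-internally",  # many self-hosted servers ignore the key
      )

      reply = client.chat.completions.create(
          model="company-llama-13b",  # hypothetical model fine-tuned on internal data
          messages=[{
              "role": "user",
              "content": "Summarize Q3 spend by department from the figures below.",
          }],
      )
      print(reply.choices[0].message.content)
      ```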

  • @[email protected]
    link
    fedilink
    English
    18
    2 years ago

    Let this sink in: some companies got $100k from VCs for a project that was pretty much just software making API calls to ChatGPT.

    Obviously the bubble will burst.

  • @[email protected]
    link
    fedilink
    English
    11
    2 years ago

    Of course. Sure, AI-generated images are impressive, but there’s no way those companies could cover the operational and R&D costs if VCs weren’t injecting shitloads of fake money.

    • @[email protected]
      link
      fedilink
      English
      4
      2 years ago

      Yeah, early this year I was crunching the numbers on even a simple client to interface with LLM APIs. It never made sense: the monthly price I would have had to charge, compared to what others were charging, to feel at least financially safe never felt like a viable business model or a real value add. That’s not even including generative art, which would definitely cost much more. So I don’t even know how any of these companies charging <$10/mo are profitable.
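
      As a back-of-the-envelope sketch of that math (every number below is an assumption, not real pricing or usage data):

      ```python
      # Unit economics for a thin wrapper around an LLM API, all figures assumed.
      PRICE_IN_PER_1K = 0.01    # USD per 1k input tokens (assumed)
      PRICE_OUT_PER_1K = 0.03   # USD per 1k output tokens (assumed)
      TOKENS_IN, TOKENS_OUT = 500, 700   # assumed average request size
      REQUESTS_PER_MONTH = 600           # assumed heavy user, ~20 requests/day
      SUBSCRIPTION = 10.00               # what competitors charge per month

      cost_per_request = (TOKENS_IN / 1000) * PRICE_IN_PER_1K \
                       + (TOKENS_OUT / 1000) * PRICE_OUT_PER_1K
      monthly_cost = cost_per_request * REQUESTS_PER_MONTH

      print(f"API cost per request:    ${cost_per_request:.4f}")
      print(f"API cost per user/month: ${monthly_cost:.2f}")
      print(f"Margin on a ${SUBSCRIPTION:.0f}/mo plan: ${SUBSCRIPTION - monthly_cost:.2f}")
      # With these assumptions a heavy user already costs more than the
      # subscription, before hosting, support, or image generation are counted.
      ```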

      • @[email protected]
        link
        fedilink
        English
        3
        2 years ago

        Generative art is actually much easier to run than LLMs. You can get really high resolutions on SDXL (1024x1024) using only 8 GB of VRAM (although it’d be slow). There’s no way you can get anything but the smallest text generation models into that same amount of VRAM.
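
        Rough weights-only arithmetic on why that is, assuming approximate parameter counts (SDXL’s UNet is around 2.6B) and ignoring activations, the VAE/text encoders, and KV cache:

        ```python
        # Weight memory ~= parameters x bytes per parameter; treat as lower bounds.
        def weight_gib(params_billion: float, bits: int) -> float:
            return params_billion * 1e9 * (bits / 8) / 1024**3

        for name, size in [("SDXL UNet (~2.6B)", 2.6), ("7B LLM", 7.0),
                           ("13B LLM", 13.0), ("30B LLM", 30.0)]:
            print(f"{name:18s} fp16: {weight_gib(size, 16):5.1f} GiB"
                  f"   4-bit: {weight_gib(size, 4):5.1f} GiB")
        # The ~2.6B-parameter UNet is a few GiB even at fp16, while a 7B LLM is
        # already ~13 GiB at fp16, so an 8 GB card can run SDXL but only small
        # or heavily quantized text models.
        ```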

        • @[email protected]
          link
          fedilink
          English
          1
          edit-2
          2 years ago

          Oh wow, that’s good to know. I always assumed visual generation was way more intensive. I wouldn’t have thought a text generation model would take up that much VRAM.

          Edit: how many parameters did you test with?

          • @[email protected]
            link
            fedilink
            English
            1
            2 years ago

            Sorry, just seeing this now. I think with 24 GB of VRAM, the most you can get is a 4-bit quantized 30B model, and even then, I think you’d have to limit it to 2-3k of context. Here’s a chart for size comparisons: https://postimg.cc/4mxcM3kX

            By comparison, with 24 GB of VRAM, I only use half of that to create a batch of 8 768x576 images. I also sub to mage.space, and I’m pretty sure they’re able to handle all of their volume on an A100 and an A10G.
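
            For a sense of where that context limit comes from, here’s rough arithmetic assuming LLaMA-30B-ish dimensions (60 layers, hidden size 6656) and an fp16 KV cache; the exact numbers vary by model and runtime.

            ```python
            # 4-bit weights plus fp16 KV cache on a 24 GiB card, all sizes approximate.
            GIB = 1024**3
            weights = 30e9 * 0.5 / GIB            # ~0.5 bytes/param at 4-bit
            n_layers, hidden = 60, 6656           # assumed LLaMA-30B-like shape
            kv_per_token = 2 * n_layers * hidden * 2   # K and V, fp16 (2 bytes each)

            for ctx in (2048, 3072, 4096):
                kv = ctx * kv_per_token / GIB
                print(f"context {ctx}: {weights:.1f} + {kv:.1f} = {weights + kv:.1f} GiB")
            # ~14 GiB of weights plus 4-6 GiB of cache at 3-4k context, before any
            # activation or framework overhead, is roughly where 24 GiB tops out.
            ```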

  • @[email protected]
    link
    fedilink
    English
    26
    2 years ago

    Where’s all the “NoOoOoO this isn’t like crypto it’s gonna be different” people at now?

    • @[email protected]
      link
      fedilink
      English
      22
      edit-2
      2 years ago

      That’s an incredibly bad comparison. LLMs are already used daily by many people, saving them time in different aspects of their lives and work. Crypto, on the other hand, is still looking for its everyday use case.

      • @[email protected]
        link
        fedilink
        English
        3
        edit-2
        2 years ago

        Yeah, I assumed the general consensus was that the “altcoins” and the outright scams are the crypto “bubble”. Ethereum and the initial projects that created the foundational technologies (smart contracts, etc.) are still respected and, I’d say, have a use case, even if they’re not quite “production ready”. So for AI/ML, at least with LLMs, things like LLaMA, Stability’s models, GPT, and Anthropic’s Claude aren’t included in this bubble, since they aren’t necessarily built on top of each other but are separate implementations of a foundation. Anything a layer higher, though, maybe is.

    • R0cket_M00se
      link
      fedilink
      English
      13
      2 years ago

      We’re too busy automating our jobs.

      Really though, this was never like crypto/NFTs. AI is a toolset used to troubleshoot and amplify workloads. Tools survive no matter what, whereas crypto/NFTs died because they never had a use case.

      Just because a bunch of tech bros were throwing their wallets at a wall full of startups that’ll fail doesn’t mean AI as a concept will fail. That’s no different from saying that, because of the dot-com bubble, websites and the Internet were going to be a fad.

      Websites are a tool; just because everyone and their brother has one for no reason doesn’t mean actual use cases won’t appear (in fact they already exist, much like the websites that survived the internet bubble).

    • @[email protected]
      link
      fedilink
      English
      15
      2 years ago

      I can derive value from LLMs. I already have. There’s no value in crypto. And if you tell me there is, I won’t agree. It’s bullshit. So is this, but to a lesser degree.

      Mint some NFTs and tell me how that improves your life.

  • @[email protected]
    link
    fedilink
    4
    2 years ago

    Dot-com was a bubble because some companies just hosted a website and got massive funding, same as crypto, where some were just a random coin similar to ETH and still got massive funding. But AI can actually replace jobs, and it needs massive funding to train, so there isn’t much startup copycatting and the barrier to entry is high. There’s also little free money going around right now because of interest rates. I think this might just be different.

    • Ragnell
      link
      fedilink
      2
      2 years ago

      @DarkMatter_contract It’s not that the AI CAN replace jobs, it’s that they’re gonna use it to replace jobs anyway.

      The burst will come from those companies succeeding and quickly destroying a lot of their customers’ businesses in the process.

  • Altima NEO
    link
    fedilink
    English
    3
    2 years ago

    What happened to everyone freaking out about AI taking our jobs?

    • FaceDeer
      link
      fedilink
      6
      2 years ago

      Based on this article it seems they’ve moved to the “denial” phase.

  • HousePanther
    link
    fedilink
    English
    5
    2 years ago

    Well, then we are facing two bubbles at the same time: AI and cryptocurrencies. Once both of those bubbles burst, the fallout is going to make the dot-com era bubble look like small suds by comparison.

  • @[email protected]
    link
    fedilink
    English
    17
    2 years ago

    So, who are the up-and-comers? Not every company in the dot-com era died. Some grew very large and made a lot of people rich.

  • @[email protected]
    link
    fedilink
    English
    5
    2 years ago

    Whatever this iteration of “AI” turns out to be, it has limits that the VC bubble can’t overcome. That’s kind of the point, though: these VC firms, aided by low interest rates, can just fund whatever tech startup they think has a slight chance of being absorbed into a tech giant. Most of the AI companies right now are going to fail; as long as they fail as cheaply as possible, the VC firms basically skim off the shit that floats to the top.

  • @[email protected]
    link
    fedilink
    English
    17
    2 years ago

    I read an article once about how, when humans hear that someone has died, the first thing they try to do is come up with a reason that whatever befell the deceased would not happen to them. Some of the time there’s a logical reason, some of the time there isn’t, but either way the person latches onto that reason to believe they’re safe. I think we’re seeing the same thing here with AI. People are seeing a small percentage of people lose their jobs to a technology that 95% of the world or more didn’t believe was possible a couple of years ago, and they’re searching for reasons to believe that they’re going to be fine, and then latching onto them.

    I worked at a newspaper when the internet was growing, and I saw the same thing across the entire organization. So much of the staff believed the internet was a fad. That belief did not work out for them: they were a giant, and they were gone within 10 years. I’m not saying we aren’t in an AI bubble now, but there is now several orders of magnitude more money in the internet than there was during the dot-com bubble; just because it’s a bubble doesn’t mean it won’t eventually consume everything.

    • @[email protected]
      link
      fedilink
      English
      9
      2 years ago

      The thing is, after enough digging you understand that LLMs are nowhere near as smart or as advanced as most people make them out to be. Sure, they can be super useful and sure, they’re good enough to replace a bunch of human jobs, but rather than being the AI “once thought impossible” they’re just digital parrots that make a credible impersonation of it. The real AI, now renamed AGI, is still very far off.

      • @[email protected]
        link
        fedilink
        English
        4
        2 years ago

        I am not sure they have to reach AGI to replace almost everyone. The amount of investment in them is now higher than it has ever been. Things are moving quickly, and honestly have been for a while. No, they are not as advanced as some people make them out to be, but I also don’t think the next steps are as nebulously difficult as some want to believe. But I would love it if you saved this comment and came back in 5 years to laugh at me; I would probably be pretty relieved as well.

      • @[email protected]
        link
        fedilink
        English
        5
        2 years ago

        The real AI, now renamed AGI, is still very far off

        The idea and the name of AGI are not new, and “AI” has not been used to refer to AGI since perhaps the very earliest days of AI research, when no one knew how hard it actually was. I would argue that we are back in those times, though, since despite learning so much over the years we still have no idea how hard AGI is going to be. As of right now, the correct answer to “how far away is AGI?” can only be “I don’t know.”