• @[email protected]
    1
    24 days ago

    wrong. it’s that it’s not intelligent. if it’s not intelligent, nothing it says is of value. and it has no thoughts, feelings or intent. therefore it can’t be artistic. nothing it “makes” is of value either.

  • @[email protected]
    0
    24 days ago

    AI will become one of the most important technologies humankind has ever invented. Apply it to healthcare, science, finance, and the world will become a better place, especially in healthcare. Hey artists, writers: you cannot stop intellectual evolution. AI is here to stay. All we need is a proven way to differentiate real art from AI art. An invisible watermark that can be scanned to see its true “raison d’être”. Sorry for going off topic, but I agree that AI should be more open to verification for using copyrighted material. Don’t expect compensation though.

  • PostiveNoise
    1
    24 days ago

    Either the article editing was horrible, or Eno is wildly uninformed about the world. Creation of AIs is NOT the same as social media. You can’t blame a hammer for some evil person using it to hit someone in the head, and there is more to ‘hammers’ than just assaulting people.

    • @[email protected]
      3
      edit-2
      24 days ago

      Eno does strike me as the kind of person who could use AI effectively as a tool for making music. I don’t think he’s team “just generate music with a single prompt and dump it onto YouTube” (AI has ruined lo-fi study channels) - the stuff at the end about distortion is what he’s interested in experimenting with.

      There is a possibility for something interesting and cool there (I think about how Chuck Person’s Eccojams is just short loops of random songs repeated in different ways, but it’s an absolutely revolutionary album), even if in effect all that’s going to happen is music execs thinking they can replace songwriters and musicians with “hey Siri, generate a pop song with a catchy chorus” while talentless hacks inundate YouTube and Bandcamp with shit.

      • PostiveNoise
        1
        24 days ago

        Yeah, Eno has actually made a variety of albums and art installations using simple generative AI for musical decisions, although I don’t think he does any advanced programming himself. That’s why it’s really odd to see comments in an article that imply he is really uninformed about AI…he was pioneering generative music 20-30 years ago.

        I’ve come to realize that there is a huge amount of misinformation about AI these days, and the issue is compounded by there being lots of clumsy, bad early AI works in various art fields, web journalism etc. I’m trying to cut back on discussing AI for these reasons, although as an AI enthusiast, it’s hard to keep quiet about it sometimes.

        • @[email protected]
          2
          24 days ago

          Eno is more a traditional algorist than “AI” (by which people generally mean neural networks)

          • PostiveNoise
            1
            24 days ago

            Sure. I worked in the game industry, and sometimes AI can mean ‘pick a random number if X occurs’ or something equally simple, so I’m just used to the term being used a few different ways.
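That kind of dead-simple game “AI” fits in a few lines; a toy sketch (all the names here are made up for illustration, not from any real engine):

```python
import random

def enemy_action(player_visible: bool, aggression: float, rng: random.Random) -> str:
    """Trivial game 'AI': when the player is visible, roll a random number
    against an aggression threshold to decide what to do."""
    if player_visible and rng.random() < aggression:
        return "attack"
    return "patrol"

rng = random.Random(42)  # seeded so the behaviour is reproducible
print([enemy_action(True, 0.7, rng) for _ in range(5)])
```

Plenty of shipped games call exactly this sort of thing “the AI”.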

          • @[email protected]
            3
            23 days ago

            I could see him using neural networks to generate and intentionally pick and loop short bits with weird anomalies or glitchy sounds. That’s the route I’d like AI in music to go, so maybe that’s what I’m reading in, but it fits Eno’s vibe and philosophy.

            AI as a tool not to replace other forms of music, but doing things like training it on contrasting music genres or self made bits or otherwise creatively breaking and reconstructing the artwork.

            John Cage was all about ‘stochastic’ music - composing based on what he divined from the I Ching. There are people who have been kicking around ideas like this for longer than the AI bubble has been around - the big problem will be digging out the good stuff when the people typing “generate a three hour vaporwave playlist” can upload ten videos a day…
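Chance operations of the Cage variety are easy to sketch in code; a toy example where the “composer” only fixes the frame and the dice pick the notes (the pitch set and seed are arbitrary choices for illustration):

```python
import random

PITCHES = ["C", "D", "E", "G", "A"]  # arbitrary pentatonic pitch set
DURATIONS = [0.25, 0.5, 1.0]         # note lengths in beats

def chance_phrase(n_notes: int, seed: int) -> list[tuple[str, float]]:
    """Compose a phrase by pure chance: every pitch and duration is drawn
    at random; the composer only chooses the material and the length."""
    rng = random.Random(seed)
    return [(rng.choice(PITCHES), rng.choice(DURATIONS)) for _ in range(n_notes)]

print(chance_phrase(8, seed=4))
```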

  • @[email protected]
    0
    24 days ago

    Ollama and Stable Diffusion are free, open-source software. Nobody is forcing anybody to use ChatGPT.

    • @[email protected]
      0
      24 days ago

      Ollama is FOSS; SD has a proprietary but permissive, source-available license, which is not what most people would associate with “open source”.

      • @[email protected]
        1
        23 days ago

        Fair, it may not be strictly FOSS, but I think my point still stands. If people are worried about AI being owned by “the elite”, they can just run Ollama.
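For context on how simple that is: a locally running Ollama instance serves a small HTTP API on localhost:11434. A sketch of the JSON body its /api/generate endpoint expects (nothing is actually sent here, and the model name is just an example):

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> str:
    """Build the JSON body for Ollama's /api/generate endpoint.
    The model runs entirely on your own machine; nothing leaves it."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

body = build_request("llama3.2", "Why is the sky blue?")
print(body)
```

With Ollama installed, you could POST that body to OLLAMA_URL with curl or any HTTP client.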

  • @[email protected]
    5
    24 days ago

    The problem with AI is that it pirates everyone’s work, repackages it as its own, and enriches people who did not create the copyrighted work.

      • @[email protected]
        3
        edit-2
        23 days ago

        More broadly, I would expect UBI to trigger a golden age of invention and artistic creation. A lot of people would love to spend their time just creating new stuff without the need to monetise it, but can’t under the current system. Even if a lot of that output would be shit or crazily niche, the more people doing it, and the freer they are to do it, the more really special and amazing stuff will be created.

        • @[email protected]
          2
          23 days ago

          I don’t know nearly enough history to be an expert on this subject, but I’ve heard that one of the causes of the Enlightenment was that peasants and the poor could afford to spend time learning and creating, rather than subsistence farming.

  • @[email protected]
    0
    24 days ago

    No, Brian Eno, there are many open LLMs already. The problem is people like you who have accumulated too much and now control all the markets/platforms/media.

    • PostiveNoise
      0
      24 days ago

      Totally right that there are already very impressive open source AI projects.

      But Eno doesn’t control diddly, and it’s odd that you think he does. And I assume he is decently well off, but I doubt he is super rich by most people’s standards.

  • Wren
    4
    24 days ago

    The biggest problem with AI is the damage it’s doing to human culture.

    • @[email protected]
      1
      23 days ago

      While not solving any of the stated goals at the same time.

      It’s a diversion. Its purpose is to divert resources and attention from any real progress in computing.

  • @[email protected]
    0
    24 days ago

    Reading the other comments, it seems there is more than one problem with AI. Probably even some perks as well.

    Shucks, another one of these complex issues, huh. Weird how everything you learn something about turns out to have these nuances.

    • @[email protected]
      1
      24 days ago

      most of the replies can be summarized as “the biggest problem with AI is that we live under capitalism”

  • @[email protected]
    4
    24 days ago

    AI has a vibrant open source scene and is definitely not owned by a few people.

    A lot of the data to train it is only owned by a few people though. It is record companies and publishing houses winning their lawsuits that will lead to dystopia. It’s a shame to see so many actually cheering them on.

    • @[email protected]
      3
      24 days ago

      So long as there are big players releasing open weights models, which is true for the foreseeable future, I don’t think this is a big problem. Once those weights are released, they’re free forever, and anyone can fine-tune based on them, or use them to bootstrap new models by distillation or synthetic RL data generation.
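Distillation, for anyone unfamiliar, boils down to training the small model to match the big model’s output distribution rather than hard labels. A bare-bones sketch of the loss with toy logits (numpy only; real training would run this over a dataset with backprop):

```python
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0) -> float:
    """KL divergence between the teacher's softened distribution and the
    student's: minimising it pushes the student to mimic the teacher."""
    p = softmax(np.asarray(teacher_logits, dtype=float), temperature)
    q = softmax(np.asarray(student_logits, dtype=float), temperature)
    return float(np.sum(p * np.log(p / q)))

print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # matching student: loss 0
print(distillation_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0]))  # diverging student: loss > 0
```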

  • @[email protected]
    11
    24 days ago

    And those people want to use AI to extract money and to lay off people in order to make more money.

    That’s “guns don’t kill people” logic.

    Yeah, AI absolutely is a problem, for those reasons, along with it being wrong a lot of the time and its ridiculous energy consumption.

    • @[email protected]
      12
      24 days ago

      The real issues are capitalism and the lack of green energy.

      If the arts were well funded, if people were given healthcare and UBI, if we had, at the very least, switched to nuclear like we should’ve decades ago, we wouldn’t be here.

      The issue isn’t a piece of software.

    • gian
      1
      23 days ago

      Yeah, the AI absolutely is a problem.

      AI is not a problem by itself; the problem is that most of the people who make workplace decisions about these things do not understand what they are talking about, and even less what something is capable of.

      My impression is that AI now is what blockchain was some years ago: the solution to every problem, which was of course false.

  • @[email protected]
    5
    24 days ago

    No?

    Anyone can run an AI, even on the weakest hardware; there are plenty of small open models for this.

    Training an AI requires very strong hardware; however, this is not an insurmountable hurdle, as the models on Hugging Face show.
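To put rough numbers on that gap: a common rule of thumb is ~16 bytes per parameter to train with mixed-precision Adam (fp16 weights and gradients plus fp32 master weights and optimizer moments, activations not included), versus ~2 bytes per parameter just to run fp16 inference. A quick back-of-the-envelope:

```python
def training_gib(params_billion: float, bytes_per_param: float = 16) -> float:
    """Rough weight-related memory to train with mixed-precision Adam:
    2 (fp16 weights) + 2 (fp16 grads) + 12 (fp32 master copy + moments).
    Activations add a large, workload-dependent amount on top."""
    return params_billion * 1e9 * bytes_per_param / 2**30

def inference_gib(params_billion: float, bytes_per_param: float = 2) -> float:
    """fp16 inference needs roughly 2 bytes per parameter for the weights."""
    return params_billion * 1e9 * bytes_per_param / 2**30

for size in (7, 70):
    print(f"{size}B params: ~{training_gib(size):.0f} GiB to train, "
          f"~{inference_gib(size):.0f} GiB to run")
```

Which is why a 7B model is a single-GPU inference job but a multi-GPU training job.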

    • @[email protected]
      4
      24 days ago

      But the people with the money for the hardware are the ones training it to put more money in their pockets. That’s mostly what it’s being trained to do: make rich people richer.

      • @[email protected]
        1
        24 days ago

        But you can make this argument for anything that is used to make rich people richer. Even something as basic as pen and paper is used everyday to make rich people richer.

        Why attack the technology if it’s the rich people you are against, and not the technology itself?

        • @[email protected]
          1
          23 days ago

          It’s not even the people; it’s their actions. If we could figure out how to regulate its use so its profit-generation capacity doesn’t build on itself exponentially at the expense of the fair treatment of others and instead actively proliferate the models that help people, I’m all for it, for the record.

      • Riskable
        0
        24 days ago

        This completely ignores all the endless (open) academic work going on in the AI space. Loads of universities have AI data centers now and are doing great research that is being published out in the open for anyone to use and duplicate.

        I’ve downloaded several academic models and all commercial models and AI tools are based on all that public research.

        I run AI models locally on my PC and you can too.

        • @[email protected]
          2
          23 days ago

          That is entirely true and one of my favorite things about it. I just wish there was a way to nurture more of that and less of the, “Hi, I’m Alvin and my job is to make your Fortune-500 company even more profitable…the key is to pay people less!” type of AI.

    • @[email protected]
      3
      24 days ago

      Yah, I’m an AI researcher, and with the weights released for DeepSeek anybody can run an enterprise-level AI assistant. Running the full model natively does require about $100k in GPUs, but with that hardware it could easily be fine-tuned with something like LoRA for almost any application. That model can then be distilled and quantized to run on gaming GPUs.

      It’s really not that big of a barrier. Yes, $100k in hardware is a lot, but from a non-profit entity’s perspective that is peanuts.
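For anyone unfamiliar, the reason LoRA is so cheap is that the fine-tuning update is constrained to be low-rank: the frozen weight W is left alone and only two thin matrices A and B are trained, with the adapted layer computing Wx + (alpha/r)·BAx. A shape-level numpy sketch (the sizes are arbitrary, not any particular model’s):

```python
import numpy as np

d, k, r = 4096, 4096, 8            # layer dims and LoRA rank (arbitrary here)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))    # frozen pretrained weight, never updated
A = rng.standard_normal((r, k)) * 0.01  # trainable down-projection
B = np.zeros((d, r))               # trainable up-projection, zero-initialised so
                                   # the adapted model starts identical to the base

def adapted_forward(x: np.ndarray, alpha: float = 16.0) -> np.ndarray:
    """Forward pass with the low-rank update folded in: W x + (alpha/r) B A x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size
lora_params = A.size + B.size
print(f"trainable: {lora_params:,} vs {full_params:,} "
      f"({lora_params / full_params:.2%} of full fine-tuning)")
```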

      Also, adding a vision encoder for images to DeepSeek would not be theoretically that difficult, for the same reason. In fact, I’m working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying it’s the same first-layer vision encoder, with the textual chain-of-thought tokens read by subsequent layers. (This is a very recent insight as of last week by my team, so if anyone can disprove it, I would be very interested to know!)

      • Riskable
        0
        24 days ago

        Would you say your research is evidence that the o1 model was built using data/algorithms taken from OpenAI via industrial espionage (as Sam Altman is alleging without evidence)? Or is it just likely that they came upon the same logical solution?

        Not that it matters, of course! Just curious.

        • @[email protected]
          1
          edit-2
          24 days ago

          Well, OpenAI has clearly scraped everything that is scrapable on the internet, copyrights be damned. I haven’t actually used DeepSeek very much to make a strong analysis, but I suspect Sam is just mad they got beat at their own game.

          The real innovation that isn’t commonly talked about is the invention of Multi-head Latent Attention (MLA), which is what drives the dramatic performance increases in both memory (59x) and computation (6x) efficiency. It’s an absolute game changer, and I’m surprised OpenAI hasn’t released their own MLA model yet.
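A shape-only sketch of the idea behind MLA, in case it’s unfamiliar: instead of caching full per-head keys and values, you cache one small latent per token and reconstruct K and V from it at attention time. The dimensions below are illustrative, not DeepSeek’s actual config, and details like the decoupled RoPE dimensions are ignored:

```python
import numpy as np

d_model, n_heads, d_head = 4096, 32, 128   # illustrative transformer sizes
d_latent = 512                             # compressed KV latent dimension

rng = np.random.default_rng(0)
W_down = rng.standard_normal((d_latent, d_model)) * 0.02           # h -> latent
W_up_k = rng.standard_normal((n_heads * d_head, d_latent)) * 0.02  # latent -> K
W_up_v = rng.standard_normal((n_heads * d_head, d_latent)) * 0.02  # latent -> V

h = rng.standard_normal(d_model)    # hidden state for one token
c = W_down @ h                      # only this latent goes into the KV cache
k = (W_up_k @ c).reshape(n_heads, d_head)  # keys reconstructed on the fly
v = (W_up_v @ c).reshape(n_heads, d_head)

mha_cache = 2 * n_heads * d_head    # floats/token standard MHA caches (K and V)
mla_cache = d_latent                # floats/token MLA caches
print(f"KV cache per token: {mha_cache} floats vs {mla_cache} "
      f"({mha_cache // mla_cache}x smaller)")
```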

          While on the subject of stealing data, I have long been of the strong opinion that there is no such thing as copyright when it comes to training data. Humans learn by example, and all works are derivative of those that came before, at least to some degree. Thus, if humans can’t be accused of using copyrighted text to learn how to write, then AI shouldn’t be either. Just my hot take that I know is controversial outside of academic circles.

      • @[email protected]
        2
        edit-2
        24 days ago

        It’s possible to run the big DeepSeek model locally for around $15k, not $100k. People have done it with 2x M4 Ultras, or the equivalent.

        Though I don’t think it’s a good use of money personally, because the requirements are dropping all the time. We’re starting to see some very promising small models that use a fraction of those resources.