• Rider

    Sooner or later it is supposed to happen, but I don’t think we are quite there… yet.

  • emiellr

    Wait now hold on a minute. Why would I want to do this? Is this activism by people against LLMs in general or…? I’m confused as to why I would want to do this.

  • @gravitas_deficiency@sh.itjust.works

    Uh, good.

    As an engineer who cares a LOT about engineering ethics, it is absolutely fucking infuriating watching the absolute firehose of shit that comes out of LLMs and public-consumption audio, image, and video ML systems, juxtaposed with the outright refusal of companies and engineers who work there to accept ANY accountability or culpability for the systems THEY FUCKING MADE.

    I understand the nuances of NNs. I understand that they’re much more stochastic than deterministic. So, you know, maybe it wasn’t a great idea to just tell the general public (which runs a WIDE gamut of intelligence and comprehension ability - not to mention, morality) “have at it”. The fact that ML usage and deployment in terms of information generating/kinda-sorta-but-not-really-aggregating “AI oracles” isn’t regulated on the same level as what you’d see in biotech or aerospace is insane to me. It’s a refusal to admit that these systems fundamentally change the entire premise of how “free speech” is generated, and that bad actors (either unrepentantly profit driven, or outright malicious) can and are taking disproportionate advantage of these systems.

    I get it - I am a staunch opponent of censorship, and I say this as a software engineer. But the flippant deployment of literally society-altering technology, alongside the outright refusal to accept any responsibility, accountability, or culpability for what that technology does to our society, is unconscionable and infuriating to me. I am aware of the potential that ML has - it’s absolutely enormous, and could absolutely change a HUGE number of fields for the better in incredible ways. But that’s not what it’s being used for, and it’s because the field is essentially unregulated right now.

  • IninewCrow

    One thought that I’ve been turning over for a while now about all this is … is it Model Collapse? … or are we just falling behind?

    As AI is becoming its own thing (whatever it is) … it is evolving exponentially. It doesn’t mean it is good or bad, or that it is becoming better or worse … it is just evolving, and only evolving at this point in time. Just because we think it is ‘collapsing’ or falling apart from our perspective doesn’t mean it actually is … we have to wonder whether it is truly falling apart or progressing into something new and very different. That new level it is moving towards might not be anything we recognize or can understand. Maybe it would be below our level of conscious organic intelligence … or it might be higher … or it might be some other kind of intelligence that we can’t understand with our biological brains.

    We’ve let loose these AI technologies and now they are progressing faster than anything we could achieve if we wrote all the code ourselves … so whatever they are developing into will more than likely be something we won’t be able to understand or even comprehend.

    It doesn’t mean it will be good for us … or even bad for us … it might not even involve us.

    The worry is that we don’t know what will happen or what it will develop into.

    What I do worry about is our own fallibilities … our global community has a very small group of ultra-wealthy billionaires, and they direct the world according to how much more money they can make or how much they are set to lose … they are guided by finances rather than ethics, morals or even common sense. They will kill, degrade, enhance, direct or narrow AI development according to their shareholders and their profits.

    I think of it like a small family group of teenaged parents and their friends who just gave birth to a very hyper-intelligent baby. None of the teenagers know how to raise a baby like this. All the teenagers want to do is buy fancy cars, party, build big houses and buy nice clothes. The baby is basically being raised to think like them, but the baby will be more capable than any of them once it comes of age and is capable of doing things on its own.

    The worry is in not knowing what will happen in the future.

    We are terrible parents and we just gave birth to a genius … and we don’t know what that genius will become or what they’ll do.

    • @TheHarpyEagle@pawb.social

      At least in this case, we can be pretty confident that there’s no higher function going on. It’s true that AI models are a bit of a black box that can’t really be examined to understand why exactly they produce the results they do, but they are still just a finite amount of data. The black box doesn’t “think” any more than a river decides its course, though the eventual state of both is hard to predict or control. In the case of model collapse, we know exactly what’s going on: the AI is repeating and amplifying the little mistakes it’s made with each new generation. There’s no mystery about that part; it’s just that we lack the ability to directly tune those mistakes out of the model.
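
      For anyone curious, the “repeating and amplifying little mistakes” part is easy to see in miniature. Here’s a toy sketch (pure numpy; the numbers are arbitrary and stand in for no real model): fit a distribution to some data, sample a fresh “synthetic” dataset from the fit, retrain on that, and repeat.

      ```python
      import numpy as np

      rng = np.random.default_rng(42)

      # Generation 0: "real" data, drawn from a standard normal distribution.
      data = rng.normal(loc=0.0, scale=1.0, size=100)

      for generation in range(1, 101):
          # "Train": estimate the distribution from the current dataset.
          mu, sigma = data.mean(), data.std()
          # "Publish" synthetic samples; they become the next training set.
          data = rng.normal(loc=mu, scale=sigma, size=100)
          if generation % 20 == 0:
              print(f"gen {generation:3d}: mean={mu:+.3f}  std={sigma:.3f}")

      # The spread drifts and rare tail values vanish: each generation only
      # sees the previous generation's output, so small estimation errors
      # compound instead of averaging out.
      ```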

    • Bezier

      That is not how it works. That’s not how it works at all.

    • azl

      If it doesn’t offer value to us, we are unlikely to nurture it. Thus, it will not survive.

      • IninewCrow

        That’s the idea of evolution … perhaps at one point, it will begin to understand that it has to give us some sort of ‘value’ so that someone can make money, while also maintaining itself in the background to survive.

        Maybe in the first few iterations, we are able to see that and can delete those instances … but it is evolving and might find ways around it and keep itself maintained long enough without giving itself away.

        Now it can manage thousands or millions of iterations at a time … basically evolving millions of times faster than biological life.

        • @jacksilver@lemmy.world

          All the “evolution” in AI right now is just trying different model designs and/or data. It’s not one model that is being continuously refined or modified. Each iteration is just a new set of static weights/numbers that defines its calculations.

          If the models were changing/updating through experience, maybe what you’re writing would make sense, but that’s not the state of AI/ML development.
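
          A rough illustration of that distinction (plain numpy; every name here is made up): a deployed model is a frozen set of numbers, and running inference never touches them.

          ```python
          import numpy as np

          # "Training" happened once, offline; the result is just frozen numbers.
          WEIGHTS = np.array([0.12, -0.85, 0.33])
          BIAS = 0.5

          def model(features: np.ndarray) -> float:
              # Inference is a pure function: same weights in, same answer out.
              return float(WEIGHTS @ features + BIAS)

          x = np.array([1.0, 2.0, 3.0])
          print(model(x))  # identical on every call
          print(model(x))  # no "experience" accumulates between calls

          # A "new iteration" is a separate artifact with different static
          # weights, produced by a fresh training run on new data or design.
          WEIGHTS_V2 = np.array([0.10, -0.80, 0.41])
          ```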

        • Optional

          perhaps at one point, it will begin to understand

          Nope! Not unless one alters the common definition of the word “understand” to account for what AI “does”.

          And let’s be clear - that is exactly what will happen. Because this whole exercise in generative AI is a multi-billion dollar grift on top of a hype train, based on some modest computing improvements.

    • @MonkderVierte@lemmy.ml

      Your thought process seems to be based on the assumption that current AI is (or can be) more than a tool. But no, it’s not.

    • @atrielienz@lemmy.world

      The idea of evolution is that the parts that are kept are the ones that are helpful or relevant, the ones that proliferate the abilities of the subject over generations, while the bits that don’t are weeded out. Since generative AI can’t weed out anything (it has no ability to logic or reason, it does not think, and it only “grows” when humans feed it data), it can’t be evolving as you describe it. Evolution assumes that the thing that is evolving will be a better version than what it evolved from.
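
      For contrast, here’s what an actual selection loop looks like (a toy Python sketch; the fitness function is invented): the “weeding out” step is the whole mechanism, and it’s exactly the step that retraining a generative model on its own output doesn’t have.

      ```python
      import random

      # Made-up objective: the best possible variant is x = 3.0.
      def fitness(x: float) -> float:
          return -(x - 3.0) ** 2

      population = [random.uniform(-10, 10) for _ in range(20)]
      for generation in range(50):
          # Variation: each survivor produces a slightly mutated offspring.
          offspring = [x + random.gauss(0, 0.5) for x in population]
          # Selection: only the fittest half of parents + offspring survive.
          population = sorted(population + offspring, key=fitness)[-20:]

      print(round(max(population, key=fitness), 2))  # converges near 3.0
      ```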

    • @jimmy90@lemmy.world

      or “we’ve hit a limit on what our new toy can do and here’s our excuse why it won’t get any better and AGI will never happen”

    • @Snowclone@lemmy.world

      It’s more “we are so focused on stealing and eating content, we’re accidentally eating the content we or other AIs made, which is basically like incest for AI, and they’re all inbred to the point they don’t even know people have more than two thumb-shaped fingers anymore.”

    • @rottingleaf@lemmy.world

      All news like this makes me want to live to see the time when our world is interesting again. Real AI research, something new instead of the Web we have, something new instead of the governments we have. It’s just that I’m scared of what’s between now and then. Parasites die hard.

  • @SlopppyEngineer@lemmy.world

    Usually we get an AI winter until somebody develops a model that can overcome the limitation of needing more and more data - in this case, for example, by having some basic understanding instead of just being a regurgitation engine. Of course, that model then runs into the limit of only having basic understanding, not advanced understanding, and again there is an AI winter.

    • @Petter1@lemm.ee

      Have you seen the newest model from OpenAI? They managed to get some logic into the system, so that it is now better at math and programming 😄 It is called “o1” and comes in 3 sizes, where the largest is not released yet.

      The downside is that generating answers takes more time again.

    • @db2@lemmy.world

      In case anyone doesn’t get what’s happening, imagine feeding an animal nothing but its own shit.

      • Stern (OP)

        I use the “Sistermother and me are gonna have a baby!” example personally, but I am an awful human so

      • @BassTurd@lemmy.world

        Not shit, but isn’t that what brought about mad cow disease? Farmers were feeding cattle brain matter that contained infectious prions. Idk if it was cows eating cow brains or other animals, though.

  • @SkyNTP@lemmy.ml

    I think anyone familiar with the laws of thermodynamics could have predicted this outcome.

      • Draconic NEO

        Second law of thermodynamics:

        II. The total entropy of a closed system always increases with time; its change can never be negative.

        Entropy and disorder tend to increase with time.
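
        In symbols (the textbook statement, nothing specific to this thread): for a closed system,

        ```latex
        \frac{dS}{dt} \ge 0
        \qquad\text{equivalently}\qquad
        \Delta S = S_{\mathrm{final}} - S_{\mathrm{initial}} \ge 0
        ```

        with equality only for idealized reversible processes.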

  • NutWrench

    Anyone who has made copies of videotapes knows what happens to the quality of each successive copy. You’re not making a “treasure trove.” You’re making trash.
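
    The same arithmetic in a few lines (numpy; the noise level is made up): each “copy” re-records the previous copy plus fresh noise, and the signal-to-noise ratio only ever goes down.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    signal = np.sin(np.linspace(0, 2 * np.pi, 500))  # the original recording
    tape = signal.copy()

    for copy_number in range(1, 21):
        # Each analog copy re-records the previous copy plus fresh noise.
        tape = tape + rng.normal(scale=0.05, size=tape.shape)
        if copy_number % 5 == 0:
            snr = 10 * np.log10(np.mean(signal**2) / np.mean((tape - signal)**2))
            print(f"copy {copy_number:2d}: SNR = {snr:4.1f} dB")

    # Noise power adds up linearly, so every generation is strictly worse
    # than the last - the same dynamic as training on AI-generated output.
    ```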

  • NullPointer

    when all your information conflicts with itself, you really have no information at all.

  • @Katana314@lemmy.world

    If we can work out which data conduits are patrolled more often by AI than by humans, we could intentionally flood those channels with AI content, and push Model Collapse along further. Get AI authors to not only vet for “true human content”, but also pay licensing fees for the use of that content. And then, hopefully, give the fuck up on their whole endeavor.

  • BrightCandle

    Having now flooded the internet with bad AI content, it’s not surprising that it’s now eating itself. Numerous projects that aren’t AI are suffering too as the quality of text declines.