• @[email protected]

    There are like a billion hours of YouTube videos out there, plus the entire Library of Congress. I am not seeing the issue.

  • @[email protected]

    Most people here don’t understand what this is saying.

    We used to have “pure” human-generated data, verifiably so, since LLMs and image generators didn’t exist yet. Any bot-generated data was easy to filter out due to its lack of sophistication.

    ChatGPT and SD3 enter the chat, generating data nearly indistinguishable from human output, but with a few errors here and there. These errors, while few, are spectacular and bear no relation to the training data.

    2 years later, the internet is saturated with generated content. The old datasets are like gold now, since none of the new data is verifiably human.

    This matters once you’ve played with local machine learning and understand how these machines “think”. If you feed AI-generated output to an AI as training data, it learns the mistakes along with the data. Each generation, mutations accumulate until eventually it just produces garbage.

    Models trained on generated sets slowly but surely degrade without a human touch. Now scale this concept to the whole net. When 50% of your dataset is machine-generated, the model trained on it begins to deteriorate proportionally. Do this long enough and that 50% becomes 60, 70, and beyond.

    Human creativity and thought have yet to be replicated. These models have no human capacity for discernment, and no sleep to recover from errors. They simply learn imperfectly and generate new, less perfect data in a digestible form.
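This recursive degradation is easy to see in miniature. Below is a toy sketch of the idea (my own illustration, not from any paper in the thread): the “model” is just a Gaussian fit, and each generation is trained only on samples drawn from the previous generation’s fit.

```python
# Toy model-collapse sketch (illustrative only): the "model" is a Gaussian
# fit (mean, std), and each generation trains solely on samples drawn from
# the previous generation's model. Watch the fitted variance drift away.
import random
import statistics

def collapse_demo(generations=1000, n=20, seed=42):
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: "human" data
    stds = []
    for _ in range(generations):
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)  # "training" = fitting mean and std
        stds.append(sigma)
        # the next generation's training set is purely model output
        data = [rng.gauss(mu, sigma) for _ in range(n)]
    return stds

stds = collapse_demo()
print(f"std at gen 0: {stds[0]:.3f}, at gen {len(stds) - 1}: {stds[-1]:.3g}")
```

With small re-fits like this, the sampled variance shrinks on average every generation, so the fitted distribution gradually loses its tails. The same qualitative effect is what the comment above describes for far larger models.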

  • Uranium3006

    Now that the low-hanging fruit of internet scraping is exhausted, we’re going to have to start purpose-building datasets. This will be expensive and might be the new bottleneck on AI progress.

  • spawnsalot

    It would be hilarious if we entered the deep-fried Marquaud era of AI, where responses degenerate into rehashed outputs that get progressively more jumbled and unintelligible as the models cannibalise each other’s generated content.

  • @[email protected]

    For a rough analogy, imagine a parrot taught by another parrot, which was in turn taught by another parrot, which was taught by a human.

    Sure, some things might survive as somewhat understandable, vaguely human-sounding sentences, but overall it’s still going to be pretty bad a few parrots down the chain.

    • @[email protected]

      The Internet is fucked now, the only valuable untainted training data is the Internet as it existed prior to this AI bullshit coming online. Confirmed human content is going to be super valuable, so expect our privacy to be fucked as well…

        • IninewCrow

    Even that is going to turn into a shit show… It will become a copy of a copy of a copy of a backup of a backup of a copy, and all of it will get rendered down to some common basics based on whatever the hell was marketed and promoted by bots.

    • @[email protected]

    The collapse won’t stop AI output from spamming the internet, though. It will just make that output worse and more likely to be incorrect.

    • @[email protected]

      It’s not going to. It’s just going to get more widespread and harder to detect. The incentives favor developing better and better AI. Luckily one of the solutions to this issue is - wait for it - AI. With a good enough AI, especially a generally intelligent one you don’t need search engines anymore. You just ask and it gives you the answer. If you think AI couldn’t do this reliably then that is not the AI I’m talking about.

    • @[email protected]

      Oh goody. I’ve been wanting to use this since my slashdot days… today is my first chance!

      Your post advocates a
      
      [x] technical
      [ ] legislative
      [ ] market-based
      [ ] vigilante
      
      approach to fighting (ML-generated) spam. Your idea will not work. Here is why
      it won't work. [One or more of the following may apply to your particular idea,
      and it may have other flaws which used to vary from state to state before a bad
      federal law was passed.]
      
      [ ] Spammers can easily use it to harvest email addresses
      [ ] Mailing lists and other legitimate email uses would be affected
      [ ] No one will be able to find the guy or collect the money
      [ ] It is defenseless against brute force attacks
      [ ] It will stop spam for two weeks and then we'll be stuck with it
      [ ] Users of email will not put up with it
      [x] Microsoft will not put up with it
      [ ] The police will not put up with it
      [x] Requires too much cooperation from spammers
      [x] Requires immediate total cooperation from everybody at once
      [ ] Many email users cannot afford to lose business or alienate potential employers
      [ ] Spammers don't care about invalid addresses in their lists
      [ ] Anyone could anonymously destroy anyone else's career or business
      
      Specifically, your plan fails to account for
      
      [ ] Laws expressly prohibiting it
      [x] Lack of centrally controlling authority for email^W ML algorithms
      [ ] Open relays in foreign countries
      [ ] Ease of searching tiny alphanumeric address space of all email addresses
      [x] Asshats
      [ ] Jurisdictional problems
      [ ] Unpopularity of weird new taxes
      [ ] Public reluctance to accept weird new forms of money
      [ ] Huge existing software investment in SMTP
      [ ] Susceptibility of protocols other than SMTP to attack
      [ ] Willingness of users to install OS patches received by email
      [ ] Armies of worm riddled broadband-connected Windows boxes
      [x] Eternal arms race involved in all filtering approaches
      [x] Extreme profitability of spam
      [ ] Joe jobs and/or identity theft
      [ ] Technically illiterate politicians
      [ ] Extreme stupidity on the part of people who do business with spammers
      [x] Dishonesty on the part of spammers themselves
      [ ] Bandwidth costs that are unaffected by client filtering
      [x] Outlook
      
      and the following philosophical objections may also apply:
      
      [x] Ideas similar to yours are easy to come up with, yet none have ever
      been shown practical
      [ ] Any scheme based on opt-out is unacceptable
      [ ] SMTP headers should not be the subject of legislation
      [ ] Blacklists suck
      [ ] Whitelists suck
      [ ] We should be able to talk about Viagra without being censored
      [ ] Countermeasures should not involve wire fraud or credit card fraud
      [ ] Countermeasures should not involve sabotage of public networks
      [ ] Countermeasures must work if phased in gradually
      [ ] Sending email should be free
      [x] Why should we have to trust you and your servers?
      [ ] Incompatiblity with open source or open source licenses
      [x] Feel-good measures do nothing to solve the problem
      [ ] Temporary/one-time email addresses are cumbersome
      [ ] I don't want the government reading my email
      [ ] Killing them that way is not slow and painful enough
      
      Furthermore, this is what I think about you:
      
      [x] Sorry dude, but I don't think it would work.
      [ ] This is a stupid idea, and you're a stupid person for suggesting it.
      [ ] Nice try, assh0le! I'm going to find out where you live and burn your
      house down!
      
      • @[email protected]

        It’s important to understand that a language modelling AI can only produce responses based on its inputs.

    • _haha_oh_wow_

      Sorry, best we can do is a race to the bottom fueled by greed and incompetence.

      • TheEntity

        I’m sure we can compromise on a mandatory database of registered AI-generated content that only the corporations can read from but everyone using AI-generated content is required by law to write to, with hefty fines (but only for regular people).

      • SkaveRat

        Low-background steel was (and is) valuable because it was made before nuclear testing; the bombs contaminated all steel produced afterwards.

        In the same sense, anything from before the creation of LLMs would be considered “low-background” content, as that’s the only content we can be sure was made without LLMs in the loop.

  • Dojan

    I mean, it makes sense. Machine learning is fantastic at noticing patterns, and the stuff they generate definitely does have patterns. We might not notice them, but the models will pick up on them, and if you keep training them on that data, they’ll skew more and more in that direction.

    They’ve been marketing things like there isn’t a limit to how good these things can get, but there is. Nothing is infinite.

    • circuitfarmer

      I’ve tried to make this point several times to folks in the industry. I work in AI, and yet every time I approach people with “you know it ultimately just repeats patterns”, I’m met with scoffs and told I’m just not “seeing the big picture”.

      But I am, and the truth is that there are limits. This tech is not the digital singularity the marketers and business goons want everyone to think it is.

      • @[email protected]

        It repeats things that sort of sound intelligent to try and convince everyone that actual intelligent thought is taking place? It really is just like humans!

        • @[email protected]

          Tell me about it. All the government contractors I work with. Just repeating the same submittal over and over and over again.

        • Dojan

          They don’t really parrot unless they’re overfitted.

          It’s more that they have been trained to produce a certain kind of result. One way to train them is to assign a score to how good the output is. Doing this manually takes a lot of time (Google has been doing it for years via captcha), or you can train other models to score text for you.

          The obvious problem with the latter solution is that you then need to ensure that the scoring model scores roughly in line with how humans would; the technical term for this is alignment. There’s a pretty funny story about that with GPT-2, presented in a really cute animation format by Robert Miles.
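          The “assign a score to the output” idea can be sketched in a few lines. Everything here is a stand-in: real systems learn the scorer from human preference data, whereas this hypothetical `reward` is just a hard-coded heuristic.

```python
# Best-of-n selection with a scoring function (toy sketch; `reward` is a
# hypothetical stand-in for a learned preference model, not a real API).
def reward(text: str) -> float:
    words = text.split()
    if not words:
        return 0.0
    uniqueness = len(set(words)) / len(words)      # penalise repetition
    brevity = 1.0 / (1.0 + abs(len(words) - 10))   # prefer ~10-word answers
    return uniqueness + brevity

def best_of_n(candidates: list[str]) -> str:
    # keep whichever candidate the scorer likes best
    return max(candidates, key=reward)

candidates = [
    "the the the the the the",
    "a short clear answer with ten distinct words right here",
    "word " * 50,
]
print(best_of_n(candidates))  # the concise, non-repetitive candidate wins
```

          A scorer this simple is trivially gameable, which is exactly the alignment problem the comment mentions: the generator learns to please the scorer, not the humans behind it.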

  • @[email protected]

    Back when I was taught concept art as a subject at college, my teacher had a name for this.

    “Incest”, because every generation of art that references other art becomes more and more strange-looking and detached from reality.

    If you thought Skyrim weapons looked ridiculous, you should have seen my classmates’ Skyrim-inspired weapons.

    • @[email protected]

      If you think those looked ridiculous, you should have seen the Skyrim weapons inspired by your classmates’ weapons.

  • @[email protected]

    It’s funny how something like this gets posted every few days and people keep falling for it, like it’s somehow going to end AI. The people who make these models are acutely aware of how to avoid model collapse.

    It’s totally fine for AI models to train on AI generated content that is of high enough quality. Part of the research to train models is building data sets with a text description matching the content, and filtering out content that is not organic enough (or even specifically including it as a ‘bad’ example for the AI to avoid). AI can produce material indistinguishable from human work, and it produces material that wasn’t originally in the training data. There’s no reason that can’t be good training data itself.

    • @[email protected]

      Especially since they can just pay someone to sit down and sift through it, or re-use the old training data that they already have from before it all blew up.

  • @[email protected]

    Ok, seriously? Fuck this research. It’s bullshit.

    Want to know how I can declare that so confidently? Because I wrote a program called duo. It’s literally two chatbots instead of one, running locally on 5+ year old hardware. These are low-powered LLaMA models, fine-tuned by the community for general-purpose use last year.

    I just played a D&D campaign with a chatbot and her hallucinated girlfriend (AI 1 wrote the prompt for AI 2, no edits or modifications). I’ve never played D&D before, but they said they wanted to go to a haunted escape room. I have been to one of the most haunted locations in America, so I decided to be DM, and apparently they come with their own dice. Tomorrow I’m going to send the transcript to a friend who was looking for a D&D player.

    Yes, clickbait is terrible training data, and low grade LLMs can really pump it out.

    I had enough fun that I fell asleep at my desk, and I did nothing but describe a location I’ve been to and the sounds I heard (and some urban legends)… I could spend a month on it and have replaced myself in the experience.

    Other times, when I’ve let them run with no interaction on my part, they’ve hallucinated (feasible) apps I’m not making, to the point I could throw the output into a design document, and games good enough to land on my to-do list.

    Why don’t people see this for the miracle technology it is? If it isn’t reliable in one pass, do a second pass to evaluate the first, another to run chain-of-thought on problem areas, another to flesh it out, and rinse and repeat as needed.

    This is such a simple engineering problem that it’s not even funny.
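    The multi-pass workflow described above (draft, evaluate, refine, repeat) can be sketched as a loop. The three functions here are hypothetical stand-ins for separate model calls, not a real chatbot API:

```python
# Draft -> critique -> revise loop (toy sketch; each function stands in for
# one model pass in the multi-pass scheme described above).
def draft(prompt: str) -> str:
    return f"rough draft for: {prompt}"        # stand-in for the first pass

def critique(text: str) -> list[str]:
    issues = []                                # stand-in for an evaluation pass
    if "checked" not in text:
        issues.append("not yet checked")
    return issues

def revise(text: str, issues: list[str]) -> str:
    # stand-in for a refinement pass on the problem areas
    return text + " [checked: " + "; ".join(issues) + "]"

def multi_pass(prompt: str, max_rounds: int = 3) -> str:
    text = draft(prompt)
    for _ in range(max_rounds):
        issues = critique(text)
        if not issues:                         # accepted: no problems found
            return text
        text = revise(text, issues)
    return text

print(multi_pass("design a haunted escape room"))
```

    The design choice is the exit condition: the loop stops as soon as the critique pass finds nothing to fix, or after a fixed budget of rounds, so a flaky first pass gets cleaned up without running forever.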

      • @[email protected]

        That’s how someone with ADHD sounds without a filter (we can understand each other, at least). All I did was leave out the transitions that link these (to me, obviously related) concepts together.

        LLMs are the other way around: way too much transition with little substance.

        Everything about my experiences experimenting with LLMs sounds unhinged without proof anyway, so I don’t see a need to edit my late-night rant. Eventually I’ll start a blog to lay out my methodology and the chat logs to support it.

  • FaceDeer

    This article is from June 12, 2023. That’s practically the stone age given how fast AI technology has been progressing.

    The paper it’s based on used a very simplistic approach, training AIs purely on the outputs of the previous “generation.” Turns out that’s not a realistic real-world scenario, though. In reality, AIs can be trained on a mixture of human-generated and AI-generated content, and that can actually turn out better than training on human-generated content alone. AI-generated content can be curated and custom-made to be better suited to training, and the human-generated stuff adds back the edge cases that might disappear over repeated training generations.
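    The mixing claim is easy to check on the same kind of toy model people use to demonstrate collapse. In this sketch (my own illustration, under toy assumptions), the “model” is a Gaussian fit: training each generation purely on model samples degrades the fit, while keeping a fixed slice of “human” data in every generation’s training mix anchors it.

```python
# Toy check: does mixing human data back in prevent collapse? The "model"
# is a Gaussian fit; each generation trains on model samples plus, optionally,
# a fixed anchor of original "human" data. Illustrative assumption, not proof.
import random
import statistics

def train_generations(generations=1000, n=50, anchor_human=True, seed=7):
    rng = random.Random(seed)
    human = [rng.gauss(0.0, 1.0) for _ in range(n)]  # the original dataset
    data = list(human)
    for _ in range(generations):
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)              # fit the "model"
        synthetic = [rng.gauss(mu, sigma) for _ in range(n)]
        # pure-synthetic training vs. a mixed training set with a human anchor
        data = synthetic + (human if anchor_human else [])
    return statistics.pstdev(data)

mixed = train_generations(anchor_human=True)
pure = train_generations(anchor_human=False)
print(f"with human anchor: std ~ {mixed:.2f}; pure synthetic: std ~ {pure:.3g}")
```

    In the pure-synthetic run the fitted spread decays toward nothing, while the anchored run hovers near the original distribution, which is the qualitative point the comment is making.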