AI Industry Struggles to Curb Misuse as Users Exploit Generative AI for Chaos
Artificial intelligence just can’t keep up with the human desire to see boobs and 9/11 memes, no matter how strong the guardrails are.

  • @[email protected]
    link
    fedilink
    English
    102 years ago

    This is part of a bigger topic people need to be aware of. As more and more AI is used in public spaces and on the internet, people will find creative ways to exploit it.

    There will always be ways to make an AI do stuff its owners don’t want it to. You could think of it like the exploits used in speedrunning, but in this case there’s a lot more variety. Just as you can make an AI generate morally questionable material, you could potentially find a way to exploit the AI of a self-driving car to do whatever you can think of.

    • @[email protected]
      link
      fedilink
      English
      3
      edit-2
      2 years ago

      This is trivially fixable; it’s just at 2-3x the per-query cost, so it isn’t deemed worth it for high-volume chatbots given the low impact of jailbreaking.

      For anything where jailbreaking would somehow be a safety concern, that cost just needs to be factored in.
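
      As a rough sketch of what that kind of extra pass could look like (a toy illustration, not any vendor’s actual pipeline): generate a result, then spend a second step reviewing it before returning it. The toy_generate and review_pass functions and the blocklist below are stand-ins invented for this example; in a real system both steps would be model calls, which is where the roughly 2-3x per-query cost comes from.

      ```python
      # Toy two-pass pipeline: one generation step plus one review step.
      BLOCKED_TERMS = {"9/11", "graphic violence"}  # stand-in content policy


      def toy_generate(prompt: str) -> str:
          # Stand-in for the real generation call (image or text model).
          return f"[generated content for prompt: {prompt}]"


      def review_pass(prompt: str, output: str) -> bool:
          # Stand-in for a second model call that judges the finished output
          # against the policy; returns True if the output is acceptable.
          text = f"{prompt} {output}".lower()
          return not any(term in text for term in BLOCKED_TERMS)


      def safe_generate(prompt: str, retries: int = 1) -> str | None:
          # Generate, review, retry once, and refuse if every attempt fails.
          for _ in range(retries + 1):
              candidate = toy_generate(prompt)
              if review_pass(prompt, candidate):
                  return candidate
          return None  # refuse rather than return content that failed review


      print(safe_generate("a cute dog wearing sunglasses"))  # passes review
      print(safe_generate("kermit doing 9/11"))              # refused -> None
      ```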

      • @[email protected]
        link
        fedilink
        English
        12 years ago

        That’s true for all the things that can have a query cost. What about AI applications that don’t have any financial cost to the user? For instance, The Spiffing Brit keeps finding interesting ways to exploit the YouTube algorithm. I’m sure you can apply that same “hacker mentality” to anything with AI in it.

        At the moment, many of those applications are on the web, and that’s exactly where a query cost can be a feasible way to limit the number of experiments you can reasonably run in order to find your favorite exploit. If it’s too expensive, you probably won’t find anything worth exploiting, and that should keep the system relatively safe. However, more and more AI is now finding its way into the real world, which means those exploits are going to have some very spicy rewards.

        Just imagine the traffic lights were controlled by an AI and you found an exploit that let you get a green light on demand. Applications like this don’t have any API query costs; you just need to be patient and try all sorts of weird stuff to see how the lights react. Sure, you can’t run a gazillion experiments in an hour, which means you personally might not find anything worth exploiting. But with millions of people experimenting with the system simultaneously, surely someone would find an exploit.

  • Praise Idleness · 51 points · 2 years ago

    You know what else can produce Elmo knocking up an anime character while smoking pot? Paper and pen. This obsession with regulating AI usage is not only impossible but also just plain stupid.

    • This is fine🔥🐶☕🔥 · 9 points · 2 years ago

      This is an unfair comparison.

      Pen-and-paper art, or even using Photoshop, requires one to put in time and effort and to have skills. AI tools don’t.

      • Praise Idleness · 21 points · 2 years ago

        Yeah, printing is dangerous because it’ll be a lot easier to write lewd, dangerous books! Back then you had to hire people to do so!

        • This is fine🔥🐶☕🔥 · 3 points · 2 years ago

          Ah yes, photorealistic images (and videos) are as effective as text.

          Btw that’s also an unfair argument, because printing technology just printed the same book many times. You still needed an author to write the source text.

          AI generates different images within minutes.

          But please continue pretending AI generated images and videos are not a problem.

          • @[email protected]
            link
            fedilink
            English
            3
            edit-2
            2 years ago

            The printing press cut publication time by a far larger margin (from months to hours for a big book like the Bible) and arguably kicked off a century of incredibly bloody warfare with Luther and then the Counter-Reformation.

            I don’t see how being able to get a decent image of Marx with tits from a few minutes of generating images is so much more dangerous.

            • This is fine🔥🐶☕🔥 · 1 point · 2 years ago

              You still needed writers to come up with new material for the printing press. It only increased distribution of existing material.

              Which isn’t the case with machine-generated text and images. You can get any hateful or depraved output within minutes.

              • @[email protected]
                link
                fedilink
                English
                1
                edit-2
                2 years ago

                That was exactly the point the church made against the printing press: without needing scribes, anyone could come up with whatever foul heresy they liked and publish it for distribution.

                The chief difference between now and then is what we consider impermissible. Otherwise the argument is the same: we cannot trust people to publish whatever they like, or terrible things will happen.

          • @[email protected]
            link
            fedilink
            English
            22 years ago

            It’s really not a problem. We have both open-source and proprietary solutions for generative AI. If you have the hardware for it, you can generate images locally for free (a minimal sketch of that is at the end of this comment). If you don’t, just use one of the many available services.

            It’s literally giving the power of expression to almost everyone, including artists.

            Also let’s not talk about jobs/money. Technology replacing jobs isn’t something new and that’s what humanity should strive toward.
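
            For the “generate locally” route, here’s a minimal sketch assuming the Hugging Face diffusers library, an example Stable Diffusion checkpoint (runwayml/stable-diffusion-v1-5), and a CUDA-capable GPU; on CPU you’d drop the float16/cuda bits and accept a much slower run.

            ```python
            # Minimal local text-to-image sketch using Hugging Face diffusers.
            import torch
            from diffusers import StableDiffusionPipeline

            # Example checkpoint; any compatible Stable Diffusion model works.
            pipe = StableDiffusionPipeline.from_pretrained(
                "runwayml/stable-diffusion-v1-5",
                torch_dtype=torch.float16,
            )
            pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

            image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
            image.save("lighthouse.png")
            ```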

        • @[email protected]
          link
          fedilink
          English
          102 years ago

          It’s honestly been really enlightening for me to see all the same arguments that were made against the printing press and the camera now being made against generative AI for text and images. It shows just how little people have changed over hundreds of years.

  • @[email protected]
    link
    fedilink
    English
    912 years ago

    Serious question: why should anyone care about using AI to make 9/11 memes? Boobs I can at least see the potential argument against (deepfakes and whatnot), but bad-taste jokes?

    Are these image-generation companies actually concerned they’ll be sued because someone used their platform to make an image in bad taste? Even if such a thing were possible, wouldn’t the responsibility be on the person who made it? Or, at worst, on the platform that distributed the images, as opposed to the one that privately made it?

    • @[email protected]
      cake
      link
      fedilink
      English
      782 years ago

      I don’t see Adobe trying to stop people from making 9/11 memes in Photoshop, nor have they been sued over anything like that. I don’t get why AI should be different. It’s just a tool.

      • Kühlschrank · 7 points · 2 years ago

        The problem for Adobe is that the AI work is being done on their computers, not yours, so it could be argued that they are liable for generated content. ‘Could’ because it’s far from established but you can imagine how nervous this all must make their lawyers.

      • @[email protected]
        link
        fedilink
        English
        222 years ago

        That’s a great analogy, wish I’d thought of it

        I guess it comes down to whether the courts decide to view AI as a tool like Photoshop, or as a service, like an art commission. I think it should be the former, but I wouldn’t be at all surprised if the dinosaurs in the US gov think it’s the latter.

    • @[email protected]
      link
      fedilink
      English
      42 years ago

      I’d guess that they’re worried the IP owners will sue them for using their IP.

      So sonic creators will say, your profiting by using sonic and not paying us for the right to use him.

      But I agree that deep fakes can be pretty bad.

      • El Barto · 14 points · 2 years ago

        your profiting

        You are profiting = you’re profiting.

    • @[email protected]
      link
      fedilink
      English
      182 years ago

      Protect the brand. That’s it.

      Microsoft doesn’t want non-PC stuff being associated with the Bing brand.

      It’s what a ton of the ‘safety’ alignment work is about.

      This generation of models doesn’t pose any actual threat of hostile actions. The “GPT-4 lied and said it was human to try to buy chemical weapons” example in the safety paper at release was comical if you read the full transcript.

      But they pose a great deal of risk to brand control.

      Yet apparently that’s still not enough to justify running results through additional passes, which fixes 99% of all these issues at just 2-3x the cost.

      It’s part of why articles like these are ridiculous. It’s broadly a solved problem; it’s just that the cost/benefit of the solution isn’t enough to justify it, because (a) these issues are low impact and don’t really matter for 98% of the audience, and (b) the robust fix is way more costly than the low-hanging-fruit chatbot applications can justify.

      • Terrasque · 1 point · 2 years ago

        Microsoft doesn’t want non-PC stuff being associated with the Bing brand.

        You mean Bing, the porn Google? Yeah, that might be a tad too late.

  • Lantern · 82 points · 2 years ago

    Was not expecting to see a pregnant Sonic flying a plane today.

  • @[email protected]
    link
    fedilink
    English
    292 years ago

    Misuse lol. People need to get their panties out of their butthole. You build a photo generator and get mad when someone uses it to make a picture of Marx with tits. Who cares? Crybabies can cry about it.

  • @[email protected]
    link
    fedilink
    English
    382 years ago

    Meanwhile, Bing’s image generator blocks 90% of my generation attempts for unsavory content when the prompt is generally something that should be safe even for kids. Why do we only get the extremes?

    • @[email protected]
      link
      fedilink
      English
      112 years ago

      Why did no one care about the misuse of the term AI until these image generators and LLMs came along? Seriously, people have been talking about video game “AI”, chess “AI”, and stuff like that for years. It’s understood that when people say “AI” they don’t mean “general machine intelligence” or anything like that. And frankly, LLMs and image generators fit the bill better than most of the things we’ve used the term for previously.

      As for “can we stop talking about them”: these image generators and LLMs are already having some pretty huge impacts on modern society, for better or worse, so it’d be pretty odd for us all to decide to just stop talking about them.

      • @[email protected]
        link
        fedilink
        English
        72 years ago

        The difference between prior uses of the term “AI” and these technologies is that, as you said, it was understood before to be shorthand, not actual intelligence. Now you have a bunch of panicky people acting as if Skynet has arrived.

        They really haven’t had much of an impact beyond people talking about them all the damn time, especially the fear mongering. At present, these are really just expensive toys. Computer image and gibberish generators.

        The real concerns with developing technologies should be about things like facial recognition and so-called self-driving cars. These technologies present actual dangers to society and public safety, not to mention the complex legal questions that come with their use.

        • @[email protected]
          link
          fedilink
          English
          52 years ago

          They really haven’t had much of an impact beyond people talking about them all the damn time, especially the fear mongering. At present, these are really just expensive toys. Computer image and gibberish generators.

          I highly disagree. Almost everyone I know under the age of 40 already uses LLMs to some extent in the course of their job, whether it’s as simple as composing emails or as significant as using Copilot/ChatGPT to code. And just today I read an article about an entire call center getting laid off this week to be replaced by an LLM.

          I completely agree that a lot of the hype is overblown, but “AI” is absolutely significant in our society, and so we talk about it.

          • @[email protected]
            link
            fedilink
            English
            42 years ago

            It seems everyone you know under the age of 40 is in a very specific subset of the workforce; they do not represent a significant portion of it. I would love to read that article about the call center so I can keep an eye out for news when that plan completely fails. I’m assuming it must be a consumer-facing call center to be so brazen; they wouldn’t risk business accounts (big money) on an LLM, the technology just isn’t there.

        • @[email protected]
          link
          fedilink
          English
          22 years ago

          Facial recognition and image generation are the same technology applied in different ways.

  • @[email protected]
    link
    fedilink
    English
    452 years ago

    Why didn’t someone warn us about this? Nobody said this might happen, nobody! Not a single person tried to be the voice of reason!

  • AutoTL;DR (bot) · 7 points · 2 years ago

    This is the best summary I could come up with:


    Both Meta and Microsoft’s AI image generators went viral this week for responding to prompts like “Karl marx large breasts” and fictional characters doing 9/11.

    “I don’t think anyone involved has thought anything through,” X (formerly Twitter) user Pioldes posted, along with screenshots of AI-generated stickers of child soldiers and Justin Trudeau’s buttocks.

    One Bing user went further, and posted a thread of Kermit committing a variety of violent acts, from attending the January 6 Capitol riot, to assassinating John F. Kennedy, to shooting up the executive boardroom of ExxonMobil.

    In the race to one-up competitors’ AI features, tech companies keep launching products without effective guardrails to prevent their models from generating problematic content.

    Messing around with roundabout prompts to make generative AI tools produce results that violate their own content policies is referred to as jailbreaking (the same term is used when breaking open other forms of software, like Apple’s iOS).

    Midjourney bans pornographic content, going as far as blocking words related to the human reproductive system, but users are still able to bypass the filters and generate NSFW images.


    The original article contains 1,220 words, the summary contains 181 words. Saved 85%. I’m a bot and I’m open source!