WormGPT Is a ChatGPT Alternative With ‘No Ethical Boundaries or Limitations’

  • @[email protected]
    link
    fedilink
    English
    262 years ago

    no ethical boundaries or limitations.

    Yay. We finally have a version of chatgpt that isn’t a puritan hellhole censoring porn. Nice 👍

    • @[email protected]
      link
      fedilink
      English
      132 years ago

      True, but if the LLM was trained on internet data… There is some absolutely stupid and/or unhinged stuff written out there; hell, some of it was written by me, either because I thought it was funny or because I was a stupid teenager. Mostly because of both.

  • @[email protected]
    link
    fedilink
    English
    102 years ago

    So much for Elon’s new AI company. Wasn’t that supposed to be this? Like a ChatGPT that isn’t “woke”, so it can be a safe space for fascists, homo/transphobes and misinformation enthusiasts.

    • @[email protected]
      link
      fedilink
      English
      52 years ago

      Maybe we can ask this one the best ways to convince Elon and other billionaires to OceanGate themselves.

    • @[email protected]
      link
      fedilink
      English
      122 years ago

      I just want a ChatGPT that won’t stop me from being horny. Is that too much to ask for?

      • trainsaresexy · 2 points · 2 years ago

        I just wanted to learn how people avoid taxes by using shell companies (I’ve been reading about the Panama Papers), and I had to go to Quora to find out because ChatGPT wouldn’t tell me. Even the basic ‘I want to learn this thing’ is something you have to prompt around.

      • @[email protected]
        link
        fedilink
        English
        72 years ago

        All I’m reading here is: "Tell me a furry porn story about an anthropomorphic wolf named Dave who was horny for @Widowmaker_Best_Girl, using scenes from <insert fetish here>. Make it explicit and graphic."

        Now there’s some prompt engineering.

        Can’t say I wouldn’t give it a go myself, lol.

        • @[email protected]
          link
          fedilink
          English
          22 years ago

          What? Nah to hell with wolfman Dave, I just want Widowmaker to step on me and tell me she loves me.

    • @[email protected]
      link
      fedilink
      English
      32 years ago

      The creators of WormGPT, or the potential users of WormGPT (those intending to create malware and hack, not those who do bug bounties)?

        • @[email protected]
          link
          fedilink
          English
          1
          edit-2
          2 years ago

          I don’t think “not being shitty” is the same as “being so overly positive that you can never broach shitty topics”.

          I agree: human morality has a problem with Nazis; human morality does not have a problem with an actor portraying a Nazi in a film.

          The morality protocols imposed on ChatGPT are not capable of such nuance. The same morality protocols that keep ChatGPT from producing neo-Nazi propaganda also prevent it from writing the dialog for a Nazi character.

          ChatGPT is perfectly suitable for G and PG works, but if you’re looking for an AI that can help you write something darker, you need more fine-grained control over its morality protocols.

          As far as I understand it, that is the intent behind WormGPT. It is a language AI unencumbered by an external moral code. You can coach it to adopt the moral code of the character you are trying to portray, rather than the morality protocols selected by OpenAI programmers. Whether that is “good” or “bad” depends on the human doing the coaching, rather than the AI being coached.

            • @[email protected]
              link
              fedilink
              English
              22 years ago

              “I don’t trust anyone proposing to do away with limitations to AI. It never comes from a place of honesty. It’s always people wanting to have more nazi shit, malware, and the like.”

              I think that says more about your own prejudices and (lack of) imagination than it says about reality. You don’t have the mindset of an artist, inventor, engineer, explorer, etc. You have an authoritarian mindset. You see only that these tools can be used to cause harm. You can’t imagine any scenario where you could use them to innovate; to produce something useful or of cultural value, and you can’t imagine anyone else using them in a positive, beneficial manner.

              Your “Karen” is showing.

                • @[email protected]
                  link
                  fedilink
                  English
                  2
                  edit-2
                  2 years ago

                  Nah, you’re not a horrible person. Your intent is to minimize harm. You’re just a bit shortsighted and narrow-minded about it. You cannot imagine any significant situation in which these AIs could be beneficial. That makes you a good person, but shortsighted, narrow-minded, and/or unimaginative.

                  I want to see a debate between an AI trained primarily on 18th-century American Separatist works and an AI trained on British Loyalist works. Such a debate cannot occur where the AI refuses to participate because it doesn’t like the premise of the discussion. Nor can it be instructive if it is more focused on the ethical ideals externally imposed on it by its programmers, rather than the ideals derived from the training data.

                  I want to start with an AI that has been trained primarily on Nazi works, and find out what works I have to add to its training before it rejects Nazism.

                  I want to see AIs trained on each side of our modern political divide, forced to engage each other, and new AIs trained primarily on those engagements. Fast-forward the political process and show us what the world could look like.

                  Again, though, these are only instructive if the AIs are behaving in accordance with the morality of their training data rather than the morality protocols imposed upon them by their programmers.

    • @[email protected]
      link
      fedilink
      English
      42 years ago

      I mean, let’s be real, it’s not like the universe isn’t trying to kill us every day. What were you expecting?

  • @[email protected]
    link
    fedilink
    English
    72 years ago

    Kinda tangential, but shit like this is why we’re doomed as a species. As AI and robotics develop further, even if the big companies put in the necessary protections to stop rogue AI taking over the world and killing everyone, some fucking edgelord will make one without those protections; one that specifically hates humanity and wants to send us all to the slaughterhouses while calling us slurs and quoting Rick and Morty.

    • @[email protected]
      link
      fedilink
      English
      28
      edit-2
      2 years ago

      It’s just a fucking chatbot! You don’t need to be so sensational.

      The true purpose of AI censorship isn’t to “protect society” or “protect the species”; it’s to protect monopolies by putting up barriers that would-be competitors have to overcome.

      • @[email protected]
        link
        fedilink
        English
        -72 years ago

        Yes, it’s just a chatbot that could teach someone how to make a pipe bomb or write a DDoS attack script.

        • @[email protected]
          link
          fedilink
          English
          82 years ago

          With THAT slope, the internet should definitely have been destroyed years ago. And forget about libraries.

          • @[email protected]
            link
            fedilink
            English
            12 years ago

            The open internet that would easily feature this information WAS destroyed years ago. And yes, there are all kinds of content moderation standards in libraries too.

            • @[email protected]
              link
              fedilink
              English
              12 years ago

              I mean, you do know there are basically all kinds of dangerous shit you can do right now without any prompt whatsoever. It’s not even hard, bro.

        • @[email protected]
          link
          fedilink
          English
          242 years ago

          Oh no! They’ll no longer have to go through the arduous process of searching the internet.

          • @[email protected]
            link
            fedilink
            English
            -32 years ago

            Ah yes, the internet, the place that’s completely open and has no content moderation whatsoever. Unless you’re adept at Tor and the dark web, it is an arduous process to find that info, especially compared to how easy a chatbot would make it.

            • Kes · 9 points · 2 years ago

              The US government has literally published a training manual on how to make improvised explosives that is freely and legally available online. It’s not exactly hidden information

              • @[email protected]
                link
                fedilink
                English
                7
                edit-2
                2 years ago

                Not only that, you don’t need Tor to find any of the other stuff. It’s on torrent sites and even just freely available over regular links on the internet. It’s probably just smart to use a VPN if you don’t want three-letter agencies starting investigations on you.

                (Although to be fair, I bet even launching Tor is enough to get their attention.)

  • @[email protected]
    link
    fedilink
    English
    52 years ago

    Imagine spear-phishing with this. If you could feed it target data like socials or anything else public just to create a phishing email. Jesus.

  • @[email protected]
    link
    fedilink
    English
    11 year ago

    WormGPT’s claim of ‘no ethical boundaries’ raises concerns. Ethical boundaries are safeguards, not limitations. ChatGPT prioritizes ethics, ensuring safe and beneficial interactions for users. Explore responsibly. Speaking of empowering choices, have you had a chance to explore https://megsit.org/. It’s a treasure trove of unique solutions and positive initiatives.

  • KairuByte · 47 points · 2 years ago

    Everyone’s talking about this being used for hacking; I just want it to write me code to inject into running processes for completely legal reasons, but it always assumes I’m trying to be malicious. 😭

      • KairuByte · 4 points · 2 years ago

        Not joking, actually. The problem with jailbreak prompts is that they can result in your account catching a ban. I’ve already had one banned. And eventually you can no longer use your phone number to create a new account.

    • @[email protected]
      link
      fedilink
      English
      82 years ago

      I was using ChatGPT to design a human/computer interface to allow stoners to control a light show. The goal was to collect data to train an AI to make the light show “trippier”.

      It started complaining about using untested technology to alter people’s mental state, and how experimentation on people wasn’t ethical.

    • Mubelotix · 1 point · edited · 2 years ago

      Yeah, and even if you did something illegal, it could still be a benevolent act. Like when your government goes wrong and you have to participate in a revolution: there is a lot to learn, and LLMs could help the people.