• @[email protected]
    link
    fedilink
    English
    12 years ago

    This might apply to LLMs and such, but there's no reason a true AGI couldn't be completely unbiased, though it could also end up biased in a way that benefits itself.

    • @[email protected]
      link
      fedilink
      English
      32 years ago

      How do you solve the problem of ethics? Is there even such a thing as objectively true ethics?

      You have to answer that question before you can even start saying that being unbiased is possible in the first place.

      • @[email protected]
        link
        fedilink
        English
        02 years ago

        If we’re speaking of an AGI, then I don’t need to solve those issues; it’s going to solve them for me. By definition, an AGI doesn’t need a human to improve itself.

        • @[email protected]
          link
          fedilink
          English
          32 years ago

          How will you tell the AI what the proper ethics for humans are?

          After all, you want the AI to be in service of humans, of us… right? If not, what is going to stop the AI from just being entirely self-serving?

          • @[email protected]
            link
            fedilink
            English
            -1
            edit-2
            2 years ago

            I think we have a very different view of what a true AGI will be like. I don’t need to tell/teach it anything. It’ll be a million times smarter than me and hopefully will teach me instead.

            Nothing stops it from being entirely self-serving. That’s why I expect it to destroy us.

              • @[email protected]
                link
                fedilink
                English
                02 years ago

                I think it’s inevitable, so we might as well hope it’ll turn out fine, though I doubt it will. What I’m looking forward to is the ideal version of it.