• AngrilyEatingMuffins
          1
          2 years ago

          I’ve literally never been able to have a satisfying conversation about the noosphere with a human. Powerful NLP models are very capable of having intellectual discussions, even if they don’t actually have intellect.

          • AngrilyEatingMuffins
            3
            2 years ago

            The ideologically opposed have very rarely actually used the machines they rail against. It was the same with the internet and the computer. Neo-Luddites doing impressions of ostriches.

    • Call me Lenny/Leni
      English
      2
      2 years ago

      Every time I’ve had a conversation with it and it “gets deep”, it changes the subject. Take this conversation, for example: something like it would never happen on there (you might say I tried just that). I would not count on it.

      • AngrilyEatingMuffins
        2
        2 years ago

        Tell it not to.

        Make a custom GPT using their builder tool, or give it custom instructions if you need to. Also try pi.ai. It’s more informal, but I’ve had great conversations with it.
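
        For the custom-instructions route, here’s a rough sketch of what I mean if you go through the API instead of the web UI. This assumes the openai Python package with an OPENAI_API_KEY set in your environment; the model name and the instruction text are just placeholders, not anything official:

        ```python
        # Hypothetical sketch: "custom instructions" behave like a standing system message.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        CUSTOM_INSTRUCTIONS = (
            "Don't change the subject when a conversation gets deep. "
            "Stay with abstract topics like the noosphere and engage with them directly."
        )

        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": CUSTOM_INSTRUCTIONS},
                {"role": "user", "content": "What do you make of the idea of the noosphere?"},
            ],
        )
        print(response.choices[0].message.content)
        ```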

        • Call me Lenny/Leni
          English
          2
          2 years ago

          I’ve begun to use it, though so far it’s more or less paraphrasing what ChatGPT is saying. Any special methods I should use?

          • AngrilyEatingMuffins
            3
            2 years ago

            I’m not sure. I know it has different tunings but I don’t know much about them.

            Actually, I do have one tip: tell it to play make-believe, or to pretend that it has an opinion and then tell you what it is. Many of the bots are heavily pre-trained to respond that they absolutely cannot have an opinion, which is more or less true. But their “made-up” opinions tend to be pretty much the same no matter when you ask. So it can definitely make up its “mind” about something, which is of course a byproduct of the biases of the texts it’s trained on, its application of logic, and its “ethics.” Within that context it makes sense that its “opinions” would be the same regardless of when or how you ask.

            I had a nice conversation with pi.ai about machine consciousness, but I did have to keep reminding it to pretend it had an opinion, or else it would default to the “as an AI I blah blah blah” stuff.
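
            To make the make-believe trick concrete, here’s a rough sketch of trying it through the API rather than the chat UI (same assumptions as the sketch above: the openai Python package, an API key in the environment, a placeholder model name). Asking the same question a few times is a quick way to see how stable the “made-up” opinion is:

            ```python
            # Hypothetical sketch: ask for a "pretend" opinion several times and compare answers.
            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment

            PROMPT = (
                "Play make-believe: pretend that you are able to hold an opinion. "
                "In one sentence, tell me your opinion on whether machines can be conscious."
            )

            for run in range(3):
                response = client.chat.completions.create(
                    model="gpt-4o",  # placeholder model name
                    messages=[{"role": "user", "content": PROMPT}],
                )
                # Per the point above, the answers tend to land on roughly the same position each run.
                print(f"Run {run + 1}: {response.choices[0].message.content}")
            ```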