• Charlie Stross
    134 days ago

    @dgerard What fascinates me is *why* coders who use LLMs think they’re more productive. Is the complexity of their prompt interaction misleading them about how effective the resulting outputs are? Or is it something else?

    • @[email protected]
      134 days ago

      Here’s a random guess: they’re thinking less, so time seems to go by quicker. Think about how long two hours of calculus homework feels versus two hours sitting on the beach.

      • @[email protected]
        84 days ago

        This is such a wild example to me, because sitting at the beach is extremely boring and takes forever, whereas doing calculus is at least engaging, so time flies reasonably quickly.

        Like, when I think about what takes the longest in my life, I don’t think “those times when I’m actively solving problems”, I think “those times I sit in a waiting room at the doctor’s with nothing to do”, or “commuting, ditto”.

        • @[email protected]
          34 days ago

          I know what you mean. If I’m absorbed in something I find interesting, time flies. Solving integrals is not one of those things for me.

      • @[email protected]
        54 days ago

        The reward mechanism in the brain is triggered when you bet. I think it also triggers a second time when you actually win, but I’m not sure. So, yeah, sometimes the LLM spits out something good, and your brain rewards you already when you ask it. Hence you probably do feel better, because you constantly get hits of dopamine.

    • @[email protected]
      24 days ago

      Most people want to do the least possible work with the least possible effort, and AI is the vehicle for that. They say whatever words make AI sound good. There’s no reason to take their words at face value.