• @[email protected]
    13 points · 4 months ago

    You’d think these centralised LLM search providers, e.g. Perplexity or Claude, would be caching a lot of this stuff.

    • @[email protected]
      38 points · 4 months ago

      There are two prongs to this:

      1. Caching is an optimization strategy used by legitimate software engineers (see the sketch below). AI dorks are anything but.

      2. Crippling information sources outside the service means information is more easily “found” inside the service.

      So if it was ever a bug, it’s now a feature.
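
      For what it’s worth, the caching in prong 1 isn’t exotic. A minimal sketch of what a well-behaved crawler could do, assuming Python with the requests library (the TTL, the names, and the single-process dict cache are all illustrative, not any provider’s actual code):

      ```python
      import time

      import requests  # assumed HTTP client; any would do

      CACHE_TTL = 60 * 60  # hypothetical policy: reuse pages for an hour
      _cache: dict[str, tuple[float, str]] = {}  # url -> (fetched_at, body)

      def fetch(url: str) -> str:
          """Return the page body, hitting the origin only when the cached copy expires."""
          entry = _cache.get(url)
          if entry and time.time() - entry[0] < CACHE_TTL:
              return entry[1]  # cache hit: zero load on the origin server
          resp = requests.get(url, timeout=10)
          resp.raise_for_status()
          _cache[url] = (time.time(), resp.text)
          return resp.text
      ```

      Every hit inside the TTL costs the origin nothing, which is exactly the behaviour these crawlers are being criticised for skipping.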

      • @[email protected]
        15 points · 4 months ago

        Third prong: constantly looking for new information. Yeah, most of these sites may be basically static, but it’s probably cheaper and easier to just recrawl everything constantly.

    • @[email protected]
      8 points · 4 months ago

      They’re absolutely not crawling it every time they need to access the data. That would be an incredible waste of processing power on their end as well.

      Code, though, does change somewhat often. At a bare minimum they’d still need to check whether the code has been updated.
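
      That check doesn’t even require refetching: HTTP has conditional requests built in. A rough sketch assuming Python’s requests library (the function and the stored-ETag plumbing are hypothetical):

      ```python
      import requests

      def refresh_if_changed(url: str, etag: str | None, cached_body: str) -> tuple[str | None, str]:
          """Revalidate a cached page; download the body only if the server says it changed."""
          headers = {"If-None-Match": etag} if etag else {}
          resp = requests.get(url, headers=headers, timeout=10)
          if resp.status_code == 304:  # Not Modified: server sends no body at all
              return etag, cached_body
          resp.raise_for_status()
          return resp.headers.get("ETag"), resp.text  # changed: keep the new validator and body
      ```

      A 304 response carries no payload, so even “checking if the code has been updated” costs a few hundred bytes, not a recrawl.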