We have all seen AI-based search tools on the web like Copilot, Perplexity, DuckAssist etc., which scour the web for information, present it in a summarized form, and also cite sources in support of the summary.

But how do they know which sources are legitimate and which are simple BS? Do they exercise judgement while crawling, or do they have some kind of filter list around the “trustworthiness” of various web sources?
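If a “trust filter list” exists, one simple way it could work is a static score per domain, applied to retrieved results before summarization. This is purely a hypothetical sketch to illustrate the idea from the question; the domains, scores, and `filter_sources` helper are all invented here and are not from any real search engine.

```python
# Hypothetical sketch: filtering retrieved sources against a static
# per-domain "trust list" before they are summarized or cited.
# All domains and scores below are made up for illustration.
from urllib.parse import urlparse

TRUST_SCORES = {
    "en.wikipedia.org": 0.9,      # assumed high-trust source
    "content-farm.example": 0.1,  # assumed low-trust source
}

def filter_sources(urls, threshold=0.5, default=0.2):
    """Keep only URLs whose domain meets the trust threshold.

    Unknown domains fall back to a low default score, so they are
    dropped unless the threshold is lowered.
    """
    kept = []
    for url in urls:
        domain = urlparse(url).netloc
        if TRUST_SCORES.get(domain, default) >= threshold:
            kept.append(url)
    return kept

results = [
    "https://en.wikipedia.org/wiki/Web_crawler",
    "https://content-farm.example/ai-miracle-cure",
]
print(filter_sources(results))  # only the Wikipedia URL survives
```

Real systems (if they do this at all) would more likely learn such scores from link structure, user behavior, or human rating pipelines rather than hand-maintain a list, but the filtering step itself could look this simple.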

  • ikt · 8 points · 27 days ago (edited)

    tbh they’re accurate enough most of the time, hence why billions of people are using them

    • @[email protected] · 39 points · 27 days ago

      That’s actually not why billions of people are using them. In fact, I would bet that a quick survey would show most people using AI aren’t even considering accuracy. But you could always ask AI and see what it says, I guess…

    • @[email protected] · 7 points · 27 days ago

      The hallucination rates with current models are quite high, especially for the reasoning ones, with rates like 70%. I wouldn’t call that accurate. I think most of the time we’re just not interested enough to even check a random search result for accuracy. We often just accept the answer that is given, without any further thought.

      • ikt · 1 point · 27 days ago

        are you sure your settings are correct? what are you asking that gets a 70% hallucination rate?

        • @[email protected] · 1 point · 26 days ago

          I should have mentioned where I got this from. I’m not an AI researcher myself - so AINAAIR. I’m referencing this YouTube video from TheMorpheus (news and information/tutorials about various IT topics, including AI research; the video is in German). For example, the diagram at 3:00.