• @[email protected]
    10
    1 year ago

    I mean it makes sense:

    https://urologyspecialistsnc.com/soda-cause-kidney-stones/

    Keep in mind, not all types of soda are equally capable of contributing to kidney stones.

    We recommend consuming any soda in moderation, and if you must, stick to the light citrus types that have less sugar and fewer chemicals. Please note, it’s best to avoid soda altogether if kidney stones run in your family.

    Alternatives to soda include fresh fruit juices. Orange juice has been studied and shown to decrease the risk of stone formation. Fresh-squeezed lemonade is also great for preventing the formation of kidney stones. These beverages are high in citrate, which binds to calcium in the urinary tract, preventing stone formation.

  • mechoman444
    11
    1 year ago

    I was actually drinking my own urine as I read this post. But not for kidney stones…

  • @[email protected]
    18
    1 year ago

    It’s the rule of modern engineering: you will always be served the worst possible product that can still claim to have some utility. If it’s not on the edge of being useful, someone didn’t engineer hard enough.

    • Kogasa
      10
      1 year ago

      That’s not necessarily wrong, but I don’t think it’s the main explanation here. The technical challenges of aligning ML models with factual reality aren’t solved, so this isn’t an engineering decision. It’s more that AI is remarkably easy to market as being more capable than it is.

      • @[email protected]
        4
        1 year ago

        To expand: I feel it should be emphasised more that current “AI” models are, strictly speaking, always hallucinating; some of those hallucinations just happen to be true.

        Their output may look real enough, and for some purposes it may be perfectly suitable, but ultimately the models have no concept of the semantic objects behind the words they learn, nor of the relationships between those objects. Without that, they can’t guarantee that the meaning implied by the words they string together matches the actual relationships in the world.

        You can use an LLM to help turn bullet points into text of a given tone (say, a thesis abstract that sounds suitably scientific), but you still have to check those texts for factual accuracy and consistency. When you use it to write about something you already know, that’s doable and can save you some work. But using it as in the OP, to aggregate and present “new” facts without supervision, is dangerous, because you can’t verify what you don’t already know.
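
        That workflow is easy to sketch. Purely as an illustration (the client library, model name, prompts and bullet points below are my own assumptions, not anything from this thread), a “draft from supplied facts, then verify by hand” helper could look like this:

        ```python
        # Minimal sketch: the LLM only rewords facts we supply; a human
        # still has to verify the result. Assumes the OpenAI Python client
        # purely for illustration; any chat-style LLM API works the same way.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        bullet_points = [
            "orange juice was associated with lower kidney stone risk",
            "citrate binds calcium in the urinary tract",
        ]

        response = client.chat.completions.create(
            model="gpt-4o",  # hypothetical choice; substitute any chat model
            messages=[
                {
                    "role": "system",
                    "content": "Rewrite the user's bullet points as one formal "
                               "paragraph. Do not add facts absent from the input.",
                },
                {"role": "user", "content": "\n".join(bullet_points)},
            ],
        )

        draft = response.choices[0].message.content
        print(draft)  # still needs a human fact-check against the source bullets
        ```

        The point is in the system prompt and the final comment: the model only rewords what you feed it, and a human still owns the fact-check.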

        But “Copilot can scrape your data to give you some pointers and spare you some of the tedium of finding them yourself, but you shouldn’t take it as gospel truth” doesn’t sell quite as nicely as “Microsoft Copilot leverages the power of AI to boost productivity, unlock creativity, and help you understand information better”.

  • Shurimal
    7
    1 year ago

    Just for kicks, I entered the same thing into Brave Search, and its AI seems to give a much saner answer. Google Search is an absolute joke these days.

    • @[email protected]
      3
      1 year ago

      I have no idea how Google manages to be so terrible. Given the way it runs, at least half its employees have to be actively sabotaging the company.

  • さようなら
    34
    1 year ago

    We should start poisoning the LLMs by spreading misinformation in online spaces. That would be funny, I think.

      • @[email protected]
        3
        1 year ago

        Common mistake. You know that if you drink too much urine, there will be nothing left to piss, right?

    • Fishbone
      20
      1 year ago

      And that’s definitely the most unhinged thing the AI said in OP’s image.

    • @[email protected]
      1
      1 year ago

      To be fair, every time I give Gemini a go it’s hot garbage.

      But then 4o seems to be worse than GPT-4. It just feels like it’s regurgitating garbage.