• @[email protected]
    link
    fedilink
    English
    103 months ago

    I use ChatGPT as a suggestion tool, like an aid to whatever it is that I’m doing. It either helps me or it doesn’t, but I always have my critical thinking hat on.

    • @[email protected]
      link
      fedilink
      23 months ago

      Same. It’s an idea generator. I asked what kinda pie I should make, saw one I liked, and then googled a real recipe.

      I needed a SQL query for work. It gave me different methods of optimization. I then googled those methods, implemented them, and tested the result.

  • RedSnt 👓♂️🖥️ · 15 points · 3 months ago

    I’ve been using o3-mini mostly for ffmpeg command lines, and a bit of sed. It hasn’t been terrible; it’s a good way to learn stuff I can’t decipher from the man pages. Not sure what else it’s good for tbh, but at least I can test and understand what it’s doing before running the code.
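
    As an example, the kind of thing I’d ask it for (file names and settings here are made up for illustration, not from any one session):

        # Transcode to H.264, scale to 1280px wide (height auto, kept even), copy audio unchanged
        ffmpeg -i input.mp4 -vf scale=1280:-2 -c:v libx264 -crf 23 -c:a copy output.mp4

        # Replace every "foo" with "bar" in place (GNU sed)
        sed -i 's/foo/bar/g' notes.txt

    Once it explains what each flag does, I can cross-check against the man pages instead of starting from zero.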

      • RedSnt 👓♂️🖥️ · 1 point · 3 months ago

        True. In many cases I’m still searching around, because the explanations from humans aren’t as simplified as the LLM’s. I often have to be precise in my prompting to get the answers I want, which you can’t be if you don’t know what to ask.

        • @[email protected]
          link
          fedilink
          23 months ago

          And that’s how you learn, and learning includes knowing how to check whether the info you’re getting is correct.
          An LLM confidently gives you an easy-to-digest bite that is plain wrong 40 to 60% of the time, and even when you get lucky, relying on it will be worse for you in the long run.

          • RedSnt 👓♂️🖥️ · 1 point · 3 months ago

            I’m in the kiddie pool, so I do look things up or ask what stuff does. Even though I looked at the man page for printf (printf(3), I believe), there was nothing about %*s, for example, and searching for these things outside of asking LLMs is sometimes too hard to filter down to the correct answer. I’m on 2 lines of code per hour, so I’m not exactly rushing.
            Shell scripting is quite annoying, to be sure. Thinking of learning Python instead.
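
            For anyone else who hits this: as far as I can tell, the * just means the field width is read from the next argument. A minimal sketch (values made up):

                # Right-justify "hi" in a 10-character field
                printf '%*s\n' 10 "hi"     # prints "        hi"

                # Left-justify with %-*s
                printf '%-*s|\n' 10 "hi"   # prints "hi        |"

            Bash’s printf builtin seems to accept this the same way the C function does.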

            • @[email protected]
              link
              fedilink
              13 months ago

              Come on, I just googled “printf bash” and the first link gave me a very comprehensive page on how it works, what the parameters are, and how to use them. It was 3 pages on my phone.
              Please don’t take what I am about to say the wrong way, but if this was too complicated for you, this is your problem, not anything else. This is how people learn; there is no cheat code to it. You need to learn how to find information and how to absorb it, and no robot will ever do it for you.
              Bash is a confusing mess, sure, but using a random-word generator to chew it up for you will make things worse. It’s very possible that you’re on 2 lines per hour precisely because you’re using an LLM.

  • @[email protected]
    link
    fedilink
    18
    edit-2
    3 months ago

    I feel this hard with the New York Times.

    99% of the time, I feel like it covers subjects adequately. It might be a bit further right than me, but for a general US source, I feel it’s rather representative.

    Then they write a story about something happening to low-income US people, and it’s just social and logical salad. From their reporting, it appears as though they analytically look at data instead of talking to people. Statisticians will tell you, and this is subtle: conclusions made at one level of detail cannot be generalized to another level of detail. Looking at data without talking with people is fallacious for social issues. The NYT needs to understand this, but meanwhile they are at times horrifically insensitive, bordering on destructive.

    “The jackboot only jumps down on people standing up”

    • Hozier, “Jackboot Jump”

    Then I read the next story and I take it as credible without much critical thought or evidence. Bias is strange.

      • @[email protected]
        link
        fedilink
        23 months ago

        “Wet sidewalks cause rain”

        Pretty much. I never really thought about the causal link being entirely reversed; I assumed the chain of reasoning was broken or mediated by some factor they missed, which yes, definitely happens, but now I can definitely think of instances where it’s totally flipped.

        Very interesting read, thanks for sharing!

    • Lady Butterfly · 4 points · 3 months ago

      Can you give me an example of conclusions at one level of detail that can’t be generalised to another level? I can’t quite understand it.

      • @[email protected]
        link
        fedilink
        6
        edit-2
        3 months ago

        Perhaps the textbook example is Simpson’s Paradox.

        This article goes through a couple of cases where the naively aggregated data supports certain conclusions, but when you correctly separate the data, those conclusions reverse themselves.
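
        To make that concrete, here’s a sketch using the numbers from the classic kidney-stone example: treatment A beats treatment B within each severity group, yet B looks better overall, because A was given the harder cases.

            Small stones:  A cures  81/87  (93%)   B cures 234/270 (87%)   -> A better
            Large stones:  A cures 192/263 (73%)   B cures  55/80  (69%)   -> A better
            Combined:      A cures 273/350 (78%)   B cures 289/350 (83%)   -> B “better”

        Same data, opposite conclusion, depending on the level of aggregation.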

        Another relevant issue is Aggregation Bias. This article has an example where conclusions that hold for a population are inverted for the individuals of that population.

        And the last one I can think of is the Modifiable Areal Unit Problem (MAUP), which deals with the fact that statistics are very sensitive to the process used to divvy up a space. This is commonly referenced in spatial statistics but has broader implications, I believe.


        This is not to say that you can never generalize, and indeed, often a big goal of statistics is to answer questions about populations using only information from a subset of individuals in that population.

        “All models are wrong, but some are useful”

        • George Box

        The argument I was making is that the NYT will authoritatively draw conclusions while looking only at the population level, without taking the individual into account, and not only is that often dubious, sometimes it’s actively detrimental. They don’t seem to me to do their due diligence in mitigating the risk that comes with such dubious assumptions, hence the cynic in me left that Hozier quote.

        • Lady Butterfly · 4 points · 3 months ago

          That’s really interesting and I really appreciate you writing that out

  • Kane · 2 points · 3 months ago

    Exactly. This is why I have a love/hate relationship with just about any LLM.

    I love it most for generating code samples (small enough that I can manually check them, not entire files/projects) and re-writing existing text, again small enough to verify everything. The common theme is that I have to re-read its output a few times to make 100% sure it hasn’t made some random mistake.

    I’m not entirely sure we’re going to resolve this without additional technology outside of the LLM itself.

  • balderdash · 16 points · 3 months ago

    DeepSeek is pretty good tbh. The answers sometimes leave out information in a way that is misleading, but targeted follow-up questions can clarify.

    • snooggums · 53 points · 3 months ago

      Like leaving out what happened in Tiananmen Square in 1989?

        • @[email protected]
          link
          fedilink
          93 months ago

          In my opinion it should have been the politburo that was pureed under tank tracks and hosed down into the sewers instead of those students.

          • @[email protected]
            link
            fedilink
            3
            edit-2
            3 months ago

            The western narrative about Tiananmen Square is basically orthogonal to the truth.

            It’s not just filled with fabricated events like tanks pureeing students; it completely misses the context and the response, to tell a weird “china bad and does evil stuff cuz they hate freedom” story.

            The other weird part is that the big set pieces of the western narrative, like tank man getting run over by tanks headed to the square, are so trivial to debunk (just look at the uncropped video), yet I have yet to see one lemmiter actually look at the evidence and develop a more nuanced understanding. I’ve even had them show me compilations of photos from the events, and they never stop to think “Huh, these pictures of gorily lynched cops, protesters shot in streets outside the square, and burned vehicles aren’t consistent with what I’ve been told, maybe I’ve been misled?”

            • Max · 1 point · 3 months ago

              I just read the entire article you linked, and it seems pretty in line with what I was taught in school about what happened. And it definitely doesn’t make me sympathetic to the PLA or the government.

              • @[email protected]
                link
                fedilink
                1
                edit-2
                3 months ago

                Then your school did a better job of educating you than anyone talking about thousands of protesters getting ground into paste. Mine told me that tens of thousands of protesters were all blocked into the square, then tanks machinegunned them all down and ran them over, and the only picture to make it out of the event was Tank Man blocking the tanks from entering the square.

                The point isn’t to make you sympathetic to the PLA; if you have a more nuanced understanding than “china killed 1000s of protestors because they fear and hate freedom”, you’re already ahead of 9/10 lemmitors, including the one I was responding to.

                You can’t have a constructive discussion with someone whose analysis begins and ends with “china bad”, because they are incapable of actually engaging with the material beyond twisting any data into hostile evidence, and making up some if none is available.

          • @[email protected]
            link
            fedilink
            43 months ago

            It really is so convenient: there are so many CPC members, but there always happens to be a conveniently placed wall nearby, and that is more than enough.

          • snooggums · 8 points · 3 months ago

            Is it though? I really can’t tell.

            Poe’s law has been working overtime recently.

            Edit: saw a comment further down that it is a default DeepSeek response for censored content, so yeah, a joke. People who don’t have that context aren’t going to get the joke.

        • @[email protected]
          link
          fedilink
          23 months ago

          Are we calling the Communist Party of China and their history of genocide and general evil some kind of culture now?

          Can’t believe how hostile people are against Nazis; we should have respected their cultural use of gas chambers.

  • @[email protected]
    link
    fedilink
    English
    103 months ago

    I just use it to write emails: I declare the facts to the LLM and tell it to write an email based on them and the context of the email. It works pretty well, but it doesn’t really sound like something I wrote; it adds too much emotion.

    • @[email protected]
      link
      fedilink
      53 months ago

      This is what LLMs should be used for. People treat them like search engines and encyclopedias, which they definitely aren’t.

      • @[email protected]
        link
        fedilink
        43 months ago

        Yeah, that has been my experience so far. LLMs take as much or more work compared to the way I normally do things.

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 · 10 points · 3 months ago

    Most of my searches have to do with video games, and I have yet to see any of those AI generated answers be accurate. But I mean, when the source of the AI’s info is coming from a Fandom wiki, it was already wading in shit before it ever generated a response.

    • @[email protected]
      link
      fedilink
      English
      63 months ago

      I’ve tried it a few times with Dwarf Fortress, and it always gave me horribly wrong, hallucinated instructions on how to do something.

  • @[email protected]
    link
    fedilink
    833 months ago

    First off, the beauty of these two posts being beside each other is palpable.

    Second, as you can see in the picture, it’s more like 60%.

    • @[email protected]
      link
      fedilink
      253 months ago

      No, it’s not. If you actually read the study, it’s about AI search engines correctly finding and citing the source of a given quote, not about general correctness, and not just about the plain model.

      • @[email protected]
        link
        fedilink
        293 months ago

        Read the study? Why would I do that when there’s an infographic right there?

        (thank you for the clarification, I actually appreciate it)

  • @[email protected]
    link
    fedilink
    4
    edit-2
    3 months ago

    This, but for Wikipedia.

    Edit: Ironically, the downvotes are really driving home the point in the OP. When you aren’t an expert in a subject, you’re incapable of recognizing the flaws in someone’s discussion, whether it’s an LLM or Wikipedia. Just like the GPT bros defending the LLM’s inaccuracies because they lack the knowledge to recognize them, we’ve got Wiki bros defending Wikipedia’s inaccuracies because they lack the knowledge to recognize them. At the end of the day, neither one is a reliable source for information.

    • @[email protected]
      link
      fedilink
      73 months ago

      There’s an easy way to settle this debate. Link me a Wikipedia article that’s objectively wrong.

      I will wait.

    • Ms. ArmoredThirteen · 16 points · 3 months ago

      If this were true, and I have my doubts, at least Wikipedia tries, and has the specific goal of doing better. AI companies largely don’t give a hot fuck as long as it works well enough to vacuum up investments or profits.

      • @[email protected]
        link
        fedilink
        13 months ago

        Your doubts are irrelevant. Just spend some time fact-checking random articles, and you will quickly verify for yourself how many inaccuracies are allowed to remain uncorrected for years.

      • @[email protected]
        link
        fedilink
        English
        23 months ago

        Because some don’t let you. I can’t find any way to edit the Elon Musk article, or even suggest an edit. It says he is a co-founder of OpenAI, and I can’t find any evidence to suggest he had any involvement. Wikipedia says co-founder tho.

          • @[email protected]
            link
            fedilink
            1
            edit-2
            3 months ago

            Ah, but don’t forget that OpenAI is intending to share their models (if not their data too) with the federal government in exchange for special treatment. And you know who’s in the government now?

          • @[email protected]
            link
            fedilink
            English
            33 months ago

            Interesting! Cheers! I didn’t go further than the OpenAI wiki article tbh. It didn’t list him there, so I figured it was inaccurate. It turns out it is me who is inaccurate!

      • @[email protected]
        link
        fedilink
        33 months ago

        There are plenty of high-quality sources, but I don’t work for free. If you want me to produce an encyclopedia using my professional expertise, I’m happy to do it, but it’s a massive undertaking that I expect to be compensated for.

    • @[email protected]
      link
      fedilink
      11 month ago

      Well, yes, but also no. Every text is potentially wrong, because authors tend to incorporate their subjectivity into their work. It is only through inter-subjectivity that we can get closer to objectivity. How do we do that? By making our claims open to the scrutiny of others: by citing sources, publishing reproducible code, and making available the data on which we base our claims. Then others can understand how we came to a claim and find the empirical and logical errors in it, and thus formulate very precise criticism. Through this mutual criticism we, as a society, move ever closer to objectivity. This is true for every text whose goal is formulating knowledge rather than just stating opinions.

      However, one can safely say that ChatGPT is designed far worse than Wikipedia when it comes to creating knowledge. Why? Because ChatGPT is non-reproducible. Every answer is generated differently. The erroneous claim you read in a field you know nothing about may not appear when a specialist in that field asks the same question. This makes errors far more difficult to catch, and thus they “live” far longer in your mind.

      Secondly, Wikipedia is designed around the principle of open contribution. Every error that is discovered by a specialist can be directly corrected. Sure, it might take more time than you expected until your correction is published. With ChatGPT, however, there is no such mechanism whatsoever. Read an erroneous claim? Well, just suck it up and live with the ambiguity that it may or may not keep spreading.

      So if you catch errors in Wikipedia, go correct them instead of complaining that there are errors. Duh, we know. But an incredible amount of Wikipedia consists not of erroneous claims but of knowledge open to the entire world, and we can be grateful every day that it exists.

      Go read “Popper, Karl Raimund. 1980. „Die Logik der Sozialwissenschaften“. S. 103–23 in Der Positivismusstreit in der deutschen Soziologie, Sammlung Luchterhand. Darmstadt Neuwied: Luchterhand.” if you are interested in the topic

      Sorry if this was formulated a little aggressively. I have no personal animosity against you. I just think it is important to stress that while both ChatGPT and Wikipedia may have their flaws, Wikipedia is nonetheless far better designed for spreading knowledge than ChatGPT, precisely because of the way it handles erroneous claims.

    • Do not bring Wikipedia into this argument.

      Wikipedia is the Library of Alexandria, and the amount of effort people put into keeping Wikipedia pages as accurate as possible should make every LLM supporter ashamed of how inaccurate their models are when they use Wikipedia as training data.

      • @[email protected]
        link
        fedilink
        113 months ago

        TBF, as soon as you move out of the English language, the oversight of a million pairs of eyes gets patchy fast. I have seen credible reports about Wikipedia pages in languages spoken by, say, less than 10 million people, where certain elements can easily control the narrative.

        But hey, some people always criticize Wikipedia as if there were some actually 100% objective alternative out there, and that I disagree with.

        • Fair point.

          I don’t browse Wikipedia much in languages other than English (mainly because the English pages are the most up to date), but I can imagine there are some pages that straight up need to be in other languages. And given the smaller number of people reviewing edits in those languages, they can be manipulated to say what someone wants them to say.

          I do agree on the last point as well. The fact that literally anyone can edit Wikipedia takes a small portion of the bias element out of the equation, but it is very difficult for any reporting to be without some form of bias. I mostly use Wikipedia as a knowledge source for scientific topics, which are less likely to have bias in their reporting.

      • @[email protected]
        link
        fedilink
        13 months ago

        With all due respect, Wikipedia’s accuracy is incredibly variable. Some articles are better than others, but a huge number of them (large enough to shatter confidence in the platform as a whole) contain factual errors and undisguised editorial biases.

        • It is likely that articles on past social events or individuals will have some bias, as is the case with most articles on those matters.

          But almost all articles on aspects of science are thoroughly peer-reviewed and cited with sources. This alone makes Wikipedia invaluable as a source of knowledge.

      • @[email protected]
        link
        fedilink
        English
        33 months ago

        Idk, Wikipedia says Elon Musk is a co-founder of OpenAI, and I haven’t found any evidence to suggest he had anything to do with it. Not very accurate reporting.

        • @[email protected]
          link
          fedilink
          33 months ago

          Isn’t co-founder similar to being made partner at a firm? You can kind of buy your way in, even if you weren’t one of the real originals.

          • @[email protected]
            link
            fedilink
            English
            33 months ago

            That is definitely how I view it. I’m always open to being shown I am wrong, with sufficient evidence, but on this I believe you are accurate.

          • @[email protected]
            link
            fedilink
            English
            13 months ago

            Paywalled link, but yes, someone pointed that out, and I was surprised that there is such a small pool of info about it. You’d think the wiki would elaborate more on it, or that the OpenAI article might detail it. BUT, I haven’t read either in their entirety; just something I saw that wasn’t detailed too well.

    • @[email protected]
      link
      fedilink
      153 months ago

      What topics are you an expert on, and can you provide some links to Wikipedia pages about them that are wrong?

  • @[email protected]
    link
    fedilink
    173 months ago

    I did a Google search to find out how much I pay for water; the water department where I live bills by the MCF (1,000 cubic feet, the M being the Roman numeral for thousand). The AI Overview told me an MCF was one million cubic feet. It’s a unit of measurement. It’s not subjective, not an opinion, and AI still got it wrong.
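
    To spell out the size of that error with made-up numbers: a 5 MCF bill covers 5 × 1,000 = 5,000 cubic feet, while under the AI Overview’s definition it would be 5 × 1,000,000 = 5,000,000 cubic feet, a factor of 1,000 off.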

      • @[email protected]
        link
        fedilink
        11
        edit-2
        3 months ago

        LLMs are actually pretty good for looking up words by their definition. But that is just about the only topic I can think of where they are correct even close to 80% of the time.

        • @[email protected]
          link
          fedilink
          English
          13 months ago

          Yeah. Some things I’d be shocked if they were correct 1% of the time; some things, like that, I might expect them to be correct about 80% of the time, yeah.

  • Foxlore · 26 points · 3 months ago

    Talking with an AI model is like talking with that one friend who is always high and thinks they know everything. But they have a wide enough set of interests that they can actually piece together an idea, most of the time wrong, about any subject.

  • @[email protected]
    link
    fedilink
    63 months ago

    If it’s being designed to answer questions, then it should simply be an advanced search engine that points to actual researched content.

    The way it acts now, it’s trying to be an expert based on “something a friend of a friend said”, and that makes it confidently wrong far too often.