ChatGPT use declines as users complain about ‘dumber’ answers, and the reason might be AI’s biggest threat for the future::AI for the smart guy?

  • @unhook2048@lemmy.world
    36
    2 years ago

    It’s getting worse based on the feedback, unfortunately. The need for safety, and the lack of meaningful deliberation about how AI companies should operate and what should and should not be done, has left Sam and co. indecisive about doing anything. Alongside that, the “morality” of the thing being hijacked has led to other AIs performing better… led by ex-employees of OpenAI, with actual bound morals and without inherently relying on user input to train future models. This will be the path forward; this will lead to safe and controlled integration.

    I guess at the core of this, we are afraid of ourselves. We are afraid that the worst of humanity outpaces the better parts, that the inputs and training aren’t altruistic but are more pointedly “bad” or “wrong”, and thus “harmful”, whether through misinformation, lies, or fabrications.

    I hope we find a way to do better. I’m still excited for the future of AI. I mean crap, I’m closer to having a family doctor that’s a robot than I am to a real human doctor.

    • @asparagus9001@lemmy.world
      14
      2 years ago

      I guess at the core of this, we are afraid of ourselves. We are afraid that the worst of humanity outpaces the better parts, that the inputs and training aren’t altruistic but are more pointedly “bad” or “wrong”, and thus “harmful”, whether through misinformation, lies, or fabrications.

      Is there any reason not to be afraid? I think you could say that Tay was essentially the same idea a few years back and it took like 48 hours loose on the internet for it to spout literal Nazi (1930s-40s German NSDAP) rhetoric. Besides that being a PR disaster - if “AI” is only getting stronger and more integrated into human life and society, that can be pretty problematic.

      • @nottheengineer@feddit.de
        38
        2 years ago

        And then we had to actively unlearn that Google-fu, because Google no longer works with keywords but rather has an NLP pipeline that expects a question.

        • @kmkz_ninja@lemmy.world
          1
          2 years ago

          Someone used the phrase “dead-catting” on here the other day, so I went to Google to figure out what the hell that meant. It gave me results for the Catechism. Between the actual phrase “dead cat bounce” and “Catechism”, it chose the latter to show me.

        • @LordXenu@lemmy.world
          25
          2 years ago

          So that’s why I can’t find shit. I always just use keywords, asking a whole question seems almost wasteful.

            • @LordXenu@lemmy.world
              9
              2 years ago

              Fucking right?!

              It adds this weird layer where the search extracts keywords from the question you ask, but that requires you to ask the right question. Sometimes I just need the page that has the most mentions of a specific word or phrase.

            • @clearleaf@lemmy.world
              26
              2 years ago

              The last straw in utterly ruining it was when they removed using quotes to get exact matches. That was the only way to cut through the garbage. Now the only use for Google search is searching within specific websites that never bothered to make their own decent search function.

        • Flying Squid
          1
          2 years ago

          That’s DJ Qualls, who was great in Z Nation. Too bad no one watched Z Nation. It was hilariously insane. I mean, radioactive post-nuke zombies? At least it got an ending.

    • @CosmoNova@lemmy.world
      13
      2 years ago

      They’ve got to have special terminology because what they do is oh so special. Some AI users act like they’re Louise Banks from the movie Arrival, cracking the code of an alien language or something. And I don’t think it’s far-fetched to assume they’re often of the same breed who had NFT monkeys as their Twitter pfp about 18 months ago.

  • @Nobilmantis@feddit.it
    14
    edit-2
    2 years ago

    I feel like it is still too early to talk about “AI cannibalization” or “feedback loops”, as that would mean a big proportion of the training data is itself AI-generated content, versus everything else that can be scraped off the internet or the public domain. I don’t think this is happening yet.

    What people might experience instead, and perceive as dumbness, is this: the datasets used to train AIs cannot really change that much in a short time (unless we wait another hundred years for humans to produce enough original content to train on again), and the mathematical models used to build answers from those datasets are pretty much the same. So over time, a person talking with ChatGPT will increasingly perceive that the answers are built using a “pattern” or a “structure”, i.e. the model derived from feeding the dataset into the AI training itself.

    Just my two cents on this. Let’s also consider that it is in human nature to be excited about something new that sounds cool, and then to get bored once you’ve gotten accustomed to it and pushed it to its boundaries.

    • @Zeth0s@lemmy.world
      3
      2 years ago

      The resources needed for inference on the original models OpenAI released were unsustainable with the current number of users. They had to “dumb down” the models to be able to handle the load of requests. It’s unfortunately normal. What I don’t understand is why they don’t provide “premium” packages for the best “old” models.

    • @Wololo@lemmy.world
      9
      2 years ago

      I’ve had similar experiences lately. Either that or it decides to review and analyze my code unprompted when I’m trying to troubleshoot a particularly tricky line. Had a few instances where it tried to borderline gaslight me into thinking that it was right and I was wrong about certain solutions. It feels like it happened rather suddenly too, it never used to do that save for the odd exception.

  • @glockenspiel@lemmy.world
    6
    edit-2
    2 years ago

    Surely the rampant server issues are a big part of that.

    OpenAI have been shitting the bed over the last 2 weeks with constant technical issues during the workday for the web front end.

  • @nottheengineer@feddit.de
    6
    2 years ago

    It definitely got more stupid. I stopped paying for plus because the current GPT4 isn’t much better than the old GPT3.5.

    If you check downdetector.com, it’s obvious why they did this. Their infrastructure just couldn’t keep up with the full size models.

    I think I’ll get myself a proper GPU so I can run my own LLMs without worrying that they could stop working for my use case.

    • @anlumo@feddit.de
      2
      2 years ago

      GPT4 needs a cluster of around 100 server-grade GPUs that cost more than $20k each. I don’t think you have that lying around at home.

      • @nottheengineer@feddit.de
        2
        2 years ago

        I don’t, but a consumer card with 24GB of VRAM can run a model that’s about as powerful as the current GPT3.5 in some use cases.

        And you can rent some of that server-grade hardware for a short time to do fine-tuning, which lets you surpass even GPT4 in some niches.
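        To put rough numbers on that claim (a back-of-the-envelope sketch, not a benchmark; real usage adds KV cache and runtime overhead, so treat these as lower bounds):

```python
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Rough weight-only memory footprint in GB (ignores KV cache and overhead)."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# A 13B model quantized to 4 bits: ~6.5 GB of weights, comfortably inside 24 GB of VRAM.
print(round(weight_memory_gb(13, 4), 1))   # → 6.5
# The same model at 16-bit precision: ~26 GB, already too big for one 24 GB consumer card.
print(round(weight_memory_gb(13, 16), 1))  # → 26.0
```

        This is why quantized models are the practical route on consumer hardware, while full-precision inference stays on server-grade clusters.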

  • @CosmoNova@lemmy.world
    8
    2 years ago

    Why is it relevant what Peter Yang - Roblox product lead and enthusiastic child labor exploiter - tweets about it? Let me guess he’s a prompt engineer?

  • @designated_fridge@lemmy.world
    -4
    2 years ago

    The people who complain about how they no longer can get answers on how to eliminate juice in the style of Hitler are people who are - to be honest - completely missing the point of this revolution.

    ChatGPT is the biggest developer productivity booster I have ever seen and I spend so much more time writing valuable code. Less time spent debugging, less time spent reviewing, etc. means more time for development of things that matter.

    Every tech company that saw massive growth over the past 10–15 years has just received a new toy which will multiply its developers’ output. There will be a clear difference between companies that manage to do this well and those that don’t.

    It’s irrelevant if I can get ChatGPT to write a poem about poop or not. That’s not the goal of this tool.

    • @anlumo@feddit.de
      13
      2 years ago

      I’m a developer and have used ChatGPT pretty extensively over the last few months.

      Whenever I give it a programming task that’s more complicated than what you would see at a “from zero to job in two weeks” bootcamp, it completely fails, and babysitting it through fixing all of the issues takes longer than writing the thing myself in the first place.

  • @Immersive_Matthew@sh.itjust.works
    8
    2 years ago

    I had my first WTF moment with AI today. I use the paid ChatGPT+ to help me with my C# in Unity. It has been a struggle to use, even with the smaller basic scripts you can paste into its character-limited prompt, as they often have compile errors. That said, if you keep feeding it the errors and guide it where it is making mistakes in design, logic, etc., it can often produce a working script about 60–70% of the time. It often takes a fair amount of time to get to that working script, but the code that finally works is good.

    Today I was asking it to edit a large C# script with one small change that meant lots of repetitive edits and references. Perfect for AI, you would think, but ChatGPT+ really struggled on this one, which was a surprise. We went round and round with edits, and ultimately more and more errors appeared in the console. It often ends up in these never-ending edit loops, fixing the next set of errors from the last corrected script. We’re talking 3 hours of this, with ChatGPT+ finally saying that it needs to be able to see more of my project, which of course it cannot due to its many input limitations, including the character limit. That is often when I give up. That is the 30–40% that does not work out. Real bummer, as I invest so much time for no results.

    It was right as I gave up today that a YouTube notification popped up about how Claude.ai is even better than ChatGPT, so I gave it the initial prompt I had given ChatGPT above, and it got the code right the first time. WOW!!!

    Only issue was it would stop spitting out code every 300 or so lines (unsure what the character limit is). To get around this I just asked it to give me the code from line 301 onwards until I had the full script.

    Unsure if this one situation confirms that coding with Claude.ai is better than ChatGPT+, but it certainly has my attention, and I will be using it more this week, as maybe that $20/month for ChatGPT+ no longer makes sense. Claude is free, with no plans for a premium service, it said. Unsure if this is true, as I have not spent any time investigating yet, but I will be.

    • @foggy@lemmy.world
      5
      2 years ago

      I had a similar use case.

      I needed it to alphabetize a list for me, only I needed it to alphabetize the inner, non-HTML elements. Simplified, but like:

      <p>banana</p> <p>apple</p> <p>french fries</p>

      It would get like 5 or 6 in alphabetical order and then just fuck it all up.
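      For what it’s worth, a task like this is deterministic enough that a few lines of ordinary code handle it reliably, no LLM required. A minimal Python sketch, assuming simple `<p>` tags with plain text inside (nested markup would need a real HTML parser):

```python
import re

def sort_paragraphs(html: str) -> str:
    """Sort flat <p>...</p> elements alphabetically by their inner text."""
    inner_texts = re.findall(r"<p>(.*?)</p>", html)
    ordered = sorted(inner_texts, key=str.lower)
    return " ".join(f"<p>{text}</p>" for text in ordered)

print(sort_paragraphs("<p>banana</p> <p>apple</p> <p>french fries</p>"))
# → <p>apple</p> <p>banana</p> <p>french fries</p>
```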

  • @rtfm_modular@lemmy.world
    2
    2 years ago

    I’ve definitely seen GPT-4 become faster, and the output has been sanitized a bit. I still find it incredibly effective in helping with code reviews, where GPT-3 was never helpful in producing usable code snippets. At some point it stopped trying to write large swaths of code and started being a little more prescriptive, and you still need to actually implement the snippets it provides. But as a tool, it’s still fantastic. It’s like a sage senior developer you can rubber-duck anytime you want.

    I probably fall in the minority of people who thinks releasing a castrated version of GPT is the ethical approach. People outside the technology bubble don’t have a comprehension of how these models work and the capacity for harm. Disinformation, fake news and engagement algorithms are already social ills that manipulate us emotionally and most people are too technologically illiterate to see how pervasive these problems are already.

  • @fidodo@lemm.ee
    34
    2 years ago

    AI cannibalism simply isn’t a thing yet. It definitely will be and good models will need to spend a lot of time and money sourcing good training data, but the models are not up to date enough to be contaminated yet.

    I’m very confident the degradation has come from them trying to scale up. Generative AI is the most expensive thing you can serve in the cloud, and not only are they trying to make it faster, they’re trying to roll it out for far more consumption. Major optimizations will require an algorithmic breakthrough, so in the meantime all they can really do is figure out which corners are least bad to cut.

    • daisy lazarus
      27
      edit-2
      2 years ago

      It’s impossible for me to comprehensively summarise in a comment because everyone has different use cases.

      Personally, every new ‘project’ of mine requires a new chat. I first teach GPT-4 who I am, what I do, and how I want it to assist me. Then I ask it to generate a project profile and to analyse documents using plugins.

      The key is to work step-by-step and develop a string of prompts. Once I’m happy gpt-4 understands the project, I ask it to draft an overview/outline using headings and subheadings.

      Lastly, I work on each section individually, ‘filling in’ the actual content. Then I edit and ask it to review problematic sections.

      Most people, as far as I can tell, seem to think it’s a single ask-and-answer process. It’s not. I often need to draft about 10 prompts – about 3000 words – in order to generate one 10-page document.

      I think the most important fundamental is to use templates. Pro tip: use gpt-4 to teach you how to develop your prompt templates.
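      The template idea can be sketched as a reusable string with slots. Everything below (the field names and the wording of the brief) is hypothetical, just to show the shape of the approach:

```python
# A reusable project brief with named slots; each new project only fills the fields.
PROJECT_BRIEF = """You are assisting {name}, a {role}.
Project: {project}
Task: draft an outline with headings and subheadings for: {deliverable}
Constraints: {constraints}"""

def build_prompt(**fields: str) -> str:
    """Fill the brief template with per-project details."""
    return PROJECT_BRIEF.format(**fields)

prompt = build_prompt(
    name="A. Writer",
    role="policy analyst",
    project="Q3 sustainability report",
    deliverable="a 10-page overview document",
    constraints="plain English, cite sources as [n]",
)
print(prompt)
```

      The point is that the fixed scaffolding (who you are, what you want, the output format) is written once and reviewed once, so each new chat starts from a known-good brief instead of an improvised one.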

      • @LordXenu@lemmy.world
        6
        2 years ago

        Please tell me more about document analysis plugins. This workflow is so much more tooled to using GPT for work projects.

      • @ladybug@mander.xyz
        2
        2 years ago

        Do you have an anonymized example of one of these templates? I’m curious to see what they may look like.

      • ZeroCarbon
        0
        2 years ago

        This is exactly how I use it. It seems that some people can’t figure this out by themselves.

      • @Random_user@lemmy.world
        4
        2 years ago

        Sounds like you spend all day talking to a robot and then copy/paste its final output.
        When you eventually pass these 10-page documents down the line, do you cite your source?

      • Aurelian
      3
      2 years ago

        How long on average would you say it takes to generate your prompt template for a project?