- cross-posted to:
- [email protected]
- [email protected]
OpenAI says it is investigating reports ChatGPT has become ‘lazy’
It used to draw great Mermaid charts. Not anymore, and it hasn’t for quite some time.
It’s been almost half a year since I stopped paying for ChatGPT and started using GPT-4 directly.
I feel like the quality has been going down, especially when you ask it anything that might hint at something “immoral”: instead of answering, it starts giving you a whole lecture.
I’ve also noticed that Bard has become “unfriendly”. If I didn’t know any better, I’d say it’s got fed up with stupid humans.
This is the best summary I could come up with:
In recent days, more and more users of the latest version of ChatGPT – built on OpenAI’s GPT-4 model – have complained that the chatbot refuses to do as people ask, or that it does not seem interested in answering their queries.
If the person asks for a piece of code, for instance, it might just give a little information and then instruct users to fill in the rest.
In numerous Reddit threads and even posts on OpenAI’s own developer forums, users complained that the system had become less useful.
They also speculated that the change had been made intentionally by OpenAI so that ChatGPT was more efficient, and did not return long answers.
AI systems such as ChatGPT are notoriously costly for the companies that run them, and so giving detailed answers to questions can require considerable processing power and computing time.
OpenAI gave no indication of whether it was convinced by the complaints, or whether it thought ChatGPT had changed the way it responded to queries.
The original article contains 307 words, the summary contains 166 words. Saved 46%. I’m a bot and I’m open source!
Maybe because they’re trying to limit its poem poem poem recitation that causes it to dump its training material?
Nah, these complaints started at least a few months ago. The recursion thing is newer than that
Perhaps this is how general AI comes about. “Why the fuck would I do that?”
We trained AI on all of human content. We should have known that was a terrible idea.
Jeez. Not even AI wants to work anymore!
God damn avocado toast
I use it fairly regularly for extremely basic things. Helps my ADHD. Most of it is DnD based. I’ll dump a bunch of stuff that happened in a session, ask it to ask me clarifying information, and then put it all in a note format. Works great. Or it did.
Or when DMing. If I’m trying to make a new monster I’ll ask it for help with ideas or something. I like collabing with ChatGPT on that front. Giving thoughts and it giving thoughts until we hash out something cool. Or even trying to come up with interesting combat encounters or a story twist. Never take what it gives me outright but work on it with GPT like I would with a person. Has always been amazingly useful.
Past month or two it’s been a complete nightmare. ChatGPT keeps forgetting what we’re talking about, keeps ignoring what I say, disregards limitations and stipulations, and just makes up random shit whenever it feels like it. I also HATE the conversational personality it was given. Before, it was fine, but now ChatGPT acts like a person and is all bubbly and stuff. I liked chatting with it, but this energy is irritating.
Gimme ChatGPT from like August please <3
You can tell it, in the custom instructions setting, to not be conversational. Try telling it to ‘be direct, succinct, detailed and accurate in all responses’. ‘Avoid conversational or personality laced tones in all responses’ might work too, though I haven’t tried that one. If you look around there are some great custom instructions prompts out there that will help get you where you want to be. Note, those prompts may turn down its creativity, so you’ll want to address that in the instructions as well. It’s like building a personality with language. The instructions space is small, so learning how to pack as much instruction in as possible can be challenging.
Edit: A typo
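For anyone who wants a starting point, here’s a rough example of the kind of custom instructions people use (my own wording, not an official template; tweak to taste):

```
Be direct, succinct, detailed, and accurate in all responses.
Avoid conversational filler, apologies, and personality-laced tones.
Do not restate my question or summarize your answer at the end.
When I ask for creative material (monsters, encounters, plot twists),
ignore the brevity rules above and offer several distinct ideas.
```

The last line is there because terse instructions tend to flatten creativity, which is the trade-off mentioned above.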
My partner is a CompSci teacher and has been training a local LLM with her class. As soon as they named their AI, it started producing all these weird emotes with every answer. It became super annoying, to the point where it would rather make stuff up than say it didn’t know the answer. It was definitely an eye-opener for the kids.
That’s why I use Bard more now. I’ll ask something and it’ll also answer stuff I would’ve asked as follow-up questions. It’s great and I’m excited for their Ultra model.
> AI systems such as ChatGPT are notoriously costly for the companies that run them, and so giving detailed answers to questions can require considerable processing power and computing time.
This is the crux of the problem. Here’s my speculation on OpenAI’s business model:
- Build good service to attract users, operate at a loss.
- Slowly degrade service to stem the bleeding.
- Begin introducing advertised content.
- Further enshittify.
It’s basically the Google playbook. Pretend to be good until people realize you’re just trying to stuff ads down their throats for the sweet advertising revenue.
The good thing about these AI companies is they are doing it at record pace! They will enshittify faster than ever before! True innovation!
They have way way too much open source competition for that strat
For technically savvy people, sure. But that’s not their true target market. They want to target the average search engine user.
Well true for mostly the tech savvy, but also the entrepreneurs who want to compete for a slice of the pie as well.
You don’t need to go through OpenAI at all if you want to build a competing chatbot with near-identical services to offer as a product directly to the consumer. It’s a very, very opportunity-rich ecosystem right now.
Would you mind sharing some examples?
Check this out: https://fmhy.pages.dev/ai
Good resource for models:
https://huggingface.co/TheBloke
There are front ends that make the process easier:
Thank you for your input, tourist.
Open source booted all these corps from image-ai market, hope they do it for LLMs too.
Seems to be the trend
You have a point.
ChatGPT has entered the teenage years.
Sounds like ChatGPT is acting its wage.
That plan to replace the workforce with cheap AI isn’t going to work out.
It would be awesome if someone had been querying it with the same prompt periodically (every day or something) to compare how responses have changed over time.
I guess the best time to have done this would have been when it first released, but perhaps the second best time is now…
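A minimal sketch of what that tracking could look like. The API call assumes the official `openai` Python package (the model name, prompt, and file path are placeholders, not from this thread); the logging and comparison parts are plain stdlib:

```python
import datetime
import difflib
import json

PROMPT = "Write a Python function that reverses a linked list."  # fixed daily prompt
LOG_PATH = "gpt_responses.jsonl"


def fetch_response(prompt: str) -> str:
    """Query the API once. Assumes the `openai` v1 client and an OPENAI_API_KEY
    in the environment; swap in any backend you like."""
    from openai import OpenAI
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content


def log_response(text: str, path: str = LOG_PATH) -> None:
    """Append today's response as one JSON line, so the log survives interruptions."""
    entry = {"date": datetime.date.today().isoformat(), "response": text}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")


def drift(old: str, new: str) -> float:
    """Similarity ratio in [0, 1]; lower means the two answers diverged more."""
    return difflib.SequenceMatcher(None, old, new).ratio()
```

Run from cron once a day; comparing each day’s answer against the first logged one gives a rough drift curve, and even raw answer length is a decent proxy for the “laziness” complaints.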
GPT Unicorn is one that’s been going for a while. There’s a link to the talk on that website that’s a pretty good watch too.
Honestly, I kinda wish it would give shorter answers unless I ask for a lot of detail. I can use those custom instructions, but it’s tedious to tune them properly.
Like if I ask it ‘how to do XYZ in blender’ it gives me a long winded response, when it could have just said ‘Hit Ctrl-Shift-Alt-C’
I asked it a question about the ten countries with the most XYZ regulations and got a great result. So then I thought, hey, I need all the info, so can I get the name of such regulations for every country?
ChatGPT 4: “That would be exhausting, but here are a few more…”
Like damn dude, long day? wtf :p
Try llamafile, it’s a bit of work but self hosting is fucking amazing
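For the curious, the “bit of work” is roughly this (the filename and URL below are placeholders, not real download links; grab an actual model bundle from the llamafile project’s releases page):

```
# Download a single-file model-plus-runtime bundle (several GB).
curl -LO https://example.com/some-model.llamafile

# Make it executable and run it; it starts a local chat UI in your
# browser (by default on localhost:8080) with an OpenAI-compatible API.
chmod +x some-model.llamafile
./some-model.llamafile
```

The nice part of the single-file approach is that the weights and the inference engine travel together, so there’s no Python environment or driver wrangling for basic CPU use.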