Just out of curiosity. I have no moral stance on it, if a tool works for you I’m definitely not judging anyone for using it. Do whatever you can to get your work done!
I find it helpful to translate medical abbreviations to English. Our doctors tend to go overboard with abbreviations, there are lots I know but there are always a few that leave me scratching my head. ChatGPT seems really good at guessing what they mean! There are other tools I can use, but ChatGPT is faster and more convenient - I can give it context and that makes it more accurate.
I know many people slightly younger than me are using ChatGPT to breeze through university assignments. Apparently there’s one website that uses GPT that even draws diagrams for you, so you don’t have to make 500 UML and class diagrams that take forever to create.
If only they would also understand what they’re delivering.
I don’t have any bosses, but as a consultant, I use it a lot. Still gotta charge for the years of experience it takes to understand the output and tweak things, not the hours it takes to do the work.
Basically this. Knowing the right questions and context to get an output and then translating that into actionable code in a production environment is what I’m being paid to do. Whether copilot or GPT helps reach a conclusion or not doesn’t matter. I’m paid for results.
A lot of people are going to get fucked if they are…
It’s using the “startup method” where they gave away a good service for free, but they already cut back on resources when it got popular. So what you read about it being able to do six months ago, it can’t do today.
Eventually they’ll introduce a paid version that might be able to do what the free one did.
But if you’re just blindly trusting it, you might have months of low quality work and haven’t noticed.
Like the lawyers recently finding out it would just make up caselaw and reference cases. We’re going to see that happen more and more as resources are cut back.
It’s been notorious for doing that from the very beginning, though.
That may have been their plan, but Meta fucked them from behind and released LLaMA, which now runs on local machines at up to 30B parameters and by the end of the year will run at better-than-GPT-3.5 ability on an iPhone.
Local LLMs, like Airoboros, WizardLM, StableVicuna, or Stable Coder, are real alternatives in many domains.
Uh, Llama, at least in the versions I can run (up to 65B on CPU, if I’m willing to wait an hour for a reply), is far behind GPT-3.5, and that’s without even considering GPT-4. Even GPT-3.5 is a toy compared to 4.
Llama 2 is supposedly better, but still not quite at 3.5 levels. Of course, that’s amazing considering the resource difference, but if all you care about is the end result, then you still have to wait for some advancements.
Huh? They already introduced the paid version half a year ago, and that’s the one that was responsible for the buzz all along. The free version was mediocre to begin with and hasn’t gotten better.
When people complain that ChatGPT doesn’t live up to their expectations, it’s usually a confusion between these two.
Anyone blindly trusting it is a grade A moron, and would’ve just found another way to fuck up whatever they were working on if ChatGPT didn’t exist.
ChatGPT is a tool, if someone doesn’t know what they’re doing with it then they are gonna break stuff, not ChatGPT.
This is exactly like people who defend Tesla by saying it’s your fault if you believed their claims about what a Tesla can do…
Which isn’t a surprise: there’s a huge overlap between being gullible enough to believe either company’s claims, and some people will bend over backwards to defend those companies because of the sunk cost fallacy.
I don’t know what OpenAI even claims that ChatGPT can do, but if you trust marketing from any company then you’re gonna get burnt.
I’m not defending the company in any way, more just defending that in general LLMs can be useful tools, but people need to make educated decisions and take a bit of responsibility.
My boss pays for it! I don’t use it that much, but it’s pretty useful from time to time instead of going through a bunch of unrelated Google results.
Personally I prefer quality over quantity so I don’t use it
I use it as a search engine for the LLVM docs.
Works so much better than doxygen.
But it’s no secret.
Yesterday I was working on a training PowerPoint and it occurred to me that I should probably simplify the language. Had GPT convert it to 3rd-grade language, and it worked pretty well. Not perfect, but it helped.
I’m also writing an app as a hobby and, although GPT goes batshit crazy from time to time, overall it has done most of the coding grunt-work pretty well.
At my last job, I ran simple tech support instructions through a reading analyser and also had instructions written for children. I even had screenshots with offensively large red arrows pointing at every step. I literally wrote those instructions so you don’t have to read.
It still was too complex for some idiots.
I’ve made PDF guides, videos they can reference, and even had personal training, and it baffles me how some people just can’t pay attention. If it was something complicated, i would understand, but it is just simple stuff like resetting your password, opening apps, finding documents, etc.
It’s always the older people too, for some reason. I know this is supposed to be a Gen Z/Alpha thing, but these older people have no attention span and the memory of a goldfish. I know they didn’t grow up with this stuff, but c’mon, it’s literally a step-by-step guide.
I use it and encourage my staff and other departments to use it.
I feel like we’re at a horse-vs-tractor or human-computer-vs-digital-computer moment. In the next 10+ years, those who are AI-ignorant will be underemployed or unemployed. Get on it now and learn to use it as a force multiplier, just like tractors and digital computers were.
The arguments against AI eerily mirror the arguments against tractors and digital computers.
Only used it a couple of times for work when researching some broad topics like data governance concepts.
It’s a good tool for learning because you can ask it about a subject and then ask it to explain the subject “as a metaphor to improve comprehension,” and it does a pretty good job. Just make sure you use some outside resources to ensure you’re not being hallucinated all over.
My bosses use it to write their emails (ESL).
yeah, my biggest use case is quick summaries of things. it’s great getting a few bullet points, and i miss details a lot less.
ESL is actually a great use, although there’s a risk someone might not catch a hallucination or weird tone issue. Still, it would be really helpful there.
Best used in tandem with something like languagetool.org for the final revision.
I’ve run emails through it to check tone since I’m hilariously bad at reading tone through text, but I’m pretty limited in how I can make use of that. There’s info I deal with that is sensitive and/or proprietary that I can’t paste into just any text box without potential severe repercussions.
I tried it once or twice and it worked well. It’s too stupid now to be worth the attempt. The amount of time spent fixing its mistakes has resulted in net zero time savings.
Aside from asking it coding questions (which generally give a helpful pointer in the right direction), I also ask it a lot of questions like “Turn these values into an array” or something similar when I have to make an array of values (or anything else that’s repetitive) and am too lazy to do it myself. Just a slight speedup in work.
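For what it’s worth, the kind of repetitive chore described above is also scriptable. A minimal sketch (in Python, with made-up example values) of turning a pasted column of text into an array literal:

```python
# Hypothetical example: convert a pasted column of values
# into an array literal, the kind of grunt work one might
# otherwise hand to ChatGPT.
raw = """red
green
blue"""

# Split on newlines, strip whitespace, and drop empty lines.
values = [line.strip() for line in raw.splitlines() if line.strip()]

# Emit something ready to paste back into code.
array_literal = "[" + ", ".join(repr(v) for v in values) + "]"
print(array_literal)  # ['red', 'green', 'blue']
```

Of course, the appeal of the chatbot version is that it handles messier input (mixed delimiters, stray quotes) without you having to write the parsing logic yourself.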
My whole team was playing around with it, and for a few weeks it was working pretty well for a couple of things. Until the answers started to become incorrect and not useful.
I run a board game store, so just for a chuckle I asked it about what’s popular this year or what to order and kept getting the same answer about only having accurate data from 2021 and prior.