@[email protected] to Microblog [email protected]English • 1 month agoCritical thinkingslrpnk.netimagemessage-square216fedilinkarrow-up11.6K
arrow-up11.6KimageCritical thinkingslrpnk.net@[email protected] to Microblog [email protected]English • 1 month agomessage-square216fedilink
TheTechnician27 • English • 21 points • 1 month ago

> It’s a two-pass solution, but it makes it a lot more reliable.

So your technique to “make it a lot more reliable” is to ask an LLM a question, then run the LLM’s answer through an equally unreliable LLM to “verify” the answer?

We’re so doomed.
minus-square@[email protected]linkfedilinkEnglish3•edit-21 month agoGive it a try. The key is in the different prompts. I don’t think I should really have to explain this, but different prompts produce different results. Ask it to create something, it creates something. Ask it to check something, it checks something. Is it flawless? No. But it’s pretty reliable. It’s literally free to try it now, using ChatGPT.
TheTechnician27 • English • 11 points • 1 month ago

> I don’t think I should really have to explain this, but different prompts produce different results.
minus-square@[email protected]linkfedilinkEnglish2•1 month agoHey, maybe you do. But I’m not arguing anything contentious here. Everything I’ve said is easily testable and verifiable.