I trust you bro
ALL conversations are logged and can be used however they want.
I’m almost certain this “detector” is a simple lookup in their database.
If they have one, and that’s IF, then of course they won’t release it. They’re still trying to find a use case for their stupid toy so that they can charge people for it. Releasing the counter agent would be completely contradictory to their business model. It’s like Umbrella Corp. but even dumber.
Doubt
The detector is most likely a machine learning model. That said, releasing it would allow for adversarial training (i.e., training an LLM that would not be detected). So at most they could offer an API to query it, but they cannot give unlimited access to the model itself.
If you release an API for it, you can still use that to generate training data to beat it.
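Roughly like this, if you're curious. Everything below is hypothetical: the detector endpoint, its response schema, and the generate() stub are stand-ins for illustration, not a real API. The idea is just rejection sampling against the detector and keeping whatever slips past it.

```python
import requests

# Hypothetical detector endpoint and response schema -- assumptions for
# illustration only, not a real OpenAI API.
DETECTOR_URL = "https://detector.example.com/v1/score"

def detector_score(text: str) -> float:
    """Ask the (hypothetical) detector how AI-like the text is, 0.0-1.0."""
    resp = requests.post(DETECTOR_URL, json={"text": text}, timeout=30)
    resp.raise_for_status()
    return resp.json()["ai_probability"]

def generate(prompt: str) -> str:
    """Stand-in for sampling from the model you want to make undetectable."""
    raise NotImplementedError

def harvest_evading_samples(prompts, threshold=0.2):
    # Keep only generations the detector scores as likely human-written.
    # Fine-tuning on this dataset nudges the model toward output that
    # evades the detector -- which is why even API-only access is enough.
    dataset = []
    for prompt in prompts:
        text = generate(prompt)
        if detector_score(text) < threshold:
            dataset.append({"prompt": prompt, "completion": text})
    return dataset
```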
That’s what the Chinese labs tried with ChatGPT. Didn’t go well.
Huh? Use ChatGPT to generate training data to train another AI? That’s pretty common actually. I believe even Mistral does that, hence why you need something like Dolphin to remove the alignment added by OpenAI.
This is the reason. Releasing it would invalidate it.
You can just ask ChatGPT if a text was written by it.
If it is, it’s legally obligated to tell you!
Don’t joke about this, the college professors will hear you.
A search bar for your DB doesn’t count, guys.
They’re keeping everything anyway, so what’s preventing them from doing a DB lookup to see if a passage of text (given it’s long enough) exists in their output history?
I believe the actual detector is similar. They know what sentences are likely to be generated by ChatGPT, since that’s literally in their model. They probably also have, to some degree, reverse-engineered the typical output of competing models.
Let me guess: too much processing power?
shhh, my professor may use it
My unpopular opinion is that when they’re assigning well beyond 40 hours per week of homework, cheating is no longer unethical. Employers want universities to get students used to working long hours.
I agree, and I teach. A huge part of learning is having the time to experiment and process what you’ve learnt. However, doing that in a way that can be controlled, examined, etc. is very difficult, so many institutions opt for tons of homework instead.
If the assignment is so easy ChatGPT can do it, it’s too easy.
I wonder if this means they’ve discovered a serious flaw that they don’t know how to fix yet?
I think the more likely explanation is that being able to filter out AI-generated text gives them an advantage over their competitors at obtaining more training data.
The flaw is in the training to make it corporate friendly. Everything it says eventually sounds like a sexual harassment training video, regardless of subject.
If they aren’t willing to release it, then the situation is no different from them not having one at all. All these claims OpenAI makes about having some system but hiding it are just an attempt to drum up hype and grab more investor money.
I call bullshit.
She goes to another school
(for artificial intelligence)
Probably because it doesn’t work. It’s not difficult for OpenAI to see if any given conversation is one of their conversations. If I were them, I would hash the results of each conversation and then store that hash in a database for quick searching.
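A minimal sketch of that hash-and-lookup scheme (the normalization step and the SQLite schema are my assumptions, not anything OpenAI has described):

```python
import hashlib
import sqlite3

def normalize(text: str) -> str:
    # Collapse case and whitespace so trivial formatting changes don't
    # alter the hash. This is an assumption; a real system would need
    # something more robust (e.g. shingling) to survive paraphrasing.
    return " ".join(text.lower().split())

def fingerprint(text: str) -> str:
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()

conn = sqlite3.connect("outputs.db")
conn.execute("CREATE TABLE IF NOT EXISTS outputs (hash TEXT PRIMARY KEY)")

def log_output(text: str) -> None:
    # Store one hash per generated response; the raw text never needs
    # to be indexed for this exact-match check.
    conn.execute("INSERT OR IGNORE INTO outputs VALUES (?)", (fingerprint(text),))
    conn.commit()

def seen_before(text: str) -> bool:
    # Exact-match lookup: fast, but a single edited word changes the
    # hash completely, so this only catches verbatim copies.
    row = conn.execute(
        "SELECT 1 FROM outputs WHERE hash = ?", (fingerprint(text),)
    ).fetchone()
    return row is not None
```

The catch is in that last comment: change one word and the hash no longer matches, so this only catches verbatim copy-paste.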
That’s useless for actual AI detection