• @[email protected]
    41 · 11 months ago

    ALL conversations are logged and can be used however they want.

    I’m almost certain this “detector” is a simple lookup in their database.

  • @[email protected]
    8 · 11 months ago

    If they have one, and that’s IF, then of course they won’t release it. They’re still trying to find a use case for their stupid toy so that they can charge people for it. Releasing the counter agent would be completely contradictory to their business model. It’s like Umbrella Corp. but even dumber.

  • nomad
    35 · 11 months ago

    The detector is most likely a machine learning model. That said, releasing it would allow for adversarial training (tuning an LLM so that it would not be detected). Therefore they could at most offer an API to query it, but cannot give unlimited access to the model.
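    As a toy illustration of that adversarial concern (everything below is invented for the sketch: the stock-phrase detector and the rewrite rules are hypothetical stand-ins, not anything OpenAI actually uses), a generator with unlimited detector access can simply keep rewriting until its output passes:

    ```python
    # Toy stand-in for a released detector: a real one would be a trained
    # classifier; this one just counts stock phrases (hypothetical list).
    STOCK_PHRASES = ["as an ai", "in conclusion", "it is important to note", "delve"]

    def detector_score(text: str) -> float:
        """Score in [0, 1]: higher means more 'AI-like' under this toy rule."""
        t = text.lower()
        return min(1.0, sum(t.count(p) for p in STOCK_PHRASES) / 3)

    # Rewrite rules an adversary could learn to drive the score down.
    REWRITES = {"in conclusion": "so",
                "it is important to note": "note",
                "as an ai": "",
                "delve": "dig"}

    def evade(text: str, threshold: float = 0.34, max_steps: int = 10) -> str:
        """Greedily apply rewrites until the detector no longer flags the text."""
        for _ in range(max_steps):
            if detector_score(text) < threshold:
                break
            for src, dst in REWRITES.items():
                idx = text.lower().find(src)
                if idx != -1:
                    text = text[:idx] + dst + text[idx + len(src):]
                    break
        return text
    ```

    With only a rate-limited API, every probe costs a query and no gradients are available, which is why API-only access is less dangerous than releasing the detector itself.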

  • @[email protected]
    13 · edited · 11 months ago

    You can just ask ChatGPT if a text was written by it.
    If it is, it’s legally obligated to tell you!

  • chiisana
    26 · 11 months ago

    They’re keeping everything anyway, so what’s preventing them from doing a DB lookup to see whether it (given a large enough passage of text) exists in their output history?
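    A sketch of what such a lookup could look like for partial matches (the shingling scheme, parameters, and `OutputHistory` class are my own illustration, not anything known about OpenAI's storage): index every stored output by overlapping word n-grams, so a long enough quoted passage still matches even when the full conversation isn't reproduced verbatim.

    ```python
    from collections import defaultdict

    def shingles(text: str, n: int = 8) -> set:
        """Overlapping lowercase word n-grams ('shingles') of a text."""
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    class OutputHistory:
        """Toy index over previously generated outputs, keyed by shingle."""
        def __init__(self, n: int = 8):
            self.n = n
            self.index = defaultdict(set)  # shingle -> ids of outputs containing it

        def add(self, output_id: str, text: str) -> None:
            for s in shingles(text, self.n):
                self.index[s].add(output_id)

        def lookup(self, passage: str, min_hits: int = 3) -> set:
            """IDs of stored outputs sharing at least min_hits shingles with passage."""
            hits = defaultdict(int)
            for s in shingles(passage, self.n):
                for oid in self.index.get(s, ()):
                    hits[oid] += 1
            return {oid for oid, count in hits.items() if count >= min_hits}
    ```

    A real system at this scale would likely hash the shingles rather than store them as strings, but the matching idea is the same.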

    • @[email protected]
      16 · 11 months ago

      I believe the actual detector works similarly. They know which sentences are likely generated by ChatGPT, since that’s literally in their model. They have probably also, to some degree, reverse engineered typical output from competing models.

    • @[email protected]
      9 · edited · 11 months ago

      My unpopular opinion is that when they’re assigning well beyond 40 hours per week of homework, cheating is no longer unethical. Employers want universities to get students used to working long hours.

      • Amanda
        1 · 11 months ago

        I agree, and I teach. A huge part of learning is having the time to experiment and process what you’ve learnt. However, doing that in a way that can be controlled, examined, etc. is very difficult, so many institutions opt for tons of homework.

    • Amanda
      1 · 11 months ago

      If the assignment is so easy ChatGPT can do it, it’s too easy.

  • Flying Squid
    10 · 11 months ago

    I wonder if this means they’ve discovered a serious flaw that they don’t know how to fix yet?

    • @[email protected]
      7 · 11 months ago

      I think the more likely explanation is that being able to filter out AI-generated text gives them an advantage over their competitors in obtaining more training data.

    • @[email protected]
      19 · 11 months ago

      The flaw is in the training to make it corporate friendly. Everything it says eventually sounds like a sexual harassment training video, regardless of subject.

  • @[email protected]
    67 · 11 months ago

    If they aren’t willing to release it, then the situation is no different from them not having one at all. All these claims OpenAI makes about having some system but keeping it hidden are just to try and increase hype to grab more investor money.

  • Echo Dot
    37 · 11 months ago

    Probably because it doesn’t work. It’s not difficult for OpenAI to see whether any given conversation is one of their conversations. If I were them, I would hash the results of each conversation and then store that hash in a database for quick searching.

    That’s useless for actual AI detection, though.
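    The hash-and-store idea can be sketched in a few lines (the normalization choices here are my own guesses for illustration), and the sketch also shows why it fails as a detector: exact hashing finds verbatim copies instantly, but a single changed word defeats the lookup.

    ```python
    import hashlib

    def conversation_fingerprint(text: str) -> str:
        """SHA-256 of a normalized conversation, for exact-match lookup."""
        normalized = " ".join(text.split()).lower()  # collapse whitespace, lowercase
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    # Store a fingerprint of every conversation the model produces.
    seen = {conversation_fingerprint("Here is your essay about the French Revolution.")}

    # A verbatim copy is found instantly...
    print(conversation_fingerprint("Here is your essay about the French Revolution.") in seen)  # True

    # ...but changing one word changes the hash, so the lookup misses it.
    print(conversation_fingerprint("Here is your essay on the French Revolution.") in seen)  # False
    ```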