Originality.AI looked at 8,885 long Facebook posts made over the past six years.

Key Findings

  • 41.18% of current Facebook long-form posts are classified as “Likely AI” as of November 2024.
  • Between 2023 and November 2024, the average percentage of monthly AI posts on Facebook was 24.05%.
  • This reflects a 4.3x increase in monthly AI Facebook content since the launch of ChatGPT. In comparison, the monthly average was 5.34% from 2018 to 2022.
    • @[email protected]

      The most annoying part of that is the shitty render. I actually have an account on one of those AI image generating sites, and I enjoy using it. If you’re not satisfied with the image, just roll a few more times, maybe tweak the prompt or the starter image, and try again. You can get some very cool-looking renders if you give a damn. Case in point:

      • @[email protected]

        😍this is awesome!

        A friend of mine made this with the method you described:

        PS: 😆 the laptop in the illustration in the article! Someone did not want to pay for a high-end model and did not want to take any extra time either…

  • Lexam

    If you want to visit your old friends in the dying mall, go to Feeds, then Friends. That should filter everything else out.

  • @[email protected]

    This is a pretty sweet ad for https://originality.ai/ai-checker

    They don’t talk much about their secret sauce. That 40% figure is based on “trust me bro, our tool is really good”. Would have been nice to be able to verify this figure / use the technique elsewhere.

    It’s pretty tiring to keep seeing ads masquerading as research.

    • @[email protected]

      There’s an AI reply option now. I’d be interested to know how far off that is from just being part of the regular comments.

  • @[email protected]

    Keep in mind this is for AI generated TEXT, not the images everyone is talking about in this thread.

    Also, they used an automated detection tool, and all of those have very high error rates, because reliably detecting AI-generated text is a fundamentally impossible task.

    • @[email protected]

      Yeah. This is a way bigger problem with this article than anything else. The entire thing hinges on their AI-detecting AI working. I have looked into how effective these kinds of tools are because it has come up at my work, and independent reviews of them suggest they’re, like, 3-5 times worse than the (already pretty bad) accuracy rates they claim, and that they disproportionately flag non-native English speakers as AI-generated. So I’m highly skeptical of this claim as well.

    • @[email protected]

      AI does give itself away over “longer” posts, and if the tool makes about as many false positives as false negatives then it should even itself out in the long run. (I’d have liked more than 9K “tests” for it to average out, but even so.) If they had the edit history for the post, which they didn’t, then it’s more obvious: AI will either paste the whole thing in one go, or will generate a word at a time at a fairly constant rate. Humans will stop and think, go back and edit things, all of that.

      I was asked to do some job interviews recently; the tech test had such an “animated playback”, and the difference between a human doing it legitimately and someone using AI to copy-paste the answer was surprisingly obvious. The tech test questions were nothing to do with the job role at hand and were causing us to select for the wrong candidates completely, but that’s more a problem with our HR being blindly in love with AI and “technical solutions to human problems”.

      “Absolute certainty” is impossible, but balance of probabilities will do if you’re just wanting an estimate like they have here.
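
      As a rough numeric illustration of that “evens out” point (the error rates below are invented for the sake of the example, not taken from the article):

      ```python
      # Hypothetical numbers: if the detector produces about as many false
      # positives as false negatives, the aggregate estimate stays close to
      # the true share even though many individual labels are wrong.
      n_posts = 8_885          # sample size from the article
      true_ai_share = 0.40     # assumed true share of AI posts
      fpr = 0.05               # assumed false positive rate on human posts
      fnr = 0.07               # assumed false negative rate on AI posts

      ai_posts = true_ai_share * n_posts
      human_posts = n_posts - ai_posts

      false_positives = fpr * human_posts   # humans wrongly flagged as AI
      false_negatives = fnr * ai_posts      # AI posts the tool misses
      flagged = ai_posts - false_negatives + false_positives

      print(f"true AI share:      {true_ai_share:.1%}")
      print(f"estimated AI share: {flagged / n_posts:.1%}")
      # Here the two error streams nearly cancel; if the rates are lopsided,
      # the aggregate estimate drifts accordingly.
      ```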

      • @[email protected]

        I have no idea whether the probabilities are balanced. They claim 5% was AI even before ChatGPT was released, which seems pretty off. No one was using LLMs before ChatGPT went viral except for researchers.

        • @[email protected]

          Chatbots have been a thing for a long time. I mean, a half-decently trained Markov chain can handle social media posts and replies.
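
          For the curious, a minimal word-level Markov chain sketch in Python (the toy corpus and the order-1 model are just illustrative):

          ```python
          import random
          from collections import defaultdict

          # Learn which word tends to follow which, then sample a chain of
          # likely successors to fake a "post".
          def train(text: str) -> dict:
              words = text.split()
              model = defaultdict(list)
              for current, nxt in zip(words, words[1:]):
                  model[current].append(nxt)
              return model

          def generate(model: dict, start: str, length: int = 15) -> str:
              word, output = start, [start]
              for _ in range(length):
                  followers = model.get(word)
                  if not followers:
                      break
                  word = random.choice(followers)
                  output.append(word)
              return " ".join(output)

          corpus = "great photography wow great shot wow amazing photography great photography amazing shot"
          model = train(corpus)
          print(generate(model, "great"))
          ```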

        • @[email protected]

          I’m pretty sure chatbots were a thing before AI. They certainly weren’t as smart, but they did exist.

        • @[email protected]

          Chatbots don’t necessarily hold a real conversation. Some just spammed links from a list of canned responses, or upvoted the other chatbots to get more visibility, or just reposted a comment from another user.

    • billwashere

      It probably is, but it’s a large sample size, and if the selection is random enough, it’s likely sufficient to extrapolate some numbers. This is basically how drug testing works.

      • @[email protected]

        And statistical analysis. The larger the population, the smaller the fraction of it a true random sample needs to be.
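
        A back-of-the-envelope check (the 41.18% and 8,885 figures are from the article; the sampling math here is standard, not theirs):

        ```python
        import math

        # 95% margin of error for a proportion estimated from a simple random
        # sample; note it depends on the sample size, not on how big Facebook is.
        def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
            return z * math.sqrt(p * (1 - p) / n)

        n = 8_885      # posts analysed in the article
        p = 0.4118     # reported "Likely AI" share

        print(f"{p:.2%} ± {margin_of_error(p, n):.2%}")  # ≈ 41.18% ± 1.02%
        # Caveat raised elsewhere in the thread: this only covers sampling
        # error, not the detector's own false positive/negative rates.
        ```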

    • @[email protected]

      Hmm, “the junk human users are posting”, or “the human junk users are posting”? We are talking about Facebook here, after all.

      • @[email protected]

        Well, there’s also the 0.1% who are relatives of old people, trying to keep in touch with the batty old meme-forwarders. I was one of those until the ones who mattered most to me shuffled off this mortal coil.

  • @[email protected]

    It’s incredible. For months now I’ve been seeing suggested groups with an AI-generated picture of a pet/animal, and the text is always “Great photography”. I block them, but I still see new groups like this every day. Incredible…

    • @[email protected]

      For me it’s some kind of cartoon with the caption “Best comic funny 🤣” and sometimes “funny short film” (even though it’s a picture)

      Like, Meta has to know this is happening. Do they really think this is what will keep their userbase? And nobody would think it’s just a little weird?

      • @[email protected]

        Well, maybe that is just the taste of the people still there… I mean, you have to be at least a little bit strange if you are still on Facebook…

    • @[email protected]OP

      I have a hard time understanding Facebook’s endgame here - if they just have a bunch of AI readers reading AI posts, how do they monetize that? Why on earth is the stock market so bullish on them?

      • @[email protected]

        Engagement.

        It’s all they measure: what makes people reply to and react to posts.

        People in general are stupid and can’t tell, or don’t care, whether something is AI-generated.

      • @[email protected]

        AI can pull together all that personal data and create very detailed profiles on everyone, automatically. From that data, an AI can add a bunch of attributes that are very likely to be true as well, based on what the person does every day: work, education, gender, social life, mobile location data, bills, and so on.

        This is like having a person follow every user around 24 hours per day, combined with a psychologist to interpret and predict the future.

        It’s worth a lot of money to advertisers of course.

      • @[email protected]

        They want dumb users consuming AI content; they need LLM content because the remaining users are too stupid to generate the free content that people actually want to click.

        Then they pump ads to you based on increasingly targeted AI slop selling more slop.

      • WalrusDragonOnABike [they/them]

        As long as they can convince advertisers that enough of the activity is real, or that enough of the manipulation of public opinion via bots is in Facebook’s interest, bots aren’t a problem at all in the short term.

        • @[email protected]

          Surely at some point advertisers will put two and two together when they stop seeing results from targeted advertising.

          • @[email protected]

            I think you give them too much credit. As long as it doesn’t actively hurt their numbers, like at X, it’s just part of the budget.

  • @[email protected]

    The bigger problem is AI “ignorance,” and it’s not just Facebook. I’ve reported more than one Lemmy post where the user naively sourced it from ChatGPT or Gemini and took it as fact.

    No one understands how LLMs work, not even on a basic level. Can’t blame them, seeing how they’re shoved down everyone’s throats as opaque products, or straight up social experiments like Facebook.

    …Are we all screwed? Is the future a trippy information wasteland? All this seems to be getting worse and worse, and everyone in charge is pouring gasoline on it.

    • Pennomi

      No one understands how LLMs work, not even on a basic level.

      Well that’s just false.

        • Pennomi

          I did not know that. There are a bunch of news articles going around claiming that even the creators of the models don’t understand them and that they are some sort of unfathomable magic black box. I assumed you were propagating that myth, but I was clearly mistaken.

      • Traister101

        Educate my family on how they work then, please and thanks. I’ve tried, and they refuse to listen; they’d prefer to trust the lying corpos trying to sell it to us.

        • Pennomi

          “Your family” isn’t who I was talking about. Researchers and people in the space understand how LLMs work in intricate detail.

          Unless your “no one” was colloquial - in which case, yes, I totally agree with you! Practically no one understands how they work.

    • @[email protected]

      *where you think they sourced it from AI

      You have no proof other than seeing ghosts everywhere.

      Don’t get me wrong, fact-checking posts is important, but you have no evidence whether it’s AI, a human brain fart, or targeted disinformation 🤷🏻‍♀️

      • @[email protected]

        No, I mean they literally label the post as “Gemini said this”.

        I see family do it too: they type something into Gemini and just assume it looked it up or something.

        • @[email protected]

          I see no problem if the poster gives the info that the source is AI. That automatically devalues the content of the post/comment and should trigger the reaction that this information is to be taken with a grain of salt and needs to be fact-checked in order to improve the likelihood that what was written is fact.

          An AI output is most of the time a decent indicator of what the truth is, and can give new talking points to a discussion. But it is of course not a “killer argument”.

          • @[email protected]

            The context is bad though.

            The post I’m referencing is removed, but there was a tiny “from Gemini” footnote at the bottom that most upvoters clearly missed, and the whole thing is presented like a quote from a news article and taken as fact by OP in their own commentary.

            And the larger point I’m making is that this poor soul had no idea Gemini is basically an improv actor compelled to continue whatever it writes, not a research agent.

            My sister, ridiculously smart, professional, and more put together than I am, didn’t know either. She just searched for factual stuff in the Gemini app and assumed it was directly searching the internet.

            AI is a good thinker, analyzer, spitballer, and initial source, yes, but it’s being marketed like an oracle, and that is going to screw the world up.

  • @[email protected]

    Also… the tremendous irony here is Meta is screwing themselves over.

    They’ve bet their future on AI, and are smart enough to release the weights and fund open research, yet their advantage (a big captive dataset, aka Facebook/Instagram/WhatsApp users) is completely overrun with slop that poisons it. It’s as laughable as Grok (X’s AI) being trained on Twitter.

    • @[email protected]

      Meta is probably screwed already. Their user base is not growing like it used to, and is maybe shrinking in some markets, and they need the padding to cover it up.

      • @[email protected]

        Very true.

        But also so stupid because their user base is, what, a good fraction of the planet? How can they grow?

        • @[email protected]

          38% of the world’s population are users; 20% are daily active users. The classic way to grow is to squeeze the users and advertisers more and more with fees, subscriptions, tiers, … I guess the exodus at X has them spooked about what could happen if they continue down that path, so they’re trying this AI thing.