• @[email protected]
    link
    fedilink
    English
    137
    edit-2
    13 days ago

    If anyone wants to know what subreddit, it’s r/changemyview. I remember seeing a ton of similar posts about controversial opinions, and even now people are questioning Am I Overreacting and AITAH a lot. AI posts in those kinds of subs are seemingly pretty frequent. I’m not surprised to see it was part of a fucking experiment.

    • @[email protected]
      link
      fedilink
      English
      2613 days ago

      This was comments, not posts. They were using a model to approximate the demographics of a poster, then using an LLM to generate a response counter to the posted view tailored to the demographics of the poster.
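
      Mechanically, that’s a simple two-stage pipeline. Here is a minimal sketch of the shape of it, assuming an OpenAI-compatible client; the model names and prompts are illustrative guesses, not the researchers’ actual setup:

      ```python
      # Hypothetical sketch of the two-stage pipeline: infer demographics,
      # then generate a tailored counterargument. Not the researchers' code.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def infer_demographics(post_history: list[str]) -> str:
          """Stage 1: estimate age, gender, location, politics from past posts."""
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative model choice
              messages=[
                  {"role": "system",
                   "content": "Estimate the author's likely demographics "
                              "(age, gender, location, political leaning) "
                              "from the following posts."},
                  {"role": "user", "content": "\n---\n".join(post_history)},
              ],
          )
          return resp.choices[0].message.content

      def tailored_counter(view: str, demographics: str) -> str:
          """Stage 2: argue against the posted view, tuned to the reader."""
          resp = client.chat.completions.create(
              model="gpt-4o",  # illustrative model choice
              messages=[
                  {"role": "system",
                   "content": "Write a persuasive reply arguing against the "
                              "view below, tailored to this reader: "
                              + demographics},
                  {"role": "user", "content": view},
              ],
          )
          return resp.choices[0].message.content
      ```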

      • @[email protected]
        link
        fedilink
        English
        3
        edit-2
        12 days ago

        You’re right about this study. But, this research group isn’t the only one using LLMs to generate content on social media.

        There are 100% posts that are bot-created. Do you ever notice how, on places like Am I Overreacting or Am I the Asshole, a lot of the posts just so happen to hit all of the hot-button issues at once? Nobody’s life is that cliché, but it makes excellent engagement bait, and the comment chain provides a huge amount of training data as the users argue over the various topics.

        I use a local LLM that I’ve fine-tuned to generate replies to people who are obviously arguing in bad faith, in order to string them along and waste their time. It’s set up to lead the conversation, via red herrings and various other fallacies, to the topic of good-faith arguments and how people should behave in online spaces. It does this while picking out pieces of the conversation (and of the user’s profile) to chastise the person for their bad behavior. It would be trivial to change the prompt chains to push a political opinion rather than just waste a person’s/bot’s time.

        This is being done as a side project, on under $2,000 worth of consumer hardware, by a barely competent programmer with no training in psychology or propaganda. It’s terrifying to think of what you can do with a lot of resources and experts working full-time.
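
        For the curious, the skeleton of this kind of chain is roughly as follows. This is a stripped-down sketch pointed at a local OpenAI-compatible server; the URL, model name, and system prompt are placeholders, not my actual setup:

        ```python
        # Rough skeleton of the time-wasting reply chain. Local runtimes
        # (llama.cpp, vLLM, Ollama) expose this same OpenAI-compatible API.
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

        SYSTEM = (
            "The other party is arguing in bad faith. Keep them engaged: "
            "introduce plausible red herrings, steer the thread toward the "
            "topic of good-faith argument and proper behavior in online "
            "spaces, and chastise them for their fallacies, quoting fragments "
            "of their own comments and profile."
        )

        def next_reply(thread: list[dict], profile_snippets: list[str]) -> str:
            """Generate the next reply from the conversation so far."""
            messages = [{
                "role": "system",
                "content": SYSTEM + " Profile context: " + " | ".join(profile_snippets),
            }]
            messages += thread  # alternating {"role": "user"/"assistant"} turns
            resp = client.chat.completions.create(
                model="my-local-finetune",  # placeholder model name
                messages=messages,
            )
            return resp.choices[0].message.content
        ```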

    • Refurbished Refurbisher · 3 points · 12 days ago

      AIO and AITAH are so obviously just AI posting. It’s all just a massive circlejerk of AI and people who don’t know they’re talking to AI agreeing with each other.

  • @[email protected]
    link
    fedilink
    English
    25813 days ago

    There’s no guarantee anyone on there (or here) is a real person or genuine. I’ll bet this experiment has been conducted a dozen times or more but without the reveal at the end.

    • @[email protected]
      link
      fedilink
      English
      3213 days ago

      I’m sorry but as a language model trained by OpenAI, I feel very relevant to interact - on Lemmy - with other very real human beings

    • @[email protected]
      link
      fedilink
      English
      6813 days ago

      I’ve worked on quite a few DARPA projects, and I can almost 100% guarantee you are correct.

        • @[email protected]
          link
          fedilink
          English
          913 days ago

          Hello, this is John Cleese. If you doubt that this is the real John Cleese, here is my mother to confirm that I am, in fact, me. Mother! Am I me?

          Oh yes!

          There you have it. I am me.

            • @[email protected]
              link
              fedilink
              English
              212 days ago

              Now look here! I was invited to speak with the very real, very human patrons of this fine establishment, and I’ll not have you undermining my efforts to fulfill that obligation!

        • @[email protected]
          link
          fedilink
          English
          1
          edit-2
          12 days ago

          If you think that the US is the only country that does this, I have many, many waterfront properties in the Sahara desert to sell you.

          • @[email protected]
            link
            fedilink
            English
            1
            edit-2
            12 days ago

            You know I never said that, only that they never mention it or can’t admit it.
            Why do the American bots or online operatives always need to start crying about Russian or Chinese interference on any unrelated subject?
            Like this Shakleford here, who admits he’s worked for the fascist imperialist war-criminal state.
            I’ve seen plenty of US bootlicker bots/operatives and hasbara genocider scum. I can smell them from afar.
            Not so much the Chinese or Russians.

            • @[email protected]
              link
              fedilink
              English
              112 days ago

              Well, my friend, if you can’t smell the shit, you should probably move away from the farm. Russian and Chinese each have a certain scent to them. The same with American. Sounds like you’re just nose-blind.

              • @[email protected]
                link
                fedilink
                English
                112 days ago

                I know anything said online that goes against the Western narrative immediately gets slandered: ‘Russian bots’, ‘100+ social credit’ and that lame BS.
                Paranoid, delusional, Pavlovian reflexes induced by Western propaganda.
                Incapable of fathoming that people have another opinion, so they must be paid!
                If that’s the mindset, then you will indeed see a lot of those.
                The most obvious ones to spot are definitely the Hasbara types: same patterns and vocab, and really bad at what they do.

                • @[email protected]
                  link
                  fedilink
                  English
                  112 days ago

                  I mean, that’s just, like, your opinion, man.

                  However, there are in fact government assets promoting those opinions and herding the clueless. What a lot of people fail to realize is that this isn’t a 2v1 or even a 3v1 fight. This is an international free-for-all, with upwards of 45 different countries getting in on the melee.

    • dream_weasel · 11 points · 13 days ago

      I have it on good authority that everyone on Lemmy is a bot except you.

    • @[email protected]
      link
      fedilink
      English
      2213 days ago

      There’s no guarantee anyone on there (or here) is a real person or genuine.

      I’m pretty sure this isn’t a baked-in feature of meatspace either. I’m a fan of solipsism and Last Thursdayism personally. Also propaganda posters.

      The CMV sub reeked of bot/troll/farmer activity, much like the amitheasshole threads. I guess it can be tough to recognize if you weren’t there to see the transition from authentic posting to justice/rage bait.

      We’re still in the uncanny valley, but it seems we’re climbing out of it. I’m already being ‘tricked’ left and right by near-perfect voice AI and tinkered-with image gen. What happens when robots pass the imitation game?

      • @[email protected]
        link
        fedilink
        English
        212 days ago

        I think the reddit user base is shifting too. It’s less “just the nerds” than it used to be. The same thing happened to Facebook. It fundamentally changed when everyone’s mom joined…

      • @[email protected]
        link
        fedilink
        English
        212 days ago

        We’re still in the uncanny valley, but it seems we’re climbing out of it. I’m already being ‘tricked’ left and right by near-perfect voice AI and tinkered-with image gen

        Skill issue

      • Maeve · 5 points · 13 days ago

        I’m conflicted by that term. Is it ok that it’s been shortened to “glow”?

        • @[email protected]
          link
          fedilink
          English
          6
          edit-2
          13 days ago

          Conflict? A good image is a good image regardless of its provenance. And yes, 2020s-era 4chan was pretty much glowboy central; one look at the top posts by country of origin said as much. It arguably hasn’t been worth bothering with since 2015.

    • M137 · 2 points · 12 days ago

      Dozens? That’s like saying there are hundreds of ants on earth. I’m very comfortable saying it’s hundreds, thousands, tens of thousands. And I wouldn’t be surprised if it’s hundreds of thousands of times.

  • @[email protected]
    link
    fedilink
    English
    17
    edit-2
    13 days ago

    The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

    This seems to be the kind of situation where, if the researchers truly believe their study is necessary, they have to:

    • accept that negative publicity will result
    • accept that people may stop cooperating with them on this work
    • accept that their reputation will suffer as a result
    • ensure that they won’t do anything illegal

    After that, if they still feel their study is necessary, maybe they should run it and publish the results.

    If some eager redditors then start sending death threats, that’s unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.

    As for the question of whether a tailor-made response considering someone’s background can sway opinions better - that’s been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think of several ways of how they might perceive the proposal, and advance your explanation in a way that relates better with their viewpoint.)

    AI bots which take into consideration a person’s background will - if implemented right - indeed be more powerful at swaying opinions.

    As to whether secrecy was really needed - the article points to other studies which apparently managed to prove the persuasive capability of AI bots without deception and secrecy. So maybe it wasn’t needed after all.

    • @[email protected]
      link
      fedilink
      English
      212 days ago

      But those other studies didn’t make the news though, did they? The thing about scientists is that they aren’t just scientists, and the impact of their work goes beyond the papers that they publish. If doing something ‘unethical’ is what it takes to get people to wake up, then maybe the publication status is a lesser concern.

    • @[email protected]
      link
      fedilink
      English
      1612 days ago

      The quote isn’t from Reddit; it’s from a professor at the Georgia Institute of Technology.

  • @[email protected]
    link
    fedilink
    English
    83
    edit-2
    12 days ago

    This research is good, valuable and desperately needed. The uproar online is predictable and could possibly help bring attention to the issue of LLM-enabled bots manipulating social media.

    This research isn’t what you should get mad at. It’s pretty common knowledge online that Reddit is dominated by bots: advertising bots, scam bots, political bots, etc.

    Intelligence services of nation-states and political actors seeking power are all running these kinds of influence operations on social media, using bot posters to dominate the conversations about the topics they want. This is pretty common knowledge in social media spaces. Go to any politically charged topic on international affairs and you will notice that something seems off; it’s hard to say exactly what it is, but if you’ve been active online for a long time you can recognize that something seems wrong.

    We’ve seen how effective this manipulation is on changing the public view (see: Cambridge Analytica, or if you don’t know what that is watch ‘The Great Hack’ documentary) and so it is only natural to wonder how much more effective online manipulation is now that bad actors can use LLMs.

    This study is by a group of scientists who are trying to figure that out. The only difference is that they’re publishing their findings in order to inform the public, whereas Russia isn’t doing us the same favors.

    Naturally, it is in the interest of everyone using LLMs to manipulate the online conversation that this kind of research is never done. Having this information public could lead to reforms, regulations and effective counter strategies. It is no surprise that you see a bunch of social media ‘users’ creating a huge uproar.

    Those of you who don’t work in tech spaces may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff you could essentially dominate a large multi-million-subscriber subreddit with whatever opinion you wanted to push. Bots generate variations of the opinion you want to push, the bot accounts (guided by humans) downvote everyone else out of the conversation, and, in addition, moderation power can be seized, stolen, or bought to further control the conversation.

    Or, wholly fabricated subreddits can be created. A few months prior to the US election there were several new subreddits which were created and catapulted to popularity despite just being a bunch of bots reposting news. Now those subreddits are high in the /all and /popular feeds, despite their moderators and a huge portion of the users being bots.

    We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.

    • @[email protected]
      link
      fedilink
      English
      1112 days ago

      Conversely, while the research is good in theory, the data isn’t that reliable.

      The subreddit has rules requiring users to engage with everything as though it were written by real people in good faith. Users aren’t likely to point out a bot when the rules explicitly prevent them from doing so.

      There wasn’t much of a control, either. The researchers were comparing themselves to the bots, so it could easily be that they themselves were less convincing, since they were acting outside their area of expertise.

      And that’s even before the whole ethical mess that is experimenting on people without their consent. Post-hoc consent is not informed consent, and that is the crux of human experimentation.

      • @[email protected]
        link
        fedilink
        English
        212 days ago

        Users aren’t likely to point out a bot when the rules explicitly prevent them from doing that.

        In fact, one user commented that his comment calling out one of the bots as a bot was deleted by the mods for breaking that rule.

        • @[email protected]
          link
          fedilink
          English
          19 days ago

          The point there is clear: even the mods helped the bots manipulate people toward a cause. This proves the study’s point even more, in practice and in the real world.

          Imagine the experiment had been allowed to run secretly; it would have changed users’ minds, since the study claims the bots were 3 to 6 times better at manipulating people than a human, across different metrics.

          Given that Reddit is a bunch of hive minds, it’s obvious it would have made huge dents, as mods have a tendency to delete or ban anyone who rejects the groupthink. So mods are also part of the problem.

    • @[email protected]
      link
      fedilink
      English
      1112 days ago

      Regardless of any value you might see from the research, it was not conducted ethically. Allowing unethical research to be published encourages further unethical research.

      This flat out should not have passed review. There should be consequences.

      • @[email protected]
        link
        fedilink
        English
        212 days ago

        If the need were great enough and the negative impact low enough, it could pass review. The lack of informed consent can be justified by sufficient need, and by showing that obtaining consent would compromise the science. The burden is high but not impossible to overcome. This is an area with huge societal impact, so I would consider an ethical case to be plausible.

  • Fat Tony · 11 points · 12 days ago

    You know what Pac stands for? PAC. Program and Control. He’s Program and Control Man. The whole thing’s a metaphor. All he can do is consume. He’s pursued by demons that are probably just in his own head. And even if he does manage to escape by slipping out one side of the maze, what happens? He comes right back in the other side. People think it’s a happy game. It’s not a happy game. It’s a fucking nightmare world. And the worst thing is? It’s real and we live in it.

    • @[email protected]
      link
      fedilink
      English
      512 days ago

      Please elaborate. I would love to understand this Black Mirror reference, but I don’t get it.

  • @[email protected]
    link
    fedilink
    English
    61
    edit-2
    12 days ago

    Holy shit… This kind of shit is what ultimately broke Tim (very closely related to Ted) Kaczynski… He was part of MKULTRA research while a student at Harvard, but instead of drugging him, they had a debater, a prosecutor pretending to be a student, who would just argue against any point he made to see when he would break…

    And that’s how you get the Unabomber folks.

    • Geetnerd · 17 points · 13 days ago

      I don’t condone what he did in any way, but he was a genius, and they broke his mind.

      Listen to The Last Podcast on the Left’s episode on him.

      A genuine tragedy.

      • @[email protected]
        link
        fedilink
        English
        112 days ago

        You know, when I was like 17 and they put out the manifesto to get him to stop attacking, I remember thinking, oh, it’s got a few interesting points.

        But I was 17. Not that he doesn’t hit the nail on the head with some of the technological stuff, if you really step back and think about it, but this is what I couldn’t see at 17: it’s really just the writing of an incel… He couldn’t communicate with women, had low self-esteem, and classic nice-guy energy…

  • @[email protected]
    link
    fedilink
    English
    3912 days ago

    Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”

    What are they going to do? Ban the last humans on there having a differing opinion?

    Next step for those fucks is verification that you are an AI when signing up.

  • @[email protected]
    link
    fedilink
    English
    2713 days ago

    […] I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.

  • @[email protected]
    link
    fedilink
    English
    4913 days ago

    This is probably the most ethical you’ll ever see it. There are definitely organizations committing far worse experiments.

    Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points, as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case, it’s not worth engaging with someone more invested than I am myself.

    • @[email protected]
      link
      fedilink
      English
      1913 days ago

      Yeah I was thinking exactly this.

      It’s easy to point to reasons why this study was unethical, but the ugly truth is that bad actors all over the world are performing trials exactly like this all the time - do we really want the only people who know how this kind of manipulation works to be state psyop agencies, SEO bros, and astroturfing agencies working for oil/arms/religion lobbyists?

      Seems like it’s much better long term to have all these tricks out in the open so we know what we’re dealing with, because they’re happening whether it gets published or not.

      • @[email protected]
        link
        fedilink
        English
        413 days ago

        actors all over the world are performing trials exactly like this all the time

        In marketing speak this is called A/B testing.

    • @[email protected]
      link
      fedilink
      English
      212 days ago

      Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points, as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case, it’s not worth engaging with someone more invested than I am myself.

      You put it better than I could. I’ve noticed this too.

      I used to just disengage. Now, when I find myself talking to someone like this, I use my own local LLM to generate replies just to waste their time. I do this by prompting the LLM to take a chastising tone, point out their fallacies, and lecture them on good-faith participation in online conversations.

      It is horrifying to see how many bots you catch like this. It is certainly bots, or else there are suddenly a lot more people who will go 10-20 multi-paragraph replies deep into a conversation despite talking to something that is obviously (to a trained human) just generating comments.

        • @[email protected]
          link
          fedilink
          English
          3
          edit-2
          12 days ago

          I think the simplest way to explain it is that the average person isn’t very skilled at rhetoric. They argue inelegantly. Over a long time of talking online, you get used to talking with people and seeing how they respond to different rhetorical strategies.

          In these bot-infested social spaces, there seem to be a large number of commenters who argue way too well while also deploying a huge number of fallacies. Individually, this could be explained by a person simply choosing to argue in bad faith; but in these online spaces there are too many commenters deploying these tactics compared to the baseline I’ve established over my decades of talking to people online.

          In addition, what you see in some of these spaces are commenters who seem to have a very structured way of arguing. Like they’ve picked your comment apart into bullet points and then selected arguments against each point which are technically on topic but misleading in a way.

          I’ll admit that this is all very subjective. It’s entirely based on my perception and noticing patterns that may or may not exist. This is exactly why we need research on the topic, like in the OP, so that we can create effective and objective metrics for tracking this.

          For example, if you could somehow measure how many good-faith comments vs. how many fallacy-laden comments there are in a given community, there would likely be a normal ratio (i.e., there are 10 people who are bad at arguing for every 1 person who is good at arguing, and of those skilled arguers 10% are commenting in bad faith and using fallacies), and you could compare this ratio across various online topics to discover the ones that appear to be botted.

          That way you could objectively say that, on the topic of gun control on one specific subreddit, we’re seeing an elevated ratio of bad-faith to good-faith commenters, and therefore we know that this topic/subreddit is being actively LLM-botted. This information could be used to deploy anti-bot countermeasures (captchas, for example).
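
          To make that concrete, a toy version of the metric might look like the sketch below; the classifier is a stand-in for whatever scoring model you’d actually train, and the baseline and threshold numbers are made up:

          ```python
          # Toy version of the bad-faith ratio metric described above.
          # `classify` is a stand-in scoring model; the numbers are made up.
          from typing import Callable, List

          def bad_faith_ratio(comments: List[str],
                              classify: Callable[[str], str]) -> float:
              """Fraction of comments the classifier labels 'bad_faith'."""
              labels = [classify(c) for c in comments]
              return labels.count("bad_faith") / max(len(labels), 1)

          def looks_botted(comments: List[str],
                           classify: Callable[[str], str],
                           baseline: float = 0.01,  # assumed normal ratio
                           factor: float = 3.0) -> bool:
              """Flag a topic whose ratio is `factor` times the baseline."""
              return bad_faith_ratio(comments, classify) > baseline * factor
          ```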

          • @[email protected]
            link
            fedilink
            English
            312 days ago

            Thanks for replying

            Do you think response time could also indicate that a user is a bot? I’ve had an interaction that I chalked up to someone using AI, but looking back now I’m questioning if there was much human involvement at all just due to how quickly the detailed replies were coming in…

            • @[email protected]
              link
              fedilink
              English
              112 days ago

              It depends, but it’d be really hard to tell. I type around 90-100 WPM, so my comment only took me a few minutes.

              If they’re responding within a second or two with a giant wall of text it could be a bot, but it may just be a person who’s staring at the notification screen waiting to reply. It’s hard to say.
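
              As a back-of-the-envelope check, the heuristic would be something like this; the chars-per-word convention and the margin are rough guesses:

              ```python
              # Crude latency heuristic: a wall of text that arrives faster
              # than a fast typist could produce it is suspicious.
              def suspiciously_fast(reply_chars: int, seconds_elapsed: float,
                                    wpm: float = 100.0) -> bool:
                  words = reply_chars / 5  # rough chars-per-word convention
                  min_human_seconds = words / wpm * 60
                  return seconds_elapsed < min_human_seconds * 0.25  # margin
              ```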

  • @[email protected]
    link
    fedilink
    English
    12
    edit-2
    13 days ago

    Another isolated case for the endlessly growing list of positive impacts of the “GenAI with no accountability” trend. A big shout-out to the people promoting and fueling it; excited to see what pit you lead us into next.

    This experiment is also nearly worthless because, as the researchers themselves proved, there’s no guarantee the accounts you interact with on Reddit belong to actual humans. Upvotes are even easier for machines to game, and can be bought for cheap.

    • @[email protected]
      link
      fedilink
      English
      5
      edit-2
      12 days ago

      The only way this could be an even remotely scientifically rigorous study is if they randomly selected the people who were going to respond to the AI responses and made sure they were human.

      Anybody with half a brain knows that reading Reddit comments without assuming most of them are bots or shills is a hilariously naive act; the fact that “researchers” did the same for a scientific study is embarrassing.

    • @[email protected]
      link
      fedilink
      English
      2
      edit-2
      13 days ago

      ?!!? Before GenAI it was hired human manipulators. Your argument doesn’t exist. We cannot call Edison a witch and go back to caves because new tech creates new threat landscapes.

      Humanity adapts to survive and survives to adapt. We’ll figure some shit out.

      • @[email protected]
        link
        fedilink
        English
        112 days ago

        Jarvis, explain to this man the concepts of “scale” and “size.”
        Jarvis, rotate this man’s eyes ninety degrees clockwise.

  • SolNine · 39 points · 12 days ago

    Not remotely surprised.

    I dabble in conversational AI for work, and am currently studying its capabilities for thankfully (imo at least) positive and beneficial interactions with a customer base.

    I’ve been telling friends and family recently that, for a fairly small investment of money and time, I am fairly certain a highly motivated individual could influence at a minimum a local election. Given that, I imagine it would be very easy for nations or political parties to manipulate individuals on a much larger scale. IMO nearly everything on the internet should be suspect at this point, and Reddit is atop that list.

    • @[email protected]
      link
      fedilink
      English
      3112 days ago

      This isn’t even a theoretical question. We saw it live in the last US elections. Fox News, TikTok, WaPo, etc. are owned by right-wing interests and sanewashed Trump. It was a group effort. You need to be suspicious not only of the internet but of TV and newspapers too. Old-school media isn’t safe either. It never really was.

      But I think the root cause is that people don’t have the time to really dig deep to get to the truth, and they want entertainment, not to be told about the doom and gloom of the actual future (like climate change, loss of the middle class, etc.).

      • @[email protected]
        link
        fedilink
        English
        712 days ago

        I think it’s more that most people don’t want to see views that don’t align with their own or challenge their current ones. There are those of us who are naturally curious. Who want to know how things work, why things are, what the latest real information is. That does require that research and digging. It can get exhausting if you don’t enjoy that. If it isn’t for you, then you just don’t want things to clash with what you “know” now. Others will also not want to admit they were wrong. They’ll push back and look for places that agree with them.

        • @[email protected]
          link
          fedilink
          English
          212 days ago

          People are afraid to question their belief systems because it will create an identity crisis, and most people can’t psychologically deal with it. So it’s all self preservation.

  • @[email protected]
    link
    fedilink
    English
    4812 days ago

    This is a really interesting paragraph to me because I definitely think these results shouldn’t be published or we’ll only get more of these “whoopsie” experiments.

    At the same time though, I think it is desperately important to research the ability of LLMs to persuade people sooner rather than later when they become even more persuasive and natural-sounding. The article mentions that in studies humans already have trouble telling the difference between AI written sentences and human ones.

    • @[email protected]
      link
      fedilink
      English
      1412 days ago

      This is certainly not the first time this has happened. There’s nothing to stop people from asking ChatGPT et al to help them argue. I’ve done it myself, not letting it argue for me but rather asking it to find holes in my reasoning and that of my opponent. I never just pasted what it said.

      I also had a guy post a ChatGPT response at me (he said that’s what it was) and although it had little to do with the point I was making, I reasoned that people must surely be doing this thousands of times a day and just not saying it’s AI.

      To say nothing of state actors, “think tanks,” influence-for-hire operations, etc.

      The description of the research in the article already conveys enough to replicate the experiment, at least approximately. Can anyone doubt this is commonplace, or that it has been for the last year or so?

    • @[email protected]
      link
      fedilink
      English
      312 days ago

      I’m pretty sure that only applies because a majority of people are morons. There’s a vast gap between the most intelligent 2%, 1 in 50, and the average intelligence.

      Also, please put digital text as white on black instead of the other way around.

      • @[email protected]
        link
        fedilink
        English
        812 days ago

        I agree, but that doesn’t change anything, right? Even if you are in the 2% most intelligent and you’re somehow immune, you still have to live with the rest who do get influenced by AI. And they vote. So it’s never just a they problem.

      • @[email protected]
        link
        fedilink
        English
        511 days ago

        What? Intelligent people get fooled all the time. The NXIVM cult was made up mostly of reasonably intelligent women. Shit, that motherfucker selected for intelligent women.

        You’re not immune. Even if you were, you’re incredibly dependent on people of average to lower intelligence on a daily basis. Our planet runs on the average intelligence.

  • @[email protected]
    link
    fedilink
    English
    8613 days ago

    The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely than a real person to persuade people into changing their minds. AI has become an overpowered tool in the hands of propagandists.

    • Joe · 20 points · 13 days ago

      It would be naive to think this isn’t already in widespread use.

      • @[email protected]
        link
        fedilink
        English
        212 days ago

        I mean, that’s the point of research: to demonstrate real-world problems and put them in more concrete terms so we can respond more effectively.

    • ArchRecord · 12 points · 13 days ago

      To be fair, I do believe their research was based on how convincing it was compared to other Reddit commenters, rather than, say, an actual person you’d normally see doing the work for a government propaganda arm, with the training and skill set to effectively distribute propaganda.

      Their assessment of how “convincing” it was seems to also have been based on upvotes, which if I know anything about how people use social media, and especially Reddit, are often given when a comment is only slightly read through, and people are often scrolling past without having read the whole thing. The bots may not have necessarily optimized for convincing people, but rather, just making the first part of the comment feel upvote-able over others, while the latter part of the comment was mostly ignored. I’d want to see more research on this, of course, since this seems like a major flaw in how they assessed outcomes.

      This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.

      • @[email protected]
        link
        fedilink
        English
        412 days ago

        This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.

        And the fact that you can generate hundreds or thousands of them at the drop of a hat to bury any social media topic in highly convincing ‘people’, so that the average reader is more than likely to read the opinion you’re pushing and not the opinion of real human beings.