• @[email protected] (12 points, 2 months ago, edited)

    Another isolated case for the endlessly growing list of positive impacts of the “GenAI with no accountability” trend. A big shout-out to the people promoting and fueling it; excited to see what pit you lead us into next.

    This experiment is also nearly worthless because, as the researchers themselves proved, there’s no guarantee the accounts you interact with on Reddit are actual humans. Upvotes are even easier for machines to manipulate, and can be bought for cheap.

    • @[email protected] (2 points, 2 months ago, edited)

      ?!!? Before genAI it was hired human manipulators. Your argument doesn’t exist. We cannot call Edison a witch and go back to caves because new tech creates new threat landscapes.

      Humanity adapts to survive and survives to adapt. We’ll figure some shit out

      • @[email protected] (1 point, 2 months ago)

        Jarvis, explain to this man the concepts of “scale” and “size.”
        Jarvis, rotate this man’s eyes ninety degrees clockwise.

    • @[email protected] (5 points, 2 months ago, edited)

      The only way this could be an even remotely scientifically rigorous study is if they had randomly selected the people who would respond to the AI comments and verified that they were human.

      Anybody with half a brain knows that reading Reddit comments without assuming most of them are bots or shills is hilariously naive; the fact that “researchers” did the same for a scientific study is embarrassing.

    • @[email protected] (4 points, 2 months ago, edited)

      I don’t know what you have in mind, but the founders originally used bots to generate activity to make the site look popular. Which raises the question: what was really the root of Reddit’s culture? Was it bots following human activity to bolster it, or humans merely following what the founders programmed the bots to post?

      One thing’s for sure: Reddit has always been a platform of questionable integrity.

      • @[email protected] (1 point, 2 months ago)

        They’re banning 10+ year accounts over trifling things and it’s got noticeably worse this year. The widespread practice of shadowbanning makes it clear that they see users as things devoid of any inherent value, and that unlike most corporations, they’re not concerned with trying to hide it.

  • @[email protected] (29 points, 2 months ago, edited)

    When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.

    Not since the APIcalypse at least.

    Aside from that, this is just reheated news (for clicks, I assume) from a week or two ago.

    • @[email protected] (7 points, 2 months ago, edited)

      One likely reason the backlash has been so strong is because, on a platform as close-knit as Reddit, betrayal cuts deep.

      Another laughable quote after the APIcalypse, at least for the people who remained on Reddit after being totally OK with being betrayed.

  • @[email protected] (14 points, 2 months ago)

    ChangeMyView seems like the sort of topic where AI posts can actually be appropriate. If the goal is to hear arguments for an opposing point of view, the AI is contributing more than a human would if in fact the AI can generate more convincing arguments.

        • @[email protected] (7 points, 2 months ago)

          Nobody is blaming the AI model. We are blaming the researchers and users of AI, which is kind of the point.

        • @[email protected] (8 points, 2 months ago)

          The researchers said all AI posts were approved by a human before posting; it was their choice how many lies to include.

        • Ecco the dolphin (6 points, 2 months ago)

          Which, in an ideal world, is why AI-generated comments should be labeled.

          I always brake when I see a deer at the side of the road.

          (Yes, people can lie on the Internet. If you funded an army of propagandists to convince people by any means necessary, I think you would find it expensive. People generally find that lying like this feels bad; it takes a mental toll. With AI, this looks possible for much cheaper.)

          • Rolivers (3 points, 2 months ago)

            I’m glad Google still labels the AI overview in search results so I know to scroll further for actually useful information.

            • @[email protected] (1 point, 2 months ago)

              They label as “AI” only the LLM-generated content.

              All of Google’s search algorithms are “AI” (i.e., machine learning); that’s what made them so effective when they first appeared on the scene. They just use those algorithms, plus a massive amount of data about you (way more than in your comment history), to target you with advertising, including political advertising.

              If you don’t want AI-generated content, then you shouldn’t use Google: it is entirely built on machine learning whose sole goal is to match you with people who want to buy access to your views.

      • @[email protected] (4 points, 2 months ago, edited)

        That lie was definitely inappropriate, but it would still have been inappropriate if it was told by a human. I think it’s useful to distinguish between bad things that happen to be done by an AI and things that are bad specifically because they are done by an AI. How would you feel about an AI that didn’t lie or deceive but also didn’t announce itself as an AI?

        • @[email protected] (8 points, 2 months ago)

          I think when posting on a forum/message board it’s assumed you’re talking to other people, so AI should always announce itself as such. That’s probably a pipe dream though.

          If anyone wants to specifically get an AI perspective they can go to an AI directly. They might add useful context to people’s forum conversations, but there should be a prioritization of actual human experiences there.

          • @[email protected] (2 points, 2 months ago)

            I think when posting on a forum/message board it’s assumed you’re talking to other people

            That would have been a good position to take in the early days of the Internet; it is a very naive assumption to make now. Even in the 2010s, actors with a large amount of resources (state intelligence agencies, advertisers, etc.) could hire human beings from low-wage English-speaking countries to generate fake content online.

            LLMs have only made this cheaper, to the point where I assume that most of the commenters on political topics are likely bots.

            • @[email protected] (1 point, 2 months ago, edited)

              For sure; that’s why I said it’s a pipe dream. We can dream though; maybe we will figure out some kind of solution one day.

              I maybe could have worded my comment better: people definitely should not assume they are talking to real people all the time (I don’t). But there should ideally be a place for people-focused conversation, and forums were originally designed for that purpose.

              • @[email protected] (2 points, 2 months ago)

                The research in the OP is a good first step in figuring out how to solve the problem.

                That’s in addition to anti-bot measures. I’ve seen some sites that require you to solve a cryptographic hashing problem before access is granted. It doesn’t noticeably slow a regular person down, but it requires anyone running bots to supply much more compute per request, which increases the cost to the operator. A rough sketch of the idea is below.
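                As an illustration, here is a minimal hashcash-style proof-of-work sketch in Python, assuming a simple challenge/solve/verify flow; the function names and the difficulty value are invented for the example, not taken from any particular site’s implementation.

                    # Hypothetical hashcash-style proof-of-work gate.
                    # Names and the difficulty value are illustrative only.
                    import hashlib
                    import secrets

                    def leading_zero_bits(digest: bytes) -> int:
                        """Count the leading zero bits of a hash digest."""
                        bits = 0
                        for byte in digest:
                            if byte == 0:
                                bits += 8
                            else:
                                bits += 8 - byte.bit_length()
                                break
                        return bits

                    def issue_challenge() -> str:
                        """Server side: hand the client a random challenge."""
                        return secrets.token_hex(16)

                    def solve(challenge: str, difficulty: int = 20) -> int:
                        """Client side: brute-force a nonce so that
                        sha256(challenge:nonce) has `difficulty` leading zero bits."""
                        nonce = 0
                        while True:
                            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
                            if leading_zero_bits(digest) >= difficulty:
                                return nonce
                            nonce += 1

                    def verify(challenge: str, nonce: int, difficulty: int = 20) -> bool:
                        """Server side: a single hash checks the client's work."""
                        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
                        return leading_zero_bits(digest) >= difficulty

                    challenge = issue_challenge()
                    print(verify(challenge, solve(challenge)))  # True

                The asymmetry is the point: the server spends one hash to verify, while the client spends roughly 2^20 hashes on average at difficulty 20. A human visitor pays that once per page; a bot operator pays it on every request.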

  • thedruid (20 points, 2 months ago)

    Fucking A.I. and their apologist script kiddies. Worse than fucking Facebook in its disinformation.

    • @[email protected] (2 points, 2 months ago)

      Realistic AI generated faces have been available for longer than realistic AI generated conversation ability.

    • thedruid (3 points, 2 months ago)

      Meh. Believe none of what you hear and very little of what you can see

      Unless a person is in front of you, don’t assume anything is real online. I mean it. There is nothing online that cannot be faked, and nothing online that HASN’T been faked.

      The least trustworthy place in the universe is the internet.

  • @[email protected] (86 points, 2 months ago)

    The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely than a real person to persuade people to change their minds. AI has become an overpowered tool in the hands of propagandists.

    • Joe (20 points, 2 months ago)

      It would be naive to think this isn’t already in widespread use.

      • @[email protected] (2 points, 2 months ago)

        I mean, that’s the point of research: to demonstrate real-world problems and put them in more concrete terms so we can respond more effectively.

      • @[email protected] (4 points, 2 months ago)

        This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.

        And the fact that you can spin up hundreds or thousands of them at the drop of a hat to bury any social media topic in highly convincing “people”, so that the average reader will more than likely read the opinion you’re pushing rather than the opinions of actual human beings.

  • @[email protected] (61 points, 2 months ago)

    Reddit: Ban the Russian/Chinese/Israeli/American bots? Nope. Ban the Swiss researchers that are trying to study useful things? Yep

      • Oniononon (23 points, 2 months ago)

        Humans pretend to be experts in front of each other and constantly lie on the internet every day.

        Say what you want about 4chan, but the disclaimer it had on top of its page should be common sense to everyone on social media.

          • Oniononon (5 points, 2 months ago)

            If fake experts on the internet get their jobs taken by the ai, it would be tragic indeed.

            Don’t worry tho, popular sites on the internet are dead since they’re all bots anyway. It’s over.

            • @[email protected] (4 points, 2 months ago)

              If fake experts on the internet get their jobs taken by the ai, it would be tragic indeed.

              These two groups are not mutually exclusive

      • @[email protected] (8 points, 2 months ago)

        Sure, but still less dangerous than bots undermining our democracies and trying to destroy our social fabric.

  • @[email protected] (13 points, 2 months ago)

    Imagine what the people doing this professionally do, since they know they won’t face the scrutiny of publication.

  • @[email protected] (17 points, 2 months ago, edited)

    The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

    This seems to be the kind of situation where, if the researchers truly believe their study is necessary, they have to:

    • accept that negative publicity will result
    • accept that people may stop cooperating with them on this work
    • accept that their reputation will suffer as a result
    • ensure that they won’t do anything illegal

    After that, if they still feel their study is necessary, maybe they should run it and publish the results.

    If some eager redditors then start sending death threats, that’s unfortunate. I would catalogue them, but not report them anywhere unless something actually happened.

    As for the question of whether a tailor-made response considering someone’s background can sway opinions better - that’s been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think of several ways of how they might perceive the proposal, and advance your explanation in a way that relates better with their viewpoint.)

    AI bots which take into consideration a person’s background will - if implemented right - indeed be more powerful at swaying opinions.

    As to whether secrecy was really needed - the article points to other studies which apparently managed to prove the persuasive capability of AI bots without deception and secrecy. So maybe it wasn’t needed after all.

    • @[email protected] (2 points, 2 months ago)

      Those other studies didn’t make the news though, did they? The thing about scientists is that they aren’t just scientists, and the impact of their work goes beyond the papers they publish. If doing something “unethical” is what it takes to get people to wake up, then maybe publication status is a lesser concern.

  • @[email protected] (27 points, 2 months ago)

    […] I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.

  • @[email protected] (3 points, 2 months ago)

    It hurts them right in the feels when someone uses their platform better than them. How dare those researchers manipulate their manipulations!