It probably seems weird asking this on Lemmy, but of course posting this on Reddit would get me banned or the post taken down. Reddit doesn’t like criticism of Reddit. Anyways….

Over the last 10 years as a Reddit user, I’ve come to believe the share of accounts that are bots or foreign bad actors has tipped past 50%. I have no statistics to speak of, but I’d love it if somebody did and could share.

Based purely on some of the conversations, posts, rage bait, strong ideologies, etc… I’m pretty convinced that a reasonable sample of humans could not or would not act the way they do on that platform. So often now I see posts that I feel are specifically attempting to sow discord and disagreement.

Does anyone else agree? What percent of users do you think are bots? Foreign bad actors?

Sadly, I think Reddit has no desire to find out or do anything about it. There would be no upside to them correcting their advertising numbers.

  • davel [he/him]
    7 months ago

    > Lemmy is probably at 25% government agents or people acting on behalf of governments including US, Russia, China, possibly other allies of the aforementioned.

    Come on: Lemmy isn’t nearly big enough for state actors to bother with—yet. In the social media space, Lemmy is a rounding error.

    The military-intelligence-industrial complex is aware of the fediverse’s existence, though:

    Atlantic Council » Collective Security in a Federated World (PDF)

    > Many discussions about social media governance and trust and safety are focused on a small number of centralized, corporate-owned platforms that currently dominate the social media landscape: Meta’s Facebook and Instagram, YouTube, Twitter, Reddit, and a handful of others. The emergence and growth in popularity of federated social media services, like Mastodon and Bluesky, introduces new opportunities, but also significant new risks and complications. This annex offers an assessment of the trust and safety (T&S) capabilities of federated platforms—with a particular focus on their ability to address collective security risks like coordinated manipulation and disinformation.

      • davel [he/him]
        7 months ago

        It certainly can be done, and without much effort, but there’s virtually no bang for that buck right now, because the audience is laughably small.

    • @[email protected]
      7 months ago

      If it’s big enough for us, it’s big enough for state actors. They may not be putting in a ton of effort yet, but I’m sure they’re here.

      • sunzu2
        6 months ago

        It mostly seems to be the US so far… How .world handled the dead CEO story was very telling about who their handlers are.

        • @[email protected]
          6 months ago

          Hilarious, and no. Turns out we’re all “handled” by legal authorities.

          Most of our communities have different mods. Some were a bit overzealous at first (imo). It seems the instance doesn’t get much credit for avoiding the Reddit supermod situation; instead, the whole instance is judged by whichever mod each user dislikes the most.