• Séän
    5 points • 2 months ago

    The only subreddit I’ve been visiting is LeopardsAteMyFace and I got a warning. How is it inciting violence if it’s ALREADY happened?

    • @[email protected]
      5 points • 2 months ago

      Because an AI indiscriminately scanned your comment for keywords and issued a warning; that's what they aren't telling people. I had the same thing happen on another sub, except the mods were also involved, so it was a ban. Reddit's rules are vague AF.
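Nobody outside Reddit knows what their classifier actually does, but the kind of context-free keyword matching described above can be sketched like this (the keyword list and function are invented for illustration):

```python
# Hypothetical sketch: substring keyword matching with no awareness of
# context or tense, so reporting a past event trips it just like a threat.
VIOLENCE_KEYWORDS = ("attack", "shoot", "riot")

def flag_comment(text: str) -> bool:
    """Flag any comment containing a 'violence' keyword, context-free."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in VIOLENCE_KEYWORDS)

print(flag_comment("The mob attacked the courthouse last year"))  # True
print(flag_comment("I'm going to attack someone tomorrow"))       # True
print(flag_comment("Leopards ate my face"))                       # False
```

Both the news-style comment and the actual threat come back `True`, which is exactly the "how is it inciting violence if it's ALREADY happened" problem.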

  • @[email protected]
    10 points • 2 months ago

    I mean when everyone else is jettisoning moderation, reddit is cracking down on bots and trolls? I don’t hate it.

    • @[email protected]
      2 points • 2 months ago

      The thing is, in these recent ban waves they've been going after the low-hanging fruit: small accounts and small advertisers, not the problematic ones like the state-sponsored political troll accounts (both Russian and US), at least not in large numbers, even though we know those represent a large amount of the traffic on the site. Many accounts posting articles on Reddit have been pointed out as bots too.

    • kat
      10 points • 2 months ago

      I mean, they’re deciding what counts as violent based on whatever arbitrary classifiers they’re feeling that day.

    • skmn
      24 points • 2 months ago

      Sure, but isn’t Reddit the one who gets to choose what counts as bannable?

      • @[email protected]
        1 point • 2 months ago

        If their AI detects a ban on one of your other accounts, it decides to ban all your “connected accounts”, even if you haven’t used that account for years.
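The "connected accounts" behavior being described amounts to ban propagation over an account-link graph. A minimal sketch, assuming a simple breadth-first walk (the link graph and function are invented for illustration):

```python
from collections import deque

# Hypothetical account-link graph: accounts linked by shared signals
# (email, device, IP). One ban spreads to everything reachable,
# no matter how stale the linked account is.
ACCOUNT_LINKS = {
    "main": ["old_alt"],
    "old_alt": ["main", "ancient_alt"],  # unused for years, still linked
    "ancient_alt": ["old_alt"],
}

def propagate_ban(start: str) -> set[str]:
    """Breadth-first walk over the link graph, banning everything reachable."""
    banned = {start}
    queue = deque([start])
    while queue:
        for neighbor in ACCOUNT_LINKS.get(queue.popleft(), []):
            if neighbor not in banned:
                banned.add(neighbor)
                queue.append(neighbor)
    return banned

print(sorted(propagate_ban("main")))  # ['ancient_alt', 'main', 'old_alt']
```

One ban on `main` takes out the alt that hasn't been touched in years, which matches the complaint above.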

  • @[email protected]
    2 points • 2 months ago

    Honestly I wouldn’t be surprised if this started happening at Lemmy too. It’s a lot easier to control what kind of content is on a platform when you do something like this.

    Now, I don’t particularly think this is a good idea, but I can see the benefit of this as well. People have the freedom to upvote whatever they choose, even if I think they are dumb for doing it, and they shouldn’t have to worry about anyone other than law enforcement or lawyers (in extreme edge cases) using that information against them.

    • ᴇᴍᴘᴇʀᴏʀ 帝
      3 points • 2 months ago

      Honestly I wouldn’t be surprised if this started happening at Lemmy too.

      You missed the Vegan Cat Food Wars then.

    • @[email protected]
      8 points • 2 months ago

      One thing I like about Lemmy is that you can still upvote ‘removed by moderator’ comments, and I always do because it’s funny.

    • @[email protected]
      15 points • 2 months ago

      Honestly I wouldn’t be surprised if this started happening at Lemmy too. It’s a lot easier to control what kind of content is on a platform when you do something like this.

      This wouldn’t even be possible on Lemmy.

      • @[email protected]
        4 points • 2 months ago

        Right now maybe, but Lemmy is open source, and anyone could fork it to add this functionality.

        • HeyLow 🏳️‍⚧️
          13 points • 2 months ago

          Yeah, only per instance though. Upvotes and downvotes are already public information, so it wouldn’t take much for an instance admin to implement.
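Lemmy federates votes as ActivityPub Like/Dislike activities, so every receiving instance already sees which account voted on what. A minimal sketch of what an admin could log (the JSON below is a simplified illustration, not Lemmy's exact wire format):

```python
import json

# Simplified ActivityPub vote activity; real Lemmy payloads carry more fields.
raw = """{
  "type": "Like",
  "actor": "https://example.instance/u/alice",
  "object": "https://other.instance/comment/12345"
}"""

activity = json.loads(raw)

vote_log = []
if activity["type"] in ("Like", "Dislike"):
    # The voter's identity arrives in the federated payload itself.
    vote_log.append((activity["actor"], activity["object"]))

print(vote_log)
```

The point is that no extra surveillance is needed: the voter's identity is part of the federated payload, so "public votes" is a property of the protocol, not a policy choice any one instance can revoke.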

        • @[email protected]
          4 points • 2 months ago

          If Lemmy did this, you’d see forks ripping it out. Not to mention that anything other than Lemmy wouldn’t have it, so only a very small subset of the fediverse would be subject to this at all, making it practically useless.

  • ᴇᴍᴘᴇʀᴏʀ 帝
    11 points • edited • 2 months ago

    “We have done this in the past for quarantined communities and found that it did help to reduce exposure to bad content, so we are experimenting with this sitewide,” according to the main post. Reddit “may consider” expanding the warnings in the future to cover repeated upvotes of other kinds of actions as well as taking other types of actions in addition to warnings.

    Thoughtcrime time.

    Bigger picture - what if Xitter, Meta and Reddit (all run by Trump humpers) started centrally compiling this kind of thing to flag up “persons of interest”?