• Blaze (he/him) OP · 6 hours ago

    Do you really believe that someone could get a misinformation post heavily upvoted here? The main differences with Reddit are:

    • actual moderation (most Reddit mods have been inactive since the API shutdown)
    • public votes (via Mbin), which make it possible to identify bots and brigading
    • meta communities like !yepowertrippinbastards@lemmy.dbzer0.com, which provide a venue to call out toxic behavior.

    If someone did something similar here, they would at the very least be called out on !fediverselore@lemmy.ca or !yepowertrippinbastards@lemmy.dbzer0.com, and mods and admins would be asked to act. Reddit does not have such mechanisms.

    • AwesomeLowlander@sh.itjust.works · 5 hours ago

      I disagree with you to some extent.

      1. Moderation does not matter if the post is made on a comm or instance that favors it, cough .ml cough.
      2. Bots and brigading are not the issue here. Neither was a factor in the post I linked, and they are not a necessary part of the abuse process under discussion.
      3. Yepowertrippinbastards works at a small scale, but it is not inherently scalable. As the fediverse grows, naming and shaming bad actors individually will become less practical. It also does not help when the abuse technique (a pre-emptive blocklist) can be set up by any new account.
      4. The very nature of the abuse system being described means that anybody who reports it on YPTB or similar comms can do so only once before being blocked themselves and unable to view future posts of that sort.
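      The pre-emptive blocklist technique in points 3 and 4 can be sketched as a toy model. This is only an illustration, assuming Bluesky-style semantics in which a blocked account cannot see the blocker's posts; all account names are hypothetical, and this is not either platform's actual implementation.

```python
# Toy model of the "pre-emptive blocklist" abuse described above.
# Assumption: blocks hide the blocker's posts from the blocked account.

class Forum:
    def __init__(self):
        self.blocks = {}  # author -> set of accounts that author has blocked

    def block(self, author, target):
        self.blocks.setdefault(author, set()).add(target)

    def can_see(self, viewer, author):
        # A post is hidden from anyone its author has blocked.
        return viewer not in self.blocks.get(author, set())

forum = Forum()
troll = "new_throwaway"
watchdogs = ["mod1", "mod2", "factchecker1", "factchecker2"]

# Step 1: the fresh account first blocks mods and known debunkers.
for w in watchdogs:
    forum.block(troll, w)

# Step 2: the misinformation post is now invisible to all of them...
assert not any(forum.can_see(w, troll) for w in watchdogs)
# ...but still visible to everyone else.
assert forum.can_see("regular_user", troll)

# Step 3: a regular user who calls it out gets blocked too,
# and cannot see (or report) the next post of this kind.
forum.block(troll, "regular_user")
assert not forum.can_see("regular_user", troll)
```

      Note that the troll account needs no special privileges here: any new account can run step 1 before posting.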

      We should try to keep in mind that the fediverse and Lemmy will likely grow to larger scales. Any systems and safety measures we implement should take that into account. The block mechanism you suggest is ripe for abuse at large scale, and relying on mods/admins to combat it would place an unnecessary extra load on them, if it is even possible.

      • Blaze (he/him) OP · edited · 6 minutes ago

        The block mechanism you suggest is ripe for abuse at large scale, and relying on mods/admins to combat it would place an unnecessary extra load on them, if it is even possible.

        Interestingly enough, I feel like the current system requires mods/admins to keep watch at all times, since harassment can happen at any moment and users can’t really protect themselves.
        There is a scenario that is exactly the opposite of the one you presented:

        • the user gets harassed and blocks the harasser
        • the harasser can still comment on every post and comment by that user, requiring mods and admins to jump in to stop the abuse; with the Bluesky system, users themselves can prevent that
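        The asymmetry between the two scenarios can be sketched as a toy model. Neither branch is Lemmy's or Bluesky's actual implementation; each is just the assumed semantics from the discussion above.

```python
# Toy comparison of the two block semantics discussed above.
# "lemmy_style": blocking only hides content from the blocker;
# "bluesky_style": blocking also stops the blocked account from replying.
# Both rules are simplified assumptions, not real platform behavior.

def can_reply(style, commenter, post_author, blocks):
    blocked = commenter in blocks.get(post_author, set())
    if style == "lemmy_style":
        return True           # the block does not stop the harasser
    if style == "bluesky_style":
        return not blocked    # the block prevents further replies
    raise ValueError(f"unknown style: {style}")

blocks = {"victim": {"harasser"}}  # the victim has blocked the harasser

# Under the current model, mods/admins must step in:
assert can_reply("lemmy_style", "harasser", "victim", blocks)
# Under the Bluesky-style model, the victim is protected directly:
assert not can_reply("bluesky_style", "harasser", "victim", blocks)
```

        In the first branch, every new reply from the harasser becomes a moderation task; in the second, no moderator action is needed at all.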

        We should try to keep in mind that the fediverse and Lemmy will likely grow to larger scales.

        Bluesky just passed 21 million users.

        Bots and brigading are not the issue here. Neither was a factor in the post I linked,

        I had a look again at the post.

        I first prepared the account by blocking all the moderators and 4 or 5 users who usually call out misinformation posts.

        Would that be enough here? It depends on the topic of the thread (there is no link in the post, so I can’t see what they were talking about), but I’m pretty sure more than 4 or 5 people would call out the misinformation.

        The very nature of the abuse system being described means that anybody who reports it on YPTB or similar comms can do so only once before being blocked themselves and unable to view future posts of that sort.

        Can’t we use the same argument here that other people make about Lemmy being a public forum: the posts stay public for everyone except the blocked accounts?