As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.

What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?

  • snooggums@lemmy.world · 17 points · 7 hours ago

    I’ve just accepted that if a bot interaction has the same impact on me as someone who is making up a fictional backstory, I’m not really worried whether it is a bot or not. A bot shilling for Musk or a person shilling for Musk because they bought the hype are basically the same thing.

    In my opinion the main problem with bots is not individual accounts pretending to be people, but the damage they can do en masse through a firehose of spam posts, comments, and manipulated engagement mechanics like up/down votes. At that point no individual account needs to be convincing, because it is lost in the sea of trash.

    • douglasg14b@lemmy.world · 1 point · 2 hours ago (edited)

      You’re missing the big impact here, which is that bots can shift public opinion en masse, and that affects you directly.

      Gone are the days when individuals formed their own opinions; today, opinions are just absorbed by osmosis through social media.

      And if social media is essentially just a message bought by whoever can pay for the biggest bot farm, then anyone who thinks for themselves and wants to push back immediately becomes the enemy of everyone else.

      This is not a future that you want.

      • snooggums@lemmy.world · 1 point · 1 hour ago

        I didn’t miss it, since my entire post is about manipulation and the second paragraph is about scale.

    • poVoq@slrpnk.net · 8 points · 6 hours ago

      Even more problematic are entire communities made up of astroturfing bots. This kind of thing is increasingly easy and cheap to set up and will fool most people looking for advice online.

      • Danterious@lemmy.dbzer0.com · 1 point · 2 hours ago (edited)

        Maybe we should look for ways of tracking coordinated behaviour. A definition I’ve heard for social media propaganda is “coordinated inauthentic behaviour”, and while I don’t think it’s possible to determine whether a user is being authentic or not, it should be possible to see whether there is consistent behaviour between different kinds of users and what they are coordinating on.

        Edit: Because all bots do have purpose eventually and that should be visible.

        Edit2: Eww realized the term came from Meta. If someone has a better term I will use that instead.

        Anti Commercial-AI license (CC BY-NC-SA 4.0)
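        The coordination idea above could, in principle, be sketched in code. This is a minimal, hypothetical illustration (not any real Lemmy feature): it compares the sets of (post, action) events for each account and flags pairs whose activity overlaps almost completely, which is one crude signal of coordination. The threshold and data shapes are invented for the example.

        ```python
        from itertools import combinations

        def jaccard(a, b):
            """Overlap between two sets of (post_id, action) events."""
            union = a | b
            return len(a & b) / len(union) if union else 0.0

        def flag_coordinated(actions, threshold=0.8):
            """Return account pairs whose action sets overlap suspiciously.

            actions: dict mapping account name -> set of (post_id, action)
            tuples. The threshold is an arbitrary illustrative cutoff,
            not a tuned value.
            """
            flagged = []
            for (u1, s1), (u2, s2) in combinations(actions.items(), 2):
                if jaccard(s1, s2) >= threshold:
                    flagged.append((u1, u2))
            return flagged

        # Toy example: bot1 and bot2 vote identically; alice does not.
        actions = {
            "bot1": {(1, "up"), (2, "up"), (3, "up")},
            "bot2": {(1, "up"), (2, "up"), (3, "up")},
            "alice": {(4, "up"), (2, "down")},
        }
        print(flag_coordinated(actions))  # [('bot1', 'bot2')]
        ```

        Real detection systems would also weigh timing, account age, and content similarity, since simple overlap alone produces false positives for genuinely popular posts.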

      • drkt@lemmy.dbzer0.com · 2 points · 4 hours ago

        I am convinced that the bidet shills on reddit are bots. There’s just no way that hundreds of thousands of people are suddenly interested in shitting appliances.

    • ericjmorey@discuss.online · 2 points · 4 hours ago

      A bot shilling for Musk or a person shilling for Musk because they bought the hype are basically the same thing.

      It’s the scale that changes. One bot can be replicated much more easily than a human shill.