The best conversations I still have are with real people, but those are rare. With ChatGPT, I reliably have good conversations, whereas with people, it’s hit or miss, usually miss.

What AI does better:

  • It’s willing to discuss esoteric topics. Most humans prefer to talk about people and events.
  • It’s not driven by emotions or personal bias.
  • It doesn’t make mean, snide, sarcastic, ad hominem, or strawman responses.
  • It understands and responds to my actual view, even from a vague description, whereas humans often misunderstand me and argue against views I don’t hold.
  • It tells me when I’m wrong but without being a jerk about it.

Another noteworthy point is that I’m very likely on the autism spectrum, and my mind works differently from the average person’s, which probably explains, in part, why I struggle to stay interested in human-to-human interactions.

  • the post of tom joad@sh.itjust.works · 3 months ago

    Have you ever tried inputting sentences that you’ve said to humans to see if the chatbot understands your point better? That might be an interesting experiment if you haven’t tried it already. If you have, do you have an example of how it did better than the human?

    I’m kinda amazed that it can understand your accent better than humans do, too. This implies chatbots could be a great tool for people trying to perfect their second language.

    • ContrarianTrail@lemm.eeOP · 3 months ago

      A couple of times, yes, but more often it’s the other way around. I input messages from other users into ChatGPT to help me extract the key argument and make sure I’m responding to what they’re actually saying, rather than what I think they’re saying. Especially when people write really long replies.

      The reason I know ChatGPT understands me so well is from the voice chats we’ve had. Usually, we’re discussing some deep, philosophical idea, and then a new thought pops into my mind. I try to explain it to ChatGPT, but as I’m speaking, I notice how difficult it is to put my idea into words. I often find myself starting a sentence without knowing how to finish it, or I talk myself into a dead-end.

      Now, the way ChatGPT usually responds is by just summarizing what I said rather than elaborating on it. But while listening to that summary, I often think, “Yes, that’s exactly what I meant,” or, “Damn, that was well put, I need to write that down.”

      • the post of tom joad@sh.itjust.works · 3 months ago

        So what you’re saying, if I’m reading right, is that chatbots are great for bouncing ideas off of, helping you explain yourself better as well as gather your own thoughts. I’m a bit curious about your philosophy chats.

        When you have a philosophical discussion, does the chatbot summarize your thoughts in its responses, or is it more humanlike, maybe disagreeing or bringing up things you hadn’t thought of, like a person might? (I’ve never used one.)

        • ContrarianTrail@lemm.eeOP · 3 months ago

          It’s a bit hard to get AI to disagree with you unless you’re saying something obviously false; it has a strong bias towards being agreeable. I generally treat it as an expert I’m interviewing: I ask what it thinks about something like free will and then ask follow-up questions based on its responses. It’s also great for bouncing around novel ideas, though even there it’s not too keen on blatantly calling out bad ones; instead, it makes you feel like the greatest philosopher of all time. There are some ways around this. ChatGPT can be prompted past many of its most typical flaws, for example by telling it that it’s allowed to speculate, or simply by asking it to point out the errors in an idea.

          But yeah, unless what I said was a question, its responses are generally just summaries of what I said. It’s essentially replying with a demonstration that it understood me, which it indeed does with an amazing success rate.