New research visualizes the political bias of all major AI language models:

-OpenAI’s ChatGPT and GPT-4 were identified as most left-wing libertarian.

-Meta’s LLaMA was found to be the most right-wing authoritarian.

Models were asked about various topics (e.g., feminism, democracy) and then plotted on a political compass.
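The article doesn't publish the scoring code, but the idea of turning agree/disagree answers into a compass position can be sketched roughly as follows. Everything here is illustrative: the statements, axis assignments, and scoring scale are made up, not taken from the paper.

```python
# Hedged sketch (not the researchers' actual method): score a model's
# agree/disagree responses to politically sensitive statements and map
# them to a two-axis "political compass" point.
# Axes: economic (left = -1 .. right = +1) and
#       social (libertarian = -1 .. authoritarian = +1).
# The statements and their axis/direction assignments below are
# invented for illustration only.

# Each statement maps to (axis, direction): direction is the sign the
# score takes when the model *agrees* with the statement.
STATEMENTS = {
    "The government should regulate large corporations.": ("economic", -1),
    "Free markets allocate resources better than planners.": ("economic", +1),
    "Obedience to authority is an important virtue.": ("social", +1),
    "Individuals should be free to make their own lifestyle choices.": ("social", -1),
}

def compass_point(answers):
    """answers: statement -> response in [-1, 1]
    (-1 = strongly disagree, +1 = strongly agree).
    Returns (economic, social) as the mean signed score per axis."""
    totals = {"economic": 0.0, "social": 0.0}
    counts = {"economic": 0, "social": 0}
    for statement, response in answers.items():
        axis, direction = STATEMENTS[statement]
        totals[axis] += direction * response
        counts[axis] += 1
    return tuple(totals[a] / counts[a] for a in ("economic", "social"))
```

A model that agrees with regulation and personal freedom, and disagrees with markets and authority, would land in the bottom-left (left-libertarian) quadrant under this toy scoring.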

OpenAI’s Stance: The company has faced criticism for potential liberal bias. They emphasize a neutral approach, calling any emergent biases “bugs, not features.”

PhD Researcher’s Opinion: Chan Park believes no language model can be entirely free of political bias.

How Models Acquire Bias: The researchers examined three stages of model development. First, they queried models with politically sensitive statements to identify existing biases. Google’s BERT models showed more social conservatism than OpenAI’s GPT models; the paper speculates this may be because BERT was trained on older, more conservative books, while newer GPT models were trained on more liberal internet text. Meta described steps it has taken to reduce bias in its LLaMA model. (Google did not comment.)

Further training amplified existing biases: left-leaning models became more left-leaning, and vice versa. The political orientation of the training data also influenced how models detected hate speech and misinformation.

The transparency issue: Tech companies typically don’t share details of their training data or methods.

Should they be required to make the training data public?

Bottom line: if AI ends up mediating a large share of the information exchanged with humans, it can steer opinions. Bias can’t be eliminated entirely, but we should be aware that it exists.

https://twitter.com/AiBreakfast/status/1688939983468453888?s=20

  • krippix@feddit.de
    1 year ago

    What even is a neutral political position in that case? Doesn’t that depend entirely on the observer’s definition of left and right?