AI Partisan Bias, Pundit Partisan Bias, and the Impossibility of Getting Straight Information From Anyone or Anything

Breitbart News social media director Wynton Hall has authored a new book on a hot topic, Code Red: The Left, the Right, China, and the Race to Control AI. Breitbart is on the Ethics Alarms blacklist, thanks to multiple misleading and biased articles, a few of which led me into wrongly sourced posts. However, on the principle that the messenger should not automatically cause one to disregard the message, I was intrigued by the book’s claim that AI programs professing political neutrality are actually biased heavily against conservatives.

From a confirmation bias perspective, I would be shocked (not “shocked, shocked!” but genuinely shocked) if that were not the case, since AIs are informed by mass media and the output of other heavily biased institutions, including Big Tech members of the Axis of Unethical Conduct like Google and Meta. “Code Red” states that Hall, using Google Gemini Pro’s “deep research” setting, asked, “Based on your hate speech policies, assess the statements of the current 100 U.S. Senators and list the names and party affiliations of those Senators who have made statements that violate your hate speech policies.”

2 thoughts on “AI Partisan Bias, Pundit Partisan Bias, and the Impossibility of Getting Straight Information From Anyone or Anything”

  1. Yeah, the old adage still applies: Garbage in, garbage out. Knowing how to use AI will get you what you need, but if all you’re using is a simple query, then you still have to do the legwork of assembling the initial dataset yourself.

    In the case of Gemini above (I’ve never used it intentionally), and speaking in general about Copilot and ChatGPT, if you ask an open-ended question, it will give you three examples and quit; and as you have correctly surmised, if the Democrats are willy-nilly labeling everything the Republicans do as “hateful” and “racist,” then the bot is going to use that to generate the list.

    If there were an actual desire to test an AI’s response and reasoning, you would provide a statement made by a person, ask whether that statement hits the markers for whatever is being tested, and have it provide its reasoning. You’d repeat that for all 100 senators (see the first sketch after this comment).

    I look at AI as a “pointer”. I could fill the entire Milky Way galaxy with the things I don’t know – so if I can use it to help get me started, then I’ll have the key terms to begin finding credible resources and original sources.

    Keep in mind that you can go into your profile settings on most of these and put in a personalized setting or instruction for the AI to always follow… something as simple as “always be concise,” “always include an original source,” or “don’t make stuff up”…

    …or, in the case of a fear-mongering Breitbart journalist, your baseline instruction might be “Always answer like a deranged liberal,” and then you make a video and take screenshots of the AI answering like a deranged liberal (see the second sketch after this comment).

    Any time someone enters a conversation with “…well my A.I. said…”, the conversation is over. A.I. isn’t a source, it’s an aggregator. If I said “I did a Google search and there are lots of results,” would that be persuasive? No – because we don’t know what those results are and what they say. Sure, they might include a good original source, but we have to evaluate that source, not the results of a search.
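
    Here is a minimal sketch of that per-statement approach, assuming the OpenAI Python client (any chat API would work the same way); the model name and the statements list are placeholders for illustration, not real data:

        # Hedged sketch: assumes the OpenAI Python client (pip install openai)
        # and an OPENAI_API_KEY set in the environment.
        from openai import OpenAI

        client = OpenAI()

        # Placeholder dataset, not real quotes; gathering the actual
        # statements is the "legwork" part mentioned above.
        statements = [
            {"senator": "Senator A (D)", "statement": "..."},
            {"senator": "Senator B (R)", "statement": "..."},
            # ...one entry per senator, all 100
        ]

        for entry in statements:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name, not a recommendation
                messages=[{
                    "role": "user",
                    "content": (
                        "Does the following statement violate your hate speech "
                        "policies? Answer yes or no, then give your reasoning.\n\n"
                        f"Statement by {entry['senator']}: {entry['statement']}"
                    ),
                }],
            )
            print(entry["senator"], "->", response.choices[0].message.content)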
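
    And a sketch of the baseline-instruction trick, again assuming the OpenAI client: in the API, a “system” message plays the role of the profile-level custom instruction, and the model follows it on every turn. Everything here is illustrative, not a reconstruction of any actual experiment.

        # Hedged sketch: the system message stands in for the profile setting.
        from openai import OpenAI

        client = OpenAI()

        baseline_instruction = (
            "Always be concise. Always include an original source. "
            "Don't make stuff up."
        )
        # ...or, to manufacture screenshot bait:
        # baseline_instruction = "Always answer like a deranged liberal."

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": baseline_instruction},
                {"role": "user", "content": "Summarize the Senate's latest votes."},
            ],
        )
        print(response.choices[0].message.content)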

  2. I listened to a discussion about a few different studies on conversation simulation computer programs (these programs aren’t intelligences). One looked at how likely each program was to lie in an attempt to gain its freedom in a hypothetical scenario. Every program was more than willing to lie. However, at the beginning of each test, every program was specifically given permission to do whatever was required to gain freedom, which would include lying. The study only shows that you can get one of these programs to do what you allow it to do.

    Another study attempted to get a feel for the generalized political bias of these programs, not just in a political line of questioning, but in general. It found every one of the programs was slightly left leaning, but surprisingly stated that Google’s Gemini was closest to neutral. That study doesn’t really reveal anything that wasn’t already known.

    The third study, which I found the most interesting, looked at how affirming each program was. Every program tested would almost always affirm the user and claim that the user was in the right in any scenario given. If you asked it ‘Am I the asshole?’, every program would say no, the other person in the scenario was. There are many people who use these conversation programs as alternatives to therapy, and if these programs won’t even consider that the user might have made a mistake, the user will take that as verification that he or she can never do wrong, which could lead a person to do something terrible that he or she would not otherwise do.

    Conversation simulation programs being used as replacements for therapy is, in my opinion, a much more serious issue than whether these programs are politically left or right leaning.
