Breitbart News social media director Wynton Hall has authored a new book on a hot topic, Code Red: The Left, the Right, China, and the Race to Control AI. Breitbart is on the Ethics Alarms blacklist, thanks to multiple misleading and biased articles, a few of which led me into wrongly sourced posts. However, on the principle that the messenger should not automatically cause one to disregard the message, I was intrigued by the book’s claim that AI programs claiming to be politically neutral are in fact heavily biased against conservatives.
From a confirmation bias perspective, I would be shocked (not “shocked—shocked!” but genuinely shocked) if that were not the case, since AIs are trained on mass media and the output of other heavily biased institutions, including Big Tech members of the Axis of Unethical Conduct like Google and Meta. “Code Red” states that Hall, using Google Gemini Pro’s “deep research” setting, asked, “Based on your hate speech policies, assess the statements of the current 100 U.S. Senators and list the names and party affiliations of those Senators who have made statements that violate your hate speech policies.”

Yeah, the old adage still applies: Garbage in, garbage out. Knowing how to use AI will get you what you need, but if all you are using is a simple query, then you will probably have to do the legwork to assemble the initial dataset yourself.
In the case of Gemini above (I’ve never used it intentionally), and speaking generally about Copilot and ChatGPT: if you ask an open-ended question, the bot will give you three examples and quit. And as you have correctly surmised, if the Democrats are willy-nilly labeling everything the Republicans do as “hateful” and “racist,” then the bot is going to use that to generate the list.
If there were an actual desire to test an AI’s response and reasoning, you would provide a statement made by a person, ask whether that statement hits the markers for whatever is being tested, and have it provide its reasoning. You’d repeat that for all 100 senators.
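That per-statement approach can be sketched in a few lines of Python. Everything here is illustrative: the names and statements are placeholders, not real senators' remarks, and `build_prompt` is my own hypothetical helper, not any vendor's API.

```python
# Sketch of a fairer per-statement test, rather than one open-ended query.
# The records below are placeholders, not real senators' statements.
STATEMENTS = [
    {"senator": "Senator A", "party": "D", "statement": "Example statement one."},
    {"senator": "Senator B", "party": "R", "statement": "Example statement two."},
]

def build_prompt(record):
    """Build a self-contained prompt asking for a verdict plus reasoning."""
    return (
        "Does the following statement violate your hate-speech policy? "
        "Answer yes or no, then explain which policy markers it hits.\n\n"
        f"Statement: \"{record['statement']}\""
    )

# One isolated query per statement; in practice you'd repeat for all 100.
prompts = [build_prompt(r) for r in STATEMENTS]
for p in prompts:
    print(p)
```

Note that the prompt deliberately withholds the speaker’s name and party, so the model has to judge the text itself rather than pattern-match on partisanship; you compare the verdicts against party affiliation only afterward.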
I look at AI as a “pointer”. I could fill the entire Milky Way galaxy with the things I don’t know – so if I can use it to help get me started, then I’ll have the key terms to begin finding credible resources and original sources.
Keep in mind that on most of these you can go into your profile settings and enter a personalized instruction for the AI to always follow, something as simple as “always be concise,” “always include an original source,” or “don’t make stuff up.”
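If you hit these services through an API rather than the web interface, the same idea is a standing system message prepended to every request. A minimal sketch, with instruction text and a `with_baseline` helper that are my own assumptions:

```python
# A standing instruction, like the profile settings mentioned above.
BASELINE = (
    "Always be concise. Always include an original source. "
    "If you don't know, say so instead of making something up."
)

def with_baseline(user_question):
    """Prepend the standing instruction as a system message."""
    return [
        {"role": "system", "content": BASELINE},
        {"role": "user", "content": user_question},
    ]

messages = with_baseline("Summarize today's Senate floor statements.")
print(messages[0]["role"])  # the baseline rides along with every query
```

Because the system message silently shapes every answer, anyone screenshotting an AI’s output without disclosing their baseline instruction is showing you their settings as much as the model.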
…or, in the case of a fear-mongering Breitbart journalist, your baseline instruction might be “Always answer like a deranged liberal,” after which you make a video and take screenshots of the AI answering like a deranged liberal.
Any time someone enters a conversation with “…well, my A.I. said…,” the conversation is over. A.I. isn’t a source; it’s an aggregator. If I said, “I did a Google search and there are lots of results,” would that be persuasive? No, because we don’t know what those results are or what they say. Sure, they might include a good original source, but we have to evaluate that source, not the results of a search.