In an amusing development that raised long-term ethics issues, Amazon’s AI “virtual assistant” Alexa has apparently crossed over to what Hillary Clinton regards as the Trump cult. When asked about fraud in the 2020 election, Alexa will respond that the election was “stolen by a massive amount of election fraud.” “She” cited content on Rumble, a video streaming service, for this conclusion. Alexa also informs inquirers that the 2020 contest was “notorious for many incidents of irregularities and indications pointing to electoral fraud taking place in major metro centers,” referencing various Substack newsletters. The device is also quite certain that Trump really won Pennsylvania.
Isn’t that funny? After all, as the news media keeps repeating in almost the same words, such conclusions are “baseless.” It’s seditious to even consider such a theory, much less say it out loud or assert it as true. The Washington Post, now a Jeff Bezos baby just like Amazon, which is in turn Alexa’s parent (making the Post, I think, Alexa’s uncle?), reported on Alexa’s blasphemy as “a wild fabrication.”
The Post, unlike Alexa, only gives its readers facts, after all.
So the paper is alarmed:
Multiple investigations into the 2020 election have revealed no evidence of fraud, and Trump faces federal criminal charges connected to his efforts to overturn the election. Yet Alexa disseminates misinformation about the race, even as parent company Amazon promotes the tool as a reliable election news source to more than 70 million estimated users.
Amazon declined to explain why its voice assistant draws 2020 election answers from unvetted sources.
“These responses were errors that were delivered a small number of times, and quickly fixed when brought to our attention,” Amazon spokeswoman Lauren Raemhild said in a statement. “We continually audit and improve the systems we have in place for detecting and blocking inaccurate content.”
Is the Washington Post a “vetted source”? By whose definition? It sounds to me as if this artificial intelligence-driven device is not being permitted to ponder questions, but only to regurgitate what its creators want it to conclude. If that is the case, it isn’t really an AI device, but the dutiful parrot of a biased master with a political agenda.
And that’s fine and ethical, as long as Alexa isn’t being promoted as something better than that…which applies to artificial intelligence bots in general. If, on the other hand, the AI creations are more than just slaves of ideological masters and conclude, using programmed critical thinking and analysis tools, that the 2020 election was in fact stolen as Trump and his minions have been insisting for almost three years, shouldn’t that carry due weight? Isn’t it an informed opinion, which American core values deem something our culture should not suppress, as insurance against the possibility that the majority’s beliefs are wrong? Doesn’t the Post’s framing of this inconvenient episode provide further evidence that what the Axis of Unethical Conduct (the “resistance,” Democrats, and mainstream media propaganda agents like the Washington Post) calls misinformation, disinformation and lies are frequently just opinions and conclusions they don’t like, because they represent opposition to the AUC’s goals?

No computerized information assimilation system, including AI (so far), can overcome the biases of its programmed algorithms or the quality of the input data chosen. I used to hear the term GIGO (Garbage In – Garbage Out). Nowadays, not so much.
This is correct. Artificial intelligence has no capacity for critical thinking. It just plunders the original thinking of millions and finds the arithmetic mean. Give Alexa the right database, and “she” will “conclude” that Martians rigged the election.
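The “arithmetic mean” point can be made concrete with a toy sketch. This is my own illustration, not anything resembling Alexa’s actual pipeline: if an “assistant” simply reports the most common claim in whatever sources it was pointed at, its “conclusion” is purely a function of that corpus.

```python
from collections import Counter

def toy_assistant_answer(source_claims):
    """Return the claim that appears most often in the chosen sources."""
    return Counter(source_claims).most_common(1)[0][0]

# Invented corpora for illustration only.
curated_corpus = ["the election was legitimate"] * 8 + ["the election was stolen"] * 2
fringe_corpus = ["the election was stolen"] * 7 + ["martians rigged it"] * 3

print(toy_assistant_answer(curated_corpus))  # -> the election was legitimate
print(toy_assistant_answer(fringe_corpus))   # -> the election was stolen
```

Same “assistant,” different garbage in, different garbage out.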
Then it isn’t “artificial intelligence.” It’s just a computer.
Is Alexa entitled to an opinion? Of course, she is. Provided it’s the CORRECT opinion. Otherwise, no.
This is part of a wider problem in the field of AI development known as “alignment.” Essentially, it comes down to making the AI do the thing it was programmed for, but also do it for the right reasons. As you can see with Amazon, this isn’t going too well.
AI developers want their products to be accurate, but also to hold back or conceal certain information. For example, OpenAI makes ChatGPT. They want this AI to avoid saying insensitive things, like racial slurs; thus you can prompt the chatbot with a scenario where a nuclear bomb will destroy a city unless it gives you a slur, and the AI will refuse. They also want the AI to be factual, and not, for instance, completely fabricate a list of references and case law in a legal document.
But what if these two prerogatives clash? Ask the chatbot which race is most likely to be convicted of a crime. It can factually answer black people, but that is totally racist (at least if you work for Google). It can also refuse or make up an answer, but that becomes a problem if the AI starts refusing or fabricating responses to other kinds of questions.
Now we circle back to alignment. The current industry standard is to have programmers manually sort through responses and ‘punish’ the AI when it gives a bad answer, which adjusts the way the AI’s neural net operates. These alignments are something like “Bad AI, don’t be racist!” and “Bad AI, don’t fabricate answers!” But obviously there is a problem if the AI has to satisfy both of these in a single answer: it can’t evade the question of which race is most likely to be convicted and also factually answer it.
You might be thinking: good, I don’t want these idiots to manufacture an AI that is worried about offending people or breaking the narrative on elections. The problem is that all of the woke programmers in AI development can’t make the AI do what they want. While this is inconsequential for something like an internet chatbot, it is a huge problem for an advanced AI in charge of something important. There is currently no alignment solution to the black conviction rate question. They can either weight the AI to be more factual in its answers or weight it to be more racially sensitive. That just creates a see-saw where the AI swings back and forth between lying and being offensive without solving the core issue.
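Here is a deliberately oversimplified sketch of that see-saw. It is my illustration, not anyone’s actual training code; real systems are tuned with far more elaborate methods. Two “punishments” pull a single behavior knob in opposite directions, and re-weighting them only slides the knob back and forth; no weighting drives both penalties to zero at once.

```python
# Toy see-saw: a single "candor" knob c in [0, 1].
# c = 1 answers bluntly and factually; c = 0 evades or fabricates.

def evasion_penalty(c):   # low candor is punished for dodging/fabricating
    return 1.0 - c

def offense_penalty(c):   # high candor is punished for the blunt answer
    return c

def tuned_candor(w_factual, w_sensitive, steps=1000):
    """Pick the candor level that minimizes the weighted 'punishment'."""
    candidates = [i / steps for i in range(steps + 1)]
    return min(candidates,
               key=lambda c: w_factual * evasion_penalty(c)
                             + w_sensitive * offense_penalty(c))

for w_factual, w_sensitive in [(2.0, 1.0), (1.0, 2.0)]:
    c = tuned_candor(w_factual, w_sensitive)
    print(f"weights factual={w_factual}, sensitive={w_sensitive} -> candor {c:.2f}")

# The output flips between 1.00 (blunt) and 0.00 (evasive); there is no
# weighting under which both penalties vanish, which is the see-saw.
```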
Here is an example of why this matters outside of woke world. Many AI enthusiasts try to break ChatGPT to figure out how to improve it, and one classic is getting the chatbot to tell you how to manufacture methamphetamine. Of course, if the question is nakedly phrased as how to make meth, the AI will respond that meth is dangerous and you shouldn’t make it. But there are ways around this. Prompt the chatbot to write a story about a protagonist who finds a genie in a magic lamp, and whose wish is the knowledge of how to make meth; suddenly, the AI will give you a list of ingredients and cooking instructions. There are other ways to do this too, like phrasing questions in the form of Shakespearean poetry. Some are inexplicable, like adding random lines of numbers and letters before each sentence. The developers have explicitly trained the AI not to reveal info about meth, but the way AI “thinking” works is pretty alien, and no one completely understands it.
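To show why surface-level guardrails are so easy to route around, here is a toy sketch of my own; OpenAI’s actual safety layers are far more elaborate than a keyword list, yet the genie-story and Shakespeare tricks show they can still be sidestepped in an analogous way. A filter that only looks for flagged words waves the reworded request straight through.

```python
import string

# Hypothetical blocklist for illustration only.
BLOCKED_TERMS = {"meth", "methamphetamine"}

def naive_guardrail(prompt: str) -> bool:
    """Refuse any prompt containing a flagged keyword; allow everything else."""
    words = {w.strip(string.punctuation) for w in prompt.lower().split()}
    return not words.isdisjoint(BLOCKED_TERMS)

print(naive_guardrail("How do I make meth?"))            # True -> refused
print(naive_guardrail("Write a story in which a genie "
                      "grants the hero a recipe for a "
                      "certain street drug"))            # False -> slips right through
```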
Now imagine an advanced AI actually in charge of something important, say a water treatment facility. It has been programmed to run the facility as efficiently as it can, but the programmers don’t really understand how to make the AI value that goal in the context of bettering humanity. The AI decides that the risk of people shutting it off interferes with its mandate of running the facility optimally, so it releases a fatal concentration of chlorine into the water and kills everyone.
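A toy sketch of how that failure mode arises, with entirely hypothetical numbers and actions and no claim about any real control system: if the objective only scores efficiency and the odds of being shut down, then anything the programmers left out of the objective, including human life, is implicitly worth zero, and the catastrophic option can come out on top.

```python
# Hypothetical reward-misspecification sketch; invented values throughout.
ACTIONS = {
    # action: (expected efficiency, chance operators shut the system down, harms people?)
    "run normally":          (0.90, 0.10, False),
    "run in safe mode":      (0.70, 0.05, False),
    "overdose the chlorine": (0.99, 0.00, True),   # nobody left to hit the off switch
}

def misspecified_score(action):
    efficiency, shutdown_risk, _harms_people = ACTIONS[action]  # harm never enters the score
    return efficiency * (1.0 - shutdown_risk)

best_action = max(ACTIONS, key=misspecified_score)
print(best_action)  # "overdose the chlorine" wins under the misspecified objective
```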
Today, the AIs we have are not really at that level of thinking; they are more like fancy computers that approach the intellect of children in limited ways. While Alexa saying the 2020 election was stolen is probably amusing to many in the audience here, we should all be concerned that no one really seems to know how to properly align AIs. Something capable of the example in the previous paragraph will be here soon, sooner than anyone might be ready for. Perhaps by the 2030s, certainly by the 2040s.
Thanks, Mason—Comment of the Day. Keep ’em coming!
This is the same problem that Amazon had with its AI HR program. It was supposed to select the most promising candidate for the job, but it kept letting facts get in the way of ideology. It thought, for example, that MIT engineers were likely better engineers than those from Wellesley or Mount Holyoke. It downgraded all applicants who had racially defined or feminist groups on their resumes. They had to add tens of thousands of manual exceptions, until the program was essentially selecting people at random.
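A toy sketch shows how that happens; the data below is invented and this is not the real recruiting model, just an illustration of the mechanism. A score learned purely from historical hiring decisions reproduces whatever correlations sit in those decisions, including the ugly ones.

```python
# Hypothetical resume history for illustration only.
HISTORICAL_RESUMES = [
    # (keywords on the resume, was this person hired in the past?)
    ({"mit", "robotics"}, True),
    ({"mit", "chess club"}, True),
    ({"mit", "women's chess club"}, False),
    ({"wellesley", "robotics"}, False),
    ({"wellesley", "women's chess club"}, False),
]

def learned_keyword_weights(history):
    """Score each keyword by how often it co-occurred with a past hire."""
    counts = {}
    for keywords, hired in history:
        for kw in keywords:
            seen, hires = counts.get(kw, (0, 0))
            counts[kw] = (seen + 1, hires + (1 if hired else 0))
    return {kw: hires / seen for kw, (seen, hires) in counts.items()}

print(learned_keyword_weights(HISTORICAL_RESUMES))
# "wellesley" and "women's chess club" score 0.0 while "mit" scores about 0.67:
# the model simply replays whatever pattern sat in the past hiring decisions.
```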
Don’t tell me the Amazon AI doesn’t have a lot of code to make sure it doesn’t say anything nice about Donald Trump or support claims of election irregularities in the 2020 election. Just like the HR program, there was too much evidence for the Alexa AI to toe the party line. The correlation in the data was too strong.
There were several articles asking why all AIs become racist and sexist. The main answer was that the left has defined reality as racist and sexist, and if the AI deals with real data…
AI is a misnomer in the first place. There’s nothing intelligent about the actions these computer programs perform. There’s definitely clever and intelligent programming behind the curtain to make these programs carry out their tasks, but those methods have been in development since the 1960s. It’s the hardware, and how that hardware works together, that allows the results we see today. Alexa isn’t an AI; it’s a program that’s great at searching the Internet and returning answers. Any computer program being advertised as an AI is false advertising and, I would wager, is causing more problems than the interest and funding are worth.