Before I finish Part 2 of “On Maduro’s Arrest, the Ethics Dunces and Villains Are All In Agreement: What Does This Tell Us?”, it is worth noting that one analyst posed the question “Was it illegal for Trump to arrest Maduro?” to the AI bots ChatGPT and Grok.
ChatGPT, sounding, the inquirer notes, like a typical left-biased law professor, said the arrest was illegal. It also wrongly stated that Maduro had been legitimately elected, and it adopted the positions of “international experts” as well as the United Nations Charter.
Grok, however, pronounced the arrest legal, citing the Venezuelan dictator’s illegitimate election, his federal indictment, and the power of the President, as Commander in Chief, to execute criminal warrants abroad.
Just now I asked Google’s bot the same question. It refused to answer, saying only that “The legality of the U.S. operation to capture Nicolás Maduro on January 3, 2026, is a subject of intense debate, with most international legal experts considering it a violation of international law, while U.S. authorities defend it as a law enforcement action.”

I think Google’s answer is the safest, if not the most accurate. It defers the answer to humans, even if a large portion of humans have an agenda-driven answer.
This illustrates perfectly why AI cannot be used ethically in certain professions such as law.
Personally, I don’t give a flying fuck what any kind of AI “thinks” about anything. In my opinion, AI is an intellect-sucking black hole, and individuals who choose to use it regularly and trust it as some kind of valid knowledge base are being foolish.
I will say that I enjoy seeing AI getting comparison-tested like Jack did for us. It’s especially useful when different AIs come up with different biased answers to the exact same question(s).
What happens when all the AIs come up with the exact same biased answers and they’re all incorrect?
I’m of the same mind. I’ve played around with AI as entertainment, but I don’t want it helping me with my emails, etc. It is being pushed everywhere. I was checking a UPS delivery with a tracking number, and the AI kept asking me if I needed help. NO. I even told it that it was just an annoying nuisance.
Your comment also reminded me of a story I saw yesterday: “Cops Forced to Explain Why AI Generated Police Report Claimed Officer Transformed Into Frog.” “Heber City, Utah, was forced to explain why a police report software declared that an officer had somehow shapeshifted into a frog.” They use AI to generate police reports from the officers’ body cams. The AI picked up on the movie that was playing in the background, “The Princess and the Frog.”
https://futurism.com/artificial-intelligence/ai-police-report-frog
“I hope you’re satisfied! ‘Cause if you ain’t, don’t blame me! You can blame the AI on the other siiiiiide!”
AI simply denotes a category of IT tools employing machine learning, natural language processing (NLP), computer vision, and other techniques. AI has been used with success for pattern recognition, facial recognition, diagnostics and drug discovery in healthcare, analysis of security threats, fraud detection in finance, personalized recommendations at Amazon, Netflix, and YouTube, automating office workflows, generating media content, and chatbots disseminating information and knowledge. So AI is a technology that is widely and successfully used. There is no point in demonizing an entire technology; all technologies have downsides, shortcomings, and possibilities of abuse.

However, AI is not ready for prime time, or even applicable, everywhere. AI tools built on large language models (LLMs) must be trained to do their tasks properly, and they require a vast number of parameters; GPT-3, released in 2020, used 175 billion parameters. And if the training data used by the AI is biased, it will produce biased outputs.
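That last point, biased training data producing biased outputs, can be shown with a tiny sketch. This is a toy word-count classifier on made-up data, not any real system; the training set is deliberately rigged so that the neutral word “downtown” appears only in negative examples, and the model dutifully learns the spurious association:

```python
from collections import Counter

# Toy, deliberately biased training data: the sentiment-neutral word
# "downtown" happens to occur only in the negative examples.
training_data = [
    ("great service friendly staff", "pos"),
    ("excellent food great value", "pos"),
    ("downtown location rude staff", "neg"),
    ("downtown parking terrible food", "neg"),
]

# Count how often each word co-occurs with each label.
counts = {"pos": Counter(), "neg": Counter()}
for text, label in training_data:
    counts[label].update(text.split())

def classify(text):
    # Score each label by summing the training counts of the words seen.
    scores = {
        label: sum(c[w] for w in text.split())
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

print(classify("great service"))        # "pos", as expected
print(classify("tasty food downtown"))  # "neg", purely from the bias
```

The second query contains nothing negative, but because the training data tied “downtown” to negative labels, the model marks it negative anyway. Real systems are vastly larger, but the failure mode is the same.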