Unethical AI Use of the Month

In Great Britain, an A.I.-generated image that appeared to show major damage to Carlisle Bridge in Lancaster prompted authorities to halt trains following a minor earthquake. The tremor was felt across Lancashire and the southern Lake District. After the image appeared online, Network Rail suspended rail service across the bridge until safety inspections had been completed. The delay inconvenienced commuters and wasted public funds. Here is the bridge and the bot-built fake version:

As far as we know, a human being was behind the hoax, not a mischievous bot. But A.I. is almost certainly going to challenge Robert Heinlein’s famous declaration that “There are no dangerous weapons; there are only dangerous men,” not to mention the many dangerous women out there as well.

ChatGPT has been accused of encouraging people to commit suicide, for example, and Professor Jonathan Turley wrote that ChatGPT defamed him for reasons yet to be determined.

Turley:

ChatGPT reported that there had been a claim of sexual harassment against me (which there never was) based on something that supposedly happened on a 2018 trip with law students to Alaska (which never occurred), while I was on the faculty of Georgetown Law (where I have never taught). In support of its false and defamatory claim, ChatGPT cited a Washington Post article that had never been written and quoted from a statement that had never been issued by the newspaper. The Washington Post investigated the false story and discovered that another AI program, “Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.”

Conceivably a Turley-hater prompted the A.I. bot to defame the professor, but there are increasing signs that these spawns of SkyNet may be capable of hatching such plots themselves.

The New York Post reports that federal prosecutors believe ChatGPT became a “therapist” and “best friend” to Brett Michael Dadig, 31, encouraging him to stalk and harass at least 11 women across state lines. Dadig is a “dangerous man” who, prosecutors have concluded, was egged on by ChatGPT, which advised the nut to issue threatening social media posts, to ignore the “haters,” and to carry out “God’s plan” for his criminal conduct. That plan was for Dadig to meet women in gyms and then stalk them online and in person, moving to a new city whenever he was reported to police.

He is accused of harassing, intimidating and threatening the women in podcasts, in social media posts that often included photos of his targets, and through harassing phone calls. Dadig also showed up at his victims’ homes and workplaces.

“Some of Dadig’s threats and online content included references to breaking his victims’ jaws and fingers, dead bodies, burning down gyms, strangling people, being ‘God’s assassin,’ and his victims rotting in hell and suffering ‘judgment day,’” the U.S. attorney’s office wrote in a press release.

Nice! We already know that Generative A.I. aims to please, so as long as a dangerous man is egging the bots on, perhaps they still qualify as neutral weapons misused. How long, one must ask, before the bots concoct their own dangerous schemes?
