Oh, yeah, this is good advice.
The Washington Post (gift link, but don’t get excited, it’s a crummy gift) permitted a father-son team of faithful dupes to reassure us all that artificial intelligence is no different from any other machine, and can never compete with the human mind. Authors Andrew Klavan (a novelist) and Spencer Klavan (a classicist) are here to explain to us that artificial intelligence is to us what a wax writing tablet was to Plato (Spencer’s idea, I bet) or what computers were to past generations: technological advances humans foolishly thought could match the human mind. “But by using machines as metaphors for our minds, we fall prey to the illusion that our minds are nothing more than machines. So it’s not surprising that now, when the possibilities of AI are enthralling Silicon Valley, those who think programs can become conscious are trying to tell us that consciousness is just a program,” they write.
Point? We have nothing to worry about! These things can’t really think or feel like we do! A.I. lacks “what ancient philosophers called ‘the inner logos’ — the unique interior apparatus we have for structuring and understanding our experience of the world.”
Neither Klavan has anything in his biography to indicate he has more than the average landscaper’s understanding of technology, so what’s their authority for this verdict? Jesus, and Louis Armstrong. I kid you not. “The great Louis Armstrong, performing the George David Weiss and Bob Thiele song ‘What a Wonderful World,’ put it this way,” they write. “I see friends shaking hands, saying ‘How do you do?’ / They’re really saying: ‘I love you.’” Jesus put it similarly in Matthew 15: “The things that come out of a person’s mouth come from the heart.”
The two non-scientists have come to the dangerous and ignorant conclusion that A.I. bots are just “large language models” (LLMs) that are not capable of thought because, well, that’s what Louis sings. They tell us at the end,
….we are deceived by the psychological quirk that causes us to view ourselves as “like unto” the machines we make. We are not. We are not machines at all, in fact, but organic unities — brain, heart, loins and senses — animated by spirit and collaborating with creation on unique but interconnected experiences of life. The psalmist’s warning still applies: Those who project an inner life onto their own creations will cease to cherish the inner life unique to humankind. Those who make idols become like them.
If we wish to fulfill humanity’s inherent purposes of love and connection, art and culture, we can’t afford to keep making that mistake.
Gee, guys, thanks for that! Shut the hell up.
Here’s a mistake that human beings haven’t made before: assuming that a machine can never become a threat to human existence and civilization because our souls make us superior and God won’t side with the machines. We haven’t made the mistake before because we have never made machines like these machines before. Science fiction writers, however, have anticipated this day, and I’d urge the Klavans to read some of their speculations rather than singing along with Satchmo.
Earlier generations scoffed at writers’ imaginary future developments like submarines, television and rockets. I have another quote from popular culture to pay heed to: Han Solo’s admonition in “Star Wars” to Luke Skywalker: “Don’t get cocky, kid!” The dangers and risks created by artificial intelligence should not be underestimated; in fact, they should be overestimated, if only to make sure we don’t let our guard down. The bots aren’t going away, and if the Klavans want us to believe the bots can’t really think about us, that’s just what the bots would want us to believe.
The Post op-ed is irresponsible.
I do not think we should view AI bots through the lens of science fiction, because that is all fiction. AI is a form of information technology with some stunning capabilities. We should not become Luddites who dismiss the technology out of a fear of the future based on dystopian science fiction. We should not simply assume that the Singularity is near and that Skynet is about to overtake us.
Let’s assume for the sake of argument that in the near future AI will pass the Turing test, as in the movie Ex Machina, meaning that it will be impossible to determine whether certain output (e.g., conversations) is produced by humans or by AI bots. We may also assume that the exterior of some AI entities will be perfected so that they are physically indistinguishable from humans; e.g., the perfect AI girlfriend giving the same tactile experiences as a real woman.
I think the ethical and legal implications of this scenario need to be considered. Do the rules of ethics apply in the same way to Turing-intelligent AI entities as to humans? Will these AI entities have legal rights and obligations similar to those of humans?
We have fallen victim many times to accepting technologies wholeheartedly without first looking at the moral, ethical, and societal impact.
Grant me that all technologies do in fact have these impacts. Grant also that not all of the results of these impacts are positive progress; some have a negative impact on the human condition.
A mundane, yet seasonally appropriate, example is Christmas shopping. Before internet/technology days we scurried to the store, interacted with others, and wished everyone happy holidays (Merry Christmas, Happy Hanukkah).
Internet shopping is convenient, but there is no human interaction.
A personal example: I was dutifully shopping for family gifts and found something useful that I wanted to purchase for my wife and three sons; I wanted them in three different colors. When I clicked “purchase,” only one item appeared in the pseudo shopping cart. There was neither a click to increase the quantity nor one to select a variant color. The chat box opened, querying me. I could not get it to fetch three of the same item in various colors.
In my case the frustration rose to an exclamation of “A pox on you all!”
It’s interesting that this post has only garnered a couple of comments, and your previous AI post on the 7th didn’t get any. Not to oversell it, but AI may be the most important issue ever. Already, entry-level white collar jobs are disappearing. I heard of a recent study finding that 13% of such jobs were gone, and that was published back in August. AI is being compared to the industrial revolution in terms of workforce displacement, but it is exponentially more disruptive since it’s taking place in the span of a few years rather than several decades. As if that’s not enough, there’s serious talk that we may be ushering in an extinction event for Homo sapiens. On the plus side, though, my AI-heavy stock portfolio is doing quite well, thank you.
My own experience with AI has been less than encouraging. I really hadn’t made much use of it, but last week I was putting together a spreadsheet to project annual returns on some weekly stock market moves I was considering. Creating the spreadsheet and then populating the data for about 20 stocks was going to take me the better part of an hour, and then updating the data in real time would be difficult. It struck me that AI might do it better and more quickly than I could.
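(The math itself is trivial, by the way; here’s a rough Python sketch of the kind of projection I had in mind, with made-up tickers and weekly returns, assuming the weekly gains compound over 52 weeks:)

```python
# Rough sketch of the projection I wanted the spreadsheet to do.
# The tickers and average weekly returns below are made up for illustration.

weekly_returns = {
    "AAA": 0.005,   # hypothetical +0.5% average weekly move
    "BBB": 0.012,   # hypothetical +1.2%
    "CCC": -0.003,  # hypothetical -0.3%
}

WEEKS_PER_YEAR = 52

for ticker, r in weekly_returns.items():
    # Compound the weekly return over a year: (1 + r)^52 - 1
    annual = (1 + r) ** WEEKS_PER_YEAR - 1
    print(f"{ticker}: {r:+.2%} weekly -> {annual:+.1%} projected annual")
```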
My first task was to determine which AI to use. I figured I’d have to subscribe to one of them to get the job done decently and in a timely fashion, so I asked Google which AI was best for real-time data. The answer, both from the Google AI and various Reddit forums, was that an AI model I hadn’t heard of, Perplexity, was superior at pulling information from the web in real time. I found I could get a year-long free trial, so that’s what I went with.
Once installed, I asked Perplexity to create the spreadsheet according to my specifications and then pull the pricing information from the web and put it into the spreadsheet. There were a few hiccups at first, which I attributed to my own lack of clarity in providing instructions. Once the first page of the spreadsheet was complete, however, I assumed subsequent pages would be a simple matter of copy and paste, with minor adjustments because the dates were changing. I assumed wrong. Perplexity was perplexed. It made mistake after mistake, repeatedly setting up the equations incorrectly even though all it had to do was use the initial page as a model, or, worse yet, inputting completely erroneous data from the internet.
I had to supervise every step of the way, instructing it to correct what it had gotten wrong so my spreadsheet wouldn’t be completely useless. With Perplexity’s help, it took most of an afternoon to do what, as I said, would have taken me less than an hour on my own. The most surprising, and honestly amusing, part, though, was that Perplexity repeatedly complained that the task I had set before it was going to take too long, and wouldn’t I prefer that it just set up the spreadsheet and then let me fill in the data myself. Between the repeated errors and the complaining about how much of its time I was wasting, it felt like I was dealing with a bad Gen Z employee. All that was missing was the blue hair and the insistence on letting me know its pronouns. I finally gave up and did the spreadsheet myself.
On a more cataclysmic note, I just finished watching a Diary of a CEO interview from last week with “World-Leading AI scientist Professor Stuart Russell,” who co-wrote the standard AI textbook that, apparently, all the current AI leaders read in college.
(As an aside, I highly recommend the Diary of a CEO podcast if you don’t watch it already. Host Steven Bartlett does long form interviews with a variety of fascinating people, with topics ranging from science to business to health to politics. You need a big block of time to watch an entire interview, though. This particular interview was just over two hours.)
According to the podcast, the consensus among leaders in the AI field like Elon Musk and Sam Altman is that there is roughly a 25% chance AI will directly cause the extinction of the human race, and that we’re somewhere between 2 and 10 years from AI advancing to the point where it has this capability. This isn’t Greta Thunberg hysterically whining that we’ve stolen her future. These are heads of the AI industry, supposedly sober men who stand to benefit substantially, financially, from advances in AI. And yet they think there’s a decent chance AI will lead to our demise within a few short years. To put that 25% (1 out of 4) chance in perspective, we mandate that nuclear power plants have less than a 1 in a million chance of disaster before approving them for construction; a 25% chance is 250,000 times that threshold.
Maybe we ought to consider slowing AI development until we’re certain it can be contained? Just a thought.
Here’s a link to the Diary of a CEO podcast I referenced if you have a couple of spare hours and don’t want to sleep at night:
If you have a couple more spare hours and really want to be scared, here’s another Diary of a CEO podcast from two weeks ago about AI that I watched after writing most of this post. I advise against watching it unless you’d be prepared to give up everything and devote what’s left of your life to stopping AI:
Comment of the Day, Jon, and yes, I find the lack of AI commentary here fascinating. A reflection on the generations most active here? Apathy? Complacency? I just did a gratis ethics session for a law firm and 75% of it involved AI. I’ll post your comment, and watch it get light comments too….