A Quick Note on the Competence of Artificial Intelligence…

In writing the previous post about the Swiss organization that is paid to help people kill themselves, I was planning on mentioning Philip Barry’s mysterious cult drama “Hotel Universe.” Barry, whose most lasting work is “The Philadelphia Story” but who was once one of Broadway’s most successful playwrights, wrote a fascinating but perplexing drama in which the suicide of a friend during a group vacation sends his characters on an existential journey into fantasy, madness, or a mass hallucination. My now defunct theater company performed the piece, because those were the kinds of non-commercial, crazy productions we gravitated to. The last words of the dead friend were, “Well, I’m off to”…somewhere. I couldn’t remember. The suicidal woman I was writing about had told her family she was off to Lithuania, which is what reminded me of “Hotel Universe.”

But I couldn’t remember where Barry’s character was “off to” when what he meant was “I’m going to kill myself now.” It was driving me crazy, so I thought, “What a perfect question for AI!” So I asked Google’s bot, “In ‘Hotel Universe,’ the man who is going to kill himself says, ‘I’m off to…’ where?” The thing answered quite assertively,

“In the play Hotel Universe by Philip Barry, the character Tom says, “I’m off to… the bottom of the sea” before attempting suicide. 

  • Who says it: The line is spoken by Tom.
  • Context: Tom is in despair and decides to end his life, a decision he expresses with this dramatic phrase.
  • Significance: The phrase is symbolic, indicating his desire to escape his painful existence and find oblivion, much like the deep sea.
You can find this line in the first act of the play.”

I didn’t remember the whole line, but I knew it wasn’t THAT. Sure enough, the correct line suddenly came to me. What Tom said before his suicide was “I’m off to Africa!”

Don’t trust artificial intelligence…at least in matters involving literature. Aspiring student cheaters beware.

10 thoughts on “A Quick Note on the Competence of Artificial Intelligence…”

  1. Our company recently launched an AI chat to give us answers to questions. I had to laugh because there’s a disclaimer at the bottom that cautions us that answers may not be correct.

    I’ve worked on my voice for years as I use it in my work. It irritates me to death when someone cuts me off and asks, “Are you AI?” or worse “Are you a robot?”, “a human?”, “a person?” I literally tell them my name when we begin speaking, make a joke here and there and, somehow, we’ve gotten to the point where a “too perfect” voice is considered non-human.

    Another tidbit: This weekend, at Dragon-Con in Atlanta (yes, I went there after my historical tour of middle Georgia), an artist was removed from the vendor hall because his work was AI generated. He refused to leave and the police were called to evict him.

    It’s spurred another large convention company to announce that it was banning AI art from its vendor halls, as well.

    • I have to chuckle at the disclaimer. It’s similar to the IRS agent disclaimer that their answers may not be accurate for filing your taxes.

      Color me jealous that you were at DragonCon. Some of the cosplay is so creative.

    • This past tax season, my company announced that it was introducing an AI chat feature. It, too, was accompanied by similar disclaimers plus, as I recall, admonitions about not disclosing confidential client information during your chats.

      I have never looked any further. To me, when you use a paid tax preparer, what you should be getting is my knowledge of the tax code and, even more importantly, my judgment and experience in asking you the right questions and applying the answers to those questions to properly write the best tax return.

      Anyone can do data entry (and mostly get it right). Whenever I talk about platforms like Turbotax and the like, I tell people that their tax return is only as good as the tax professional doing it. If that’s Joe Blow from down the street, well . . . I pick up some decent clients by fixing their self-prepared messes.

      ———-

      Congrats on being able to attend DragonCon and I hope you enjoyed it. It is almost within our driving range. We’ve toyed occasionally with trying to get a table there as a bookseller — but for us, it would be hopeless. Our procedures are simply not geared for that kind of high-volume environment. We’d have to totally revamp them, and even then it would be dicey.

      We have a couple friends who do vend at DragonCon — it’s high stress, very demanding, but can also be very rewarding.

      It would be neat to be able to attend just as a regular person. I look at the panels at our small cons and some of them are very enticing. I’m sure DragonCon has those on steroids.

  2. I asked ChatGPT and it said it couldn’t find anything. So I asked Grok, and this is what it gave me:

    In Philip Barry’s play Hotel Universe, the character Tom Ames says, “I’m off to bed. Good night, everybody.” This line occurs in Scene 2, as Tom exits the conversation, reflecting his casual and somewhat detached demeanor. If you were looking for a different quote or context, could you clarify?

    That didn’t make sense, so I asked again, saying, “Isn’t this about a suicide?” Then I got this:

    You’re likely referring to a moment in Philip Barry’s Hotel Universe where Tom Ames is associated with a suicide scene, but the specific quote and context need clarification. In the play, there is a reference to a young man, not explicitly named as Tom in the provided sources, who “apparently commit[s] suicide as he dove off a high promontory into the ocean saying that he was off to Africa, and then was apparently lost in the sea” (,,,). This event is observed by some characters and sets a dark tone, as Lily Malone encourages others to contemplate suicide, though the play later shifts to a more hopeful perspective.

    However, there’s no direct confirmation in the provided sources that Tom Ames himself is the character who dies by suicide or says, “I’m off to Africa.” The sources suggest this suicide involves a minor character or an unnamed figure, not Tom, who is a central character played by Franchot Tone in the original 1930 production. Tom’s arc involves spiritual and existential struggles, including his loss of faith, but the play doesn’t explicitly depict him committing suicide.

    The quote you might be thinking of, “I’m off to…,” could be conflated with the line about the young man saying he’s “off to Africa” before his apparent suicide. If you meant Tom’s line “I’m off to bed. Good night, everybody” from Scene 2, this doesn’t directly connect to a suicide act but rather to his exit from a group scene.

    Since I have not seen the play, I can’t confirm or deny. Either way, I think trusting AI is not something anyone should do.

    • Great work. In fact, I don’t remember if Tom was the suicide; I shouldn’t have trusted AI with that either. I do know that the character who killed himself was named, and he never appears in the play. I’m pretty sure “The Big Chill” was inspired by “Hotel Universe.” Someone should ask Lawrence Kasdan about that.

  3. This is a perfect example of an AI hallucination. That’s a technical term describing the situation where an AI is unable to give an accurate answer, mostly due to a lack of training data for the model, yet the design prefers giving an answer over admitting ignorance. Sometimes giving an answer that doesn’t match the question may be preferable to admitting ignorance, e.g. in AI applications that produce visual output (pictures, videos, animations); these types of applications allow the user to do trial-and-error runs by improving prompts (which can be quite complex) and selecting the best result.

  4. My pastor recently told me that he had received an invitation to a webinar about using AI to write sermons. He was appalled that any authentic pastor would even consider such a thing, but allowed that it might be amusing to read, considering AI’s track record of errors in other areas where it has been used. Of course, someone with no knowledge of scripture or hermeneutics might not spot glaring errors in a sermon, making such use doubly troublesome.

  5. Off topic of AI competency, today one of my children was late to school due to an orthopedic appointment. As a result, he missed his science test.

    When we picked all of our kids up from school, he mentioned that a student in his class was so nervous about the test that he vomited in the classroom. Thus, the test was postponed.

    An untrained AI at least doesn’t make a mess all over your computer; it just pathologically makes up stuff.

    I wonder if Google Gemini is a bit more ethical on this matter, though. Sometimes when I ask a question I get a thorough answer. Other times, when I know it has captured my words, it responds with “I did not understand,” which is a lie, but not a fabricated answer.
