I Dunno…The Latest From “The Ethicist” Has Me Tempted To Cancel Him

Prof. Kwame Appiah, the latest (and arguably the most ethical) in a long line of proprietors of the Sunday Times “The Ethicist” column, has long provided me with fodder for ethics posts, often critical ones. Appiah might finally have jumped the shark, however: I don’t know that I can continue to regard him highly after the collection of rationalizations he employed to answer a TV screenwriter’s query about whether it is ethical for him to use generative artificial intelligence bots to write screenplays he is paid for and puts his name on. “So what ethical line would I be crossing? Would it be plagiarism? Theft? Misrepresentation?” the inquirer asks.

My answer is simple: using AI as inspiration or even a model isn’t any of those things, just as a screenwriter reading other writers and watching movies with deft screenplays is drawing on legitimate source material for inspiration. Most artists “steal” from other sources, altering their models sufficiently to pass as original, and rightly so. There’s a line where imitation and inspiration become theft and plagiarism—like when the Beach Boys lifted Chuck Berry’s “Sweet Little Sixteen” almost note for note—but short of that line is just art as usual. At least in that case, however, the artist is the one doing the adapting and ethical tightrope-walking, not a machine. I feel the same way about authors using AI to write their products exactly the way I feel about AI judging: the human being, with his or her experience, quirks, patterns, worldview and more, is why a screenwriter has the job. Using a bot, and I don’t care how it has been programmed, to produce full scenes and dialogue is lazy and dishonest. Individuality is a writer’s, indeed any artist’s, most valuable commodity.

In short: what the screenwriter is proposing is unethical. Now here’s “The Ethicist’s” take. I’m going to post it all, and leave it to you to name the rationalizations, which you can find here.

“We’re done here.” Some years ago, sleepless in a hotel room, I flicked through TV channels and landed on three or four shows in which someone was making that declaration, maybe thunderously, maybe in an ominous hush. “We have nothing more to discuss.” “This conversation is over!” Do people really talk like that? Possibly, if they’ve watched enough television.

“My point is that a good deal of scripted TV has long felt pretty algorithmic, an ecosystem of heavily recycled tropes. In a sitcom, the person others are discussing pipes up with “I’m right here!” After a meeting goes off the rails, someone must deadpan, “That went well.” In a drama, a furious character must sweep everything off the desk. And so on. For some, A.I. is another soulless contraption we should toss aside, like a politician in the movies who stops reading, crumples the pages and starts speaking from the heart. (How many times have we seen that one?) But human beings have been churning out prefab dialogue and scene structures for generations without artificial assistance. Few seem to mind.

“When screenwriters I know talk about generative A.I., they’re not dismissive, though they’re clear about its limits. One writer says he brainstorms with a chatbot when he’s “breaking story,” sketching major plot points and turns. The bot doesn’t solve the problem, but in effect, it prompts him to go past the obvious. Another, an illustrious writer-director, used it to turn a finished screenplay into the “treatment” the studio wanted first, saving himself days of busywork. A third, hired to write a period feature, has found it helpful in coming up with cadences that felt true to a certain historical figure. These writers loathe cliché. But for those charged with creating “lean back” entertainment — second-screen viewing — the aim isn’t achieving originality so much as landing beats cleanly for a mass audience.

“So why don’t the writers feel threatened? A big reason is that suspense, in some form, is what keeps people watching anything longer than a TikTok clip, and it’s where A.I. flounders. A writer, uniquely, can juggle the big picture and the small one, shift between the 30,000-foot view and the three-foot view, build an emotional arc across multiple acts, plant premonitory details that pay off only much later and track what the audience knows against what the characters know. A recent study found that large language models simply couldn’t tell how suspenseful readers would find a piece of writing.

“That’s why I hear screenwriters talk about A.I. as a tool, not an understudy with ambitions. I realize you’ve got another perspective right now: “We’re not so different, you and I,” as the villain tells the hero in a zillion movies. But don’t sell yourself short. You fed the machine your writing before you asked it to draft a scene. You made it clear what dramatic work was to be done. And so long as you and the studio or production company are consenting parties on this score, you’ll be on the right side of the Writers Guild of America rules. Your employers wanted a script; you’ll be accountable for each page they read. And though generative A.I. was trained on the work of human creators, so were you: Every show you’ve watched, every script you’ve read, surely left its mark. You have no cause to apologize.

“Does the entertainment industry? It was hooked on formula, as I’ve stressed, long before the L.L.M.s arrived. Some contrivances endure simply because they’re legible, efficient and easy to execute. Take the one where one character has news to share with another, but is interrupted by the other’s news, which gives the first character reason not to share her own news. Then comes the inevitable: “So what was it you wanted to tell me?” Ulp! Writers have flogged that one for decades; why wouldn’t a bot cough it up? The truth is that many viewers cherish familiarity and prefer shows, especially soaps and franchise fare, to deliver surprises in unsurprising ways. Still, there will always be an audience for work that spurns the template — for writers who, shall we say, think outside the bot.

“That’s the bigger story. In the day-to-day life of a working writer, the question is less abstract. If people press you about your A.I. policy, point to the guild’s rules. Tell them that every page you submit reads the way you want it to. Then announce: We’re done here.”

I’ll get you started: Kwame spends much of his answer expounding on how many screenplays today are unoriginal and feel like mechanically recycled scripts already. “[H]uman beings have been churning out prefab dialogue and scene structures for generations without artificial assistance. Few seem to mind,” The Ethicist writes.

Let’s see:

  • #1 “Everybody Does It”
  • #1A “We Can’t Stop It”
  • #1C “You’re not alone!”
  • #2 Whataboutism, or “They’re just as bad!”
  • #8 “No Harm No Foul”
  • #8A The Dead Horse-Beater’s Dodge, or “This can’t make things any worse”
  • and many more.

Now you tell me the others you see. How can someone claim to be “The Ethicist” and not recognize such a wave of rationalizations leading him away from ethical values to “nobody cares”? This answer seems very close to signature significance territory. I’m wondering whether a competent ethicist could write that response.

Hey! Maybe he had a bot write it!

6 thoughts on “I Dunno…The Latest From “The Ethicist” Has Me Tempted To Cancel Him”

  1. “I don’t know that I can continue to regard him highly after his collection of rationalizations employed to answer a TV screenwriter’s query….”

    I’ve had a similar experience with two public intellectuals and their podcast. After getting through about a quarter of one’s autobiography, I can no longer hold him in much regard. Similarly, after hearing the other say he would have been fine with Trump being removed from the scene by assassination, or words to that effect, I’m simply no longer able to muster up any interest in hearing what he has to say. An interesting phenomenon.

  2. Jack: “My answer is simple….”

    That is the problem. You are not filling a column. Even if an answer is simple, part of the column is entertainment. Additionally, much of ethics is about the WAY a conclusion is reached, not just the conclusion itself.

    So, he had a column to fill and he crafted a response with an ongoing theme about how a lot of writing is formulaic. He maintained that theme throughout as a way to entertain, as well as to explain.

    Also, his analysis is consistent with yours when you say:

    “Most artists “steal” from other sources, altering their models sufficiently to pass as original, and rightly so. There’s a line where imitation and inspiration becomes theft and plagiarism—like when the Beach Boys lifted Chuck Berry’s “Sweet Little Sixteen” almost note for note—but short of that line is just art as usual.”

    I could easily say, “Jack, that’s Rationalization #1, ‘Everybody Does It.’” Except that neither you nor APPIAH (not Attiah; two typos to fix) is rationalizing anything. It appears to me that both of you are describing the way the creative process works. And, I do not quite see how you differ. He seems to be saying “lots of writing is formulaic even without AI, but using AI is not a problem if you ‘Tell them that every page you submit reads the way you want it to.'” In other words, it appears that many of the rationalizations you apply to his analysis could equally be applied to yours, except that neither of you seems to be rationalizing unethical conduct. Rather, both of you appear to be exploring the limitations of the truncated saying that “imitation is the sincerest form of flattery.”

    We all take inspiration from others (note: that is not a rationalization); the quality of the work depends upon how we take that inspiration and make something new.

    -Jut

    • I’d second Jut’s comment here. What I took away from Mr. Appiah’s column is that he’s saying what you said: use it to brainstorm the pieces, keep track of the story, validate consistency, get suggestions; ultimately, you’re the writer who is taking all of those pieces and formulating a polished, cohesive product. It is obviously entirely unethical and a dereliction of duty to prompt a bot for a response and turn that in as a completed work…but simply bouncing an idea off of it, asking what might be a variety of reactions a character might express in a particular situation? I can see that as “inspiration farming.” Plus, if that’s all you’re using it for, how would you prove where the idea came from? It takes as much talent to recognize a good idea as it does to create it.

      With that said, I hope writers take a hard stance against using AI to perfect their scripts. Greatness is achieved through struggle. How many podcast videos and discussions have erupted over some nuanced plot hole or incongruity in a Star Wars film?

      As with beauty, the flaws in perfection are what people focus on. If something is “perfect” it is bland and easily forgotten. The flaws keep you looking, keep you engaged, keep you thinking. I would caution all aspiring writers against attaining perfection.

      • 1. I KNEW when I wrote about the fired reporter named “Attiah” that the typo would creep in when I next wrote about “Appiah.” Forgot. Now fixed.

        2. All valid, BUT: it’s a matter of emphasis and audience. By starting out with an “Everybody does it” narrative, the average readers (which you and I are not) will assume that that was the Ethicist’s message: oh, it’s ok because writers aren’t original on TV and in the movies anyway. Note that I put “stealing” in quotes because using other works to adapt and imitate is not stealing, and The Ethicist never made that clear.

        He’s had shorter columns. His duty is still to be clear and ethically coherent, and I don’t think he was either.

  3. I do not think that the Ethicist’s approach is as bad as it appears.

    The term “ethics rationalizations” is only appropriate when applied to arguments used to justify behavior that is clearly unethical. And here is where I have questions: when is the use of AI unethical in creating artistic content?

    The YouTube video linked below is an example of AI-generated content, which I use as a screensaver on my television. I wonder how much work the creator had to put into this video to have AI produce this type of content. As a retired software engineer, I would be happy to have a conversation with the creators of this channel, and see them demonstrate how they generate this content.

    I have been playing around with some AI-generated content, and there is a lot of work needed to produce pictures and videos that exactly match what you envision. There are plenty of technical books out there with titles such as “Prompt Engineering for Generative AI” and “AI Engineering”. In other words, knowledge of how to use AI is a skill similar to knowledge of how to use spreadsheets, or knowledge of how to lay out a website.

    So I am inclined to see AI as a tool similar to a word processor with spell-checking, a paint brush or a search engine. However, as AI is an ascendant technology, thought needs to be given to how to apply it ethically. E.g., if somebody produces artistic content (literature, videos, paintings), I think it would be ethical to disclose the fact that AI was used. Personally, I have no ethical problem with AI-generated content as such; the main concern for me is the quality and the enjoyment of the resulting product. Use of AI is not the same as plagiarism.

    I would guess that “AI and Ethics” would be a proper area for research and scholarly debate among ethicists and philosophers.
