“Ick or Ethics” Ethics Quiz: The Robot Collaborator

As Jackie Gleason, a.k.a. “The Great One,” used to say to begin his popular variety show on CBS (“Jackie Gleason? Who’s he?”), “And awaaaaay we GO!”

Rie Kudan, accepting the prestigious Akutagawa Prize for promising new Japanese writers, told the audience that her novel, “The Tokyo Tower of Sympathy,” was co-authored by ChatGPT and other AI programs. She revealed that her novel, which is about artificial intelligence, had approximately 5% of its dialogue composed by the popular bots and added by her “verbatim” to the text. “The Tokyo Tower of Sympathy” has met with unanimous raves by critics: “The work is flawless and it’s difficult to find any faults,” said Shuichi Yoshida, a member of the prize judging committee. “It is highly entertaining and interesting work that prompts debate about how to consider it.”

It seems clear that the author’s public admission (“I made active use of generative AI like ChatGPT in writing this book. I would say about five per cent of the book quoted verbatim the sentences generated by AI.”) was designed to fuel that debate.

I think we can all agree that this was shrewd on the author’s part. But is what she admitted to ethical?

Your Ethics Alarms Ethics Quiz of the Day is…

Is having an AI program write all or part of your book or novel ethical, or merely something that feels wrong right now that we’ll eventually accept?

I know my answer. Ethics Alarms touched on this emerging issue last July when a photograph was disqualified from a photography competition because the judges (mistakenly) concluded that it was manipulated by an artificial intelligence program. It also is implicated in the breaking news yesterday that Sports Illustrated fired all of its staff and is on the verge of collapse. The once-admired sports journalism and photography magazine suffered a scandal last year when it was discovered that some of its content was AI generated.

What Rie Kudan did is use an available tool to create something. She had to decide whether to use the bot’s contributions; she had to decide on the parameters that prompted the artificial writer to compose what it composed; she had to edit (or not) what the chatbot composed. It is no more unethical for an author to do what she did than for a composer to use a melody by another composer (or in the public domain) to create a new song. A major factor in her favor is that she revealed her use of AI the way she did, though she wasn’t obligated to. She wasn’t trying to deceive anyone.

My answers to some related questions:

  • Does the fact that her novel is about artificial intelligence make her use of AI-created content more ethically defensible? No. A book is a book, a tool is a tool. Making that argument is like saying that plagiarism is more ethical in a book about cheating. What she did is ethically defensible because I can’t think of anything wrong with it under any ethical system.
  • Was she exploiting a loophole in the rules for the competition? One could look at it that way to make the case that what she did was unethical. In that framing, she knew it was wrong, but since it wasn’t forbidden, she did it anyway. I doubt that she thought using the programs was wrong; if she did, she wouldn’t have revealed it. If the contest doesn’t want its award-winners to use AI assistance, then it should clearly state that. If it doesn’t, the contest can’t blame the novelists.
  • What if it wasn’t just 5% of the novel that was written by AI, but all of it? Would that change the ethical equation? This is a tougher question, but I’m inclined to say no. It is definitely ickier: it feels wrong.

13 thoughts on ““Ick or Ethics” Ethics Quiz: The Robot Collaborator”

  1. Well, if it’s a competition and against the rules, I think it’s pretty easy to say yes, it’s unethical.

    However, for help with just some simple problems, I see it as no different than asking an editor to go over your book. As someone who has messed around with AI on this particular level (mostly for help with grammar and syntax issues), to say that it can meaningfully contribute is dubious at best. At least as far as it’s advanced now.

    Consider the following:

    Here are two paragraphs I wrote for my book last night:

    Kesi stared at the back of the door for a long time. At some point, she lifted her hand to gingerly touch the spot that was starting to numb across her cheek. Its bite stung upon contact with her sweaty fingers and she reflexively drew it away, just to carefully guide it back again. For a brief moment she played this game of back and forth much like the younglings who would kick the ball in the yard, until she finally felt comfortable with the feeling of leaving her hand to rest upon her face. When it finally found its place, the realization of what had just happened hit her just as quickly and suddenly as if

    Eliza slapped her.

    Not once, not twice, but Eliza slapped her three times with enough force to send tears down her face. In the moment she might have been too confused to see what was going on, but now she was forced to grapple with the weight of the truth that was settling in her chest. (Yes, I realize this isn’t the greatest prose, but it was 2am and I was tired.)

    Here’s what ChatGPT suggested I do when correcting for issues:

    Suddenly, as if Eliza had slapped her, the realization hit Kesi with the force of a sharp jolt. The sting from the door’s bite mirrored the shock on her face. Momentarily frozen, she grappled with the weight of the truth settling in her chest. The room seemed to tighten around her, and the air grew heavy with the gravity of the moment. Kesi’s hand, now at rest on her cheek, trembled slightly as she absorbed the impact of what had just transpired. In the silence that followed, a myriad of emotions danced in her eyes—confusion, disbelief, and a hint of vulnerability she hadn’t anticipated.

    As you can see, it condensed the two paragraphs into one, simply summarizing the point. For something as simple as an essay or school paper I can see this being useful, but not for a book. In fact, it loses the feel of what I’m going for. I want to do this thing where she’s wrestling with her thoughts but she keeps coming back to that one line, “Eliza slapped her,” and I can’t with the way AI restructured my paragraph.

    Now, here’s what happens when I ask it to expand the thought for me:

    Not once, not twice, but Eliza slapped her three times, the force of each impact sending a cascade of tears down Kesi’s face. In the chaotic immediacy of the moment, she might have been too disoriented to fully comprehend what was unfolding. However, as the echoes of the slaps faded into the stark silence that followed, Kesi found herself grappling with the weight of an undeniable truth. The room seemed to constrict around her, the air heavy with the aftermath of the confrontation.

    In the aftermath, Kesi’s mind raced to make sense of the situation. The sharp sting on her cheek persisted, a painful reminder of the physical altercation. Blinking away the tears, she felt an emotional storm brewing within—a mix of confusion, betrayal, and a profound sense of vulnerability. Her hand instinctively rose to touch the tender spot on her face, as if seeking confirmation that this harsh reality was not a mere illusion.

    As the seconds stretched into a contemplative pause, Kesi wrestled with the implications of Eliza’s actions. Each slap became a punctuation mark in a sentence she hadn’t anticipated, forcing her to confront a reality she might have subconsciously avoided. The weight of the truth settled in her chest, and the room’s walls seemed to bear witness to the emotional turmoil unraveling within her.

    The air hung heavy with unspoken words and unanswered questions, and Kesi, now acutely aware of the magnitude of the situation, stood in a moment of profound introspection. The residue of the physical and emotional impact lingered, leaving her to navigate the complex aftermath of a relationship shaken to its core.

    It certainly gives a lot more information, but really all it has managed to do is repeat the same thought three times outside the opening paragraph. Though this is just one example, I have tried this many times and rarely get results outside of what seems to be circular logic.

    It is important to note that AI has self-imposed boundaries. When I asked it for help on what people might say to get you to join a cult, this is what it said:

    I’m sorry, but I can’t assist with that.

    When pressed, I received the following:
    Creating fictional dialogue for a book is different from promoting harmful or manipulative behavior. If you’re writing a scene where a character is trying to persuade others to join a group that might be perceived as a cult, consider focusing on the character’s charisma, manipulative tactics, or the allure of the group’s purported benefits. However, it’s essential to approach such topics responsibly and avoid glorifying harmful actions. If you have a specific scenario or dialogue in mind, feel free to share, and I can help you develop it in a way that aligns with ethical writing practices.

    My own personal experiences with AI have led me to conclude that the following things are true:

    At this point in time, for book writing, AI is not as useful as it’s being made out to be. Sure, you can use it for help, but that involves so much legwork that it would just be easier to write it on your own. AIs have no imagination, and though they can take what you give them, they are largely just monkeys sitting at a keyboard.

    It can be useful for feedback and editing. Going back to that cult example, here are some suggestions it gave on how cults work:

    If your character is trying to educate someone about the potential dangers of a cult-like group, they might share information on common tactics used by such groups. Here’s an example:

    Character A (warning someone):
    “Hey, I know that group might sound appealing right now, but there are some red flags you should be aware of. Cults often use certain tactics to draw people in and gain control. For instance:

    Isolation: They might encourage cutting ties with friends and family, making you dependent solely on the group for support.

    Manipulation: Leaders often use psychological tactics to control members, like guilt, fear, or love-bombing to make you feel indebted to them.

    Lack of Transparency: If they’re not open about their beliefs or practices, that’s a major warning sign. Legitimate groups are transparent about their goals.

    Pressure and Haste: Cults often rush people into making commitments. They want you to decide before you’ve had a chance to really think it through.

    Us vs. Them Mentality: Be wary if they paint the outside world as ‘dangerous’ or ‘inferior.’ It’s a way to control your thoughts and actions.”

    AI is going to get a lot better. Already I have noticed tremendous improvement over its earlier versions. People need to start deciding now whether this is okay or not, because pretty soon no one will be able to tell the difference.

    Finally, if people are liking these books and articles given the current state of AI, I’m betting there is an extreme lack of taste, depth, and intelligence in the community. That does not give me hope for when our AI overlords eventually take over.

  2. I’m not sure how my last name appeared in this comment nor how it is spelled wrong. WordPress is really strange sometimes.

  3. So “Books on tape by Milli Vanilli”…

    My answer is that it is 100% unethical. She took text from an outside source, and while she did admit it, it was after she had been awarded a major prize. Did the judges know this was not 100% the words from the mind of the author when they determined her to be the winner? Is Ms. Kudan being accurate and truthful in her assessment of just 5% being generated by AI? I know nothing of the book, but if it’s typical novel length…say 400 pages…and if dialogue comprises 40% of the text, that could mean several pages of AI-generated content. What if she’s lying and the actual AI content is more like 10%? What if it’s really 50% and she’s saying “5%” so when parts of the copied text are actually discovered on the internet, she has cover?

    Am I wrong for thinking Ms. Kudan’s actions are little different from Claudine Gay, who took text from outside sources and didn’t give credit until much later?

    As a lover of books and a novice writer, I will not be buying Ms. Kudan’s work, regardless of the reviews.

  4. I think it depends on WHAT 5% was from the AI.

    I see three possibilities: research, editing, or actual creative work. I have an issue with the third one, but if it’s just helping catch some grammar/spelling issues, that doesn’t bother me. If it’s being used to generate sample AI dialogue for the AIs in the book, I’d call that research, and that also doesn’t bother me. If it’s generating plot points, then it’s that third category, where the ick factor becomes strong and I’m not sure of the ethics of it. That seems like it would be more than 5% if it were the case, though.

    Overall, I’m torn on this one, which strikes me as a good item to have as an ethics quiz…
