
In his Comment of the Day on today’s post about various graduation-related ethics stories, JutGory provides a veritable feast of delicious ethics morsels. It all began when he sent me an email suggesting as an ethics quiz candidate the story involving the student who had ChatGPT write the speech he submitted for approval to high school officials, intending all the while to sandbag them and deliver a different speech he knew they would never approve. I gratefully used the item but not as a quiz, judging it too easy: the Ethics Alarms position would be that using artificial intelligence to write anything one is supposed to write unassisted is unethical. Jut followed up with this COTD teeming with related ethics conundrums.
***
When I submitted #4, I asked whether using ChatGPT to write the address might make a good ethics quiz.
You asked if I was being tongue in cheek.
The answer was: not entirely. When I sent the e-mail, I had not finished thinking through the issues. Here were the things I was mulling over:
1) Having AI write a speech for you is not as bad as a lawyer using it to write a brief.
2) It is certainly not as bad as the bait and switch in the other ethics breach he committed.
3) It was still deceptive to propose a speech you had no intention of giving; so was the wrong committed in proposing the speech, in drafting it, or both?
4) It would not be plagiarism to give the speech because you are not really copying anyone.
5) This reminded me of the ownership issue with the photo taken by the monkey (you covered this): if you supply the parameters to ChatGPT, how much of the product can you claim as your own, since ChatGPT can’t really copyright it (can it? Does it?)?
6) It also reminded me of the artist who entered an AI painting in a competition (again, covered here) in which there were no restrictions on such submissions.
After I sent the e-mail, I concluded it was wrong, but primarily because of the dishonesty. Actually using ChatGPT to draft an address raises some of these other issues, and the answer fits somewhere in the middle of the mess I laid out.
Follow-up question: would it be even worse if he had ChatGPT draft his negative address as well? Does he get any credit for actually writing the address he gave? (That’s a little tongue in cheek, but still an appropriate question in this context.)
___________________
I’m baaaack….to offer my answers to the (let’s see) six enumerated issues and the two follow-up questions at the end:
1. Rationalization #22.
2. Ditto.
3. Using any speech to deceive was the ethical breach, regardless of how it was written.
4. I agree. It’s not plagiarism, just as submitting a paper sold by a term paper mill isn’t plagiarism.
5. I expect this issue to be litigated sooner or later.
6. I wrote about that one, too. In that case, the program used can fairly be called just an artist’s tool, absent a rule prohibiting it, though an ethical entrant would have checked with the organizers before submitting the art for a prize. In this case, there is no question (is there?) that the student knew a speech written by a bot would be rejected.
7. No. The substituted speech was unethical from the first word: it couldn’t be made more or less unethical by the means of its production. I suppose the content could have made the speech more unethical, if, say, it were obscene or racist, or revealed national security secrets.
8. No. You don’t get credit for not doing something unethical.