On yesterday’s Open Forum, Null Pointer clarified some of the ethics issues surrounding ChatGPT, which is currently causing panic and consternation among teachers worried that their students will use artificial intelligence to write their essays. (They are already receiving an artificial education from most of those teachers, so this seems a bit hypocritical to me.)
Here is Null Pointer’s edifying Comment of the Day:
***
“AI” programs like ChatGPT are interesting toys that have some real-world utility, but they are not really artificial intelligence. They are pattern-recognition applications. I would not suggest using them to do one’s homework, because they lie. They are trained on large datasets pulled from the internet, and if that data is wrong, they spit out wrong answers. If they don’t know the answer, they make things up. https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-error-exoplanet-demo
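To make that concrete, here is a minimal sketch of why these systems “lie”: a language model only picks a continuation according to probabilities learned from its training text, and nothing in that loop checks facts. The prompt, candidate answers, and probabilities below are invented for illustration (the James Webb item echoes the error in the linked Verge article); this is not any real model’s code or data.

```python
import random

# Hypothetical continuation probabilities a model might have absorbed from
# web text. Nothing here distinguishes true statements from false ones --
# only how often words followed other words in the training data.
learned_distribution = {
    "The first picture of an exoplanet was taken by": [
        ("the James Webb Space Telescope", 0.40),   # the error in Bard's demo
        ("the Very Large Telescope in 2004", 0.35),  # the correct answer
        ("NASA last year", 0.25),                    # pure confabulation
    ],
}

def complete(prompt):
    """Sample a continuation by learned probability -- delivered with the
    same confidence whether it happens to be right or made up."""
    continuations, weights = zip(*learned_distribution[prompt])
    return random.choices(continuations, weights=weights)[0]

print(complete("The first picture of an exoplanet was taken by"))
```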
As with any tool, the ethics lie with the user. Having someone or something else do your homework for you is cheating. Using an AI to proofread your grammar is not really any different from the built-in grammar check in Microsoft Word. Pattern-recognition tools probably have a lot of real-world utility, but they are not going to be replacing humans anytime soon.
Overblown claims in the media that ChatGPT will eliminate all the jobs are, quite frankly, hilarious. I read somewhere the other day that all the junior programmer jobs are going to be replaced by ChatGPT because people are training it to write code by feeding it GitHub data. Oh good, they are training the AI to write buggy, illegible code that has already been written! Any programmer can copy and paste things off the internet, and most already do so on a regular basis. They don’t need an AI to do that for them. At best it will speed up their searches for code to copy and paste. Programmers are necessary to implement new functionality, not already-existing functionality, so this is entirely irrelevant. The AI cannot actually think; it can only copy. It might learn to write boilerplate code, but there are already wizards built into IDEs that do that anyway. Anything unique will still have to be written by a person.
Similarly, these sorts of AI cannot replace people like authors or journalists, because they cannot observe the real world. They can only observe already-existing data. How would ChatGPT write a story on an event that no humans have already covered? It cannot go out into the field and see what happened with that house that burned down or that war that broke out. A journalist will have to do that before ChatGPT will know it happened. Smaller newspapers or local outlets might then be able to autogenerate stories off of the bigger news outlets’ coverage of events, but smaller outlets usually just copy and paste the stuff from the bigger outlets anyway.
Copying and pasting is not discernment. It is not thought. ChatGPT is just a tool. It may boost productivity in some sectors if utilized properly. It may also make the current Google censorship look like small potatoes. At the end of the day, the humans using these sorts of tools are where the ethics actually exist.
I believe Bing is using ChatGPT in its search algorithm. It also has significant bias built in. Asked to write a positive poem about Trump, it returned a message that it is programmed to be neutral on such issues; but when the name Biden was substituted, it produced a multi-stanza poem praising Biden’s humanity and leadership.
A recent post lamented the slide into authoritarianism as it discussed Turley’s take on the statements regarding whose free speech and whose safety. Tools such as these, which return biased results with an air of authority, are a great danger to our republic.
There’s a meme that says “ChatGPT” in French means “cat, I farted.” My friend the French professor confirms the translation.
You may regard this as a metaphor should you decide to do so.
That’s not quite right. Rather, it is a homophone for that French phrase (“chat, j’ai pété”). It only “means” it in the same sense that PD (as in NYPD, LAPD, etc.) means pederast in French: by sounding the same, when pronounced with a French accent, as “pédé,” the abbreviation of “pédéraste.”
True. Sloppy description on my part.
I wonder, if you posed the exact same query to ChatGPT or Google’s Bard, would you get exactly the same or almost identical output? If the programs are using probabilities to determine the following words, it seems to me that the probabilities would not change for identical queries. The question becomes: will Google’s product corroborate faulty answers from Bing if the data sets used are identical?
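The probabilities themselves indeed wouldn’t change for an identical query, but these systems typically sample from that distribution rather than always taking the single most likely word, often with a “temperature” setting that flattens or sharpens it, so identical prompts can still produce different outputs. Here is a rough sketch of the idea with invented word scores (this is not either company’s actual code). As for the second question: two models trained on largely the same flawed web data could very plausibly converge on the same wrong answer.

```python
import math
import random

# Invented scores for candidate next words after some fixed prompt.
raw_scores = {"planet": 2.0, "star": 1.5, "telescope": 0.5}

def sample_next_word(scores, temperature=1.0):
    """Turn scores into probabilities (softmax with temperature), then
    make a weighted random draw. Low temperature makes the pick nearly
    deterministic; higher temperature lets identical queries diverge."""
    scaled = {word: math.exp(score / temperature) for word, score in scores.items()}
    total = sum(scaled.values())
    words = list(scaled)
    weights = [scaled[w] / total for w in words]
    return random.choices(words, weights=weights)[0]

# The same "query" five times over: the distribution never changes,
# but the sampled answer can.
print([sample_next_word(raw_scores) for _ in range(5)])
```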
I wrote my initial comment before I read the linked Verge article.
You should enjoy this. Evidently, even in a game with rules as fixed as chess, the AI bot ChatGPT just flat-out makes up its own rules when playing against a chess engine.
ChatGPT is still just a tool in development, but it represents a giant leap forward, particularly in the realm of search. Given the limitations and the training that had to go into it, once those constraints are removed and it can be properly trained, it can continue to improve. Yes, it can only learn what it can see, but training it to analyze video inputs, tweets, false information, and true information will eventually get it to where it will be near impossible to deny its value. If every city council meeting is live-streamed, and AI can auto-generate captions (see how that’s done on a Zoom call; it’s insane to me), and AI can read those captions, who needs a reporter or a journalist? If AI can access every corporate structure, know all the shareholders and financial statements, know what every company does, and can process that information on demand, it will become a research tool that kick-starts every investigation (gov’t or journalism), compressing the research and planning phase into mere hours.
Yes, there’s a long way to go, but I don’t think it’s more than five years away.