Ethics Dunce: Geoffrey Hinton, “The Godfather of Artificial Intelligence”

You should know the name Geoffrey Hinton by now. To the extent that any one scientist is credited with the emergence of artificial intelligence, he’s it. He was among the winners of the prestigious Turing Award for his breakthroughs in artificial neural networks, and his discoveries were crucial in the development of today’s advanced Artificial Intelligence (AI) software like ChatGPT and Google’s Bard. He spent 50 years developing the technology, the last 10 of which he spent working on AI at Google before he quit in 2023. His reason: he was alarmed at the lack of sufficient safety measures to ensure that AI technology doesn’t do more harm than good.

And yet, as revealed in a recent interview on CBS’s “60 Minutes,” Hinton still believes that his work to bring AI to artificial life was time well-spent, that his baby was worth nurturing because of its potential benefits to humanity, and that—get this—all we have to do is, for the first time in the history of mankind, predict the dangers, risks and looming unintended consequences of an emerging new technology, get everything right the first time, not make any mistakes, not be blindly reckless in applying it, and avoid the kind of genie-out-of-the-bottle catastrophes that the world has experienced (and is experiencing!) over and over again.

That’s all!

When asked what course would guarantee humanity’s safety, Hinton answered: “I don’t know. I– I can’t see a path that guarantees safety. We’re entering a period of great uncertainty where we’re dealing with things we’ve never dealt with before. And normally, the first time you deal with something totally novel, you get it wrong. And we can’t afford to get it wrong with these things.”

Will somebody please make this guy watch a double-feature of “Jurassic Park” and “The Terminator”?

Why would anyone, much less anyone smarter than a spoon, think that just because the consequences of abusing this latest powerful technology are particularly dire, which he admits in the interview, it is magically possible to, as he says, get it right the first time? Humanity never gets anything right the first time, literally never. I’m certain that Hinton is well versed in Chaos Theory, the study of apparently random or unpredictable behavior in complex systems that the late science fiction novelist Michael Crichton introduced to the general public in his cloned dinosaur novel. “When we have control…” blathers the dinosaur amusement park entrepreneur when everything starts falling apart, only to be reminded that nobody ever has control.

Never mind: Hinton thinks it’s worth gambling that this time all of the dangers of technology abuse can be avoided. In addition to perfect foresight and self-control by scientists and capitalists (two quotes from Ian Malcolm, the “chaotician” in “Jurassic Park,” come to mind: 1) “Your scientists were so preoccupied with whether or not they could [clone dinosaurs], they didn’t stop to think if they should,” and 2) “Genetic power’s the most awesome force the planet’s ever seen, but you wield it like a kid who’s found his dad’s gun.”), he tells Scott Pelley that iron-clad international treaties will have to be signed.

You know, because nations and their occasionally insane leaders always abide by treaties.

Of course, Hinton concedes, if we don’t get AI under control right away, things could get bad. Really bad. Like most people being thrown out of work because they can be replaced by machines that are cheaper and smarter. Or “autonomous battlefield robots”—a treaty will stop anyone from building those, surely. Or, he says, AI could take over. “I’m not saying it will happen. If we could stop them ever wanting to, that would be great. But it’s not clear we can stop them ever wanting to.”

Oh.

But he’s still proud of his work and thinks all the risks are worth it because of all the potential good that might come from his discoveries—when we have control.

We’re doomed.

10 thoughts on “Ethics Dunce: Geoffrey Hinton, “The Godfather of Artificial Intelligence””

  1. Mrs. OB was the lead on the development of the Authorizer’s Assistant program at American Express in the mid ’80s, the first successful commercial application of what would become known as artificial intelligence.

  2. A Facebook post this morning put me in theatre historian mode, so whereas your cinematic examples are pertinent (I might have gone to “2001,” too), I feel compelled to mention Karel Čapek’s R.U.R. from a little over a century ago (published in 1920, produced in ’21). The initials stand for Rossum’s Universal Robots: a great time-saver when they’re first invented, but let’s just say things don’t work out really well for our species by the time the third act curtain falls. Representatives of about every art form you could name have warned us about the need to consider the consequences of placing blind faith in our technological advances for a long time. But as Paul Simon noted in “The Boxer,” “a man hears what he wants to hear and disregards the rest.”

  3. Do you remember:

    In the year 2525, if man is still alive
    If woman can survive, they may find
    In the year 3535
    Ain’t gonna need to tell the truth, tell no lie
    Everything you think, do and say
    Is in the pill you took today
    In the year 4545
    You ain’t gonna need your teeth, won’t need your eyes
    You won’t find a thing to chew
    Nobody’s gonna look at you
    In the year 5555
    Your arms hangin’ limp at your sides
    Your legs got nothin’ to do
    Some machine’s doin’ that for you
    In the year 6565
    You won’t need no husband, won’t need no wife
    You’ll pick your son, pick your daughter too
    From the bottom of a long glass tube

    And that is only the first verse!

  4. An often-ignored treatment of the potential catastrophes that great sci-fi authors have written about comes from one that has recently re-entered the zeitgeist — Dune.

    In the Dune universe, the reason that mentats were used instead of computers for analysis of data by the Great Houses was something called the Butlerian Jihad, or the human war against the thinking machines (essentially AI and AI-controlled robots) which resulted in the feudal empire that was the backdrop of the novel.

    In the Dune universe, the consequences of AI were technological reversal due to abdication by humans in favor of computers for all complex and menial tasks. This set back humanity many generations, reduced the capacity for rational thought and creativity, and to me, rings the truest of all horrible outcomes rather than the Terminator-style apocalypse.

    Subtle unintended consequences are almost always the worst, and longest-lasting.
