You should know the name Geoffrey Hinton by now. To the extent that any one scientist is credited with the emergence of artificial intelligence, he’s it. He was among the winners of the prestigious Turing Award for his breakthroughs in artificial neural networks, and his discoveries were crucial to the development of today’s advanced artificial intelligence (AI) software, such as ChatGPT and Google’s Bard. He spent 50 years developing the technology, the last 10 of them working on AI at Google, before he quit in 2023. His reason: he was alarmed at the lack of sufficient safety measures to ensure that AI technology doesn’t do more harm than good.
And yet, as revealed in a recent interview on CBS’s “60 Minutes,” Hinton still believes that his work to bring AI to life was time well spent, that his baby was worth nurturing because of its potential benefits to humanity, and that (get this) all we have to do is, for the first time in the history of mankind, predict the dangers, risks, and looming unintended consequences of an emerging technology, get everything right the first time, make no mistakes, avoid reckless applications, and steer clear of the kind of genie-out-of-the-bottle catastrophes that the world has experienced (and is experiencing!) over and over again.
That’s all!