On May 1, The New York Times reported that Geoffrey Hinton, the so-called Godfather of AI, had resigned from Google. The reason he gave for the move is that it will allow him to speak freely about the risks of artificial intelligence (AI).
His decision is both surprising and unsurprising: the former because he has dedicated a lifetime to the advancement of AI technology; the latter given the growing concerns he has expressed in recent interviews.
There is symbolism in the date of this announcement. May 1 is May Day, a holiday that celebrates workers and the flowering of spring. Ironically, AI, and especially generative AI based on deep learning neural networks, may replace a large swath of the workforce. We’re already starting to see this impact, for example, at IBM.
Is artificial intelligence replacing jobs and approaching superintelligence?
No doubt others will follow as the World Economic Forum sees the potential for 25% job disruption over the next five years, with artificial intelligence playing a role. As for the flowering of spring, generative AI could spark a new beginning of the symbiotic intelligence of humans and machines working together in ways that will lead to a renaissance of possibility and abundance.
Alternatively, this could be when the advancement of AI begins to approach superintelligence, possibly posing an exponential threat.
It’s these kinds of worries and concerns that Hinton wants to talk about, and he couldn’t do so while working for Google or any other company pursuing the commercial development of AI. As Hinton said in a Twitter post: “I left so that I could talk about the dangers of AI without considering how this impacts Google.”
Mayday
Maybe it’s just a play on words, but the date of the announcement conjures up another association: Mayday, the distress signal used when there is immediate and grave danger. A mayday call should be reserved for a real emergency, as it is a priority call demanding an immediate response. Is the timing of this news purely coincidental, or is it intended to add symbolic weight to its significance?
According to the Times article, Hinton’s immediate concern is the ability of AI to produce human-grade content in text, video and images, and how that ability can be used by bad actors to spread misinformation and disinformation such that the average person will no longer be able to know what is true.
He also now believes we are much closer to the time when machines will be smarter than the smartest people. This point has been much debated, and most AI experts have considered it far in the future, perhaps 40 years or more.
That list included Hinton. In contrast, Ray Kurzweil, a former director of engineering at Google, has long said that this moment will come in 2029, when AI will easily pass the Turing test. Kurzweil’s view of this timeline had been an outlier, but not anymore.
According to Hinton’s May Day interview: “The idea that this stuff [AI] may actually get smarter than people, some people believed that. But most people thought it was far off. And I thought it was far off. I thought it was 30 to 50 years away or even longer. Of course, I don’t think that anymore.”
Those 30-50 years could have been used to prepare businesses, governments and society through governance practices and regulations, but now the wolf is at the door.
Artificial general intelligence
A related topic is artificial general intelligence (AGI), the stated mission of OpenAI, DeepMind and others. The AI systems in use today primarily excel at specific, narrow tasks, such as reading radiological images or playing games, and a single algorithm cannot excel at both types of tasks. In contrast, AGI possesses human-like cognitive abilities, such as reasoning, problem solving and creativity, and, as a single algorithm or a network of algorithms, would perform a wide range of tasks at or above human level across different domains.
Much like the debate about when AI will be smarter than humans, at least at specific tasks, predictions about when AGI will be achieved vary widely, ranging from a few years to several decades or centuries, or possibly never. These timeline predictions, too, are being pulled forward by new generative AI applications such as ChatGPT, which are built on transformer neural networks.
Beyond the intended purposes of these generative AI systems, such as creating compelling images from text prompts or providing human-like text answers to questions, these models possess the uncanny ability to exhibit emergent behaviors. This means the AI can display new, intricate and unexpected behaviors.
For example, the ability of GPT-3 and GPT-4, the models underlying ChatGPT, to generate computer code is considered emergent behavior, since this ability was not part of the design specification. Instead, the trait emerged as a byproduct of model training. The developers of these models cannot fully explain how or why such behaviors develop. What can be inferred is that these capabilities emerge from large-scale data, the transformer architecture and the powerful pattern-recognition abilities the models develop.
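To make that point concrete, here is a minimal, hypothetical sketch of exercising that emergent code-generation ability through the OpenAI chat API as it existed in 2023. The model name, prompt and environment variable are illustrative assumptions, not details from the article.

```python
# Minimal sketch: asking GPT-4 to generate code via the OpenAI API.
# Assumes the 2023-era openai Python client and an OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed model name for illustration
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
    ],
    temperature=0,
)

# The generated code arrives as plain text in the assistant message; nothing in
# the request tells the model how to write code. That capability surfaced as a
# byproduct of large-scale training rather than an explicit design specification.
print(response["choices"][0]["message"]["content"])
```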
Timelines accelerate, creating a sense of urgency
It is these advances that are recalibrating the timelines for advanced AI. In a recent CBS News interview, Hinton said he now believes AGI could be achieved in 20 years or less. He added: “We may be close to computers that can come up with ideas to improve themselves. This is a problem, right? We have to think hard about how to control it.”
The first evidence of this capability can be seen in the nascent AutoGPT, an open-source recursive AI agent. Besides being freely available for anyone to use, it can autonomously take the results it generates, turn them into new prompts, and chain these operations together to complete complex tasks.
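For readers unfamiliar with how such agents work, the sketch below illustrates the recursive prompt-chaining idea in simplified form. It is not AutoGPT’s actual code; the loop structure, model name and stopping convention are assumptions made for illustration, again using the 2023-era OpenAI Python client.

```python
# Simplified, hypothetical sketch of recursive prompt chaining, the idea behind
# agents like AutoGPT: each model reply is folded into the next prompt until the
# agent declares the task complete. Not AutoGPT's actual implementation.
import os
import openai  # assumes the 2023-era openai client and an OPENAI_API_KEY env var

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask(prompt: str) -> str:
    """Send one prompt to the model and return its text reply."""
    response = openai.ChatCompletion.create(
        model="gpt-4",  # assumed model name for illustration
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

def run_agent(goal: str, max_steps: int = 5) -> str:
    """Chain prompts: each step plans its next action from the previous result."""
    result = ""
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Previous result: {result or 'none yet'}\n"
            "Decide on the single next step toward the goal and carry it out. "
            "If the goal is complete, reply with DONE followed by the final answer."
        )
        result = ask(prompt)
        if result.strip().startswith("DONE"):
            break
    return result

# The agent decomposes the task into steps it generates for itself.
print(run_agent("Outline three ways to make transformer models more efficient."))
```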
Used this way, AutoGPT could potentially identify areas where the underlying AI models could be improved and then generate new ideas for how to improve them. Not only that, but as The New York Times columnist Thomas Friedman notes, open-source code can be exploited by anyone. He asks: what would ISIS do with the code?
It is not a foregone conclusion that generative AI in particular, or the overall effort to develop AI, will lead to negative outcomes. However, the accelerated timeline for more advanced AI brought about by generative AI has created a strong sense of urgency for Hinton and others, clearly prompting his mayday signal.
Gary Grossman is SVP of technology practice at Edelman and global head of the Edelman AI Center of Excellence.