Geoffrey Hinton, a British researcher and academic known as one of the godfathers of artificial intelligence, has quit his nearly decade-long association with Google to warn against the dangers of further developing AI without assessing its consequences.

Following an interview with The New York Times, the Canada-based researcher tweeted on Monday (May 1), “I left [Google] so that I could talk about the dangers of AI without considering how this impacts Google,” adding that the company itself has acted “very responsibly” in its pursuit of AI development. This year, Google began testing its own AI chatbot, Bard.

Hinton is a 75-year-old scholar and researcher from the UK. In 1970, he graduated from the University of Cambridge with a BA in Experimental Psychology, and in 1978, he earned a PhD in Artificial Intelligence from the University of Edinburgh in Scotland. He later worked as a professor of computer science at Carnegie Mellon University in Pennsylvania, USA.

In the 1980s, the US military financed the majority of AI research in the country, and Hinton declared that he was opposed to accepting such funding for research into the potential use of AI in combat. This led to his relocation to Canada.

He then moved to the University of Toronto’s Department of Computer Science as a member of the Canadian Institute for Advanced Research. He is an emeritus distinguished professor at the university and the author of many research articles on machine learning. Since 2013, he had worked part-time at Google as a Vice President and Engineering Fellow.

Hinton explained in a Coursera course that, typically, a computer programme is developed by hand for each individual task a machine is to accomplish (such as displaying a photo or a certain text to the user). In contrast, machine learning gathers a large number of examples and feeds them to the computer in order to train it to recognise the right output for a particular input. A machine learning algorithm then uses these examples to create a programme that accomplishes the task.
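
To make this contrast concrete, here is a minimal Python sketch (the spam/ham data, the word-count scoring rule, and all function names are illustrative assumptions, not material from Hinton’s course):

```python
# Hand-coded approach: a human writes the rule explicitly.
def is_spam_handcoded(message: str) -> bool:
    # The programmer must anticipate every telltale sign in advance.
    return "free" in message.lower() or "winner" in message.lower()

# Machine-learning approach: collect labelled examples and let an
# algorithm derive the rule. Here, a toy word-frequency classifier.
from collections import Counter

def train(examples):
    """examples: list of (message, label) pairs with label in {'spam', 'ham'}."""
    counts = {"spam": Counter(), "ham": Counter()}
    for message, label in examples:
        counts[label].update(message.lower().split())
    return counts

def predict(counts, message):
    # Score each class by how often its training words appear in the message.
    words = message.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

examples = [
    ("free prize winner claim now", "spam"),
    ("you are a winner free cash", "spam"),
    ("meeting moved to friday", "ham"),
    ("lunch on friday works for me", "ham"),
]
model = train(examples)
print(predict(model, "claim your free prize"))  # -> 'spam'
```

Real systems replace the toy scoring rule with statistical models trained on far more examples, but the division of labour is the same: the programmer writes the learning procedure, and the data supplies the rule.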

For instance, a machine can be taught to recognise various animals or plants by feeding it thousands of photos. According to the NYT interview, Hinton “embraced an idea called a neural network… a mathematical system that learns skills by analysing data” in 1972, while he was a PhD student at the University of Edinburgh. The goal, he said, was to use cutting-edge learning algorithms to tackle real-world issues. These algorithms were inspired by how the human brain functions with its networks of neurons, or nerve cells.

The Association for Computing Machinery, which gives the Turing Award for contributions to computer science, described “neural networks” in 2018 as “systems composed of layers of relatively simple computing elements called ‘neurons’ that are simulated in a computer.” These “neurons” interact with one another and only vaguely resemble the neurons in the human brain.
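
To illustrate that definition, here is a minimal Python sketch of such layered “neurons” (the sizes and the random, untrained weights are made up for illustration; a real network would learn its weights from data):

```python
import math
import random

random.seed(0)

def neuron(inputs, weights, bias):
    """One 'neuron': a weighted sum of its inputs passed through a squashing function."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # logistic activation

def layer(inputs, weight_matrix, biases):
    """A layer: several neurons reading the same inputs in parallel."""
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# A tiny two-layer network with arbitrary sizes, just to show the structure.
x = [0.5, -1.2, 3.0]                                                # three input values
w1 = [[random.uniform(-1, 1) for _ in x] for _ in range(4)]         # 4 hidden neurons
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]  # 2 output neurons
b2 = [0.0] * 2

hidden = layer(x, w1, b1)
output = layer(hidden, w2, b2)
print(output)  # two numbers between 0 and 1; training would adjust the weights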

A breakthrough came in 2012 when Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, “built a neural network to analyse thousands of photos and teach itself to identify common objects,” according to a story in The New York Times. Sutskever later co-founded OpenAI and became its chief scientist. The trio founded DNNresearch, which Google purchased for $44 million, incorporating elements of the technology into its social networking platform Google+ for photo search.

In a 2021 commencement speech at IIT-Bombay, Hinton said that neural networks are the best method for speech recognition, object classification in photos, and machine translation. “Neural networks with over a trillion parameters are capable of generating pretty complex stories or providing answers to a wide range of questions because they are so good at predicting the next word in a sentence… Although these large networks are still 100 times smaller than the human brain, they have already begun to pose some very intriguing questions regarding the origins of human intelligence,” he said.
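
At toy scale, “predicting the next word” can be illustrated with simple bigram counts, as in the Python sketch below (the corpus and function names are invented for illustration; the trillion-parameter networks Hinton describes learn vastly richer statistics than this):

```python
from collections import Counter, defaultdict

def build_bigram_model(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent follower of `word` seen in training."""
    followers = model[word.lower()]
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and then the cat slept"
model = build_bigram_model(corpus)
print(predict_next(model, "the"))  # -> 'cat' (follows 'the' twice, vs 'mat' once)
```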

The invention of artificial neural networks “may well be the start of autonomous intelligent brain-like machines,” according to the UK’s Royal Society in its feature on Hinton.

In his interview with The New York Times, Hinton raised three main concerns. First, he worries that the internet may soon be overrun with fake photographs, videos, and text, and that the typical person will “no longer be able to know what is true,” given that technologies like ChatGPT draw on vast amounts of online information to produce a final product.

Second, he worries that AI might eventually automate away human jobs. It eliminates the tedious work, he said, adding that “it may eliminate more than that.”

Third, the intelligence we are building is fundamentally different from the intelligence we now have, he added in a BBC interview. These are digital systems, whereas we are biological systems. This, he says, creates a significant gap in capability: digital systems can instantly analyse massive amounts of data, and in the future that capability might be utilised however “bad actors” see fit.

And Hinton is not the only one to have expressed such worries. In March, Tesla chief Elon Musk, Apple co-founder Steve Wozniak, and others signed an open letter calling for a six-month moratorium on the development of advanced AI systems, which they said pose “profound risks to society and humanity.”

They too expressed concerns about the spread of false information, and demanded that AI labs use such a pause to create a set of shared safety protocols for the design and development of advanced AI, to be audited by impartial outside experts. The letter also called for a proper legal and regulatory framework with safeguards, including watermarking systems “to help distinguish real from synthetic.”
