Nobel-winning physicist ‘unnerved’ by AI technology he helped create

British-Canadian cognitive psychologist and computer scientist Geoffrey Hinton, known as the 'godfather of AI', speaks during the Collision Tech Conference at the Enercare Centre in Toronto, Ontario, Canada, on June 28, 2023. AFP
  • John Hopfield, a professor emeritus at Princeton, joined co-winner Geoffrey Hinton in calling for a deeper understanding of the inner workings of deep-learning systems
  • With the meteoric rise of AI - and the fierce race it has sparked among companies - the technology has faced criticism for evolving faster than scientists can fully comprehend

Washington, United States – A US scientist who won the 2024 Nobel physics prize for his pioneering work on artificial intelligence said Tuesday he found recent advances in the technology “very unnerving” and warned of possible catastrophe if not kept in check.

John Hopfield, a professor emeritus at Princeton, joined co-winner Geoffrey Hinton in calling for a deeper understanding of the inner workings of deep-learning systems to prevent them from spiraling out of control.

Addressing a gathering at the New Jersey university via video link from Britain, the 91-year-old said that over the course of his life he had watched the rise of two powerful but potentially hazardous technologies — biological engineering and nuclear physics.

“One is accustomed to having technologies which are not singularly only good or only bad, but have capabilities in both directions,” he said.

“And as a physicist, I’m very unnerved by something which has no control, something which I don’t understand well enough so that I can understand what are the limits which one could drive that technology.”

“That’s the question AI is pushing,” he continued, adding that despite modern AI systems appearing to be “absolute marvels,” there is a lack of understanding about how they function, which he described as “very, very unnerving.”

“That’s why I myself, and I think Geoffrey Hinton also, would strongly advocate understanding as an essential need of the field, which is going to develop some abilities that are beyond the abilities you can imagine at present.”

Hopfield was honored for devising the “Hopfield network” — a theoretical model demonstrating how an artificial neural network can mimic the way biological brains store and retrieve memories.
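
For readers curious what a "Hopfield network" actually does, the following is a minimal, purely illustrative Python sketch of the underlying idea: binary patterns are stored in a symmetric weight matrix with a simple Hebbian rule, and a corrupted cue is iteratively settled back toward the nearest stored memory. The function names and the toy pattern are invented for this example; it is a sketch of the concept, not the prize-winning formulation.

```python
import numpy as np

# Illustrative Hopfield-style associative memory (a conceptual sketch only).
# Patterns are vectors of +1/-1; weights follow a simple Hebbian rule;
# recall repeatedly applies a sign update until the state stops changing.

def train(patterns):
    """Build a symmetric weight matrix from +1/-1 patterns (Hebbian rule)."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)                 # no self-connections
    return w / patterns.shape[0]

def recall(w, probe, steps=10):
    """Iteratively settle a noisy probe toward a stored pattern."""
    state = probe.copy()
    for _ in range(steps):
        new_state = np.sign(w @ state)
        new_state[new_state == 0] = 1      # break ties toward +1
        if np.array_equal(new_state, state):
            break                          # reached a fixed point (a stored "memory")
        state = new_state
    return state

# Toy example: store one 8-bit pattern, flip two bits, and recover it.
stored = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])
w = train(stored)
noisy = stored[0].copy()
noisy[0], noisy[3] = -noisy[0], -noisy[3]
print(recall(w, noisy))                    # typically prints the stored pattern
```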

His model was improved upon by British-Canadian Hinton, often dubbed the “Godfather of AI,” whose “Boltzmann machine” introduced the element of randomness, paving the way for modern AI applications such as image generators.

Hinton himself emerged last year as a poster child for AI doomsayers, a theme he returned to during a news conference held by the University of Toronto, where he serves as a professor emeritus.

“If you look around, there are very few examples of more intelligent things being controlled by less intelligent things, which makes you wonder whether when AI gets smarter than us, it’s going to take over control,” the 76-year-old told reporters.

Civilizational downfall

With the meteoric rise of AI capabilities — and the fierce race it has sparked among companies — the technology has faced criticism for evolving faster than scientists can fully comprehend.

“You don’t know that the collective properties you began with are actually the collective properties with all the interactions present, and you don’t therefore know whether some spontaneous but unwanted thing is lying hidden in the works,” stressed Hopfield.

He evoked the example of “ice-nine” — a fictional, artificially engineered crystal in Kurt Vonnegut’s 1963 novel “Cat’s Cradle” developed to help soldiers deal with muddy conditions but which inadvertently freezes the world’s oceans solid, causing the downfall of civilization.

“I’m worried about anything that says… ‘I’m faster than you are, I’m bigger than you are… can you peacefully inhabit with me?’ I don’t know, I worry.”

Hinton said it was impossible at present to know how to escape catastrophic scenarios: “that’s why we urgently need more research.”

“I’m advocating that our best young researchers, or many of them, should work on AI safety, and governments should force the large companies to provide the computational facilities that they need to do that,” he added.