This is a temporary backup site for TRENDS MENA while our primary website is being restored following a regional disruption affecting Amazon Web Services cloud infrastructure in the GCC.

AD Ports Group 2024 net profit $484m

The Group's revenue increased 48 percent year-on-year.

TAQA net income $1.93bn in 2024

The company's revenues increased 6.7 percent year-on-year.

ADNOC L&S 2024 net profit $756m

The company's revenue increased by 29 percent to $3.54 billion.

ADNOC Distribution 2024 net profit down 7%

Excluding UAE corporate tax, net profit would have grown by 2.4 percent to $725m.

Maaden raises $1.25bn in sukuk offering

The sukuk was offered in five-year and 10-year tranches.

Prompted by GPT-4, Musk and experts call for halt in ‘giant AI experiments’

  • Musk was an initial investor in OpenAI, spent years on its board, and his car firm Tesla develops AI systems to help power its self-driving technology, among other applications
  • The letter, hosted by the Musk-funded Future of Life Institute, was signed by prominent critics as well as competitors of OpenAI like Stability AI chief Emad Mostaque

Paris, France: Billionaire mogul Elon Musk and a range of experts called on Wednesday for a pause in the development of powerful artificial intelligence (AI) systems to allow time to make sure they are safe.

An open letter, signed by more than 1,000 people so far including Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4 from Microsoft-backed firm OpenAI.

The company says its latest model is much more powerful than the previous version, which was used to power ChatGPT, a bot capable of generating tracts of text from the briefest of prompts.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” said the open letter titled “Pause Giant AI Experiments”.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” it said.

Canadian AI pioneer Yoshua Bengio, also a signatory, warned at a virtual press conference in Montreal “that society is not ready” for this powerful tool and its possible misuses.

“Let’s slow down. Let’s make sure that we develop better guardrails,” he said, calling for a thorough international discussion about AI and its implications, “like we’ve done for nuclear power and nuclear weapons.”

‘Trustworthy and loyal’

The letter quoted from a blog post written by OpenAI co-founder Sam Altman, who suggested that “at some point, it may be important to get independent review before starting to train future systems”.

“We agree. That point is now,” the authors of the open letter wrote.

“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

They called for governments to step in and impose a moratorium if companies failed to agree.

The six months should be used to develop safety protocols and AI governance systems, and to refocus research on making AI systems more accurate, safe, “trustworthy and loyal”.

The letter did not detail the dangers revealed by GPT-4.

But researchers including Gary Marcus of New York University, who signed the letter, have long argued that chatbots are great liars and have the potential to be superspreaders of disinformation.

However, author Cory Doctorow has compared the AI industry to a “pump and dump” scheme, arguing that both the potential and the threat of AI systems have been massively overhyped.