
European Parliament approves ‘pioneering’ rules to govern AI

The text passed with support from 523 EU lawmakers, with 46 voting against. (AFP)
  • The AI Act focuses on higher-risk uses of the technology by the private and public sector, with tougher obligations for providers
  • EU chief Ursula von der Leyen hailed the vote as ushering in a "pioneering framework for innovative AI, with clear guardrails"

Strasbourg, France – The European Parliament gave final approval on Wednesday to the world’s most far-reaching rules to govern artificial intelligence, including powerful systems like OpenAI’s ChatGPT.

The AI Act focuses on higher-risk uses of the technology by the private and public sector, with tougher obligations for providers, stricter transparency rules for the most powerful models like ChatGPT, and outright bans on tools considered too dangerous.

Senior European Union officials say the rules, first proposed in 2021, will protect citizens from the risks of a technology developing at breakneck speed, while also fostering innovation on the continent.

EU chief Ursula von der Leyen hailed the vote as ushering in a “pioneering framework for innovative AI, with clear guardrails.”

“This will benefit Europe’s fantastic pool of talents. And set a blueprint for trustworthy AI throughout the world,” she said on X.

The text passed with support from 523 EU lawmakers, with 46 voting against. The EU’s 27 states are expected to endorse the law in April before publication in the bloc’s Official Journal in May or June.

Brussels has been sprinting to pass the new rules since OpenAI’s Microsoft-backed ChatGPT arrived on the scene in late 2022, unleashing a global AI race.

There was a burst of excitement for generative AI as ChatGPT wowed the world with its human-like capabilities — from digesting complex text to producing poems within seconds, or passing medical exams.

Other generative AI tools, such as DALL-E and Midjourney, produce images, while still others create sounds from a simple prompt in everyday language.

But with the excitement came a swift realisation of the threats — not least that AI-generated audio and video deepfakes would turbocharge disinformation campaigns.

“Today is again a historic day on our long path towards regulation of AI,” said Brando Benifei, an Italian lawmaker who pushed the text through parliament with Romanian MEP Dragos Tudorache.

“We managed to find that very delicate balance between the interest to innovate and the interest to protect,” Tudorache told journalists before the vote.

Rules covering AI models like ChatGPT will enter into force 12 months after the law becomes official, while companies must comply with most other provisions in two years.

AI policing restrictions

The EU’s rules, known as the “AI Act”, take a risk-based approach: the riskier the system, the tougher the requirements — with outright bans on the AI tools deemed to pose the greatest threat.

For example, high-risk AI providers must conduct risk assessments and ensure their products comply with the law before they are made available to the public.

“We are regulating as little as possible and as much as needed, with proportionate measures for AI models,” the EU’s internal market commissioner, Thierry Breton, said.

Violations can see companies hit with fines ranging from 7.5 million to 35 million euros ($8.2 million to $38.2 million), depending on the type of infringement and the firm’s size.

There are strict bans on using AI for predictive policing and systems that use biometric information to infer an individual’s race, religion or sexual orientation.

The rules also ban real-time facial recognition in public spaces but with some exceptions for law enforcement, although police must seek approval from a judicial authority before any AI deployment.

Digital civil rights group Access Now said the bans did not go far enough.

“The final text is full of loopholes, carve-outs, and exceptions, which mean that it will not protect people, nor their human rights, from some of the most dangerous uses of AI,” it said in a statement.

EU ‘resisted pressure’

Since AI will likely transform every aspect of Europeans’ lives and big tech firms are vying for dominance in what will be a lucrative market, the EU has been subject to intense lobbying.

Watchdogs on Tuesday pointed to campaigning by French AI startup Mistral AI and Germany’s Aleph Alpha as well as US-based tech giants like Google and Microsoft.

They warned the law’s implementation “could be further weakened by corporate lobbying”.

“Many details of the AI Act are still open and need to be clarified in numerous implementing acts, for example, with regard to standards, thresholds or transparency obligations,” three watchdogs based in Belgium, France and Germany said.

Lawmaker Tudorache said the law was “one of the… heaviest lobbied pieces of legislation, certainly in this mandate”, but insisted: “We resisted the pressure.”

Organizations representing the European creative and cultural sectors welcomed the vote in a joint statement but urged the EU to ensure “these important rules are put into practice in a meaningful and effective way”.