


Firms lag as AI-driven fraud surges: Study

  • Among the fastest-growing threats is deepfake-enabled social engineering.
  • The United Arab Emirates and Saudi Arabia have the potential to take a leading role in next-generation fraud prevention.

Organizations worldwide are struggling to keep pace with a surge in AI-driven fraud, with only 7% saying they are well prepared to detect or prevent such threats, according to a new global study.

The report, published by the Association of Certified Fraud Examiners and analytics firm SAS, highlights widening gaps between rapidly evolving fraud tactics and corporate defenses, as criminals increasingly exploit accessible artificial intelligence tools.

Based on a survey of 713 anti-fraud professionals across eight regions, the 2026 Anti-Fraud Technology Benchmarking Report found that AI-powered scams — from deepfake impersonation to forged documents — are rising across industries.

“The data paints a worrisome picture: fraud is evolving faster than most organizations can defend against it,” said John Gill.
“AI-powered threats aren’t on the horizon — they’re already here, and they’re accelerating quickly.”

Deepfakes and scams surge

Among the fastest-growing threats is deepfake-enabled social engineering, with 77% of respondents reporting an increase over the past two years. Consumer scams and generative AI-driven document forgery followed closely, each cited by 75% of respondents, while 72% reported a rise in deepfake “digital injection” attacks.

More than half of those surveyed expect deepfake scams and AI-generated fraud to rise significantly over the next two years, underscoring concerns that attackers are gaining an early lead in the use of emerging technologies.

Adoption rises, but gaps remain

While companies are increasing investment in AI tools to counter fraud, adoption remains uneven. The study found that 25% of organizations currently use AI or machine learning in their anti-fraud programs, up from 18% in 2024, with another 28% planning adoption by 2028.

However, governance frameworks are lagging. Although 86% of respondents said accuracy is critical when deploying generative AI, only 18% reported testing their models for bias or fairness. Just 6% said they were fully confident in explaining how their AI systems make decisions.

For regulated sectors such as banking and insurance, the lack of explainability could pose regulatory and legal risks in addition to reputational damage.

Gulf markets seen as well positioned

The report highlighted the potential for Gulf economies, particularly the United Arab Emirates and Saudi Arabia, to take a leading role in next-generation fraud prevention.

“Few regions combine high growth with such strong regulatory leadership,” said Abed Hamandi.
“The UAE and Saudi Arabia are not constrained by legacy in the same way as many mature markets.”

Central banks such as the Central Bank of the UAE and Saudi Central Bank have played a key role in shaping digital ecosystems, enabling faster adoption of real-time, AI-driven fraud detection systems, the report said.

Budgets grow, but constraints persist

More than half of respondents (55%) said their organizations plan to increase spending on anti-fraud technology over the next two years. However, financial constraints remain the top barrier, cited by 84% as a major or moderate challenge.

Experts say the pace of investment may not be sufficient to counter the speed at which criminals are deploying new tools.

“Cybercriminals don’t have governance committees, and they don’t wait for budget cycles,” said Stu Bradley.
“Every quarter business leaders spend evaluating a technology is another quarter lawbreakers get to weaponize it.”

Emerging technologies gain traction

The study points to growing interest in advanced technologies, particularly generative AI and autonomous “agentic” systems.

While only 16% of organizations currently use generative AI for fraud detection, 58% plan to adopt it in the future. Among current users, key applications include phishing detection, risk assessment and report generation.

Agentic AI — systems capable of executing tasks autonomously — is expected to see faster uptake, with 31% of organizations planning adoption by 2028.

Physical biometrics remains the most widely deployed emerging technology, used by 45% of organizations, up from about one-third in 2022. By contrast, cloud-based fraud detection platforms and automation tools remain underutilized, at 10% and 29% respectively.

Quantum risks on the horizon

Looking further ahead, respondents expect quantum computing to reshape the fraud landscape sooner than anticipated. Around 62% said quantum technologies will materially impact fraud detection and prevention by 2030, while 11% said the impact is already being felt.

Race against time

The report concludes that organizations face a narrowing window to strengthen defenses as AI-driven fraud becomes more sophisticated and widespread.

Across sectors, the key differentiator will be the ability to deploy technology at scale while maintaining governance, speed and adaptability, it said.