Threat of AI-powered phishing is on the rise

Cybercrime economy
  • Generative AI is driving a surge in sophisticated spear phishing, automating attacks via social media scraping and credential stuffing, transforming the cybersecurity landscape.
  • The emergence of tools like WormGPT and FraudGPT on the dark web, coupled with cybercrime's ranking as the world's third-largest economy, is fueling innovative phishing techniques.

DUBAI — Not long ago, we could identify phishing emails by their poor spelling, grammatical errors, and awkward syntax. Common scams, like the Nigerian prince scheme, were easy to spot. Most of us have never encountered sophisticated, targeted spear phishing because the cost of researching individuals and crafting personalized messages has been prohibitive for criminals. With the advent of generative AI, however, this is rapidly changing, and as security professionals, we must brace for the implications.

Generative AI enables the complete automation of spear phishing, reducing its cost and expanding its usage. Consider the effort required for an attacker to compose an effective spear phishing message for a business email compromise (BEC). The attacker selects a target, researches their social media, identifies their closest connections, and discerns their interests. Using this information, the attacker crafts a personalized email with a tone designed to elude detection. This process involves meticulous lead tracking and psychological insight.

Could this process be automated? Absolutely. Attackers can automate the harvesting of social media content and employ credential stuffing to hijack accounts for intelligence gathering. Similarly, automation allows attackers to construct a detailed knowledge graph about a target’s life.

Armed with this knowledge graph, attackers could input highly personal details into a ChatGPT-like service, one lacking ethical restraints, to generate targeted and convincing spear phishing messages. They could even create a series of messages across various platforms, from email to social media, using multiple fabricated accounts, each with a persona tailored to exploit the target’s trust tendencies.

Signs suggest that this threat is looming. Reports of new attack tools available on the dark web, like WormGPT and FraudGPT, show that criminals are beginning to harness generative AI for malicious purposes, including phishing. While we haven’t seen large-scale, end-to-end automation yet, the components are aligning, and the economic forces driving cybercrime make the evolution of these technologies almost certain.

Within the cybercrime economy, specialization drives innovation. The World Economic Forum (WEF) estimates cybercrime as the world’s third-largest economy, trailing only the United States and China. Costs are expected to reach $8 trillion in 2023 and $10.5 trillion by 2025. This economy encompasses specialized vendors: some sell stolen credentials, others provide access to compromised accounts, and some offer IP address proxying over millions of residential IP addresses.

Additionally, phishing-as-a-service providers offer complete toolkits, from email templates to real-time phishing proxy sites. As vendors vie for criminal clientele, the greatest rewards will go to those offering comprehensive services at the lowest cost — a trend likely to accelerate spear phishing automation. We can envisage organizations specializing in various types of data collection on targets, data aggregation, and large language models (LLMs) tailored to specific industries or fraud types.

With the expected rise in spear phishing, organizations must strengthen their anti-phishing measures:

  • Uplevel Phishing Awareness Training: Regular education on phishing risks, recognizing suspicious emails, and handling potential phishing attempts is crucial. Traditional training focuses on identifying emails with spelling and grammar mistakes. However, training must now delve deeper, teaching people to scrutinize requests from untrusted, unverified sources. Simulated phishing campaigns should use well-written, professional messages, targeted at specific employees and seemingly from legitimate sources.
  • Defend Against Real-Time Phishing Proxies: Attackers often bypass multi-factor authentication (MFA) using real-time phishing proxies. They deceive users into submitting credentials and one-time passwords on controlled sites, which they then relay to the actual application for unauthorized access. Phishing-resistant authentication methods, such as FIDO2/WebAuthn security keys that bind credentials to the legitimate site, can defeat these proxies.
  • Defend More Rigorously Against Account Takeovers: Criminals control vast numbers of accounts via credential stuffing using bots. Beyond financial fraud, they use additional personal data, gathered through scraping, for more phishing attacks. Effective defense against bots involves extensive signal collection and machine learning application.
  • Use AI to Battle AI: As criminals use generative AI for fraud, organizations should employ AI in defense. F5 collaborates with organizations, leveraging signal collection and AI to combat fraud. F5 Distributed Cloud Account Protection monitors real-time transactions across user journeys, detecting malicious activity while keeping false positives low. Detecting fraud within applications mitigates the damage phishing causes. Because most traffic is encrypted, efficient AI-based inspection also depends on effective TLS orchestration to decrypt traffic for analysis.

What’s Next?

Generative AI presents new security challenges. The rise of automated spear phishing forces us to reevaluate our trust heuristics. Where a professional appearance once sufficed to establish trust, we now need more stringent protocols for verifying communications. In this era of misinformation campaigns, deepfakes, and automated spear phishing, increased suspicion is vital. Organizations must deploy AI in defense as rigorously as criminals exploit it.

Jim Downey is Senior Product Marketing Manager at F5.

The opinions expressed are those of the author and may not reflect the editorial policy or an official position held by TRENDS.