

AI can screen talent, only humans can see it

  • Systems trained on narrow behavioral norms may inadvertently penalize those who express themselves differently.
  • In societies where interpersonal trust and relational fit remain central to business culture, removing human judgment entirely could be counterproductive.

In boardrooms from Dubai to Riyadh, artificial intelligence is rapidly reshaping how organizations operate. Across the Middle East and North Africa, governments are championing national AI strategies, digital transformation agendas, and knowledge economies designed to reduce reliance on hydrocarbons. Yet as companies explore automation in recruitment, a central question remains: can technology alone deliver fair, diverse, and future-ready talent pipelines?

As a researcher interested in the intersection of AI and hiring, I argue that while AI can improve efficiency and consistency, human insight remains essential at every stage of recruitment. This insight is particularly important in MENA, where labor markets are undergoing a profound structural transformation.

Recruitment is not a single yes-or-no decision. It is a sequence of interconnected funnels. Organizations define roles, craft job descriptions, attract applicants, screen CVs, conduct interviews, extend offers, and negotiate acceptance. At each stage, both employers and candidates make choices. AI increasingly shapes many of these decisions, but its influence is far from neutral.

Defining the job

Consider the first funnel: defining the job itself. It is plausible that as organizations begin to experiment with AI tools to inform role descriptions and identify in-demand skills, job definitions may evolve, potentially incorporating new skill combinations or emphasizing capabilities differently than in the past. However, we currently lack comprehensive empirical evidence from MENA-specific contexts. What is clear is that the way jobs are scoped will influence who applies and who feels they belong in a role, making careful human oversight indispensable.

AI systems are trained on historical data. If that data reflects yesterday’s assumptions, those assumptions risk being scaled at speed. Gendered language, narrow definitions of leadership, or implicit preferences for particular educational backgrounds can quietly reappear in newly generated job descriptions. In a region where inclusion agendas are gaining momentum, from women’s workforce participation in Saudi Arabia to Emiratization policies in the UAE, blindly automating the past could undermine carefully designed reform.

Attraction and application

The second funnel is attraction and application. AI has dramatically increased the potential volume of applications. Candidates can now generate tailored CVs and cover letters quickly, and some platforms even allow multiple submissions with minimal effort. At first glance, this seems positive: a larger pool should mean greater choice. Yet higher application volumes inevitably translate into higher rejection rates. In my research, I have shown that rejection shapes behavior. In high-rejection environments, candidates are less likely to reapply. Crucially, evidence shows that women are particularly attuned to perceptions of fairness in the hiring process. Where rejection feels opaque or insufficiently explained, they may opt out altogether. This has important downstream implications for the composition and diversity of candidate pools.

For MENA economies striving to raise women’s labor force participation and integrate young graduates into private sector roles, this is not trivial. If AI-driven efficiency quietly increases rejection rates without transparent communication, it could narrow, rather than widen, future pipelines.

Screening introduces another paradox. Algorithms can, in certain contexts, be less biased than humans. Human recruiters are susceptible to similarity bias, favoring candidates who share their school, nationality, or hobbies. Properly designed AI systems can ignore such irrelevant signals and focus on structured indicators of performance.

New challenge: Homogenization

Yet the widespread use of AI by candidates creates a new challenge: homogenization. When everyone uses the same tools to polish their CVs, profiles begin to look strikingly similar. The language becomes uniformly professional, filled with identical buzzwords. Distinguishing genuine excellence from algorithmic gloss becomes harder. When signals become noisier, employers may respond in predictable ways. They may interview more candidates, raising costs, or reject more at earlier stages, reinforcing high-rejection dynamics. They may respond by privileging alternative signals that appear more informative, yet are often more socially exclusive, such as employee referrals, closed professional networks, or elite educational credentials, thereby reintroducing inequality through channels that operate outside the algorithm itself. In a region where youth unemployment remains a pressing concern, particularly in North Africa, the potential for mass rejection at scale carries social as well as organizational implications.

The interview stage adds further complexity. AI-powered video platforms can analyze tone, facial expression, and speech patterns, turning unstructured interactions into structured data. This promises consistency and scalability, which is attractive for employers managing thousands of applicants across multiple countries.

Human judgment

However, cultural nuance matters deeply in MENA. Communication styles differ across the Gulf, the Levant, and North Africa. Comfort with camera-based interviews varies by age, gender, and socioeconomic background. Systems trained on narrow behavioral norms may inadvertently penalize those who express themselves differently. In societies where interpersonal trust and relational fit remain central to business culture, removing human judgment entirely could be counterproductive.

This is where human insight becomes indispensable. A hiring manager understands team dynamics, organizational culture, and strategic trade-offs. They can recognize potential that does not neatly match a predefined template. They can interpret context, for example, a candidate whose career path reflects national service obligations, family responsibilities, or entrepreneurial risk-taking in volatile markets.

AI can identify patterns at scale. Humans can interpret meaning.

For organizations across MENA, the imperative is not to resist AI but to govern it wisely. That means clear accountability for tool selection and training data. It means monitoring not only who is hired, but who applies and who keeps applying. It means stress-testing job descriptions for unintended bias and auditing rejection rates for demographic impact. It means blending structured AI assessments with thoughtfully designed human interviews.
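The auditing step described above can be made concrete. The following sketch, written as an illustration only, computes rejection rates per demographic group from a simplified applicant log and flags disparities using the widely cited "four-fifths rule" heuristic. The column layout, function names, and 0.8 threshold are assumptions for illustration, not a prescribed implementation or a legal standard.

```python
from collections import defaultdict

def rejection_rates(applications):
    """Compute per-group rejection rates from (group, was_rejected) records.

    `applications` is an iterable of (group, was_rejected) pairs --
    a deliberately simplified stand-in for a real applicant log.
    """
    totals = defaultdict(int)
    rejected = defaultdict(int)
    for group, was_rejected in applications:
        totals[group] += 1
        if was_rejected:
            rejected[group] += 1
    return {g: rejected[g] / totals[g] for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the best-performing group's selection rate -- the common
    "four-fifths rule" heuristic for adverse impact."""
    selection = {g: 1 - r for g, r in rates.items()}
    best = max(selection.values())
    return {g: s / best < threshold for g, s in selection.items()}

# Toy applicant log: (demographic group, was_rejected)
log = [("A", True), ("A", False), ("A", False), ("A", False),
       ("B", True), ("B", True), ("B", True), ("B", False)]
rates = rejection_rates(log)          # A: 0.25, B: 0.75
flags = adverse_impact_flags(rates)   # group B falls below the 4/5ths line
```

A real audit would of course draw on far richer data and intersecting attributes; the point of the sketch is simply that monitoring who applies, who is rejected, and at what rate is a routine computation once the data is logged.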

Above all, it requires recognizing that technology is not a neutral destiny. The region’s ambitious visions, from economic diversification to knowledge-based growth, depend on attracting and nurturing diverse talent. AI can accelerate that mission or quietly distort it.

The future of hiring in MENA will undoubtedly be digital. But its success will hinge less on the sophistication of algorithms than on the judgment, transparency, and cultural intelligence of the people who deploy them.