AI-driven investment tools are set to become the primary source of advice for retail investors by 2027, with usage projected to reach around 80% by 2028, according to Deloitte. This shift is already underway, with firms like Morgan Stanley integrating AI into advisory workflows — leveraging tools such as AI @ Morgan Stanley Debrief, which acts as an assistant during client meetings. Such advancements are the vanguard of the efficiency, scaled expertise and reshaped client interactions that will make advice more accessible and data-driven than ever before.
Large language models (LLMs) are still evolving, expanding beyond robo-advisors and progressing from chatbots to assistants to agents, narrowing the advice gap for retail investors.
The essential question is whether AI systems can deliver the emotional intelligence and empathy investors need before they will share their information and needs as openly as they do with their human advisors. In other words, could machines ever replace human advisors?
Framing the problem: The ‘trust equation’
Trust is at the core of successful advice relationships and is the outcome of four distinct factors taken together, forming the so-called ‘trust equation.’ We analyze AI’s impact on each factor, and the overall implications, below.
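In its standard formulation, often attributed to The Trusted Advisor, the equation is written as trust = (credibility + reliability + intimacy) / self-orientation: the three factors in the numerator build trust, while self-orientation, the degree to which an advisor puts their own interests first, erodes it.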
1. Credibility
Could AI provide expert, accurate advice? AI systems have demonstrated the potential to do so by processing vast datasets with speed and precision. Emerging research from Cornell University, for instance, used OpenAI’s GPT-4 to emulate expert investment decision-making and showed that it could deliver excess returns. Agentic AI advice systems will also integrate collaboration with human experts into their workflows, strengthening the credibility of these human-AI compound systems.
2. Reliability
Will AI systems consistently deliver sound guidance, coaching and recommendations? AI is consistent and free from human error, but its ‘black box’ nature raises transparency concerns. Research at MIT on the use of LLMs for advice indicates that they can offer expert guidance and that their expertise improves with additional modules and practices, such as retrieval-augmented generation (RAG) for context enrichment.
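For readers unfamiliar with the pattern, the sketch below illustrates the basic RAG flow in Python: retrieve the reference documents most relevant to a client’s question, prepend them to the prompt, then ask the model. The embed() and llm_complete() helpers are hypothetical placeholders standing in for whatever embedding and language models a firm deploys; this is a minimal illustration under those assumptions, not the setup used in the MIT research.

```python
# Minimal sketch of retrieval-augmented generation (RAG) for context enrichment.
# embed() and llm_complete() are hypothetical placeholders, not a vendor API.
from typing import Callable, List, Tuple

def retrieve(query_vec: List[float],
             docs: List[Tuple[List[float], str]],
             k: int = 3) -> List[str]:
    """Return the k documents whose embeddings score highest against the query."""
    def dot(a: List[float], b: List[float]) -> float:
        return sum(x * y for x, y in zip(a, b))
    ranked = sorted(docs, key=lambda d: dot(query_vec, d[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def advise(question: str,
           embed: Callable[[str], List[float]],
           llm_complete: Callable[[str], str],
           docs: List[Tuple[List[float], str]]) -> str:
    """Enrich the prompt with retrieved context before querying the model."""
    context = "\n".join(retrieve(embed(question), docs))
    prompt = (
        "Answer the client's question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm_complete(prompt)
```

Grounding answers in retrieved documents in this way is one practical route to addressing the transparency concern raised above, since the sources behind a recommendation can be surfaced to the client.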
3. Intimacy
Building meaningful, trust-based relationships requires an understanding of personal experiences. AI can recognize sentiment, but it lacks the lived experience and cultural nuance that human advisors bring. New research indicates that investors perceive AI-generated forecasts as less credible than those from human analysts, highlighting the challenge AI faces in establishing intimacy. However, AI adoption in other industries, such as online companionship (e.g., Replika AI and Character AI), suggests that consumers are increasingly spending hours on these platforms building human-AI relationships. Moreover, a recent study by the Ontario Securities Commission found that investors may be equally receptive to advice from AI systems as they are to that from human advisors.
4. Self-orientation
While AI operates without personal biases, its training data or deployment by financial firms may introduce conflicts of interest, such as favouring proprietary investment products. Ensuring AI acts in the best interests of clients necessitates regulatory oversight to prevent corporate incentives from biasing recommendations. In July 2023, the SEC proposed rules to prevent broker-dealers and advisers from using predictive analytics in ways that prioritize firm interests over those of investors, requiring firms to assess and mitigate such conflicts. The SEC warned that AI models could favour proprietary products or higher-revenue recommendations over client needs. Regulators and industry leaders must engineer AI systems to balance these incentives against a client’s best interests.
These considerations, viewed holistically and given the current state of AI models, lead us to believe that the near-term opportunity is a hybrid model in which AI enhances human expertise.
Bridging trust and tech with hybrid models
This phase of transformation calls for a balanced approach that capitalizes on AI systems while preserving the human touch essential for building trust-based relationships. For instance, a report by the London Stock Exchange Group indicates that over 80% of investors are open to AI supporting advisors in portfolio management, suggesting a shift towards AI-assisted advisory services.
On one hand, AI systems offer unparalleled capabilities in processing vast amounts of data, detecting subtle market trends and delivering consistent, data-driven recommendations. On the other hand, human advisors bring empathy, ethical considerations and personalized engagement to the table — qualities that are vital during periods of economic volatility and uncertainty. Together, these complementary strengths can address the technical and emotional needs of clients. Studies from MIT Sloan show that while LLMs can provide sound financial insights when augmented by finance-specific modules, they still require human oversight to explain nuances and build the rapport that underpins long-term client relationships.
Moreover, the convergence of AI and human advisors encourages greater transparency in the advisory process. As institutions invest in improving the explainability of machine-driven recommendations, they are also committing to preserving the personalized interactions that clients value. This collaborative approach sets a promising stage for the near-term evolution of wealth management through a seamless blend of human intuition and AI innovation.
Could machines ever replace human advisors?
AI is not a futuristic concept in wealth management – it is already transforming how investors access financial guidance. The aforementioned Ontario Securities Commission study also found that, among participants who used AI applications like ChatGPT, 29% accessed financial or investment-related information, advice or recommendations through these tools. Given AI’s progression on the trust dimensions and its adoption via hybrid models, we posit that the future of advice will follow three models, with part of the market adopting AI-only advisors:
• Bespoke, human advisory for select clients with complex needs.
• Semi-autonomous AI-human hybrid models.
• Fully automated AI-agentic advisors.
AI is advancing rapidly, outpacing current advisory methods sooner than expected. Yet, while the technology may soon exceed human performance on technical tasks, it cannot yet replicate the empathy required in complex social contexts. Regulatory and social barriers to adoption also remain.
Ultimately, the future of wealth management will not be AI replacing all human advisors, but AI empowering many — creating an ecosystem where AI enhances expertise, deepens client relationships and expands access to guidance at scale.
This Op-Ed was first published by the World Economic Forum.