
BYD logs record EV sales in 2025

It sold 2.26 million EVs against Tesla's 1.22 million by the end of September.

Google to invest $6.4bn

The investment is its biggest ever in Germany.

Pfizer poised to buy Metsera

The pharma giant improved its offer to $10bn.

Ozempic maker lowers outlook

The company posted tepid Q3 results.

Kimberly-Clark to buy Kenvue

The deal is valued at $48.7 billion.

“Efficiency is over. The future is about invention and creativity”: Gerd Leonhard

Gerd Leonhard is the chief executive officer (CEO) of the Futures Agency and the author of Technology vs. Humanity.
  • “We need global guardrails, like banning autonomous killing machines or AI-run nuclear systems,” Gerd Leonhard tells TRENDS.
  • “If democracy is to prevail, we must limit technology to being a tool, not a purpose,” he adds.

Gerd Leonhard, the chief executive officer (CEO) of the Futures Agency and the author of Technology vs. Humanity, has an ominous warning for anyone willing to listen to him. He believes that humanity is hurtling toward a “clash between man and machine” — one that will test not just our technological prowess but our moral and social imagination.

Speaking to TRENDS, Leonhard rues the stark fact that “the machines are now better than us at being efficient” even as artificial intelligence (AI) reshapes industries and societies at dizzying speed.

What’s at the heart of the knowledge economy of 2030? 

We’re going beyond facts and figures — that’s for machines. Our knowledge is complex—emotional, spiritual, bodily. We think with our body, not just our brain.

Automation will remove many simple jobs. In Dubai, 4,000 taxi drivers could be replaced. In India, millions of engineers will face disruption. We need a new work structure.

Job losses from AI are inevitable, and education seems unprepared. What should we teach the next generation? 

We used to teach kids to gather knowledge and data to get jobs. That’s no longer true. Machines now have data, information, and knowledge.

We must learn what makes us human—wisdom, understanding, intuition, imagination, creation. I’d rather my kids be creative than just learn coding.

India produces over a million engineers a year—much of that work will soon be done by machines. We need supervisors and creative thinkers, not just engineers.

Does AI risk making humans dumber by outsourcing thinking?

Yes, it’s possible we’ll use technology and end up doing nothing ourselves. It’s human nature to take shortcuts. This is the first time in history we must consider not using a tool because it may not be good for us. AI is like a drug: it makes you lazy but makes you think you’re on top of it.

The coming clash between man and machine.

We must create standards—like Europe debating mobile bans in classrooms. The key is using AI’s good side without too many bad effects. The problem is we spend all the money on making good things, not on preventing bad ones. Social media was the same—it made us mistrust each other. If we repeat that with AI, it could be social media multiplied by 1,000.

When Technology vs. Humanity came out in 2016, AI was still distant. Much of what you warned about has come true. What do you see changing most in the next decade?

The subtitle of my book was “The coming clash between man and machine.” That’s what we have today. Technology took longer, but it’s now far ahead of our human capacity to collaborate. We’re inventing not just smart machines, but nuclear fusion reactors and synthetic biology tools that can be automated. Anything that was science fiction is becoming real.

These are powerful things. We can change the human genome, maybe end cancer in 20 years, or make jet fuel from plants. Machines that think and learn will help in prediction, analysis, and cutting emissions. But the main problem is that we invent these crazy things without agreeing on how to use them, what the standards are, or who is in charge.

You’ve warned of an “AI incident” that might force global regulation. What could that look like?

The most likely thing is not that AI will have bad intentions, but that we’ll believe the information it gives and change our behavior. A likely incident could be a stock market crash—everyone selling at once because of orchestrated or false data. We’d have to shut down the markets and reorganize their logic because we’re already using too much AI in finance.

The rise of AI and social media coincides with the rise of authoritarianism. Is there a connection?

If democracy is to prevail, we must limit technology to being a tool, not a purpose. We need to shift from profit and power to people, purpose, and prosperity.

Right now, the US is moving toward using technology as an instrument of power. The Trump administration says we should have fewer AI rules to make more money. That’s dangerous for democracy.

China has state capitalism, the US has corporate capitalism, and Europe has social capitalism—more human benefit. We’re going to see conflict between “digital humans” and those who want to stay human but use technology wisely.

The UN should lead this discussion but has failed. We need global guardrails — like banning autonomous killing machines or AI-run nuclear systems.

You’ve said “the future is no longer about efficiency, it’s about being human.” What does that mean?

Machines now beat us in efficiency. But we’re better at complex decisions that require moral judgment. Morality can’t be taught to machines; moral machines don’t exist.

Efficiency is over. The future is about invention, creativity, understanding—what humans have always done. If you become like a machine, you’ll never have a job again.

Any final thoughts on the new knowledge economy?

The challenge is for governments and schools to stop obsessing over STEM. We must revive music, film, arts, and sports while keeping knowledge.

Knowledge is abundant; human judgment is not. Education must recognize we need different kinds of people, not robots.