
Sebastian Thrun on the next wave of AI
Tech plays a big part in rewiring globalization, so Think:Act Magazine turned to Stanford professor and AI pioneer Sebastian Thrun for some guidance. His future view is that AI is about to unlock even more human productivity and ingenuity.
If there were a poster child for impact inventing, it could well be German-born Sebastian Thrun. He has given billions of people a better understanding of where they’re going by laying the foundations for Google Street View, and he used this mapping data to turn autonomous vehicles from an academic pipe dream into a daily reality (think Waymo robo-taxis, which as of July 2025 had completed 100 million fully autonomous miles on the road).
After winning the first robotic vehicle race in the Nevada desert two decades ago for his Stanford University team and later launching the secretive moonshot factory Google X, the Silicon Valley veteran has kept up his game. Thrun has invested in or headed numerous startups to infuse AI and automation into areas ranging from aviation to call centers to online education. Thrun spoke with Think:Act Magazine from San Francisco, where he was between meetings with a fresh crop of stealth companies trying to reinvent how we shop online.
Sebastian Thrun
is a German-American serial entrepreneur, educator and scientist who has spent his life reimagining how humans and machines can work together.
After an early academic post at Carnegie Mellon University, Thrun left for Stanford University in 2003, where, alongside his professorship, he began moving into the Silicon Valley ecosystem. Among his many achievements in innovation, he was a central figure in early self-driving car breakthroughs, co-founded the online education platform Udacity, was the founder and CEO of Kitty Hawk Corporation and co-founded Google X.
From his base in California, Thrun maintains a reputation as a constant innovator who seeks to apply AI not just for efficiency, but as a tool for human progress.
You’ve seen AI evolve from academic research to widespread enterprise deployment. If you were to look a few years ahead, which AI capabilities do you think will have the biggest impact on how businesses operate?
Artificial intelligence will shift from a global engine to a local engine, which means companies will have their own version of an AI that’s intimately familiar with their business and individual people too. In the future, instead of you interviewing me, your AI will interview my AI and get exactly the same result. [AI] having intimate knowledge about a company, its history, the network of people, the information and documents involved will make it infinitely more efficient to run a company.
What are some of the potential blind spots that corporate leaders might overlook when thinking about deploying AI?
Most of the CEOs that I meet, especially outside the core tech industry, are a bit like deer in the headlights. They hear all these claims about agentic AI and how they could get by with half their staff, but they have a hard time finding their way into this world. That’s in part because there are a lot of over-hyped claims and there is insufficient information concerning how to make AI practical. On top of all that, there is also a lot of internal fear.
People are really scared about what’s going to happen to their jobs should some of these claims about AI come to fruition – especially the most extreme claims that all human labor will be replaced by AI within the next three to five years. The attitude I would take is a playful one: Use AI to try things out, because true innovation is nonlinear. Linear innovation means we take what we do today and make it more efficient, but nonlinear innovation is about finding entirely new ways of operating. And those nonlinear innovations are still not known. You might stumble upon them if you experiment …
What’s a good example you could give for this playful approach yielding results?
The AI field stumbled across large language models. When LLMs were first launched, no one understood that they could write software or speak Italian. It was discovered by happenstance that they’re good coders. When the first wave of ChatGPT launched, I don’t think people understood that every software engineer today (in 2025) would already be twice as good as they were back then.
If you look back on your experience at Google, where you headed the self-driving car project among other things, what advice do you have for business leaders facing the question of which AI applications will be transformative versus those that are just incremental improvements?
That’s obviously incredibly hard to predict. I would say anybody who creates content will be at least five times as efficient – especially if the content is repetitive in nature. Obviously, in robotics, we can already see a momentous change with self-driving cars, which is also AI. For information access, LLMs are already proving to be massively transformational. The key will be to get them out of one very specific application, like search, and into these different verticals. Most human labor is repetitive and will be automated in the next 10 years or so.
With AI playing a larger role in the workplace, how do you envision the optimal division of labor between humans and AI systems?
The most important thing is to look at AI not as a replacement per se, but as an enhancement. Just like agricultural machinery does not really replace people but enhances them. People become more effective in what they do in certain job domains, which leads to growth. Rarely does AI directly replace people – what’s more common is AI working together with people.
“Rarely does AI directly replace people – what’s more common is AI working together with people.”

DRIVING CHANGE
With over two decades of breakthroughs in robotics labs and Silicon Valley boardrooms behind him, Sebastian Thrun has been hailed as one of the most influential minds in the field of AI.
AI also creates new jobs. Prompt engineering is one activity that was practically unknown before ChatGPT came around. Are there any specific skills that corporate leaders or HR departments should focus on when it comes to upskilling or reskilling their workforce?
The world has an infinite thirst for software engineers. And now many more people will engage in software engineering because it is becoming relatively easy. Anyone can basically program in English, which is a great programming language, if you think about it. Now I can go to the computer and talk to it. That’s the way a manager would “program” something. What this means is that the field of software engineering will grow rapidly. And going forward, the most important thing is to be a bit of a generalist. Don’t box yourself in. In the past, we tended to become more specialized – like medical doctors who would only look at chest X-rays their entire lives. Those days are over. We now live in a world where, relative to the human lifespan, it’s impossible not to change. The new normal means that whatever you believe your job is, you can be sure that it is going to be different five years from now.
How will hierarchies and job functions change in response to the introduction and spread of AI?
Companies will have less management – and fewer layers of it. I hope that AI will really assist in the effective management of people, and I believe it’s possible. A manager could easily manage 30 or 40 people in the tech industry, but typically, the norm is seven. Provided there’s a really good AI that assists in decision-making, communication and so on, companies will move faster. Every human being who has a job or is at school must play with these new AI tools every day and try them out. Just keeping your eyes closed and looking the other way is not an option anymore.
Every bloc or superpower has its own big contenders in the field of AI. Do you see strategic value for companies in developing their own localized models, whether they want to or are forced to by laws?
It’s a cost factor and depends on what the model should do. For most companies, it makes no sense to develop anything from the ground up. There are, luckily, many foundation models that are open-source and which you can just copy over. We live in a world with striking openness, and you can either use those models and prompt them with the correct materials, or you can fine-tune them to meet your needs, which in almost all cases will be satisfactory.
Are you worried that there might be a backlash, either in the workplace or in greater society, due to the speed or scope of these changes?
A big worry of mine, at this moment, is Europe. When it comes to AI, the US is obviously leading – and China is now a close second. Europe has more and better educated people than the US, but I’m still missing the willingness of Europe to effectively create a Manhattan Project for AI. I would love to see Europe say: “AI is coming and we need to be on top of it.” Instead, the public dialogue is often about regulations and risks and security.
Do you think there is still enough room left in the market to catch up?
Absolutely, yes. Look at what Elon Musk has just done with [the chatbot] Grok, where he’s become a top contender within a few years, and it didn’t take that much work. The European attitude tends to be more on the pessimistic side, worrying a lot about abuse. Where does it come from? When I talk to my European friends, I often feel they see the world more like a zero-sum game, where you talk about how to divvy up the existing pie. In Silicon Valley, our attitude is: “How do we grow the pie?” Europe should be much more engaged on the creation side of things because nearly all AI innovation comes out of the US, some out of China – and even China has regulations. I’m not against regulations, but I think regulation should occur when it’s clear what the abuse is. Take drones. The drone world is effectively unregulated in Ukraine, and you see massive innovation.
“Whatever you believe your job is, you can be sure that it is going to be different five years from now.”
When you developed self-driving cars for Google, dealing with complex regulations was a crucial aspect. Robo-cars, after all, have to be really safe. Are there any lessons for other industries?
In the US, the approval hurdles are very low compared with Europe. Waymo has now driven 100 million miles and has never harmed a person. We have invented something that is measurably and clearly safer than human driving. And the reason why this worked is because companies like Waymo, Cruise or Tesla are liable if they kill people. Liability goes a long way to regulate – and it’s a good way to regulate – because it ties together the company’s success with its actions as well as shifting the onus of figuring out whether or not your product is safe enough to the company. The fact that we are relatively open to these new technologies has helped propel Waymo into the world, and it’s going to be good for people. Which brings me to another point: You never really know who benefits, because the people who won’t die in traffic accidents will obviously never find out. Just imagine it’s you who would be run over by some idiot drunk driver, and now you get to live. How amazing would that be?