AI and society

Think:Act Magazine "It’s time to rethink AI"

May 15, 2024

Our way of life will change with artificial intelligence

by Steffan Heuer
Artworks by Carsten Gueth

In this piece, experts look at how artificial intelligence is likely to impact our wider society, and why AI governance is your business: All of us share a responsibility to lead efforts to contain these novel systems while we still can.

Perhaps it's precisely because he has been at the forefront of AI innovation that Mustafa Suleyman's musings on the technology's downsides carry so much weight. As one of the co-founders of AI pioneer DeepMind, which is now part of Alphabet, he has become increasingly worried about the unexpected consequences if companies and countries wait too long to rein in the systems they race to build. Here, he delves into the reasons why the world urgently needs better AI governance.

Mustafa Suleyman, co-founder of DeepMind and CEO of Microsoft AI.

In your book The Coming Wave you warn that we may not be able to control the tsunami of change ahead. Are we on the cusp of another Luddite movement?

One of the key arguments in my book is about removing the need for, or the likelihood of, a neo-Luddite movement. The point about the original Luddites is that they reacted to a failure of technology. Those building and using it failed to take account of the immediate social and political circumstances of its use – and as a result people lost their livelihoods and their whole world fell apart in just a few years. But over the long term, the descendants of those people who revolted and became Luddites lived much wealthier and more comfortable lives. This has been the norm for technology throughout history; people build and use it with little regard for the consequences. So the lesson is to learn how to roll out new technologies in ways that avoid such an outcome in the first place. We have to make sure AI and related powerful technologies are both beneficial and controlled.

Mustafa Suleyman

Mustafa Suleyman is a British AI researcher and co-founder of DeepMind, where he, a college dropout, served as head of applied AI. DeepMind was acquired by Google in 2014. Suleyman co-founded Inflection AI, a generative AI company, in 2022. In March 2024, Microsoft hired him to lead a new consumer AI unit.

You argue we will need containment to avoid the potentially catastrophic consequences of AI. What does that look like?

As the conversation around technology has exploded, we are still missing a unified approach to understanding, mitigating and controlling these spiraling new powers: a general-purpose concept for a general-purpose revolution. Containment fits the bill. Containment is what will let us keep control of history's most powerful technology as it rolls out at speed. It's an overarching lock uniting cutting-edge engineering, ethical values, government regulation and international collaboration. Containment is, in short, the elusive foundation for building the future. So when discussing something like AI, I think we need to be discussing how to contain it. 

What makes you optimistic that we have a chance to bring politicians, citizens and tech companies to an agreement?

The challenge is enormously steep. No element of containment is easy or has obvious precedents. Indeed, the whole historical drift of technology is that it has never been contained. From stone tools to the printing press, fire to electricity, it has always proliferated far and wide, spread everywhere and rapidly improved. Moreover, the incentives driving technology today are immense – geopolitical competition, huge financial rewards, an open research culture ... Try stopping all that. So I think that containment looks almost impossible in many respects, but equally think we need to keep going. It must be possible, for all our sakes.

"The whole historical drift of technology is that it has never been contained."

Mustafa Suleyman

Co-founder of DeepMind and CEO of Microsoft AI

What can we as individuals do to help sculpt this coming wave?

There are 10 steps to containment that work at many different levels – a positive, inasmuch as it creates room for everyone to get involved. In fact, I'd go further: Containment will only work precisely when everyone gets involved, building a global movement behind it. Think about climate change; the meaningful response here only started when it became a major priority for ordinary people, not just scientists or activists. That forced change on companies and governments otherwise minded to ignore it.

So the first thing to do, whoever you are, is to push for better results, demand responsible, beneficial technologies, see this as a personal and societal priority and not something to be dropped down the agenda. Also, if you are a critic here, then get involved – don't just sit on the sidelines. Contained technology will be built by its critics, not by blind cheerleaders. We need people alert to dangers and risks working on the inside, on development from the ground up.


The 10 steps toward containment:

- Developing an Apollo program for technical AI safety
- Conducting audits for AI models to ensure their transparency and accountability
- Exploiting hardware chokepoints to slow development and buy time for regulators and defensive technologies
- Getting critics involved in directly engineering AI models from the start
- Having AI players be guided by goals other than profit
- Arming governments with knowledge about AI, allowing them to regulate technology and implement mitigation
- International treaties to stop proliferation of the most dangerous AI capabilities
- Establishing a culture of sharing learnings and failures to quickly disseminate means of addressing them
- Creating a public mass movement that understands AI and demands the necessary checks and balances
- Not relying too much on delay, but instead moving into a new, somewhat stable equilibrium

And what should a CEO do who is pushed and pulled to use these new tools?

As for CEOs, businesses will have a huge role to play here. After all, most state-of-the-art AI is currently built by companies. They in turn respond to the incentives of the market or their shareholders, which may not always be compatible with containment.

Can CEOs help square the circle? Can they re-imagine and re-fashion their organizations to respond to a more diverse array of drivers, ones amenable to contained technology, to, at times, saying no to relentless proliferation? It's a tall order, and something I have found is incredibly difficult to push within established companies. But equally it's something we absolutely need. We need companies with a culture of containment ingrained in every facet of their operation and management. Leadership from CEOs about how to build this new generation of responsible corporate entities would be a huge step forward.

While the world stares at GenAI, is there something that we are missing or not paying attention to that will make this wave even bigger?

There's a huge amount that isn't exactly under the radar, but nonetheless doesn't get the attention it deserves. For a start, I think biotechnology, and synthetic biology in particular, is just a huge story that receives a fraction of the attention of AI. Intelligence is a fundamental property, but so is life.

We are now engineering both, which is an extraordinary step change in what is possible. Synthetic biology is – like AI – growing much more powerful as well as much cheaper. The costs of sequencing DNA have collapsed over recent decades: You can now sequence a human genome for a couple hundred dollars. Twenty years ago it cost more than a billion. Synthetic biology isn't just about reading or even editing the code of life – it's about writing it. That makes it a fully general-purpose technique akin to AI and just a handful of other technologies. While not a day goes by without AI being in the headlines, I think that will become true of synthetic biology in the next five to 10 years as well. 

The AI impact on the boardroom, the workplace, society and you: This article is one part of a wider cover story.
About the author
Steffan Heuer has been covering the intersection of technology, commerce and culture in Silicon Valley for more than two decades. His work has appeared in The Economist, the MIT Technology Review and the German business monthly brand eins. He currently divides his time and reporting between Berlin and California.