Amid the torrent of analyses prompted by the global introduction of generative AI with ChatGPT's debut last November 30th (some good, some less so, but most falling on a spectrum of speculation that runs from dire apocalypse to a looming technological Age of Aquarius), Mustafa Suleyman's The Coming Wave stands almost alone, though a small but growing cast of AI thinkers is coming around to the author's point of view.
The book breaks new ground by stepping off the apocalypse-versus-nirvana spectrum altogether, peering beyond the debate to look at the ways our institutions must cope, and how they can.
It is less a book on AI than on the twin technologies of AI and synthetic biology, two halves of a whole that will remake the two foundational properties of our world: intelligence and life. Yet The Coming Wave, despite its formidable insights into both realms, is also less an examination of these technologies than a look at the cultural and social context of the change they will shape and produce, an exploration of how our institutions can get it right.
“For most of history, the challenge of technology lay in creating and unleashing its power,” Suleyman writes. “That has now flipped: the challenge of technology today is about containing its unleashed power, ensuring it continues to serve us and our planet.”
This is a clarion call, an ear-piercing klaxon frankly, to a movement with which I've long been associated: Conscious Capitalism. The movement traces back to my good friend John Mackey's 1980 founding of Whole Foods, and it has accelerated since John published his eponymous book with co-author Raj Sisodia in 2013. This week, Conscious Capitalism, the non-profit, will hold its annual CEO Summit in Austin, and I'm sure we'll have the opportunity there to focus on The Coming Wave. I'll be presenting at the Summit on the many benefits of AI and how it will impact all industries, not just tech.
I'll get to this must-read book, but first, a quick primer on Conscious Capitalism. Its basic tenet is that business must be about more than making money; it is about instilling purpose and meaning, an idea I led with in the introduction to my own book, The Entrepreneur's Essentials, last year. Business should pair responsibility to shareholders with obligation to stakeholders, including employees, customers, suppliers, communities, and the environment. Among the movement's offspring is the related "B Corporation" model, by which companies codify these obligations of public benefit into their founding charters and bylaws. So far, 10,000 companies around the world have done so, including my own, data.world.
Now, as the CEO of a company founded with AI at its core, one that has also been a proud "B Corp" ever since our public launch on July 11, 2016, I see this as a moment when the advent of profound innovation is converging with the movement to rethink capitalism just as profoundly.
It's worth mentioning here that Suleyman, the London-born son of a cab driver and a nurse, has an unusual background for a technologist. After dropping out of Oxford at 19, he helped start a mental health crisis line for Muslims in the United Kingdom. From there, he worked on human rights issues for the mayor of London before starting a conflict resolution consultancy that worked for the United Nations, the Dutch government, and the World Wide Fund for Nature. Veering toward technology, in 2010 he co-founded DeepMind, the AI company acquired by Google in 2014. Early last year, Suleyman left Google to join Reid Hoffman in founding Inflection AI, which is, more than incidentally, a B Corporation.
But let's get back to his book. I want to emphasize that I'm not dismissing his broad survey of the history of AI and synthetic biology. His fundamental insight is that these two broad realms of endeavor are in fact a Janus-faced reality: combined, they will allow us both to exponentially expand our cognition and to modify life forms, even creating new ones. Leaders should crack this book for these insights alone.
But I'll set aside for now his analysis of technology, its evolution and trajectory, which have been explored by so many authors, led by my good friend, colleague, and fellow Austinite Byron Reese. Byron explored these concepts both in last year's Stories, Dice, and Rocks That Think and in his upcoming We Are Agora, which publishes in December. For while Suleyman builds upon and adds to that body of knowledge, his most important contribution is his account of how humanity might manage this "phase transition" of our societal operating system, our collective "OS".
He calls on us to contain the risks of AI, but not with an unrealistic "pause," as some in the field have advocated, nor with the mantra of "trust us" echoed by many tech companies. Rather, he proposes a set of aligning "concentric circles" of action that start small but ultimately lead to "new business incentives, reformed government, international treaties, a healthier technological culture, and a popular global movement".
In many ways, his book is an early indicator of a maturing discussion about AI. In a recent interview with the Wall Street Journal, OpenAI CEO Sam Altman signaled his intention to engage broadly with stakeholders outside of technology. He discussed innovations to make the next version of ChatGPT both less prone to hallucinations and less dependent on massive amounts of data, using "smarter" faculties to reason on its own, avoid bias, and honor growing concerns over copyright and intellectual property. "It does not take much imagination to think of scenarios that deserve great caution," he said.
His CTO, Mira Murati, acknowledged the deep sense of obligation at the company: “We’re building systems that are going to be everywhere – at your home, in your educational environment, in your work environment, and maybe when you’re having fun. That’s why it is so important to get it right.”
Another pioneer leading the discussion in new directions is computer scientist and entrepreneur Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence. Her new book, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI, coming out in November, is next on my reading list. In an interview with podcaster Kara Swisher, she noted that her concerns are not about some impending apocalypse but about the near term, including the dominance of AI research by large tech companies. Yes, the capital-intensive nature of AI means major players must lead, and they are doing great things. But they may not be incentivized toward unprofitable research like that needed to cure cancer, mitigate climate change, or address other social issues that lack a return on investment.
“Not a single American university today can train a ChatGPT model,” she said, calling for greater public sector investment in AI. I’ll second that, but to me it’s also a call for the Conscious Capitalism movement to boldly step up and embrace the challenge of AI.
Suleyman’s Inflection AI co-founder Reid Hoffman, meanwhile, added his own nuance to the evolving discussion a few days ago. He noted that with our global focus on how AI works, we’re neglecting the increasingly amazing things that it does.
“So, for example, (Large Language Models) can handle translations for language pairs for which they've never been directly trained, like translating Urdu to Welsh,” he writes, citing the work of computer science and AI researchers Blaise Agüera y Arcas and Peter Norvig. “Indeed, while it would certainly be a mistake to assess LLMs only by what they do well, it seems equally misguided to assess them by only what they do poorly.”
I'm profoundly encouraged by all of this increasingly sophisticated thinking. But clearly, the leader of this new canon of thought is Suleyman, and in many ways it converges in The Coming Wave. Here are just a few of the ideas he proposes, not as concrete and finished policy proposals, but as starting points for making sure we get things right:
A dramatic expansion of research in AI safety. There are fewer than 400 AI safety researchers at top labs worldwide today, amid more than 40,000 AI researchers in total. He suggests requiring that a fixed portion of AI R&D, perhaps 20%, be directed toward safety efforts.
He notes efforts such as the cross-industry and civil-society "Partnership on AI", which has already kick-started an "AI Incidents Database" designed to confidentially track safety events, an effort ripe for dramatic expansion.
He notes the need for new voices in the AI discussion, including proactively hiring and engaging people with non-technical perspectives, among them moral philosophers, political scientists, and cultural anthropologists: "In a world of entrenched incentives and failing regulation, technology needs critics not just on the outside but at its beating heart."
Speaking most directly to the Conscious Capitalism movement, he acknowledges the importance of profit but notes that an ethos focused exclusively on shareholder returns is poorly suited to the coming wave: "I believe that figuring out ways to reconcile profit and social purpose in hybrid organizational structures is the best way to navigate the challenges that lie ahead."
Government gets a great deal of Suleyman’s attention. The public sector needs to compensate skilled researchers at market rates if government is to even understand fast-evolving technology. Departments or ministries of technology are an idea whose time has come, he argues. More controversially, he advocates licensing or certification for development and deployment of some technologies: “For understandable reasons, we don’t let any business build or operate nuclear reactors in any way they see fit.”
Suleyman also calls out the culture within technology firms that talk a good game of "embracing failure" but hardly do so "when there's a new technology or product that goes awry. Rather, a culture of secrecy takes over."
This is a mere glimpse of the many ideas that Suleyman advances in this deeply thoughtful and original book. But the overarching theme is that as we embrace AI to solve challenges from climate change to disease to education, we need to look the risks directly in the eye and confront them as "we", as society, as the descendants of those who abolished slavery, instituted women's suffrage, and led and enabled the civil rights movement more than a half century ago.
“If we – we humanity – can change the context with a surge of committed new movements, businesses, and governments, with revised incentives, with boosted technical capacities, knowledge and safeguards, then we can create conditions for setting off down that teetering path with a spark of hope.”
For me, it's far more than a spark. I'm filled with hope for the future of humanity that we can make possible with AI. If you are too, I encourage you to pick up The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma. And I hope to see you at the Conscious Capitalism CEO Summit in Austin!