It’s been just a little more than three months since the short-lived coup against OpenAI CEO Sam Altman, soon after which I wrote the post: Ten lessons from OpenAI’s five-day rollercoaster. And a rollercoaster it certainly was. Over those five days, global attention was riveted on the epoch-making tech of AI while Shakespeare-esque speculation ranged from secret greed (David Sacks), to hints of a dangerous new discovery that had to be kept under wraps (Elon Musk), to galloping hypotheses involving critical backer and partner Microsoft (Hacker News).

Sorry folks, the actual drama was – we now know thanks to reporting by the Wall Street Journal’s new podcast series, Artificial – hardly the treachery of Hamlet. Rather, it was more akin to A Midsummer Night’s Dream, the great playwright’s comedy of confusion and absurd mix-ups. I encourage readers to listen to all four episodes of the Journal’s series. But in short, it boils down to AI scientist Ilya Sutskever playing something like the role of Midsummer’s Nick Bottom, overplaying his overconfident hand.

OpenAI co-founder and querulous board member Sutskever – wanting a more prominent role – first persuaded three independent board members to join him in the ouster. Then he backtracked amid the ensuing furor. The backdrop to the backtrack: 95% of employees threatened to follow Sam out the door, surely in no small part because the shakeup imperiled a secondary stock sale that would have made many of them rich given OpenAI’s $85 billion valuation. This all led to the happy ending we now know: Altman and President Greg Brockman returned from exile, Sutskever resigned from the board but ultimately stayed with OpenAI, and the board was reconstituted to bring in such stars as Larry Summers and Bret Taylor.

Mea culpas continue to abound. As with the actors in Midsummer, Sutskever is chastened, Altman claims to have learned much about “empathy”, and harmony – along with the stock deal – has been restored amid the reconciliation (though an internal probe of the drama continues).

In the interim, each of my ten lessons has been validated, as has my prediction that the truth about what happened in that boardroom last November would ultimately come out. But as OpenAI reveals yet another stunning technology in its video-generation tool “Sora”, and Altman grabs headlines with a quest to raise an investment round larger than the entire economy of Japan, I need to share an 11th Lesson.

The emerging ‘post-digital economy’

That lesson is that we must begin organizing business and commerce around a concept that I’ll call the “post-digital economy.” Now bear with me. I realize that’s probably a new term to many readers, and it is in fact scarcely used outside a few abstract academic circles and a seminal report on the future of AI by Accenture in 2017. But it is, I hope, a useful phrase to help us think broadly about the way AI is making our existing models for organizing and regulating business and commerce obsolete.

A fun book that has informed my early thinking on this is The Revenge of Analog, published in 2016 by David Sax. It’s an exploration of the way we’ve come to regard the world in a binary way – analog or digital. And Sax does a great job of explaining our ambivalence about all this, exploring the now-familiar explosion in vinyl record sales, the popularity of those black Italian-made Moleskine notebooks so cool even in Silicon Valley, and the dramatic, continuing growth of traditional paper book sales. Analog vs. digital has become a kind of shorthand for life and tools before and after the advent of the silicon chip.

Now, though, even that dichotomy will soon be in society’s rear-view mirror. I’m by no means arguing that we’re leaving digital technologies behind; we are just getting started with them. I am, however, arguing that AI is ushering in a world where digital technologies are – and will be – so embedded in our experience, such a part of our human/machine interfaces, so persistent in our health care, and core to so much more, that analog/digital is no longer a meaningful differentiator.

For example, the growing use of digital “smart grid” technology, which uses AI to more efficiently distribute power through analog networks, illustrates the blurring of definitional lines. My digital Apple Watch monitors the analog inputs of my pulse and body temperature. Is it analog or digital technology? Our language, and consequently our thinking, can scarcely keep up with the pace of this convergence of once-familiar descriptors.

That few of our institutions are prepared to cope is a huge topic. Take education, for example. Our linear K-12 school models and our bachelor’s, master’s, PhD university models are now relics in the age of AI, as I wrote last year. This is the post-digital age.

But let’s save that cosmic topic for another day and bring the discussion back to those five days in November. What the entire Shakespearean drama really showed was the inherent tension in OpenAI’s hybrid, still-experimental business model. It’s a tricky, untested model to be sure – and that’s its virtue. For AI is a technology that defies all the metaphors we tend to use to explain its scope: the discovery of fire, the invention of the printing press, the splitting of the atom, or the advent of the World Wide Web. The mother of all inflection points, AI will change everything. This means, of course, that the business, governance, and regulatory models must change too.

And unlike with innovations past, our goal as technologists and innovators cannot be – in this post-digital moment – to “blithely disrupt”, as AI scientist Fei-Fei Li so eloquently explains in her marvelous book, The Worlds I See, which I recently reviewed. The moral imperatives to get AI right – the need for intentionality, as I put it – are such that Li co-founded Stanford University’s Institute for Human-Centered AI in 2019 to focus squarely on this issue.

As the Journal’s series explains, OpenAI was born over drinks in 2015 at the famous Rosewood Hotel in Menlo Park with the same imperative. The attendees included Altman, Brockman, Sutskever, and Elon Musk. In agreement on the gravity of the undertaking, the founders decided to make OpenAI a non-profit corporation. Its mission was simply the betterment of humanity. No profit imperatives should or would distract.

Conjoined corporate twins

Despite generous backers, however, including billionaires Reid Hoffman, Vinod Khosla, and others, it didn’t take long for the founders to realize they needed massive amounts of capital. In the first year alone, OpenAI burned through $116 million, according to the Journal’s account. Fast forward to 2019, when they hatched the novel plan to add a for-profit arm – quickly followed by an infusion of $1 billion from Microsoft in the revamped OpenAI’s first funding round. But OpenAI took the radically novel step of structuring the new model as conjoined twins with very different personalities: the non-profit and for-profit companies sit adjacent to one another, with the non-profit board still overseeing operations. This was dubbed a “capped profit” company, limiting first-round investors’ returns to 100X. That may still seem like a handsome return, but in the “power law” logic of venture capital, the assumption is that 90% of investments fail. In short, you need one good bet to cover nine bad ones, so the swing you’re taking is always for the “unicorn,” the firm that reaches a valuation of $1 billion or more.

Musk, meanwhile, wasn’t keen on the idea and left; he has since gone on to found xAI, incorporated in Nevada, which in November launched Grok, an AI chatbot now in beta testing by premium users of X. His larger plans for xAI are unclear.
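To make that capped-profit arithmetic concrete, here’s a back-of-the-envelope sketch in Python. The figures – a ten-company portfolio of equal checks, nine write-offs, one 250X winner – are invented for illustration, not actual OpenAI or fund economics:

```python
# Back-of-the-envelope "power law" venture math under a capped-profit
# structure. All figures are invented for illustration; they are not
# actual OpenAI or fund economics.

def fund_multiple(check_size, outcomes, cap=None):
    """Overall return multiple on a portfolio of equal-sized checks.

    outcomes -- gross return multiple for each investment
    cap      -- optional per-investment cap on the multiple
                (e.g. 100 for OpenAI's first-round "capped profit" terms)
    """
    invested = check_size * len(outcomes)
    returned = sum(check_size * (min(m, cap) if cap is not None else m)
                   for m in outcomes)
    return returned / invested

# Ten $10M checks: nine write-offs, one breakout winner at 250x.
portfolio = [0] * 9 + [250]

print(fund_multiple(10e6, portfolio))           # 25.0 -- the one winner carries the fund
print(fund_multiple(10e6, portfolio, cap=100))  # 10.0 -- the 100X cap binds on the outlier
```

Even with the cap, the single winner returns the fund tenfold; the cap only pinches in precisely the outlier scenario power-law investors are hunting for – which is why the structure was so contentious.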

We all know the rest of the mind-blowing story: ChatGPT launched on November 30, 2022 (note: I co-wrote a one-year anniversary piece on its launch alongside AI guru Byron Reese). By January of last year, the company had made history as ChatGPT quickly grew to 100 million users. Shortly after that, Microsoft ponied up another $10 billion. Around this time, cousins were emerging, including Inflection AI, founded in 2022; Anthropic, founded in 2021 by ex-OpenAI employees after Microsoft entered the picture; and of course the freshest face, Google’s Bard, launched in 2023 and now renamed Gemini.

And then came the explosion – in retrospect, really the first major test of the altruistic mission uniquely paired with the profit imperative: “On sort of a meta level, what it is is a collision between a non-profit and a commercial-minded management,” New York University professor and author Scott Galloway said at the time.

Vigorous debate has since ensued. So was the “capped profit” model a failure? Many seem to think so. Galloway, whom I really respect, has suggested this will be the end of so-called ESG (environmental, social, governance) investing. Brian Quinn, a professor and expert on corporate structures at Boston College Law School, told Politico that similar “funky structures” will be abandoned by AI companies, in all likelihood including OpenAI.

Retrofitting vs. future-proofing

To be sure, the bloom is off the rose of ESG and social-impact investing. Last year investors pulled $13 billion from sustainable funds. And we all know of the battles raging over “DEI” – diversity, equity, and inclusion. I have my own strong views on DEI, particularly in the wake of Hamas’ attack on Israel on 10/7. But conflating the shifting political winds that are curtailing socially responsible investing at legacy companies with any souring on the emerging ethos of responsible AI investment and governance is wide of the mark. And it’s certainly the wrong lesson to take from OpenAI, which just hit a staggering $2 billion in estimated revenue for 2024. Whatever your views on ESG or DEI, these are retrospective movements, intended to fix – for better or worse – past ills. By contrast, the growing conversation about responsible AI is about the imperative of future-proofing our institutions and practices.

That critical distinction is the 11th Lesson in a nutshell: the development of responsible AI will be led by the creators themselves.

It won’t be easy. There will be starts and stops, just as we’ve seen. There’s a lot to figure out as we reinvent capitalism for the 21st century and the post-digital economy. In my own case, data.world – which we founded in 2015 as a cloud-based SaaS company to enable and accelerate data analysis, governance, and sharing – has been a Public Benefit Corporation since our launch on July 11, 2016. Unlike at more traditional companies, our responsibilities to stakeholders – including employees, community, and the environment – are codified into our bylaws alongside our obligations to investors and shareholders. This is just one approach to socially responsible commerce in this new era, part and parcel of the broader concept of Conscious Capitalism, pioneered in my hometown of Austin by Whole Foods founder John Mackey. John literally wrote the book on Conscious Capitalism, and I’ve served on the board of its non-profit national organization (which is catalyzing an international movement).

That’s not to say we should do away with regulation. The food we eat, the drugs we put in our bodies, and the airplanes that ten million people confidently board each day around the world are safe because of regulation. Yes, the European Union last December became the world’s first major power to enact laws that seek to protect against systemic AI risk, ensure cybersecurity, and guard against AI surveillance and other unethical practices. And I certainly follow the news that the United States – despite its sketchy record on regulating data privacy – intends to lead the world in AI regulation and governance. I believe in sensible regulation, and I broadly support these endeavors. The problem is that I’m deeply skeptical: our modes of 19th- and 20th-century analog regulation have not served us so well in the digital age, the ongoing confusion and scramble to constrain the worst aspects of social media being a case in point. As someone working and thriving in the realm of data, I am sobered by the fact that the first data privacy law in the United States was proposed in the early 1970s by Republican Sen. Barry Goldwater of Arizona and Democratic Sen. Sam Ervin of North Carolina. It took until 1996, however, before we got our first strong data privacy law: HIPAA, the Health Insurance Portability and Accountability Act.

A global ‘phase transition’ in our civilizational ‘OS’

In the post-digital age, this lag between the pace of AI innovation and the ability of lawmakers and regulators to cope will only grow. I’m not cynical, and there are many brilliant and competent people serving in government. But many will recall the hearing on technology regulation before the House Judiciary Committee a few years back, when Google CEO Sundar Pichai had to explain to a confused lawmaker: “Congressman, the iPhone is made by a different company.”

This challenge to both our institutions and our companies is masterfully articulated by another AI pioneer, Mustafa Suleyman, co-founder of both Google’s DeepMind and, more recently, Inflection AI, which is also a Public Benefit Corporation. In his new book, The Coming Wave, which I reviewed last fall, he proposes a comprehensive vision for us to cope with what he calls the “phase transition” of our societal operating system, our “OS.” In addition to the ethos of the Public Benefit Corporation, he proposes a set of aligning “concentric circles” of action that start small but ultimately lead to “new business incentives, reformed government, international treaties, a healthier technological culture, and a popular global movement”. Recently, Suleyman outlined his thoughtful vision for AI in a discussion at Li’s Institute for Human-Centered AI.

Similarly creative is the AI company Anthropic, which has gone beyond founding itself as a Public Benefit Corporation to also create a “Long-Term Benefit Trust”. The Trust consists of five financially disinterested trustees who oversee the Board of Directors and will actually have the power to remove board members who fail to honor the mission of long-term benefit to humanity. Anyone who thinks this is a bad business model should note that Anthropic has tallied $7.3 billion in new investment in the past year, including $4 billion from Amazon.

The stakes could not be higher. I’m not one who worries that some rogue AI will outsmart us and eliminate humans, as some AI scenarios warn. But political manipulation, job displacement, “deep fakes”, surveillance, harassment, and other threats are real. The parallel reality is that there is no regulatory cavalry on the horizon coming to rescue us. The FCC’s swift action in the wake of a fake Joe Biden voice showing up in robocalls shortly before the New Hampshire presidential primary was laudable. It is also, experts say, unlikely to be effective. In this emerging post-digital reality, we’ve got to do this ourselves.

I first began writing on this after a series of conversations leading up to last year’s TED Conference in Vancouver, where I penned my first detailed post on this topic. My thinking has certainly evolved since, as has the technology, along with the insights of so many in the field, including pioneers such as Suleyman and Li. But the three basic ideas I outlined then remain, if increasingly nuanced and supported by nearly a year of experience. That learning curve includes the founding of our own AI Lab at data.world and “Interactions with Archie”, our own generative AI bot that allows users to query and chat with our entire history of demos, white papers, blogs, and internal research. We’ve come a long way since then with our AI operations, lifting productivity by 25% among those of us who use our AI productivity tools.
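For readers curious what a bot like Archie involves, here is a deliberately minimal sketch of the retrieval-augmented pattern such assistants typically follow: retrieve the most relevant documents, then hand them to a language model as context. The toy corpus, the naive term-overlap retrieval, and every name below are illustrative assumptions, not data.world’s actual implementation:

```python
# A minimal, illustrative sketch of retrieval-augmented chat over a
# document corpus. Everything here is a placeholder assumption, not
# data.world's actual "Archie" implementation.

from collections import Counter

def tokenize(text):
    return [w.lower().strip(".,!?") for w in text.split()]

def retrieve(query, corpus, k=2):
    """Rank documents by naive term overlap with the query."""
    q = Counter(tokenize(query))
    scored = [(sum(q[t] for t in tokenize(doc)), doc) for doc in corpus]
    return [doc for score, doc in sorted(scored, reverse=True)[:k] if score]

def build_prompt(query, corpus):
    """Assemble the context a language model would answer from."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# A toy stand-in for a company's history of demos, papers, and blogs.
corpus = [
    "Our white paper covers data governance and catalog design.",
    "This demo shows how to share datasets across teams.",
    "Internal research notes on query performance.",
]
print(build_prompt("How do we handle data governance?", corpus))
```

Production systems replace the term-overlap ranking with embedding search and pass the assembled prompt to an LLM, but the shape – index your own material, retrieve, then generate – is the same.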

So these are the elements of the 11th Lesson:

First, transparency: In the early days of ChatGPT, Altman repeatedly promised that OpenAI would “build this in public.” That he skated a bit on this commitment was among the grievances of board members that Sutskever allegedly exploited in his short-lived coup attempt, according to the Journal’s reporting. I hope Altman doubles down on the commitment, and it is certainly the ethos advocated by Suleyman and Li. Suleyman’s “concentric circles,” the Institute for Human-Centered AI’s mission to keep the public interest front and center, and Anthropic’s new “Trust” are all pieces of the emerging mosaic that will help us deliver on the true promise of AI in learning, health care, climate change mitigation, and so much more. I note too that Google DeepMind just launched the innovative AI model “Gemma”, which not only aims to be more cost-efficient but serves a wide variety of applications and is open source. I’ve long been a fan of the open-source movement, and DeepMind CEO Demis Hassabis’ explanation on the Hard Fork podcast of the stress-testing they did to assure security and minimize risk is an example of a best practice others should emulate.

Second, post-digital regulation: As important as past regulatory successes have been in keeping us safe from unscrupulous practices in business and the workplace, we need new models for regulation, education, and, clearly, commerce. These should include the new business incentives and global treaties that Suleyman advocates, and the teaching of ethics, philosophy, morality, history, and even anthropology in engineering and technology courses, as Li advocates at Stanford. But I’m also keen on the concept of “sandbox regulation”, in which innovators and regulators jointly test rules before they become permanent. The “emergency use authorization” for pandemic vaccines is a form of this model that we should emulate in other domains. As this global debate heats up, a newly organized collective of researchers and analysts in 52 countries, the Forum on Information and Democracy, published a report this week on the state of the global discussion among academics, governments, and NGOs. Their report, AI As a Public Good: Ensuring Democratic AI in the Information Space, is a comprehensive inventory of initiatives underway worldwide and is worth a look.

And third, the public benefit model: I really believe this model is table stakes for any company forming in the new post-digital economy – whether in AI or any other endeavor. It’s not just that capitalism should be a better steward and seeker of solutions to the world’s challenges and ills. It’s that only commerce undertaken in the spirit of Conscious Capitalism has the reach, the agility, the scale, and the visionary innovation to help us not just endure but thrive in the post-digital age.

William Shakespeare, of course, never wrote a play on this. We’ll have to write our own.