Earlier this year, founder-investor Sam Altman left his high-profile role as the president of Y Combinator to become the CEO of OpenAI, an organization that launched as an AI research center in late 2015, founded by some of the most prominent people in the tech industry. The idea: to ensure that artificial intelligence is “developed in a way that is safe and is beneficial to humanity,” as one of those founders, Elon Musk, told the New York Times at the time.
The move is intriguing for many reasons, including that artificial general intelligence — the ability of machines to be as smart as humans — does not yet exist, and even AI’s top researchers are far from clear about when it might. Under Altman’s leadership, OpenAI has also restructured as a for-profit company with some caveats, saying it will “need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.”
Whether OpenAI is able to attract so much funding is an open question, but our guess is that it will, if for no reason other than Altman himself — a force of nature who easily charmed a crowd during an extended stage interview with this editor Thursday night, in a talk that covered everything from YC’s evolution to Altman’s current work at OpenAI.
On YC, for example, we discussed that “ramen profitable” was once the goal but that a newer goal seems to be to graduate from the popular accelerator program with millions of dollars in venture funding, if not tens of millions of dollars, and what the implications of this evolution might be. (“If I could control the market — obviously the free market is going to do its thing — I would not have YC companies raise the amounts of money they raise or at the valuations they do,” Altman told attendees at the small industry event. “I do think it is, on net, bad for the startups.”)
Altman was also candid when asked personal and occasionally corny questions, even offering up a story about the strong relationship he has long enjoyed with his mom, who happened to be in town for the event. Not only did he say that she remains one of a small handful of people whom he “absolutely” trusts, but he acknowledged that it has become harder over time to get unfiltered feedback from people outside that small circle. “You get to some point in your career where people are afraid to offend you or say something you might not want to hear. I’m definitely aware that I get stuff filtered and planned out ahead of time at this point.”
Certainly, Altman is given more rope than most. Not only was this evidenced in the way that Altman ran Y Combinator for five years — essentially supersizing it time and again — but it’s plain from the way he discusses OpenAI that his current thinking is no less audacious. Indeed, much of what Altman said Thursday night would be considered pure insanity coming from someone else. Coming from Altman, it merely drew raised brows.
Asked, for example, how OpenAI plans to make money (we wondered if it might license some of its work), Altman answered that the “honest answer is we have no idea. We have never made any revenue. We have no current plans to make revenue. We have no idea how we may one day generate revenue.”
Continued Altman, “We’ve made a soft promise to investors that, ‘Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.’” When the crowd erupted with laughter (it wasn’t immediately obvious that he was serious), Altman himself offered that it sounds like an episode of “Silicon Valley,” but he added, “You can laugh. It’s all right. But it really is what I actually believe.”
We also asked what it means that, under Altman’s leadership, OpenAI has become a “capped profit” company, with the promise of giving investors up to 100 times their return before giving away excess profit to the rest of the world. We noted that 100x is a very high bar — so high, in fact, that most investors in plain-old for-profit companies seldom get close to a 100x return. For example, Sequoia Capital, the only institutional investor in WhatsApp, reportedly saw roughly 50 times the $60 million it had invested when the company sold to Facebook for $22 billion, a stunning return.
But Altman not only pushed back on the idea that “capped profit” is a bit of marketing brilliance, he doubled down on why it makes sense. Specifically, he said that the opportunity with artificial general intelligence is so incomprehensibly enormous that if OpenAI manages to crack this nut ahead of the big competitors also at work on it, including Google and Microsoft, it could “maybe capture the light cone of all future value in the universe, and that’s for sure not okay for one group of investors to have.”
Before we parted ways, we also shared with Altman various criticisms by AI researchers whom we’d interviewed ahead of our sit-down and who’d complained that, among other things, OpenAI seeks out attention for qualitative rather than foundational leaps in already proven work, and that its very mission of discovering a path to “safe” artificial general intelligence needlessly raises alarms and makes their research harder.
Altman absorbed and responded to each point. He wasn’t entirely dismissive of them, either, saying of OpenAI’s alarmist bent, for example, that he does have “some sympathy for that argument.”
Still, Altman insisted there’s a better argument to be made for thinking about — and talking with the media about — the potential societal consequences of AI, no matter how aggravating some may find it. “The same people who say OpenAI is fear mongering or whatever are the same ones who are saying, ‘Shouldn’t Facebook have thought about this before they did it?’ This is us trying to think about it before we do it.”
You can check out the full interview below. The first half of our chat is largely centered on Altman’s career at YC, where he remains chairman. We begin discussing OpenAI in greater detail around the 26-minute mark.