If 2023 was the year artificial intelligence became a household topic of conversation, it’s in many ways because of Sam Altman, CEO of the artificial intelligence research and deployment company OpenAI and TIME’s 2023 CEO of the Year.
Altman sat down for a wide-ranging conversation with TIME’s Editor-in-Chief Sam Jacobs in December, in which he spoke candidly about his November ousting—and reinstatement—at OpenAI. The pair also discussed how AI threatens to contribute to disinformation, and the rapidly advancing technology’s future potential, at the “A Year in TIME” event in New York City on Dec. 12.
Here are some excerpts from the conversation, which have been condensed and edited for clarity.
We have a lot of questions for you. The first one on the minds of many people in the room tonight: What the hell happened?
A lot of things. Honestly, the whole year has been crazy. In the context of everything that has happened to us, these last three weeks—or a month, or whatever it's been—stand out, but not as much as you would think they should. We kind of went from this unknown research lab to this, like, reasonably well-known tech company in a year. And I think that takes most companies like 10 years. That's been a wild experience to live through. Of course, these last few weeks have been particularly crazy and sort of painful and exhausting, and [I'm] happy to be back to work. To say something empathetic: as we get closer and closer to superintelligence, everybody involved gets more stressed and more anxious, and we realize the stakes are higher and higher. And I think that all exploded.
How do you think this moment has changed OpenAI?
It's been extremely painful for me personally, but I actually think it's been great for OpenAI. We've never been more unified; we have never been more determined and focused. And we always said that some moment like this would come between where we were and building AGI. I didn't think it was going to come so soon, but I think we are stronger for having gone through it. Again, I wouldn't wish it on an enemy. But it did have an extremely positive effect on the company.
What did you learn from it?
I haven't fully recompiled reality yet. I haven't had the time to emotionally process all of this, because it all happened so fast, and then I had to come back in and pick up the pieces, so I haven't had time to sit down and really reflect as much as I would like. But I would say the most important thing that I learned—a thing I had always heard as a cliche or whatever—is that your job as a CEO is the people you hire and how much you develop and mentor your team. And the proudest moment for me, in all of this craziness, was realizing that the executive team could totally run the company without me. I can go retire; OpenAI will be fine. And I'm super proud of the people [who were able] to do that. And to watch them work at a time when I couldn't really talk to them—but they did an amazing job—really made me very proud. And it also made me very optimistic, because I think as we do get closer to artificial general intelligence, as the stakes increase here, the OpenAI team's ability to operate in uncertainty and stressful times should be of interest to the world.
You’re describing how high the stakes are here. What do you say to someone who says this company brought itself to the brink of self-destruction, so how can we trust its leader, and how can we trust this company with this transformative technology?
We have to make changes. We always said that we didn't want AGI to be controlled by a small set of people; we want it to be democratized. And we clearly got that wrong. So I think if we don't improve our governance structure, if we don't improve the way we interact with the world, people shouldn't [trust us], but we're very motivated to improve that.
On those changes, your former co-founder, Elon Musk, former Person of the Year, has described OpenAI as a "closed source maximum-profit company effectively controlled by Microsoft." Is Elon wrong?
On all of those topics.
And any others?
Actually, in spite of his constant attacks on OpenAI, I'm very grateful that Elon exists in the world.
Why?
Because I think he's done some amazing things. I think the transition to electric vehicles is super important. I think getting to space is super important, and I'm grateful for those things. You know, we're definitely not maximum profit-seeking—although you could talk to Elon about some of his ventures for that one. And we open source a lot of stuff. We’ll open source more in the future and we're certainly not controlled by Microsoft. And I think all that is something that someone can say but does not actually reflect the truth.
A question about another competitor. Google has announced Gemini, a model that it claims outperforms GPT-4 on many performance tests. What do you make of Gemini and why did it take them so long to release it?
I'm happy for more people to be making AI progress. I think AI will be the single most transformative technology of this era. And so more people doing that, I think is great. When Gemini Ultra gets released sometime next year, we'll get to look at it. I can weigh in on it then. Certainly, there's been a lot of confusion around the metrics, but I'm sure Google will do great work.
In an interview with Edward Felsenthal, my predecessor as TIME Editor in Chief, you said “I am a Midwestern Jew. I think that fully explains my mental model.” Is that true? Do you still feel that way?
As a compressed, one-sentence explanation of everything, I think that's pretty good.
As a New England Jew, I have to ask you, how does Judaism shape your worldview and what has it been like to be a Jewish leader since October 7?
You know, if you had asked me this question at the beginning of the year, I would have said there are all of these subtle but important cultural things that have, I think, shaped my worldview and how I act and how I live my life. And I wouldn't have talked about anything other than that. And one of the weird things about being Jewish and getting internet famous is that most of your online experience is people saying horrible things about Jews. And I don't know if that was always the case, or if that's, like, ramped up, but that's certainly been my experience this year, and doubly so these last couple of months. I think I was just wrong to be so dismissive of this. I was like, look, antisemitism—we're done with that, the world has moved on, there are other problems. Let's talk about those. And I have really seen in this last year, and particularly these last couple of months, that I was just completely wrong about that. And it's a sad, sad thing for the world.
You're someone who likes to take on intractable problems. As you've thought about that one, how do you think about solutions?
That one seems harder than AGI.
Speaking of difficult problems, next year is a historic year for democracy. There will be elections in 40 countries. Are you concerned at all about AI's ability to contribute to disinformation, and do you think there are specific concerns that we're not taking seriously?
Yeah, so I think AGI will be the most powerful technology humanity has yet invented. And, like any other previous powerful technology, that will lead to incredible new things. I think we'll see education change deeply and be improved forever. I think the kids who start kindergarten today, by the time they graduate twelfth grade, will be smarter and better prepared than the best kids of today. I think that's great. I think we can say the same thing about a lot of other areas: health care, people who program for a living, a lot of other knowledge work. But there are going to be real downsides. There will be many we'll have to mitigate, but one of them is going to be the persuasive ability of these models and their ability to affect elections next year. And I think we're going to really confront something quite challenging.
So what's that going to look like?
So, right now, troll farms in whatever foreign country that are trying to interfere with our elections make one great meme, and that spreads out, and all of us see the same thing on Twitter or Facebook or whatever. That will continue to happen, and it will get better. But a thing that I'm more concerned about is: what happens if an AI reads everything you've ever written online? Every article, every tweet, everything. And then, at exactly the right moment, it sends you one message customized for you that really changes the way you think about the world. That's a new kind of interference that just wasn't possible before AI.
Is AI good or bad for media?
One thing I would say is no one knows what happens next. I think, the way technology goes, predictions are often wrong. The future is subtle and nuanced and dependent on many branching probabilities. So the honest answer is I don't know, but I think it's going to be more good than bad. It will be bad in all sorts of ways, but I think it nets out to something good. As people have more free time, more attention, and also care more about the people they trust to help them make sense of the world—to help them decide what to trust and how to think about a complicated issue—I think they're going to rely on, and care more about, their relationship with someone in the media, and care more about high-quality information in a world of massive amounts of generated content. So I think it should be net good, but it will be different.
How do you think about your company's role and your role in helping to preserve an ecosystem where high quality information remains?
It's obviously super important to us, but that's a sort of empty statement. The kinds of things that we try to do are build tools that are helpful to people, whether in media or other industries. If you had asked us five years ago what was going to happen, we would have said we will be able to build trusted, responsible AI, but fundamentally it's going to be going off and doing its own thing. And now, I think we see a path where what we do instead is build tools for people. And we put these tools out into the world, and people—media or otherwise—use them to architect the future. And that is the most optimistic thing I think we have discovered in our history, and the safety story changes in that world. The way that we are a responsible actor in society changes in that world. I think we now see a path where we just empower everyone on Earth to do what they do, more and better. And that's so exciting. That's so different from how I thought AI was going to go, but I'm so happy about it.
One of the challenges we had in talking about the work that you've done, and that OpenAI is doing, is helping people understand your vision of what artificial general intelligence means for our future. Can you help this room understand how their lives will be changed? You said you can't predict the future, but as we move forward, what will AGI mean for all of us?
There are many important forces shaping the future, but I think the two most important ones are artificial intelligence and energy. If we can make abundant intelligence for the world, we get the ability to have amazing ideas, for our children to teach themselves more than ever before, for people to be more productive, to offer better health care, to uplift the economy. And if we can create abundant energy, we can actually put those things into action. I think those are two massive, massive things. Now, they come with downsides, and so it's on us to figure out how to make this safe and how to responsibly put this in the hands of people. But I think we see a path now where the world gets much more abundant and much better every year, and people have the ability to do way, way more than we can possibly imagine today. And I think 2023 was the year we started to see that. [In] 2024 we'll see way more of it. And by the time the end of this decade rolls around, I think the world is going to be in an unbelievably better place. It sounds like silly sci-fi optimism to say this, but think about how different the world can be. Today every person has, like, ChatGPT—it's not very good. But next they'll have the world's best chief of staff. And then after that every person has a company of 20 or 50 experts that can work super well together. And then after that everybody has a company of 10,000 experts in every field that can work super well together. And if someone wants to go focus on curing disease, they can do that. And if someone else wants to focus on making great art, they can do that. But if you think about the cost of intelligence falling and the quality of intelligence increasing by a lot, and what people can do with that, it's a very different world. It's the world sci-fi has promised us for a long time. And for the first time, I think we could just start to see what that's going to look like.