Why Vinod Khosla Is All In on AI

Conversations with leaders

Vinod Khosla, founder, Khosla Ventures, at Collision 2024, Toronto, Canada. Vaughn Ridley—Sportsfile for Collision/Getty Images
By Astha Rajvanshi
Staff Writer

When Vinod Khosla had a skiing accident in 2011 that led to an ACL injury in his knee, doctors gave conflicting opinions about his treatment. Frustrated with the healthcare system, the leading venture capitalist argued, in a hotly debated article, that AI algorithms could do the job better than doctors. Since then, Khosla’s firm has invested in a number of robotics and medtech companies, including Rad AI, a radiology tech company. The self-professed techno-optimist still stands by his assertions a decade later. “Almost all expertise will be free in an AI model, and we’ll have plenty of these for the benefit of humanity,” he told TIME in an interview in August.

One of Silicon Valley’s most prominent figures, Khosla, 69, co-founded the influential computing company Sun Microsystems in the 1980s; the company was eventually sold to Oracle in 2010. His venture capital firm Khosla Ventures has since placed big bets on green tech, healthcare, and AI startups around the world, including an early $50 million investment in OpenAI in 2019. When OpenAI’s CEO, Sam Altman, was briefly fired last year, Khosla was one of the investors who spoke out about wanting Altman back in the top job. “I was very vocal that we needed to get rid of those, frankly, EA [Effective Altruism] nuts, who were really just religious bigots,” he said, referring to the company’s board members who orchestrated the ousting. He acknowledges the risks but rejects their conclusion: “Humanity faces risks and we have to manage them,” he said, “but that doesn't mean we completely forgo the benefits of especially powerful technologies like AI.”

Khosla, one of the TIME100 Most Influential People in AI in 2024, is a firm believer that AI can replace jobs, including those performed by teachers and doctors, and enable a future where humans are free from servitude. “Because of AI, we will have enough abundance to choose what to do and what not to do,” he said.

This interview has been condensed and edited for clarity.

Khosla Ventures has been at the forefront of investing in AI and tech. How do you decide what to put your bets on, and what's your approach to innovation?

I first mentioned AI publicly in 2000, when I said that AI would redefine what it means to be human. Ten years later, I wrote a blog post called “Do we need doctors?” In that post, I focused on how almost all expertise will become free through AI, for the benefit of humanity. In 2014, we made our first deep learning investment around AI for images, and soon after, we invested in AI radiology. In late 2018, we decided to commit to investing in OpenAI. That was a big, big bet for us, and I normally don't make bets that large. But we want to invest in high-risk technical breakthroughs and science experiments. Our focus here is on what's bold, early, and impactful. OpenAI was very bold, very early. Nobody was talking about investing in AI, and it was obviously very impactful.

You were one of the early investors in OpenAI. What role did you play in bringing Sam Altman back into his role as CEO last year?

I don't want to go into too much detail as I don't think I was the pivotal person doing that, but I was definitely very supportive [of Altman]. I wrote a public blog post that Thanksgiving weekend, and I was very vocal that we needed to get rid of those, frankly, EA [Effective Altruism] nuts, who were really just religious bigots. Humanity faces risks and we have to manage them, but that doesn't mean we completely forgo the benefits of especially powerful technologies like AI.

What risks do you think AI poses now and in 10 years? And how do you propose to manage those risks?

There was a paper from Anthropic that looked at the issue of explainability of these models. We're nowhere near where we need to be, but the field is still making progress. Some researchers are dedicated full-time to this issue of ‘how do you characterize models and how do you get them to behave in the way we want them to behave?’ It's a complex question, but we will have the technical tools, if we put the effort in, to ensure safety. In fact, I believe the principal area where national funding in universities should go is safety research. I do think explainability will get better and better progressively over the next decade. But to demand it be fully developed before AI is deployed would be going too far. For example, KV [Khosla Ventures] is one of the few firms not assuming that only large language models will work for AI, or that you don't need other types of AI models. We are acting on that by investing in a U.K. startup called Symbolica AI, which is using a completely different approach to AI. Their models will work in conjunction with language models, but fundamentally, explainability comes for free with those models. Because these will be explainable models, they'll also be computationally much more efficient, if they work. Now there's a big ‘if’ in whether they work, but that doesn't mean we shouldn't try. I'd rather try and fail than fail to try. That's my general philosophy.

You're saying that explainability can help mitigate the risk. But what onus does it put on the makers of this technology—the Sam Altmans of the world—to ensure that they are listening to this research and integrating that thinking into the technology itself?

I don't believe any of the major model makers are ignoring it. Obviously, they don't want to share all the proprietary work they're doing, and each one has a slightly different approach. And so sharing everything they're doing after spending billions of dollars is just not a good capitalistic approach, but that does not mean they're not paying attention. I believe everybody is. And frankly, safety becomes more of an issue when you get to things like robotics.

You’ve spoken of a future where labor is free and humans are free of servitude. I'm wondering about the flip side of that. When we're talking about replacing things like primary healthcare with AI, how does that shift the labor market, and how do we reimagine jobs in the future?

It's very hard to predict everything, and we like to predict everything before we let it happen. But society evolves in a way that's evolutionary, and these technologies will be evolutionary. I'm very optimistic that every professional will get an AI intern for the next 10 years. We saw that with self-driving cars. Think of it as every software programmer having a software intern programmer, every physician having a physician intern, every structural engineer having a structural engineer intern. Much more care, and much more use of this expertise, will be possible with that human oversight over the next decade. And in fact, the impact of that on the economy should be deflationary, because expertise starts to become cheaper or hugely multiplied. One teacher can do the job of five teachers because five AI interns help them.

That's interesting because you're suggesting almost a coexistence with AI that complements or optimizes the work. But do you see it eventually replacing those jobs?

I think these will be society's choices, right? It's too early to tell what's there, and we know the next decade will be about this idea of an AI expertise internship, in conjunction with humans. The average primary care doctor in America sees the average patient once a year. In Australia, it's four or five times a year because they have a different doctor-patient ratio. Well, America could become like Australia without producing five times more doctors. All these effects are hard to predict, but it's very clear what the next decade will be like. We've seen it in self-driving cars. Apply that model to everything, and then you can let them go and do more and more, and society gets to choose. I do think in the long term, in 30, 40, 50 years, the need to work will disappear. The majority of jobs in this country, and in most parts of the world, are not desirable jobs, and I think we will have enough abundance because of AI to choose what to do, and what not to do. Maybe there will be many more kids becoming like Simone Biles or striving to be the next basketball star. I do think society, not technology, will make most of these choices of what is permitted and what isn’t.

You've publicly disagreed with Lina Khan's approach at the FTC. What role can regulators play in striking a balance between investing in radical, untested new technologies at scale, and enforcement and regulation to make sure they are safe to use?

I think regulation has a role to play. How much, and when, are critical nuances. We can't slow down this development and fall behind China. I've been very, very clear and hawkish on China because we are in a race for technology dominance with them. This is not in isolation. The Europeans have sort of regulated themselves out of any technology development, frankly, around all the major areas, including AI. That's going too far. But I thought the executive order that President Biden issued was a reasonably balanced one. Many, many people had input into that process, and I think it strikes the right balance.

Can you expand on where you see dominance within the global AI race? Do you think countries like Japan and India can become global AI leaders?

In the West, it's pretty clear there will be a couple of dominant models. Places like Google, OpenAI, Meta, and Anthropic will have state-of-the-art models. So there won't be 50 players in the West, but there will be a few, a handful, as it currently appears. Now, that doesn't mean the world has to depend on the American models. In Japan, for example, even the Kanji script is very different, as are their national defense needs. They want to be independent. If AI is going to play a role in national defense, they will have to rely on a Japanese model. The same thing applies in India. If China has its own model, India will have its own model. And so national models will exist. There's Mistral in the E.U. That's a trend we recognized very early, and we were the first to invest in this idea that countries and regions with large populations will want their own models.

In thinking about these nation models, how do you ensure greater equitable distribution of the benefits of AI around the world?

I do think we have to pay attention to ensuring it, but I'm relatively optimistic it will happen automatically. In India, for example, the government’s Aadhaar payment system has essentially eliminated Visa and MasterCard, with their [fees] of 3% on all transactions. I've argued that if that same system is the key to providing AI services, a primary care doctor and an AI tutor for everybody should be included in the same service. It wouldn't cost very much to do it. I actually think many of these will become free government services and much more accessible generally. We've seen that happen with other technologies, like the internet. It was expensive in 1996, and now the smartphone has become pretty pervasive in the West and is slowly becoming pervasive in the developing world too.
