Elon Musk is a known critic of artificial intelligence. The Tesla and SpaceX founder has repeatedly warned that A.I. poses the "biggest existential threat" to mankind. Along with Y Combinator president Sam Altman, he also co-founded OpenAI, a non-profit aimed at ensuring that the technology is used for good.
Now, a newly published Vanity Fair profile examines the depths of Musk's fears about A.I.--and how other tech leaders are reacting to them. Here are some of the story's biggest revelations.

1. Musk believes that even seemingly harmless A.I. can be dangerous.

While the billionaire entrepreneur has spoken in the past about weaponized A.I., he also fears that giving a supercomputer even a seemingly harmless task could have catastrophic effects.
"Let's say you create a self-improving A.I. to pick strawberries," Musk said in the interview, "and it gets better and better at picking strawberries and picks more and more and it is self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever."

2. Musk's fears of A.I. created tension with his friend, Google co-founder Larry Page.

Musk targeted Google as the company most likely to let its A.I. get out of control. "I've had many conversations with Larry about A.I. and robotics--many, many," he said of his relationship with Page, adding that at some points, it affected their friendship.
One of Page's biggest beliefs is that A.I. is only as good or bad as its creators. Musk, on the other hand, envisions a scenario in which humans take a backseat to A.I.

3. Some in Silicon Valley think Musk's proclaimed fear is a recruiting tool.

Many argue that Musk is creating a good-vs-evil storyline. By portraying his own companies as being on the good side, he'll have an easier time attracting talent for cheap.
Andrew Ng, founder of Google Brain and the Silicon Valley-based chief scientist at Chinese A.I. company Baidu, suggested that Musk stands to benefit from that framing. "I think he sees accurately that A.I. is going to create tremendous amounts of value," Ng said.
Others disagree: "He's Elon-freaking-Musk," said Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute. "He doesn't need to touch the third rail of the artificial-intelligence controversy if he wants to be sexy. He can just talk about Mars colonization."

4. Peter Thiel fears that Musk's musings are having the opposite effect.

Thiel, the venture capitalist and Musk's fellow PayPal co-founder, worries that pushback against A.I. only draws more attention to the field, thereby accelerating research and progress.
"There's some sense in which the A.I. question encapsulates all of people's hopes and fears about the computer age," Thiel said. "I think people's intuitions do just really break down when they're pushed to these limits because we've never dealt with entities that are smarter than humans on this planet."

5. Some tech leaders think A.I. will breed super-humans (if it doesn't kill us first).

Futurist Ray Kurzweil has predicted that miniature computer chips within our bodies will eventually connect humans directly to the cloud, which could improve our intelligence and capabilities. Altman says that A.I. will lead to a new iteration of humans--if it doesn't rebel and wipe us all out first. "In the next few decades, we are either going to head toward self-destruction, or toward human descendants eventually colonizing the universe," he said.

6. Musk doesn't believe there's a way to stop A.I. once it becomes too smart.

Last year, a team of A.I. researchers, including one from Google's DeepMind, published a paper outlining plans for a "big red button" that could stop a dangerous A.I. system in its tracks. Musk isn't so sure this would work. "I'm not sure I'd want to be the one holding the kill switch for some super-powered A.I.," he said, "because you'd be the first thing it kills."