In the interview, Musk explained how distributed AI could provide motivational controls if AI developed as an extension of people rather than as stand-alone machines.

AI [as an] extension of yourself, such that each person is essentially symbiotic with AI as opposed to the AI being a large central intelligence that’s kind of an other. If you think about how you use, say, applications on the internet, you’ve got your email and you’ve got the social media and with apps on your phone — they effectively make you superhuman and you don’t think of them as being other, you think of them as being an extension of yourself. So to the degree that we can guide AI in that direction, we want to do that. And we’ve found a number of like-minded engineers and researchers in the AI field who feel similarly.

If AI develops as part of people, as part of humanity, then it will be just as good and just as bad for humanity as people are now for each other. Altman said: “Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else.”