Centralized vs. distributed
The work currently done on AI in universities, government labs, and corporations such as Google, Facebook, and car manufacturers may one day lead to superintelligence, which the philosopher Nick Bostrom defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. Superintelligence could take a centralized form: say, a single machine with more thinking power than all of humanity combined. Such a machine in the hands of one nation-state or corporation would hugely skew the balance of power in favor of its owner.

Conversely, a distributed superintelligence would benefit everyone with access to it, just as the knowledge repository that is the internet makes everyone who is connected more, well, knowledgeable.

“We think the best way AI can develop is if it’s about individual empowerment and making humans better, and made freely available to everyone, not a single entity that is a million times more powerful than any human”, Altman said in an interview that Hackers author Steven Levy conducted with him, Musk, and OpenAI CTO Greg Brockman.

It is hard to predict how AI will develop. No one knows if or when superintelligence will emerge, and it is equally hard to assess the type and severity of the threats AI may pose to humanity. In his book Superintelligence: Paths, Dangers, Strategies, Nick Bostrom meticulously addresses both questions. One threat he identifies is the control problem: how do you make sure a superintelligence acts only in ways that are beneficial to humanity? One approach is capability control: limiting the AI's capabilities so it cannot act without human assistance, for instance by locking it in a box with no actuators and no access to the internet. Another approach is motivational control: embedding human values in the AI so that it will intrinsically act for the benefit of humanity.