"I have not spent 27 years of my life creating something that is going to harm our planet." Those are the words of Michael Stewart, co-founder and CEO of the Artificial Intelligence company Lucid. The Texas-based company has recently brought to market an AI system that took decades to create.

To safeguard against the technology causing harm, Stewart, together with legal expert Kay Firth-Butterfield, set up an Ethics Advisory Panel to help the company roll out its AI for good.

Lucid's AI system is called Cyc. The Cyc project was started in 1984 by renowned computer scientist Dr. Doug Lenat. He and Stewart have collaborated for many years and together they founded Lucid when the system was finally ready to go into the world.

Uncharted territory
All new technologies can be both beneficial and detrimental to humanity, but artificial intelligence in particular could turn out to be profoundly harmful to our species. The concerns range from engineering challenges such as the control problem - the question of how we can ensure that a superintelligence remains under human control - to socio-economic issues like wealth inequality if AI takes over jobs at a massive scale.

The emergence of AI raises complicated questions, and these issues are certainly receiving attention in the field. In academia, for instance, the University of Cambridge's Centre for the Study of Existential Risk (CSER) has a special program dedicated to researching the risks of AI. And last year saw the launch of OpenAI, a non-profit that promises to open-source its AI research to make it available to all. But Lucid may be the first commercial AI company with an Ethics Advisory Panel (EAP) to steer its business strategy along ethical guidelines.