Machine learning can enable positive change in society, says Adam Benzion, Chief Experience Officer at Edge Impulse. Read on to learn how the company is preventing unethical uses of its ML/AI development platform. 

Adam Benzion (Chief Experience Officer, Edge Impulse) on ethics and machine learning

Priscilla Haring-Kuipers: What Ethics in Electronics are you working on?

Adam Benzion: At Edge Impulse, we try to connect our work to doing good in the world; it is a core value of our culture and operating philosophy. Our founders, Zach Shelby and Jan Jongboom, define it this way: “Machine learning can enable positive change in society, and we are dedicated to supporting applications for good.” This is fundamental to what we do and how we do it. We invest our resources in initiatives like UN Covid-19 Detect & Protect, Data Science Africa, and wildlife conservation with Smart Parks, Wildlabs, and ConservationX.

This also means we have a responsibility to prevent unethical uses of our ML/AI development platform. When Edge Impulse launched in January 2020, we decided to require a Responsible AI License for our users, which prohibits use for criminal purposes, surveillance, or harmful military or police applications. We have had a couple of cases where we turned down projects that were not compatible with this license. There are also many positive uses for ML in governmental and defense applications, which we do support as compatible with our values.

We also joined 1% for the Planet, pledging to donate 1% of our revenue to nonprofit organizations focused on the environment. I personally lead an initiative focused on elephant conservation: we have partnered with an organization called Smart Parks and helped develop a new AI-powered tracking collar that can last for eight years and be used to understand how elephants communicate with each other. It is now deployed in parks across Mozambique.

Haring-Kuipers: What is the most important ethical question in your field?

Benzion: There are a lot of ethical issues with AI being used for population control, human recognition, and tracking, let alone AI-powered weaponry. Especially where human safety and dignity are at stake, AI-powered applications must be carefully evaluated, legislated, and regulated. We dream of automation, fun magical experiences, and human-assisted technologies that do things better, faster, and at lower cost. That’s the good AI dream, and that’s what we all want to build. In a perfect world, we should all be able to vote on the rules and regulations that govern AI.

Haring-Kuipers: What would you like to include in an Electronics Code of Ethics?

Benzion: We need to look at how AI impacts human rights, and at machine accountability: when AI-powered machines fail, as in the case of autonomous driving, who takes the blame? Without universal guidelines to support us, it is up to every company in this field to find its core values and boundaries so we can all benefit from this exciting new wave.

Haring-Kuipers: An impossible choice ... What is the most important question to ask before building anything? A) Should I build this? B) Can I build this? C) How can I build this?

Benzion: A. Within reason, you can build almost anything, so ask yourself: Is the effort vs. outcome worth your precious time?


About the Author

Priscilla Haring-Kuipers writes about technology from a social science perspective. She is especially interested in technology that supports the good in humanity and is a firm believer in effects research. She has an MSc in Media Psychology and makes This Is Not Rocket Science happen.


WEEF 2022

The World Ethical Electronics Forum (WEEF 2022), which is slated for November 2022, will build on the momentum from last year's event, where Elektor engineers and other thought leaders discussed ethics and sustainable development goals. Over the next few months, Elektor will be publishing thought-provoking, ethics-related articles, interviews, and polls. Visit the WEEF 2022 website (www.elektormagazine.com/weef) for additional details.