
I’m in Paris tonight to speak at an event on Ethics and Artificial Intelligence.

Here’s the event pitch:

AI systems are making decisions in a variety of industries today, or will be doing so in the near future, that could have an impact on virtually everything they touch. As AI advances, systems will need to be trained and ‘raised’ in much the same way as humans. Come and join us to discuss the ethical challenges and solutions of artificial intelligence and its impact on the future. Engage with people from different sectors of AI and discuss the future market trends. We invite the brains from different startups who will pitch their ideas and will explain how AI will help us to change the future.

My message tonight is pretty simple: if you take the human out of the technology, you run into ethical issues every time.

To give my ten-minute speech a little structure, I’ve built it around a nice quote from William McDonough. Although he was speaking about the environment and the impact of human activity on the natural world, it’s something that I (and others) believe applies equally to discussions of AI development. Here’s what he said:

You can’t say it’s not part of your plan that these things happened, because it’s part of your de facto plan. It’s the thing that’s happening because you have no plan… We own these tragedies. We might as well have intended for them to occur.

When you think about the ethical issues that have emerged with the increasing development and adoption of AI technologies, this sentiment seems to apply.

Did your photo-matching algorithm somehow learn to classify African faces as gorillas? Of course you didn’t set out to achieve this outcome, but you did nothing to avoid it, either. It happened because there was no plan.

Did your supposedly bias-free criminal justice recidivism algorithm start recommending longer sentences for ethnic minorities and lighter sentences for ethnic majorities? Again, you didn’t plan for this outcome, but failing to make any plan around unconscious racial bias meant it was bound to emerge.

Did your autonomous vehicle ‘choose’ to run down a pedestrian rather than swerve and potentially injure a passenger? If there’s no rule to deliver this outcome (and there probably isn’t) but also no rule to prevent it, you’re on uncertain ethical ground, and the bad outcome is entirely predictable.

And what about mass surveillance and mass data-gathering systems operated by states, both authoritarian ones and those that are nominally democratic? There will be problems, there will be ethical challenges, and there will be entirely preventable negative impacts on populations, driven by what Cathy O’Neil called Weapons of Math Destruction. They might not be deliberate, but they are conceivable… if you bother to take the time to conceive them.

I’m glad that at my company the human is never taken out of the technology. Unlike some straight-up AI solutions, the technology we sell is not designed to replace the people who use it. We talk about augmenting human intelligence, and we work hard to encapsulate human expertise in our models and software solutions. Indeed, without the human element, our technology doesn’t work.

And it’s because we encapsulate human expertise, and because we connect human intelligence with artificial intelligence, that our technology doesn’t run into the same ethical issues that others do. Our software empowers decision makers; it doesn’t replace them. And the fact that we can build a business doing this means it’s an option for others in the technology sector, too.