February 24th 2022

Getting Comfortable with AI Ethics

People tend to get a little overwhelmed when you first bring up the subject of “AI ethics”. The intersection of a complex branch of philosophy with a complex branch of technology and mathematics is not, generally, a crowd-pleaser of a topic.

Despite this, once past the scary intellectual speed bump, nearly everyone has plenty to talk about. That is probably just as well: how we as humans are going to interact with AI as it develops is one of the questions of our time.

So how do we get people to engage with these uncomfortable conversations? This sort of blog is part of that effort. AI merits almost endless ethical discussion because it is a power amplifier in a number of remarkable ways. By this I mean that AI is generally applied to making decisions faster, “better”, or using more data than a human can, so it enables more to be done, “better” outcomes, or the examination of things that were previously out of reach. The problem, and the reason I’ve put quotes around “better”, is that flawed AI can of course make “bad” decisions faster too, proliferating errors or bias far more quickly than any human could.

Now, humans certainly don’t make perfect decisions; often they’re not even good ones. We have a proud history of taking a long time to trust new things and of clinging to tightly-held but fundamentally unsound ideas. In the early 20th century, for instance, the theory that travelling over 100 mph would cause irreparable damage to the human body was quite widely shared, and at the same time the pseudo-science of phrenology (a theory of human behaviour based on the belief that an individual’s character correlates with the shape of their head) very nearly made it into aspects of the criminal justice system!

So if we don’t trust easily, we’re not great at decisions, and AI can help us do bad things just as quickly as good ones, how do we defend against this? Or should we all just hide under the bed?

Perhaps unfortunately, hiding isn’t going to work. AI and automation are going to be an ever-growing part of all of our lives, not just the lives of those who work directly with data. The biggest thing data leaders can do is to help their organisations build the internal capability, and the space, to discuss and critique the ethical implications of AI projects. For an ethical AI project, the ever-repeated question needs to be “should we?” far more than the “can we?” that typically predominates in technology-led areas. As technologists, we are far too likely to get caught up in our own excitement and perceived cleverness to consider these questions carefully enough, so we have to engage widely with other colleagues and users to test the “should we?”. We have to be open to that challenge, we have to build structures that make it easy to discuss, and we have to take the time to explain clearly how even complex models work. Some sectors already have frameworks in place to unpick the key questions that might arise: in medicine and academia, for instance, there are ethics committees and known mechanisms to facilitate discussion. However, these don’t exist in most sectors, and even where they do, the AI-specific elements aren’t always well served.

In my experience, the practical things we can do in this space that really drive change are twofold. First, organisations should take the question of AI ethics seriously and invest in training for staff. “Training” isn’t quite the right word, but “facilitated group discussion with an expert” is a bit of a mouthful. That said, it is precisely what is needed: the philosophical, “no right answer” nature of ethics means that videos, books and lessons will only take people so far.

Second, there are simple prompts that can be worked into project documentation to encourage people to think about ethical considerations. These can be as straightforward as a “why should we do this using AI techniques?” justification statement in a business case, an ethics canvas used and updated throughout the project lifecycle, or an “ethical challenges” session run by someone outside the project team. With these approaches and other work, ethics will become a capability within your business. Just as data protection is everyone’s business, so is ethics, and to avoid repeating the mistakes that others have already made with AI, we have to invest in guardianship of the values we hold as organisations and give data practitioners the encouragement and support to act in that role.

The challenge isn’t to put someone “in charge of ethics” for your organisation; the challenge is to get your whole organisation comfortable discussing the ethical issues in their work and spotting the issues that will inevitably arise as AI becomes an ever greater part of our work and lives.

Author: Richard Oakley, Director of Data Science and AI at Methods Analytics
