Good bot, bad bot: Unravelling the ethics of AI

  • Posted on November 30, 2018
  • Estimated reading time 4 minutes
AI ethics

Right now, we stand on the cusp of another great technological revolution, as a range of intelligent technologies that fall under the banner of AI begin to redefine what’s possible – in our businesses and in our lives.


Rapid technological advancements are creating a set of questions, problems and dilemmas that never existed before. There are new risks as well as new opportunities, and digital innovators must be deliberate in designing how we deal with these new intelligent technologies and the associated data they create.

As Adam Warby, Avanade’s CEO, has pointed out, we currently have an opportunity to shape the relationship between humans and the powerful new technology we’re creating, which means we must take a thoughtful approach to digital ethics.

That’s one of the reasons we talk to our clients about Applied Intelligence – the combination of intelligent technology and human ingenuity applied across every business function and process – which helps organizations take advantage of intelligent technologies confidently and responsibly.

“Ethical regulation of the design and use of AI is a complex but necessary task. The alternative may lead to devaluation of individual rights and social values, rejection of AI-based innovation, and ultimately a missed opportunity to use AI to improve individual wellbeing and social welfare.”

Addressing ethical concerns
In a recent study with Forbes Insights, Accenture, Avanade and Microsoft explored many of the emerging trends and issues around AI, including ethics.

It’s clear that, built the right way, AI has the potential to vastly improve the way we work and live. Indeed, 47 percent of respondents believe AI systems should be built around humans, with ethical concerns high on the list of things to address.

The good news is that genuine ethical concerns (not groundless scare stories) are front of mind for organizations like Avanade, and our partners and clients, that are innovating and implementing the technology.

That said, our research shows that 87 percent of executives surveyed feel unprepared to address the ethical concerns arising from the increased use of AI and automation. So, how can we realize the many benefits of the coming AI revolution without causing unintended harm?

The problem with automation
Algorithms use data to make choices, but how do we know what rules they follow – or when it’s better to deviate from them? As AI, machine learning, and automation technologies become increasingly intelligent, we need transparency to understand how algorithms reach conclusions and make decisions. In fact, 52 percent of our survey respondents say that companies need to make data and analytics transparent and comprehensible to consumers and non-scientists.

We know, for example, that algorithms can be biased. They can inherit biases from the people who built and configured them – and from the data we choose to give them.
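
To make that concrete, here’s a minimal sketch (in Python, with entirely invented records and a made-up tolerance) of the kind of pre-training check that can surface bias inherited from historical decision data:

```python
# A minimal pre-training bias check on historical decision data.
# The records, field names and 0.2 tolerance below are illustrative
# assumptions, not a real dataset or a prescribed threshold.
from collections import defaultdict

historical_decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(records):
    """Approval rate per demographic group in the historical data."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        approvals[record["group"]] += record["approved"]
    return {group: approvals[group] / totals[group] for group in totals}

rates = approval_rates(historical_decisions)
print(rates)  # e.g. {'A': 0.67, 'B': 0.33}

# If outcomes differ sharply by group, a model trained on this data
# is likely to learn and reproduce that disparity.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Warning: historical decisions differ sharply by group.")
```

A check like this doesn’t prove a dataset is fair, but it makes one obvious way bias creeps in visible before a model learns from it.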

We’ve come a long way since the COMPAS risk-assessment algorithm, used by US courts to score defendants’ likelihood of reoffending, was shown to exhibit clear racial bias in its predictions. Incidents like this have taught us that, while AI systems can analyze massive datasets to suggest a course of action, humans must make the final decision.

If algorithms are making autonomous decisions, how do we know they’re doing the right thing? For David Gunning, a program manager at the Defense Advanced Research Projects Agency (DARPA), ‘explainability’ is the key. His Explainable Artificial Intelligence (XAI) program is looking at how the AI systems that intelligence analysts and soldiers on the ground will rely on can explain the conclusions they reach and the decisions they make.
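
As a toy illustration of what explainability can mean in practice, consider a model simple enough to hand back its reasoning with every decision. The weighted scorecard below is a sketch with invented features, weights and threshold – real XAI research targets far more complex models – but it shows the shape of a decision that arrives with an explanation:

```python
# A toy 'explainable' decision: the model is a transparent weighted
# scorecard, so each decision is returned together with the per-feature
# contributions that produced it. All weights, features and the cut-off
# are invented for illustration.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0  # assumed approval cut-off

def decide_with_explanation(applicant):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "decline"
    # Rank features by how strongly each pushed the score.
    reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, score, reasons

decision, score, reasons = decide_with_explanation(
    {"income": 3.0, "debt": 1.0, "years_employed": 2.0}
)
print(decision, round(score, 2))  # approve 1.3
for feature, contribution in reasons:
    print(f"  {feature}: {contribution:+.2f}")
```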

Humans in the loop
We can’t allow unchecked automation – there must be a human element to any AI system. We must ensure that, during design and implementation, we aren’t just testing functionality and interoperability, but also testing for bias in the data, design and configuration. When designing automated systems, we must include a digital ethics perspective. That means building transparency and control into the system, so people can see and understand the data and the decision-making rationale, and make informed choices based on the AI analysis.
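
One common human-in-the-loop pattern is to let the system act on its own only when it’s confident, and route everything else – plus any high-impact decision – to a person. The sketch below assumes the model emits a confidence score; the threshold is an illustrative choice, not a recommendation:

```python
# A minimal human-in-the-loop routing sketch. The AI proposes; a human
# decides whenever confidence is low or the stakes are high. The 0.9
# threshold is an assumed, illustrative cut-off.

CONFIDENCE_THRESHOLD = 0.9

def route(decision: str, confidence: float, high_impact: bool) -> str:
    if high_impact or confidence < CONFIDENCE_THRESHOLD:
        return f"ESCALATE to human reviewer: {decision} (conf={confidence:.2f})"
    return f"AUTO-APPLY: {decision} (conf={confidence:.2f})"

print(route("approve claim", 0.97, high_impact=False))  # applied automatically
print(route("deny claim", 0.97, high_impact=True))      # always human-reviewed
print(route("approve claim", 0.62, high_impact=False))  # low confidence -> human
```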

Diversity is also a key challenge that needs to be addressed as we design and build AI systems. The team building the system should come from different backgrounds and bring diverse perspectives to the project – and a combination of cognitive intelligence (IQ) and emotional intelligence (EQ) is vital.

After implementation, it’s vital to maintain human oversight and, if necessary, adjust the algorithm as it evolves. And that creates an opportunity for new skills – analyzing the decisions made by algorithms and checking for bias. This will require training, as well as input from a variety of stakeholder groups to establish an ethical baseline for decision-making.
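
As a sketch of what that ongoing oversight might look like (the log format, baseline and tolerance are all assumptions for illustration), a periodic audit job could compare the live system’s approval-rate gap between groups against the gap accepted at launch, and escalate to the review team when it widens:

```python
# A post-deployment audit sketch: compare the deployed model's
# approval-rate gap between groups against a launch baseline and
# escalate when it drifts. All values are illustrative assumptions.

def rate_gap(decisions):
    """decisions: list of (group, approved) pairs from production logs."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    rates = [approvals[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

BASELINE_GAP = 0.05     # assumed gap accepted at launch
DRIFT_TOLERANCE = 0.10  # assumed acceptable widening

this_week = [("A", True), ("A", True), ("A", False),
             ("B", False), ("B", False), ("B", True)]

gap = rate_gap(this_week)
if gap > BASELINE_GAP + DRIFT_TOLERANCE:
    # In a real system this would notify the oversight team, not print.
    print(f"Escalate for review: gap {gap:.2f} vs baseline {BASELINE_GAP:.2f}")
```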

In addition, the value of human contextual awareness and domain expertise can’t be overstated when it comes to interpreting and acting on AI recommendations, and these irreplaceable human capabilities should be a core part of any AI initiative.

The human touch must also play an essential role in the decision-making process, tempering robotic efficiency with important skills that are impossible for AI systems to replicate, such as empathy, compassion and common sense.

For AI to be better integrated, so that businesses, communities and individuals can experience its full benefits, it must be human-centered – augmenting human capability and capacity without causing unintended consequences. And to be human-centered, AI and automation must be grounded in a digital ethics framework.

A blueprint for digital ethics
At Avanade, we’ve defined a digital ethics framework to guide our clients’ AI and automation projects, based around four key characteristics of a digitally ethical product or service:

  • Fairness and inclusiveness: Free from discriminatory bias, and compliant with data protection and security principles.
  • Human accountability: With humans taking responsibility for the deployment and outcomes of robotic decision-making.
  • Trustworthiness: Safe, transparent and truthful systems that users can rely on.
  • Adaptability: Flexible enough to adapt as our understanding of digital ethics evolves.

At heart, though, a digital ethics framework needs to be about people – your customers and your employees – so that instead of just asking what’s possible, you also ask what’s ethical.

I recently picked apart these issues and more during a fascinating discussion on the impact of AI with Pat Geary, Chief Evangelist at Blue Prism, Professor Leslie Willcocks and Lance Ulanoff – you can see all the highlights of our conversation in this video series.
