AI drives critical need for a digital ethics framework

  • Posted on February 23, 2018
  • Estimated reading time 4 minutes

With digital ethics, “I know it when I see it” isn’t good enough

“I shall not today attempt further to define the kinds of material … within that shorthand description … but I know it when I see it.” With this rationale, U.S. Supreme Court Justice Potter Stewart famously captured the struggle to define something that resists definition, like taste, art, beauty or, in this case, obscenity.

That perspective could easily be applied to digital ethics today. Gartner defines digital ethics as the systems of values and moral principles for the conduct of electronic interactions. But what does this really mean? Everyone agrees we need to decide what is ethical and what is not, yet most executives and organizations seem to operate on an “I know it when I see it” basis.

Taking the lead on defining digital ethics
Technological advances like cloud, big data and artificial intelligence (AI) are creating new market opportunities and will help solve big societal problems. But the pace of innovation and the breadth of its adoption continue to accelerate, introducing challenges that will affect wide swaths of society. Waiting for local, national or international legislation to pass is not an option; it simply takes too long. At Avanade, we believe that responsibility for defining the ethics around building, using and applying technology lies with the organizations that are driving it. Those who lead the charge must play an active role in developing both informal and formal regulations.

Microsoft is one of those leading companies putting a stake in the ground, and recently described the societal impact of artificial intelligence in a ground-breaking book called “The Future Computed: Artificial intelligence and its role in society.” The book examines the use cases and potential dangers of AI technology and gives guidance on how to avoid or mitigate them. Digital ethics was also front and center at the recent World Economic Forum in Davos, Switzerland, where several sessions were devoted to AI and how to use it responsibly.

AI brings new urgency to digital ethics
There’s good reason the topic of digital ethics is gaining visibility. As technologies like AI grow, so too does their potential impact. For example, imagine a medically unfounded correlation between two data groups, such as gender and fatality rate among individuals who share a certain symptom. An AI-based tool used by a hospital to determine the urgency of surgery could learn that correlation and wrongly recommend surgical intervention. Medical expertise and common sense are required to catch such errors.
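To make that failure mode concrete, here is a minimal, purely illustrative sketch in Python with scikit-learn. The data is synthetic and the feature names are invented for illustration; nothing here is drawn from a real clinical system. It shows how a model trained on data containing a spurious correlation will learn it and act on it:

```python
# Hypothetical sketch: a model trained on data where an irrelevant
# attribute happens to correlate with the outcome will learn and
# act on that correlation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic records: one genuine clinical signal and one
# medically irrelevant attribute.
severity = rng.normal(size=n)                  # real clinical signal
gender = (rng.random(n) < 0.5).astype(float)   # irrelevant attribute

# The outcome is driven by severity, but we inject a spurious link
# to gender, mimicking a biased or unrepresentative dataset.
needs_surgery = (severity + 1.5 * gender
                 + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([severity, gender])
model = LogisticRegression().fit(X, needs_surgery)
print("learned weights (severity, gender):", model.coef_[0])
# The gender weight is large: the model would recommend surgery
# more often for one group, with no medical justification.
```

The model assigns substantial weight to the irrelevant attribute, so its recommendations differ between groups for no medical reason; this is exactly the kind of error that human review must catch.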

This may seem like a dramatic example, but it underscores the need for human involvement, especially in situations where ethical dilemmas could arise. We are starting to see digital ethics dilemmas play out every day, from those that affect the individual (Does an autonomous vehicle save a pedestrian or a passenger?) to those that impact the broader community (How do we maintain control over machines that become increasingly intelligent and powerful?). We all agree that decisions affecting human lives require human involvement. We need to look beyond pure logic and bring in compassion, a value for life and dignity, and simple common sense.

But that’s easier said than done. Avanade research shows that 89 percent of executives have encountered an ethical dilemma at work, but 87 percent admit they are not prepared to address ethical concerns caused by the increased use of smart technologies and digital automation. People are hungry for guidance that is thoughtful, effective and ready to be put into practice.

Four pillars of a digital ethics framework
Digital ethics is one of the most important building blocks for the success of emerging technologies. People will not trust systems or companies they do not believe are ethical, nor will they do business with them or work for them. A strong sense of ethics, especially in the digital space, will quickly become necessary for a digital organization’s short- and long-term sustainability.

At Avanade, we are already defining parameters around digital ethics and believe we can do much better than a “we know it when we see it” approach. We established four pillars to help determine whether a product or service is digitally ethical:

Fairness and inclusiveness. It must not apply, or lead to, unlawful or unethical discrimination. It must comply with data protection and security principles while also helping to create a better information society – one that is open, pluralistic, tolerant, equitable and just.

Human accountability. Humans need to be ultimately accountable for the deployment and outcomes of a computer-generated diagnosis or decision. Only humans can apply compassion, empathy and common sense to ethical dilemmas.

Trustworthiness. AI systems must be reliable, capable, safe, transparent and truthful in their digital practices. This includes making it clear to users whether they are dealing with a computer or a human, and ensuring design starts with reliable and trustworthy data.

Adaptability. As our understanding of digital ethics evolves, so should the tools we create. Flexibility will allow for this evolution and enable us to incorporate feedback and learn from it.

Only by meeting these criteria can we be confident that an AI system is digitally ethical.

“Ethics is knowing the difference between what you have a right to do and what is right to do,” Justice Stewart also once said. Our hope is that this framework can help move the discussion from the right to do something to agreeing upon the right thing to do.

Read more about how your organization can get ready for an AI-first world.
