Drowning in digital ethics guidance? Here’s how to make real progress
- Posted on March 12, 2020
- Estimated reading time 3 minutes
It’s fair to say that digital ethics and responsible AI are starting to get the widespread attention they deserve. But how helpful is all that guidance? It seems like every other week another association or authority releases a new set of ethical guidelines. In the past month alone, we’ve seen new documents from the European Commission, the US Department of Defense, and the Pope(!) on how best to tackle tough ethical questions in AI.
You’ll notice common threads among these guidelines, such as explainability, transparency, accountability, fairness, and respect for human rights including privacy. As we’re still operating without broad regulation for AI and digital technology, the good news is that whichever set of guidelines you subscribe to, you’re in pretty good company with the others. However, that doesn’t give you or your team much to go on in practical terms.
To help you take those next steps, Avanade just published our Digital Ethics Trendlines report, a point-of-view paper describing how companies can practically translate their ethical objectives into a functioning program.
As I’ve described before, the first step is taking these objectives and your company’s own values and turning them into principles that drive desired behavior. Next, look at the governance structure you already have in place: the processes and controls that assure quality, compliance, security, and privacy for the technology you build, buy, deploy, and operate. Each of those stages should carry ethical responsibilities too. Every designer, developer, quality assurance and security tester, implementer, and operator of technology has a part to play.
The Trendlines report includes this diagram describing how various stakeholders of IT governance can contribute:
This will all be a gradual process and a change in culture for most organizations. If you’re looking for ways to keep the momentum going, keep these big-picture goals in mind:
- Find ways to make biases lead to positive outcomes. It’s impossible to completely remove bias from any system; however, systems that single out a certain population are not necessarily bad. (The issue is when those biases lead to unfair treatment.) Knowing that, stop spinning your wheels trying to eliminate bias entirely, and aim instead for positive outcomes: increased diversity, more employee promotions, better customer service, and so on.
- Root out long-term biases hardwired into your development lifecycle. Some of the most insidious biases may be those that cause indirect but ultimately harmful impacts over the long term. For example, you might be prioritizing rapid user adoption, aesthetic quality, and user privacy, which all seem to have positive implications. However, over the long term these priorities may inhibit product quality, environmental responsibility, and national security (respectively). Encourage your team to talk through these priorities regularly and find ways to make adjustments that align with your corporate values.
- Prioritize inclusion, transparency, and oversight. Be as inclusive as possible when understanding requirements and designing solutions; the more diverse your team, the likelier you are to identify potential issues and come up with the most favorable solution. Require transparency for any difficult or potentially contentious decisions, so stakeholders understand what’s going on. Finally, build these principles into your existing governance structure – your executive committee meetings, IT governance board agenda, IT policies, performance objectives, etc. – so that they receive appropriate oversight and become part of your organization’s culture.
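To make the first goal above concrete: one lightweight way to watch for the unfair-treatment case is to monitor outcome rates across groups. Below is a minimal Python sketch of such a check; the `selection_rates` and `parity_gap` helpers, the sample decision data, and the 0.2 alert threshold are all hypothetical illustrations for this post, not part of the Trendlines report.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group positive-outcome rates.

    decisions: iterable of (group, approved) pairs, where approved is a bool.
    Returns a dict mapping each group to its approval rate.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical decisions tagged with an applicant group.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = selection_rates(decisions)
gap = parity_gap(rates)
if gap > 0.2:  # illustrative threshold, not a legal or regulatory standard
    print(f"Review needed: parity gap {gap:.2f} across groups {rates}")
```

A check like this doesn’t decide whether a disparity is unfair – that judgment belongs to the governance process described above – but it makes disparities visible early enough for stakeholders to talk them through.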
And as always, if you’re looking for a more in-depth discussion or help on any of these topics, you can contact us or post a comment below.