AI, you have some explaining to do

  • Posted on June 5, 2020
  • Estimated reading time 3 minutes

You never know what goes on behind closed doors
When it comes to your next-door neighbors, maybe it’s better that way. But in the context of artificial intelligence (AI), it’s quite the opposite. As we operationalize machine learning (ML) and AI systems, end-users need to know how decisions are made and why actions are taken.

What I often hear from clients considering AI adoption, and from users in the field who work with AI-based decision making, is that they don’t trust the black-box paradigm of AI. If AI is “learning” and “evolving” based on acquired data, and they can’t see its logic flow, they’re not comfortable with it and don’t want to rely on its decisions or recommendations.

I recently discussed this very issue with a client that had developed an AI to help human teams determine bid ranges for oil and gas exploration leases, based on strategic fit, expected economic return and competitive intelligence. When the AI’s guidance happened to align with the analysts’ recommendation, they saw little value in it; after all, they had already come to the same conclusion. When the guidance disagreed with their recommendation, they rejected it as clearly not valid. Because the decisions ultimately involve many millions of dollars and there was no way to follow how the AI derived its conclusions, they relied on human experience and never put the AI’s decisions to the test. This effectively robbed the AI of the opportunity to establish a contrarian track record.

Without knowledge, there is no trust
AI has enormous potential to help us in many ways, across many industries and applications. However, end-users are often in the dark about how and why AI decisions are made, and that opacity breeds distrust. And for good reason: it could be risky, costly or even dangerous to place blind trust in AI. Here are a few examples of how we do and don’t trust these systems:

Risk assessment at the bank
You apply for a loan and the bank uses AI and ML systems to conduct a risk assessment, determine your creditworthiness and process your application quickly. These technologies help your bank with compliance and reduce financial crime risk through advanced fraud detection. What these systems cannot do, however, is make judgment calls based on unique context. A human bank employee might have approved your loan, but the AI system denies it, even though you may very well be a solid credit risk when your individual circumstances are considered.

Alarm fatigue at the nuclear power plant and in healthcare
In the control room of a UK-based nuclear power plant there are more than 25,000 alarms, all of which are overseen 24/7 by a team of expert reactor operators. The equipment monitors everything in the plant, from pumps and generators to turbines and, of course, the nuclear reactor itself, and the operators are responsible for the day-to-day safe operation of the facility.

And in the healthcare industry, many monitoring systems are used—blood pressure, heart rate and oxygen level monitors, weight-sensitive bed and chair alarms, and other more advanced systems.

“False alarms” in both of these industries may cause alarm fatigue, a natural human behavioral reaction to deprioritize signals that have often proven to be false or misleading. In these settings, that can have dangerous or even life-threatening consequences.

Cancer diagnosis
When IBM attempted to introduce AI into oncology with Watson for Oncology, the promise was simple: top-quality treatment recommendations for 12 cancers that account for 80 percent of the world’s cases. What happened, though, was unexpected. As in the oil and gas leasing case above, doctors rejected the AI’s contrary recommendations, assuming a flaw or incompetence in the algorithm, while concurring recommendations were seen as adding little value.

Because the complexity of the underlying model meant the machine could not “explain” its decisions, or why a recommended contrarian treatment was reasonable, distrust only grew, with doctors rejecting the AI’s recommendations and sticking with human expertise.

De-risking human-machine interaction early on
So how can we address the trust risk so prevalent with AI systems? Right now, AI development and model generation are far too technically focused. By building transparent explainability into AI projects from the start, developers and data scientists can make design decisions that make it easier for testers, stakeholders and end-users to comprehend these systems, as in the brief sketch below.
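
To make that concrete, here is a minimal sketch of one lightweight explainability technique, permutation importance, applied to a hypothetical loan-approval model. The data, feature names and the choice of Python with scikit-learn are assumptions for illustration only, not the systems discussed above; the point is that the technique reports which inputs drive a model’s predictions in a form non-specialists can review.

```python
# Illustrative sketch only: a hypothetical loan-approval model explained via
# permutation importance. Data and feature names are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["income", "debt_ratio", "credit_history_len", "num_late_payments"]

# Synthetic stand-in for historical loan outcomes.
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does held-out accuracy drop when each
# feature is shuffled? A simple, model-agnostic explanation that can be
# shared with testers, stakeholders and end-users.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: mean importance {score:.3f}")
```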

Transparent AI through collaboration
Another factor to consider is emerging regulation, which is helping to address issues such as bias and privacy through the establishment of ethical frameworks. In April 2019, two initiatives were launched:
  • The US introduced the Algorithmic Accountability Act to check automated decision-making systems for bias.
  • The European Union introduced Ethics Guidelines for Trustworthy Artificial Intelligence, which outlines seven key requirements that AI systems should meet in order to be deemed fair, transparent and accountable.

More global consultations and programs are underway, such as the G20 AI Principles, the Declaration on Artificial Intelligence in the Nordic-Baltic Region, and Canada’s Directive on Algorithmic Impact Assessment.

There is still much work to be done to build AI systems that are trusted and transparent. By working together, industry, scientists and governing bodies can develop systems that earn that trust, protect brand reputation and safeguard data.
