Digital ethics is not a risk management discipline, and here’s why.
- Posted on October 26, 2021
- Estimated reading time 6 minutes
There is a recent tendency in our industry to talk about (and apparently manage) digital ethics impacts as if they were risks. At first, this may seem like a reasonable approach; many IT and business risks have ethical implications, whether we’re talking about privacy breaches or harmful results from unintended bias. In addition, many of the methods we use to identify and address ethical impacts are similar to those we might use in risk management, whether for identification, evaluation, or treatment.
However, treating digital ethics as another category or discipline of risk management is a mistake. Your organization needs a digital ethics program distinct from risk management. If you are conducting digital ethics risk assessments, you will come up short in at least four ways:
Many important ethical impacts are not risks.
Risk is a measure of uncertainty, usually articulating a combination of likelihood and impact of an event or circumstance. A risk event might be foreseeable, but never a certainty. However, when we assess digital technologies or initiatives, we can identify a number of ethical impacts that we can know for certain: crypto mining and training machine learning algorithms require a tremendous amount of energy, manufacturing mobile devices requires a number of rare metals, interfaces that don’t meet accessibility standards exclude people with impairments, surveillance technologies make people uncomfortable, and so on.
To be fair, there may also be risks related to these ethical impacts. For example, the widespread use of drones for package deliveries in an urban environment will definitely contribute to noise pollution, which has ethical implications. There is also a risk (with some likelihood) that the noise pollution is bad enough to result in lawsuits or customer backlash. But if you’re conducting a risk assessment, you end up going straight to these uncertain outcomes and fail to consider the known impact, which is an increase in noise pollution.
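To make the distinction concrete, here is a toy sketch in Python. Everything in it is hypothetical (the class names, the figures, and the drone example’s numbers are illustrative, not drawn from any standard or real assessment): a risk has a likelihood dimension to score, while a known impact simply does not, which is why a likelihood-driven assessment has no slot for it.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """An uncertain event, scored as likelihood x impact."""
    name: str
    likelihood: float  # probability in [0, 1]
    impact: float      # estimated cost if the event occurs

    def score(self) -> float:
        # Standard expected-loss style scoring
        return self.likelihood * self.impact

@dataclass
class KnownImpact:
    """A certain consequence -- there is no likelihood to estimate."""
    name: str
    description: str

# Hypothetical drone-delivery assessment
lawsuit = Risk("Noise-related lawsuit", likelihood=0.15, impact=500_000)
noise = KnownImpact(
    "Noise pollution",
    "Urban drone deliveries will add ambient noise.",
)

# A pure risk assessment ranks only `lawsuit` by its score; the
# certain impact `noise` has no likelihood dimension, so a
# likelihood-based register simply has nowhere to put it.
print(f"{lawsuit.name}: expected cost {lawsuit.score():,.0f}")
```

The point of the sketch is structural: the certain impact isn’t a low-scoring risk, it’s a different kind of object, and a process built around scoring uncertainty will pass over it entirely.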
Risk management programs are self-centered.
Corporate risk management programs – whether their jurisdiction is enterprise risk, IT risk, operational risk, or third-party risk – start and end with the assessing organization. All major risk management standards (ISO 31000, COSO ERM, etc.) guide practitioners to identify, measure, evaluate, and treat risks that might affect the organization’s ability to achieve objectives (financial, operational, legal, strategic, etc.). There is tremendous value in these programs, and I’d love to see them get more investment. However, they don’t address some of the most important ethical harms.
Why? Let’s consider a privacy breach. By now, any good enterprise risk register will include privacy breach, hopefully considering both employee and customer data. The negative impacts described in that risk register will include costs such as regulatory fines, lawsuits, and reputational damage. But the risk register will not consider the mental toll on the individuals whose data was breached, nor will it include the personal time they will spend to reset potentially dozens of online accounts. These are ethical harms that a risk program will ignore.
Justifying mitigation costs for ethical harms requires a different process.
The reason companies invest time and money in risk management programs in the first place is to understand and address the potential events and circumstances that may lead to future losses. (It’s the rare, albeit plausible, risk management program that recommends increasing risk.) But in the standard nomenclature of risk and reward, the goal is typically to reduce the likelihood and/or severity of negative risk impacts in order to make a particular venture more profitable. For example, a construction firm might establish safety measures to reduce the likelihood of employee injury and take out an insurance policy to cover the costs of any accident – both mitigation efforts will make it more likely the company will complete the project without massive costs eating into their profits.
On the other hand, if we think about the chance that an unintentionally biased algorithm refuses to extend a loan to an aspiring homeowner, the cost of mitigating that chance of bias is less clear. The company may consider it worth the cost to mitigate the likelihood of a lawsuit or the cost of reputational damage, because those costs will eat into profits. But deciding how much to spend to mitigate the ethical harm to the customer requires ethical deliberation and considerations of corporate values, which are not standard practices in risk management.
Risk management programs don’t strive for ethically positive outcomes.
Strong corporate risk management programs can do a lot to strengthen trust in a brand, with the possibility of improving important metrics like customer and employee loyalty, customer win rate, and technology adoption rates. That said, risk management programs don’t aim to achieve ethically positive outcomes (unless you count increasing trust as a positive outcome in itself, in which case, let’s talk).
Digital ethics efforts, on the other hand, can and do look for ways to create more ethically positive outcomes. For example, making profile preferences more inclusive in an online social or gaming environment can encourage a larger population to engage with and benefit from technology. Enabling customers to voluntarily share private data in exchange for a particular benefit can improve their experience and sense of agency. Neither of these efforts would fall into the category of risk remediation, nor would they normally be considered “tech for good” (as they aren’t making technology for altruistic purposes). Instead, digital ethics efforts here are guiding decisions to make technology work better for relevant stakeholders.
A final thought: Digital ethics and risk management must work together
Acknowledging that these two disciplines have distinct jurisdictions and goals, they can still coexist and even benefit from each other. Specifically, the continuous monitoring, oversight, and reassessment that we see in mature risk programs are excellent models for digital ethics. Both capabilities should be embedded into standard procedures, which in the tech world include everything from requirements gathering and design to implementation and operations. And both are essential elements of a brand promise that should pervade the organization and ultimately improve customer and employee experience. We even found in our Digital Ethics Global Survey that companies that have a robust risk management program are 20% more likely to offer digital ethics training for employees.
As executives work to demonstrate that they operate a responsible business, they should advocate for strong risk management and digital ethics capabilities, as long as they don’t make the mistake of confusing the two.
As always, I look forward to your input, and if you’re looking for a more in-depth discussion or help on this topic, you can contact us directly or post a comment below.