Digital ethics: Global survey results and storylines to watch in 2021

  • Posted on January 25, 2021
  • Estimated reading time 5 minutes

As we look to turn the corner after a decidedly and unexpectedly challenging year, we see digital technology at the center of so many of today’s most important issues: health, politics, commerce, education, criminal justice, and social interaction. Looking ahead to 2021, there are many reasons to be hopeful, and many ways that rethinking technology can help usher in a renewed sense of vitality for ourselves and our institutions.

But as always, we have to remain vigilant with our ethical decisions, understanding that how we design, develop, and deploy technology will have tremendous impact on our lives and the lives of others. Below we look at five of the most significant and urgent digital ethics storylines we expect to unfold in 2021, and we offer some highlights from Avanade’s soon-to-be-released Global Digital Ethics Survey results as context for how we can move forward.

We’ll begin a reckoning with workplace and citizen surveillance and the privacy we’ve given up in responding to COVID-19.
To help prevent the coronavirus from spreading, while simultaneously trying to bolster our economies, we’ve called on tech in a big way. From contact tracing apps and fever screening systems to at-home delivery and contactless commerce, from online education and court proceedings to remote working and health care, every day there are fewer and fewer aspects of our lives that aren’t facilitated and tracked by digital technologies.

Avanade’s Global Digital Ethics Survey found that tech spending priorities affected how business and tech leaders thought about many aspects of digital ethics, including balancing health and privacy. For example, among leaders who said that employee health and safety is one of their top 3 tech spending priorities, 26% had a negative view of how their firms handled ethical issues when using sensitive personal data, compared to only 20% among leaders whose firms did not count employee health and safety as a top tech spending priority. Considering how much more data companies and governments have been collecting for the sake of keeping people safe, it’s important to voice these concerns and clarify exactly how these organizations will use the data they collect.

Content monitoring and moderation will be a lightning rod of challenges with no easy solution.
Whether we’re talking about spotting and dealing with hate speech, deep fakes, disinformation, child pornography, or calls for violence, the way digital platforms govern users’ content will be a source of much debate in 2021. For example, in the UK, there was understandably passionate debate about a December 2020 rule to increase users’ privacy protections that will also make it more difficult for tech platforms to monitor messages for evidence of child abuse, trafficking, and other criminal behavior. Meanwhile in the US, striking down or substantially changing Section 230 of the Communications Decency Act (which protects tech platforms from liability for users’ posts) seems to be one of the few policy items with bipartisan support.

Views on these issues vary by region. In Europe, 28% of our survey respondents (and 36% in France alone) have a negative view of how well their firms are addressing child protection in their technology, while only 20% of North American respondents felt the same way. Similarly, 25% of European respondents had a negative view of how well their firms are addressing their technology’s support for trustworthy information, while only 16% of North American respondents gave that response. Understanding that no company has completely solved this challenge, the higher levels of concern seem to indicate more attention to and appreciation of these complex issues. We expect both attention and appreciation to increase dramatically in 2021, but solutions will require thoughtful collaboration.

Diversity and inclusion failures will continue, and progress will be difficult to spot.
The shortcomings here seem obvious. Technology designed and developed by teams that lack diversity is supporting broader systems that fail to recognize or address bias and unfairness in the outcomes they produce. Examples in 2020 were too numerous to count: from bias in image cropping and in ads displayed on social media to the perpetuation of racism in housing and mortgage-processing algorithms, it’s clear that we have much work to do. On top of that, it’s easy to be discouraged as we read stories like Google’s firing of one of its AI ethics leaders for publishing a report on the inherent biases and inequalities of AI (although I’m inspired by the industry’s response).

Judging by the results of our global survey, there are still large gaps in both the diversity of tech teams and inclusivity of the products they produce, although some companies are doing better than others. For example, respondents who said their companies have a defined set of corporate values that help guide their decisions were much more likely (32%) to say they have seen business benefits from how inclusive their technology is compared to those who say they don’t have defined values (17%). And not coincidentally, among respondents who said they don’t have defined values, 50% said their firms could be doing more to improve the diversity of people who develop their technology.

Our focus on privacy is overshadowing other ethical issues, like mental health and personal agency.
Critical debates about data privacy continue, and the arguments related to data protection, security, pandemic response, and law enforcement won’t subside any time soon. These are important concerns, but these topics have largely overshadowed other pressing issues, such as how technologies can impact people’s mental health and personal agency. There are countless victims of cyber-bullying, cyber-stalking, revenge-porn, deep fakes, and other malicious attacks unrelated to privacy that cause deep and lasting harm. And even in less flagrant situations, people are suffering from fatigue, over-surveillance, hyper-connectivity, and dark patterns in the technology they engage with on a constant basis.

Business and tech leaders feel differently about these issues depending on their industry. For example, 32% of survey respondents in financial services firms had a negative view of how their firm handles mental health issues related to technology, and 30% had a negative view with respect to personal agency issues. Comparatively, 21% and 18% of health care respondents gave those same answers, respectively. But we can’t simply assume that health care organizations are doing a better job here, as their focus on privacy may be keeping them from fully considering other ethical issues.

We’ll continue to see terrific leaps forward for digital accessibility.
In all of these discussions, it’s important to note that considering ethical implications is not just about identifying ethical risks; we should be looking for ethically positive outcomes as well. One example we’re seeing across the tech industry is a deeper appreciation for accessibility, in all its forms. While this field is still ripe for investment and innovation, we’re seeing products like motion-capture tech being used to translate between voice and sign language, accessible game controllers and in-game accessibility features, and eye-tracking support for a growing number of personal and work devices. There are also technologies, including extended reality, helping neurodiverse individuals engage with classrooms and workplaces.

Interestingly, it appears that smaller companies have a better approach to accessibility than larger ones. Specifically, 37% of survey respondents from companies smaller than 5,000 people (minimum 1,000) and 41% from companies with less than $1B USD (minimum $500M) in revenue said their firms are addressing accessibility so well that they’re seeing positive business results. But only 32% of respondents from companies larger than 100,000 people and only 35% from companies with over $5B in revenue gave the same answers, respectively. While those discrepancies may seem relatively small, consider the vastly greater impact larger companies can have in the field of accessibility tech, and imagine just a 5% increase in the number of enterprise executives who see the business value of being more inclusive.

Avanade’s Global Digital Ethics Survey reached 800 business and technology leaders from a wide range of industries in firms around the world. We will continue to share more data from our Global Digital Ethics survey over the next several weeks and months as we track these and other major storylines throughout 2021.

As we’ve been saying, so much of the improvement we hope to see and achieve in digital ethics will start when we change the nature of our conversations, when talking about the ethical impacts of our technology becomes second nature.

As always, I look forward to your input, and if you’re looking for a more in-depth discussion or help on any of these topics, you can contact us directly or post a comment below.
