What we can learn from anti-tech sentiment
- Posted on December 17, 2021
- Estimated reading time 4 minutes
Technology can be an emotional topic. Ask someone how they feel about sacrificing digital privacy to support law enforcement, about social media’s impact on politics, or about the future of artificial intelligence, and their response will likely carry an emotional charge. And we all know tech optimists who see innovation as the solution to the world’s most severe problems, just as we also know pessimists who think new technologies will likely do more harm than good.
Those of us who make a living in a technology role or in the tech industry might feel compelled to disregard or discount the pessimists’ emotional viewpoints as irrational, especially in cases where the pessimists aren’t especially tech-savvy. But it’s worth taking time to hear them out; it might just be the best way to improve our approach to innovation.
Emotions give us insight
An emotionally negative response to technology might initially represent a “gut reaction” driven by fear or uncertainty, like a concern that new image recognition software will invade our privacy or autonomous machines could steal our jobs. But before we dismiss these immediate reactions, we should understand that they’re often based on at least some existing knowledge or past experience. In addition, they can prove valuable by helping guide our focus on key aspects of risk.
As an example, let’s take video-based sentiment analysis used to evaluate job candidates. Some candidates might recoil at the thought of such technology because of prior knowledge (say, articles documenting its demonstrable biases and inaccuracies) or past experience (friends have told them they’re emotionally “hard to read”). It’s also possible they can’t immediately articulate their reasons for reacting negatively, in which case we can ask more questions. And as they learn more about the system, we can gauge whether their negative emotions diminish or intensify, then make appropriate adjustments to the technology and related processes if needed.
Here we see how emotions can help us:
- Focus on issues and challenges that need closer scrutiny.
- Guide discourse and decisions in a more inclusive way, involving both tech experts and non-experts.
In an excellent book on this topic, Risk, Technology, and Moral Emotions, Sabine Roeser explains the idea in great detail, concluding that “the emotions of the public should be taken seriously in order to arrive at well-grounded judgements about the moral acceptability of risks.”
Listen for specific concerns
At Avanade, we’re seeing these kinds of negative feelings arise occasionally among customer organizations, for example with employees who are hesitant to engage with products that track their physical location or their online behavior. Their concerns may be related to violations of privacy, personal time, personal agency, or any number of other factors. And this hesitance can be costly: we’ve seen a case where a major connected-worker project was scrapped completely because of employee concerns, and another where employees actively avoided a multi-million-dollar physical tracking system because their privacy concerns overshadowed the operational and safety benefits.
In cases like these, the employees’ hesitance doesn’t seem irrational or unfounded, and rarely are they upset about the technology itself. The concerns center on what the organization might do with the technology: executives might replace workers with automated systems, managers might use personal data inappropriately to make personnel decisions, and so on. From a practical standpoint, these concerns are worth close consideration and, potentially, remediation.
To address these concerns, we might craft policies and employee communication that explain what data will be stored and how it will be used (and, more importantly, not used). We might establish access and process controls to prevent the misuse of the technology. And on top of mitigating these risks, we might also highlight ways that these solutions can help improve individuals’ performance, work/life balance, health and safety, career trajectory, and other ethically positive outcomes (perhaps especially for those who decide to opt in).
Learn from the Luddites
On a final note, we may view tech hesitancy as a form of neo-Luddism, invoking images of early-19th-century weavers organizing to smash stocking looms. But we know that those and similar technologies had been in use for decades before the protests began, and that the Luddites’ emotional call to action was a response to manufacturers’ inauthentic claims about quality, as well as broadly unfair workplace and economic conditions.
Since then, anti-tech movements have routinely reflected antipathy toward unfair labor practices, safety issues, privacy concerns, social injustices, and other societal harms; only in very rare cases do they protest the technology itself. With that in mind, we can all do a better job understanding and exploring the specific concerns behind anti-tech sentiment and addressing them when possible. If we do it right, with curiosity and empathy, the results should include a more responsible approach to implementing and operating technology, which in turn should lead to greater trust, adoption, and ROI.
As always, I look forward to your input, and if you’re looking for a more in-depth discussion or help on this topic, you can contact us directly or post a comment below.