The pillars of ethical automation

  • Posted on January 23, 2019
  • Estimated reading time 3 minutes

This article was originally published on IT Brief Australia.


When technological advancements like the cloud, big data and artificial intelligence (AI) started their rise to prominence, the world knew it was in for some big changes.


One of the more obscure and unpredictable by-products of this technology-driven change has been the rise of digital ethics as a topic for discussion amongst business leaders.

This has been particularly true in the case of AI and automation, where new technology is creating ethical challenges and dilemmas that are giving a renaissance to the armchair philosophy of ethical thinkers past, as demonstrated by the many business leaders, governments, scholars and others pondering the ethical implementation of automation.

As the builders and users of autonomous systems, it’s important that we consider what ethical automation should look like.

Digital ethics are unavoidable
As we bring intelligent automation (IA) into our lives, the ethical thinking that has moved throughout time from philosophers' armchairs to university lecterns, and from political chambers to courtroom benches, is now moving once again – this time to the computer programmer's lab.

The data chosen to go into an intelligent autonomous system, as well as how the system is taught to interpret that data – both have huge implications for the ethical behaviour of the end product. In effect, the builders of our autonomous tools are making ethical decisions in the way they code and train their AI to behave – whether they like it or not.

This is a challenge we’ve already seen play out time and time again. Examples include an AI tool that demonstrated racial bias when predicting the likelihood of future criminal offending, and an AI-backed photography service that caused offence to minorities because its image recognition was trained with an inherent racial bias towards the majority – both real cases.

As the pace and widespread adoption of intelligent automation continue to accelerate, the resulting rapid changes are introducing challenges that will affect wide swaths of society. Waiting for local, national or international legislation to pass is not an option. It simply takes too long.

One convincing school of thought believes that the responsibility for defining the ethics around building, using and applying technology lies with the organisations that are driving it. Those who lead the charge must play an active role in developing both informal and formal regulations.

The automotive industry and AI
Arguably, the area that best demonstrates the pace of change and the challenges facing developers of autonomous systems is the automotive industry, as it sits in the midst of total disruption led by intelligent automation.

In consumer transport, some cities have begun to adopt self-driving vehicles, allowing citizens to benefit from auto-piloted commutes.

The philosophical and ethical questions that arise here have actually been around for some time now. The Trolley Problem is a philosophical thought experiment that was first introduced in 1967 – and was recently revisited in the popular Netflix program The Good Place.

The thought experiment places a decision maker in the driver’s seat of a speeding trolley (tram) experiencing a brake failure, facing a choice between two tracks. If the driver takes no action, the tram will crush five people on the track ahead; if the driver switches tracks, it will crush just one.

While this thought experiment was originally just a theoretical question for pondering the nature of what is ethical and moral, today, this scenario is no longer theoretical at all. The programmers of autonomous cars must program best and worst-case scenarios into their car systems – when faced with a problem like the above, how do we trust them to make the right decision?

This thought experiment can be expanded to any situation where lives are at stake. If an AI system is managing scheduling for operations in a hospital, should it prioritise five short surgeries with a very high likelihood of success over one more complicated surgery?
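To make the dilemma concrete, here is a toy sketch (all names and numbers are hypothetical, not drawn from any real system) of the purely utilitarian framing a scheduler might encode: rank each option by the expected number of patients saved.

```python
# Toy illustration of a utilitarian scheduling rule -- NOT a real triage
# system. Each surgery is a (label, probability_of_success) pair; the
# expected number of lives saved is the sum of the success probabilities.

def expected_lives_saved(surgeries):
    """Expected number of patients saved across a list of surgeries."""
    return sum(prob for _, prob in surgeries)

# Option A: five short surgeries, each with a 95% chance of success.
option_a = [(f"short-{i}", 0.95) for i in range(5)]

# Option B: one complicated surgery with a 60% chance of success.
option_b = [("complex-1", 0.60)]

# A purely utilitarian scheduler picks whichever option maximises the
# expected value. Whether that is the *right* rule is exactly the ethical
# decision the programmer ends up encoding, deliberately or not.
best = max([option_a, option_b], key=expected_lives_saved)
```

The point of the sketch is not the arithmetic but the design choice: the moment a developer writes `key=expected_lives_saved`, they have committed the system to one ethical theory among many.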

Despite the challenges, the use of AI in business is pervasive and extremely exciting. We are seeing developments led by AI that promise to revolutionise many industries – each a compelling reason to tackle the hard questions today, so we can reap the rewards of a better tomorrow.

In road infrastructure, the operators of the Sydney Harbour Bridge have recently implemented sensors and machine learning to monitor for and predict bridge damage, hoping to improve the lifespan and commuter safety of the bridge.

In commercial trucking, major brands like Tesla have announced self-driving trucks that are set to disrupt a multi-trillion-dollar industry.

These are far from the only examples, and the automotive industry is not the only one feeling the effects of AI and digital transformation – but it is a very visible one, and one that quite literally begs the question: who is steering us towards an ethical digital future?

Johnathon Petersen

Very interesting article. As a BA, the statement "The data chosen to go into an intelligent autonomous system, as well as how the system is taught to interpret that data – both have huge implications for the ethical behaviour of the end product" is a very important one. I think it is important to start identifying the ethical impacts of how we design and implement solutions. The challenge is: by what values and standards do we do this? Is it purely by legal considerations (as is the current method) or another way?

February 10, 2019
