AI and our privacy: Should we be worried?

  • Posted on June 27, 2019
  • Estimated reading time 3 minutes

No fewer than 85 percent of Belgians say they are seriously worried about their privacy when they think of sharing personal data in combination with AI. This is one of the conclusions of a study by IPSOS and former Minister for the Digital Agenda De Croo. From a realistic point of view, however, one thing is clear: we all love a bargain. Most people do not mind sharing some information in order to get the best deal. The question is: when are we sharing too much data? And how long should other parties keep that data? In my opinion, sharing data as such does not pose a risk. If the data you share is used only in the way you intended, would you consider that a breach of privacy?

The main issue, therefore, is not whether data is being shared, but how it is used. Is your information handled in an ethical way? We can all think of situations where private data could be used against us, and these ethical questions also matter on a larger scale. Consider the growing amount of public surveillance footage: how quickly could ethical barriers be crossed if AI were applied to all of that data? These questions are top of mind, are definitely not merely a local concern, and are food for thought for many think tanks worldwide. I believe a global policy like the General Data Protection Regulation (GDPR), prescribing how data must be handled ethically, is crucial.

“Avanade embeds client data protection into every project.”
Making this policy a reality should be the next step. Although government enforcement is a positive development, everyone should also take personal responsibility for their own ethical code of conduct. Within Avanade, for example, we are all committed to our COBE (Code of Business Ethics) policy, which states: “Respect is the cornerstone for ethics and integrity. By respecting the law, policies (both our clients' and our own), and each other, we can ensure we are realizing results the right way.”

What about security: How do we protect access to AI from so-called ‘bad actors’?
Access to data, and to information in general, is a very important topic of discussion. We all know the by-now-infamous GDPR, the EU regulation on data protection and privacy, which aims to give individuals control over their personal data. Avanade also embeds client data protection into every project we manage, to safeguard our clients’ data as much as possible. Keeping data secure with proper access control is our first concern; only then can we start to talk about how that data is used.

“Ensuring AI behaves ethically should prevent the ‘bad actor’ effect.”
I am aware of the fears that exist around the misuse of AI. In short, though: while the systems that host AI should be thoroughly secured, AI itself cannot be hacked in the traditional sense. Nevertheless, when we apply AI in ways that affect human wellbeing, we should do so with great care. The ‘bad actor’ could be the AI itself: if it takes a decision based purely on economic factors while placing less value on human life, the outcome is probably not the intended behavior. Ensuring AI behaves ethically should prevent the ‘bad actor’ effect.

Predict what's next with analytics and AI: download our 6-step user guide.
