Building trust and managing risk in the age of AI
- Posted on January 16, 2024
- Estimated reading time 3 minutes
At a holiday cocktail party last month, there was none of the usual talk about playoff prospects, fantasy leagues or mid-winter getaways. Most people were chatting about one thing: Generative AI. What is real versus hype? What should their organizations be doing today to get ready? What are the risks and dangers? The benefits?
I live and breathe Gen AI in my day job, so I ended up sharing with them what I tell my clients and curious friends every day.
Gen AI is a transformative technology that will impact and benefit every client and every industry, whether insurance, banking or asset management. Companies have been learning about it, and some have been kicking the tires, but everyone is feeling the pressure to move from thinking about AI to putting it into action. Indeed, in Avanade’s AI Readiness Report, 94% of organizations surveyed said they are increasing their digital investments because of AI. Companies are eager to act. But how?
The first step: Build trust.
The number one question that comes up is anchored in trust. How can we be sure the information we receive from Gen AI will be accurate and free from bias? Can we trust that our data and our customers’ information will be secure? Can we trust AI to help us do our jobs?
The White House joined the conversation last October when it issued an executive order addressing the “Safe, Secure and Trustworthy Development and Use of Artificial Intelligence” to promote responsible use of AI while mitigating risks. This was followed by the European Union, which reached an agreement on wide-ranging provisions that could set a de facto global standard to classify risk, enforce transparency and hold companies accountable for noncompliance.
The U.S. executive order outlines eight guiding principles and priorities that federal executive departments and agencies must adhere to when undertaking actions covered in the order, encompassed in two general categories:
- Ensure AI is safe, secure, transparent, accountable and respects privacy, civil rights and civil liberties.
- Address AI risks, including the careful and responsible use of AI in the areas of biotechnology, cybersecurity, critical infrastructure and other national security concerns.
I’m glad to see the U.S. government and the European Union begin to work toward adopting governance. But let’s be honest: that high-speed train has already left the station. While the proposed EU legislation may pass as early as May, it wouldn’t fully take effect for two years. And beyond the executive order, any legislation in the U.S. is far behind. In the meantime, companies across our industry and others are already creating their own best practices and developing the principles of Responsible AI.
Here’s my take on what you can do today to build trust and minimize risk with AI:
- Think big, start small to build trust.
- Governance comes first. Create governance principles, internal controls and protocols that reflect your company’s core principles.
- Adopt a people-first mindset. This is about helping organizations and their people become more efficient: driven by people, enabled by Gen AI technology.
- Commit now to the ethical and responsible use of AI. Consider appointing a Responsible AI ambassador, as Avanade has, to guide the use of generative AI throughout your organization.
There’s still time to jump on that high-speed train; just don’t wait for it to stop at the station.
- Read the Avanade AI Readiness Report: Explore the perspectives of 300+ banking professionals on the potential and value of artificial intelligence.
- Ready to responsibly harness the benefits of AI? Sign up for an Avanade AI organizational readiness workshop.
- Explore Microsoft Copilot for Microsoft 365: From readiness to adoption, learn how you can unlock the full potential of your AI-powered work assistant.