Gen AI must be used responsibly in the AI era – but is it?

  • Posted on November 2, 2023
  • Estimated reading time 5 minutes
Responsible Gen AI adoption

The pace of change in the Artificial Intelligence (AI) era is already remarkably fast, and it continues to accelerate. Generative AI in particular has played a significant – if not the leading – role in driving this acceleration.

With OpenAI's public release of ChatGPT – the most widely known Generative AI platform – in November 2022, and Microsoft confirming its US$10 billion investment in OpenAI at the end of January this year, many questions about the technology's staying power were answered. In fact, Gartner predicts that in under three years, Generative AI will produce 10 percent of all data – that's tens of trillions of gigabytes.

The exponential growth of this ground-breaking technology gives business leaders plenty of reasons to be excited, starting with its power and ease of use. But, as with any emerging technology, there are also potential risks. Some of these risks may come from your company's own use of the technology; others from malicious actors.

To manage both kinds of risk and harness Generative AI's power to drive sustained outcomes and build trust, businesses must prioritize Responsible AI – an approach to developing, assessing, and implementing AI systems in a safe, trustworthy, and ethical way.

We are seeing progress from innovators, businesses, and governments in their efforts to facilitate the development of Responsible AI, while building a broader understanding of the technology's potential risks. For example, the White House announced new investments in AI research and forthcoming public assessments and policies, and the UK Government is positioning itself as a leader in global AI safety regulation.

Europe is at the forefront of responsible AI adoption
However, it's the European Union that is leading the way, setting the global standard for regulating artificial intelligence with its AI Act. Initially proposed in 2021, the Act is expected to be approved by the end of this year, or early 2024 at the latest. It aims to establish comprehensive regulations for artificial intelligence technologies, emphasizing transparency and the protection of fundamental rights in the European Union.

The way the EU approaches AI will help define the world we live in. To build a resilient Europe for the Digital Decade, the goal is for people and businesses to enjoy the benefits of AI while feeling safe and protected. The European AI Strategy aims to make the EU a world leader in AI that is human-centric, sustainable, and trustworthy, empowering citizens and businesses to unleash its potential.

Fostering excellence in AI will strengthen Europe's ability to compete globally. The EU intends to achieve this by enabling the development and uptake of AI across the bloc; making it the place where AI thrives from the lab to the market; ensuring that AI works for people and remains a force for good in society; and building strategic leadership in high-impact industry sectors. This will be a game-changer for Europe, and the rest of the world will follow.

Government legislation is not enough – businesses must also act now
While the EU’s announcement that governance is forthcoming serves as a good start in ensuring businesses deliver Responsible AI, there is still much ground to cover.

New research from Avanade and McGuire Research Services reveals that most business and IT leaders are excited about the dawn of Generative AI and are imagining what they will do with AI Copilots, but they may be less prepared to harness it responsibly. In fact, 48% of business leaders say they do not yet have a complete set of specific guidelines and policies in place for Responsible AI. That could be why 49% are not very confident that their organization's risk management processes are adequate for an enterprise-wide technical integration of Generative AI.

And there's a time crunch to consider. Almost all say that their organization needs to shift to an AI-first operating model within the next 12 months to stay competitive, and the majority say they need to move even faster – by early 2024.

The silver lining is that this urgency creates huge and immediate opportunities for innovative approaches to Responsible AI. But without a solid Responsible AI framework – ethical guardrails and governance – organizations risk generating more business risk than business value.

However, Responsible AI is not just about being prepared. It’s about aligning to a business’ core values and doing what’s right for its people, its clients, and its communities. It’s about being conscious, being humble, being vigilant. We cannot thrive in the AI era without being responsible.

There are five crucial steps to Responsible AI that all business and IT leaders must address today:

1. Embrace the future of work with AI
2. Understand the unlimited human opportunity for AI
3. Recognize that there is no end point to AI readiness
4. Hit the ground running with a Responsible AI framework
5. Seek an experienced partner you can trust.

With these steps in place, combined with adherence to government legislative frameworks, businesses can drive transformative value with Responsible AI. Take Page Group as an example – a company that is augmenting its workforce capabilities through the power of Generative AI. For them, the real power of AI is how it can supercharge the vast knowledge and experience of their people. They have a solid framework to help them learn fast, so they can prioritize actions that enable their people with AI and harness the business value of the technology in a secure and transparent way.

I urge all business and IT leaders to act now to remain competitive. Rise above the hype and discover the use cases that will help your team deliver business value fast with Responsible AI.

Learn, explore, build, and create with the technology – ensuring responsible and reliable safeguards are in place to protect people and businesses. Above all, do what matters to make a genuine human impact.

