Upcoming AI regulation and what it means for your business

  • Posted on June 15, 2023
  • Estimated reading time 9 minutes
Understanding the latest AI regulations

Recent months have seen many exciting new developments in the AI space, particularly with Large Language Models (LLMs). These powerful models are evolving rapidly, with a vast array of contexts within which they can be applied. As a result, the impact and risks that arise from these technologies are also changing.

Reflecting some of this concern, more than 30,000 signatories from various organisations supported an open letter published by the Future of Life Institute in March, calling for a six-month moratorium on the training of AI systems more powerful than GPT-4. While the letter was signed by influential figures in the AI space, it has also been met with considerable scepticism from stakeholders, including ethicists. Critics argue that such a pause would stifle innovation and fail to address the real risks, that there is no realistic way to implement a moratorium without government intervention, and that appropriate regulation should be prioritised instead.

Given such concerns, there is a lot of movement in the regulatory space by major players such as the EU and the UK. Rishi Sunak recently announced that Britain will host the first global AI regulation summit this autumn (2023), which aims to discuss how an internationally co-ordinated approach to regulation can mitigate risks while harnessing AI's potential.

The UK government is also proposing a proactive approach to the future of generative AI technologies, and its attitude towards effective regulation strikes a decidedly different tone from the one voiced by the open letter. In its whitepaper, ‘A Pro-Innovation Approach to AI Regulation’, the government emphasises the importance of establishing regulatory frameworks that encourage innovation by industry rather than stifling it. The resulting proposal is intended to be flexible enough to address the adaptive and context-specific nature of AI technologies, including generative AI (GenAI) solutions.

A risk-based, outcome-oriented approach
With a focus on the use and context of AI rather than the technology itself, the framework promises an outcome-oriented approach: regulatory requirements will differ based on the specific use case and context in which the technology is deployed. For businesses, this means the same technology could require different levels of risk management in different contexts. For example, stricter controls would be necessary when using GenAI in healthcare to provide medical advice based on a patient's symptoms than when using the same technology to summarise research papers or documents in the same sector.

In this respect, the stance taken by the UK government echoes that of the European Commission in its EU AI Act, which was approved by the European Parliament on May 11, 2023. The AI Act, which will become one of the first legal frameworks on AI upon its formal adoption by the end of June 2023, defines 4 levels of risk posed by AI systems, outlined below:

[Figure: The EU AI Act's four-level risk pyramid. Source: European Commission]

Examples of AI technologies under each category:

  1. Minimal risk: The majority of AI systems currently in use in the EU, such as AI-enabled spam filters and video games, fall under this category and can be used freely.
  2. Limited risk: Chatbots and other AI systems subject to specific transparency obligations, such as informing the user that they are interacting with a machine.
  3. High risk: Technologies like biometric identification systems, credit scoring systems, exam scoring systems, recruitment software and more will be subject to strict obligations.
  4. Unacceptable risk: Any AI system that is considered to be a clear threat to the safety, livelihoods and rights of people, such as social scoring systems deployed by governments, will be banned.

How the regulations address generative AI
The AI Act includes language addressing GenAI, such as provisions requiring companies to disclose when people are interacting with generative models and obligating them to meet certain transparency requirements. Obligations include risk management, data governance and auditing by independent experts to check the robustness of the foundation model, as well as transparency measures to provide detailed summaries of any copyrighted data used for training.

Similarly, the UK whitepaper provides an agile definition of AI systems as being characteristically autonomous and adaptable, which is inclusive of GenAI technologies. No specific regulatory rules are proposed for foundation models such as LLMs; however, regulators may issue specific requirements to address associated risks, such as transparency measures to inform users when a model is being used and what data was used to train it, echoing the EU AI Act.

When considering using or developing generative AI models, businesses should also consider implementing measures such as scenario planning, working with external auditors, leveraging Microsoft’s Responsible AI tools (such as Fairlearn and InterpretML), and testing and monitoring for model drift to avoid any ‘ethical debt’* that may emerge post-deployment. A lightweight example of the kind of fairness check these tools enable is sketched below.
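As a purely illustrative sketch, the snippet below uses Fairlearn, one of the Microsoft Responsible AI tools mentioned above, to compare a model's accuracy and selection rate across groups defined by a sensitive attribute. The toy labels, predictions and "group" column are hypothetical placeholders for your own validation data.

```python
# A minimal sketch of a group-level fairness check with Fairlearn.
# The labels, predictions and "group" column are hypothetical placeholders.
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

# In practice, y_true, y_pred and the sensitive feature come from your validation set.
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 0, 1, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "group":  ["A", "A", "A", "B", "B", "B", "B", "A"],  # hypothetical sensitive attribute
})

# Break accuracy and selection rate down by group to surface disparities.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=df["y_true"],
    y_pred=df["y_pred"],
    sensitive_features=df["group"],
)
print(mf.by_group)

# A single disparity number that could feed an internal risk threshold.
dpd = demographic_parity_difference(df["y_true"], df["y_pred"], sensitive_features=df["group"])
print(f"Demographic parity difference: {dpd:.2f}")
```

Disaggregated metrics like these, reviewed before deployment and again on production data, are one practical way to catch the disparities that can otherwise become ethical debt.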

Below, we summarise what the framework aims to accomplish and some of its key takeaways, followed by the implications for businesses and how to get ahead of the competition in capitalising on AI, and GenAI technologies in particular, safely and responsibly.

1. UK whitepaper: The framework
The UK framework aims to guide the responsible use of AI by issuing 5 key principles on a non-statutory basis to be implemented by existing regulators in each sector. It is designed to be a flexible and adaptable approach to ensure that companies are encouraged to keep investing, building and innovating responsibly in this space.

Even though the duty will initially be non-statutory, a statutory duty may be introduced following an initial period of monitoring for effectiveness. It is therefore important to implement the right custom frameworks and governance across all stages of the AI lifecycle within your organisation to make sure the principles are observed and applied effectively.

The principles in question are as follows:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

UK regulators are expected to update their own guidance around AI according to these principles, meaning that businesses governed by those regulators will be required to comply accordingly.

2. Implementation & enforcement
To ensure the principles above are implemented effectively by regulators across sectors, new central functions will be established, which we discuss below.

2.1. Central risk function: A cross-sectoral function will be tasked with identifying, assessing, prioritising and monitoring AI risks. This adds a layer of support above the individual regulators for businesses deploying AI systems, enabling more specialised control and monitoring of risks.

2.2. AI sandbox & testbeds: Regulatory sandboxes and testbeds will be created to support innovators in getting AI products to market faster, testing the regulatory framework and identifying emerging trends that the framework may need to be adapted to. This will encourage businesses to keep innovating and identify risks early, with an added layer of confidence that the solutions will be better assured of being safe before deployment.

2.3. AI assurance techniques & tools: The whitepaper proposes assurance tools such as impact assessments, performance testing and formal verification methods. To help organisations understand how these can support wider AI governance, the government has launched a Portfolio of AI assurance techniques, which showcases how these tools can be applied by businesses to real-world use cases. A simple illustration of an automated performance test is sketched below.
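As one hedged illustration of what "performance testing" can look like in practice, the sketch below shows an automated test that checks a trained model still meets an agreed accuracy threshold on a held-out dataset. The threshold, file paths and column name are hypothetical placeholders, not part of any official assurance technique, and the test is assumed to run under pytest.

```python
# A minimal, illustrative performance test that could sit in an AI assurance suite.
# The threshold, file paths and "label" column are hypothetical placeholders.
import joblib
import pandas as pd
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.85  # hypothetical acceptance criterion agreed with risk owners


def test_model_meets_accuracy_threshold():
    model = joblib.load("model.joblib")        # hypothetical trained-model artefact
    holdout = pd.read_csv("holdout.csv")       # hypothetical held-out evaluation set
    predictions = model.predict(holdout.drop(columns=["label"]))
    accuracy = accuracy_score(holdout["label"], predictions)
    assert accuracy >= ACCURACY_THRESHOLD, f"Accuracy {accuracy:.2f} is below threshold"
```

Run regularly (for example, on every retraining or release), a check like this produces an auditable record that the model continues to meet the performance criteria agreed during an impact assessment.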

3. What should you do?
There are some key steps your company can take to get ahead of the changes to come and the requirements that the government and the regulators will be monitoring and enforcing. Avanade can support your organisation in your AI journey through our Responsible AI capabilities and generative AI expertise, strengthened by our Microsoft-OpenAI partnership.

Below are 4 ways in which Avanade can support your organisation in your AI journey:

  1. Define & establish Responsible AI Strategic Alignment Strategy
    • Solidify your company’s Responsible AI vision by creating an organisation-wide Responsible AI strategy, identifying the key risks, values and principles for your business. Avanade can help in aligning the Responsible AI principles with the wider company values and vision, to charter a clear path forward.

  2. Assess the current state of AI maturity to ensure effective governance is in place
    • We can support you in conducting an AI maturity and impact assessment to highlight, categorise and prioritise the risk areas and gaps that your current frameworks may not be considering across key domains. We can then apply our Control Framework to your existing policies and processes to address the risks identified, supported by Microsoft’s Responsible AI tools to improve the fairness, privacy and explainability of your solutions.

  3. Implement an organisational model and framework for deploying AI solutions
    • It is important to develop a governance structure that can guide your organisation from the initial conception of a use case through to provisioning and integration of the system into the business. Avanade’s comprehensive Responsible AI framework can support you in establishing the right deliberation processes and accountability framework, and in operationalising governance and controls to ensure human oversight, thereby promoting trust.

  4. Encourage value-driven design to support value realisation and continuous improvement
    • By embedding values across the design process, we can support you through each phase of the solution lifecycle, including data exploration and preparation, training, re-training and tuning, deployment, and monitoring and adjustment. We can help ensure that your models are interpretable, transparent and explainable in order to avoid undesirable consequences such as biased outputs; a minimal illustration of model explainability tooling follows this list.
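As an illustrative sketch only, the example below uses InterpretML, one of the Microsoft Responsible AI tools mentioned earlier, to train an inherently interpretable (glass-box) model and inspect which features drive its predictions. The synthetic dataset is a placeholder for your own data; the general pattern of training an explainable model and reviewing its global and local explanations is what matters.

```python
# A minimal sketch of model explainability with InterpretML, using a glass-box
# Explainable Boosting Machine. The synthetic data is a placeholder for your own.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Placeholder data: replace with your own features and labels.
X, y = make_classification(n_samples=1_000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an inherently interpretable model.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

# Global explanation: which features matter most overall.
show(ebm.explain_global(name="Global feature importances"))

# Local explanation: why the model made these specific predictions.
show(ebm.explain_local(X_test[:5], y_test[:5], name="Example predictions"))
```

The show calls open interactive visualisations; in a governance process, the same explanations can be exported and attached to model documentation or impact assessments as evidence of explainability.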

4. Conclusion
Overall, the UK and EU approaches to AI, and GenAI, regulation share a custom, use-case-based approach that reflects the importance of context when determining and addressing AI risk. Meeting the requirements set out by the regulations will therefore require businesses to establish their own Responsible AI frameworks, equipping themselves with the right tools, skills and processes to ensure positive, value-driven outcomes. It is also worth remembering that new regulations will keep coming and existing ones can change. While any organisation doing business in Europe will need to comply with the AI Act, organisations have a moral responsibility to practise responsible AI regardless of regulatory mandates: doing so boosts the company brand, mitigates risks, generates trust and addresses any potential human impact created by the technologies.

Given the complexities around advanced AI systems like generative AI models, this is not an easy task. We can help you navigate and prepare for the coming regulatory changes by implementing and embedding the processes required to make confident decisions around AI. We can also help you kick-start your generative AI journey from scratch, going above and beyond the regulatory requirements to give you the competitive edge needed to establish trust among your peers.

Contact us to talk to an expert about how to get started with Generative AI, and register for a Responsible AI strategy consultation today.

*Fixing the problems after they present themselves (post-deployment) can result in what is called ‘ethical debt’, which means it is the affected stakeholders and users who may have to pay the price for the negative consequences created by the technology.
