Five ways legislation governing AI may impact health and life sciences

  • Posted on December 19, 2023
  • Estimated reading time 4 minutes

When the White House issued an executive order in October addressing the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, my colleagues and I cheered.

When the European Union took its own serious steps to regulate AI early this month, the move toward regulation gained even more steam. According to news reports, the European Parliament could pass the AI Act proposed by EU officials as early as May 2024. Although the law would then take two years to come into full effect, this legislation will certainly influence what we see here.

As someone who has worked with evolving technologies throughout my career, I am passionate about accessibility, equity and the responsible and ethical use of any powerful technology, and nothing fits that description as well as generative AI.

Developing and deploying rules of the road for the responsible use of generative AI is the backbone of Avanade’s own approach to working with our clients, along with our partners Microsoft and Accenture. It is integral to any deployment we design.

Nowhere is that more important than in the fields of healthcare and life sciences, already among the most regulated industries – and ones where the responsible protection of confidential and individual information must inform everything we do. I am encouraged to see how closely these governance initiatives mirror the conversations my team and I have with our healthcare and life sciences clients every day.

The U.S. order outlines eight guiding principles and priorities that executive departments and federal agencies must adhere to when undertaking actions covered in the order, encompassed in two general categories:
  • Ensure AI is safe, secure, transparent, accountable and respects privacy, civil rights and civil liberties.
  • Address AI risks, including the careful and responsible use of AI in the areas of critical infrastructure, biological, radiological, nuclear and cybersecurity risks, and other national security concerns.

While the EU legislation is described as more far-reaching, regulators there debated long and hard about how to provide responsible oversight without stifling innovation.

The EU legislation will likely shape the development and deployment of generative AI in the U.S. over time, much as European data privacy laws have led large tech companies with European operations to adopt similar practices. More immediately, I see five areas within healthcare and life sciences where the U.S. executive order and federal support for Responsible AI principles could impact deployment of generative AI in the U.S.

  • Data privacy and security: Organizations will be even more tightly focused on protecting what is shared within their four walls and what is exposed publicly, as AI is integrated fully into cross-enterprise platforms, such as those used to power electronic health records.
  • Ethical AI: We will see significant focus on ensuring fairness, transparency and accountability not only in AI-generated output, but also in the AI algorithms used for medical diagnosis, treatment planning and other healthcare applications.
  • Accessibility and equity: This is another big one. Organizations will make a focused effort to make sure AI technologies are accessible and that their benefits are distributed equitably across different demographic groups. AI’s ability to synthesize and analyze data could lead to new approaches to address disparities in healthcare access and outcomes, including specific initiatives to address social determinants of health, such as food insecurity, safety, education and homelessness, all of which have been shown to impact healthcare and disease outcomes.
  • Research and development funding: Access to federal funding will likely depend on adherence to AI governance requirements as they are finalized.
  • Workforce training and education: One area to watch – and advocate for: federal funding for workforce training and education programs to ensure healthcare professionals and organizations are equipped to use AI technologies effectively and ethically.

In Avanade’s AI Readiness Report, life sciences outpaces all other industries surveyed when it comes to having a complete set of responsible AI policies in place (57%), with healthcare not far behind (49%). Still, significant work remains for all industries to implement the proper guardrails – another area where adoption of federal guidelines may spur action.

AI development is moving at the speed of light. While it will take time for official governance around the world to catch up, I am inspired that companies like Microsoft and Avanade are leading the way to establish and follow Responsible AI protocols now.

Learn how Avanade can help operationalize Responsible AI with powerful new tools from Microsoft: Accenture and Avanade Expand Capabilities to Accelerate Data Readiness and Generative AI Adoption with Microsoft Technology.
