
Understanding the Impact of the New EU Artificial Intelligence Act on Clinical Research

Artificial intelligence (AI) has taken the world by storm – and regulators are paying attention. The European Parliament recently adopted the Artificial Intelligence Act (AI Act), marking a significant regulatory step in the oversight of AI technologies.

This landmark legislation aims to create a comprehensive framework for AI development and deployment, ensuring ethical use, safety, and transparency for European Union (EU) residents.

The EU AI Act’s implications extend into clinical research, where AI is increasingly utilized for tasks like medical image analysis, natural language processing for endpoint analysis, and generating/analyzing data for synthetic control arms.

This article explores the likely impact of the AI Act on software and systems used in clinical research and how it affects entities outside the EU. We also summarize the key information pharmaceutical companies and contract research organizations (CROs) need to know to prepare for compliance.

An Overview of the AI Act

The new Act categorizes AI applications into four risk levels: unacceptable, high, limited, and minimal.

Examples of limited- and minimal-risk systems include AI in benign gaming apps and language generators. These systems face fewer regulations but must still meet certain standards to ensure ethical use.

Unacceptable risk AI systems are banned outright, while high-risk systems must comply with stringent requirements, including transparency, data governance, registration with the central competent authorities, and human oversight.
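
To make the tiering concrete, here is a minimal Python sketch that models the four tiers and assigns a few familiar system types to them. The assignments are illustrative assumptions only, not legal determinations; real classification turns on the Act’s annexes and case-by-case analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # stringent compliance obligations
    LIMITED = "limited"            # mainly transparency duties
    MINIMAL = "minimal"            # few or no obligations

# Illustrative tier assignments only -- not legal determinations.
EXAMPLE_CLASSIFICATIONS = {
    "social-scoring system": RiskTier.UNACCEPTABLE,
    "medical image analysis tool": RiskTier.HIGH,
    "patient recruitment screener": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{system}: {tier.value} risk")
```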

The Act entered into force in August 2024, with obligations phasing in over time; most of its provisions will apply from August 2026.

High-Risk AI-Powered Systems: Key Requirements

Many AI-based systems used in clinical trials today will likely be classified as “high risk” under the AI Act, including drug discovery software, study feasibility solutions, patient recruitment tools, and others.

Here are some key requirements for “high-risk” AI systems as they relate to clinical trials (this is not an exhaustive list; reference the AI Act for complete details):

  1. Risk management: Establish and maintain a risk management system across the AI system’s lifecycle.
  2. Data governance: Ensure training, validation, and testing datasets are relevant, representative, and as free of errors as possible.
  3. Technical documentation and record-keeping: Maintain documentation and automatically generated logs sufficient to demonstrate compliance.
  4. Transparency: Provide users with clear information on the system’s capabilities, limitations, and intended purpose.
  5. Human oversight: Design the system so humans can effectively supervise, intervene in, or override its outputs.
  6. Accuracy, robustness, and cybersecurity: Achieve and maintain appropriate levels of each throughout the lifecycle.
  7. Registration: Register the system in the EU’s database for high-risk AI systems.

Potential Impact on Clinical Research

Software vendors, sponsors, CROs, and clinical sites are all increasingly using AI components in their processes, programs, and systems. Here are the three key areas in clinical research the AI Act might impact:

Medical Image and Medical History Analysis

One of the most transformative AI applications in clinical research is in medical image and history analysis. AI algorithms can process vast amounts of imaging and medical chart history data to detect anomalies, identify disease markers, and assist in diagnosis and endpoint identification with remarkable accuracy and speed.

Medical image and history analysis systems likely fall under the AI Act’s high-risk category, due to their significant potential impact on health and safety in clinical care delivery. This categorization also considers AI’s impact on endpoint adjudication analysis, which ultimately drives drug and device regulatory approval determinations.
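
As a rough sketch of what human oversight might look like in such a pipeline, the hypothetical example below routes low-confidence AI findings to a human reader. The Finding structure, the 0.90 threshold, and the routing labels are all invented for illustration; the Act does not prescribe any specific mechanism.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One AI-generated finding on a medical image (hypothetical)."""
    image_id: str
    label: str         # e.g., a suspected anomaly
    confidence: float  # model confidence in [0, 1]

# Hypothetical threshold below which a human reader must adjudicate.
REVIEW_THRESHOLD = 0.90

def route_finding(finding: Finding) -> str:
    """Send low-confidence findings to mandatory human review;
    high-confidence findings still require human sign-off."""
    if finding.confidence >= REVIEW_THRESHOLD:
        return "report-with-human-signoff"
    return "mandatory-human-review"

for f in [Finding("img-001", "lesion", 0.97),
          Finding("img-002", "lesion", 0.62)]:
    print(f.image_id, "->", route_finding(f))
```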

Synthetic Control Arms

The use of AI-powered software to generate data for synthetic control arms in clinical trials is another likely “high risk” area poised for significant impact. Synthetic control arms use historical clinical trial data and real-world evidence to simulate a control group, reducing the need for placebo groups and accelerating the trial process.

Regulatory agencies are pushing for the use of real-world evidence (RWE) to accelerate approvals and reduce clinical trial cost and complexity. What happens, though, when AI technology ingests large datasets of real-world data (RWD) and extrapolates what a hypothetical control arm of hypothetical patients would look like, given those massive aggregated datasets (i.e., a synthetic control arm)?

While the synthetic control arm described above is based on real data, the challenge lies in whether the AI’s assumptions can be trusted. Regulators must consider how to verify the data’s provenance, what the AI determined and assumed in generating the control data, and the implications those assumptions have for the end result – drug or device approval.
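
One way to make those assumptions auditable is to record provenance alongside every synthetic record. The toy sketch below matches a hypothetical trial participant to the closest historical patient on two baseline covariates and logs what was assumed; it is a deliberately simplified stand-in for validated synthetic-control methods, and every name and field in it is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class HistoricalPatient:
    patient_id: str
    source_dataset: str    # provenance: where this record came from
    age: float
    baseline_score: float

@dataclass
class SyntheticControlRecord:
    matched_to: str                 # enrolled trial participant
    control: HistoricalPatient
    assumptions: list = field(default_factory=list)  # audit trail

def match_control(trial_id: str, age: float, score: float,
                  pool: list) -> SyntheticControlRecord:
    """Pick the historical patient closest on baseline covariates
    and record the matching assumptions for later audit."""
    best = min(pool, key=lambda p: abs(p.age - age) + abs(p.baseline_score - score))
    return SyntheticControlRecord(
        matched_to=trial_id,
        control=best,
        assumptions=["nearest neighbor on age + baseline score",
                     f"provenance: {best.source_dataset}"],
    )

pool = [HistoricalPatient("H-01", "registry-A", 54, 2.1),
        HistoricalPatient("H-02", "trial-XYZ-2019", 61, 3.4)]
record = match_control("T-101", 56, 2.3, pool)
print(record.matched_to, record.control.patient_id, record.assumptions)
```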

Patient Identification

AI is also revolutionizing patient identification for clinical trials, a challenging process crucial for research success. AI algorithms can analyze vast datasets, including electronic health records (EHRs) and genomic data, to identify suitable candidates for clinical trials with greater precision and efficiency. This is particularly valuable for the growing number of biomarker-driven trials, where narrow eligibility criteria make suitable participants harder to find and require more data collection before and during the study.
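
As a simplified illustration, the sketch below screens structured EHR records against hypothetical eligibility rules. Production systems typically layer machine learning and natural language processing over unstructured notes as well; the field names, criteria, and diagnosis code here are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class EHRRecord:
    """A simplified structured EHR record (hypothetical fields)."""
    patient_id: str
    age: int
    diagnosis_codes: set
    biomarker_positive: bool

# Hypothetical criteria for a narrow, biomarker-driven protocol.
MIN_AGE, MAX_AGE = 18, 75
REQUIRED_CODE = "C34.1"  # illustrative ICD-10 code

def is_candidate(rec: EHRRecord) -> bool:
    """Apply the eligibility rules to one record."""
    return (MIN_AGE <= rec.age <= MAX_AGE
            and REQUIRED_CODE in rec.diagnosis_codes
            and rec.biomarker_positive)

records = [EHRRecord("P-001", 63, {"C34.1"}, True),
           EHRRecord("P-002", 47, {"I10"}, True)]
print([r.patient_id for r in records if is_candidate(r)])  # ['P-001']
```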

Under the EU AI Act, patient identification systems are likely considered high-risk due to their potential impact on patient health and privacy.

Impact of AI Act on Companies Outside the EU

Similar to the EU General Data Protection Regulation (GDPR), the AI Act’s reach extends beyond the EU’s borders. It has potentially significant implications for any company doing business within the EU, particularly those marketing AI-driven clinical research products and services in the region.

Non-EU companies must comply with the AI Act if their AI systems are used in the EU market. For those non-EU based organizations conducting clinical trials, consider the following:

  1. Understand the regulatory landscape: Non-EU companies need to thoroughly understand the AI Act’s requirements and how they apply to their products, services, and operations. This includes staying informed about regulatory updates and any clarifying guidance issued by EU authorities.
  2. Establish an EU representative: Similar to GDPR, companies outside the EU may need to appoint an EU-based representative to ensure AI Act compliance and liaise with EU regulatory bodies.
  3. Adapt products and services for compliance: Non-EU companies must ensure their AI-enabled systems meet the Act’s standards for transparency, data governance, human oversight, and other requirements. This may require modifying existing offerings and potentially developing new ones specifically for the EU market.

How Clinical Trial Stakeholders Doing Business in the EU Should Prepare for AI Act Compliance

Sponsors, CROs, and others in the research industry should consider the following actions:

  1. Conduct an inventory and compliance assessment: List all current AI-enhanced or AI-supported systems and determine each system’s risk classification under the AI Act. Then, identify areas requiring upgrade or modification to meet the new regulatory requirements (see the sketch after this list).
  2. Implement data governance protocols: Establish or enhance data governance frameworks to ensure the quality, representativeness, and security of the data AI systems use – including processes for regular data audits and updates.
  3. Enhance transparency and explainability: Develop mechanisms to ensure AI systems are transparent and their decisions explainable, like user-friendly interfaces allowing healthcare professionals to understand and interpret AI outputs.
  4. Strengthen human oversight: Ensure AI systems are designed with robust human oversight mechanisms, such as training healthcare professionals and researchers to effectively supervise and validate AI decisions.
  5. Provide ethical and legal training: Train staff on the ethical and legal implications of using AI in clinical research to help ensure all team members understand their responsibilities under the AI Act.
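
As a starting point for step one, a team might capture each system in an inventory record like the hypothetical sketch below. The schema, system names, and gap labels are assumptions for illustration, not anything the AI Act mandates.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI system inventory (hypothetical schema)."""
    name: str
    vendor: str
    use_case: str
    risk_tier: str                             # per the Act's four tiers
    gaps: list = field(default_factory=list)   # requirements not yet met
    owner: str = "unassigned"                  # accountable for remediation

inventory = [
    AISystemRecord(
        name="ImageTriage",                    # hypothetical system
        vendor="Acme Imaging",
        use_case="endpoint adjudication support",
        risk_tier="high",
        gaps=["human-oversight SOP", "EU database registration"],
        owner="clinical systems team",
    ),
]

# Surface high-risk systems with open compliance gaps first.
for rec in inventory:
    if rec.risk_tier == "high" and rec.gaps:
        print(f"{rec.name}: {', '.join(rec.gaps)} ({rec.owner})")
```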

The European Parliament’s adoption of the AI Act represents a pivotal moment in AI technology regulation, particularly in high-stakes fields like clinical research.

It’s likely this is just the beginning of AI regulation; even companies not involved in EU business should still take notice and consider the Act’s impact, as it may foreshadow future domestic policies. The Act’s emphasis on transparency, data governance, and human oversight aims to ensure the safe and ethical use of AI, ultimately fostering greater trust and reliability in AI-driven clinical research.

A version of this article originally appeared in PharmaPhorum in July 2024.
