How AI Regulations in Drug Development Are Evolving: What Pharma Lawyers Need to Know

Artificial intelligence (AI) is no longer an experimental addition to the pharmaceutical and life sciences industries; it is a fundamental driver of innovation throughout the entire drug development lifecycle. From molecule discovery to patient recruitment, AI is revolutionizing the process of bringing therapies to market.


But as the technology outpaces conventional frameworks, regulators worldwide are racing to clarify the rules. Understanding these changes is now imperative for pharmaceutical legal teams. This blog outlines how global AI regulations are evolving and what pharmaceutical lawyers need to know to help their organizations navigate this complex, rapidly changing landscape.

 


AI's Expanding Role in Pharma


Deloitte's Intelligent Drug Discovery white paper indicates that over 50% of leading pharmaceutical companies are already using AI to optimize early-stage discovery. By screening millions of compounds, machine learning algorithms can identify promising candidates in weeks rather than years.


In 2020, Exscientia, a British biotech company, announced that its AI-designed molecule had entered human clinical trials, a world first at the time. The candidate, DSP-1181 for obsessive-compulsive disorder, was developed in collaboration with Sumitomo Dainippon Pharma. The milestone underscored both the potential of AI to identify drug candidates and the regulatory uncertainty it creates: how should agencies assess drugs designed by black-box algorithms?

 


Regulatory Milestones So Far


FDA: Action Plan for AI/ML in Medical Devices


The U.S. FDA has been a leader in regulating AI when it is used as a medical device or influences clinical decisions. In 2021, the agency issued its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, which outlines how it will oversee adaptive AI algorithms that evolve over time.


This is directly relevant to the pharmaceutical industry, since SaMD covers AI-powered tools for clinical decision support, diagnostics, and digital therapeutics. The Action Plan stresses Good Machine Learning Practice (GMLP) and explicit change-control protocols for algorithms that retrain themselves after deployment.

 

 

EMA: Reflection Paper on AI


Across the Atlantic, the European Medicines Agency (EMA) released its Reflection Paper on the Use of Artificial Intelligence (AI) in the Medicinal Product Lifecycle in 2023. It encourages developers to implement robust, transparent AI governance, with a focus on human oversight, algorithm transparency, and data quality.


For instance, the EMA expects sponsors to justify how AI system outputs are validated and to prepare for audits that regulators may conduct. This has significant implications for how legal teams draft vendor contracts and ensure that the AI tools used in R&D meet regulatory requirements.

 

The EU AI Act: A Game Changer


The most significant change may be the EU AI Act, the world's first comprehensive AI law, finalized in 2024. The Act categorizes AI systems by risk level, and high-risk systems, including those used in clinical trials, drug safety monitoring, or medical device decision-making, are subject to stringent requirements:

  • Effective risk management systems
  • Traceability and transparent design
  • Human supervision and accountability
  • Conformity assessments before deployment



The Act also intersects with the GDPR when patient data is used to train AI. For pharmaceutical companies running global trials, this creates intricate cross-border compliance challenges, including cross-jurisdictional privacy controls and rules on secondary data use.

 


Key Challenges Lawyers Need to Watch


Validation & Explainability


A recurring theme in white papers, including McKinsey's AI in Life Sciences and the WHO's Ethics & Governance of AI for Health, is regulators' expectation that algorithms be explainable. However powerful black-box models may be, sponsors must be able to show how an AI reached its conclusion when safety and efficacy are at stake.


In practice, legal teams should ensure that AI vendor contracts include transparency obligations and that documentation is robust enough to withstand future audits.

 

 

Data Privacy & Use


AI thrives on data, which frequently includes sensitive patient health information. The EMA and WHO stress compliance with the GDPR, HIPAA, and local data protection laws around consent, anonymization, and secondary use.


Google DeepMind's collaboration with the UK's NHS was found by the UK data protection regulator to have breached data protection law when patient records were shared to develop an AI kidney-injury detection tool without a proper legal basis. The lesson for pharmaceutical companies: even well-intentioned AI initiatives can result in privacy violations if the legal safeguards are not sufficiently robust.


 

Liability & Risk


Another unresolved issue: who is responsible when an AI errs? Consider an adaptive clinical trial in which an AI platform dynamically reallocates patients to treatment arms. If a flaw distorts the results, is the sponsor, the vendor, or the developer accountable?


The FDA's SaMD Action Plan suggests accountability frameworks, but many practical questions remain. Pharma counsel should ensure that contracts with AI suppliers contain explicit allocations of responsibility, audit rights, and insurance obligations.

 

What's Next


Industry professionals anticipate further guidance in the next 12–24 months as regulators evaluate AI sandboxes. The UK's MHRA, for instance, has launched pilot programs to help innovative AI developers comply with SaMD regulations. Meanwhile, the International Council for Harmonisation (ICH) is exploring harmonized guidelines for AI and digital health in clinical trials, a first step towards global consistency.
