When you scan the GxP domains (good laboratory practice/GLP, good clinical practice/GCP, and good manufacturing practice/GMP) for examples of artificial intelligence/machine learning (AI/ML) in use, you quickly realize that the GCP space, specifically clinical trials, has been applying AI/ML to several use cases, making it an early adopter of technological advancements.
Beyond applying these technologies, GCP organizations, such as sponsors and contract research organizations (CROs), have also developed their own technology solutions. Examples include software as a medical device (SaMD); wearables that gather data during clinical trials; smartphone apps; cloud-based computing; and algorithms that analyze data using techniques such as video analysis, facial recognition, natural language processing (NLP), and sentiment analysis.
In doing so, they follow good software engineering practices, such as agile, iterative software development, as opposed to the legacy linear, waterfall, “big bang” approach.
A quick ChatGPT search of CROs that use AI/ML returned the following results:
The response continued, adding, “The application of AI and ML technologies in the CRO industry is continually evolving, and more organizations are incorporating these tools to improve efficiency, decision-making, and patient outcomes.”
The National Library of Medicine (NLM) has also weighed in on the role of machine learning in clinical research, saying, “Machine learning has the potential to contribute to clinical research through increasing power and efficiency of pre-trial basic/translational research and enhancing the planning, conduct, and analysis of clinical trials.”
Life sciences organizations often invest 20% or more of their revenue in research and development, which amounts to $200 million for a company that generates $1 billion ($1B) in revenue annually. For obvious reasons, there’s a lot of interest in keeping the R&D pipeline full of promising products.
Why is so much invested in R&D? The answer is simple: an organization’s longevity is based on its product offering(s). A healthy R&D pipeline with promising product candidates represents the potential for billion-dollar blockbusters. Investing millions today to yield billions tomorrow creates an insatiable desire to invest.
There’s more behind the decision to adopt technological advancements than simply covering trial protocols and participant interactions, from pre-screening and screening through enrollment, protocol execution, and study closure.
What’s required is a nimble, agile, iterative software development methodology to deliver software solutions that address real-world problems quickly. In this regard, technology is used to develop and deliver more advanced technologies.
For a product to become a viable reality that offers comfort and possibly a cure, it is necessary to perform early research and clinical trials to provide evidence that a drug, biologic, or device product is safe and effective and that the systems used to manufacture that product perform as intended (i.e., validation).
Let’s talk more about validation. Collectively, we (the life sciences industry) and regulators created the laws (rules) we must abide by. One such requirement is validation, which is stipulated in 21 CFR Part 11:
11.10(a) Validation of systems to ensure accuracy, reliability, consistent intended performance, and the ability to discern invalid or altered records.
Unfortunately, despite regulators stressing the importance of applying a risk-based, least burdensome approach to validation, companies still struggle.
Validation generally consumes 20% or more of a project's resources. As such, a million-dollar project will spend 20% ($200K) on validation. This is true for all types of validation (equipment and instruments, computer systems, cleaning, process, and analytical method validation).
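To make the math concrete, here’s a quick back-of-the-envelope sketch. The million-dollar budget is the hypothetical example above, and the 20% share is the rule of thumb, not a fixed requirement:

```python
# Back-of-the-envelope validation cost, using the 20% rule of thumb above.
# The project budget is the hypothetical example from the text.
project_budget = 1_000_000   # total project budget, in dollars
validation_share = 0.20      # typical fraction consumed by validation

validation_cost = project_budget * validation_share
print(f"Estimated validation cost: ${validation_cost:,.0f}")  # -> $200,000
```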
Given the expense and time it takes to validate – weeks if not months – one can quickly see why validation meets so much resistance. But what if we could leverage technology to make validation painless and efficient? What if we could validate in minutes rather than months? If this were possible (hint: it is), then the argument against validation becomes moot.
There are many myths and preconceived notions when it comes to validation.
You must do x, y, and z.
Everyone does it that way.
It’s generally accepted industry standard practice.
You’ll get a 483 if you don’t …
QA is forcing us to do it; otherwise, they won’t sign off.
When you hear these statements, revisit the regulation:
11.10(a) Validation of systems to ensure accuracy, reliability, consistent intended performance, and the ability to discern invalid or altered records.
As you can see, the regulation (i.e., the law) is vague and not very prescriptive. It’s up to life sciences professionals (mainly QA) to interpret the regulation and develop a quality management system (QMS) with policies and procedures to ensure compliance.
There may be instances where QA can't sign off because of a requirement in a corporate QMS or a standard operating procedure (SOP), but is this really what the law states? Or is the organization's interpretation of the law as stipulated in its SOPs creating challenges? It's worth looking into this aspect to see if improvements and efficiencies can be gained without sacrificing quality and compliance.
You’ve undoubtedly heard of FDA’s computer software assurance (CSA) initiative. This is another attempt by the Agency to reinforce a risk-based, least burdensome approach to validation. CSA also encourages adopting technological advancements such as AI/ML in software development and testing.
Here’s how it all works together.
Assemble a team of qualified professionals.
Approach the project (problem statement) from a critical thinking perspective, eliminating bias, including all individual perspectives, thinking outside the box, and removing preconceived notions.
Identify the risk associated with the system or features/functions of the system. The risk for a feature/function may be high, whereas the overall average risk of a system may not be.
Determine assurance needs based on the system and feature/function risk (see the sketch after this list).
Develop the product/solution according to procedures (SOPs) and test based on risk.
Finalize the validation phase of the system development lifecycle (SDLC) and release the system to production if all is acceptable.
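To illustrate the risk-based step, here is a minimal sketch that maps each feature’s risk tier to assurance activities. The tiers, features, and activities are hypothetical; a real QMS would define its own risk model and required evidence:

```python
# Minimal sketch: map each feature's risk tier to assurance activities.
# The tiers, features, and activities below are hypothetical illustrations.
ASSURANCE_BY_RISK = {
    "high":   ["scripted testing", "independent review", "traceability to requirements"],
    "medium": ["unscripted exploratory testing", "peer review"],
    "low":    ["ad hoc testing", "record that the test was performed"],
}

features = {
    "dose calculation":  "high",    # direct impact on patient safety
    "report formatting": "medium",
    "UI color theme":    "low",
}

for feature, risk in features.items():
    activities = ", ".join(ASSURANCE_BY_RISK[risk])
    print(f"{feature} ({risk} risk): {activities}")
```

Note how the high-risk feature draws the most rigorous assurance activities, while the low-risk feature gets the least burdensome treatment – exactly the proportionality CSA encourages.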
In addition to CSA, advances in automated testing, AI, and ML have exponentially improved our ability to validate systems. In the past, automated testing required significant upfront effort and more time than brute-force manual testing. This made it a non-starter for many systems, especially if the system was stable and did not encounter changes that would require revalidation or regression testing.
Artificial intelligence/machine learning makes automated testing more intelligent, requiring less upfront planning, development, and scripting of the automated test solution. In fact, technology is advancing to the point where it can increasingly test itself, in many instances better than humans have been able to test it.
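To ground this, here is a minimal sketch of the kind of scripted, repeatable check an automated suite runs in seconds. The function, values, and tolerance are hypothetical examples, not part of any regulation or tool:

```python
# Minimal sketch of an automated, repeatable test (pytest style).
# dilution_factor() and its expected behavior are hypothetical examples.
import pytest

def dilution_factor(stock_conc: float, target_conc: float) -> float:
    """Return the dilution factor needed to reach a target concentration."""
    if stock_conc <= 0 or target_conc <= 0:
        raise ValueError("concentrations must be positive")
    return stock_conc / target_conc

def test_dilution_factor_nominal():
    # Verifies the expected result for a known input (accuracy/reliability).
    assert dilution_factor(100.0, 10.0) == pytest.approx(10.0)

def test_dilution_factor_rejects_invalid_input():
    # Verifies the system discerns invalid input rather than computing on it.
    with pytest.raises(ValueError):
        dilution_factor(100.0, 0.0)
```

Once written, checks like these cost essentially nothing to re-run, which is what makes per-change regression testing practical.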
When you apply CSA as your validation approach and adopt an iterative development methodology, such as Agile, in your SDLC, you position yourself to take full advantage of Validation 4.0 and Pharma 4.0. This is especially true when you deploy technologies to digitize the entire SDLC process.
With digitalization, data is captured and recorded. As the process progresses, more data is gathered, updated, refined, and clarified. Issue tickets can become feature enhancements. Requirements specify what needs to be fixed, removed, or added to a system. Systems are developed according to requirements, then tested and validated to demonstrate that they perform as intended.
When this is performed with automated testing, test results are recorded live, in real time. Failures can be flagged and resolved. When all issues are resolved or considered acceptable, the system can be considered validated and used in a regulated environment.
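As a minimal sketch of what “recorded live” can look like, here is one way to write each test outcome as a timestamped record the moment it completes. The file name and record fields are illustrative, not a standard:

```python
# Minimal sketch: record each test outcome as a timestamped JSON line,
# so results are captured in real time and failures can be flagged.
# The file name and record fields are illustrative, not a standard.
import json
from datetime import datetime, timezone

def record_result(test_id: str, passed: bool,
                  log_path: str = "validation_log.jsonl") -> None:
    record = {
        "test_id": test_id,
        "result": "PASS" if passed else "FAIL",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

record_result("TC-001-login", passed=True)
record_result("TC-002-audit-trail", passed=False)  # flagged for resolution
```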
When the proper systems are in place, and data of good integrity is captured, recorded, and shared with other systems, efficiencies can be gained. Accuracy also increases because human error is either reduced or eliminated. Supervisors and managers can stay informed in real time. And because validation is digitized and automated, the execution of test scripts occurs at the speed of electricity (or the speed of light where fiber optics are installed), exponentially faster than manual, paper-based validation performed by humans.
If your technology team(s) uses an Agile SDLC methodology, you can realistically validate every sprint iteration programmatically, especially when validation is fully automated and can be executed in minutes. If an issue slips through the cracks, you can update the automated test scripts to prevent recurrence.
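As one way to wire this into a sprint cadence, here is a minimal sketch of a release gate that runs the automated suite and blocks release on any failure. The test command and directory are assumptions; in practice this would live in your CI/CD tool alongside the audit evidence it produces:

```python
# Minimal sketch of a per-sprint validation gate: run the automated suite
# and block release on any failure. The "tests/" directory is an assumption.
import subprocess
import sys

def run_validation_suite() -> int:
    """Run the automated test suite; return its exit code (0 = all passed)."""
    result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])
    return result.returncode

if __name__ == "__main__":
    code = run_validation_suite()
    if code == 0:
        print("Validation suite passed: iteration may be released.")
    else:
        print("Validation suite failed: resolve issues before release.")
    sys.exit(code)
```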
This post focuses on GCP because, as evidenced by the use of AI/ML in clinical trials, GCP is open to exploring and adopting advanced technologies. However, you can apply the concepts mentioned above across the GxP spectrum.
Digitization, automation, and the adoption of advanced technologies do, of course, occur in the manufacturing and production of approved products. Traditionally, GMP, and the life sciences in general, move slowly and cautiously to avoid jeopardizing patient safety. However, the pace is often too slow, inadvertently creating risk by delaying a product or forgoing more advanced technology.
Adopting and deploying new technology to automate your validation processes can save you time and money. This is true for every area in life sciences, from laboratory to clinical to manufacturing. Contact us to learn more.