This is the fourth post in a series of five summarizing a podcast series devoted to CSA. You can find the complete series here.
Testing is a fundamental and crucial part of computer system validation (CSV). Computer software assurance (CSA), the FDA's new methodology for performing CSV, won't change that.
The hope, however, is that CSA will streamline testing by encouraging us to rely less on scripted testing and pre-approved protocols and more on existing test/vendor records and automated tools in our testing efforts. In other words, CSA is designed to help us test smarter, not harder.
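What does testing smarter look like in practice? As a rough, hypothetical sketch (the dose-conversion function, its names, and the record format are all invented for illustration), here is an automated test written with Python's standard unittest module. The point is that the tool itself captures the pass/fail results and a timestamped record of the run, rather than a tester transcribing each step into a pre-approved protocol.

```python
# Hypothetical sketch: an automated test whose runner output doubles as the
# test record. The feature under test (a dose-conversion function) and all
# names are invented for illustration.
import datetime
import json
import unittest


def convert_dose_ml_to_units(dose_ml: float, concentration_units_per_ml: float) -> float:
    """Hypothetical function under test: convert a dose in mL to units."""
    if dose_ml < 0 or concentration_units_per_ml <= 0:
        raise ValueError("dose must be non-negative and concentration positive")
    return dose_ml * concentration_units_per_ml


class DoseConversionTests(unittest.TestCase):
    def test_typical_dose(self):
        self.assertAlmostEqual(convert_dose_ml_to_units(2.0, 100.0), 200.0)

    def test_negative_dose_rejected(self):
        with self.assertRaises(ValueError):
            convert_dose_ml_to_units(-1.0, 100.0)


if __name__ == "__main__":
    # Run the suite, then emit a simple machine-readable record of the run:
    # the tool, not the tester, documents what was executed and when.
    program = unittest.main(exit=False)
    record = {
        "executed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tests_run": program.result.testsRun,
        "failures": len(program.result.failures),
        "errors": len(program.result.errors),
    }
    print(json.dumps(record, indent=2))
```

Records like this, produced automatically on every run, are the kind of existing test evidence CSA encourages us to lean on instead of re-documenting by hand.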
The foundation of good software, meaning software that performs as intended, is good requirements. The same is true of testing: if you want to write good tests, you must start with good requirements. Inferior or incorrect requirements are often the root cause of testing problems.
User requirements define the intended use of the system. Functional requirements outline how the system must perform to meet those user requirements. Finally, design requirements specify the elements and components that must be built into the system to deliver the functionality that, in turn, satisfies the user requirements. Because these levels are interconnected, user requirements have a domino effect on the overall safety and efficacy of the system; whether that effect is positive or negative depends on the quality of the user requirements.
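To make that chain concrete, here is a minimal sketch in Python. The requirements, IDs, and wording are hypothetical, invented for a dose-entry feature; the point is only that each level traces back to the one above it, so a weak user requirement weakens everything downstream, including the tests.

```python
# Hypothetical sketch: the user -> functional -> design chain captured as
# data so the links can be traced. IDs and wording are invented.
from dataclasses import dataclass, field


@dataclass
class Requirement:
    req_id: str
    text: str
    satisfies: list[str] = field(default_factory=list)  # IDs of parent requirements


requirements = [
    Requirement("UR-01", "The clinician can record a patient dose in mL."),
    Requirement("FR-01", "The system converts mL to units using the configured concentration.",
                satisfies=["UR-01"]),
    Requirement("DR-01", "The dose field accepts 0.1 to 50.0 mL with one decimal place.",
                satisfies=["FR-01"]),
]

# Walking the chain shows the domino effect: ambiguity in UR-01 propagates
# into FR-01 and DR-01, and into every test written against them.
by_id = {r.req_id: r for r in requirements}
for r in requirements:
    assert all(parent in by_id for parent in r.satisfies), f"{r.req_id} traces to a missing requirement"
    parents = ", ".join(r.satisfies) or "(top level)"
    print(f"{r.req_id} satisfies {parents}: {r.text}")
```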
Gathering and writing good requirements is a core competency for anyone tasked with writing test steps and test scripts. At a minimum, a good user requirement is clear, unambiguous, and testable.
Once you are satisfied with your requirements, it's tempting to dive right into testing. But before you can move forward, you must consider risk and the least burdensome approach.
Testing, at its core, is really about risk. This idea is not new. The FDA promoted a risk-based approach in its final guidance document General Principles of Software Validation, published in 2002. A few years later, the International Society for Pharmaceutical Engineering (ISPE) introduced GAMP®5: A Risk-Based Approach to Compliant GxP Computerized Systems.
Unfortunately, over-testing and exhaustively documenting every step of the testing process (even for low-risk systems or features) "just to be sure" has become the standard approach to CSV testing. Maximizing testing efforts, regardless of risk, burdens resources and can even compromise patient safety. Testers often neglect—or entirely miss—high-risk issues when attempting to "test everything."
The CSA approach prioritizes risk-based critical thinking over excessive testing. It encourages a least burdensome approach to documentation and the use of less formal (and far less time-consuming) unscripted testing methods and automated technologies for digital validation.
Now that you're ready for testing, what approaches are at your disposal?
The purpose of CSA is not to eliminate CSV or scripted testing. Its purpose is to remind us to use critical thinking combined with a risk-based approach to determine which system features are crucial to product and patient safety and focus our most rigorous testing efforts there.
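One way to picture that decision is as a simple mapping from risk to rigor. The feature names, risk ratings, and suggested activities below are hypothetical, and a real program would define its own risk model; this is a sketch of the idea, not a prescribed scheme.

```python
# Hypothetical sketch: mapping feature risk to testing rigor. The risk model,
# feature names, and suggested activities are invented for illustration.

def select_test_approach(impacts_patient_safety: bool, failure_likelihood: str) -> str:
    """Suggest an assurance activity for a feature based on its risk."""
    if impacts_patient_safety and failure_likelihood == "high":
        return "scripted testing with documented, pre-approved protocols"
    if impacts_patient_safety:
        return "limited scripted testing plus unscripted (exploratory) testing"
    return "unscripted testing (ad-hoc, error-guessing), leveraging vendor records"


features = {
    "dose calculation": (True, "high"),
    "audit trail export": (True, "low"),
    "report color theme": (False, "low"),
}

for name, (safety_impact, likelihood) in features.items():
    print(f"{name}: {select_test_approach(safety_impact, likelihood)}")
```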
Note: This blog post summarizes the fourth episode of a 5-part podcast series devoted to CSA. Next time, I'll examine documentation, the fourth and final level of the CSA methodology.
Want to dive deeper into testing and CSA? Click the orange button below to listen to the full podcast.