Is AI Software Compliant for Labs? What the Experts Say

It’s the latest topic that everyone seems to be obsessed with these days: AI. 

Specifically, generative AI built by companies like OpenAI (and, by association, Microsoft), Anthropic, and Google. 

Whether you believe AI will fundamentally change society from top to bottom or think the latest generative AI advancements are overhyped, we’re not here to offer an opinion in either direction. 

That is not the purpose of this post. Nor is it the question that has been on our minds, and on the minds of lab staff everywhere.

While the power of generative AI tools like ChatGPT and Claude is certainly impressive, it opens the door to questions like, “What happens to the data that these tools process?” or, “How are records and logs kept for the questions I ask these tools and the data I provide them?”

We won’t pretend to be the arbiters of compliance, but we did speak with a compliance expert recently about the applications (and risks) of AI in a lab setting. Read on to learn more and decide for yourself if AI tools are worth the risk in your lab.

Types of AI Tools Available to Labs

Ask 10 people what AI means to them, and you will likely get a handful of different answers. 

For now, we’ll refer to AI in the context of Large Language Models (LLMs) powering chat, information generation, and analysis tools. These technologies have a variety of use cases beyond chatbots; labs are now using them for:

  • Data mining
  • Data analysis
  • Data cleaning and formatting (sketched in code below)
  • Predictive analytics
  • Text generation
  • Summarizing information
  • Automating tasks

And that’s just the start. 
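To make one of these concrete, here’s a minimal sketch of the data cleaning and formatting use case. It uses the OpenAI Python client as one example; the model name, prompt, and sample readings are illustrative assumptions, not recommendations.

```python
# Minimal sketch: asking an LLM to normalize messy instrument output into CSV.
# Assumes the official OpenAI Python client; the model name and prompt are
# placeholders. Never send regulated or sensitive data to a third-party API
# without the appropriate agreements and controls in place.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_readings = """
Sample A-101 | pH 7.2 | temp 21,5 C
sample a-102   pH7.4  temp 22.1C
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Normalize these lab readings into CSV with columns: sample_id,ph,temp_c."},
        {"role": "user", "content": raw_readings},
    ],
    temperature=0,  # reduce variability for deterministic formatting tasks
)

print(response.choices[0].message.content)
```

The same pattern extends to summarization and text generation: the prompt defines the task, and the response comes back as text your downstream systems can consume (and, as we’ll argue below, verify).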

As these technologies evolve and introduce autonomous agents that can act on your behalf, the sky truly will be the limit for what they can do.

Risks Posed By AI Software for Labs

The risks posed by AI tools are not necessarily new; most of them already exist in your lab in some form. The real risk lies in how much you “trust” an LLM and relax your oversight. 

The two risks we see concerning labs are:

  • Safety: Managing the security of your data and other sensitive information.
  • Accuracy: Having confidence that the information you receive and the processes you employ are on the money. 

Let’s explore these a bit further.

Is Your Data Safe?

No conversation with an AI happens in a vacuum. Whether you ask ChatGPT for a recipe using the spare ingredients in your pantry, for recommendations on what to do in New York City, or to analyze a CSV file of test data, you must be aware of what’s at play. 

When you have a “conversation” with one of these AI tools or use their APIs to analyze data, you give them the keys to sensitive data, inside information, and other things you normally would not hand over so willingly to a stranger. 

While this risk is present with any cloud-based software application you use in your lab, the greater risk with AI-based tools is that their models can be trained on your inputs, potentially resulting in the leakage of sensitive data. OpenAI, Google, and the like are sophisticated enough to implement guardrails to keep certain data secure, but this remains one of the concerns slowing AI adoption in more regulated industries. Even the National Security Agency (NSA) has begun publishing best practices for using AI tools in all environments.
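What does a guardrail look like on your side of the fence? One common practice, sketched below as our own illustration (not something prescribed by the guidance above), is to redact identifiers before any text leaves your environment. The patterns here are hypothetical; a real lab would tailor them to its own identifier formats.

```python
import re

# Hypothetical patterns for this sketch; a real lab would tailor these to its
# own formats (patient IDs, batch numbers, operator names, and so on).
PATTERNS = {
    "sample_id": re.compile(r"\bS-\d{6}\b"),
    "email":     re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace sensitive identifiers with placeholders before the text
    is sent to any third-party AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

note = "Sample S-482913 failed QC; contact j.doe@example-lab.com."
print(redact(note))
# -> "Sample [SAMPLE_ID] failed QC; contact [EMAIL]."
```

Redaction alone won’t make a tool compliant, but it shrinks the blast radius if a conversation is ever retained or used for training.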

Is Your Data Accurate?

While LLMs are getting better and seem to demonstrate some capacity to reason, they are still far from perfect. 

When you receive a question from someone else, a complex series of processes kicks off to interpret the question and retrieve a response from your memory. LLMs work differently. They are trained on massive amounts of data (so much that it’s hard to fully comprehend) and respond based on probability: given your input, what is the most likely response? And for each word the model generates, what is the most probable word to follow?

Each response is a matter of statistics, and the odds are never 100%, which is why “hallucinations” can occur. In this case, hallucinations mean getting incomplete, incorrect, or flat-out made-up data from your AI tool. This is no rare phenomenon. In fact, there is recent evidence to suggest this problem is actually getting worse.
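To make the “matter of statistics” point concrete, here is a toy sketch of next-word prediction. The vocabulary and scores are invented; a real LLM scores tens of thousands of tokens using billions of learned parameters.

```python
import math
import random

# Toy illustration of next-token prediction. The "vocabulary" and raw scores
# (logits) are invented for this sketch.
vocabulary = ["7.2", "7.4", "9.9", "unknown"]
logits     = [2.5,   2.3,   0.1,   -1.0]  # model's raw preference scores

# Softmax turns raw scores into a probability distribution.
exps  = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

for token, p in zip(vocabulary, probs):
    print(f"{token:>8}: {p:.1%}")

# The model samples from this distribution, so a plausible-but-wrong token
# ("9.9") always has a nonzero chance of being produced.
print("chosen:", random.choices(vocabulary, weights=probs, k=1)[0])
```

Because the model samples from a probability distribution rather than looking up a verified fact, a plausible-but-wrong answer is always on the table.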

You’ll probably note that the risk of misinterpreting data is also present when human eyes review it, and you’d be correct. As with the security risks above, AI tools are not unique in this regard, but when you put too much trust in their output and security measures, you expose yourself to mistakes, compliance failures, and potential reputational harm.

What the Experts Say About AI and Lab Compliance

We had the pleasure of sitting down with Joe Franchetti, CEO of JAF Consulting, a compliance consulting company specializing in the pharmaceutical industry. 

As expected, AI came up quickly in the conversation, as Joe and the JAF team have worked to stay at the forefront of this fast-moving industry. Here’s what Joe had to say about the nature of AI and whether it’s compliant for labs to use these tools:

“But as you get into these AI systems, especially on the clinical side where you start moving into software as a medical device, where those clinical systems are now making clinical decisions for the healthcare practitioners and part of the clinical trial team when you get into executing clinical trials… the level of risk starts approaching a higher threshold for risk-tolerable activity.

“When these tools are used more to make clinical decisions, determine the quality of a product, or make some sort of predictive analysis, all of this is associated with risk. It’s not only about complying with regulatory expectations, but ensuring patient safety, whether in a non-clinical or clinical situation, during protocol execution, or a manufacturing activity. You need to understand and assess the risk.”

As illustrated above, these tools can speed up your workflows and shorten the time it takes to achieve solid insights, but “handing over the keys” without having effective measures to verify output opens you up to compliance risks. Software in any form can be a fantastic asset for labs – but it must be validated, and so should the results it provides, whether or not they are generated by AI.
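Verification doesn’t have to be exotic. Here is a minimal sketch of machine-checking AI output before it enters your records; the field names and acceptance ranges are invented for illustration, and a validated process would derive them from your SOPs and method specifications.

```python
import json

# Minimal sketch of verifying LLM-generated output before accepting it,
# rather than trusting it blindly. Field names and ranges are illustrative.
REQUIRED_FIELDS = {"sample_id": str, "ph": float}
PH_RANGE = (0.0, 14.0)

def validate_llm_record(raw: str) -> dict:
    """Parse an LLM-generated JSON record and reject anything malformed
    or physically impossible."""
    record = json.loads(raw)  # raises ValueError if the output isn't valid JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(record.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field}")
    if not PH_RANGE[0] <= record["ph"] <= PH_RANGE[1]:
        raise ValueError(f"pH {record['ph']} is outside the plausible range")
    return record

print(validate_llm_record('{"sample_id": "S-482913", "ph": 7.2}'))
```

Checks like these don’t replace validation in the regulatory sense, but they are the kind of documented, repeatable control an auditor expects to see around any tool that touches your data.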

Examples of AI in Action

Despite the risks we’ve presented, AI is making its way into regulated industries and labs. 

Just recently, the FDA announced a new AI tool created specifically to help FDA employees, from scientific reviewers to investigators, work more efficiently. And numerous LIMS platforms are boasting about the new generative AI features they’ve added. 

The future of lab work could very well look like this virtual lab, which combines human efforts with digital “AI scientists” to automate and speed up the path to discovery. Short of a fully autonomous lab, AI tools are already being used to:

  • Process large datasets
  • Automate notifications and reminders
  • Analyze data in real time to flag anomalies, significant patterns, or potential errors (see the sketch after this list)
  • Assist in experiment design by suggesting optimal conditions or variables
  • Generate hypotheses and simulate outcomes before physical trials begin
  • Interpret results across multiple experiments to identify broader trends or insights
  • Improve reproducibility by ensuring consistent procedures and reducing human bias

And more.
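Of the items above, anomaly flagging is the simplest to sketch. Below is a basic z-score check against historical baseline data; the readings are made up, and the 3-sigma threshold is a common rule of thumb used purely for illustration (a validated process would justify its own limits).

```python
from statistics import mean, stdev

# Basic z-score anomaly flag. Baseline statistics come from historical
# in-control data; the 3-sigma threshold is illustrative, not prescriptive.
def flag_anomalies(baseline: list[float], new_readings: list[float],
                   threshold: float = 3.0) -> list[int]:
    """Return indices of new readings more than `threshold` standard
    deviations from the historical baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, v in enumerate(new_readings)
            if abs(v - mu) / sigma > threshold]

history = [7.21, 7.19, 7.22, 7.20, 7.18, 7.21, 7.19, 7.23]  # made-up pH data
today   = [7.20, 9.80, 7.22]
print(flag_anomalies(history, today))  # -> [1]
```

Notice that the human decision hasn’t gone anywhere: the tool flags, and a person investigates.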

Is AI Compliant? It’s Not So Easy to Answer

AI tools are neither inherently compliant nor non-compliant. 

What matters is how you understand and assess the risk to the data in your lab. Human-led systems can be just as error-prone and fallible as AI systems; in many cases, they perform worse.

But neither scenario lets you off the hook when it comes to understanding and assessing the risk in your lab and validating your data, from inventory to samples to tests. 

That’s why we bundled together our best compliance resources for you to save. Click the button below to read through our top compliance guides to help you understand key regulations labs face and how to best prepare for them – whether or not you utilize AI tools.

FREE GUIDE

QBench Regulatory Compliance Checklist

While this checklist cannot guarantee your lab will be compliant, it will be a major help in getting organized as you prepare for an inspection.

Get the Checklist PDF
