June 03, 2025
As generative artificial intelligence (AI) platforms like ChatGPT, Claude, and Gemini become increasingly integrated into laboratory workflows, the need for structured guidelines to direct their use is urgent. Without clear parameters, laboratories face risks related to quality control, data integrity, patient confidentiality, and legal liability. This article explores the current applications of generative AI in clinical laboratory settings, identifies potential pitfalls, and argues for the immediate development of a Standard Operating Procedure (SOP) to govern safe, ethical, and effective AI use in the lab.
Laboratories are traditionally cautious environments. Every test, every result, every procedure must adhere to strict standards. Yet in 2024, a disruptive force entered the scene: generative AI platforms.
Originally built to generate human-like text or support decision-making, tools like ChatGPT (OpenAI1), Claude (Anthropic2), Gemini (Google DeepMind3), Microsoft Copilot, and specialized healthcare models like Med-PaLM are now being used in laboratories for a variety of tasks. However, as adoption accelerates, many labs are implementing AI informally and with little oversight or formal guidelines.
This raises a critical question: Should laboratories have a Standard Operating Procedure (SOP) for AI tool usage?
I would argue that the answer is yes — and that creating an SOP is not only logical, but necessary for maintaining professional standards.
Without formal endorsement or regulation, laboratory personnel are already using general and healthcare-specific AI models in creative ways, including:
| Application | Description |
|---|---|
| Drafting SOPs and policies | Creating initial drafts of standard operating procedures and policies based on regulatory standards. |
| Summarizing regulatory updates | Condensing updates from sources like CLIA, CAP, and ISO 15189 into quick, digestible summaries for team review. |
| Generating competency questions and quizzes | Developing preliminary competency assessments and quizzes for staff evaluations. |
| Assisting in clinical correlation discussions | Providing preliminary ideas or frameworks for clinical correlation discussions. |
| Drafting educational materials | Preparing outlines and content for internal staff training programs and continuing education sessions. |
| Outlining research articles, case studies, or poster presentations | Assisting in structuring and drafting scientific and professional presentations for publications or conferences. |
| Suggesting code snippets for LIS automation | Providing preliminary code examples to help automate tasks within Laboratory Information Systems (LIS); see the first sketch after this table. |
| Supporting triage of quality control (QC) flag investigations | Analyzing patterns in QC logs to assist in the preliminary identification and categorization of issues; see the second sketch after this table. |
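To make the LIS automation row concrete, here is a minimal sketch of the kind of snippet an AI assistant might draft. The CSV layout, column names, file name, and reference range are all assumptions for illustration; any AI-drafted code would need validation by qualified staff before it touches a production LIS.

```python
import csv

# Hypothetical reference range for a single analyte; a real script would pull
# ranges from the LIS or a validated configuration, not hard-code them.
REFERENCE_RANGE = (3.5, 5.1)  # e.g., serum potassium, mmol/L

def flag_out_of_range(csv_path: str) -> list[dict]:
    """Read an LIS result export and return rows outside the reference range."""
    flagged = []
    with open(csv_path, newline="") as f:
        # Assumed columns: sample_id, analyte, result
        for row in csv.DictReader(f):
            try:
                value = float(row["result"])
            except ValueError:
                continue  # skip non-numeric results (e.g., "HEMOLYZED")
            low, high = REFERENCE_RANGE
            if not low <= value <= high:
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for row in flag_out_of_range("daily_results.csv"):  # hypothetical export file
        print(f"Review sample {row['sample_id']}: {row['analyte']} = {row['result']}")
```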
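Likewise, QC-flag triage can start from simple rule checks. The sketch below applies two standard Westgard rules (1-3s and 2-2s) to a series of control values; the example data, target, and SD are hypothetical, and real QC review would rely on the lab's validated QC software rather than an ad hoc script.

```python
def westgard_flags(values: list[float], target: float, sd: float) -> list[str]:
    """Apply two common Westgard rules (1-3s and 2-2s) to a QC value series."""
    z = [(v - target) / sd for v in values]  # z-scores relative to the QC target
    flags = []
    for i in range(len(z)):
        # 1-3s: a single control value more than 3 SD from the target
        if abs(z[i]) > 3:
            flags.append(f"run {i}: 1-3s violation (z = {z[i]:.2f})")
        # 2-2s: two consecutive values beyond 2 SD on the same side
        if i > 0 and min(z[i - 1], z[i]) > 2:
            flags.append(f"runs {i - 1}-{i}: 2-2s violation (both > +2 SD)")
        if i > 0 and max(z[i - 1], z[i]) < -2:
            flags.append(f"runs {i - 1}-{i}: 2-2s violation (both < -2 SD)")
    return flags

# Example: a glucose control with target 100 mg/dL and SD 3 mg/dL
print(westgard_flags([101, 99, 107, 107, 112], target=100.0, sd=3.0))
```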
While ChatGPT remains the most popular among general users, labs are beginning to explore Claude's longer context window, Gemini's integration with Google Workspace, Copilot's link with Microsoft Office, and Med-PaLM's emerging specialization in clinical information. These uses hint at a powerful new ally for overworked lab staff, but without oversight there is risk.
Improper use of AI introduces serious risks. Without a structured SOP, even seemingly harmless uses like drafting competency exams or updating SOPs can introduce hidden errors into clinical workflows.
| Risk Area | Description |
|---|---|
| Data Confidentiality | Inputting protected health information (PHI) into AI platforms can violate HIPAA,4 GDPR,5 or institutional compliance policies. |
| Accuracy and Hallucination | AI models occasionally “hallucinate,” fabricating convincing but false information. |
| Source Integrity | AI may generate answers without citing appropriate regulatory, scientific, or clinical references. |
| Bias | Biases present in the training data can shape AI outputs, leading to documents that may be noncompliant with regulations. |
| Accountability | While AI may assist in laboratory operations, any errors remain the legal and ethical responsibility of the laboratory and its staff. |
What an SOP for laboratory AI use should include
A well-designed SOP would not ban AI — it would help ensure its safe and evidence-based usage.
Here are essential elements the SOP should include:
1. Scope and Purpose
Define where AI platforms are allowed (e.g., document drafts, brainstorming) and where they are prohibited (e.g., clinical decision-making without human review).
2. Approved Use Cases
Specify acceptable uses: first drafts, summarization, literature search assistance, and QC data pattern recognition.
3. Prohibited Activities
This is a big one. Prohibit inputting identifiable patient data, sensitive business information, or proprietary research. (A simple pre-prompt screening sketch follows this list.)
4. Verification and Review Requirements
Mandate that all AI-generated content must be reviewed and verified by a licensed medical laboratory scientist (MLS), pathologist, supervisor, or designated reviewer.
5. Model Limitations and Disclaimers
Staff must understand model limitations: hallucinations, outdated knowledge cutoffs, lack of independent judgment.
6. Documentation and Transparency
Require documentation of when and how AI platforms were used, including the versions and models involved, within document histories or meeting minutes. (A sample audit record follows this list.)
7. Training and Competency
Provide annual training for staff on AI basics and governance.
Conduct competency assessments to ensure safe and appropriate AI use.
8. Compliance with Legal and Ethical Standards
Ensure the SOP links directly to HIPAA, CLIA, CAP, and other applicable standards.
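To support element 3 above, some labs may want a technical backstop in addition to policy. Below is a minimal, hypothetical pre-prompt screen; the identifier patterns are assumptions that would miss many forms of PHI, so such a check supplements training rather than replacing it.

```python
import re

# Illustrative patterns only; a real screen would be tuned to the lab's own
# identifier formats and should err on the side of blocking.
PHI_PATTERNS = {
    "possible MRN": re.compile(r"\b\d{7,10}\b"),           # assumed 7-10 digit MRN
    "possible date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def phi_warnings(prompt_text: str) -> list[str]:
    """Return warnings if a draft prompt appears to contain patient identifiers."""
    return [label for label, pattern in PHI_PATTERNS.items()
            if pattern.search(prompt_text)]

warnings = phi_warnings("Summarize QC trends for MRN 4482913 collected 05/12/2025")
if warnings:
    print("Do not submit; review for:", ", ".join(warnings))
```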
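For element 6, one lightweight option is a structured audit record appended to a running log. The fields and file name below are a hypothetical starting point, not a mandated schema; each lab would adapt them to its own document-control system.

```python
import json
from datetime import date

# Hypothetical audit-record fields for one instance of AI-assisted drafting.
ai_use_record = {
    "date": date.today().isoformat(),
    "document": "SOP-CHEM-014 draft v0.1",   # placeholder document ID
    "tool": "ChatGPT",
    "model_version": "GPT-4o",               # record the specific model used
    "purpose": "first draft of procedure text",
    "phi_submitted": False,
    "reviewed_by": "J. Doe, MLS (ASCP)",     # placeholder reviewer
    "review_date": None,                     # completed at sign-off
}

# Append as one JSON line so the log stays machine-readable and auditable.
with open("ai_use_log.jsonl", "a") as log:
    log.write(json.dumps(ai_use_record) + "\n")
```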
Implementing an SOP for AI use in the laboratory offers several significant benefits. First, it improves quality assurance by ensuring that AI-generated outputs meet established quality and compliance standards. Additionally, a structured approach to AI use reduces legal risk by minimizing the chances of breaches involving PHI or the creation of documentation errors that could impact patient care. An SOP also promotes staff empowerment by providing clear guidelines, enabling them to innovate confidently without uncertainty about legal or ethical boundaries. Moreover, laboratories that proactively establish AI guidelines demonstrate leadership, positioning themselves as forward-thinking and responsible technology adopters. Finally, having formal AI usage policies in place will facilitate accreditation reviews, as future inspections by organizations such as CAP,6 CLIA, and ISO7 are likely to include inquiries about AI governance practices.
Generative AI tools like ChatGPT, Claude, Gemini, and Med-PaLM are rapidly becoming embedded in laboratory practices. The decision facing laboratories is not whether to use AI, but how to govern its use responsibly.
Developing and implementing a Standard Operating Procedure for AI tools represents a critical step toward ensuring safe, ethical, and high-quality laboratory practice in the era of artificial intelligence.
Securing the future of our laboratories begins with responsible action today.
References
1. OpenAI. ChatGPT Model Card and Usage Guidelines. 2024. Available at: https://openai.com/
2. Anthropic. Claude: Constitutional AI for Safer Conversations. 2024. Available at: https://www.anthropic.com/
3. Google DeepMind. Introducing Gemini: A New Generation of AI Assistants. 2024. Available at: https://deepmind.google/technologies/gemini/
4. HIPAA Journal. Healthcare Data Breaches Increase by 35% in 2023. HIPAA Journal. 2024. Available at: https://www.hipaajournal.com/
5. European Union. General Data Protection Regulation (GDPR). Official Journal of the European Union. 2018. Available at: https://gdpr-info.eu/
6. College of American Pathologists. Laboratory Accreditation Manual. 2023. Available at: https://www.cap.org/laboratory-improvement/accreditation
7. International Organization for Standardization (ISO). ISO 15189:2022, Medical Laboratories: Requirements for Quality and Competence. Geneva: ISO; 2022.
Clinical Assistant Professor, University of Kansas Medical Center