European Journal of Computer Science and Information Technology (EJCSIT)

EA Journals

LLM-Powered Self-Auditing Framework for Healthcare Data Pipelines: Continuous Validation Lifecycle

Abstract

This article introduces novel prompting methodologies that enable Large Language Models (LLMs) to perform sophisticated semantic analysis of healthcare data pipelines, achieving high accuracy in detecting complex logical inconsistencies and clinical guideline violations. The proposed hierarchical prompting strategy, combined with chain-of-thought reasoning workflows and dynamic context injection, represents a substantial advance in applying LLMs to domain-specific technical auditing tasks. Compared with standard prompting approaches, the methodology achieved a 42% improvement in error-detection sensitivity, a 35% reduction in false-positive rates, and a 58% improvement in detecting violations of complex multi-condition clinical protocols. Implementation within a comprehensive self-auditing framework across diverse healthcare organizations demonstrates the methodology's effectiveness in detecting critical inconsistencies in EHR data transformation workflows, clinical dashboard calculations, and regulatory compliance verification.
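To make the abstract's terminology concrete, the sketch below shows one plausible way a hierarchical prompt with dynamic context injection and a chain-of-thought instruction could be assembled for auditing a pipeline artifact. All function and layer names are illustrative assumptions, not the authors' implementation; the actual LLM call is omitted.

```python
# Hypothetical sketch of the hierarchical prompting strategy named in the
# abstract. Layers are ordered from general (auditor role) to specific
# (the artifact under audit); the clinical guideline is injected per-audit,
# which is one reading of "dynamic context injection".

def build_audit_prompt(guideline: str, artifact: str) -> str:
    """Assemble a layered audit prompt for an LLM (illustrative only)."""
    layers = [
        # Layer 1: fixed system role for the auditing task.
        "ROLE: You are an auditor of healthcare data pipelines.",
        # Layer 2: clinical context injected dynamically for this audit.
        f"CLINICAL CONTEXT (injected): {guideline}",
        # Layer 3: chain-of-thought instruction guiding stepwise reasoning.
        ("REASONING: Think step by step. List each transformation in the "
         "artifact, check it against the clinical context, and flag any "
         "logical inconsistency or guideline violation."),
        # Layer 4: the pipeline artifact under audit (e.g., a SQL transform).
        f"ARTIFACT UNDER AUDIT:\n{artifact}",
    ]
    return "\n\n".join(layers)


prompt = build_audit_prompt(
    guideline="Adults with diabetes: HbA1c target below 7%.",
    artifact="SELECT patient_id FROM labs WHERE hba1c > 9  -- flags 'controlled'?",
)
print(prompt)
```

The ordering (role before context before artifact) reflects a common prompt-layering convention; the paper itself may structure its hierarchy differently.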

Keywords: automated auditing, clinical guidelines, data governance, healthcare data pipelines, large language models


This work by European American Journals is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.



Email ID: editor.ejcsit@ea-journals.org
Impact Factor: 7.80
Print ISSN: 2054-0957
Online ISSN: 2054-0965
DOI: https://doi.org/10.37745/ejcsit.2013

