Real-Time GenAI Dashboards for End-to-End Retail Supply Chain Optimization (Published)
The incorporation of Generative Artificial Intelligence (GenAI) into real-time dashboard systems is transforming how retail supply chains operate. This paper presents an in-depth study of GenAI-enabled dashboards that optimize the end-to-end supply chain by processing real-time data, providing predictive analytics, and enabling fast, intelligent visualization. Addressing the problems of stockouts, inefficient lead times, and checkout delays, the paper explores how streaming data from sources such as point-of-sale (POS) systems, IoT sensors, and inventory platforms can be analyzed in real time by powerful AI models to deliver actionable recommendations when they are needed. It also describes the architecture of these systems and evaluates their impact on supply chain visibility, adaptability, and customer experience. The paper identifies fundamental gaps in the current literature and practice, most notably the underutilization of GenAI in interactive, operational settings. Evidence suggests that combining explainable AI, automation, and user-centered design is critical to enabling faster decision-making, strategic alignment, and a competitive edge in the contemporary retail environment.
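To make the streaming-analysis idea concrete, the sketch below shows one minimal processing step a dashboard backend might perform: consuming a stream of simulated POS sale events and raising stockout alerts against a reorder threshold. The SKU names, quantities, and threshold are hypothetical illustrations, not values from the paper, and a production system would replace the in-memory loop with a streaming platform and feed alerts into the visualization layer.

```python
# Minimal sketch: turn a stream of simulated point-of-sale (POS) events
# into stockout alerts for a dashboard. All SKUs and thresholds are
# illustrative assumptions, not values from the paper.

REORDER_POINT = 5  # hypothetical per-SKU reorder threshold


def process_events(initial_stock, events):
    """Consume (sku, qty_sold) events; return final stock and any alerts."""
    stock = dict(initial_stock)
    alerts = []
    for sku, qty in events:
        stock[sku] = stock.get(sku, 0) - qty
        if stock[sku] <= REORDER_POINT:
            # In a real system this would be pushed to the dashboard layer.
            alerts.append((sku, stock[sku]))
    return stock, alerts


stock, alerts = process_events(
    {"SKU-1": 8, "SKU-2": 20},
    [("SKU-1", 2), ("SKU-2", 3), ("SKU-1", 2)],
)
```

Running the example leaves SKU-1 at 4 units, below the threshold, so a single alert is emitted for it while SKU-2 stays untouched.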
Keywords: AI dashboards, GenAI, real-time analytics, supply chain visibility, checkout automation, streaming
Robust detection of LLM-generated text through transfer learning with pre-trained Distilled BERT model (Published)
Detecting text generated by large language models (LLMs) is a growing challenge, as these models produce outputs nearly indistinguishable from human writing. This study explores multiple detection approaches, including a Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM) networks, a Transformer block, and a fine-tuned DistilBERT model. Leveraging DistilBERT’s contextual understanding, we train the model on diverse datasets containing authentic and synthetic texts, focusing on features such as sentence structure, token distribution, and semantic coherence. The fine-tuned DistilBERT outperforms the baseline models, achieving high accuracy and robustness across domains, with superior AUC scores and efficient computation times. By incorporating domain-specific training and adversarial techniques, the model adapts to sophisticated LLM outputs, improving detection precision. These findings underscore the efficacy of pretrained transformer models for ensuring authenticity in digital communication, with potential applications in mitigating misinformation, safeguarding academic integrity, and promoting ethical AI usage.
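As a shallow illustration of two of the feature families the abstract mentions (sentence structure and token distribution), the stdlib-only sketch below computes a type-token ratio and a mean sentence length for a text. This is a stand-in baseline for exposition only, not the fine-tuned DistilBERT model; the sample text and any downstream decision threshold are hypothetical.

```python
# Illustrative sketch of two shallow detection features: type-token ratio
# (token distribution) and mean sentence length (sentence structure).
# This is NOT the paper's DistilBERT model, only an expository baseline.
import re


def text_features(text):
    """Return (type_token_ratio, mean_sentence_length) for a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"\w+", text.lower())
    ttr = len(set(tokens)) / len(tokens)       # vocabulary diversity
    avg_len = len(tokens) / len(sentences)     # tokens per sentence
    return ttr, avg_len


ttr, avg_len = text_features("The cat sat. The cat ran. The cat slept.")
```

A real detector would feed such features (or, as in the paper, learned contextual embeddings) into a trained classifier rather than a hand-set rule.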
Keywords: classifier, GenAI, detection, fine-tuning, large language models, machine learning, natural language processing, pretraining