Cite this article: ZHANG Zhiqi. Construction of an artificial intelligence application model for chronic kidney disease based on large language models combined with RAG technology[J]. Chin J Mod Appl Pharm (中国现代应用药学), 2025, 42(17): 100-106.
Construction of an artificial intelligence application model for chronic kidney disease based on large language models combined with RAG technology
ZHANG Zhiqi
Pharmacy Department, The First Affiliated Hospital of Soochow University
Abstract:
OBJECTIVE To construct a medication education knowledge base and a multimodal medication guidance system for patients with chronic kidney disease (CKD) by combining large language models (LLM) with retrieval-augmented generation (RAG), in order to improve patients' medication safety and adherence and to provide support for healthcare professionals. METHODS The model was built through data collection and preprocessing, model construction and training, technology integration, knowledge base construction and maintenance, and system evaluation and optimization. Thirty CKD-related questions were designed, and three Chinese large language models, Kimi, iFlytek Spark and Zhipu, were compared. Ten nephrology clinical pharmacists scored the answers on five dimensions (accuracy, completeness, relevance, logic and professionalism), focusing on clinical logical consistency, completeness of evidence tracing and accuracy of contraindication identification in CKD scenarios. Each pharmacist scored the answers under three processing conditions (base model, with prompts, with knowledge base), yielding 30 score sheets in total. In addition, time-investment data from five software development companies across four stages (requirement analysis, rule design, system training and testing, deployment and optimization) were collected to compare the time consumed by the traditional development mode and the LLM+RAG mode. Two-way and one-way ANOVA were used to evaluate differences in model scores, and paired t-tests were used to analyze differences in development time (P<0.05 was considered significant). RESULTS The interaction between processing condition and model was significant (P<0.001). After adding prompts, the Kimi model scored significantly higher than the iFlytek Spark and Zhipu models; after adding the knowledge base, Kimi scored highest, with no significant difference from Zhipu but significantly higher than iFlytek Spark; among the base models, Kimi also scored highest. Within the same model, Kimi's score with the knowledge base was significantly higher than with prompts but did not differ from the base model, whereas the scores of iFlytek Spark and Zhipu both improved significantly after adding the knowledge base. The LLM+RAG mode significantly shortened development time compared with the traditional mode (P=0.017), with an 80% efficiency gain in the rule design stage, an average saving of 2.125 weeks per stage, and an overall efficiency improvement of 45.9%. CONCLUSION Combining LLM with RAG technology can significantly improve development efficiency and shorten the development cycle, and optimizing prompts and the knowledge base can maximize model performance. Different models can be selected according to cost and speed requirements. This study verifies the application potential of LLM+RAG in the medical field, although knowledge base coverage, model generalization and long-term maintenance still need to be optimized. Future work will expand the knowledge base and improve the level of intelligence to provide more accurate medical assistance tools.
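To make the retrieval-augmented flow described above concrete, the sketch below shows one minimal, self-contained way such a pipeline could be wired together. It is a hypothetical illustration, not the authors' implementation: the knowledge passages, the bag-of-words retriever and the placeholder call_llm() stand in for the study's curated CKD knowledge base, its retrieval component and the API calls to Kimi, iFlytek Spark or Zhipu.

# Minimal RAG sketch for CKD medication education (hypothetical illustration only;
# the passages, retriever and call_llm() are assumptions, not the study's system).
from collections import Counter
import math

KNOWLEDGE_BASE = [
    "NSAIDs such as ibuprofen should generally be avoided in CKD because they can reduce renal perfusion.",
    "Metformin dosing should be reviewed when eGFR falls below 45 mL/min/1.73 m2 and stopped below 30.",
    "ACE inhibitors and ARBs are commonly used in CKD but require monitoring of potassium and creatinine.",
]

def _bow(text: str) -> Counter:
    """Tokenize into a lowercase bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k knowledge-base passages most similar to the question."""
    q = _bow(question)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda p: cosine(q, _bow(p)), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Assemble the prompt: role instruction + retrieved evidence + patient question."""
    evidence = "\n".join(f"- {p}" for p in retrieve(question))
    return (
        "You are a clinical pharmacist advising a patient with chronic kidney disease.\n"
        f"Reference passages:\n{evidence}\n"
        f"Question: {question}\n"
        "Answer using only the reference passages and flag any contraindications."
    )

def call_llm(prompt: str) -> str:
    """Placeholder for an API call to Kimi, iFlytek Spark or Zhipu."""
    return f"[LLM response to prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    print(call_llm(build_prompt("Can I take ibuprofen for knee pain if I have stage 4 CKD?")))

In the study's setting, the retriever would search the curated CKD medication education knowledge base, and the role instruction in build_prompt() corresponds to the "with prompts" condition scored by the pharmacists.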
Key words:  chronic kidney disease  medication education  knowledge base  large language model  retrieval-augmented generation  personalized medicine
DOI:
CLC number:
Funding: 2024 Jiangsu Pharmaceutical Association "药"研新声 Pharmaceutical Research Project; 2023 Clinical Pharmacy Special Fund Research Project of the Jiangsu Geriatrics Society (江苏省老年医学会)
Construction of an artificial intelligence application model for chronic kidney disease based on large language models combined with RAG technology
ZHANG Zhiqi
Pharmacy Department, The First Affiliated Hospital of Soochow University
Abstract:
OBJECTIVE To construct a medication education knowledge base and a multimodal medication guidance system for patients with chronic kidney disease (CKD) by integrating the context-understanding capability of large language models (LLM) with the dynamic knowledge retrieval mechanism of retrieval-augmented generation (RAG), so as to improve patients' medication safety and adherence and to support healthcare professionals. METHODS The model was built through data collection and preprocessing, model construction and training, technology integration, knowledge base construction and maintenance, and system evaluation and optimization. Thirty CKD-related questions were designed, and three Chinese large language models (Kimi, iFlytek Spark and Zhipu) were compared. Ten nephrology clinical pharmacists scored the answers on five dimensions (accuracy, completeness, relevance, logic and professionalism), focusing on clinical logical consistency, completeness of evidence tracing and accuracy of contraindication identification in CKD scenarios. Each pharmacist scored the answers under three processing conditions (base model, with prompts, with knowledge base), yielding 30 score sheets. Time-investment data from five software development companies across four stages (requirement analysis, rule design, system training and testing, deployment and optimization) were collected to compare the traditional development mode with the LLM+RAG mode. Two-way and one-way ANOVA were used to assess score differences, and paired t-tests were used to analyze development-time differences (P<0.05 was considered significant). RESULTS The interaction between processing condition and model was significant (P<0.001). After adding prompts, the Kimi model scored significantly higher than the iFlytek Spark and Zhipu models; after adding the knowledge base, Kimi scored highest, with no significant difference from Zhipu but significantly higher than iFlytek Spark; among the base models, Kimi also scored highest. Within the same model, Kimi's score with the knowledge base was significantly higher than with prompts but did not differ from the base model, whereas the scores of iFlytek Spark and Zhipu both improved significantly after adding the knowledge base. The LLM+RAG mode significantly shortened development time compared with the traditional mode (P=0.017), with an 80% efficiency gain in the rule design stage, an average saving of 2.125 weeks per stage, and an overall efficiency improvement of 45.9%. CONCLUSION Combining LLM with RAG technology can significantly improve development efficiency and shorten the development cycle, and optimizing prompts and the knowledge base can maximize model performance. Different models can be selected according to cost and speed requirements. This study verifies the application potential of LLM+RAG in the medical field, although knowledge base coverage, model generalization and long-term maintenance still need to be optimized. Future work will expand the knowledge base and improve the level of intelligence to provide more accurate medical assistance tools.
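As an illustration of the evaluation design (a two-way ANOVA on pharmacist scores across model and processing condition, and a paired t-test on per-stage development times), the following Python sketch shows how the corresponding tests could be run. The data frames are randomly generated placeholders rather than the study's data, and all column and variable names are assumptions.

# Sketch of the statistical comparisons described in the abstract. Placeholder data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

# Synthetic scores: 10 pharmacists x 3 models x 3 processing conditions (90 rows).
scores = pd.DataFrame(
    [
        {"model": m, "condition": c, "score": rng.normal(80, 5)}
        for m in ["Kimi", "Spark", "Zhipu"]
        for c in ["base", "prompt", "knowledge_base"]
        for _ in range(10)
    ]
)

# Two-way ANOVA with interaction: does the effect of the processing condition differ by model?
fit = ols("score ~ C(model) * C(condition)", data=scores).fit()
print(sm.stats.anova_lm(fit, typ=2))

# Paired t-test: per-stage development time (weeks), traditional mode vs. LLM+RAG mode,
# paired across the four development stages. Durations below are placeholders.
traditional = np.array([4.0, 5.0, 6.0, 3.0])
llm_rag = np.array([2.5, 1.0, 4.0, 2.0])
print(ttest_rel(traditional, llm_rag))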
Key words:  chronic kidney disease  medication education  knowledge base  large language model  retrieval-augmented generation  personalized medicine