Research progress on ethical issues and regulatory pathways of large language models in clinical applications

Abstract: Large language models (LLMs) are increasingly applied in the medical field, yet their clinical implementation faces numerous ethical and regulatory challenges. This paper reviews seven major ethical challenges: patient safety and accuracy, bias and fairness, privacy and data protection, transparency and explainability, accountability and legal liability, patient autonomy and informed consent, and the doctor-patient relationship and trust. At the regulatory level, international research indicates that the United States currently lacks regulations specific to the medical use of LLMs, though it is exploring the regulation of high-risk LLMs. The EU's AI Act classifies medical AI as high-risk and imposes stringent compliance requirements. China has issued measures for the administration of generative AI and advocates industry standards, though its legal framework remains incomplete. Proposed solutions include embedding ethical principles during model development, strengthening human-machine collaboration and human oversight in clinical settings, establishing clear legal standards for accountability, safeguarding data privacy and security, implementing continuous monitoring and improvement, and deepening international cooperation and multidisciplinary governance.

     
