Reference:
Atabekov A.R. Analysis of approaches to determining legal liability for the actions of artificial intelligence in the medical field: the experience of the United States and Russia // Legal Studies. 2023. № 6. P. 1-9. DOI: 10.25136/2409-7136.2023.6.40928. EDN: IJDDLB. URL: https://en.nbpublish.com/library_read_article.php?id=40928
Analysis of approaches to determining legal liability for the actions of artificial intelligence in the medical field: the experience of the United States and Russia.
Received: 30-05-2023
Published: 06-06-2023

Abstract: This article presents a comparative analysis of the existing approaches in the United States and Russia to determining liability for the actions of artificial intelligence in public medical relations. The comparison identifies the basic problems of transparency in AI decision-making, examines theoretical and practical scenarios for the use of AI in medicine, and proposes compensatory legal measures to ensure the safe integration of AI into the Russian healthcare sector. The subject of the study is the formalization of AI actions within the legal relationship between doctor and patient. The object of the study comprises the regulations, recommendations and other documents governing the use of AI in medical legal relations in Russia and the United States, as well as judicial practice, academic publications and analytical reports on the issues under study. The research methodology integrates modern philosophical, general scientific and special scientific methods of cognition, including the dialectical, systemic, structural-functional, hermeneutical, comparative legal and formal legal (dogmatic) methods. Special emphasis is placed on a comparative legal study of the autonomy of AI involved in the doctor-patient relationship, followed by the identification of potential scenarios for regulating liability for AI actions. The measures proposed as a result of the study can be applied in legislative activity and in its implementation by the authorities responsible for integrating AI into the sphere of public relations in Russia, including the healthcare sector.

Keywords: artificial intelligence, electronic person, comparative legal research of AI, medical law, telemedicine, secure AI, public law, administrative law, information law, law enforcement practice

This article was automatically translated.

The status and prospective role of artificial intelligence (hereinafter, AI) are being actively discussed on various international platforms (OECD[1], EU[2], UN[3]). Particular attention is paid to the integration of AI into the healthcare sector and the corresponding legal and public risks[4]; in this regard, it is proposed to study the US experience in more detail in order to identify legal structures acceptable for Russia.

Considering the regulation of AI in the US medical field, the following documents should be noted:

1. Executive Order No. 13859 of February 11, 2019, "Maintaining American Leadership in Artificial Intelligence"[5], which defines the directions of work on AI R&D, the practical application and refinement of AI technologies, and the creation of the necessary infrastructure, data sets, and technical and ethical standards in the field of AI.

2. The US Guidance for Regulation of Artificial Intelligence Applications[6], which sets out the principles to be taken into account when developing AI technologies.
3. The FDA's guidance documents on digital health[7], which compile the regulatory documents and recommendations fixing the regulator's position on AI-based solutions and the approach to their classification and registration.

4. The Health Insurance Portability and Accountability Act (HIPAA)[8] and the Health Information Technology for Economic and Clinical Health (HITECH) Act[9]. These documents establish the obligation of all participants in the provision of medical services to protect the privacy of patient data.

Turning to the practical use of artificial intelligence, one cannot fail to mention IBM Watson, which uses cognitive computing to interpret clinical information about cancer patients and to determine an evidence-based course of treatment[10]. The Deep Patient system was trained for medical diagnostics on a database of approximately 700,000 people and, when tested on new patient data, showed remarkably good results in predicting diseases. Without any special instructions, the AI found patterns that are not clearly visible to the attending physician and that indicate a wider range of problems, including liver cancer[11]. In practice, this means that a large array of oncological data is loaded into the program, which then provides the doctor with recommendations based on big data. Using machine learning, the system analyzes the data as quickly as possible (including areas that would otherwise be overlooked) and offers suggestions to the attending physician.

In the author's view, the question of liability naturally arises in the event of an error, whether in the AI's prediction or at the decision-making stage by the doctor. In the medical context, this problem is generally referred to as the "AI black box" in healthcare: the use of opaque computational models for decision-making related to healthcare. The resulting algorithms are opaque, and the conclusions they produce cannot be clearly understood by the attending physician and sometimes cannot even be explicitly formulated[12].

From the legal point of view, an indirect factor blurring responsibility for proposed AI solutions is the discreteness and diffuseness of AI development: the process may be carried out by different teams that do not work in close conjunction or under unified overall coordination[13]. This complicates determining the responsibility of the AI development team in the medical field (taking into account the multicomponent analytical tools and the depth of analysis of patient data). The next significant issue in determining legal responsibility for AI actions is the quality of the data obtained for AI analysis[14]. Incorrect data may enter the system at various stages: when the doctor himself provides the patient's data, at the stage of laboratory studies, and so on. As regards the liability of AI in the US medical field, it should be noted that courts traditionally do not assign legal liability to AI systems because they are not legal persons[15].
As a result, the question of a separate legal structure for AI arises, since there is a risk of diffusing responsibility across an unlimited number of persons (from the patient and the doctor to the developer(s) or persons influencing the infrastructure and data); American scholars likewise note the difficulty of identifying cause-and-effect relationships due to the large number of legal relationships and mechanisms of interaction between humans and AI[16]. At the same time, experts note that a possible solution for AI-related litigation is the doctrine of res ipsa loquitur, which allows a conclusion of negligence by a particular defendant where such negligence is typical of the class of actors to which the defendant belongs. This principle rests on the inference that the defendant had exclusive control over the instrument causing the harm, which could potentially have resulted from negligence[17].

The next aspect of determining liability for AI in the medical field concerns a doctor's deviation from the standards of medical care, which automatically constitutes a breach of the doctor's duties to the patient[18]. Under this approach, the doctor's actions are compared with those of a potential colleague of comparable experience, education and technical equipment. The question of how AI is to be interpreted as technical equipment, and of its role in the standard of medical care, remains open[19].

A further aspect concerns vicarious liability in the application of AI in the medical field[20]. This raises the question of the legal status of AI as an "agent": in this logic, a medical institution exercises some control or authority over an agent (an employee, usually a doctor, but potentially also an AI system)[21]. Scholars have proposed the theoretical use of a strict liability model, which would automatically assign responsibility to the manufacturer or developer of the AI, including through the "single enterprise" construction. Single-enterprise liability implies that each organization in a group of interconnected companies can be held jointly and severally liable for the actions of the other organizations in the group. This theory of liability is attractive in relation to AI, since it does not require the companies to function together, only to work toward a common goal: to design, program and produce an AI product or its components[22]. However, even here a question arises from the technical nature of the autonomy and independent development of AI[23]. If a court considers the AI to be completely autonomous, vicarious liability cannot be imposed on the medical institution for damage caused by the AI, since such an autonomous AI is functionally beyond the principal's control.

The next issue noted by scholars is the unanimous conclusion of the courts that doctors have a legal duty to inform patients of material information concerning the proposed course of treatment and other information related to the provision of medical services, thereby reinforcing the doctor's role in ensuring that the patient receives adequate information to make an informed decision[24].
Hence the question of what constitutes sufficient patient information, including information about the AI used, its decision-making algorithms, and so on[25].

Considering the domestic legal regulation of AI in the medical field, the following documents should be noted:

1. Decree of the President of the Russian Federation of 10.10.2019 No. 490 "On the Development of Artificial Intelligence in the Russian Federation", which defines the basic legal guidelines for the development, regulation and further deployment of AI in Russia; this document is supplemented by a list of specific measures in Order of the Government of the Russian Federation of 19.08.2020 No. 2129-r.

2. Federal Law No. 323-FZ of 21.11.2011 "On the Basics of Public Health Protection in the Russian Federation". Article 38 of this law requires state registration of any software used in the medical field (the procedure is fixed by Decree of the Government of the Russian Federation No. 1416 of 27.12.2012).

3. Order of the Ministry of Health of the Russian Federation of 06.06.2012 No. 4n, which approves the classification of software qualifying as a medical device; under this classification, AI is a medical device with a high degree of risk.

4. Related federal laws: Federal Law No. 152-FZ of 27.07.2006 "On Personal Data" (Clause 1 of Part 1 of Article 6 requires the subject's consent to the processing of his or her data) and Federal Law No. 123-FZ of 24.04.2020 "On Conducting an Experiment to Establish Special Regulation in Order to Create the Necessary Conditions for the Development and Introduction of Artificial Intelligence Technologies in a Subject of the Russian Federation" (providing an experimental regime for regulating AI activities with depersonalized data of subjects).

At the level of domestic doctrinal research, a number of scholars (M. A. Lipchanskaya, T. V. Zametina) note a distorted perception of the concepts of telemedicine and AI in the medical field, as well as the need to deepen the legal regulation of the relations that arise between doctor and patient when AI technology is used[26]. E. A. Ostavnova adds to this position, noting that in the near future the right to receive medical care may be supplemented by a separate role for AI[27]. I. V. Ponkin, in his extensive report on the regulation of AI in the medical field, notes the need for a significant breakthrough in developing the reference subject-object area of regulatory regulation of digital AI models. He also emphasizes the need to transform the structure and ontology of healthcare regulation, including a synthesis of digital law, medical law and bioethics, and justifies the need for a large-scale audit of healthcare regulation, including with the use of AI and big-data technologies[28].

In studying the phenomenon of legal responsibility for the actions of AI in medicine, the position of the practicing lawyer M. S. Varyushin should be noted. In his review article, he observes that the first basic question is whether the medical organization has properly formalized the procedure for implementing the AI.
If the AI is used without registration as a medical device, responsibility remains with the medical organization, with the possibility of a subsequent claim against the AI developer (depending on the terms and procedure of the license agreement or the agreement on the alienation of exclusive rights to the AI)[29]. If the AI is registered as a medical device, the harm caused to the patient is compensated by the operating medical organization, which may bring a recourse claim against the developer (rightholder) of the AI if an AI malfunction, rather than a doctor's error, is established. In addition, T. A. Kovelina, A. V. Sobyanin and V. M. Marukhno note in their study that the current regulation of AI does not provide for the patient's voluntary consent to the use of artificial intelligence in medical intervention, and that responsibility falls solely on the manufacturer and/or user, although the circle of responsible persons may be expanded in the future[30].

Thus, based on the above, the following regulatory initiatives in the field of AI application in medicine are recommended:

1. Creation of a unified standard for the provision of medical services using AI, with mandatory notification of the patient that the technology is being used, which requires appropriate amendments to the relevant legislation.

2. Creation of universal rules for distributing responsibility for the patient data received by the AI (patient, operator, doctor), for its processing (doctor, developer, external factor), for the interpretation of the data (doctor, developer), and for the procedures for auditing the results obtained (developer, supervisory authority (including forensic examination), doctor, medical organization).

3. Formation of a public coordination center that accommodates the capabilities and interests of developers, patients, doctors and regulatory authorities in order to ensure the safe integration of AI into the healthcare sector.

References
1. OECD AI Observatory (catalogued information on AI regulation around the world): [Website]. — URL: https://oecd.ai/en/ (Accessed: 13.05.2023).
2. Ad Hoc Committee on Artificial Intelligence (CAHAI). Feasibility Study: [Website]. — URL: https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da (Accessed: 13.05.2023).
3. Outcome document: first draft of the Recommendation on the Ethics of Artificial Intelligence, UNESCO: [Website]. — URL: https://unesdoc.unesco.org/ark:/48223/pf0000373434_rus (Accessed: 13.05.2023).
4. AI's impact raises legal, ethical questions. Report on Tokyo Forum 2019 Parallel Session "Digital Revolution": [Website]. — URL: https://www.u-tokyo.ac.jp/focus/en/features/z0508_00159.html (Accessed: 13.05.2023).
5. Executive Order 13859 of February 11, 2019, Maintaining American Leadership in Artificial Intelligence: [Website]. — URL: https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence (Accessed: 13.05.2023).
6. Guidance for Regulation of Artificial Intelligence Applications: [Website]. — URL: https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf (Accessed: 13.05.2023).
7. List of FDA Guidance Documents with Digital Health Content: [Website]. — URL: https://www.fda.gov/medical-devices/digital-health-center-excellence/guidances-digital-health-content (Accessed: 13.05.2023).
8. Health Insurance Portability and Accountability Act of 1996 (HIPAA): [Website]. — URL: https://www.cdc.gov/phlp/publications/topic/hipaa.html (Accessed: 13.05.2023).
9. HITECH Act Enforcement Interim Final Rule: [Website]. — URL: https://www.hhs.gov/hipaa/for-professionals/special-topics/hitech-act-enforcement-interim-final-rule/index.html (Accessed: 01.05.2023).
10. Chung, J. (2017). What should we do about artificial intelligence in health care? NYSBA Health Law Journal, 22(3).
11. Knight, W. (2017). The dark secret at the heart of AI. Technology Review, 120(3), 54-61.
12. Price II, W. N. (2015). Describing Black-Box Medicine. BUJ Sci. & Tech. L., 21, 347.
13. Scherer, M. U. (2015). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harv. JL & Tech., 29, 353.
14. Matheny, M., Israni, S. T., Ahmed, M., & Whicher, D. (2019). Artificial intelligence in health care: The hope, the hype, the promise, the peril. Washington, DC: National Academy of Medicine.
15. Chung, J., & Zink, A. (2017). Hey Watson, Can I Sue You for Malpractice? Examining the Liability of Artificial Intelligence in Medicine. Asia Pacific J. Health L. & Ethics, 11, 51.
16. Tobey, D. Explainability: Where AI and Liability Meet. DLA Piper: [Website]. — URL: https://www.dlapiper.com/en/us/insights/publications/2019/02/explainability-where-ai-and-liability-meet (Accessed: 13.05.2023).
17. Restatement (Third) of Torts: Liability for Physical and Emotional Harm § 17 (Am. Law Inst. 2010): [Website]. — URL: https://www.ali.org/publications/show/torts-liability-physical-and-emotional-harm/ (Accessed: 13.05.2023).
18. Greenberg, M. D. (2009). Medical malpractice and new devices: defining an elusive standard of care. Health Matrix, 19, 423.
19. Price II, W. N. (2017). Medical malpractice and black-box medicine. Big Data, Health Law, and Bioethics (Cambridge University Press, 2018), U of Michigan Public Law Research Paper, (536).
20. Allain, J. S. (2012). From Jeopardy to Jaundice: The medical liability implications of Dr. Watson and other artificial intelligence systems. La. L. Rev., 73, 1049.
21. Restatement of the Law (3d) of Torts: Apportionment of Liability, § 13 (2000): [Website]. — URL: https://www.ali.org/publications/show/torts-apportionment-liability/ (Accessed: 13.05.2023).
22. Vladeck, D. C. (2014). Machines without principals: liability rules and artificial intelligence. Wash. L. Rev., 89, 117.
23. Chinen, M. A. (2016). The co-evolution of autonomous machines and legal responsibility. Va. JL & Tech., 20, 338.
24. Sawicki, N. N. (2016). Modernized informed consent: Expanding the boundaries of materiality. U. Ill. L. Rev., 821.
25. Cohen, I. G. (2019). Informed consent and medical artificial intelligence: What to tell the patient? Geo. LJ, 108, 1425.
26. Lipchanskaya, M. A., & Zametina, T. V. (2020). Social Rights of Citizens Using Artificial Intelligence: Legal Bases and Gaps in Legislative Regulation in Russia. Journal of Russian Law, (11), 78-96.
27. Ostavnova, E. A. (2020). Implementation of the constitutional right to health protection in the context of the development of artificial intelligence. apni.ru Editorial team, 36.
28. Ponkin, I. V. (2021). Medical Law in conditions of digitalization. Business, Management and Law, (10), 22-25.
29. Varyushin, M. S. (2021). Legal framework for artificial intelligence technologies in telemedicine. Russian Journal of Telemedicine and E-Health, 7(2), 18-22.
30. Kovelina, T. A., Sobyanin, A. V., & Marukhno, V. M. (2022). To the question of legal regulation of the use of artificial intelligence in medicine. Humanities, socio-economic and social sciences, (2), 148-151.