Reference:
Morhat P.M. A comparative legal study of the practice of considering legal disputes related to the use of artificial intelligence in the Anglo-Saxon legal system and Russia // Legal Studies. 2024. No. 8. P. 40-57. DOI: 10.25136/2409-7136.2024.8.71576. EDN: RKJAYI. URL: https://en.nbpublish.com/library_read_article.php?id=71576
A comparative legal study of the practice of considering legal disputes related to the use of artificial intelligence in the Anglo-Saxon legal system and Russia.
DOI: 10.25136/2409-7136.2024.8.71576
EDN: RKJAYI
Received: 25-08-2024
Published: 01-09-2024

Abstract: The article examines the experience of regulating artificial intelligence (AI) technology through the prism of judicial practice in Russia and the United States. Emphasis is placed both on the current regulatory framework and on a detailed study of case law, taking into account the potentially broad interpretation of current approaches to regulating AI in the absence of comprehensive regulatory legal acts covering extensive segments of public relations. The purpose of this study is to identify the differentiated approaches of the Anglo-Saxon and domestic judicial systems when considering disputes involving the consequences of AI use. The objectives of the study include generalizing judicial practice, defining doctrinal approaches to the subsequent regulation of AI, and developing tools for prospective comprehensive regulation of AI that maintain a balance between the interests of users and developers of AI. The research methodology includes systemic, structural-functional, hermeneutic and comparative legal methods. Conclusions are formulated on the need to develop comprehensive regulation of AI technology, taking into account the current specifics of the consideration of legal disputes in Russia and the Anglo-Saxon legal system; common and divergent approaches to the consideration of disputes are noted, and additional compensatory measures are proposed for the effective regulation of the use of AI technology. The need to identify a separate category of court cases related to AI technology, in order to simplify analysis and develop unified practice for this block of disputes, is also noted. In analyzing domestic and foreign practice of considering disputes related to the use of artificial intelligence, the fundamental need to maintain a balance between the interests of society and developers is emphasized, taking into account the risks identified in the analysis of Anglo-Saxon judicial practice and legal regulation.

Keywords: artificial intelligence, judicial practice, comparative legal research, copyright, banking law, generative AI, public law, information technology, deep fake, ChatGPT

This article is automatically translated.

The active use of artificial intelligence (AI) technology has become an integral part of everyday life, from text analysis and search queries to complex models for predicting human behavior. It follows that the resolution of court disputes arising from the use of this technology is acquiring new importance for any state. We have previously examined the phenomenon of AI positioning in the judicial process in Russia and foreign countries as an active tool assisting the judge in the course of proceedings [1]. Domestic scholars are also actively exploring the positioning of artificial intelligence within court proceedings, both through the prism of Russian legislation and through comparative legal studies of foreign experience. E.V. Kupchina, analyzing U.S. judicial practice in civil disputes, notes the active use of AI technologies as legal navigators and predictive services that can assist users participating in the judicial process [44].
The author notes that this group of technologies is gaining momentum as court documents are digitized. We support the thesis that the accuracy of the proposed AI solutions will increase; however, a study of current practice, including that of U.S. courts, is required. Analyzing the doctrinal aspect of AI participation in civil law relations, the work of E.V. Vavilin deserves attention; it presents an analysis of the domestic regulation of legal relations involving AI [45]. The author concludes that responsibility for actions (or inaction) should be assigned to the user (the state, a citizen, etc., depending on the field of use), although the possibility of recognizing independent responsibility of AI is not excluded. To give this approach practical meaning, we examine the practice of AI-related litigation through the prism of these constructions. N.E. Vashenyak notes that the lack of full-fledged regulation of intellectual property in works created by AI, and of domestic judicial practice, creates an ambiguous situation for the rights holders of neural networks, who are not protected against the risks of misleading AI users and of alienating more rights than initially assumed [46]. We support this approach, and it warrants additional examination within this article. In this article, we propose to review and generalize judicial practice in which AI, its actions and its recommendations are the object of proceedings, across different legal systems, and to develop proposals for minimizing risks when decisions are made regarding this technology, its developers, users and other persons. The research methodology comprises systemic, structural-functional, hermeneutic and comparative legal methods. The systemic and structural-functional methods help determine the role and place of states and law in the application of AI in public legal relations. The hermeneutic method occupies a special place, as it allows a comprehensive review of litigation arising from the use of AI in various segments of legal relations, including the interpretation of the courts' positions. The comparative legal method is applied to litigation and regulation of AI in the United States, Great Britain and Russia. First of all, UNESCO's initiatives on professional development of the judicial community with respect to AI technology should be mentioned [2]. Among the key areas considered by the professional judicial community at the UNESCO level are the development of uniform approaches to litigation involving intellectual property and generative AI, the evaluation of evidence obtained through AI, and the administration of justice in electronic form.
Among the fundamental legal acts in the United States, one should note the National Artificial Intelligence Research and Development Strategic Plan (defining the main directions of R&D in this area) [3], the Blueprint for an AI Bill of Rights (containing declarative but non-binding principles of ethical AI development) [4], and the FUTURE of Artificial Intelligence Act [5] (which provides a basic definition of AI and sets out the powers, goals and objectives of public authorities with regard to its application). The consolidated database of U.S. judicial practice on AI-related disputes is of particular interest for analyzing the prospects for the development of AI regulation in public relations [6]. The first category of cases comprises disputes over intellectual property rights and the use of AI in this segment of legal relations in the United States. Of undoubted interest is Kadrey v. Meta [7]. In this civil case, a class action was filed by a group of authors (Richard Kadrey, Sarah Silverman, Christopher Golden and others) against Meta, which trained its AI language model (LLaMA) on the plaintiffs' books. In their complaint, the plaintiffs asserted that Meta's AI extracts data from their books during training in violation of the Digital Millennium Copyright Act (DMCA) [8], and that the developer's actions amount to unfair competition, unjust enrichment and negligence. The court noted that, although Meta's model was trained on these books, the final AI product is not a reworking or adaptation of them. The court observed that the plaintiffs' legal position could be formulated in two constructions: (1) the outputs of LLaMA are actual copies of the plaintiffs' copyrighted works; (2) the outputs of LLaMA are substantially similar to the plaintiffs' books and therefore constitute derivative works. Finding that the plaintiffs had established neither, the court dismissed the claim. Similar practice is found in Andersen v. Stability AI [9]. In this class action, the activity of generative AI trained on artists' works was likewise challenged. The plaintiffs failed to prove similarity between the AI-generated works and their protected works, and the court sided with the defendants, rejecting the claims. In general, cases against companies specializing in generative AI (Authors Guild v. OpenAI [10], Chabon v. OpenAI [11]) demonstrate the need for further elaboration of judicial approaches to copyright and its protection, although the general vector can be expressed through the two constructions presented above. In this category of disputes there is also a tendency to scrutinize in detail how, and on what data, the AI was trained, bearing in mind the large volume of digitized data in the public domain, including data protected by copyright.
Moreover, framing AI training for large language models, or the creation of other training databases, as fair use of data, including copyrighted data, allows developers to minimize litigation risks by reducing the subject of the dispute to a specific comparison between an AI-created work and an author's work [12]. U.S. scholars identify a wide range of problems, and proposals for addressing them, in the context of AI use in the intellectual property segment. In his research, D. Lim identifies general trends in the integration of AI into public and private legal relations and specifies particular problems at the intersection of AI and copyright [13]. In particular, the third section of his article shows, with practical examples, that trade secret protection gives AI developers significant opportunities to conceal changes to AI systems from users, even when the user is a public authority (such as a law enforcement agency). This makes it quite difficult to establish the bias of AI algorithms (as in the example of COMPAS [14]). It is further noted that current legislation should be improved to give the authorities greater access for evaluating AI algorithms and detecting abuses of law (including copyright) by developers, a need which, in our view, is also confirmed by the court cases discussed above. In research on copyright in art and the role of AI, Z. Bozard examined in detail the current practice of AI use by developer companies and their abuse of the fair use doctrine [15]. It is noted that, following the economic logic of AI use in art, literature and other fields, current judicial practice sides with AI development companies, which generate constant profits; as a result, authors of works of art have extremely low chances of successfully challenging the legality of AI developers' actions. This calls for revision of a number of copyright law provisions in light of the phenomenon of AI technology. Based on the above, it becomes apparent that imperfections in U.S. trade secret and copyright legislation create a certain imbalance in the parties' interests. This imbalance gives development companies a fairly extensive toolkit for judicial protection of their interests, which is noted both in theoretical research and in judicial practice. A separate category of cases concerns the use of generative AI in the preparation of court filings. The first significant example in the UK is Felicity Harber v. HMRC Commissioners [16]. In this case F. Harber was fined 3,265.11 pounds for failure to pay tax arising from the sale of real estate. She appealed the fine, citing her mental state and ignorance of the provisions of the law under which she was held liable. It is noteworthy that, when submitting her appeal at first instance, F.
Harber cited nine alleged Tribunal decisions in support of her legal position. It later emerged, and was subsequently confirmed by F. Harber, that these nine decisions had been produced by generative AI. The court separately noted that the appellant had not known the AI had generated fictitious Tribunal decisions, and that such practices impose additional costs on participants in proceedings, who must verify the evidence and legal positions submitted to the court. As a result, the court disregarded the appellant's position and upheld the decision of the tax commissioners. Whereas the first case involved the use of generative AI by an ordinary citizen without specialized legal knowledge, in Roberto Mata v. Avianca, Inc. [17] the court confronted the use of generative AI in the practice of professional lawyers. In that case, attorney Steven A. Schwartz, representing R. Mata, used generative AI (ChatGPT) to prepare a court submission citing decisions directly and indirectly related to his client's case. When analyzing the plaintiff's legal position, the court requested copies of the cited decisions and found that the citations were fictitious, had been generated by AI, and had not been verified by the plaintiff's side. The court then sanctioned the plaintiff's lawyers for submitting knowingly false and misleading documents. The procedure for evaluating evidence in the United States is prescribed in the Federal Rules of Evidence (FRE) [18]; in particular, Rules 401, 402, 403 and 901 assign judges the gatekeeping role of determining the admissibility of evidence. In the context of evaluating AI recommendations, this implies disclosing sufficient information about the data on which the AI was trained and about the development, structure and components of the AI system, while ensuring data confidentiality and the transparency of the proposed solutions. A joint study by American researchers additionally tested the reliability of generative AI on more than 200,000 cases using OpenAI's ChatGPT 3.5, Google's PaLM 2 and Meta's Llama 2 [19]. It found that generative AI erred more often when asked about the case law of federal district courts and was more accurate with respect to U.S. Supreme Court cases. These models also suffer from "counterfactual bias", in which the AI response directly mirrors the user's question, including false or inaccurate queries. A similar position on the inadmissibility of using generative AI technologies to prepare legal positions for judicial proceedings is taken by C. Lyon [20]. The author notes that at the current technological stage this tool can serve only as an assistant, and the main burden of verification rests with the participants in the proceedings, with corresponding legal consequences (including in cases of intentional distortion or falsification of facts).
In our opinion, this category of cases also creates the need for additional tools to verify the evidence submitted by the parties and to impose liability for the use of AI in potentially illegal activities. The next category of cases concerns antitrust regulation and the use of AI algorithms to manipulate prices and establish cartel agreements. This situation is typical of AI technologies used in the rental and hotel business, and the litigation is still at an early stage owing to the lengthy nature of disputes under U.S. antitrust law. Thus, in Blair-Smith v. Caesars Entertainment, Inc. and Altman et al. v. Caesars Entertainment, Inc. [21], it is alleged that hotel operators including Caesars Entertainment, MGM Resorts and Hard Rock, together with Cendyn Group LLC, used a common room-pricing algorithm platform developed by Cendyn Group LLC to artificially inflate room prices relative to average prices in a comparable market. In their 110-page complaint, the plaintiffs' lawyers set out in considerable detail how the cartel operated through these platforms, presenting economic analysis and the price dynamics of rooms offered on the aggregators [22]. At present, these cases have been consolidated into a single proceeding, and the relevant judicial procedures for examining the evidentiary base concerning the defendants are under way. A similar situation arises in Bason v. RealPage, Inc. [23], where the plaintiffs describe in detail the formation of a cartel among landlords of apartment buildings. It is alleged that since 2016 (possibly earlier) pricing in this market was based on occupancy rates, which affected the scope for competition among landlords. At the same time, landlords began using a third-party aggregator from RealPage, which provides landlords with software and data analytics (including suggested rental pricing). The platform does not oblige landlords to accept RealPage's recommendations, but it emphasizes the need for "discipline" among participating landlords and compliance with the pricing scheme proposed by RealPage. Company representatives also state that for its services to be most effective, landlords must accept the recommendations in eighty percent of cases. The plaintiffs additionally point out that RealPage contributed to rental price increases, as reflected in a marketing video in which the president of RealPage cited an unprecedented rise in residential rental prices, reaching 14.5% in some markets. These lawsuits are at an early stage of consideration and may significantly affect the principles of pricing with AI technologies. S. Marar, in research on antitrust regulation and the use of AI in a market economy, notes the need for fundamentally new regulatory approaches to assessing consumer harm caused by unscrupulous market participants using AI [24].
Based on an analysis of legislation (the Sherman Antitrust Act of 1890, the Clayton Antitrust Act of 1914 and the Federal Trade Commission Act of 1914) and the enforcement practice of the antitrust authorities, the author notes that the current assessment of the market (the positions of economic entities, their coordination of actions, including in setting prices, etc.) is superficial. This calls for a new law that takes into account potential harm to consumers, together with additional tools for ongoing oversight of business entities that use AI in their economic activities. The author also notes the limits of the Federal Trade Commission's rule-making powers, including through the prism of consumer protection law, which in turn requires the formation of judicial case law that is currently only emerging. A. Asil and T. Wollmann share this view with regard to the current legislative framework (including the provisions of Section 1 of the Sherman Act) and current case law [25][26]. Their study notes that, under the current version of the aforementioned law, the antitrust authorities and the courts are unable to establish violations by companies that use AI for cartel purposes. This allows companies to profit from anti-competitive behavior without any liability, posing a significant threat to consumer welfare. In our opinion, this emerging area of litigation will require additional tools for proving the malicious use of AI by business entities to extract profit. In Russia, the fundamental documents regulating the use of AI are Decree of the President of the Russian Federation No. 490 of October 10, 2019 "On the Development of Artificial Intelligence in the Russian Federation" (which sets strategic goals and recommendations for how AI should be regulated in subsequent rule-making initiatives) and Federal Law No. 123-FZ of April 24, 2020 "On Conducting an Experiment to Establish Special Regulation in Order to Create the Necessary Conditions for the Development and Implementation of Artificial Intelligence Technologies in the Constituent Entity of the Russian Federation, the Federal City of Moscow, and Amending Articles 6 and 10 of the Federal Law 'On Personal Data'" (which sets out the procedure for experimental, flexible regulation of AI technology). At present these initiatives essentially exhaust the dedicated framework, which is why judicial practice relies on other regulatory and legal sources. It should also be noted that Russia has a national AI portal [27] containing summary information on AI regulation; however, a consolidated portal listing AI technologies and the related law enforcement and judicial practice is currently lacking in Russia [28]. Turning to domestic judicial practice on copyright and the use of AI in this segment of legal relations, the following court decisions merit attention. The first case, heard under the simplified procedure and subsequently reviewed in cassation by the Intellectual Property Rights Court, concerns a dispute between Business Analytics LLC (defendant) and Reface Technologies LLC (plaintiff) [29].
The plaintiff complained of the defendant's unlawful use of a video, the rights to which the plaintiff had acquired from Ajenda Media Group LLC. The defendant argued that the video sequence in question had been transformed by deepfake technology and therefore could not be an object of copyright. However, analyzing the dispute under Articles 1229, 1252, 1257 and 1259 of the Civil Code of the Russian Federation and on the basis of the materials provided by the plaintiff (the original video, the license agreement between the plaintiff and Ajenda Media Group LLC, and the defendant's video), the court established that processing a video with deepfake technology does not make it available for free use. Moreover, such processing does not involve any personal creative contribution on the part of the defendant and does not give rise to authorship of the resulting video sequence. The court therefore granted the plaintiff's claim for compensation from the defendant, and on cassation the defendant's arguments were rejected. Similar practice is found in a dispute between individual entrepreneur I.S. Kotelevets (plaintiff) and Zhemchuzhina Dental Center LLC (defendant) [30]. The essence of the dispute mirrors the previous case: the defendant's unlawful use of images whose rights belong to the plaintiff. The defendant argued, first, that the website on which the plaintiff's image was posted had been acquired by the defendant under a contract with a third party (individual entrepreneur V.M. Glotov) and, second, that the disputed image had been generated by a neural network (artificial intelligence) and therefore could not be recognized as an object of copyright. Taking into account the provisions of the Civil Code of the Russian Federation and Resolution of the Plenum of the Supreme Court of the Russian Federation No. 10 of April 23, 2019 "On the Application of Part Four of the Civil Code of the Russian Federation", the courts of first instance and appeal sided with the plaintiff, establishing his authorship of the original image, and noted that, unless proven otherwise, the author of a work is the person indicated as such on the original or a copy of the work, or otherwise identified in accordance with the law. Domestic scholars note the need to develop fundamentally new tools for regulating AI in the copyright context. On the basis of a comparative study, E. Afanasyeva notes that, as AI systems improve and become able to independently create products comparable to protected objects of intellectual activity, balanced AI regulation must be maintained with the idea of preserving the human creative contribution [31]. This thesis is also supported in the research of S.Yu. Kashkin and J.T. Iskakova [32] and A.E. Ponomarchenko [33]. Separately, one may note the position of P.G. Shelengovskiy and D.A. Gracheva, who propose, by analogy with the United States, creating training databases that are not protected by copyright law and are specially designed for training and using neural networks [34].
Comparing domestic judicial practice in this area with U.S. experience, and taking into account the proposals of the domestic scientific school, we note significant differences in establishing authorship and the absence in Russia of the legislative loophole for AI developers that has taken hold in U.S. case law. Developing a separate legislative act regulating authorship where AI technologies are used will require a coordinated approach from both AI developers and authors of creative works. However, if training databases are created in a way that nullifies human copyright, we may encounter the difficulties that authors in the United States are currently experiencing, which in the future would multiply the burden on the judiciary and upset the public balance of interests. As for domestic judicial practice on the use of AI to generate court filings, no such practice exists at the time of writing. It is therefore proposed to consider an adjacent block of public legal relations concerning the recording of offenses and the intentional distortion of images. In a case challenging an administrative fine imposed by the Main Department for the Maintenance of Territories of the Moscow Region (defendant) on VERKHNYAYA VOLGA LLC (plaintiff) for violating territory maintenance requirements, the plaintiff disputed the procedure for recording the offense, namely improper snow and ice removal documented by an "automated complex using an intelligent neural network for video recording of violations with pre-installed software" [35]. At first instance, the court accepted the plaintiff's argument that the use of this system required a developed conceptual apparatus and terminology for urban infrastructure facilities and deviations from their proper condition. However, on appeal and in cassation it was found that the administrative offense had been recorded by special technical means installed properly and in accordance with the procedure prescribed by the legislation of the Moscow Region, including Part 2 of Article 16.4 of the Code of Administrative Offenses of the Moscow Region. It was also noted that the requirement relied on by the court of first instance, that the defendant provide metrological verification data for measuring instruments (Resolution of the Plenum of the Supreme Court of the Russian Federation No. 20 of June 25, 2019), is not applicable in this context and applies only in relation to special technical means. Special attention should be paid to Resolution of the Plenum of the Supreme Court of the Russian Federation No. 17 of June 25, 2024 "On Certain Issues Arising for Courts When Considering Cases of Administrative Offenses Infringing the Established Procedure for Information Support of Elections and Referendums". In particular, paragraph 20 clarifies that persons who have produced campaign materials of any kind "using misleading and false images, audio and audiovisual information, including those created using computer technology" are subject to administrative liability under Article 5.12 of the Code of Administrative Offenses of the Russian Federation.
Domestic scholars primarily emphasize the prospects of predictive methods in relation to illegal and criminal activity. Thus, V.Yu. Drozdov argues that the focus of law enforcement agencies should shift from using AI at the stage of bringing a potential offender to justice to forecasting and crime prevention [36]. This approach is refined and detailed for the regulation of public relations in the field of intelligent transport systems [37]. At the same time, with regard to the use of AI to detect electoral fraud (including where attackers use deepfake technologies), the final conclusion on the malicious use of AI remains a matter of identifying the responsible official [38]. In our opinion, current judicial practice, broadly interpreted, indicates that using generative AI to produce documents for public legal relations entails administrative legal consequences for the user if those documents prove unreliable. Moreover, where the authorities record offenses or issue recommendations using AI properly and in good faith, such findings are unlikely to be successfully appealed in court. Nevertheless, proactive crime prevention methods should be developed to minimize the burden on the judicial system. In the context of AI use in commercial activities, emerging practice on banks' algorithmic handling of overdue debts should be noted. It can be observed in three court cases in which the plaintiff is Sberbank PJSC and the defendants are territorial bodies of the Federal Bailiff Service of Russia [39][40][41]. In essence, the plaintiff challenged administrative fines imposed on it for violating subclause "b" of clause 3, Part 3, Article 7 of Federal Law No. 230-FZ of July 3, 2016 "On the Protection of the Rights and Legitimate Interests of Individuals in the Conduct of Activities to Recover Overdue Debts and on Amendments to the Federal Law 'On Microfinance Activities and Microfinance Organizations'", which limits the number of telephone calls to a debtor (no more than eight per month). The evidence reviewed by the court established that this threshold had been exceeded many times over, even taking into account that an automated intelligent agent conducted the conversations with the debtor. The court rejected the plaintiff's argument that the bank's interaction with the debtor via this AI agent should not be classified as telephone conversations, since it makes no difference to the debtor whether they are communicating with a machine or a living person. In their study, T.G. Starostina and E.V. Romanenko emphasize that, for all the drawbacks of AI technology, including economic costs, regulatory difficulties and the empathic aspect of AI interaction with humans, AI is promising for this segment of legal relations, which will subsequently require appropriate personnel policies, internal compliance tools and measures to counter fraudulent AI-based technologies [42]. A similar position is presented in the study by A.A.
Dulev, who emphasizes the need for additional regulatory documents to be issued by the Central Bank [43]. In our opinion, the use of AI in the most widespread segments of private-sector legal relations requires a unified local regulatory framework, which should be approved as an industry standard, with mandatory oversight of the technology assigned to a designated official. A logical further step would be to consolidate common approaches in a single law on the use of AI, as envisaged in Presidential Decree No. 490 (mentioned earlier). These measures would reduce the judicial burden in this segment of disputes in the future. Thus, based on the above, the following conclusions and proposals can be made:
1. It is proposed to introduce and consolidate an additional categorization of judicial practice concerning AI used in judicial proceedings, together with statistics on cases in which AI actions are at issue. Such a representative body of practice would simplify proceedings in this segment and promote uniformity of judicial practice.
2. When generative AI models are used, the parties should be obliged to verify the validity, reliability and legality of the information provided to them. Where generative AI is found to have been used to intentionally mislead the court, the relevant information should, as a priority, be referred to law enforcement agencies for subsequent prosecution of those involved.
3. Comprehensive regulation of AI technology should be developed in order to simplify judicial proceedings and maintain a balance of the parties' interests in the subsequent consideration of disputes.
4. In drafting future legislation, especially in the copyright context, the author's contribution must not be nullified when such material is subsequently processed by AI technology, as has occurred in the United States.
References
1. Morhat, P.M. (2024). Comparative legal study of artificial intelligence positioning in judicial proceedings. Legal Studies, 7, 13-28.
2. AI and the Rule of Law: Capacity Building for Judicial Systems. Retrieved from https://www.unesco.org/en/artificial-intelligence/rule-law/mooc-judges
3. The national artificial intelligence research and development strategic plan: 2019 Update. Retrieved from https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf
4. Blueprint for an AI Bill of Rights. Retrieved from https://www.whitehouse.gov/ostp/ai-bill-of-rights/#safe
5. H.R. 4625 (IH) – Fundamentally Understanding The Usability and Realistic Evolution of Artificial Intelligence Act of 2017. Retrieved from https://www.govinfo.gov/app/details/BILLS-115hr4625ih/related
6. AI Litigation Database by the Institute for Trustworthy AI in Law and Society (TRAILS). Retrieved from https://blogs.gwu.edu/law-eti/ai-litigation-database/
7. Kadrey v. Meta Platforms, Inc., 23-cv-03417-VC, 11-20-2023. Retrieved from https://casetext.com/case/kadrey-v-meta-platforms-inc
8. The Digital Millennium Copyright Act. Retrieved from https://www.copyright.gov/dmca/
9. Andersen v. Stability AI Ltd., 23-cv-00201-WHO (N.D. Cal. Oct. 30, 2023). Retrieved from https://casetext.com/case/andersen-v-stability-ai-ltd
10. Authors Guild v. OpenAI Inc. (1:23-cv-08292), District Court, S.D. New York. Retrieved from https://www.courtlistener.com/docket/67810584/authors-guild-v-openai-inc/
11. Chabon v. OpenAI, Inc. (3:23-cv-04625), District Court, N.D. California. Retrieved from https://www.courtlistener.com/docket/67778017/chabon-v-openai-inc/
12. Plot Twist: Understanding the Authors Guild v. OpenAI Inc Complaint. Retrieved from https://wjlta.com/2024/03/05/plot-twist-understanding-the-authors-guild-v-openai-inc-complaint/
13. Lim, D. (2022). AI, Equity, and the IP Gap. SMU L. Rev., 75, 815.
14. Vaccaro, M. A. (2019). Algorithms in human decision-making: A case study with the COMPAS risk assessment software (Doctoral dissertation).
15. Bozard, Z. (2023). What does it mean to create art? Intellectual Property rights for Artificial Intelligence generated artworks. SCJ Int'l L. & Bus., 20, 83.
16. Harber v Commissioners for His Majesty's Revenue and Customs (INCOME TAX – penalties for failure to notify liability to CGT – appellant relied on case law which could not be found on any legal website – whether cases generated by artificial intelligence such as ChatGPT). Retrieved from https://www.bailii.org/uk/cases/UKFTT/TC/2023/TC09010.html
17. Mata v. Avianca, Inc., 22-cv-1461 (PKC). Retrieved from https://casetext.com/case/mata-v-avianca-inc-2
18. Federal Rules of Evidence. Retrieved from https://www.law.cornell.edu/rules/fre
19. Dahl, M., Magesh, V., Suzgun, M., & Ho, D. E. (2024). Large legal fictions: Profiling legal hallucinations in large language models. arXiv preprint arXiv:2401.01301.
20. Lyon, C. F. (2023). Fake cases, real consequences: misuse of ChatGPT leads to sanctions. NY Litigator, 28(2), 8-12.
21. Blair-Smith v. Caesars Entertainment, Inc., et al. Retrieved from https://dockets.justia.com/docket/new-jersey/njdce/1:2023cv06506/518494
22. Heather Altman and Eliza Wiatroski v. Caesars Entertainment, Inc., et al., Case 2:23-cv-02536. Retrieved from https://www.classaction.org/media/altman-et-al-v-caesars-entertainment-inc-et-al.pdf
23. Bason et al. v. RealPage, Inc., et al. Retrieved from https://dockets.justia.com/docket/california/casdce/3:2022cv01611/744996
24. Marar, S. (2024). Artificial Intelligence and Antitrust Law: A Primer. Available at SSRN 4745321.
25. American Tobacco Co. v. United States, 328 U.S. 781 (1946). Retrieved from https://supreme.justia.com/cases/federal/us/328/781/
26. Asil, A., & Wollmann, T. (2023). Can Machines Commit Crimes Under US Antitrust Laws? Available at SSRN 4527411.
27. National Center for Artificial Intelligence Development under the Government of the Russian Federation. Retrieved from https://ai.gov.ru/
28. Atabekov, A.R. (2023). Model approaches to the integration of artificial intelligence into the sphere of public legal relations in Russia based on a comparative study of the experience of foreign countries. RUDN Journal of Law, 3, 686-699.
29. Court case card С01-1330/2024, 09АП-642/2024, А40-200471/2023. Retrieved from https://kad.arbitr.ru/Card/4d7f0305-69af-44fe-8841-a59e84aa7deb
30. Court case card 13АП-37912/2023, А42-3966/2023. Retrieved from https://kad.arbitr.ru/Card/86266278-1cb2-4722-977c-715455645cf3
31. Afanasyeva, E. (2020). Copyright in the era of artificial intelligence. Intellectual Property. Copyright and Related Rights, 6, 59-66.
32. Iskakova, J. T., & Kashkin, S. Yu. (2020). Modern Copyright and Problems of Artificial Intelligence Development. Bulletin of the O.E. Kutafin University, 2(66), 43-52.
33. Ponomarchenko, A. (2023). Copyright and artificial intelligence: who owns them? In Digital Technologies and Law: conference proceedings, Kazan, September 22, 2023. Kazan Innovative University named after V.G. Timiryasov; Poznanie Publishing House.
34. Shelengovskiy, P. G., & Gracheva, D. A. (2023). Artificial Intelligence and Copyright in Modern Conditions. Economics. Law. Society, 3, 79-85.
35. Court case card 305-ЭС24-3432, Ф05-33861/2023, 10АП-18633/2023, А41-17944/2023. Retrieved from https://kad.arbitr.ru/Card/10d3ebf5-6663-4876-9902-fd664eaed894
36. Drozdov, V. Yu. (2021). Using Artificial Intelligence to Prevent Crime. Law and Justice, 9, 114-117.
37. Safiullin, R. N., Safiullin, R. R., & Pyrkin, O. P. (2022). Actual issues of legal regulation in the implementation of digital technologies in intelligent transport systems. In Digital Technologies and Law (pp. 28-35).
38. Kilyachkov, A. A., Chaldaeva, L. A., Korolev, D. A., & Bayer, A. V. (2021). Using Artificial Intelligence to Detect Signs of Electoral Fraud. Vlast, 5, 128-132.
39. Court case card 07АП-2790/2024, А45-38245/2023. Retrieved from https://kad.arbitr.ru/Card/e279cd78-7c69-4d2a-8f88-3ce6385e3c63
40. Court case card 16АП-1672/2024, А61-4957/2023. Retrieved from https://kad.arbitr.ru/Card/9c4456e7-5d30-4f34-9717-0c21d506e595
41. Court case card 06АП-3177/2023, А73-4100/2023. Retrieved from https://kad.arbitr.ru/Card/ca54ac3e-a42d-4403-bdf9-a7db5f793874
42. Starostina, T. G., & Romanenko, E. V. (2022). Artificial Intelligence in Banking. Bulletin of the Ulyanovsk State Technical University, 2(98), 35-37.
43. Dulev, A. A. (2020). Innovative banking products and their development in the Russian Federation. Chronoeconomics, 5(26), 55-60.
44. Kupchina, E. V. (2021). The Application of Artificial Intelligence Technology in the US Civil Court System. Legal Concept = Pravovaya paradigma, 20(4).
45. Vavilin, E. V. (2021). Artificial Intelligence as a Participant in Civil Relations: Transformation of Law. Bulletin of Tomsk State University. Law, 42, 135-146.
46. Vashenyak, N. E. (2023). Rights to works created with the help of artificial intelligence. Copyright and artificial intelligence. Young Scientist, 49(496), 253-255.
First Peer Review
Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.
Conclusions based on the results of the research are present ("1. It is proposed to introduce and consolidate an additional categorization of judicial practice concerning AI used in judicial proceedings, together with statistics on cases in which AI actions are at issue. Such a representative body of practice would simplify proceedings in this segment and promote uniformity of judicial practice. 2. When generative AI models are used, the parties must be obliged to verify the validity, reliability and legality of the information provided to them. Where generative AI is found to have been used to intentionally mislead the court, the relevant information should, as a priority, be referred to law enforcement agencies for subsequent prosecution of those involved. 3. Comprehensive regulation of AI technology is required in order to simplify judicial proceedings and maintain a balance of the parties' interests in the subsequent consideration of disputes. 4. In drafting future legislation, especially in the context of copyright, the author's contribution must not be nullified when such material is subsequently processed by AI technology, as has occurred in the USA"); they are clear and specific, possess reliability and validity, and undoubtedly deserve the attention of the scientific community. The article submitted for review will be of interest primarily to specialists in constitutional, civil and information law, provided that it is revised: the relevance of the chosen research topic should be further substantiated (in line with the remark made) and the numerous deficiencies in the formatting of the work eliminated.