
Legal Studies
Analysis of the case law establishing circumstances of illegal distribution of generative content created using artificial intelligence

Bodrov Nikolay Filippovich

ORCID: 0000-0002-9005-3821

PhD in Law

Associate Professor, Department of Forensic Expertise, Kutafin Moscow State Law University (MSAL)

123242, Russia, Moscow, Sadovaya-Kudrinskaya str., 9, room 729

bodrovnf@gmail.com
Lebedeva Antonina Konstantinovna

PhD in Law

Associate Professor, Department of Forensic Expertise, Kutafin Moscow State Law University (MSAL)

125993, Russia, Moscow, Sadovaya-Kudrinskaya str., 9

tonya109@yandex.ru

DOI: 10.25136/2409-7136.2024.11.72540

EDN: TLSBYY

Received: 02-12-2024

Published: 09-12-2024


Abstract: The authors examine cases from judicial and investigative practice involving the illegal distribution and use of generative content created with artificial intelligence technologies. Particular attention is paid to proving the falsification of digital products created by neural networks, including voice and graphic deepfakes, and to the legal and technological aspects of using such evidence in court proceedings. The authors stress that current legislation contains no legal definition of a deepfake, without which it is impossible to speak of modernizing domestic legislation; given the pace at which artificial intelligence technologies are developing, a statutory definition of the deepfake is necessary. Regulation of deepfakes must take into account both the legal gaps that accompany the current level of AI development and the threats that have already materialized, as the judicial practice reviewed here confirms. The article substantiates the need to create a data set for experimental phonoscopic research on phonograms containing voices cloned with neural networks. The methodological basis of the research is the universal dialectical method, general scientific methods (description, comparison, generalization, modeling, etc.) and special scientific methods. The novelty of the research lies in identifying and systematizing the key problems of conducting forensic examinations and of the legal regulation of generative content. The paper offers recommendations for improving legislative norms and expert methods (using forensic phonoscopic examination as an example), including the creation of specialized databases and of scientific and methodological approaches to the study of generative content. The conclusions emphasize the importance of developing standards for diagnosing the use of generative artificial intelligence in the creation of digital products, as well as the need for further training of the experts who examine such objects. The results can be used to build more effective mechanisms of legal response to the challenges posed by artificial intelligence technologies.


Keywords:

deepfakes, generative content, case law, forensic practice, neural networks, voice cloning, forensic science, criminalistics, forensic audio examination, database


Artificial intelligence (AI) technologies for creating generative content are currently developing extremely rapidly and are used across a wide variety of areas of public life. At the same time, court practice is taking shape in which digital traces in the form of AI-generated content are used as evidence.

Such content consists of digital products in various forms: text, graphics, sound, or a combination of these.

Naturally, the active development of neural network technologies for creating generative content opens up enormous opportunities for its illegal use and distribution. The relevance of the chosen topic stems from the growing number of cases in which synthesized content is used to mislead users or to overcome access control and management systems [1, p. 38], as well as in defamation cases, various frauds and civil cases (for example, divorce proceedings).

In such situations, law enforcement officers face the task of establishing the circumstances of the creation, use and distribution of content generated using neural network technologies, and with it the following difficulties:

– the technological difficulty of identifying such generative content and of proving both that it was created using neural network technologies at all and which specific neural networks were used to create it;

– the expediency of appointing a forensic examination. Since the methods of most forensic examinations are not adapted to objects that are the results of neural network generation, even a categorical expert conclusion may carry no weight in subsequent proof if the expert study did not establish that the object under study is indeed a deepfake; without this, the reliability of the expert's conclusions may, in modern realities, be called into question. At present, forensic counteraction to deepfake content is analyzed in the works of only a small number of domestic specialists, for example in the publications of E. I. Galyashina, A. V. Gromov, S. S. Oshkukov and a few others;

– the ambiguity of judicial practice in qualifying illegal acts involving the distribution and use of generative content, including the absence of unified approaches to defining the term "deepfake" and to establishing the person responsible for its distribution or use. Neither foreign nor domestic legislation has thus far been adapted to the challenges posed by neural network technologies.

As a result, fundamentally new situations are now arising that require clarification of the evidentiary procedure. The problem of the distribution of deepfakes affects all participants in the proceedings [2, p. 254]. It seems to us, however, that the central place in proving such cases will belong to the expert's opinion, since the primary problem is the difficulty of proving whether the presented digital product was generated using neural network technologies.

Judicial practice in this area is still taking shape and certainly faces a number of difficulties, both legal and technological in nature.

This article examines domestic and foreign judicial and investigative practice in establishing the circumstances of cases involving the illegal distribution of synthesized content (deepfakes).

The methodological basis of the research is the universal dialectical method, general scientific methods (description, comparison, generalization, modeling, etc.) and special scientific methods.

We analyze the still few cases in which synthesized content has been used as evidence in civil, arbitration and criminal proceedings. In our opinion, the volume of such cases in the courts will grow steadily in the near future, and an initial analysis and generalization of their specifics can help prevent judicial errors.

It is important both to consider investigative and judicial situations involving the establishment of the circumstances of the distribution of synthesized content, and to summarize the practice of proof in cases where deepfakes served as the tool or the object of illegal activity.

We believe that consideration of the problem of establishing the circumstances of the illegal distribution of synthesized content should begin with the analysis of a highly instructive case solved abroad.

1. A criminal case against a school principal

One of the cases in which a deepfake figured as evidence occurred in the United States. Pikesville High School principal Eric Eiswert became a suspect in a criminal defamation case after an audio recording of his voice containing racist and anti-Semitic remarks was distributed on social networks [A former physical education teacher who framed the school principal with the help of AI was detained at the airport with a weapon. URL: https://www.thebaltimorebanner.com/education/k-12-schools/eric-eiswert-ai-audio-baltimore-county-YBJNJAS6OZEE5OQVF5LFOFYN6M/ (date of access: 11/28/2024)]. Deepfakes are increasingly used for defamation; as N. N. Parygina notes, "It is now unprecedentedly easy to create a false impression that this or that person has said or done something" [3, p. 178].

The distribution of this recording led to the principal's dismissal from the school and provoked public condemnation of him on social networks. Threatening letters were sent to the principal and his family. All of this, of course, diminished his business reputation and discredited him both as a person and as a school principal.

The disputed recording was posted on an account on the social network Instagram [a social network banned in Russia]. It was subsequently established by expert examination that the phonogram had been generated using AI technologies: the expert opinion stated that the voice on it was a clone of the principal's voice. The investigation also established that the disputed phonogram had been distributed from a newly registered anonymous email address.

The owner of that email address was identified thanks to a tactical operation correctly employed by the investigator. After the circle of suspects in the voice cloning had been narrowed, computer technicians were brought in to examine the logs and accounts registered on the school network.

Search queries about large language models (LLMs) and about neural networks for voice recognition and voice cloning were found in the account of a physical education teacher, Dazhon Darien. The dates of the queries immediately preceded the date the recording was published.

To tie the email address to D. Darien, the investigator, during his interrogation, opened the mail service where the address was registered and used the "restore password" function. A message with the password-recovery link then arrived on D. Darien's phone [The case involving the use of the AI-generated voice of the school principal. URL: https://www.wbaltv.com/article/police-former-pikesville-high-ad-facing-charges-ai-generated-voice-hoax/60603313 (date of access: 11/28/2024)].

A series of examinations and studies conducted at the University of California, Berkeley [Athletic director charged in the Pikesville High School AI case. URL: https://www.baltimorecountymd.gov/departments/police/news/2024/04/25/athletic-director-charged-in-pikesville-high-school-ai-case (date of access: 11/28/2024)] established that the recording had been generated using AI technology and distributed by the physical education teacher Dazhon Darien in January 2024.

The disputed recording also contained, among the targets of its racist statements, the nickname of this physical education teacher, "DJ". An investigation into a theft at the school had earlier been opened against the teacher following a statement by the principal, so the generated recording served as a kind of revenge on the principal.

The incident caused serious unrest at the school: the audio recording misled a significant number of people and severely damaged the trust of students, parents and the school administration. The Baltimore County prosecutor stated that this was the first case in which evidence had been falsified using AI technology, and the entire county became involved, since it was difficult to find experts competent to examine such synthesized content.

In the situation under consideration, the problem is that the very right to judicial protection of one's honor, business reputation and good name depends on the subject's ability to prove that realistic, convincing digital evidence was falsified using neural network technologies.

The possibilities of proof in the investigative situation described above are significantly limited. First, Dazhon Darien synthesized a monologue by Eric Eiswert, which deprived the latter of the opportunity to use witness testimony in his defense: the phonogram recorded no remarks by any interlocutors who could have testified that he had never made such statements in conversation with them.

Second, Dazhon Darien disseminated the information through a mailing list, and the complainants against Eric Eiswert were the recipients of that mailing, for whom the quality of the voice and speech synthesis proved quite convincing.

Third, the only source of information about the suspect was publicly available information about the conflict between Dazhon Darien and Eric Eiswert. Had the conflict not been public, or had another attacker distributed the generated phonogram before any conflict arose, neither the investigation nor Eric Eiswert himself would likely have had enough information to outline the circle of persons potentially involved in synthesizing and distributing the phonogram.

Fourth, proving the fact of falsification requires a diagnostic phonoscopic examination. Where the circumstances are established pre-trial, the person who turns to a phonoscopy specialist bears quite significant financial expenses. Commissioning pre-trial research in such a high-tech field is costly, while the lack of research methods and the low effectiveness of studying high-quality synthesized phonograms create substantial financial risks for the applicant, even when he is absolutely certain that the phonogram was synthesized.

The issue of paying for a forensic examination may in some cases be a significant obstacle to exercising the right to defend against an accusation or to object to a claim. In domestic civil procedure, for example, the costs of a forensic examination are, as a rule, imposed by the court on the party that moved for it [4]. Under Article 96 of the Civil Procedure Code of the Russian Federation, the sums payable to experts are deposited in advance, by the party that filed the relevant request, into an account opened in accordance with the procedure established by the budget legislation of the Russian Federation.

In criminal proceedings, financial expenses often arise even at the stage of non-procedural studies, when a specialist's opinion is needed to substantiate an application and its preparation is paid out of the applicant's own funds.

The production of a forensic phonoscopic examination also carries certain risks. Firstly, if the expert study results in a conclusion that it is not possible to answer the question posed by the court or the investigator, the applicant may in effect be deprived of the possibility of subsequent proof.

Secondly, where forensic methods contain no sections devoted to identifying signs of synthesis, the chance of detecting the synthesis itself is very small, since the expert conducting the study will have no methodological support for identifying and describing typical signs of generation or the circumstances to be established when testing the synthesis hypothesis.

Thirdly, where expert practice in such cases has not yet formed, it is logical to assume that only the most highly qualified experts, of whom there are obviously not many among all practitioners, will be able to detect the fact of generation.

2. A criminal case against a hydrologist

A high-profile case became widely known in Russia: the hydrologist Alexander Tsvetkov was detained as a murder suspect, accused of a murder committed 20 years earlier. In 2023, A. I. Tsvetkov was detained at Domodedovo airport after a facial recognition system was triggered, having found a partial (55%) similarity between him and a composite sketch of the alleged killer compiled back in 2002.

Meanwhile, a value of 55% is meaningless by the standards of the field: even to enter face recognition competitions, systems must show accuracy of at least about 90 percent, and as a rule entries with values of 99.6 percent and higher take the prizes [5].
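
For illustration, here is a minimal sketch of how a face verification system turns a similarity score into a match decision. The 512-dimensional embeddings, the cosine measure and the 0.90 threshold are our own illustrative assumptions, not the parameters of the system that flagged A. I. Tsvetkov:

```python
# Toy face-verification decision logic: a similarity score only becomes a
# "match" once it clears a threshold calibrated to a target false-match rate.
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings, rescaled to [0, 1]."""
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return (cos + 1.0) / 2.0

def is_match(a: np.ndarray, b: np.ndarray, threshold: float = 0.90) -> bool:
    # Hypothetical threshold: far above the ~0.5 produced by unrelated faces.
    return similarity(a, b) >= threshold

rng = np.random.default_rng(0)
emb_sketch = rng.normal(size=512)   # embedding of the 2002 composite sketch
emb_person = rng.normal(size=512)   # embedding of the detained person
print(f"similarity = {similarity(emb_sketch, emb_person):.2f}, "
      f"match = {is_match(emb_sketch, emb_person)}")
```

Under this rescaled measure, two unrelated random embeddings score about 0.5, so a 55% similarity sits near chance level rather than near a calibrated decision threshold.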

As a result, A. I. Tsvetkov spent 11 months under arrest, although the fundamental impossibility of his presence at the crime scene was obvious from the other circumstances established by the investigation in the case.

It is quite obvious that, in the investigation of crimes of past years, erroneous identification results produced by neural network algorithms will serve as convincing, and often irrefutable, evidence of the involvement of persons who simply have no way to rebut the prosecution's arguments.

As foreign studies show [6, p. 6; 7], such persons will above all be migrants: the racially and nationally specific features of their appearance receive incorrect statistical weights in the data array, where the low representativeness of their share of the total sample produces an obvious bias in neural network algorithms.

Among the circumstances that led to the termination of the criminal prosecution of A. I. Tsvetkov were, once again, the results of expert research.

According to the official statement of authorized officials of the Main Investigative Directorate of the Investigative Committee of Russia for Moscow: "During the investigation of the criminal case, the investigation carefully checked the collected evidence and appointed and conducted forensic examinations, which established Tsvetkov's non-involvement in the incriminated crimes. In this connection, the criminal prosecution of Tsvetkov has been terminated" [Freedom met him: how the scientist's innocence of the murders of 20 years ago was proved. URL: https://iz.ru/1661286/elena-balaian/vstretila-svoboda-kak-dokazali-nevinovnost-uchenogo-v-ubiistvakh-20-letnei-davnosti (date of access: 11/28/2024)].

3. Charges of crimes and administrative offenses arising from election interference

On January 22, 2024, the New Hampshire Attorney General's Office launched an investigation into automated voice messages (so-called robocalls) received by thousands of residents of the state. The messages contained false information about the need to "save your vote for the November elections" and claimed that "your vote matters in November, not this Tuesday." The voice of US President Biden in the recorded messages had been cloned using a neural network algorithm. It was also established that the message was the result of spoofing: it was disguised as coming from the treasurer of the political committee that supported Biden's campaign in the US Democratic Party primaries.

The investigation established that the organizer of the robocall campaign (robocalls being a routine feature of the US electoral process) was one S. Kramer. Statements were received from thirteen victims, and he was charged under two provisions of New Hampshire law:

13 felony counts under RSA 659:40, III (a criminal violation related to the suppression of electoral activity). The provision states: "No one should engage in suppressing electoral activity by knowingly trying to prevent or deter another person from voting or registering to vote based on fraudulent, deceptive, misleading or false information." S. Kramer is charged with using the voice of the President of the United States, cloned with neural network technologies, and communicating false information in order to dissuade 13 voters from participating in the primaries on January 23, 2024.

13 administrative offenses (misdemeanors) under RSA 666:7-a (impersonation of a candidate), which establishes liability for falsely presenting oneself as a candidate in a phone call. S. Kramer was accused of making calls, personally or through other persons, to 13 voters while posing as a candidate.

As a result, by decision of the US Federal Communications Commission, the telecommunications company Lingo Telecom, whose equipment had been used to place the robocalls, paid a fine, and criminal cases were initiated against S. Kramer. The Federal Communications Commission also brought charges against S. Kramer, setting the proposed fine at $6 million [Consultant behind deepfaked Biden robocall fined $6m as new charges filed. US elections 2024. URL: https://www.theguardian.com/us-news/article/2024/may/23/biden-robocall-indicted-primary (date of access: 11/28/2024)]. The investigation of the case is ongoing.

This case demonstrates a situation in which offenders achieve an instant (as is typical of elections) and large-scale effect. Here, forensic establishment of the circumstances, even with effective expert conclusions, comes too late: given the time needed for examinations and research, the result of the unlawful act can no longer be prevented.

4. Deepfake phonogram as evidence in divorce proceedings

Deepfakes are presented as evidence not only in criminal cases but also in civil ones. An illustrative example is a child custody case considered in the UK. During the divorce proceedings, the wife presented as evidence a phonogram purporting to capture the voice and speech of her husband threatening the life and health of his wife and their child. Evidence of this kind certainly reduced the husband's chances of obtaining custody of the child.

A phonoscopic examination appointed by the court subsequently established that the file had been altered after recording: synthesized fragments had been inserted into it [P. Ryan. Deepfake Audio Evidence used in U.K. Court to Discredit Dubai Dad. The National UAE. URL: https://www.thenationalnews.com/uae/courts/deepfake-audio-evidence-usedin-uk-court-to-discredit-dubai-dad-1.975764 (date of access: 11/28/2024)].

The wife had generated the threatening fragments of the phonogram from her husband's voice messages in messengers. Those voice messages served as the data array for generation according to specified parameters (cloning, in the terminology of neural network generation); their large volume made it possible to achieve a high degree of similarity of voice and speech.

The key circumstances determining the process and results of proof in such cases are, in essence, the attacker's competence and the availability of a large volume of samples of the cloned voice. Accordingly, one tool for identifying the persons potentially involved in the generation may be the expert phonoscopist's solution of the diagnostic problem of which data set was used for generation (judging, for example, by its volume or by the amplitude-frequency characteristics of the sample phonograms).
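
As a rough sketch of this diagnostic idea (assuming the librosa library is available; the file names, candidate collections and Euclidean distance are hypothetical illustrations, not a validated expert method), the long-term average spectrum of a disputed phonogram can be compared with the average spectra of candidate source sets:

```python
# Compare the amplitude-frequency profile of a disputed phonogram with the
# profiles of candidate data sets that could have been used for cloning.
import numpy as np
import librosa

def spectral_profile(path: str, sr: int = 16000) -> np.ndarray:
    """Long-term average log-magnitude spectrum of a recording."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    spec = np.abs(librosa.stft(y, n_fft=1024))
    return np.log1p(spec).mean(axis=1)  # average the spectrum over time

disputed = spectral_profile("disputed_phonogram.wav")  # hypothetical path
candidate_sets = {
    "messenger_voice_messages": ["vm_01.wav", "vm_02.wav"],
    "studio_interviews": ["interview_01.wav", "interview_02.wav"],
}
for name, files in candidate_sets.items():
    mean_profile = np.mean([spectral_profile(f) for f in files], axis=0)
    dist = float(np.linalg.norm(mean_profile - disputed))
    print(f"{name}: distance to disputed = {dist:.2f}")  # smaller = closer
```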

An important role in establishing the circumstances of generation is also played by the expert's knowledge of the specific features of modern voice generation and cloning services, which is needed to answer the question of which program was used to synthesize or clone a particular voice.

It appears that developing methods for solving such diagnostic problems requires, first of all, organizing and keeping up to date phonogram libraries containing generation results. This area calls for separate theoretical and practical development.

5. Judicial practice of protecting exclusive rights to digital products

The first case known to us in Russian judicial practice is an arbitration dispute over compensation for violation of the exclusive right to a video between Reface Technologies LLC and Business Analytics LLC [Case No. A40-200471/23-27-1448. URL: https://kad.arbitr.ru/Card/4d7f0305-69af-44fe-8841-a59e84aa7deb (date of access: 11/28/2024)].

The court found that Business Analytics LLC had posted on the Internet a 31-second audiovisual work, the exclusive right to which belongs to Reface Technologies LLC.

The defendant argued that the plaintiff held no exclusive right in the object, citing the absence of evidence of the exclusive right to the disputed video and asserting that "the created video is not an object of copyright due to the use of Deep-fake technology."

The court rejected these arguments, pointing out that "Deep-fake technology is an additional tool for processing (technical editing) of video materials, and not a way of creating them." Accordingly, the technical editing of the "source materials of the video by means of deep-fake technology does not in itself indicate that the video is available for free use (without the consent of the copyright holder), or that the group of persons who provided the script of the video, the videography and the audio accompaniment made no personal creative contribution to the creation of the video and are not recognized as its authors."

However, given that there is currently no legislative definition of a deepfake, this position does not seem entirely correct to us. In previously published peer-reviewed articles we argued that "a deepfake is a digital product in the form of text, graphics, sound or a combination of them, generated in whole or in part using neural network technologies, for the purpose of misleading the user or overcoming access control and management systems" [1, p. 38].

Thus, on the facts of the case, the disputed audiovisual work was not a deepfake but a generated digital product created using neural network technologies: the company did not conceal the way it was created and pursued no illegal purpose in distributing it. Moreover, a deepfake is not a technology but the result of applying neural network technologies, and it is not a tool of technical editing.

Thus, another legal problem is the recognition of rights to digital products generated using neural network technologies. In this case, the court found the right of Reface Technologies LLC to have been violated and recovered compensation from Business Analytics LLC [Decision of the Moscow Arbitration Court (judge V. I. Krikunova) dated November 30, 2023 in case No. A40-200471/23-27-1448. URL: https://kad.arbitr.ru/Card/4d7f0305-69af-44fe-8841-a59e84aa7deb (date of access: 11/28/2024)].

6. Telephone fraud using neural network technologies

Various types of deepfakes are widely used in telephone scams and in scams via messengers. The specifics of such crimes have already been examined in the scientific literature: one publication, for example, discusses fraudulent schemes in which "fraudsters pose as heads of ministries and departments, rectors and other high-ranking officials" [8, p. 25]. Such schemes are implemented using both voice and graphic deepfakes, and they aim both at illegally obtaining the victim's funds and at the "theft" of personal data.

Deepfakes created from the biometric personal data both of persons close to the victim (relatives, friends, acquaintances) and of high-ranking officials and popular figures are actively used. In one recent case, for example, scammers created a Telegram channel in the name of the popular blogger Oksana Samoilova and posted voice messages and videos on her behalf (voice and graphic deepfakes) offering a way to earn money, as a result of which citizens who trusted the popular blogger transferred significant sums. The victims have already filed a fraud complaint with the police [Spreading panic and undermining faith in humanity. Russians are being attacked by deepfakes. URL: https://regnum.ru/article/3909268 (date of access: 11/28/2024)].

7. The practice of the Supreme Court of the Russian Federation

The Plenum of the Supreme Court of the Russian Federation has also turned its attention to the problem of deepfakes, mentioning in its rulings "misleading fake images, audio and audiovisual materials, including those created using computer technology" (paragraph 111) and the "dissemination to voters, referendum participants of misleading fake images, audio and audiovisual materials, including those created with the help of computer technologies" (paragraph 129) [Resolution of the Plenum of the Supreme Court of the Russian Federation No. 24 dated June 27, 2024 "On some issues arising during the consideration by courts of administrative cases on the protection of electoral rights and the right to participate in a referendum of citizens of the Russian Federation"], as well as the "use of misleading and unreliable images, audio and audiovisual information, including those created using computer technology" (paragraph 20) [Resolution of the Plenum of the Supreme Court of the Russian Federation No. 17 dated June 25, 2024 "On certain issues arising for courts when considering cases of administrative offenses infringing on the established procedure for the information support of elections and referendums"].

S. Kuzmichev, chairman of a judicial panel of the Judicial Board for Administrative Cases of the Supreme Court of Russia, presenting the ruling before the vote [The Supreme Court called the use of deepfakes in elections a violation of the rules of campaigning. URL: https://tass.ru/politika/21193603 (date of access: 11/18/2024)], and the media after him, stated that these resolutions provide a definition of the deepfake.

The draft law proposing to introduce additional qualifying elements into certain articles of the Criminal Code of the Russian Federation, establishing the use of a deepfake as a qualifying feature, also offers a formulation that is dubious from a practical standpoint: "using an image or voice (including falsified or artificially created) of the victim or another person, as well as using biometric personal data of the victim or another person" [Draft Federal Law No. 718538-8 "On Amendments to the Criminal Code of the Russian Federation" (introduced on 09/16/2024 by State Duma Deputy Ya. E. Nilov and Senator of the Russian Federation A. K. Pushkov) // Legislative Activity Support System. URL: https://sozd.duma.gov.ru/bill/718538-8 (date of access: 11/24/2024)]. The official opinion on this draft law likewise notes the vagueness of the "deepfake" wording used in it: "The wording 'falsified or artificially created' used in the draft law is indeterminate and may lead to an ambiguous interpretation of the proposed provisions" [Opinion on draft federal law No. 718538-8 of 11/08/2024. URL: https://sozd.duma.gov.ru/bill/718538-8 (date of access: 11/24/2024)]. Nevertheless, as reported in the media, the bill is "conceptually supported by the Supreme Court of the Russian Federation and the Government of Russia" [The Cabinet of Ministers and the Supreme Court supported the bill on punishment for deepfake fraud. URL: https://tass.ru/ekonomika/22401345 (date of access: 11/24/2024)].

Meanwhile, the absence of a definition of "deepfake" in Russian legislation can have an extremely negative impact on law enforcement practice: the proposed wording does not allow unambiguous qualification of actions involving its use and distribution.

In our previously published articles we described a list of potential threats from the use and distribution of deepfakes [9, pp. 174-176]. The cases above directly confirm that the described threats have already become real, and the various circumstances of cases in which crimes were committed using deepfakes are increasingly described in scientific articles [10].

Summarizing the cases considered in this article confirms the need not only to develop legal mechanisms for regulating generative content, but also to improve forensic counteraction to deepfakes. At the present stage, however, forensic support both in Russia and abroad lags significantly behind the pace of development of generative neural networks. In scientific terms, moreover, experts conducting forensic research have to rely on foreign papers, which today devote far more attention to creating specific algorithms for detecting deepfakes [11, 12].

8. Forensic support in cases of illegal distribution of generative content created using artificial intelligence technologies

In situations of illegal distribution of voice deepfakes, the tools of forensic phonoscopic examination must be employed [13, 14, 15]. Beyond methodological support for the examination itself, the questions formulated in the order or ruling appointing it must also be correctly posed.

First of all, a law enforcement officer must have the skills to formulate the expert task, especially at the current stage of methodological development, when consulting assistance from knowledgeable persons can be obtained only from the most qualified specialists, who, as noted earlier, are expected to be a minority among professionals in the field of forensic phonoscopy.

The classic formulation, familiar to most law enforcement officers, for the technical examination of phonograms is the question: "Are there any signs of non-situational changes in the presented phonogram 'Phonogram name.mp3', including editing, etc.?"

In the case of generated content, however, no changes are made to an existing phonogram (although, as described earlier, such cases are possible). It is important to understand that when a deepfake is presented as evidence, what is usually involved is the creation of a new phonogram reflecting a speech event that never took place.

Table 1. Formulations of questions submitted for forensic phonoscopic examination

| No. | Existing formulation in the methodological recommendations | Incorrect formulation | Proposed formulation |
|---|---|---|---|
| 1 | "Are there any signs of non-situational changes in the presented phonogram 'Phonogram name.mp3', including editing, etc.?" | "Are there any signs of neural network audio signal generation in the presented phonogram 'Phonogram name.mp3'?" | "Is the voice and speech recorded on the phonogram 'Phonogram name.mp3' synthesized using neural network algorithms?" |
| 2 | (as in row 1) | (as in row 1) | "Was the phonogram 'Phonogram name.mp3' obtained by adding/combining fragments that were synthesized using neural network algorithms?" |
| 3 | "What are the technical characteristics and parameters of the device used for sound recording?" | "What properties, characteristics and parameters does the phonogram 'Phonogram name.mp3' on the presented media have?" | "What software (or software on which device) created the file 'Phonogram name.mp3'?" |

At the same time, from a practical point of view it should be borne in mind that question No. 3 in Table 1, "What software (or software on which device) created the file 'Phonogram name.mp3'?", lies, given the specifics of the research object, at the junction of the competences of the technical examination of phonograms and of computer-technical expertise.

To develop a scientific and methodological basis for the forensic phonoscopic examination of synthesized speech, including cloned voices, a fundamentally important task is to create a specialized data array (dataset) that includes representative samples of phonograms both of cloned voices and of the voices on which the neural network was trained. In our opinion, this approach will make it possible to develop techniques for detecting deepfakes using the methods of forensic phonoscopy. Within the National Strategy for the Development of Artificial Intelligence, the essence of this concept is captured by the term "data set", which was clarified in the latest edition of the Decree of the President of the Russian Federation:

"e) a data set is a composition of data that is structured or grouped according to certain criteria, meets the requirements of the legislation of the Russian Federation and is necessary for the development of programs for electronic computers based on artificial intelligence" [As amended by Decree of the President of the Russian Federation dated 02/15/2024 No. 124. Decree of the President of the Russian Federation dated 02/15/2024 No. 124 "On Amendments to the Decree of the President of the Russian Federation dated October 10, 2019 No. 490 "On the Development of Artificial Intelligence in the Russian Federation" and to the National Strategy approved by this Decree"].

In the context of forensic expertise, such a data set can be viewed as a specialized type of phonogram library designed for the tasks of forensic research. To date, the absence of such libraries is a significant obstacle to the development of this field in forensic institutions and educational organizations. This naturally slows the development and implementation of new scientific and methodological approaches to the study of synthesized speech and complicates the training and professional development of phonoscopy experts and of specialists in other expert specialties. Creating such phonogram libraries could be an important step toward improving expert technologies and increasing their reliability amid the active introduction of AI technologies into the generation of voice data.
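
A minimal sketch of how records in such a phonogram library might be organized, pairing every generated phonogram with the source recordings used for cloning and with the generation tool. The field names are our own recasting of the requirements described above, not an existing standard:

```python
# One record of an experimental phonogram library for phonoscopic research.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PhonogramRecord:
    file_path: str                      # audio file within the library
    speaker_id: str                     # donor of the voice
    is_generated: bool                  # True for synthesized/cloned phonograms
    generator: Optional[str] = None     # cloning service/model, if known
    source_files: List[str] = field(default_factory=list)  # training samples
    sample_rate_hz: int = 16000
    channel: str = "telephone"          # telephone, messenger, studio, ...

dataset = [
    PhonogramRecord("clones/spk01_clone.wav", "spk01", True,
                    generator="hypothetical-cloning-service",
                    source_files=["sources/spk01_vm1.wav",
                                  "sources/spk01_vm2.wav"]),
    PhonogramRecord("sources/spk01_vm1.wav", "spk01", False,
                    channel="messenger"),
]
print(f"{sum(r.is_generated for r in dataset)} generated "
      f"of {len(dataset)} records")
```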

At present, algorithms for detecting spoofing attacks are based on neural network technologies trained on limited data sets, namely phonograms generated by specific synthesis systems. The rapid development of neural network technologies, however, creates significant difficulties for developers of hardware and software solutions: they cannot adapt their systems and roll out updates in time, which degrades their effectiveness. As a result, automated spoofing detection methods aimed at identifying synthesized speech have not yet achieved high accuracy, which underscores the need for algorithms that are not tied to a specific neural network and for better tools for analyzing such phonograms.
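
The generalization problem can be reproduced in a toy numerical experiment: a detector trained to separate genuine recordings from the fakes of one synthesis system can miss the fakes of an unseen system entirely. The features below are simulated Gaussian vectors standing in for real acoustic features, so the example illustrates the logic only:

```python
# A detector trained on fakes from one synthesis system fails on another.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
genuine        = rng.normal( 0.0, 1.0, size=(500, 8))
fakes_system_a = rng.normal( 2.0, 1.0, size=(500, 8))  # seen in training
fakes_system_b = rng.normal(-1.5, 1.0, size=(500, 8))  # unseen system

clf = LogisticRegression().fit(
    np.vstack([genuine, fakes_system_a]),
    np.array([0] * 500 + [1] * 500),   # 0 = genuine, 1 = fake
)
print("system A fakes detected:", clf.predict(fakes_system_a).mean())  # ~1.0
print("system B fakes detected:", clf.predict(fakes_system_b).mean())  # ~0.0
```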

In November 2024, the head of the Ministry of Digital Development announced that the creation of a database of biometric samples of telephone scammers was under discussion as a means of combating them [The Ministry of Digital Development wants to start forming a database of the biometrics of telephone scammers. URL: https://tass.ru/obschestvo/22440773 (date of access: 11/28/2024)].

In his statement, M. I. Shadaev did not specify which biometric data are to be collected; however, given the nature of telephone fraud (which includes audio and video calls in various messengers), we assume that samples of the perpetrators' voices and appearance are meant.

The effectiveness of this measure under modern conditions, however, raises certain doubts. First, criteria for the suitability of such biometric samples for inclusion in the array must first be developed: for example, most acoustic signals that have passed through the telephone channel, which telephone scammers typically use, have very limited quality parameters for capturing voice and speech characteristics.

Second, because fraudsters very often use voices and images created with AI technologies, it must first be determined whether a disputed phonogram that is to be added to the notional "phono/video library" has been synthesized; at the present stage, as we have already noted, this is extremely difficult.

It is also unclear how this system would function, and whether specialists with expert knowledge of phonoscopic, video or portrait examinations would be involved in creating it and indexing its data. Even if that issue were resolved positively, the question remains whether such volumes of data could be analyzed by the existing staff of forensic institutions, and how those labor costs could be reconciled with their main functions of conducting forensic examinations and research.

The possibilities of searching various phonogram libraries for unidentified persons are, at the current level of science and technology, extremely limited. Notably, as of November 1, 2018, the Ministry of Internal Affairs of Russia abandoned the keeping of records of speech (voice) phonograms of unidentified persons [Order of the Ministry of Internal Affairs of the Russian Federation No. 585 dated 09/11/2018 "On Amendments to the Order of the Ministry of Internal Affairs of the Russian Federation No. 70 dated February 10, 2006 'On the organization of the use of forensic records of the internal affairs bodies of the Russian Federation'" // SPS ConsultantPlus]. With only a phonogram of an unknown person's voice and a library of the biometric data (here, voices) of telephone scammers, it is extremely difficult to run a search comparing the available voice samples with the disputed phonogram.

For example, ICAR Lab 3, specialized software for speaker comparison, can use an automatic comparison algorithm in "1:N" mode to search for a target speaker within a preset collection of phonograms. The module can also automatically assess the technical characteristics of the compared phonograms. However, the maximum number of phonograms (the parameter N) is 50.
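
Schematically, such a 1:N search scores a target phonogram against each of at most N = 50 stored phonograms and returns the candidates above a decision threshold. This is a generic sketch of the logic described above, not the ICAR Lab 3 interface; the embedding function is a placeholder for a real speaker-verification model:

```python
# Generic 1:N speaker search over a small preset collection of phonograms.
import numpy as np

N_MAX = 50  # the phonogram limit mentioned above

def embed(signal: np.ndarray) -> np.ndarray:
    # Placeholder: real systems extract speaker embeddings (e.g., x-vectors).
    head = signal[:256]
    return head / (np.linalg.norm(head) + 1e-9)

def one_to_n_search(target: np.ndarray, database: dict, threshold: float = 0.7):
    assert len(database) <= N_MAX, "preset exceeds the tool's phonogram limit"
    t = embed(target)
    scores = {name: float(np.dot(t, embed(p))) for name, p in database.items()}
    return {name: s for name, s in
            sorted(scores.items(), key=lambda kv: -kv[1]) if s >= threshold}

rng = np.random.default_rng(2)
database = {f"phonogram_{i:02d}": rng.normal(size=4000) for i in range(50)}
target = rng.normal(size=4000)
print(one_to_n_search(target, database))  # random signals rarely clear 0.7
```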

The intended database of biometric samples of telephone scammers would presumably be far larger, which requires a fundamentally new approach to analyzing the information. Note that in modern fingerprint registration systems the algorithms work with much smaller data volumes than phonogram files, so at the present stage it is hard to imagine what analogous means of structuring, labeling and searching a database could be applied to phonograms or video phonograms.

In addition, the creation of such a database also threatens the constitutional rights of citizens. Article 23 of the Constitution of the Russian Federation [adopted by popular vote on 12.12.1993, with the amendments approved by the all-Russian vote on 07/01/2020] provides: "2. Everyone has the right to privacy of correspondence, telephone conversations, postal, telegraphic and other communications. Restriction of this right is allowed only on the basis of a court decision."

To record telephone conversations, even with alleged telephone scammers, a mobile operator would need to obtain the subscriber's consent, which in practice would impede the prompt and effective operation of the projected system.

Furthermore, since no details of the project's implementation are yet available, it is necessary to understand how and on what grounds the biometric data of telephone scammers would be entered into the proposed database. Under the current legal regulation of these relations, this is obviously possible only after a court verdict enters into force, which in itself guarantees that such data will lag behind the current pace of technological development of neural network algorithms.

Thus, the creation of such a database of the biometric data of telephone scammers so far raises more questions, both technical and legal, than it offers solutions to the problems that have already accumulated.

In our opinion, what should be created are not libraries of samples of alleged fraudsters, but phonogram libraries containing the results of experimental phonoscopic research. To ensure forensic counteraction to the spread of deepfake content, unified scientific and methodological approaches must be developed for the study of acoustic signals generated using neural network technologies (covering both human voice and speech, and background noises, which fraudsters already use to mask the real circumstances of remote communication).

Conclusions

The analysis shows that the rapid development of artificial intelligence technologies, and of generative neural networks in particular, confronts law enforcement and forensic practice with new challenges. The review of domestic and foreign judicial practice involving deepfakes and other results of neural network generation presented as evidence demonstrates the complexity and multidimensionality of the tasks facing experts and, subsequently, law enforcement officers.

The key problems have already crystallized in law enforcement practice:

1. Uncertainty of legal regulation.

The absence of a unified definition of the deepfake and of regulation of generative content creates legal uncertainty. This complicates the qualification of illegal actions involving deepfakes and will inevitably make it harder to prove the relevant circumstances in court.

2. The technological complexity of the research.

Existing expert techniques do not always make it possible to reliably determine whether content was generated by neural network algorithms. This is especially true of phonoscopic examination and the analysis of synthesized or cloned voices.

3. Limited access to technological research capabilities.

The technological, financial and organizational accessibility of examinations and research remains limited, which will hinder the realization of the right to a defense and to a fair trial.

Despite the obviously small volume of such cases at this stage, in anticipating the measures needed for more effective law enforcement and expert activity we can already formulate some recommendations:

- Development of a scientific and methodological base.

It is necessary to create specialized phonogram libraries and datasets with examples of synthesized content and with access to the samples used for generation. This will support the training and professional development of experts and the development of new methodological approaches to identifying deepfakes. Organizationally, it will simplify such studies and reduce the cost of producing them.

- Interdisciplinary interaction.

An important first-stage task is to coordinate the work of law enforcement officers, experts and technical specialists in artificial intelligence technologies in order to create universal deepfake-detection algorithms that do not depend on specific generation technologies.

- Improvement of legislation.

Clear legal definitions and mechanisms for regulating the use of generative technologies need to be introduced, with special attention to the qualification of illegal actions committed using deepfakes.

As for specific changes to criminal procedure legislation, the following seem to us the most timely:

1. The digitalization of criminal procedure should take into account the fact that fake information, in the form of specific objects (deepfakes), will be presented as evidence in digital form. At a certain stage of technological development, given the actual availability of highly realistic neural network generation technologies, it therefore seems appropriate to supplement Article 196 of the Code of Criminal Procedure with a paragraph 6 reading as follows:

"6) the fact of generating digital data presented as evidence in a criminal case, when there is doubt about their reliability or it is necessary to establish the fact of using biometric personal data of a particular person in the process of such generation."

2. Another important aspect is the creation of conditions for the safe storage, processing and examination of synthesized content, so as to exclude its modification or its use for other illegal purposes wherever access to it is not restricted.

In this regard, it is logical to supplement Article 82 of the Code of Criminal Procedure with a provision on the procedure for storing digital evidence, including the results of neural network generation and biometric information used to create them:

"Physical evidence presented in digital form, including content generated using artificial intelligence technologies, must be stored with protective measures that exclude their loss, modification or unauthorized access."

At the present stage, such rules in Article 82 of the Code of Criminal Procedure of the Russian Federation would be far more in demand than the existing rules on handling such archaisms as videotape or film.

Evidentiary information in digital form has long entered criminal proceedings in large volumes. But neural network content generation technologies, by their very existence, spread and level of development, call into question the effectiveness of the existing criminal procedural regulation for the purpose of objectively establishing the circumstances of cases in which the evidentiary process involves the study of information in digital form.

In other words, doubts about the origin of a digital object now arise in court proceedings reasonably and inevitably, while the legal mechanisms for evaluating evidence and the methodological approaches to verifying it for possible neural network generation are at present either undeveloped or ineffective (owing to their insufficient maturity or their rapid obsolescence against the background of the boom in new neural network technologies).

References
1. Bodrov, N.F., & Lebedeva, A.K. (2023). The concept of deepfake in Russian law, classification of deepfake and issues of their legal regulation. Legal Studies, 11, 26-41. doi:10.25136/2409-7136.2023.11.69014 Retrieved from http://en.e-notabene.ru/lr/article_69014.html
2. Pfefferkorn, R. (2020). 'Deepfakes' in the Courtroom. Boston University Public Interest Law Journal, 2, 244-276. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4321140
3. Parygina, N. N. (2024). ‘Latent defamation: an offence with a ‘creative’ approach’, Law, 7, 171-180. doi:10.37239/0869-4400-2024-21-7-171-180
4. Myskina, K.M. (2024). ‘On some problems of payment for forensic examinations in civil proceedings’. Vestnik of O.E. Kutafin University (Moscow State Law Academy), 3(115), 137-143.
5. Chingovska, I. et al. (2013). The 2nd competition on counter measures to 2D face spoofing attacks. In Proc. IAPR Int. Conf. Biometrics (ICB), Jun. 2013 (pp. 1-6). Retrieved from https://www.researchgate.net/publication/303156337_The_2nd_competition_on_counter_measures_to_2D_face_spoofing_attacks
6. Frisella, M. et al. (2022). Quantifying Bias in a Face Verification System. Computer Sciences & Mathematics Forum, 3. Retrieved from https://www.researchgate.net/publication/360168993_Quantifying_Bias_in_a_Face_Verification_System
7. Yucer, S. et al. (2020). Exploring Racial Bias within Face Recognition via per-subject Adversarially-Enabled Data Augmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Pp. 83-92. Retrieved from https://www.researchgate.net/publication/340805716_Exploring_Racial_Bias_within_Face_Recognition_via_per-subject_Adversarially-Enabled_Data_Augmentation
8. Bodrov, N.F., & Lebedeva, A.K. (2023). 'The concept of deepfake in the Russian law, its classification and problems of legal regulation'. Legal Bulletin of DSU, 4(68), 173-181.
9. Alikhadzhiyeva, I. S. (2024). About new ways of committing crimes using personal data. Bulletin of Prikamsky Social Institute, 1(97), 22-30.
10. Pozdnyak, I. N. (2024). Digital threats in the modern world: deepfake technology. Forensic Expertise of Belarus, 2(19), 72-77.
11. Reis, P. M. G. I., & Ribeiro, R. O. (2024). A forensic evaluation method for DeepFake detection using DCNN-based facial similarity scores. Forensic Science International, 358. doi:10.1016/j.forsciint.2023.111747 Retrieved from https://www.sciencedirect.com/science/article/pii/S0379073823001974
12. Öztürk, S. B., Özyer, B., & Temiz, Ö. (2024). Detection of Voices Generated by Artificial Intelligence with Deep Learning Methods. In 32nd Signal Processing and Communications Applications Conference (SIU), pp. 1-4. Mersin, Turkiye. doi:10.1109/SIU61531.2024.10601078 Retrieved from https://ieeexplore.ieee.org/abstract/document/10601078
13. Bodrov, N. F., & Lebedeva, A. K. (2024). ‘Deepfake as an object of forensic expertise’, National and International Trends and Prospects of Forensic Expertise Development: Collection of Reports of the Scientific and Practical Conference with International Participation, 42-50. Nizhny Novgorod: NNGU.
14. Zubov, G. N., & Zubova, P. I. (2023). ‘Falsification of sound information with the use of artificial intelligence technologies. Features of technical research’, Vestnik kriminalistiki, 3, 5-26.
15. Spiridonov, M. S. (2023). ‘Artificial intelligence technologies in criminal procedural proving’, Journal of Digital Technologies and Law, 2, 481-497. doi:10.21202/jdtl.2023.20

First Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The subject of the research in the article under review is, as its title indicates, the judicial practice of establishing the circumstances in cases of illegal distribution of generative content created using artificial intelligence technologies. The declared boundaries of the study have been observed by the author. The methodology of the research, however, is not disclosed in the text of the article.

The relevance of the chosen research topic is undeniable and is justified by the author as follows: "Currently, artificial intelligence technologies (hereinafter referred to as AI) for creating generative content are developing extremely rapidly and are involved in a wide variety of areas of public life. At the same time, a practice of considering court cases is being formed in which digital traces representing AI-generated content are used as evidence. This content consists of digital products in various forms: text, graphics, sound, or a combination of them. Of course, the active development of neural network technologies in the field of generative content creation provides enormous opportunities for its illegal use and distribution. The relevance of the chosen topic is due to the fact that there are more and more cases when synthesized content is used to mislead users or to overcome access control and management systems [1, p. 38], in libel cases, in various frauds, and in civil cases (for example, divorce proceedings). In such situations, law enforcement officers face the task of establishing the circumstances related to the creation, use and distribution of content generated using neural network technologies: the technological difficulties of identifying such generative content and of proving the fact of its creation, both by neural network technologies in general and by the specific neural networks used; the expediency of appointing a forensic examination," and so on. Additionally, the author should list the leading researchers who have studied the problems raised in the article and indicate the degree to which these problems have been explored.

The scientific novelty of the work is manifested in a number of the author's conclusions: "The issue of payment for the production of a forensic examination may in some cases be a significant obstacle to the realization of the right to defense against charges or to objections to a claim. For example, in domestic civil procedure practice the costs of a forensic examination are, as a rule, imposed by the court on the party that petitioned for it [4]. Under Article 96 of the Civil Procedure Code of the Russian Federation, the sums payable to experts are deposited in advance, by the party that submitted the relevant request, into an account opened in accordance with the procedure established by the budget legislation of the Russian Federation. In criminal proceedings, financial expenses often arise even at the stage of non-procedural research, when a specialist's opinion is required to substantiate an application and its preparation is paid for out of the applicant's own funds. The production of a forensic phonoscopic examination is also associated with certain risks. Firstly, where the result of the expert study is a finding that it is not possible to answer the question posed by the court or the investigator, the applicant may effectively be deprived of the possibility of subsequent proof. Secondly, where forensic methods contain no sections devoted to identifying signs of synthesis, the chance of establishing the fact of synthesis is very small, since the expert will have no methodological support for identifying and describing typical signs of generation or the circumstances to be established when checking the synthesis hypothesis. Thirdly, where expert practice in such cases has not yet been formed, it is logical to assume that only the most highly qualified experts, of whom there are obviously not many among practitioners, will be able to establish the fact of generation"; "However, the lack of a definition of 'deepfake' in Russian legislation can have an extremely negative impact on law enforcement practice. The proposed definition of a deepfake does not allow for unambiguous qualification of actions related to its use and distribution. The results of summarizing the cases considered in this article confirm the need not only to develop legal mechanisms for regulating generative content, but also to improve forensic countermeasures against deepfakes. At the present stage, however, forensic expert support both in our country and abroad lags significantly behind the pace of development of generative neural networks"; "First of all, a law enforcement officer must have the skills to formulate an expert task, especially at the present stage of the development of methodological support, when consulting assistance from knowledgeable persons can be obtained only from the most qualified specialists, who, as we noted earlier, are expected to constitute a minority of the total number of professionals in the field of forensic phonoscopy. The classic formulation, familiar to most law enforcement officers, for the technical study of phonograms is the question: 'Are there signs of non-situational changes, including editing, in the presented phonogram "Name of the phonogram.mp3"?' However, in the case of generated content, no changes are made to an existing phonogram (although, as we described earlier, such cases are possible). It is important to understand that when a deepfake is offered as proof, it is most often a matter of creating a new phonogram reflecting a speech event that never took place," etc. Thus, the article makes a definite contribution to the development of domestic legal science and certainly deserves the attention of potential readers.

The scientific style of the research is fully sustained by the author. The structure of the work is logical. In the introductory part of the article, the author substantiates the relevance of the chosen research topic. The main part of the work consists of the following sections: "1. Criminal case against the headmaster"; "2. Criminal case against a hydrologist"; "3. Charges of crimes and administrative offenses based on the results of interference in elections"; "4. A deepfake phonogram as evidence in divorce proceedings"; "5. Judicial practice of protecting exclusive rights to digital products"; "6. Cases of telephone fraud using neural network technologies"; "7. The practice of the Supreme Court of the Russian Federation"; "8. Forensic support in cases of illegal distribution of generative content created using artificial intelligence technologies". The final part of the work contains conclusions based on the results of the study.

The content of the article corresponds to its title but is not free of shortcomings of a formal nature. For instance, the author writes: "The relevance of the chosen topic is due to the fact that there are more and more cases when synthesized content is used to mislead or overcome access control and management systems by the user [1, p. 38], in the framework of libel cases, various frauds, when considering civil cases (for example, divorce proceedings)" - "civil cases" contains a typo. The author further notes: "However, it seems to us that the central place in proving in such cases will be occupied by the expert's opinion. Since the primary problem is the difficulty in proving whether the presented digital product is generated using neural network technologies or not" - this should read: "However, it seems to us that the central place in proving such cases will be occupied by the expert's opinion, since the primary problem is the difficulty of proving whether the presented digital product was generated using neural network technologies or not" (note the punctuation).
The author writes: "Deepfakes are increasingly used in the implementation of defamation, as indicated, for example, by N. N. Parygina. 'It is now unprecedentedly easy to create a false impression that this or that person said or did something' [3, p. 178]" - this should read: "Deepfakes are increasingly used in the implementation of defamation, as pointed out, for example, by N. N. Parygina: 'It is now unprecedentedly easy to create a false impression that one or another person has said or done something' [3, p. 178]". Thus, the article needs additional proofreading: it contains typos, punctuation and stylistic errors (the list of typos and errors given in this review is not exhaustive!).

The bibliography of the study comprises 15 sources (scientific articles), including works in English. From a formal and factual point of view, this is sufficient. The author has managed to reveal the research topic with the necessary completeness and depth. The work has been done at a fairly high academic level. There is an appeal to opponents, both general and particular (Ya. E. Nilov, A. K. Pushkov, M. I. Shadaev, etc.), and it is quite sufficient. The scientific discussion is conducted by the author correctly. The provisions of the work are justified to the appropriate extent and illustrated with examples.

There are conclusions based on the results of the study: "The analysis showed that the rapid development of artificial intelligence technologies, in particular generative neural networks, creates new challenges for law enforcement and forensic activities. The generalization of domestic and foreign judicial practice related to deepfakes and other results of neural network generation being offered as evidence demonstrated the complexity and multidimensional nature of the tasks facing experts and, subsequently, law enforcement officers. The key problems have already crystallized in law enforcement practice: 1. Uncertainty of legal regulation. The lack of a unified definition of a deepfake and of regulation of generative content creates legal uncertainty. This complicates the qualification of illegal actions involving deepfakes, which in the future will inevitably make it more difficult to prove the relevant circumstances in court. 2. The technological complexity of the research. Existing expert techniques do not always make it possible to reliably determine the fact of content generation by neural network algorithms. This is especially relevant for phonoscopic examination and the analysis of synthesized or cloned voices. 3. Limited access to technological research capabilities. The technological, financial and organizational accessibility of expertise and research remains limited, which will hinder the realization of the right to a defense and to a fair trial. Despite the obviously small volume of such cases at this stage, in anticipating the measures needed for more effective law enforcement and expert activity, some recommendations that are already relevant can be formulated. Development of a scientific and methodological base: it is necessary to create specialized phonogram libraries and datasets with examples of synthesized content, together with access to the samples used for generation. This will support the training and professional development of experts and the development of new methodological approaches to identifying deepfakes; organizationally, it will simplify such studies and reduce the cost of conducting them. Interdisciplinary interaction: an important task at the first stage is to coordinate the work of law enforcement officers, experts and technical specialists in the field of artificial intelligence technologies in order to create universal algorithms for detecting deepfakes that do not depend on specific generation technologies. Improvement of legislation: clear legal definitions and mechanisms for regulating the use of generative technologies need to be introduced, with special attention to the qualification of illegal actions committed using deepfakes. As specific changes to criminal procedure legislation, the following seem to us the most timely: 1. The vector of digitalization of the criminal process must take into account the fact that fake information in the form of specific objects, deepfakes, will be offered as evidence in digital form. Thus, at a certain stage of technological development and the actual availability of highly realistic neural network generation technologies, it seems quite appropriate to supplement Article 196 of the Code of Criminal Procedure with paragraph 6 of the following content: '6) the fact of the generation of digital data presented as evidence in a criminal case, where there is doubt about its reliability or where it is necessary to establish the fact of the use of the biometric personal data of a particular person in the process of such generation'," and so on. These conclusions are clear and specific, possess the properties of reliability and validity, and undoubtedly deserve the attention of the scientific community.

The article submitted for review may be of interest primarily to specialists in the fields of administrative law, information law and criminal procedure, provided that it is slightly improved: the research methodology should be disclosed, the relevance of the topic additionally justified (within the framework of the remark made above), and the formal defects in the presentation eliminated.
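By way of illustration of the phonogram-library recommendation quoted in the review above, the sketch below shows one minimal way such a research dataset could be catalogued in Python. It is an assumption-laden illustration only: the field names, file names and CSV layout are hypothetical and describe neither an existing forensic resource nor the authors' own proposal.

# Illustrative sketch only: a minimal index for an experimental dataset that
# pairs genuine voice recordings with neural-network-cloned counterparts.
# All paths, field names and the CSV layout are hypothetical assumptions.
import csv
from dataclasses import dataclass, asdict, fields
from pathlib import Path


@dataclass
class PhonogramPair:
    speaker_id: str       # anonymised voice donor identifier
    genuine_file: str     # authentic reference phonogram
    cloned_file: str      # synthesized / cloned counterpart
    generator: str        # cloning tool or model, if known
    sample_rate_hz: int   # acquisition parameter relevant to examination


def write_catalogue(pairs: list[PhonogramPair], out_csv: Path) -> None:
    """Write a CSV index of genuine/cloned pairs for use in experimental
    phonoscopic studies and expert training."""
    with out_csv.open("w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=[f.name for f in fields(PhonogramPair)])
        writer.writeheader()
        for pair in pairs:
            writer.writerow(asdict(pair))


if __name__ == "__main__":
    demo = [PhonogramPair("spk001", "spk001_genuine.wav",
                          "spk001_cloned.wav", "unknown", 16000)]
    write_catalogue(demo, Path("phonogram_catalogue.csv"))

Pairing each genuine phonogram with its cloned counterpart, and recording the generation tool where known, is what would allow authentic and synthesized speech to be compared under controlled experimental conditions.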

Second Peer Review

REVIEW of the article "Analysis of the judicial practice of establishing circumstances in cases of illegal distribution of generative content created using artificial intelligence technologies".

The subject of the study. The article proposed for review is devoted to topical issues of establishing the circumstances in cases of illegal distribution of generative content created using artificial intelligence technologies. As noted in the article itself, "Judicial practice in the field under consideration is still being formed, certainly facing a number of difficulties, both legal and technological in nature. This article examines the domestic and foreign practice of judicial and investigative establishment of circumstances in cases of illegal distribution of synthesized content (deepfakes)." The specific subject of the study comprises, first of all, the provisions of Russian legislation, materials of judicial practice, and the opinions of scholars.

Research methodology. The purpose of the study is not stated explicitly in the article, but it can be clearly understood from its title and content: the consideration and resolution of certain problematic aspects of the judicial practice of establishing circumstances in cases of illegal distribution of generative content created using artificial intelligence technologies. Proceeding from this goal and the related objectives, the author has chosen the methodological basis of the study. As noted in the article itself, "The methodological basis of the research is the universal dialectical method, general scientific (description, comparison, generalization, modeling, etc.) and private scientific methods. We will analyze those, so far few, cases where synthesized content has been used as evidence in civil, arbitration and criminal cases. In our opinion, the volume of such cases in court proceedings will steadily increase in the near future, and a primary analysis and generalization of the specifics of such cases can be used to prevent judicial errors." In particular, the author uses a set of general scientific methods of cognition: analysis, synthesis, analogy, deduction, induction, and others. The methods of analysis and synthesis made it possible to combine and summarize the conclusions of various scientific approaches to the proposed topic, as well as to draw specific conclusions from the materials of judicial practice. The most important role was played by special legal methods. In particular, the author actively applied the formal legal method, which made it possible to analyze and interpret the norms of current legislation. An example is the following conclusion of the author: "the creation of such a base endangers the constitutional rights of citizens" under Article 23 of the Constitution of the Russian Federation (adopted by popular vote on 12.12.1993, with amendments approved during the all-Russian vote on 01.07.2020), part 2 of which provides: "Everyone has the right to privacy of correspondence, telephone conversations, postal, telegraphic and other communications. Restriction of this right is allowed only on the basis of a court decision." As the author notes, to record telephone conversations, even with alleged telephone scammers, the mobile operator would need to obtain the subscriber's consent, which from a practical point of view would create an obstacle to the prompt and effective operation of the designed system.
The possibilities of an empirical research method related to the study of judicial practice materials deserve a positive assessment. In particular, it is noted that the Plenum of the Supreme Court of the Russian Federation has also turned its attention to the problem of deepfakes, mentioning in its rulings "misleading fake images, audio and audiovisual materials, including those created using computer technology" (paragraph 111) and "dissemination of fake images, audio and audiovisual materials misleading voters or referendum participants, including those created using computer technology" (paragraph 129) [Resolution of the Plenum of the Supreme Court of the Russian Federation No. 24 of June 27, 2024 "On some issues arising during the consideration by courts of administrative cases on the protection of electoral rights and the right to participate in a referendum of citizens of the Russian Federation"], as well as "using misleading and unreliable images, audio and audiovisual information, including those created using computer technology" (paragraph 20) [Resolution of the Plenum of the Supreme Court of the Russian Federation No. 17 of June 25, 2024 "On certain issues arising for courts when considering cases on administrative offenses that infringe on the established procedure for information support of elections and referendums"]. Thus, the methodology chosen by the author is fully adequate to the purpose of the study and allows all aspects of the topic to be studied in full.

Relevance. The relevance of the stated issues is beyond doubt. The proposed topic is significant in both theoretical and practical respects. From the point of view of theory, the topic of establishing circumstances in cases of illegal distribution of generative content created using artificial intelligence technologies is complex and ambiguous. It is difficult to argue with the author of the article that "Currently, artificial intelligence technologies (hereinafter referred to as AI) for creating generative content are developing extremely rapidly and are involved in a wide variety of areas of public life. At the same time, the practice of considering court cases is being formed, in which digital traces representing AI-generated content are used as evidence. This content is digital products in various forms: in the form of text, graphics, sound or a combination of them." The examples from judicial practice given in the article clearly illustrate this point. Thus, scientific research in the proposed field should only be welcomed.

Scientific novelty. The scientific novelty of the proposed article is beyond doubt. First, it is expressed in the author's specific conclusions, among them, for example, the following: "The key problems have already crystallized in law enforcement practice: 1. Uncertainty of legal regulation. The lack of a unified definition of a deepfake and of regulation of generative content creates legal uncertainty. This complicates the qualification of illegal actions involving deepfakes, which in the future will inevitably make it more difficult to prove the relevant circumstances in court. 2. The technological complexity of the research. Existing expert techniques do not always make it possible to reliably determine the fact of content generation by neural network algorithms. This is especially relevant for phonoscopic examination and the analysis of synthesized or cloned voices. 3. Limited access to technological research capabilities.
The technological, financial and organizational accessibility of expertise and research remains limited, which will hinder the realization of the right to a defense and a fair trial." These and other theoretical conclusions can be used in further scientific research. Secondly, the author suggests ideas for improving the current legislation, in particular: "In this regard, it is logical to supplement Article 82 of the Code of Criminal Procedure with a provision on the procedure for storing digital evidence, including the results of neural network generation and the biometric information used to create them: 'Physical evidence presented in digital form, including content generated using artificial intelligence technologies, must be stored using protective measures that exclude its loss, modification or unauthorized access.' Such rules in Article 82 of the Code of Criminal Procedure of the Russian Federation will, at the present stage, be far more in demand than the rules for dealing with such archaisms as videotape or film." This proposal may be relevant and useful for law-making activities. Thus, the materials of the article may be of particular interest to the scientific community in terms of contributing to the development of science.

Style, structure, content.
The subject of the article corresponds to the specialization of the journal Legal Studies, as it is devoted to legal problems related to the formation of judicial practice on issues of deepfakes and artificial intelligence. The content of the article fully corresponds to its title: the author has considered the stated problems and has, on the whole, achieved the purpose of the study. The quality of the presentation of the study and its results should be assessed as fully positive. The subject, objectives, methodology and main results of the study follow directly from the text of the article. The design of the work generally meets the requirements for works of this kind; no significant violations of these requirements were found.

Bibliography. The quality of the literature used deserves high appreciation. The author draws on literature by researchers from Russia and abroad (N. F. Bodrov, A. K. Lebedeva, G. N. Zubov, P. I. Zubova, S. B. Öztürk, B. Özyer, Ö. Temiz and others). Of particular note is the author's use of a large number of materials of judicial practice, which gives the study a law enforcement orientation. Thus, the works of the above authors correspond to the research topic, are sufficient in number, and contribute to the disclosure of various aspects of the topic.

Appeal to opponents. The author has conducted a serious analysis of the current state of the problem under study. All quotations from other scholars are accompanied by the author's comments; that is, the author presents different points of view on the problem and argues for the one he considers more correct.

Conclusions, the interest of the readership. The conclusions are fully logical, as they have been obtained using a generally accepted methodology. The article may be of interest to readers for its systematic treatment of the author's positions on improving legislation concerning the illegal distribution of generative content created using artificial intelligence technologies. Based on the above, and summing up all the positive and negative aspects of the article, I recommend publishing it.
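As a purely illustrative footnote to the Article 82 storage proposal quoted in the second review ("protective measures that exclude its loss, modification or unauthorized access"), one common technical safeguard against undetected modification is a cryptographic integrity manifest. The Python sketch below, with hypothetical file and directory names, shows the general idea; it is not drawn from the article and is not a prescribed procedure.

# Illustrative sketch only: fixing and later verifying the integrity of
# digital evidence files with SHA-256 hashes. File names are hypothetical.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large media files do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_manifest(evidence_dir: Path, manifest: Path) -> None:
    """Store a hash for every evidence file at the moment it is secured."""
    hashes = {p.name: sha256_of(p)
              for p in sorted(evidence_dir.iterdir()) if p.is_file()}
    manifest.write_text(json.dumps(hashes, indent=2), encoding="utf-8")


def verify_manifest(evidence_dir: Path, manifest: Path) -> list[str]:
    """Return the names of files whose current hash no longer matches."""
    recorded = json.loads(manifest.read_text(encoding="utf-8"))
    return [name for name, h in recorded.items()
            if sha256_of(evidence_dir / name) != h]

Recording the hashes at the moment evidence is secured, and re-verifying them before examination or trial, is one simple way the "exclusion of modification" requirement could be made technically checkable.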