
World Politics

Potential threats of unauthorized use of political deepfakes during political elections: international experience

Vinogradova Ekaterina Alekseevna

ORCID: 0000-0001-8055-6612

PhD in Politics

Director, Research Center "Artificial Intelligence Technologies in International Relations"

102151, Russia, Moscow, Nerasovskaya str., 1, office 3

kata-vinogradova@mail.ru
 

 

DOI: 10.25136/2409-8671.2024.3.71519

EDN: KNTVCO

Received: 19-08-2024

Published: 05-10-2024


Abstract: The subject of this study is new political technologies based on the use of artificial intelligence. Its purpose is to identify the threats and risks associated with the unauthorized use of political deepfakes in international practice. To this end, the author analyzes concepts of the influence of political deepfakes on elections and identifies the types and risks of deepfakes in the political practice of different countries. The author defines a political deepfake as a special campaign using artificial intelligence technology to undermine the reputation of political leaders in order to change the course of an electoral contest or to discredit an incumbent politician. In addition to normative-institutional and historical methods, the research employs conceptual and content analysis as well as elements of secondary statistical data analysis. The article characterizes the main types of political deepfakes and provides examples of their malicious use. On the basis of the author's analysis, a classification of the types of political deepfakes that affect the cognitive functions of target audiences is introduced into scholarly circulation for the first time. This classification allows a fresh look at the problem of the cognitive impact of this type of disinformation on public opinion. The findings suggest that between 2017 and 2024 artificial intelligence technologies came to be actively used to manipulate target audiences during political elections. One of the main difficulties in regulating political deepfakes is their concealment through the use of mixed types of this form of manipulation.


Keywords:

Political deepfake, Artificial intelligence technologies, International relations, Disinformation, Digital avatar, Cognitive security, Political leader, Psychological information operations, Elections, Deepfakes Classification


Introduction

The relevance of the chosen research topic lies in the widespread use of artificial intelligence (AI) technologies, in particular deepfakes, to influence political actors during political elections and to destabilize the political situation in the international arena.

The article draws on the following groups of sources: statistical data from foreign and Russian research laboratories, policy strategies, reports and analytical briefs, judicial acts, interviews, media materials, and the scholarly works of foreign and domestic researchers.

The author of this study defines a political deepfake as a special campaign using artificial intelligence technology to undermine the reputation of political leaders in order to change the course of an electoral contest or to discredit an incumbent politician.

The development of digital technologies has revolutionized the media space, forcing traditional mass media to rethink their operating models. The transition from analog to digital technologies has changed not only the way content is delivered, but also its consumption and monetization [8, p.33].

A 2018 study by researchers at the Massachusetts Institute of Technology (MIT) showed that false news is 70% more likely to be reshared and spreads faster than reliable information from traditional media [37].

According to a 2022 Europol report, a major leap in the availability of deepfake technology came with the adaptation of the generative adversarial network (GAN), an architecture described by Ian Goodfellow of Google in 2014 [21, p.8].
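
For reference, a generative adversarial network trains a generator G against a discriminator D in a minimax game. A standard formulation of the objective, taken from the 2014 GAN literature rather than from the sources cited above, is:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

Here D learns to distinguish real samples x from generated samples G(z), while G learns to fool D; near equilibrium, generated faces and voices become statistically difficult to tell apart from genuine recordings, which is precisely what makes deepfakes convincing.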

This technology was first used in the fall of 2017, when an anonymous Reddit user with the nickname "deepfakes" published a number of pornographic videos that used the images of famous actresses [33, p.145].

At the end of 2018, public attention turned to the possible political risks of deepfake technology. The trigger was a video of Donald Trump in which he called on Belgium to withdraw from the Paris Climate Agreement. The video was distributed by the Flemish social democratic party (Vooruit) in Belgium in order to draw attention to the manipulation of public opinion on climate change. The video ended with a statement emphasizing its falsity: "We all know that climate change is a hoax, like this video" [33, p.145].

However, the 2020 incident involving Belgian Foreign Minister and Deputy Prime Minister Sophie Wilmès was recognized as the first officially malicious political deepfake. In the video, the politician delivered a fictional speech about the connection between COVID-19 and climate change [45, p.4].

Of great concern is the widespread use of deepfake technology in education, the media and culture, where it can distort historical facts and manipulate public opinion with the malicious aim of instilling inhumane and anti-democratic values in certain target audiences.

American researchers Ella Busch and Jacob Ware of the International Centre for Counter-Terrorism, in their report "The Weaponization of Deepfakes: Digital Deception by the Far-Right", note that extremists are likely to use synthetic media to achieve their goals: spreading false and provocative information, fabricating speeches by authoritative figures, election disinformation, and distorting social and political events with the aim of inciting violence. Deepfakes will supply extremists with ideological ammunition, allowing them to create "evidence" of alleged offenses that justifies extremist views [13, p.5].

Numerous sociological studies in recent years show that the use of political deepfakes during elections causes concern among target audiences and distrust of official media.

Thus, a 2023 study by the organization Luminate found that more than 70% of British citizens were concerned about the impact of deepfakes on the country's upcoming elections [39, p.2].

British researchers Tvesha Sippy, Florence E. Enock, Jonathan Bright and Helen Z. Margetts, in the article "Behind the Deepfake: 8% Create; 90% Concerned. Surveying public exposure to and perceptions of deepfakes in the UK", note that the share of political deepfakes in the UK in 2024 was 34.1% [39, p.1].

In 2019, Singapore adopted the Protection from Online Falsehoods and Manipulation Act (POFMA). POFMA applies not only to written content but also to "false or misleading" images and videos, which makes it possible to regulate deepfakes as well [2, p.231].

On January 10, 2023, China's law on deepfakes entered into force. According to the document, deepfake content carries risks of infecting networks and computers with viruses and of identity theft. Key new rules include the requirement to obtain users' consent for the use of their images and a ban on using this type of technology to distribute news. Before publishing content created with deepfake technology, it is mandatory to post a notice informing users that the image has been edited or modified. A creator who fails to comply must delete the content and may be held criminally liable for threatening national security [3, p.41].

As of 2024, according to a Ballotpedia report, 17 US states have adopted laws on political deepfakes [10, p.5].

Ahead of Brazil's municipal elections in October 2024, the country's Superior Electoral Court, which regulates the use of artificial intelligence in elections, banned the use of deepfakes in election campaigns to distort the image of political opponents [38].

To date, the governments of many countries are working step by step to adapt their legal systems to deepfakes. India, China, the USA, Singapore, Great Britain, South Korea, Australia, Japan, Austria and the European Union are the most active in countering the spread of malicious false content, including at the legislative level [1, p.38].

Typification of political deepfakes and types of malicious threats

In 2021-2024, the foreign scientific literature identified the main types of political deepfakes and described the risks associated with their impact on target audiences.

Polish researcher Agata Ziobroń, in the article "Political deepfake. Remarks de lege lata and postulates de lege ferenda", offers the following typification of deepfakes:

  • Political deepfake – used to ironically depict or discredit political figures;
  • Pornographic deepfake – carries the features of visual pornography;
  • Creative deepfake – used in education, art and cinematography;
  • Satirical deepfake – aimed at ridiculing the shortcomings of a certain group of people or an individual; it sometimes contains slander;
  • Terrorist deepfake – created to spread terrorist propaganda;
  • Evidentiary deepfake – created to present false evidence to judicial authorities; this type is the most dangerous in terms of subverting the aims of a trial;
  • Violent deepfake – used to threaten, to coerce certain actions, or to cause harm;
  • Mixed deepfake – may combine elements of two or more of the above types [41, p.80].

To date, two scientific hypotheses about the impact of AI technologies, and deepfakes in particular, on society have emerged.

The first hypothesis is advanced by Brazilian scholars Anderson Röhe Batista and Lucia Santaella, who believe that the use of generative artificial intelligence in political campaigns is currently dual in nature, capable of producing both positive and negative consequences. On the one hand, it can serve as a means of encouraging voter activism and engagement. On the other, it can deepen political polarization and create unforeseen threats such as the mass dissemination of disinformation, falsification, and manipulation of election results [23, p.193].

This view is shared by American scholars Ethan Bueno de Mesquita, Brandice Canes-Wrone, Andrew B. Hall, Kristian Lum, Gregory J. Martin and Yamil Ricardo Velez, who set out their position in the article "Preparing for Generative AI in the 2024 Election: Recommendations and Best Practices Based on Academic Research".

According to the authors, the use of deepfakes during an election campaign has both negative and positive aspects. Among the negative ones they highlight the deterioration of the information environment: research shows that generative artificial intelligence can create convincing deepfakes, which increases the likelihood of false information circulating during election campaigning. Of particular concern is the fact that voters turn to chatbots, which are not reliable sources, for factual information about elections.

The second negative aspect is microtargeting and manipulation. There is a risk that AI-driven microtargeting or emotionally manipulative chatbots could persuade voters to act against their interests and divide the electorate. Nevertheless, numerous sociological studies indicate that mass persuasion or manipulation of voters is unlikely, and that the perception of such manipulation is more worrying than its actual impact on elections.

On the positive side, the authors note that the use of generative artificial intelligence technologies in politics opens up prospects for creating accessible policy reviews, helping voters evaluate candidates, facilitating communication between citizens and the state, and leveling the playing field for campaigns with limited resources.

The American researchers' second observation concerns the centralization of data: because generative AI tools are concentrated in the hands of a few technology companies, concerns arise about the level of control those companies can exercise over political information, which can lead to serious problems of content moderation and bias [19, p.2].

A less optimistic view of the impact of political deepfakes on society is held by Peruvian researcher Matías Lavanda in his article "Deepfake: when artificial intelligence threatens law and democracy". He believes that political deepfakes can be far more dangerous than pornographic ones, since they can change the views of a certain segment of voters and, in extreme cases, cause general chaos [29, p.89].

This view is shared by German researcher Maria Pawelec, who notes in her article "Deepfakes as an opportunity for democracy?" that deepfakes can be used to attack political opponents and deepen social divisions, fueling conflicts both within a country and between states. Deepfakes are increasingly used in international disinformation campaigns, raising observers' concerns that they may undermine citizens' trust in generally recognized norms, which in turn erodes the foundations of democratic discourse [31, p.89].

Indonesian scholars, in the article "Ethics in the deepfake era: how to maintain the integrity of communication", note that deepfakes pose a significant threat to society, the political system and business, since they: 1) put pressure on journalists trying to separate real news from fake; 2) threaten national security by spreading propaganda and interfering in elections; 3) undermine citizens' trust in information provided by government agencies; and 4) raise cybersecurity issues for society [30, p.240].

A number of foreign scholars believe that the use of marketing techniques, in particular targeted advertising, to distribute political deepfakes can radically change the political situation. Thus, Canadian scholar Simon Robichaud-Durand, in his article "L'hypertrucage: an analysis of the 'deepfakes' phenomenon and recommendations", identifies several purposes of deepfakes: pornographic uses, which damage the victim's reputation, privacy and dignity [34, p.83], and political uses, which harm society by undermining trust in state institutions and are often employed during elections [34, p.85].

Dutch scholars T. Dobber, N. Metoui, D. Trilling, N. Helberger and C. de Vreese take a similar position in their article "Do (microtargeted) deepfakes have real effects on political attitudes?", arguing that microtargeting methods can amplify the effect of deepfakes by tailoring them to the receptivity of the target audience. They emphasize that deepfakes represent an effective and realistic form of disinformation [16, p.70].

Most modern research on the phenomenon of political deepfakes pays considerable attention to the classification of potential threats associated with the use of this technology in politics.

In 2021, the European Parliament's research service published the study "Tackling deepfakes in European policy", which identifies three main categories of harm that deepfakes can cause to society: psychological, financial and societal.

Psychological harm includes extortion, defamation, intimidation, bullying and the undermining of trust. Financial harm can take the form of extortion, identity theft, fraud (for example, in insurance and payments), stock-price manipulation, and damage to brand and reputation. Societal harm involves the manipulation of information in the media, threats to economic stability, harm to the justice system and science, erosion of trust, threats to democracy, election manipulation, and damage to international relations and national security [36, p.29].

German researcher Murat Karaboga, in his article "The regulation of deepfakes at the EU level: overview of a patchwork and classification of the Digital Services Act and the AI regulation proposal", highlights the risks associated with the use of deepfakes. The first is possible psychological harm, especially to those directly targeted, since deepfakes can be used for intimidation and slander [28, p.204].

The second risk is possible financial damage to both individuals and organizations, as deepfakes can be used for identity theft; this type of fraud can target organizations, companies or individuals [28, p.203]. In the future, deepfakes may also be used to damage brands or reputations, for example by spreading false claims about a business or the bankruptcy of its managers. As Russian scholars note in the textbook "International Security in the Age of Artificial Intelligence", posting videos of a certain kind online, for example with false statements about the dire financial situation of a large company, can even move the stock market in the interests of attackers. And it is quite obvious that including deepfakes in phishing mailings multiplies their effectiveness [7, p.163].

The third risk is possible social harm. This type of damage is debatable and is likely to be a medium- or short-term danger whose consequences will become apparent only with more frequent use of deepfakes or with a significant impact of individual deepfakes on society as a whole. Deepfakes can affect various spheres of society, but special attention is paid to the potential consequences for the media, the judicial system, scientific and economic institutions, national security and international relations.

Russian researcher Yu. V. Lukina notes that special attention should be paid to audio deepfakes created with the help of artificial intelligence, which are becoming a powerful new weapon on the online disinformation battlefield, threatening to accelerate the widespread dissemination of false information ahead of the 2024 elections taking place in many countries [6, p.45].

In his study "Artificial Intelligence, Deepfakes, and Disinformation", American researcher Todd Helmus identifies several types of threats associated with deepfakes: interference in elections, the deepening of social discord, and the erosion of target audiences' trust in the media [25, p.19].

Types of political deepfakes for cognitive influence on target audiences

The author of this study identifies several types of political deepfakes aimed at exerting cognitive influence on target audiences.

The political deepfake of an incumbent political leader, created on the basis of real video footage of that leader, is aimed at rapidly influencing mass consciousness and fostering distrust of the government; it can open the way to a coup d'état and sow moral panic.

A striking example is the malicious political deepfake of Russian President Vladimir Putin in June 2023. The false video broadcast a Putin "address" declaring martial law in a number of Russian regions [4, p. 54]. The viral video was shown not only on Russian social networks but also on the air of the interstate television and radio company Mir, which attackers hacked in order to broadcast the provocative video in Russia [42].

Deepfake phishing is a type of mixed deepfake. In November 2023, a new fraudulent scheme appeared in South Africa, targeting the reputation of well-known journalists. In early November, deepfake videos of South African Broadcasting Corporation (SABC) presenters Bongiwe Zwane and Francis Herd advertising fake cryptocurrency investments spread online.

The deepfake videos advertising the fraudulent investment scheme attracted considerable attention from the country's target audiences, and one of them, featuring Francis Herd, garnered more than 123,000 views on YouTube after its appearance on November 3. In response to the scam, Francis Herd and Bongiwe Zwane publicly denied any involvement in the AI-generated videos. Moshoeshoe Monare, head of SABC's news division, condemned the scam, stressing the need to protect the reputation of the broadcaster and its journalists. The incident not only undermined the credibility of individual journalists but also exposed a more serious threat to the media, making it harder for the public to distinguish real content from manipulated content.

A political deepfake avatar is a form of manipulation aimed at gradually changing the worldview of a target audience aged roughly 20 to 45. This audience consists of people who get their information through social networks and chatbots, as well as gamers. Most often, it is a target group that avoids face-to-face communication, prefers virtual interaction, and tends to become attached to virtual characters, which blurs the boundary between reality and the virtual world.

By function and value, existing avatars fall into three categories: 1) the media category (show-business stars, public figures); 2) avatars providing professional services (virtual experts, doctors, teachers); 3) life-companion avatars (pets, relatives) [44].

A striking example is the case in South Korea on December 6, 2021, when the main opposition presidential candidate from the People Power Party, Yoon Seok-yeol, used artificial intelligence in his election campaign to create a digital character, "AI Yoon". The avatar was reportedly extremely popular among young South Koreans, as were its cheeky, Trump-esque answers to online questions from voters and journalists. Yoon Seok-yeol recorded his voice and provided images to create the digital avatar, while its answers were written by campaign staff [32]. Korean presidential candidates capitalized on digital technology to win more support, using AI-powered virtual characters to stand in for them in election campaigns.

In South Korea, deepfakes are allowed in election campaigns provided they are labeled as "digital avatars", do not incite violence, do not lie and do not spread "fake news".

A political deepfake of a deceased politician belongs to the third category, designed for virtual interaction. It is used by radical and terrorist groups for recruitment through games and social networks. As Chinese psychologists note, when interacting with an avatar it is impossible to tell whether it is a person or a machine, which leads to anxiety and mental changes, causing cognitive uncertainty [46, p. 419]. Thus, this form of information presentation does not develop the mental skill of thorough analysis and understanding [9, p. 409].

This type of deepfake can serve as a recruitment tool for extremist groups. Online recruitment is becoming increasingly important for radicalization into violent extremism: in 2016, 90 percent of extremists were recruited through social networks [13, p. 7].

The political deepfake caricature belongs to the satirical deepfake type and is used for various political parodies. Like the preceding types, it is aimed at gradually influencing target audiences in order to seed negative information about a political leader in public opinion. This type of deepfake can become a serious threat to political debate, acting on the cognitive functions of target audiences in order to deceive. Very often, political deepfakes are disguised as satire in order to avoid deletion and continue misinforming target audiences [31, p. 94].

A striking example occurred on September 23, 2019 in Italy. The satirical show Striscia la notizia featured a fictional episode starring former Prime Minister Matteo Renzi. In the fake video, Renzi ridiculed some members of the government and of the Democratic Party, which he had left. During the broadcast it was not mentioned that the video was a fake; the news that the Renzi character was fabricated was posted later on the website of Striscia la notizia [14].

One of the most active innovators in political caricature through deepfake technology is the Brazilian marketer and video maker Bruno Sartori, who gained fame through a series of videos ridiculing political leaders [5, p. 42]. Using techniques for synthesizing human images and sounds, including re-rendering footage from famous television series and music videos, Sartori produces political satire, creating memes of Brazilian presidents J. Bolsonaro, L. da Silva and other politicians. The video maker is confident that a disinformation campaign using deepfake technology can radically change an election [12].

Thus, B. Sartori's satirical deepfake videos are a new instrument of soft power that uses national forms of intercultural communication (folklore, music, television series, literature) to influence the country's target audiences.

Unauthorized impact of political deepfakes in the 2023-2024 elections

In 2023-2024, the use of political deepfakes in elections went massively viral.

The year 2024 set a record for elections worldwide: presidential and parliamentary elections are being held in 64 countries that together account for more than half of the world's population.

According to Sumsub data for 2024, the number of deepfakes worldwide has increased by more than 245%. In some countries holding elections in 2024, such as the United States, India, Indonesia, Mexico and South Africa, the increase has been especially significant [15].

The growth in malicious deepfakes is especially noticeable in countries with elections scheduled for 2024: El Salvador and Venezuela (by 200%), India (by 280%), the USA (by 303%), South Africa (by 500%), Mexico (by 500%), Moldova and Chile (by 900%), Indonesia (by 1,550%) and South Korea (by 1,625%). In the European Union, where elections to the European Parliament were held in June, the number of fraud cases rose year on year in countries such as Bulgaria (3,000%), Portugal (1,700%), Belgium (800%), Spain (191%), Germany (142%) and France (97%) [15].
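
To read these percentages correctly: a growth of N% means the new count is (1 + N/100) times the old one, so South Korea's 1,625% corresponds to roughly a 17-fold increase. A minimal sketch of the arithmetic in Python (the percentages are taken from the Sumsub figures above; absolute counts are not given in the source):

def growth_multiplier(percent_increase: float) -> float:
    # A +245% year-over-year increase means 3.45 times the previous count.
    return 1 + percent_increase / 100

for country, pct in {"USA": 303, "Indonesia": 1550, "South Korea": 1625}.items():
    print(f"{country}: x{growth_multiplier(pct):.2f} versus the previous year")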

The Korea Times reports that, according to the Korean National Election Commission (NEC), 129 cases of the use of deepfakes related to the upcoming elections were identified between December 29, 2023 and January 1, 2024. All were found to violate the law on the election of public officials [24].

In October 2023, a few days before the parliamentary elections in Slovakia, a political deepfake of Michal Šimečka, leader of the Progressive Slovakia party, spread on social networks: in it he discusses plans to rig votes at polling stations with Monika Tódová, a journalist of the daily Dennik N [35].

In November 2023, Sergio Massa, the presidential candidate of the Union for the Motherland party, became the victim of a viral political deepfake with elements of doxing in Argentina. The malicious video showed the candidate with a bag of cocaine. It gained more than 190 thousand likes on TikTok and was actively circulated on WhatsApp [18].

In December 2023, a deepfake video of former Pakistani Prime Minister Imran Khan, who is serving a three-year prison sentence, appeared on social networks as campaign material for his Movement for Justice (PTI) party ahead of the country's general elections [26]. The four-minute clip showed the imprisoned leader sitting next to the national flag of Pakistan, calling for a vote for the PTI and accusing the current authorities of banning public rallies and of harassing and kidnapping members of his party.

In January 2024, deepfake phishing was aimed at undermining the reputation of Claudia Sheinbaum, the presidential candidate of the Morena party in Mexico [40]. Sheinbaum's image was used to create a mixed-type deepfake in which she invited Mexicans to invest in a purported financial platform. The deepfake was used to distort political statements, spread false information and manipulate public opinion, posing a serious threat to Mexico's political and financial environment ahead of the 2024 elections.

As many experts have noted, the Mexican elections saw the largest number of uses of artificial intelligence technologies to influence the course of an election campaign. Over the past two years, the use of deepfakes in Mexico has increased by 300%, fueling misinformation and manipulation on the Internet. According to a 2023 study by Kaspersky Lab, 72% of Mexican users are unaware that this technology exists [17].

The use of deepfake avatars during political elections became particularly salient after the election scandal in Indonesia in January 2024.

In mid-February 2024, presidential and parliamentary elections were held in Indonesia. On January 6, 2024, Erwin Aksa, a chairman of the Golkar political party, distributed on the social media platform X a deepfake video depicting the late Indonesian President Suharto. The video is a realistic simulation of the former president urging voters to vote for Golkar candidates: "I am President Suharto, the second President of Indonesia. I invite you to elect representatives of the people from Golkar."

The political deepfake avatar went viral, gaining 4.2 million views and 1,200 comments in five days [22]. At the end of December 2023, Indonesia's Ministry of Communications and Information Technology had issued a circular on the ethics of using artificial intelligence, but it was advisory only [20]. The scandal over the Suharto deepfake sparked a loud debate about the ethical and legal implications of such technologies for the country's political campaigns. This case of electoral manipulation once again shows that AI technologies can carry a malicious payload for controlling target audiences during political elections.

The use of political avatars to influence target audiences is a new stage in the development of the digital state. In 2024, the British technology company Smarter UK announced the launch of its first AI politician avatar [11].

This practice is not entirely new: it was employed in South Korea in 2022, when presidential candidate Yoon Seok-yeol used an official political avatar to attract voters.

British AI Steve is a virtual version of the politician Steve Endacott, who, running for the Conservative Party, was defeated in the 2022 local elections in Rochdale (Greater Manchester). The idea is that AI Steve will "rethink democracy" by letting voters vote on the actions AI Steve should take as a local MP; the real Steve Endacott would then carry out those decisions in parliament.

Approval or disapproval expressed through AI Steve will determine what actions Endacott takes: if more than 50% vote for a specific action, it will be carried out. The company believes this approach rests on the majority principle, a key aspect of democratic governance, under which the decisions with the most support should be adopted. Given the potential influence of voters on parliamentary decisions, experts at Smarter UK believe that using AI Steve can help build trust between voters and their representatives.
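
The decision mechanism described above amounts to a simple majority-vote rule. A minimal sketch in Python (hypothetical code for illustration, not Smarter UK's actual system; only the over-50% threshold comes from the description above):

def should_adopt(approvals: int, total_votes: int) -> bool:
    # An action is carried out only if strictly more than 50% of votes approve it.
    return total_votes > 0 and approvals / total_votes > 0.5

# Example: 62 of 100 constituents approve a proposed parliamentary action.
print(should_adopt(62, 100))  # True: the action would be carried out
print(should_adopt(50, 100))  # False: exactly 50% does not pass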

India leads in the use of political deepfakes in the 2024 elections. According to a study conducted by Adobe on the future of trust in India, almost 86% of Indian residents believed that malicious deepfakes could affect the results of elections in the country [43].

In May 2024, the Election Commission of India asked all political parties to refrain from using deepfakes and other forms of disinformation in their social media posts during the elections. The step was taken after the Commission was criticized for ineffective measures against such campaigns in the world's most populous country [27].

The published recommendations require political parties to remove all fake audio and video recordings within three hours of their discovery. Parties are also encouraged to identify and punish those responsible for creating fake content. The Election Commission's decision followed a petition asking the Delhi High Court to consider the issue.

Conclusion

The use of political deepfakes in the 2023-2024 elections points to the growing threat of digital disinformation technologies being used to influence target audiences in different countries, making it difficult for the public to distinguish real content from manipulated content, which can lead to political destabilization.

Numerous studies suggest that by 2026 up to 90 percent of online content may be synthetically generated, which means that deepfakes are likely to become a common vehicle for cybercrime and direct interference in elections [13, p.3].

The unauthorized use of political deepfakes during elections is a global problem. The modern scientific literature identifies various types of political deepfakes that influence target audiences during political elections. The classification of types of political deepfakes given in this study allows us to take a fresh look at the problem of the cognitive impact of this type of disinformation on public opinion.

Despite existing laws, restrictions and recommendations, the spread of disinformation through political deepfakes has grown substantially more viral in recent years. One of the main difficulties in regulating political deepfakes is their concealment through the use of mixed types of this form of manipulation.

A significant obstacle to regulating this technology is also the lack of generally recognized international standards. Another barrier to the effective study and implementation of artificial intelligence technologies in cybersecurity is the spread, in the English-language scientific literature, of a false negative image of Russia and China as leading countries in this field.

Based on the results of the analysis, the author offers the following recommendations:

— development of domestic deepfake detection tools (see the illustrative sketch after this list);

— conducting a wide information campaign about the technology of deepfakes for the public;

— coverage in the media and social networks of cases of the use of political deepfakes during elections;

— imposing fines on social networks and information resources for posting deepfakes on their platforms;

— the introduction of mandatory watermarks in deepfake videos;

— introduction of legislative regulation in the field of political deepfakes;

— monitoring of deepfake propaganda by digital police forces;

— creation of a specialized research group to analyze the impact of deepfakes on society.
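
As an illustration of the first recommendation: at its core, a deepfake detector is a binary classifier over media frames. Below is a minimal inference sketch in Python, assuming a pretrained two-class ResNet whose weights file ("detector_weights.pt") is hypothetical; production detectors additionally analyze audio, temporal artifacts and metadata:

import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical binary classifier: class 0 = authentic, class 1 = synthesized.
model = models.resnet18(weights=None, num_classes=2)
model.load_state_dict(torch.load("detector_weights.pt"))  # illustrative path
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def deepfake_probability(frame_path: str) -> float:
    """Return the model's estimated probability that a frame is synthesized."""
    x = preprocess(Image.open(frame_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return torch.softmax(logits, dim=1)[0, 1].item()

# Frames scoring above a chosen threshold would be flagged for human review.
print(deepfake_probability("frame.jpg"))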

References
1. Analytical review. (2023). Deepfakes in the digital space: main international approaches to research and regulation. Moscow: ANO ‘Competence Centre for Global IT Cooperation’.
2. Vinogradov, V., & Kuznetsova, D. (2024). Foreign Experience in Legal Regulating Deepfake Technology. Law. Journal of the Higher School of Economics, 2, 215-239. doi:10.17323/2072-8166.2024.2.215.240
3. Vinogradova, E. (2023). The malicious use of political deepfakes and attempts to neutralize them in Latin America. Journal «Latinskaya Amerika», 5, 35-48. doi:10.31857/S0044748X0025404-3
4. Vinogradova, E. (2024). Artificial Intelligence Technologies in the BRICS Political Agenda. Journal «Latinskaya Amerika», 1, 46-60. doi:10.31857/S0044748X0024415-5
5. Vinogradova, E. (2023). Artificial intelligence technologies and the rise of cyber threats in Latin America. Journal «Latinskaya Amerika», 3, 34-48. doi:10.31857/S0044748X0024415-5
6. Lukina, Y. V. The use of deepfakes in social and political life. Retrieved from https://rupolitology.ru/wp-content/uploads/2024/01/RP_27_Lukina.pdf
7. International Security in the Age of Artificial Intelligence. (2024). In two volumes. Volume 1. Textbook for universities. Edited by M.V. Zakharova and A.I. Smirnov. Moscow: Aspect Press.
8. The Image of Russia's Digital Future: Formation and Representation. (2024). Under the general editorship of V.V. Zotov, G.R. Konson, S.V. Volodenkov. Moscow: MIPT.
9. Prolegomena of Cognitive Security. (2023). Collective monograph edited by I.F. Kefeli. SPb.: Petropolis.
10. Ballotpedia's Artificial Intelligence Deepfake Legislation Tracker. (2024). Annual Report. State of Deepfake Legislation.
11. Britain’s first AI politician claims he will bring trust back to politics – so I put him to the test. Retrieved from https://www.deepl.com/ru/translator#en/ru/Britain%E2%80%99s%20first%20AI%20politician%20claims%20he%20will%20bring%20trust%20back%20to%20politics%20%E2%80%93%20so%20I%20put%20him%20to%20the%20test
12. Bruno Sartori: deepfakes, política e ameaças. Retrieved from https://revistatrip.uol.com.br/trip/bruno-sartori-deepfakes-politica-e-ameacas
13. Busch, Ella, & Ware, Jacob. (2023). The Weaponisation of Deepfakes. Digital Deception by the Far-Right. ICCT Policy Brief December. doi:10.19165/2023.2.0720
14. Che cosa sappiamo del deepfake di Renzi (e di tutti gli altri). Retrieved from https://pagellapolitica.it/articoli/che-cosa-sappiamo-del-deepfake-di-renzi-e-di-tutti-gli-altri
15. Deepfake Cases Surge in Countries Holding 2024 Elections, Sumsub Research Shows. Retrieved from https://sumsub.com/newsroom/deepfake-cases-surge-in-countries-holding-2024-elections-sumsub-research-shows/
16. Dobber, T., Metoui, N., Trilling, D., Helberger, N., & de Vreese, C. (2021). Do (microtargeted) deepfakes have real effects on political attitudes? The International Journal of Press/Politics, 26(1), 69-91. doi:10.1177/1940161220944364
17. En México 72% de las personas desconoce qué es un deepfake. Retrieved from https://notipress.mx/tecnologia/en-mexico-72-por-ciento-personas-desconoce-que-es-deepfake-11679
18. Es falso que en este video viral Sergio Massa recibe una bolsa con cocaína, lo que le arrojan son cartas. Retrieved from https://chequeado.com/ultimas-noticias/es-falso-que-en-este-video-viral-sergio-massa-recibe-una-bolsa-con-cocaina-lo-que-le-arrojan-son-cartas/
19. Bueno de Mesquita, E., Canes-Wrone, B., Hall, A. B., Lum, K., Martin, G. J., & Velez, Y. R. (2023). Preparing for Generative AI in the 2024 Election: Recommendations and Best Practices Based on Academic Research. Stanford Graduate School of Business and the University of Chicago Harris School of Public Policy, November.
20. Ethical guidelines on use of artificial intelligence (AI) in Indonesia. Retrieved from https://www.herbertsmithfreehills.com/notes/tmt/2024-02/ethical-guidelines-on-use-of-artificial-intelligence-ai-in-indonesia
21. Facing reality? Law enforcement and the challenge of deepfakes. (2022). An Observatory Report from the Europol Innovation Lab.
22. Fake Suharto video fuels debate on AI use in Indonesian election campaign. Retrieved from https://www.benarnews.org/english/news/indonesian/suharto-deepfake-used-in-election-campaign-01122024135217.html
23. Batista, A. R., & Santaella, L. (2024). Prognósticos das deepfakes na política eleitoral. Organicom, 21(44), 187-196. doi:10.11606/issn.2238-2593.organicom.2024.221294
24. Hàn Quốc: Chuyên gia lo ngại nội dung deepfake ảnh hưởng kết quả bầu cử. Retrieved from https://hanoimoi.vn/han-quoc-chuyen-gia-lo-ngai-noi-dung-deepfake-anh-huong-ket-qua-bau-cu-658823.html
25. Helmus, T. C. (2022). Artificial Intelligence, Deepfakes, and Disinformation. Retrieved from https://www.rand.org/pubs/perspectives/PEA1043-1.html
26. Imran Khan – Pakistan's Jailed Ex-Leader – Uses AI Deepfake To Address Online Election Rally. Retrieved from https://www.forbes.com/sites/siladityaray/2023/12/18/imran-khan-pakistans-jailed-ex-leader-uses-ai-deepfake-to-address-online-election-rally/
27. India urges political parties to avoid using deepfakes in election campaigns. Retrieved from https://techcrunch.com/2024/05/07/india-elections-deepfakes/
28. Karaboga, M. (2023). Die Regulierung von Deepfakes auf EU-Ebene: Überblick eines Flickenteppichs und Einordnung des Digital Services Act- und KI-Regulierungsvorschlags. Digitale Hate Speech, 197-220. doi:10.1007/978-3-662-65964-9_10
29. Lavanda Oliva, M. (2022). Deepfake: Cuando la inteligencia artificial amenaza el Derecho y la Democracia. Revista de Derecho y Tecnología, 2, 84-95.
30. Leliana, I., Irhamdhika, G., Haikal, A., Septian, R., & Kusnadi, E. (2023). Etika Dalam Era Deepfake: Bagaimana Menjaga Integritas Komunikasi. Jurnal Visi Komunikasi, 22(02), 234-243. doi:10.32699/device.v14i1.6984
31. Pawelec, Maria. (2024). Deepfakes als Chance für die Demokratie? The Nomos eLibrary, 89-101. Retrieved from https://doi.org/10.5771/9783748928928-89
32. Presidential candidates' computer-generated avatars heat up debate. Retrieved from https://www.koreatimes.co.kr/www/nation/2024/07/113_320192.html
33. Rini, Regina, & Cohen, Leah. (2022). Deepfakes, Deep Harms. Journal of Ethics and Social Philosophy, 22(2), 143-161. York University, Toronto. doi:10.26556/jesp.v22i2.1628
34. Robichaud-Durand, S. (2023). L'hypertrucage: analyse du phénomène des «deepfakes» et recommandations. Lex Electronica, 4, 78-98. Retrieved from https://doi.org/10.7202/1108807ar
35. Slovakia's Election Deepfakes Show AI Is a Danger to Democracy. Retrieved from https://www.wired.com/story/slovakias-election-deepfakes-show-ai-is-a-danger-to-democracy/
36. Tackling deepfakes in European policy. (2021). Study. Panel for the Future of Science and Technology. EPRS, European Parliamentary Research Service, Scientific Foresight Unit (STOA).
37. The spread of true and false news online. Retrieved from https://www.science.org/doi/10.1126/science.aap9559
38. TSE proíbe uso de inteligência artificial para criar e propagar conteúdos falsos nas eleições. Retrieved from https://www.tse.jus.br/comunicacao/noticias/2024/Fevereiro/tse-proibe-uso-de-inteligencia-artificial-para-criar-e-propagar-conteudos-falsos-nas-eleicoes
39. Sippy, T., Enock, F. E., Bright, J., & Margetts, H. Z. (2024). Behind the Deepfake: 8% Create; 90% Concerned. Surveying public exposure to and perceptions of deepfakes in the UK. arXiv:2407.05529v1 [cs.CY]. Retrieved from https://doi.org/10.48550/arXiv.2407.05529
40. Video circulating of Claudia Sheinbaum is apparently a 'deepfake'. Retrieved from https://mexiconewsdaily.com/politics/video-circulating-of-claudia-sheinbaum-is-apparently-a-deepfake/
41. Ziobroń, Agata. (2024). Political deepfake. Remarks de lege lata and postulates de lege ferenda. Rozprawy i Materiały, 1(34), 79-95. doi:10.48269/2451-0807-sp-2024-1-04
42. На ТВ показали «обращение Путина» о военном положении в ряде областей России. Это был взлом, а ролик оказался дипфейком. Retrieved from https://rusnewshub.ru/2023/06/05/
43. Per cent Indians believe misinformation and harmful deepfakes will affect future elections. Retrieved from https://indianexpress.com/article/technology/tech-news-technology/misinformation-and-harmful-deepfakes-will-affect-future-elections-in-india-adobe-9357027/
44. 聊天機器人能改善心理健康?研究:可有效協助憂鬱症患者緩解症狀. Retrieved from https://heho.com.tw/archives/256168
45. 오일석. (2024). 딥페이크(Deep Fake)에 의한 민주적 정당성의 왜곡과 대응 방안. 27 p.
46. 元宇宙中虚拟人相关的心理问题探讨. (2023). Psychology: Techniques and Applications, 7, 414-420.

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The subject of the peer-reviewed research is new political technologies using artificial intelligence: misleading voters through the creation and distribution of deepfakes. The author rightly links the relevance of this topic to the powerful influence that deepfake campaigns exert on political actors, with significant potential to destabilize the political space. The study of these technologies in politics has usually been limited to domestic processes (primarily election campaigns) because of their deeper impact on lay audiences; professional politicians and officials were assumed to be more rational and therefore competent to distinguish deepfakes from reality. The author, however, places the problem in an international context, exploring the features and consequences of the use of deepfakes at both the domestic and international levels, which in itself lends the study a degree of scientific freshness. Unfortunately, the author says nothing about the methodology used, leaving this work to the reader. From the context, however, it can be inferred that, in addition to normative-institutional analysis (of the specific institutions in various countries that regulate processes related to political deepfakes) and historical analysis (of the creation and transformation of those institutions), the research employed conceptual and content analysis (of the main approaches to and interpretations of deepfake technology in scholarly and journalistic literature, declarations of political actors and other media content), as well as some elements of secondary statistical data analysis. The correct application of these methods allowed the author to obtain results bearing signs of scientific novelty. This applies above all to the systematization of the types of deepfake technology and the typification of the effects they produce. Also of interest is the analysis of the use of deepfake technologies in the 2023-24 election campaigns and the transformation of these technologies in recent years that this analysis reveals. Finally, as noted above, there is scientific interest in the author's transfer of the problem to the international level and in the conclusion that there are no internationally recognized standards for regulating these technologies, which leads to numerous abuses at the national level, on the one hand, or, on the contrary, to connivance by the governments of various countries, on the other. Structurally, the work likewise raises no significant complaints: its logic is quite consistent and reflects the main aspects of the research.
The following sections are highlighted in the text: "Introduction", where the scientific problem is posed, its relevance argued, and primary conceptualization carried out; "Typification of political deepfakes and types of malicious threats", where a classification of political deepfakes and the consequences they generate is developed through an analysis of the relevant literature; "Types of political deepfakes for cognitive influence on target audiences", which reveals the features of each type of deepfake technology from the standpoint of its cognitive impact; "Unauthorized impact of political deepfakes in the 2023-2024 elections", where the transformation of deepfake technology in recent years is traced through the example of election campaigns in different countries; and "Conclusion", which summarizes the study, draws conclusions and outlines prospects for further research. The style of the reviewed article is scientific and analytical. There are a number of stylistic lapses in the text (for example, the stylistically awkward title of one of the sections, "Types of political deepfakes for cognitive influence on target audiences"), but on the whole it is written competently, in good Russian, with correct use of scientific terminology. The bibliography includes 46 titles, including sources in several foreign languages, and adequately reflects the state of research on the subject of the article. Engagement with opponents takes place in the conceptualization of political deepfakes. Among the article's particular merits are the fairly wide range of empirical material used for analysis and its highly relevant and interesting topic. GENERAL CONCLUSION: the article proposed for review can be qualified as a scientific work that meets the basic requirements for works of this kind. The results obtained by the author will be of interest to political scientists, sociologists, specialists in mass media, public policy, world politics and international relations, as well as to students of those specialties. The presented material corresponds to the scope of the journal "World Politics". Based on the results of the review, the article is recommended for publication.