
Litera

Features of fake content in the media space in the era of artificial intelligence development

Nerents Dar'ya Valer'evna

PhD in Philology

Associate professor, Department of Journalism, Russian State University for the Humanities

125993, Russia, Moscow, Miusskaya Square 6, office 525

ya.newlevel@yandex.ru
DOI: 10.25136/2409-8698.2024.7.43843

EDN: TETGUG

Received: 20-08-2023

Published: 28-07-2024


Abstract: The subject of this article is the specificity of fake content under conditions of the active development of artificial intelligence (AI). The first half of 2023 demonstrated how serious an impact AI can have on the life of society in general and on the information agenda in particular. In an era of continuous data streams, it is increasingly difficult to verify content and critically assess all incoming information. So-called deepfakes have become a genuinely pressing media threat, capable of causing mass unrest and shaping audience sentiment. It is therefore relevant and important to study AI-created fakes in order to identify ways to detect and expose them. The article addresses the concept and characteristic features of fake content as a threat to information security and the distinctive features of deepfakes as an AI product, and presents key steps for recognizing fakes. The scientific novelty of the research lies in revealing the role of deepfakes in the modern information space, the dangers they carry (including their instant replication through social networks), and the options for countering them within the media consumption of a broad audience. The article also demonstrates the need to develop further measures to raise media literacy and to anticipate the negative consequences that fakes can cause.


Keywords: fake, deepfake, artificial intelligence, media space, information, mass media, journalism, media literacy, information security, media threat


The modern media space consists of a continuous stream of information arriving from all over the world. In 2022, the total volume of data in the world amounted to about 97 zettabytes and, according to forecasts, will grow to 180 zettabytes by 2025 (Share of unique data and replicated data in the global datasphere in 2020 and 2024 // Statista. URL: https://www.statista.com/statistics/1185888/worldwide-global-datasphere-unique-replicated-data). A large share of this volume is no longer compiled manually but by artificial intelligence (AI). At the current stage of information and communication technology, it is neural networks that create databases, collect and aggregate information, compile reports, and even write posts and texts for media outlets. Increasingly, people are unable to distinguish a human product from one created by a "machine". Despite the obvious advantages (saving time and effort, the ability to process far larger volumes of information, faster workflows, relief from routine tasks, etc.), the media environment contains a number of information threats that can significantly affect the behavior and mood of society as a whole. This article focuses on the role of AI in creating and spreading such media threats, as their impact is becoming truly noticeable.

First of all, this applies to the creation and dissemination of information. Neural networks are becoming both an important link in the production of digital products and a generator of media content. Information on the Internet is so diverse and voluminous that even a critically minded person can be misled and manipulated. The data a person receives may be irrelevant, outdated, incomplete, or deliberately fabricated, and this last type is the most dangerous kind of media threat. We are talking about fakes, which, amid the active development of new technologies, have also changed and become practically indistinguishable from facts.

Researchers define fake in different ways, while unanimously agreeing on the essence of the phenomenon. Thus, A.D. Krivonosov understands a fake as false information replicated in the media as a news report [1, p. 175]. I.A. Sternin and A.M. Shesternina indicate that a fake can only be news, or a message posing as news, that contains some kind of statement, since an opinion or an assessment cannot be classified as fake news [2, p. 4]. S.N. Ilchenko defines a fake as a journalistic message published in the media that contains unreliable or unverified information not corresponding to reality [3, p. 12]. Yu.M. Ershov notes that a fake can also be called disinformation, since it represents the purposeful use of fictional news to undermine the reputation of an individual, a company, or even an entire institution [4, p. 246].

Thus, a fake can be defined as material aimed at misleading: in other words, false or untrue content. One of its distinguishing features is the sensational nature of the message: emotional presentation is paramount, and facts are secondary [5, p. 28].

The following are the reasons for the appearance of a fake:

1) information misheard or misinterpreted by the author;

2) economic reasons (the desire to gain financial benefits or increase the number of subscribers);

3) creating a negative attitude towards a certain point of view (or competitor);

4) deliberate misrepresentation for a political purpose.

Fakes can spread spontaneously and seriously affect public sentiment. Factors contributing to their spread include an insufficient level of audience awareness of the topic; "information hunger" (the inability to verify data or obtain information from official sources); susceptibility to sensational headlines and shock-style news presentation; distrust of, or a negative attitude toward, experts and expertise; and an unwillingness to reconsider one's own established views.

Moreover, the continuously growing flow of information, the inability to separate the essential from the secondary, the active development of new technologies (bots can replicate the necessary data without pause), and emotional instability (trolls can cause severe stress and mislead with provocative messages) create a favorable environment for an ever-growing flow of fake content of various kinds.

False materials can be published in different formats, whether a social-network post or a multimedia longread. However, the so-called deepfakes have the most destructive potential [6, p. 74]. Deepfakes are fakes created with the help of AI: the system combines a large number of photographic images and synthesizes a video from them. The program can plausibly predict how a person would react and behave in a given situation. In other words, the essence of the technology is that one part of the algorithm studies the image of an object in detail and tries to recreate it, until the other part ceases to distinguish the real image from the one created by the neural network.
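The two-part scheme described above is the classic generative adversarial network (GAN) setup. Below is a minimal sketch of that adversarial loop in PyTorch, offered purely as an illustration: the tiny fully connected networks and random tensors stand in for the large convolutional models and face datasets used in real deepfake systems.

```python
# Minimal sketch of the adversarial (GAN) training loop described above.
# Illustrative only: tiny MLPs and random "real" images stand in for a
# production face-synthesis pipeline.
import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),      # fake image with values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                   # real-vs-fake logit
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.rand(32, IMG) * 2 - 1   # placeholder for real photographs
    fake = generator(torch.randn(32, LATENT))

    # 1) One part (the discriminator) learns to tell real images from fakes.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) The other part (the generator) learns to produce images the
    #    discriminator accepts as real, until the two are indistinguishable.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```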

Deepfakes are almost impossible to recognize, since the videos are usually highly realistic [7, p. 69]. The technology became widespread in 2017 in the United States thanks to the development of "deep learning", in which AI, trained on big data, learns to reproduce certain patterns (models). Today deepfakes are used in many spheres of life. For example, in 2023 a fake Salvador Dalí opened his exhibition in Florida (Museum creates deepfake Salvador Dalí to greet visitors // YouTube. URL: https://www.youtube.com/watch?v=64UN-cUmQMs). There are well-known cases in which paintings are drawn by a neural network, dialogue is written for TV-series scripts, and realistic photographs and even scientific texts are generated. In journalism, deepfakes are used to conceal the face of a source who wishes to remain anonymous without blurring the image. However, there are also negative examples.

Deepfakes have been actively used to create fake news and fake videos. One of them is the infamous 2018 video in which former US President Barack Obama directly insults the then-current President Donald Trump (Fagan K. A viral video that appeared to show Obama calling Trump a 'dips---' shows a disturbing new trend called 'deepfakes' // Business Insider. URL: https://www.businessinsider.com/obama-deepfake-video-insulting-trump-2018-4). With this video its author, Jordan Peele, demonstrated a real information threat capable of seriously affecting public opinion and mass sentiment. Now a public figure, official, politician, or show-business representative can blame the neural network, claiming that a statement is the product of AI and that they were never there and never said any such thing. Whether this can be proven is a complex question that is only beginning to be raised in the legal, ethical, and scholarly fields. It is already clear, however, that such a psychological technique can win an audience's support. For example, producer Joseph Prigozhin dismissed as fake a scandalous audio recording of his conversation with businessman Farhad Akhmedov (Balasyan L. Joseph Prigozhin denies the authenticity of an audio recording of a conversation with billionaire Akhmedov criticizing the government // Kommersant. URL: https://www.kommersant.ru/doc/5899921); however, no evidence for his claim could be found.

AI is able to create fakes of such high quality that they are indistinguishable from the original. In the future, such technologies, through fabricated video or photo images, could not only ruin an individual's life but also provoke mass riots, rallies, and even military clashes.

Another sensational example was a fake photo of the Pope in a fashionable down jacket walking the streets of New York, produced with the Midjourney neural network (Gosteva A. Viral photo of the pope in a stylish down jacket turned out to be fake // Lenta.ru. URL: https://lenta.ru/news/2023/03/27/thepope1). Another showed the "arrest" of D. Trump, allegedly being forced into a police car (Makarychev M. Fake pictures of Trump's "forcible arrest" by police appeared on the Network // Rossiyskaya gazeta. URL: https://rg.ru/2023/03/21/v-seti-poiavilis-fejkovye-snimki-silovogo-aresta-trampa-policejskimi.html). It is also not uncommon for AI to be used to imitate a voice and spoof a phone number, allowing scammers to deceive gullible citizens and extort large sums of money.

Such examples demonstrate the serious threats AI poses to an unprepared audience. Manipulators and scammers pursuing their own goals can wield these "weapons" skillfully. The issue became especially acute with the creation of the ChatGPT-4 neural network, whose texts, photos, and videos are sometimes impossible to recognize and identify as generated. In the Russian Federation, AI development is treated cautiously, without any rush to introduce it into all areas of production.

At the same time, the creation of fake content, whether intentional or inadvertent, is considered a serious risk to the audience from the standpoint of using AI in the media. Social networks and blog content are a source of information not only for young audiences but also for many journalists who, chasing traffic and popularity, strive not so much to verify the news as to be first to publish an "exclusive". Thanks to AI, deepfakes can deceive even experienced reporters.

Another scenario is when the neural network itself makes mistakes by misreading notation (for example, rendering 1-7% as 17%, or 1925 as 2025). Such factual errors in news about stock quotes or financial transactions can have global consequences. Texts created by a neural network may also lack the context of what is happening, likewise leading the reader to the wrong conclusion. A toy sanity check against such misreadings is sketched below.
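As an illustration of how a newsroom might guard against such misreadings, here is a small check that flags numbers appearing in AI-generated copy but absent from the source document. The function name and sample texts are hypothetical; a real pipeline would need far more careful normalization of ranges, dates, and units.

```python
# Toy sanity check: flag numbers in generated copy that never occur in the
# source document (e.g. "1-7%" silently becoming "17%").
import re

NUM = re.compile(r"\d+(?:[.,]\d+)?%?")

def numeric_mismatches(source: str, generated: str) -> set[str]:
    """Return numbers present in the generated text but not in the source."""
    source_numbers = set(NUM.findall(source))
    return set(NUM.findall(generated)) - source_numbers

source = "Revenue grew by 1-7% between 1925 and 1930."
draft = "Revenue grew by 17% between 2025 and 1930."
print(numeric_mismatches(source, draft))  # {'17%', '2025'} -> needs review
```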

From an ethical point of view, an important issue is the absence of any indication of when material is published by AI and when by a real journalist. The news feed of many online publications, for instance, is built on the principle of no authorship, which prevents the reader from seeing who wrote a given text and how. The sports outlet Sports.ru, for example, uses AI to maintain its sports chronicle and generate various headlines, which sometimes contain errors and inaccuracies. The next question then arises: who is responsible for inaccuracies, factual errors, and misinterpretations of data or comments? And does the audience agree to read texts generated by neural networks or to watch news presented by virtual anchors?

The audience's distrust of neural networks and computer algorithms can become a significant obstacle to the development of AI in the media sphere as a whole. According to a VTsIOM study, in 2022 about a third of Russians surveyed (32%) did not trust AI technologies. The strongest fears concern leaks of the data such systems collect, their use for self-serving purposes, and the risk of decisions for which no one is responsible (The Russians named their main fears of artificial intelligence // RBC. URL: https://www.rbc.ru/society/28/12/2022/63ab45de9a7947664c3ef893). Respondents also believe that AI will lead to the degradation of the population (which may well be close to the truth, given recent trends in education, where students consider it acceptable to submit neural-network-generated text as an essay, a course project, or even a graduation thesis), as well as to risks of systematic failures and errors.

The shortage of qualified personnel to configure and manage such software also matters. It prevents a newsroom from digging deeper and exploring all of a neural network's capabilities, forcing it, as the saying goes, to skim the surface, using only the most accessible and obvious functions. This approach yields low-quality content and a negative audience attitude toward such experiments. As a result, the editorial office loses considerable money and cannot continue experimenting.

A significant difficulty is the lack of a complete picture of the world (algorithms create an "information bubble"): it is the AI that decides for the user what is important and what is secondary and can be ignored, preventing a person from seeing everything happening around them and focusing attention on only a small slice of it. This, in turn, contributes to the polarization of opinions, because modern audiences are less inclined to reflection and refuse to trust, or even perceive, information that does not match their beliefs.

In general, deepfakes represent a real media threat: unlike photos, whose retouching is nothing unusual, video recordings are still something users tend to trust. Deepfakes in video format are therefore highly likely to be perceived as authentic, and thanks to social networks they can be replicated in a matter of minutes. Here the main weapon is critical thinking. The more endangered people feel, the more susceptible they are to influence: they are less critical and spend less effort checking information. Even rational people can be misled, and manipulators exploit this skillfully. The following algorithm helps verify that received data is authentic:

1) read the entire material (the title does not always reflect the content or essence of the text);

2) determine the author and the date of publication (is it a major media outlet or a personal blog? how authoritative is the author, and how well do they know the topic?);

3) check the address bar (there may be "clones" that copy the pages of real media by changing a letter or adding a punctuation mark, or fake accounts whose names likewise differ by literally one character; a simple programmatic check for such lookalike domains is sketched after this list);

4) find and study the primary source of information;

5) watch for emotional statements, subjective judgments, and evaluative expressions (an emphasis on sensationalism or exclusivity): fake materials contain them in abundance, whereas in journalistic materials factual information comes first and emotional statements play a secondary role.
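To illustrate the address-bar check in step 3, the sketch below flags domains that sit within a small edit distance of a trusted outlet's domain, the pattern "clone" sites rely on. The trusted list and distance threshold are illustrative assumptions, not a vetted blocklist.

```python
# Toy illustration of step 3: flag "clone" domains that differ from a
# trusted domain by only a character or two, using Levenshtein distance.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

TRUSTED = ["kommersant.ru", "rg.ru", "lenta.ru"]  # illustrative list

def looks_like_clone(domain: str) -> str | None:
    """Return the trusted domain this one imitates, if any."""
    for real in TRUSTED:
        if domain != real and levenshtein(domain, real) <= 2:
            return real
    return None

print(looks_like_clone("kommersani.ru"))  # -> 'kommersant.ru' (suspicious)
print(looks_like_clone("kommersant.ru"))  # -> None (exact match is fine)
```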

However, when it comes to deepfakes, these rules may not be enough. With generated video, it is worth paying attention to image quality, the naturalness of body movements, glare in the eyes or on glasses, and the naturalness of lighting and background colors. In other words, only extreme care and a deliberate search make it possible to spot artificially created frames; one crude per-frame heuristic is sketched below.
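For the image-quality check, one possible heuristic (an assumption of this sketch, not a validated detector) is that synthesized face regions are often smoother than the surrounding frame. The snippet below measures sharpness via Laplacian variance in OpenCV and flags a face region markedly smoother than the whole frame; the threshold and the synthetic test frame are purely illustrative.

```python
# Crude per-frame heuristic: a deepfake face region often has less
# high-frequency detail than the rest of the frame. Laplacian variance
# is a standard sharpness measure; the 0.5 ratio is illustrative.
import cv2
import numpy as np

def sharpness(region: np.ndarray) -> float:
    """Laplacian variance of a BGR image region (higher = sharper)."""
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def face_suspiciously_smooth(frame: np.ndarray, face_box, ratio=0.5) -> bool:
    """True if the face region is much smoother than the whole frame."""
    x, y, w, h = face_box  # face location from any face detector
    face = frame[y:y + h, x:x + w]
    return sharpness(face) < ratio * sharpness(frame)

# Usage on a synthetic frame: noisy background, blurred "face" region.
frame = np.random.randint(0, 255, (480, 640, 3), np.uint8)
frame[100:200, 100:200] = cv2.GaussianBlur(frame[100:200, 100:200], (21, 21), 0)
print(face_suspiciously_smooth(frame, (100, 100, 100, 100)))  # True
```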

For the most part, only users themselves can distinguish a fake from the truth. The largest and most high-profile fakes will, of course, be publicly exposed, but they are only a small part of the untruth a person faces every day. The ability to critically assess any important information, to rely only on verified sources, and to double-check data comes to the fore. In other words, it is more important than ever to keep improving media literacy and to follow the basic rules of information security.

References
1. Krivonosov, A.D. (2022). Evolution of Fakes in the Age of Digitalization. Proceedings of St. Petersburg State University of Economics, 6(138), 174-177.
2. Sternin, I.A., & Shesternina, A.M. (2020). Markers of fake in media texts. Working materials. Voronezh: OOO «RITM».
3. Ilchenko, S.N. (2021). Fake Control, or News You Shouldn't Believe: How the Media Fool Us. Rostov n/D: Feniks.
4. Ershov, Yu.M. (2018). The phenomenon of fake in the context of communication practices. Bulletin of Tomsk State University. Philology, 52, 245-256.
5. Galyashina, E.I., et al. (2023). Faking as a means of information warfare in the Internet media: scientific and practical manual. Moscow: Blok-Print.
6. Kolesnikova, E.V. (2022). Deepfake technology in journalism. The world of modern media: new opportunities and prospects: collection of scientific papers. Pp. 74-77. Moscow: Znanie-M.
7. Smirnov, A.A. (2019). "Deep Fakes". Essence and assessment of potential impact on national security. Free thought, 5(1677), 63-84.

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

Evaluation and analysis of fake content in the media space is increasingly common in scholarship. The topic is indeed relevant and interesting, and it is considered from different angles: the emergence of this segment, its distribution, and the definition of its role and functions. The author's choice is therefore well justified, and the work supplements an already sizable body of sources (RSCI, the Internet, VAK publications, etc.). Moreover, fakes and their spread are linked here to the galloping development of artificial intelligence. The work, in my opinion, is summarizing in character; it offers no productive conceptual idea that would be distinct, original, and clearly new. However, a general, systematic assessment of the available data is also necessary, because it makes it possible to predict the further development of scientific thought. The purpose and tasks set by the author of the reviewed essay are fulfilled; the methodology of analysis is of an empirical-systemic type; an explicitly stated hypothesis about the nature of fake content would not have hurt the work, nor, perhaps, would a polarized framing of this rather difficult problem. The style tends toward the scientific type, as in the following fragments: "fakes can spread spontaneously and seriously affect public sentiment. Factors contributing to their spread include an insufficient level of audience awareness of the topic, 'information hunger' (the inability to verify data or obtain information from official sources), susceptibility to sensational headlines and shock-style news presentation, distrust of or a negative attitude toward experts and expertise, and an unwillingness to reconsider one's own established views"; or "false materials can be published in different formats, whether a social-network post or a multimedia longread. However, the so-called deepfakes have the most destructive potential. Deepfakes are fakes created with the help of AI: the system combines a large number of photographic images and synthesizes a video from them. The program can plausibly predict how a person would react and behave in a given situation. In other words, the essence of the technology is that one part of the algorithm studies the image of an object in detail and tries to recreate it until the other part ceases to distinguish the real image from the one created by the neural network"; or "the shortage of qualified personnel to configure and manage such software also matters. It prevents a newsroom from digging deeper and exploring all of a neural network's capabilities, forcing it, as the saying goes, to skim the surface, using only the most accessible and obvious functions. This approach yields low-quality content and a negative audience attitude toward such experiments. As a result, the editorial office loses considerable money and cannot continue experimenting", etc. The work is not devoid of a practical component, although much of what the author says lies "on the surface". The author deliberately emphasizes the preventive treatment of "fakes" and "deepfakes" in the text, which seems quite rational. For example: "in general, deepfakes represent a real media threat: unlike photos, whose retouching is nothing unusual, video recordings are still something users tend to trust.
Deepfakes in video format are therefore highly likely to be perceived as authentic, and thanks to social networks they can be replicated in a matter of minutes. Here the main weapon is critical thinking. The more endangered people feel, the more susceptible they are to influence: they are less critical and spend less effort checking information. Even rational people can be misled, and manipulators exploit this skillfully. The following algorithm helps verify that received data is authentic...", etc. The material, in my opinion, touches on a "fresh" topic and may be interesting and useful both for the experienced reader and for those just approaching the problem of analyzing "fake materials". The basic requirements for publication are met, the structure is maintained, and the volume is sufficient to cover the topic, although more examples and illustrations could have been introduced. The conclusions of the text are objective and correspond to the main body of data: "for the most part, only users themselves can distinguish a fake from the truth. The largest and most high-profile fakes will, of course, be publicly exposed, but they are only a small part of the untruth a person faces every day. The ability to critically assess any important information, to rely only on verified sources, and to double-check data comes to the fore. In other words, it is more important than ever to keep improving media literacy and to follow the basic rules of information security." The bibliographic sources are relevant, and the types of publications are varied. I recommend the article "Features of fake content in the media space in the era of artificial intelligence development" for publication in the journal Litera.