
Philosophy and Culture

Culturological deconstruction of ChatGPT's socio-cultural threats to the information security of Russian citizens

Bylevskiy Pavel Gennadievich

ORCID: 0000-0002-0453-526X

PhD in Philosophy

Associate Professor, Department of Information Culture of Digital Transformation; Department of International Information Security, Moscow State Linguistic University

119034, Russia, Ostozhenka str., 36, office 106

pr-911@yandex.ru

DOI: 10.7256/2454-0757.2023.8.43909

EDN: UZDRFW

Received: 22-08-2023

Published: 29-08-2023


Abstract: The subject of the study is the socio-cultural threats to the information security of Russian citizens associated with ChatGPT (Chat Generative Pre-trained Transformer, a machine generator of text responses that simulates dialogue). The object of the study is the assessment, in recent (2021-2023) scientific literature (journals ranked K1 and K2 by the Higher Attestation Commission and Q1 and Q2 in Scopus), of the balance between the advantages and the threats of generative language models based on "machine learning". The scientific novelty of the research lies in the culturological approach to analyzing the threats that the use of ChatGPT, as one of the technologies of "artificial intelligence", poses to the security of Russian citizens. The formulation of the classical Turing test, "to distinguish a person from a machine", is characterized as a scholastic abstraction; a more correct and productive approach is proposed instead: a socio-cultural assessment of the value of new computer technologies based on cultural axiology. The starting point of the analysis is the determination of the socio-cultural value (or, conversely, the damage) resulting from the use of generative language models based on machine learning. The contribution and responsibility of the various socio-cultural subjects involved in their creation and application (user, creator and developer) are then identified. The result of applying the proposed approach is a deconstruction of the discourse of the "philosophy of artificial intelligence" insofar as it uncritically relays developers' declarations intended for marketing and attracting financing. A hypertrophied perception, balancing precariously between utopia and dystopia, is assessed as a risk of incorrectly identifying and ranking threats to information security. Speculation about a hypothetical "superweapon of psychological warfare" masks current incidents of cross-border leakage of confidential data and the risk of being held liable for publishing deliberately false information and illegal content as a result of using ChatGPT. National security measures are recommended, including restrictive measures and raising users' general civil culture of information security, as well as orienting domestic solutions of this type toward the traditional values, socio-cultural identity and interests of Russian citizens.


Keywords: ChatGPT, generative language model, artificial intelligence, Turing test, digital sovereignty, socio-cultural threats, information security, traditional values, socio-cultural identity, disinformation


Introduction

The appearance of new versions of ChatGPT (4.0 in March 2023), the best-known and most heavily promoted computer generator of text responses simulating dialogue, produces bursts of public and scientific interest. The latest computer technologies bring with them not only previously unknown advantages but also previously unknown threats to information and socio-cultural security. An analysis of the current scientific literature (publications for 2021-2023 in journals ranked K1 and K2 by the Higher Attestation Commission and Q1 and Q2 in Scopus) shows a wide spread of assessments of the balance between the advantages and the threats of ChatGPT, a generative language model based on "machine learning". It is understandable that many expectations look inflated: ChatGPT is a computer automation of socio-cultural activity belonging to the group of "end-to-end" technologies of "artificial intelligence" [1], whose role has grown significantly as a result of the digital transformation that began in the 2010s [2].

A factor sharpening attention to the security aspects of such services for Russian citizens is the complication of international relations, including relations with US-based global digital platforms, following the start of the special military operation in Ukraine in 2022. Exaggerated estimates of ChatGPT's capabilities, from utopia (extensive automation of authorial work [3], error-free management of society [4]) to dystopia (a superweapon of psychological warfare [5]), mask the real risks. Given the approval by the Russian Government on December 22, 2022 of the "Concept for the Formation and Development of a Culture of Information Security of Citizens of the Russian Federation", a correct assessment of the balance of opportunities and risks of such digital services for Russian users is in demand [6]. A culturological approach can contribute to an effective risk assessment and to the development of security measures for the use of ChatGPT, a publicly available foreign mass digital service, by Russian citizens in socio-cultural activities.

1. Generative language models as a technical means of culture

For a balanced, discipline-specific assessment of the socio-cultural opportunities and threats of ChatGPT as an "artificial intelligence" technology, a culturological approach is used. Developers' declarations about the capabilities of such solutions call for restrained critical reflection: the history of the "springs" and "winters" of "artificial intelligence" since the 1950s shows that capabilities were often greatly exaggerated for the sake of aggressive marketing and attracting investment. The poles of such inflated estimates are utopia (machines will do all the work and solve everything) and dystopia (machines will enslave or destroy people), and the line between them is not absolute: if machines can do everything better, what is the meaning of human existence?

Some "philosophers of artificial intelligence" can be blamed for the fact that they, under the strong impression of the success of technology, simply translated the amateurish views of the developers of new computer solutions on a person and his thinking, translating into the professional language of philosophical terms. Unjustifiably exaggerated estimates of the prospects for computer automation of technical systems are promoted by the view of a person, eloquently expressed in the most vaguely ambiguous concept of "artificial intelligence" and related terms [7]. Technical devices that are not even living organisms are endowed with human abilities. Examples are the usual computer "perception", "vision", "hearing", etc., widely used "interaction" [8], "cooperation" of a person with a computer [9] and other similar oxymorons (in the spirit of "communication of an employee with a hammer"). Computer "neural networks" are a buzzword [10], but in fact they model not the actual mechanisms of human thinking, learning [11], but the formalized mechanistic ideas of developers about it; the term "neuron" simply means a nerve cell, not necessarily a brain.

The prerequisite for the "humanization" of electronic computing technology is the mechanistic approach to the human being formed in the philosophy of the Modern era: not as an animate being or even a living organism, but as a man-made object (a mechanism, an automaton, a machine), in place of which a more productive one can be created. It is precisely this explicable but erroneous methodological approach that produces unrealistic, highly inflated estimates of both the advantages and the threats of computer automation. The concrete historical relations of people as socio-cultural subjects are reduced and simplified, viewed from the standpoint of computing, mechanics, electrical engineering and other related scientific and technical disciplines. Yet such assessments, answering to significant socio-cultural needs and the interests of influential groups, can greatly influence the behavior of many people and thereby the course of historical development.

From the point of view of the theory of culture, the only subject of socio-cultural activity is the human being and collectives of people (society). The technical objects used and manufactured by humans are tools and instruments of people's joint activity, of their active communication in work, in everyday life and in the transformation of social relations. Rock art, a library of clay tablets with cuneiform script, paper and pencil, a book and a statue, a brush and a canvas, a library and a museum, a telephone and a television, a social network, an Internet messenger, "machine painting" [12] and ChatGPT: all of these, though belonging to different stages of historical development, are equally technical means of people's joint socio-cultural activity.

The culturological approach to the human being and his technical tools corresponds not only to the labor theory of value in English classical political economy but also to the definition of the subject of law in modern jurisprudence. Culturology proceeds from the primacy and priority of human abilities, in relation to which technology has a secondary, instrumental character. There is not and cannot be a technology that does not depend on a person: a person invents, creates and uses it to expand his capabilities, raise his culture, and develop knowledge, skills and abilities, but also for confrontation with his own kind. The main subject of the creation and use of technology is the human being himself, and the main purpose of using technology is to create values that satisfy human needs (although values can be negative and cause damage).

Not being a socio-cultural subject, technology, from the stone axe to ChatGPT and beyond, can acquire the character of an advantage for, or a threat to, a person that comes from the same person (whether another or himself), depending on the social relations between them, including consciousness and will. Creating an axe allows a person to cut down a tree, but it is not the axe itself that cuts; it is a person, and not just anyone, but a woodcutter who knows how to use an axe properly. This example holds for tools of any degree of complexity, autonomy and automation (including "artificial intelligence" technologies) [13]: their skillful use expands human abilities.

2. A culturological alternative to the Turing test

We can offer a culturological version of the classic Turing test, which was designed to determine the "reasonableness" of a machine. The very formulation of the task, "to distinguish a person from a machine", looks like a scholastic abstraction outside the criteria of truth and value: the criterion is the opinion of the tester, and moreover in specially arranged laboratory conditions rather than in practical field conditions. Whatever the test result, an endless "sword and shield" competition arises between the tested algorithms and the testing techniques, depending on the scrupulousness or, on the contrary, the impressionability of the specialists conducting the test. The task of testing computer technologies by the method of cultural axiology is posed differently: first, to identify the value of the application, its nature and significance (or, conversely, the damage); and then the defining roles of the socio-cultural subjects: the user, the creator and the developer. What is more valuable and important in a medical diagnosis and the therapy recommended on its basis: their correctness, or the question of what technical means were used to formulate them [14]? This method can also be used to assess the benefits and risks of the use of ChatGPT by Russian citizens.

A similar approach is used in political economy to calculate wages and profits as distinct from the cost of material factors of production, and in the difference between the taxation of people (income tax, etc.) and of property (including means of production, computers among them). Computer equipment and software, regardless of complexity, level of automation or "human-likeness" (including solutions of the ChatGPT class), are characterized not as subjects but as technical means of violating information security or committing a crime.

Technical tools that expand human capabilities may look more or less independent, depending on their degree of autonomy, the level and scale of automation of the operations performed with their help, and the remoteness of the results in time and space. However, the people responsible for unlawfully causing damage to others with the help of any number of high-tech means, like those responsible for man-made accidents, are identified by investigation and brought to justice without regard to the imitative illusions generated by "artificial intelligence". In investigations of incidents and crimes, including those related to information security, the personal roles, participation and degree of guilt in causing damage are determined for each person involved.

From the point of view of the culturological approach under consideration, ChatGPT as an "artificial intelligence" technology is neither intelligence nor even chat, since this digital service only simulates human communication and any personality of an imaginary "interlocutor" [15]. The ChatGPT user who asks questions by typing has no interlocutor; what he perceives as one is an automated self-service facility. In fact, the user communicates with himself, "talking" through typed text: the user answers his own questions, having first looked up suitable answers in an electronic reference book. The generated text results issued in response to the user's request are not answers and are not of subjective origin; they are of an exclusively technical and material nature [16], like routes automatically plotted by Internet navigation services or a vending machine's preparation of the selected paid cocktail.

World culture has created a whole gallery of ironic and satirical images of such solutions, from the machine for combining books out of words at the Academy of Projectors described by J. Swift to Stanislaw Lem's computer poet, Trurl's "Electribald". In the 1990s, the first such computer programs were frankly called "bredogenerators" (nonsense generators) by their Russian developers. However, under certain conditions narcotic delirium (for example, that of the Pythian priestesses of the temple of Apollo in Ancient Greece) can successfully be passed off as divine prophecy with a hidden deep meaning.

The illusion of "communication" arises due to a complex psychological focus: uncritical perception of the name "chat" and the concept of "artificial intelligence"; aggressive advertising of developers broadcast by the mass press, as well as an imitation verbal shell that gives the user the results of queries on behalf of some illusory "I" ChatGPT. With the same success, you can create a verbal or voiced by the generated timbre of the voice "chat"-an add-on over any automated digital service, including search and navigation. The psychological illusion of "communication" disappears even for the most impressionable and gullible users if the results of requests are not issued on behalf of the ChatGPT "personality", but accompanied by a large, bright font notification that this is an automated electronic help service with a link to a detailed description of the principles of operation. The marking "18+" is also appropriate, given the need for minors to be accompanied by the use of ChatGPT by an experienced adult, a teacher [17], such as cases of legally mandatory mentions of "organization banned in the Russian Federation", "foreign agent", etc.

3. ChatGPT risks and security measures for Russian citizens

The content of the text "responses" to user requests generated by computer solutions of this kind, including ChatGPT, is determined by the following three main elements of the service (a minimal toy sketch follows the list):

1) a "library", a structured database of marked-up (formalized by key parameters) texts (created by people or automatically processed in one way or another), selected according to the rules formulated by the developers;

2) the parameters of "training", in fact the automated tuning and testing of algorithms for searching and selecting the fragments that formally best match the request (the criteria and degree of compliance are set by the developers, who control the results of processing);

3) algorithms for compiling the selected fragments into a coherent text in accordance with the rules of "natural language".
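
To make this three-element scheme concrete, below is a minimal toy sketch in Python. It illustrates only the scheme as described here: real systems of the ChatGPT class are transformer networks that generate text token by token from learned statistical weights rather than splicing stored fragments, and all names in the sketch (LIBRARY, select_fragments, compose) are hypothetical.

import re

# 1) The "library": a developer-curated, marked-up text collection.
LIBRARY = [
    {"topic": "tools", "text": "an axe cuts wood only in the hands of a woodcutter"},
    {"topic": "chat", "text": "a chat service simulates dialogue with its user"},
    {"topic": "chat", "text": "generated answers carry no links to sources"},
]

def select_fragments(query, library, top_k=2):
    # 2) "Training" parameters reduced to a crude relevance criterion set by
    # the developer: count the words shared by the query and each fragment.
    q_words = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        library,
        key=lambda row: len(q_words & set(re.findall(r"\w+", row["text"]))),
        reverse=True,
    )
    return [row["text"] for row in scored[:top_k]]

def compose(fragments):
    # 3) Compilation of the selected fragments into grammatical-looking text.
    return ("; ".join(fragments)).capitalize() + "."

# The result looks plausible but carries no guarantee of truth and no sources,
# which is precisely the article's point about services of this kind.
print(compose(select_fragments("does the chat answer with sources?", LIBRARY)))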

The first two elements depend entirely on the developers, including their policy of preferences and censorship and their selective application of criteria such as "hate speech" [18]. Technical specifications are approved in accordance with the interests of the owner and investors of the service and of the state in which they are resident, including its "influence groups". OpenAI, the developer of ChatGPT, is based in the USA, as is one of the project's largest investors and partners, Microsoft. Accordingly, the principles of selecting (and screening out) texts for the "library", as well as the criteria used to match fragments to the user's request when compiling a "response", must fully comply with OpenAI's corporate interests and US law.

It is extremely important that, unlike Internet search services, ChatGPT's "answers", or more precisely the results of machine processing of its existing "library" of texts against the user's request, contain no links to sources. Nor are there any other built-in verification tools ("fact-checking"); the developers guarantee users only a certain degree of plausibility, not the reliability or truth of the "answers". If modern versions of ChatGPT are to be classed as "artificial intelligence", then what we have is a model for generating "fake news" and falsifying history [19] in the style of an average, strongly politically engaged journalist of the US "yellow press".

From the standpoint of the applied culturological approach, the "ethical" risks of falsified authorship through the undeclared use of ChatGPT by journalists, scientists, students, etc. [20] are overstated. The role of ChatGPT in the creation of authorial texts does not differ fundamentally from that of any other technical means of culture: a quill pen and ink, a pencil, a fountain or ballpoint pen, handwritten, printed and electronic sources, libraries with catalogs, Internet search services, and so on. ChatGPT is nothing more than a tightly censored digital library with algorithms that issue results on request without references to sources or verification of truth ("fact-checking") [21]. Only high originality is guaranteed, not the reliability or semantic quality of the text generated on request. Quality control and evaluation, as well as responsibility, remain with the author using the service and with the specialists evaluating the texts created in this way (editors, etc. [22]). The fee for a publication created using ChatGPT will rightfully belong to the human author, as will liability, up to criminal liability, if the law is violated [23].

Thus, the risks of ChatGPT being used as a "superweapon" of anti-Russian psychological warfare, of falsified authorship, and of mass unemployment among journalists and other creators of texts can only be recognized as hypothetical and overestimated. But ChatGPT is already actually used by attackers to mask malicious code embedded in software products [24]. There have been incidents of leakage and publication of confidential information entered by users into requests without regard for the fact that this is a private cross-border service capable of publishing the information entered into it. Thus, in the spring of 2023, Samsung Corporation suffered significant damage from leaks of corporate trade secrets caused by employees' use of ChatGPT. One of the main reasons for such incidents is an insufficient culture of information security, both professional and among mass users [25].

Among the risks that have increased as a result of the anti-Russian technological sanctions of unfriendly countries are discriminatory actions by US-based global digital platforms [26] against Russian citizens and the official press. Should international relations deteriorate, OpenAI may strengthen the anti-Russian orientation of ChatGPT, increasing the risks for Russian citizens.

Results

The main result of the study is the conclusion that the socio-cultural risks of ChatGPT for Russian users can be minimized by means of state regulation and by improving users' culture of information security, as well as by creating domestic solutions of this class. As one means of state regulation, an obligation is proposed to label the ChatGPT interface with a warning that "chat" is only a trademark and that the service is in fact an automated, cross-border, self-service electronic library facility based in the United States and operating under the laws of that country. Texts issued on request should be accompanied by a warning about the need for verification, owing to the possible presence of deliberately false information, including information violating Russian legislation, for the dissemination of which the user will be held liable. In case of non-compliance with these requirements, authorized state bodies may take additional measures to restrict and block mass access to ChatGPT on the territory of Russia and in the Russian segment of the Internet.

The creators of domestic automated text generators can be advised to increase the share of independent development with regard for Russian traditional values and socio-cultural identity; to introduce into the results issued on request links to sources, automated fact-checking services and checks of compliance with Russian legislation; and to abandon the imitation of chat, which creates the illusion of communication (a minimal sketch of such labeling and sourcing follows).
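
The sketch below shows one way these recommendations might look in code, assuming a hypothetical generate_with_sources() backend; the warning wording, the Source type and all other names are illustrative assumptions, not a description of any existing service.

from dataclasses import dataclass

NOTICE = ("NOTICE: automated electronic reference service, not a person; "
          "results require verification and may contain false information.")

@dataclass
class Source:
    title: str
    url: str

def generate_with_sources(query):
    # Hypothetical stub: a real backend would retrieve documents, generate a
    # draft and run automated fact-checking before returning anything.
    draft = "Automatically compiled reference text for: " + query
    return draft, [Source("Example reference entry", "https://example.org/entry")]

def answer(query):
    # Issue the result as a labeled reference-service response with sources
    # attached, not on behalf of a chat "personality".
    text, sources = generate_with_sources(query)
    refs = "\n".join("[%d] %s: %s" % (i + 1, s.title, s.url)
                     for i, s in enumerate(sources))
    return NOTICE + "\n\n" + text + "\n\nSources:\n" + refs

print(answer("history of the printing press"))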

An effective risk-reduction measure is to raise the general civil culture of information security through educational organizations, the press and public-service advertising, social networks and messengers: explaining to the mass audience the essence and mechanisms of operation of machine text generators and the related threats and risks. It is also necessary to inform the mass audience about the risks of obtaining and distributing false information and about liability in cases where Russian legislation is violated, as well as about the threats related to the cross-border nature of the ChatGPT service and its possible use by its owners and developers to spread disinformation.

References
1. Gill, S., & Kaur, R. (2023). ChatGPT: Vision and challenges. Internet of Things and Cyber-Physical Systems, 3, 262-271. doi:10.1016/j.iotcps.2023.05.004
2. Soifer, V.A. (2021). Human factor. Ontology of Designing, 11(1(39)), 8-19. doi:10.18287/2223-9537-2021-11-1-8-19
3. Agathokleous, E., Saitanis, C., Fang, Ch., & Yu, Zh. (2023). Use of ChatGPT: What does it mean for biology and environmental science? Science of The Total Environment, 888. doi:10.1016/j.scitotenv.2023.164154
4. Mironova, N.G. (2021). Philosophical understanding of social risks of intellectual automation of social management. The Digital Scholar: Philosopher's Lab, 4(2), 125-144. doi:10.32326/2618-9267-2021-4-2-125-144
5. Goncharov, V.S. (2022). Application of combined artificial intelligence technologies in information and psychological warfare. Political science issues, 12(4(80)), 1118-1126. doi:10.35775/PSI.2022.80.4.015
6. Bulychev, I.I., Kazakov, V.G., & Kiryushin, A.N. (2023). The future of artificial intelligence: skeptics vs pragmatists. Military academic journal, 2(38), 10-21. EDN: FJVPRO
7. Gruzdev, A.A., Samarin, A.S., & Illarionov, G.A. (2023). Digital philosophy projects in the context of digital humanities development. Journal of Siberian Federal University. Humanities & Social Sciences, 16(7), 1165-1176. EDN: GBBFUA
8. Yesenin, R.A. (2023). Psychological challenges of digital reality: artificial intelligence today and in the future. Professionalʹnoe obrazovanie i rynok truda, 11(2(53)), 121-128. doi:10.52944/PORT.2023.53.2.009
9. Kreps, S., & Jakesch, M. (2023). Can AI communication tools increase legislative responsiveness and trust in democratic institutions? Government Information Quarterly, 40(3). doi:10.1016/j.giq.2023.101829
10. Plotnikova, A.M. (2023). Neural network as a keyword of the current moment. Philological class, 28(2), 45-54. EDN: TBIHHU
11. Guile, D. (2023). Machine learning – A new kind of cultural tool? A “recontextualisation” perspective on machine learning + interprofessional learning. Learning, Culture and Social Interaction, 42. doi:10.1016/j.lcsi.2023.100738
12. Milovidov, S.V. (2022). Artistic features of computer art works created using machine learning technologies. Artetcult, 4(48), 36-48. doi:10.28995/2227-6165-2022-4-36-48
13. Bryantseva, O.V., & Bryantsev, I.I. (2023). The problem of subjectivity of artificial intelligence in the system of public relations. The Bulletin of the Volga Region Institute of Administration, 23(3), 37-50. doi:10.22394/1682-2358-2023-3-37-50
14. Liu, J., Zheng, J., Cai, X., Wu, D., & Yin, Ch. (2023). A descriptive study based on the comparison of ChatGPT and evidence-based neurosurgeons. iScience. doi:10.1016/j.isci.2023.107590
15. Short, C., & Short, J. (2023). The artificially intelligent entrepreneur: ChatGPT, prompt engineering, and entrepreneurial rhetoric creation. Journal of Business Venturing Insights, 19. doi:10.1016/j.jbvi.2023.e00388
16. Volodenkov, S.V., & Fedorchenko, S.N. (2022). Features of the phenomenon of subjectivity in the conditions of modern technological transformations. Polis. Political Studies, 5, 40-55. doi:10.17976/jpps/2022.05.04
17. Vershinina, Yu.V., Dyatlova, E.V., & Kovsh, K.Yu. (2023). The possibilities of artificial intelligence in the educational process on the example of the chat bot ChatGPT. Review of pedagogical research, 5(5), 200-205. EDN: IBMPAS
18. Cohen, S., Presil, D., Katz, O., Arbili, O., Messica, S., & Rokach, L. (2023). Enhancing social network hate detection using back translation and GPT-3 augmentations during training and test-time. Information Fusion, 99. doi:10.1016/j.inffus.2023.101887
19. Kansteiner, W. (2023). Digital doping for historians: can history, memory and historical theory be made artificially intelligent? KANT: Social Sciences & Humanities, 1(13), 56-70. doi:10.24923/2305-8757.2023-13.5
20. Elliott Casal, J., & Kessler, M. (2023). Can linguists distinguish between ChatGPT/AI and human writing?: A study of research ethics and academic publishing. Research Methods in Applied Linguistics, 2(3). doi:10.1016/j.rmal.2023.100068
21. Currie, G. (2023). Academic integrity and artificial intelligence: is ChatGPT hype, hero or heresy? Seminars in Nuclear Medicine, 53(5), 719-730. doi:10.1053/j.semnuclmed.2023.04.008
22. Kahambing, J. (2023). ChatGPT, 'polypsychic' artificial intelligence, and psychiatry in museums. Asian Journal of Psychiatry, 83. doi:10.1016/j.ajp.2023.103548
23. Zorin, A.R. (2023). On the issue of legal regulation of ChatGPT. International Law Journal, 6(6), 35-38. EDN: UOEZHV
24. Filyukov, D.A. (2023). Application of neural networks for the formation of malicious software code. Innovacii i investicii, 7, 199-204. EDN: ZBHRXM
25. Anderson, S. (2023). “Places to stand”: Multiple metaphors for framing ChatGPT's corpus. Computers and Composition, 68. doi:10.1016/j.compcom.2023.102778
26. Smirnov, A.I., & Isaeva, T.V. (2023). International security: challenges and threats of artificial intelligence technologies. Meždunarodnaâ žiznʹ, 8, 94-107. EDN: ALSAZN

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

In the journal Philosophy and Culture the author presented the article "Culturological deconstruction of ChatGPT's socio-cultural threats to the information security of Russian citizens", a study of the socio-cultural potential for protecting the information space from the unfair use of artificial intelligence tools and programs. The author proceeds from the premise that a culturological approach can contribute to an effective risk assessment and to the development of security measures for the use of ChatGPT, a publicly available foreign mass digital service, by Russian citizens in socio-cultural activities. The relevance of the study stems from the current geopolitical and socio-cultural situation, namely the complication of international relations, including relations with US-based global digital platforms, following the start of the special military operation in Ukraine in 2022. The practical significance of the research lies in the fact that its results can be used for further research and for the development of methodological materials in cultural studies and in building a culture of information security in Russia. Analyzing how well the problem has been studied scientifically, the author notes growing public and scientific interest in the issue. The analysis of the current scientific literature helped the author identify a significant range of assessments of the balance between the advantages and the threats of ChatGPT, a generative language model based on "machine learning". The scientific novelty of the study accordingly lies in applying a culturological approach to identify the potential socio-cultural risks and threats arising from the use of artificial intelligence tools. The purpose of the study is to analyze the risks of ChatGPT being used as a means of psychological warfare, of falsified authorship, and of mass unemployment among journalists and other creators of texts. The methodological base consists of the general scientific methods of analysis and synthesis and the forecasting method. The author notes the need for a culturological approach to a balanced, discipline-specific assessment of the socio-cultural opportunities and threats of ChatGPT as an artificial intelligence technology, since developers' declarations about the capabilities of such solutions call for restrained critical reflection. The author insists that these capabilities were often greatly exaggerated for the sake of aggressive marketing and attracting investment; the poles of such overestimation are utopia and dystopia. Adhering to the theory of culture, the author holds that the only subject of socio-cultural activity is the human being and groups of people (society). The technical objects used and manufactured by humans are tools and instruments of people's joint activity, of their active communication in work, in everyday life and in the transformation of social relations. Paper and pencil, a social network, an Internet messenger and ChatGPT are all equally, though at different stages of historical development, technical means of people's joint socio-cultural activity. Hasty attempts to exalt and humanize artificial intelligence and its tools should therefore be avoided. Not being a socio-cultural subject, technology can take on the character of an advantage for, or a threat to, a person that comes from a person, depending on the social relations between them, including consciousness and will.
The author proposes a culturological version of the classic Turing test, which was designed to determine the "reasonableness" of a machine. The author sets the task of testing computer technologies by the method of cultural axiology: first to identify the value of the application, its nature and significance (or, conversely, the damage), and then the defining roles of the socio-cultural subjects: the user, the creator and the developer. From the standpoint of the culturological approach under consideration, ChatGPT as an "artificial intelligence" technology is neither intelligence nor even chat, since this digital service only simulates human communication and any personality of an imaginary interlocutor. The illusion of communication arises from a complex psychological trick. According to the author, the risks of ChatGPT being used as a "superweapon" of anti-Russian psychological warfare, of falsified authorship, and of mass unemployment among journalists and other creators of texts appear hypothetical and overestimated, whereas ChatGPT is already actually used by attackers to mask malicious code embedded in software products. The author suggests minimizing the socio-cultural risks of ChatGPT for Russian users by means of state regulation and by improving users' culture of information security, as well as by creating domestic solutions of this class. The author considers an effective risk-reduction measure to be raising the general civil culture of information security through educational organizations, the press and public-service advertising, social networks and messengers, explaining to the mass audience the essence and mechanisms of operation of machine text generators and the related threats and risks. In conclusion, the author draws together the key provisions of the presented material. The author has touched upon issues that are relevant and interesting for modern socio-humanitarian knowledge, the consideration of which in scientific discourse will entail changes in the established approaches to the problem addressed in the article. The results obtained allow us to assert that the study of the socio-cultural aspect of information security is of undoubted scientific and practical interest for cultural studies and deserves further study. It should be noted that the author has achieved his goal. The material has a clear, logically built structure that contributes to a fuller assimilation of the material. The bibliography of the study consists of 26 sources, including foreign ones, which seems sufficient for generalizing and analyzing scientific discourse on the problem under study. The article may be of interest to readers and deserves publication in a reputable scientific journal.