Reference: Barebina N.S., Breeva A.P., Glyzina V.E., Kosyakov V.A. Revisiting the Cultural and Linguistic Causality of Technogenic Stereotypes // Philology: scientific researches. 2024. No. 1. P. 74-82. DOI: 10.7256/2454-0749.2024.1.69637 EDN: EDFSLO URL: https://en.nbpublish.com/library_read_article.php?id=69637
Revisiting the Cultural and Linguistic Causality of Technogenic Stereotypes
DOI: 10.7256/2454-0749.2024.1.69637
EDN: EDFSLO
Received: 21-01-2024
Published: 05-02-2024

Abstract: The object of the study is fragments of lay discourse about artificial intelligence and new technologies that express antipathy. The subject of the study is evaluative and emotional judgments about technologies and the forms of their linguistic objectification, rooted in discourse as a product of linguaculture. The authors consider the pessimistic perception of technological progress in Western society. A productive way to study this issue is the stereotype as a cognitive-linguistic phenomenon. It is accepted that stereotypes play a special role in representing general features of national, social, and other human character traits. The authors conclude that one of the main functions of the stereotype is its orienting function, which influences the components of a person's personality. The authors used such scientific methods as introspection, comparison, analysis of dictionary definitions, the descriptive-interpretative method, and discourse interpretation. These methods, in combination with text markup techniques, made it possible to identify fragments of discourse significant for analysis. The research was based on English-language texts from The Guardian and The New York Times and on materials from MIT Technology Review and Neuroscience News. The authors conclude that technogenic stereotypes in English-language discourse have a mythological basis in the form of religious motifs and specific linguacultural images. This contrasts with the rational-critical view of information technology innovation. Intermediate findings concern the role of the media and social networks as key actors in the hype discourse around technological expansion. The novelty of the study lies in identifying a new area of public discussion, considered in light of data on the orienting function of the stereotype, that requires academic reflection by specialists from different fields.

Keywords: stereotype, technogenic stereotype, technophobia, discourse, mythological thinking, artificial intelligence, robots, religion, linguoculture, critical thinking

This article is automatically translated.

Introduction. Technological innovations are an integral part of modernity, but the perception of new technologies in different societies has not yet been studied sufficiently. Human interaction with technology is described in the literature across a wide range of phenomena, from technophobic escapism and neo-Luddism to cooperation. So far, the most researched factors related to the degree of technology adoption in society are age and profession [7; 15]. It cannot be denied that technophobic sentiments, on the one hand, exclude entire strata of society from active life and, on the other hand, inhibit the responsible use of new technologies, which is equally undesirable. Technology is an area highly susceptible to prejudice and stereotypes. The purpose of the article is to analyze the role of the stereotype in the formation of technophobia and to consider whether there is a correlation between linguistic culture and technogenic stereotypes. We supplement the presentation with new data from the linguistic understanding of the orienting function of the stereotype as a cognitive-linguistic phenomenon.

Stereotype as a mechanism of orienting influence in society.
The nature of a stereotype depends on a person's understanding and interpretation of the surrounding world. Stereotypes exist because they satisfy the psychological need to save cognitive effort, shortening the process of cognition and reducing complex phenomena to simple and understandable ones. According to V. A. Kosyakov, the use of stereotypes as templates allows a person to feel like a member of a socio-cultural group. It should be noted that the stereotype often does not meet the requirement of accuracy and differentiation in a person's perception of the social reality in which they find themselves, yet it gives them the ability to sharply reduce the response time to a changing reality. Maintaining certain stereotypes is a necessary condition for the functioning of society [4]. Although there is still no unified classification of the stereotype in the modern humanities, scholars agree that it is a mechanism for categorizing human experience. A person at birth does not associate himself with a particular ethnic group but becomes Russian, German, etc. by living in one or another national environment. Through upbringing and language, a child acquires an idea of the world and society, of morality, behavior, and the value system of the culture of which he has become a member [8]. Language reflects the cultural characteristics of numerous ethnic groups, nationalities, and peoples. It is language that forms for a person the environment in which he can exist as a person [2]. It is language activity that forms the stereotypes a person subsequently perceives as unquestionably true and is guided by when encountering objects of reality. To date, ethnic stereotypes are the most studied, since they constitute the identity and mentality of a nation and have a pronounced connection with the national character. In this work, we consider the stereotype from the point of view of the bio-cognitive theory of language, developed in the second half of the twentieth century by the biologists H. Maturana and F. Varela [6]. This theory has become widespread in other sciences, including linguistics. Bio-cognitive theory treats language as a biological feature of the human being as a species, whose main function is to influence other members of society in order to change their behavior and provide the most comfortable conditions for survival. In this context, it is not the function of categorizing human experience that comes to the fore but the orienting function: since language is the most important link between people, by influencing or changing it, one can exert an effective orienting influence not only on individuals but on society as a whole.

Mythological thinking and discourse about robots.

In order to understand the orienting essence of the stereotype, it is advisable to turn to mythological thinking. Let us make a reservation right away that we do not consider the relationship between myth and stereotype, realizing that such a complex issue should be studied in a separate article. We are talking specifically about mythological thinking, which is opposed to logical and rational thinking. Mythological thinking differs from scientific thinking in terms of critical reflection, comparison, and evaluation of multiple sources of information. In general, this type of thinking is an effective cognitive process that explains the diversity of the complex world around us.
In particular, it is based on a system of stereotypes. In this section, we examine how certain stereotypes about technological risks are presented in the language field; for this we use the linguistic construct of discourse. One of the most discussed issues, and an incentive for unjustified technophobia today, is artificial intelligence and robots. Artificial intelligence is most often accompanied by stereotypes and misconceptions that distort its real capabilities. According to the independent scientific journal Neuroscience News, there are three categories of fears related to artificial intelligence:

– loss of control (this stereotype is associated with the idea that artificial intelligence will become a threat to humanity, as robots gain control over technology);
– loss of confidentiality (use of open data, violation of privacy, and the storage, sale, and exchange of personal information about human actions);
– the crisis of human value (this stereotype implies that technology can become smarter than humans, and artificial intelligence systems will be able to perform all tasks and functions that were previously performed only by humans) [14].

There are detailed and well-reasoned refutations by experts. For example, I. M. Dzyaloshinsky, in the course of a detailed analysis of the features of human intelligence and the intelligence of complex computer systems, argues "that human intelligence is a product of a multi-thousand-year history of development and interaction of complex processes (personality development, complex relationships within a social community, people's joint activity), whereas artificial intelligence only simulates human cognitive processes; therefore, people are responsible for everything that artificial intelligence does" [3]. Despite the opinions of experts, stereotypes about technological threats are surprisingly tenacious. There is a well-known case, described in an article by O. Schwartz, in which researchers studying bot communication (bot, short for robot) discovered the strange phrase "Balls have zero to me to me to me to me to me to me to me to". For experts, this phrase was nothing unusual; it merely showed that a certain constraint had not been switched on. The media and opinion leaders, on the contrary, interpreted the case as evidence that robots had created their own language of communication. Fast Company, an American magazine and website about business and innovation, published the article "AI Is Inventing Language Humans Can't Understand. Should We Stop It?" [18]. The article's claim that bots deviate from standard English from time to time and chat in their own language had viral potential, which was realized on the Web in the form of content about artificial intelligence, its development, and its getting out of control. We can say that a kind of hype discourse has developed around artificial intelligence. According to the authors of the monograph "Wording Robotics", discourses about robots contribute to the gap that now exists between technologies and the intelligent structures society imagines them to be [19]. The discursive markers are phrases like "AI apocalypse", "artificial brains", "artificial superintelligence", and "creepy Facebook bot AIs". Even before the advent of the current model of artificial intelligence, this question interested science fiction writers and film directors; one of the first films on the topic was "2001: A Space Odyssey", released in 1968.
It was followed by "Blade Runner", "Terminator", "The Matrix", and "Alien", where the characters were humanoid robots or artificial intelligence whose behavior was far from always predictable. In the pessimistic mode of this discourse, robotics is something alien, far from life, unknown, fantastic, very different from the familiar; it is a discourse about the opposition of man and robot [13]. Let us take a brief retrospective look at the stereotype of computer intelligence. Exaggerated statements about its dangers were made long before the advent of "smart" computers. It is known that English-speaking culture gives robots anthropic properties. This can be judged by jokes about robots that go to school, communicate with their girlfriends, and die (examples 1-3):

(1) Griffin: Why did the robot go back to robot school? Brent: Tell me. Griffin: Because his skills were getting a little rusty!
(2) Billy: What did the man say to his dead robot? Bob: What? Billy: "Rust in peace."
(3) Jack: What did the robot say to his girlfriend? Ben: Beats me. Jack: I'm sparking all over you! [16]

However, as the study by M. L. Lisetsky and A. L. Sopina shows, "competing conceptual metaphors appear in English-language journalistic discourse: ROBOT IS SERVANT, ROBOT IS FRIEND, ROBOT IS ENEMY, which indicates the uncertainty of society's position in relation to robots in general" [5, p. 29]. Let us cite the opinion of G. Bell that speculative statements about the harmfulness of artificial intelligence and bots arise because publications consider not so much the technologies themselves, to which attention is very superficial, as our cultural hopes and anxieties [9]. This idea was expressed more than thirty years ago by F. L. Schodt, who analyzed the role of robotics in Japanese culture and suggested that the desire to introduce robotic systems everywhere has quasi-religious grounds [17]. This opinion is shared by modern researchers, who note that Japanese culture has a completely different attitude to technology and, in particular, to robots. Thus, H. Knight argues that the root of the "Terminator syndrome", as well as the difference between Japanese and American attitudes towards robots, lies in something much older than the idea of robots [11, p. 7]. C. Mims draws the conclusion that artificial assistants have fundamentally different statuses in the two cultures: for the Japanese, a robot is a pretty and friendly assistant, while for Americans it is a dangerous construction [12]. J. Ito also explains this in the context of religion, in the dichotomy "Christianity – Buddhism", where the latter is more conducive to belief in harmonious coexistence [10]. Summarizing the authors' views on how the religious context affects attitudes to technology in different cultures, we note that animism, inherent in the Shinto faith and later in Buddhism, correlates with the idea that all objects have a spirit, including artificial objects created by man. This aspect of religious thought influences the Japanese attitude towards nature and spiritual existence. It is believed that spirits exist in all objects, that they are always endowed with consciousness, will, and other human properties, and that they should be treated with respect. Monotheistic religions, on the contrary, adhere to the doctrine that only God can give life, and any person who breathes life into an inanimate object likens himself to the Creator.
Having exalted himself, a person tries to set himself against the Lord, which in the postulates of Christianity is one of the deadly sins. Data from the field of mythological thinking related to the concept of the cultural hero also expand our understanding of cultural differences in the perception of technology. Thus, J. Ito cites the example of the superhero Astro Boy, or Tetsuwan Atom (an android living among humans), who has become part of the culture for a generation of Japanese [10]. R. Aylett continues this thought and suggests that in Western culture the idea of robot hostility is associated with the idea of hubris inherited by Western society, arrogance in the attempt to surpass the gods; a clear example of this is the image of the monster created by Dr. Frankenstein, whose story has become the leitmotif of many stories about robots [1].

The main results. As we can see, technogenic stereotypes, at first glance strange and superficial, have a mythological basis that allows members of the linguistic and cultural community to interact with a technological object by relying not on their own experience but on the experience of the linguistic interactions of native speakers, that is, on images heard from parents or friends and on meanings manifested in religious beliefs and history, borrowed from books, newspapers, and TV shows. Using the example of two linguistic cultures, we have shown that technogenic stereotypes rest on different mythologies. The orienting stereotype acts as a mechanism of influence, since a person is guided not by their own experience and critical thinking but is wholly subordinated to images that are formed indirectly, with the help of language.

Conclusion. In this article, we have considered the stereotype as a cognitive-linguistic phenomenon. Linguistically, stereotypes are characterized by stable descriptions of the stereotyped object or situation, consisting of its speech portrait, descriptions of actions, typical dialogues, scenarios, etc. After studying and analyzing English-language discourse that expresses anxious attitudes about artificial intelligence and technology as a danger to humanity, we conclude that the English language explicates an idea of the introduction of new technologies which differs from the Eastern understanding. It is obvious that the Western (as well as the Eastern) point of view on the risks of a possible negative scenario is based not on analytical and critical judgments but is rooted in the linguistic and cultural community in the form of a system of stereotypes. Media platforms, the press, and social networks, which provide carefully framed and emotionally colored information and supplement it with the necessary commentary, are fertile ground for fixing stereotypes. Such a stereotype does not allow the application of individual attitudes and, replacing phenomenological experience, becomes the opinion of society, which, guided by a certain language policy, can predict and even control people's behavior, exerting an effective orienting influence. Issues of the information environment increasingly concern researchers, theorists, and practitioners from various countries and cultures; it is accepted that alarmist strategies in this matter are unproductive. We believe that the time has come for an orderly discussion about what we can and cannot change, and how humanity can remedy the current situation.

References
1. Aylett, R., Sharkey, N., & Vargas, P. (2022). Life with Robots: What Every Anxious Human Needs to Know; translated by I. D. Golybina. Moscow: AST.
2. Humboldt, W. von. (2000). Selected Works on Linguistics. Moscow: Progress.
3. Dzyaloshinsky, I. M. (2022). Human Cognitive Processes and Artificial Intelligence in the Context of Digital Civilization. Moscow: IP Ar Media.
4. Kosyakov, V. A., Nikolaeva, N. N., & Shastina, I. A. (2011). Orienting Function of the Language. Irkutsk: Publishing house BGUEP.
5. Lisetsky, M. L., & Sopina, A. L. (2019). Friend or Foe: Personification of Robots in Media Discourse. Current Problems of Linguistics, 1, 29–34.
6. Maturana, H. R., & Varela, F. J. (2001). The Tree of Knowledge. Moscow: Progress-Tradition.
7. Motorina, I. E., Akimova, I. A., Kondaurova, K. I., & Maloletneva, I. V. (2021). Technophobia as a Reality of the Modern World. International Scientific Research Journal, 4-3(106), 159–164.
8. Ter-Minasova, S. G. (2000). Language and Intercultural Communication. Moscow: Slovo.
9. Bell, G. (2006). No More SMS from Jesus: Ubicomp, Religion and Techno-spiritual Practices. In P. Dourish & A. Friday (Eds.), UbiComp 2006: Ubiquitous Computing. Lecture Notes in Computer Science, Vol. 4206 (pp. 141–158). Springer, Berlin, Heidelberg. Retrieved from https://doi.org/10.1007/11853565_9
10. Ito, J. (2018). Why Westerners Fear Robots and the Japanese Do Not. Wired, July 2018. Retrieved from https://www.wired.com/story/ideas-joi-ito-robot-overlords/?mbid=social_twitter_onsiteshare
11. Knight, H. (2014). How Humans Respond to Robots: Building Public Policy through Good Design. The Project on Civilian Robotics. Brookings. Retrieved from https://www.brookings.edu/wp-content/uploads/2014/07/humanrobot-partnershipsr2.pdf
12. Mims, C. (2010). Why Japanese Love Robots (And Americans Fear Them). MIT Technology Review, October 12, 2010. Retrieved from https://www.technologyreview.com/2010/10/12/120635/why-japanese-love-robots-and-americans-fear-them/
13. Mori, M. (2017). The Uncanny Valley: The Original Essay. Robotics & Automation Magazine. Retrieved from https://web.ics.purdue.edu/~drkelly/MoriTheUncannyValley1970.pdf
14. Neuroscience News Communications. (2023). Artificial Intelligence, and Our Fears: A Journey of Understanding and Acceptance. Neuroscience News, June 23, 2023. Retrieved from https://neurosciencenews.com/artificial-intelligence-fear-neuroscience-23519/
15. Nyholm, S. (2022). A New Control Problem? Humanoid Robots, Artificial Intelligence, and the Value of Control. AI and Ethics, 3(4), 1229–1239. doi:10.1007/s43681-022-00231
16. Robot jokes. Retrieved from https://scoutlife.org/about-scouts/merit-badge-resources/robotics/19223/robot-jokes/
17. Schodt, F. L. (1990). Inside the Robot Kingdom: Japan, Mechatronics and the Coming Robotopia. Tokyo, New York: Kodansha International Inc.
18. Wilson, M. (2017). AI Is Inventing Language Humans Can't Understand. Should We Stop It? Fast Company, 30 September 2017. Retrieved from https://www.fastcompany.com/90132632/ai-is-inventing-its-own-perfect-languages-should-we-let-it
19. Laumond, J.-P., Danblon, E., & Pieters, C. (Eds.). (2019). Wording Robotics: Discourses and Representations on Robotics. Springer.