Litera
Reference: Gurushkin P.Y., Korneeva K.V. Artificial intelligence in media communications and creative professions: threats and opportunities // Litera. 2024. No. 9. P. 91-101. DOI: 10.25136/2409-8698.2024.9.71609 EDN: FBFUAC URL: https://en.nbpublish.com/library_read_article.php?id=71609
Artificial intelligence in media communications and creative professions: threats and opportunities
DOI: 10.25136/2409-8698.2024.9.71609
EDN: FBFUAC
Received: 30-08-2024
Published: 06-10-2024

Abstract: The article examines the influence of artificial intelligence (AI) on modern journalism within the system of digital media communications, considering both the threats and the opportunities introduced by new neural network technologies. It presents the results of a survey of representatives of creative professions, conducted to determine their attitudes toward AI and its role in their professional activities, and argues for further research into, and regulation of, neural network technologies to minimize the risks of their use in media communications. A survey method was chosen to collect quantitative data on how media-industry professionals perceive AI. The scientific novelty of the study lies in its systematic analysis of the opinions and perceptions of AI among representatives of professions whose main income derives from creative and intellectual work, such as journalism, advertising and public relations, design, and music. Unlike most existing studies, which concentrate on the technical aspects of AI, this study focuses on the social and ethical aspects of its application, allowing a more comprehensive understanding of the role of AI in society and identifying the key factors that shape its perception. The practical significance of the study is that its results can inform recommendations for integrating AI into various spheres of life and for regulating the use of neural network technologies in modern society; they can also help in analyzing public sentiment and expectations, supporting targeted and effective decisions in the development and deployment of AI technologies.
Keywords: artificial intelligence, media communications, creative professions, neural networks, automation, creativity, machine learning, job displacement, ethics, survey

This article is automatically translated.

Artificial intelligence (AI) is one of the key technologies driving digital transformation and the so-called fourth industrial revolution. The symbiosis of machine, human and computer is a defining feature of phenomena such as virtual reality, video games, smartphone applications, smart watches and voice-driven information search; other examples are the home assistants created by Google, Amazon and Yandex. The prevailing view in the scientific literature is that digital transformation has a radical impact on work and employment, and that the introduction of artificial intelligence and other digital technologies leads to significant job losses. Until recently, interest in the impact of AI focused on routine professions and on activities and tasks that can be improved through automation. Recently, however, this interest has spread to the fields of art and creativity, which AI also affects. Modern media communications are a prime example of the integration of new computer technologies: artificial intelligence can now draw pictures, compose music, animate photographs, write thematically and stylistically complex texts, create graphic design, and more. But what exactly is the impact of AI on media communication professionals and representatives of creative professions such as journalism, music, advertising and public relations? Creativity is considered one of the human abilities most difficult to emulate. Creative professions are based on creative thinking and require a non-standardized approach to tasks; at the center of this creative and intellectual activity is the concept of "inspiration", which neural networks are unable to copy.
To better understand how representatives of creative professions perceive the rather intensive integration of AI into everyday and professional life, we conducted a pilot survey. The sample comprised 483 respondents aged 18-35 (251 men, 232 women) in creative professions: journalists, designers, bloggers, producers, musicians, artists, composers, photographers and videographers. According to Student's t-test, the sample is representative, yields sufficiently reliable data for analysis, and adequately reflects the general population. The questionnaire was designed to cover various aspects of AI perception, including its current and future applications, its capacity to replace humans, and the need to regulate and control its use.

Perception of differences in the use of AI

The first question of the study concerned the perceived difference between two terms: "neural network" and "artificial intelligence". Respondents were asked whether they consider these concepts identical and whether they have enough knowledge to explain the distinction. The question is relevant to understanding how digital integration across spheres and industries is perceived by representatives of most professions and by any modern, socialized person.

Yes, there is a difference and I can explain it: a significant share of respondents (about 40%) see a difference between these concepts and can explain it. These are intellectual workers in the field of media communications who consciously and purposefully use the capabilities of neural networks for personal and professional purposes.

Yes, most likely there is, but I can't explain it: 55% of respondents believe there is a difference, but cannot say exactly what it is.
This may indicate that, despite general awareness of AI's possibilities, not all respondents have sufficient knowledge to understand the specific applications and nuances of AI in various fields.

There is no difference, and I cannot explain it: about 3% of respondents see no difference between the concepts of "artificial intelligence" and "neural network", although they cannot explain why. These respondents may perceive AI as a temporarily fashionable technology implemented mostly for entertainment; they do not grasp the real possibilities of neural networks and feel no need for a deep dive into the theory and practice of their application.

There is no difference, and I can explain it: about 3% of respondents are sure that AI and neural networks are identical concepts, and can explain why. They believe the basic principles of both, above all big data processing and machine learning, are the same and universal, applicable in any context.

I find it difficult to answer: about 1% of respondents chose a neutral option, avoiding a direct answer. This may indicate a lack of awareness, or of confidence in their knowledge of AI. Yet this indicator is among the most telling, since it speaks eloquently both to the popularity of AI technologies themselves and to the general information noise around them.
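The representativeness claim for the sample can be given a simple quantitative gloss: with n = 483 respondents, the worst-case 95% margin of error for any reported share follows from the standard normal approximation. The sketch below is our own illustration, not part of the authors' methodology, and assumes simple random sampling:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion (normal approximation).

    p = 0.5 gives the worst case; z = 1.96 corresponds to 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# With n = 483 respondents, any reported share is accurate to roughly
# +/-4.5 percentage points in the worst case.
moe = margin_of_error(483)
print(f"worst-case 95% margin of error: +/-{moe * 100:.1f} pp")
```

On this reading, the gap between the roughly 40% who can explain the distinction and the 55% who cannot lies well outside sampling noise, while shares of about 3% or 1% are close to the survey's resolution limit.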
The next question of the study concerned the real and potential threats AI poses to professional fulfilment. It is important to note that the wording was not tied specifically to the media communications industry: respondents were asked whether they believe the risks associated with the widespread integration of neural networks are real and dangerous for society as a whole in the long term, viewed, of course, through the prism of their professional experience.

Yes, it certainly is: a small share of respondents (about 15%) believe that AI is already a source of risk for many professions. These respondents see in AI the potential for significant personnel changes in various fields, including the economy, healthcare, education and daily life, their own field among them.

Perhaps in the future, but not now: about 50% of respondents believe that AI may become a very real threat in the future, but that its role is not yet significant. They hold that AI technologies have not reached their full potential and require further development, above all in overcoming problems with understanding context and the need for large volumes of training data.

Not now, and not in the future: approximately 30% of respondents consider themselves fully protected from any digital competition; for them, AI poses no real personnel threat today. These respondents are skeptical about AI's capabilities and do not expect significant changes from its development. They perceive AI as a tool of limited use that will not be able to replace or significantly influence the creative aspects of human activity. There is reason to believe this opinion extends to any other mechanized or digital innovation, which, in their view, will only simplify work processes, with no question of an equivalent replacement.
In this context, it would be interesting to repeat the study with a comparative component in a year or two.

I find it difficult to answer: about 15% of respondents found this question difficult. The figure is rather large for such a question, which may indicate a lack of awareness or, rather, a refocusing on other, more pressing and comprehensible threats and difficulties.

The possibility of AI replacing humans

The third question of the study concerned the possibility of AI replacing humans in various fields of activity.

Yes, I think it can: 6% of respondents believe that AI could completely replace them right now. This is above all an impulse for further research: which spheres and areas of media communications find this danger so real; are there actual examples of such replacement; does this ratio hold constant regardless of leaps in the development of digital technologies; and so on.

Not now, but possibly in the future: 46% of respondents believe that AI could replace humans in the future, but that the technology is not yet sufficiently developed. These respondents see AI's potential in automating and optimizing various processes, while recognizing that at present AI cannot fully replace human intuition, creativity and emotional intelligence. They expect that, as the technology develops, AI will become more capable of performing complex tasks requiring these qualities.

No, it will never be able to replace humans: about 40% of respondents are sure that AI will never completely replace humans. These respondents point to unique human qualities such as empathy, creativity and adaptability that machines cannot replicate, and express concern about the ethical and social consequences of fully replacing human labor with AI.

I find it difficult to answer: about 7% of respondents found this question difficult.
This may indicate a lack of confidence in their knowledge or a lack of understanding of the current state and prospects of AI development.
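Uncertainty bounds can be attached to answers like the roughly 40% "never" share above using a Wilson score interval for a proportion. This is our own illustrative sketch, not part of the published analysis; the counts are inferred from the reported percentages:

```python
import math

def wilson_interval(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a sample proportion.

    More reliable than the plain normal interval for small or skewed shares.
    """
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Roughly 40% of n = 483 respondents said AI will never replace humans.
lo, hi = wilson_interval(0.40, 483)
print(f"95% CI for the 'never' share: {lo:.1%} - {hi:.1%}")  # about 35.7% - 44.4%
```

Even at the lower bound, this group remains clearly distinct from the 6% who believe replacement is already possible.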
The need to restrict access to AI

The next question of the study concerned the need to restrict access to AI and regulate its use.

No, everyone should have equal opportunities to use AI: 65% of respondents believe that access to AI should be equal for everyone. The majority hold that AI can become a powerful tool for improving the quality of life and creating new generative opportunities, and that restricting access to it could produce social inequalities that would negatively affect vital socio-economic sectors. They also note that equal access to AI can promote a more equitable distribution of its benefits and reduce the risk of power concentrating in the hands of a minority.

Yes, access should be restricted: approximately 20% of respondents believe that access to AI should be restricted. These respondents point to the potential risks and dangers of unrestricted use, such as privacy violations, cyber threats and the possibility of using AI for malicious purposes. They believe that restrictions and regulation of access to AI would reduce these risks and ensure safer use of the technology.

I find it difficult to answer: about 20% of respondents had difficulty with this question, which may indicate a lack of information about the possible risks and benefits of restricting access to AI, or uncertainty about the need for such measures.

The last question of the study concerned the need to control the use of AI to ensure its safety and effectiveness.

Yes, I think this will help make the use of AI more controlled: approximately 67% of respondents believe that control over the use of AI is necessary and will make it safer and more effective. These respondents point to the need to create regulatory and ethical frameworks for the use of AI in order to prevent possible abuse and minimize risks.
Labeling content generated by neural networks will likely help prevent illegitimate borrowing, but it cannot fully safeguard one's professional prospects, which is especially true for creative professionals in the field of media communications.

No, this will deprive AI of its main advantage: about 17% of respondents believe that introducing control over the use of AI would strip it of its main advantage. Here the concern is not so much flexibility or the prospect of rapid technological development as the reduced ability to conceal work performed by a neural network. Mandatory labeling could undoubtedly slow innovation and limit AI's potential, but it would also discipline (structure) its otherwise unrestricted use, which should benefit those engaged in creative work.

I find it difficult to answer: about 15% of respondents found this question difficult. This may indicate uncertainty or a lack of information about the consequences of introducing control over the use of AI and its impact on the development of the technology. Confusion may also stem from unclarity about exactly how the use of neural networks can be tracked and how labeling of generated content could be enforced.
Conclusion

This study confirmed that opinions about the role and future of AI among those who work in media communications and earn their living by creative work vary greatly. Nevertheless, the majority of respondents recognize the importance and potential of AI, while openly voicing concerns about humans being replaced by new computer and network technologies and calling for control over their use.

1. Division of opinion: the study demonstrated that there is no consensus among representatives of creative professions about the role and future of artificial intelligence. A significant share of respondents recognize its importance and potential in the field of media communications, but many also fear the replacement of human labor by AI, especially in creative and artistic tasks.

2. The awareness problem: the data show that, despite growing interest in AI, not all respondents have sufficient knowledge to fully understand the differences between AI and neural networks, or their potential and risks. This highlights the need for further education and training of professionals in this field.

3. Risks and ethical challenges: a significant share of respondents are concerned about the possible ethical and social consequences of introducing AI, including the risks of job losses and increased inequality. These concerns underscore the importance of developing regulatory and ethical frameworks for the use of AI to minimize negative impacts.

4. The need for regulation: most respondents believe that control and regulation of access to AI are necessary to ensure its safe and responsible use. At the same time, a group of respondents fear that excessive regulation could slow innovation and limit AI's potential.

5. The future of AI in media communications: despite existing concerns, many respondents see AI's potential to expand creative processes, while emphasizing that at the current stage of development AI technologies cannot fully replace unique human qualities such as creativity, intuition and emotional intelligence.

Overall, the study highlights the need to continue studying the impact of AI on the media communication system and on society as a whole, and to develop strategies for the safe and effective integration of AI into various spheres of life.
First Peer Review
Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.
Second Peer Review
The same confidentiality policy applies to the second review.