NB: Administrative Law and Administration Practice
Reference:
Yakunina A.V.
Protection of privacy in the era of digital communication development
// NB: Administrative Law and Administration Practice.
2024. № 2.
P. 53-62.
DOI: 10.7256/2306-9945.2024.2.70695 EDN: EPDVQZ URL: https://en.nbpublish.com/library_read_article.php?id=70695
Protection of privacy in the era of digital communication development
DOI: 10.7256/2306-9945.2024.2.70695
EDN: EPDVQZ
Received: 09-05-2024
Published: 03-07-2024

Abstract: This article examines the influence of technological progress on ensuring and protecting privacy in the context of expanding digital communications. The level of a society's expectations regarding the state of privacy serves as a benchmark, either stimulating or weakening efforts to improve national legislation. A sensible, common-sense attitude toward technology helps avoid obvious violations of proportionality and maintains a balance between private and public interests in society. Using two of the most influential corporations, Palantir Technologies Inc. and Cambridge Analytica, as examples, the author analyzes the impact of digital technologies on private life amid expanding state support in this area and proposes effective measures for safeguarding personal data in the modern digital world. The article draws on a set of methods including comparative legal analysis, systemic analysis, the historical legal method, and empirical analysis of the practical implementation of legal norms. The scientific novelty of the research lies in a comprehensive and thorough analysis of the impact of digital technologies on privacy, since the information technologies in use create greater opportunities for both deliberate and inadvertent violations of rights and freedoms. The article also discusses the ethical and legal aspects of using citizens' personal data and offers recommendations for improving confidentiality in the digital age. Against this background, the issue of privacy in the context of the evolution of digital communications becomes particularly relevant. The conclusion addresses the importance of balancing public and private interests in digital technologies and data protection and proposes ways to resolve this complex issue in the interests of all stakeholders.
Keywords: privacy, confidentiality, personal data, digitization, Palantir Technologies, Cambridge Analytica, data protection, transparency report, impact assessment, digital communications

This article is automatically translated.

The human right to privacy has been enshrined in law since the middle of the 20th century and is confirmed in international instruments, including the Universal Declaration of Human Rights (Article 12), the Convention for the Protection of Human Rights and Fundamental Freedoms (Article 8), the Convention on the Rights of the Child (Article 16), the International Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families (Article 14), and the CIS Convention on Human Rights and Fundamental Freedoms (Article 9), as well as many other international treaties and regional human rights agreements [1, pp. 13-19]. This process was an important step in the international community's recognition of this right. Analysis of international legal acts leads to the conclusion that the right to privacy belongs to the category of fundamental human rights and freedoms inherent from birth [2, pp. 103-105]. Any interference by others must be prescribed by law and subjected to a critical assessment of its necessity and proportionality. The level of a society's expectations regarding the state of privacy serves as a benchmark that stimulates or weakens efforts to improve national legislation. A sensible, common-sense attitude toward technology helps avoid obvious violations of proportionality and maintains a balance between private and public interests in society. Technological progress creates new opportunities for individuals, society, and the state, but it also generates many problems, threats, and risks [3, pp. 118-119].
An alarming trend is the increasingly active use of information and communication technologies by state and non-state actors, as well as by mixed groups (commercial organizations with state participation), in their own interests. Large IT corporations operate on many fronts, combining coercive and disruptive measures and using both conventional and unconventional methods and tactics to achieve their goals. The main problem today is that citizens often do not know which products and software companies use to work with their data, which data such organizations process, or what precautions are in place to protect personal data and prevent its misuse. A private company with access to a large amount of data about a person may reject transparency requests (for a so-called "transparency report") on the grounds of protecting commercial interests. The data obtained by such companies is used in their own economic interests: for advertising, covert surveillance, differentiated pricing, influencing elections, targeted disinformation, forecasting sentiment in investment markets, corporate risk management, and so on. The information technologies in use create ample opportunity for both deliberate and inadvertent violation of the right to privacy. In this regard, the issue of privacy in the context of the evolution of digital communications is becoming particularly relevant. In most countries, the development and implementation of digital technologies is a matter of domestic state policy, which in turn is reflected in government programs for long-term development. Such programs exist in Japan (Smart Japan ICT Strategy), Canada (Pan-Canadian Artificial Intelligence Strategy), the United Kingdom (UK Digital Strategy 2017), France (AI for Humanity), India (AI Garage), Russia (Digital Economy of the Russian Federation), and many other countries.
In turn, these programs require the adoption of appropriate legal acts and increased financial support for science and technology [4, pp. 76-82]. At the same time, digital technologies have become so important to citizens that their misuse carries a serious risk of violating privacy and other fundamental rights and freedoms. Various forms of mass surveillance, Internet monitoring, and data collection feed data banks with browsing history, purchase history, search history, location, financial data, health data, and other personal information, affecting the interests of every person [5]. This raises concern about threats to the right to privacy, especially in the context of expanding state support for digital technologies and tightening control over human activity both online and in real life, and it underlines the need to develop an effective international system for protecting individual rights and freedoms online. Citizens' personal data is needed not only by the state. Commercial organizations constantly collect data for their own benefit. They claim this is done to improve the quality of customer service, but the information can also be used against citizens' interests. For example, there are concerns that Amazon Echo records conversations, that Apple transmits user data to intelligence agencies, and that Google sells its users' data. Accordingly, large technology companies face scrutiny for unethical handling of confidential user data. Drawing on vast human and financial resources, large corporations have created technological monopolies and gradually come to dominate the science and technology market, thereby establishing broad economic influence.
However, due to unfair treatment of user data confidentiality, these organizations often become involved in litigation, which to some extent reflects the shifting balance between private and public interests. Of particular concern is the growing influence of technology companies that actively seek cooperation with governments and eventually penetrate the state system by providing technical support for the information resources of public authorities, thereby influencing their work. One example is among the most influential corporations in the modern digital world: Palantir Technologies Inc. (hereinafter Palantir), a data integration and analytics company whose products are widely used by national security services and law enforcement agencies around the world. Palantir's services, including its Gotham platform, are used by police across America, and these agreements sometimes become part of non-public arrangements. For example, in 2012 the New Orleans Police Department and Palantir entered into an agreement under which Palantir provided software to trace connections between citizens and previously identified gang members as part of a crime prevention program. The police were able to analyze individuals' criminal records and social media activity and predict the likelihood of crimes being committed by them, or against them and their family members. At the same time, there was no official mention of cooperation between the police and Palantir, so questions about the program's funding sources and about whether the use of the surveillance system complied with ethical standards and legislation remained unanswered.
We believe the answer should lie in legislative norms aimed at balancing private and public interests by creating mechanisms that guarantee the right to privacy alongside guarantees of the right to seek, receive, and freely disseminate information. It appears that Palantir used New Orleans as a testing ground for its predictive policing technology in order to win multimillion-dollar contracts with law enforcement agencies around the world. It is known that more than ten years after Palantir began operating in New Orleans, it patented at least one crime forecasting system and began providing similar software to the intelligence services of other countries to assess citizens' propensity to commit terrorist attacks. On the one hand, measures to conceal the details of the agreement between government officials and a private company could have been justified by crime reduction, had the forecasting technologies significantly affected the crime rate in New Orleans. However, according to statistics published by the Federal Bureau of Investigation (FBI), no such effect materialized; on the contrary, the crime rate has risen since 2016:
- the crime rate in New Orleans, Louisiana, in 2015 was 949.56 per 100,000 population, 2.5% lower than in 2014;
- in 2016 it was 1,069.72 per 100,000, 12.65% higher than in 2015;
- in 2017 it was 1,121.41 per 100,000, 4.83% higher than in 2016;
- in 2018 it was 1,163.3 per 100,000, 3.74% higher than in 2017.
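The year-over-year percentages cited above follow directly from the per-100,000 rates. A minimal sketch of that arithmetic (the rates are those quoted in the text from FBI statistics; the function and variable names are illustrative, not from any official source):

```python
# Crime rates per 100,000 population for New Orleans, as cited in the text
# (figures attributed to FBI statistics).
rates = {2015: 949.56, 2016: 1069.72, 2017: 1121.41, 2018: 1163.30}

def yoy_change(prev: float, curr: float) -> float:
    """Year-over-year percentage change, rounded to two decimal places."""
    return round((curr - prev) / prev * 100, 2)

years = sorted(rates)
for prev, curr in zip(years, years[1:]):
    print(f"{prev} -> {curr}: {yoy_change(rates[prev], rates[curr]):+.2f}%")
```

Running this reproduces the 12.65%, 4.83%, and 3.74% increases quoted in the text.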
Thus, one cannot speak of a favorable outcome of using tracking technologies: as a result of this "experiment," Palantir gained access to the personal information of American citizens, while citizens themselves were not protected from crimes against them and their loved ones. This is not the first known use of Palantir technologies by the US government. Since at least 2009, the company has supported the Pentagon in detecting improvised explosive devices in Afghanistan and Iraq as part of a joint risk assessment program. Since the project was carried out during active hostilities, the company did not have to worry about the violations of civil liberties that inevitably arise when crime prevention technologies are used, even in wartime [6]. Today, Palantir is engaged in digital profiling and actively cooperates with US Immigration and Customs Enforcement (ICE) to facilitate the deportation of migrants, processing a wide range of data in the process: citizenship, asylum application details, racial or ethnic origin, political views, religious and philosophical beliefs, union membership, income information, sex life and sexual orientation, criminal record information, and much more. The data obtained is periodically used in ways that violate declared human rights and freedoms. For example, in two known cases, in 2017 and 2019, ICE used Palantir technologies to conduct raids and facilitate the arrest of migrants, leading to massive violations of civil rights and the separation of children from their families. Of course, the state has the right to exercise jurisdiction within its borders, but subject to its human rights obligations: in protecting the rights and freedoms of one group of people, the rights and freedoms of others must not be neglected.
The London police tested Palantir's predictive crime mapping products for twelve months, from May 2014 to April 2015. Palantir later won a tender for a contract with the UK National Health Service worth 480 million pounds ($579 million) to redesign the medical data collection system, identify patterns, and ultimately rebuild the entire system, despite opposition from the British Medical Association (BMA), the professional association of UK doctors, as well as from patient groups and privacy advocates. In February 2017, after purchasing software from Palantir, the Danish Ministry of Justice submitted for public discussion a bill intended to justify the processing of the population's personal data using Palantir-supplied software (for the Danish police and intelligence services). In January 2020, Palantir expanded the geography of its influence and began working with the governments of other countries: the value of Palantir's contracts with governments around the world grew by 74% between December 31, 2018 and June 30, 2020, from $670.6 million to $1.2 billion. The lack of transparency in all of these contracts is a constant source of public concern. It is impossible to be sure whether, for example, a data protection impact assessment or a human rights impact assessment has been carried out for Palantir products, since the company does not disclose this information. Over the past ten years, there has been a continuous increase in the number of companies using technological progress to expand their influence in public and state affairs. A striking example is Cambridge Analytica, which engaged in psychographic profiling of American voters, thereby posing a threat to the electoral process. This example should also be seen as a challenge to democratic institutions in general.
The Facebook scandal (the Facebook social network belongs to the multinational holding company Meta, whose activities have been recognized by a court decision as extremist and banned in the Russian Federation) broke out in 2018 [7], when Christopher Wylie, director of research at Cambridge Analytica and SCL (Strategic Communication Laboratories) Group, acted as the company's whistleblower: in a long interview he explained that psychographic profiling allowed Cambridge Analytica to influence voters by using social media data to create a "psychological warfare" tool. Cambridge Analytica obtained data on about 87 million Facebook users [8] through the "thisisyourdigitallife" app, which was used to build personality profiles. Users who downloaded the app via the social network not only answered questions about themselves but also granted access to other profile data, including likes and contact lists. In this way, the company obtained roughly five thousand "data points" on each user and their contacts, enabling it to model the behavior of about 230 million people. The data was used to build Cambridge Analytica's targeting algorithms for predicting and influencing the behavior of individual voters in the 2016 presidential election. To date, the question of the real extent of Cambridge Analytica's influence on the will of American and British voters remains open, but there is no doubt that the company's negative record in different countries has attracted much attention, underscoring the need to further develop effective means of protecting individual human rights in the digital environment [9].
Thus, against the background of a crisis of confidence in how authorities handle personal data after numerous scandals involving the misuse of citizens' data around the world (from Cambridge Analytica to the A-levels grading algorithm), a further breakdown of mutual understanding in the citizen-society-state system should be avoided by introducing requirements for greater transparency and applying stricter regulatory measures to the activities of technology giants. At the same time, excessive regulation can hinder innovation and undermine the economic potential of the digital space, so it is important to ensure a balance of interests in this area [10, pp. 678-679]. Technology companies' compliance with international and national data protection standards should be independently monitored as often as possible, and their impact on human rights assessed more broadly. In addition, in our opinion, the transparency of state procurement in the technology sector must be increased. Any contract with a technology company should be based on the principles of legality, fairness, and transparency, as well as integrity and confidentiality. If a company gains access to citizens' personal data, the purposes of processing must be defined, the storage period limited, and strict accountability established. It is also extremely important to raise public attention to cooperation between the state and technology companies.
It is advisable to introduce rules ensuring transparency in this area: government agencies that have contracts with large technology companies should publish those contracts and data-sharing agreements and conduct an appropriate human rights impact assessment that compares and evaluates the risks of data processing and contains an action plan for reducing them to an acceptable level. In addition, creating ethical rules for data collection and processing can foster a favorable environment for business development and stimulate innovation through a culture of responsible behavior. Whether the state cooperates with companies such as Palantir Technologies or with any other large technology company, there must be reliable guarantees for the protection of human rights and freedoms. No one should have to sacrifice their right to privacy as the price of living in the digital age.

References
1. Garcheva, L.P. (2022). On Some Risks of Human Rights Violations in the Conditions of Digitization. Scientific Notes of V.I. Vernadsky Crimean Federal University. Legal Sciences, 1, 13-19. doi:10.37279/2413-1733-2022-8-1-13-19
2. Romashov, P.A. (2019). On the Right to Privacy in the Digital Age. Perm Legal Almanac, 2, 103-118. Retrieved from https://elibrary.ru/item.asp?id=38548880
3. Kairbayeva, L.K. (2020). Protection of Personal Data in International and European Law. Bulletin of the Institute of Legislation and Legal Information of the Republic of Kazakhstan, 5(63), 118-124. Retrieved from https://elibrary.ru/item.asp?id=48340106
4. Silchenko, R.N. (2019). Problems of Protecting Human Rights and Freedoms in the Application of Artificial Intelligence Technologies. Problems of Economics and Legal Practice, 4, 76-82. doi:10.33693/2223-0092-2021-11-3-74-79
5. Shumilenko, A.P., & Pastukhova, L.V. (2019). International Human Rights Law. Simferopol: IT "ARIAL".
6. Aituarova, A.M. (2023). Returning to the Scientific Publication of M.Zh. Kulikpaeva "International Legal Foundations for Ensuring the Right to Privacy in the Context of Digital Technology Development". Bulletin of the Institute of Legislation and Legal Information of the Republic of Kazakhstan, 1(72), 267-275. doi:10.52026/2788-5291_2023_72_1_267
7. Hu, M. (2020). Cambridge Analytica's black box. Big Data & Society, 7(2). doi:10.1177/20539517209380
8. Kostina, O.V. (2022). Inheritance of Social Media Accounts as a Means of Balancing the Interests of Citizens and Entrepreneurs. Legal Science, 6, 53-56. Retrieved from https://www.elibrary.ru/item.asp?id=49099138
9. Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, 4. doi:10.1016/j.jrt.2020.100005
10. Bogdanov, D.E. (2020). Technodeterminism in Private Law: Influence of Bioprinting on the Development of the Concept of Protecting the Right to a Digital Image. Bulletin of Perm University. Legal Sciences, 50, 678-704. doi:10.17072/1995-4190-2020-50-678-70
Peer Review
Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.
Thus, the works of the authors cited above correspond to the research topic but are not sufficient in number and do not disclose the various aspects of the topic. Appeal to opponents: the author has conducted a serious analysis of the current state of the problem under study. All quotations from scholars are accompanied by the author's commentary; that is, the author presents different points of view on the problem and argues for the one he considers more correct. Conclusions and readership interest: the conclusions are entirely logical, as they are obtained using a generally accepted methodology. The article may be of interest to readers for the author's systematic positions on the issues raised. On the basis of the above, weighing all the positive and negative sides of the article, "I recommend publishing."