
Security Issues

Ethical Regulation of Artificial Intelligence as a Factor of Information Security: the Experience of Thailand

Gorian Ella

ORCID: 0000-0002-5962-3929

PhD in Law

Associate Professor, Vladivostok State University

690014, Russia, Primorsky Krai, Vladivostok, Gogol str., 41, office 5502

ella-gorjan@yandex.ru
DOI: 10.25136/2409-7543.2022.3.38626

EDN: RMFIRR

Received: 11-08-2022

Published: 18-08-2022


Abstract: The object of the study is relations in the field of the ethical regulation of the use of artificial intelligence technologies. The subject of the study is the regulatory documents of Thailand that establish prescriptions for the design, development, research, training, deployment and application of artificial intelligence technologies. The features of the Thai approach to regulating relations in this area are highlighted, the actors involved in the regulatory mechanism are identified, and the ethical requirements imposed on particular categories of actors are examined. The link between the ethical regulation of artificial intelligence technologies and security in the information space is traced, and the national model of ethical regulation is characterized. Thailand implements a state-centric model in which the state defines the basic ethical principles of artificial intelligence and regulates in detail the activities of public- and private-sector entities under each of these principles. A distinctive feature of the Thai model is its emphasis on the training and professional development of civil servants who can use digital technologies in administration and effectively implement ethical principles in the performance of their functions. Thailand has also included users of artificial intelligence in the circle of actors responsible for implementing ethical requirements: as active participants in regulatory processes, they can quickly influence the content of algorithms by providing the necessary information to artificial intelligence operators.
The Thai model is designed to cultivate an active rather than passive user of digital technologies, one encouraged to raise awareness and improve skills in their use, which strengthens the legal status of users through "soft law" tools. Thailand's implementation of its model of ethical regulation of artificial intelligence should have a positive impact on information security.


Keywords:

personal data, information security, ethics, data security, critical information infrastructure, Thailand, ASEAN, digital economy, fintech, national security


Introduction. Today, artificial intelligence (hereinafter, AI) technologies play an important role in many areas of human activity: Siri, Alexa and Google Assistant in smartphones, Tesla's autopilot, social media feeds and music services, YouTube recommendation lists, and the traffic maps of Yandex.Maps and Google Maps are just a few examples of AI technologies in use. AI is also actively employed in finance and banking, insurance, the aviation and defense industries, medicine and law enforcement.

Artificial intelligence may be one component of decision-making and processing tools, or it may be the core technology of a production process, which portends changes in the economy and the labor market. A negative factor in the use of AI technologies is the unpredictability of decisions and of the consequences of applying them. Establishing clear and unambiguous guidelines for the use of AI technologies is therefore one of the tasks of the state.

Government, the scientific community and industry have reached a consensus that the security of applications using AI technologies is becoming a key factor in their adoption; safeguards to reduce the potential risks associated with AI therefore need to be adopted at the state level [1]. To this end, the private and public sectors are developing catalogs of ethical AI principles. Verifying compliance with these principles in existing AI systems and products requires sophisticated AI governance systems covering training, testing and security verification. Such systems are still under intensive development and are not yet ready for widespread commercial deployment. The main technical obstacles are rooted in fundamental problems of modern AI research, such as human-level moral cognition, common-sense ethical reasoning and the interdisciplinary development of AI ethics. Nevertheless, some states have defined at the governmental level the fundamental ethical principles within which AI technologies must be developed and applied.

Researchers note that, against the background of global competition for the opportunities offered by artificial intelligence, many countries and regions are openly participating in a "race for AI" [2]. In their view, growing awareness of the risks associated with AI technology has produced increasingly loud calls for regulators not to limit themselves to the benefits but to ensure proper regulation of "trustworthy" AI, that is, AI that is lawful, ethical and robust. Beyond minimizing risks, such regulation could foster the adoption of AI, increase legal certainty and thereby also advance a country's position in the race. Hence, the researchers conclude, the "race for AI" also generates a "race for AI regulation" [2].

Following China [1] and Singapore [3], Thailand has also begun regulating AI technologies, challenging Singapore's leadership in ASEAN in information security and fintech, taking initiatives to unify information security standards, and offering its infrastructure and resources for international projects [4]. Thailand has been actively building up its regulatory framework for cybersecurity. In the three years from 2017 to 2019, it adopted the National Cybersecurity Strategy 2017-2021, the Personal Data Protection Act 2018 and the Cybersecurity Act 2019. For the orderly development of the fintech industry, the Digital Government Development Plan 2020-2022 (URL: https://www.dga.or.th/wp-content/uploads/2021/09/-?.?.-2563-2565-.pdf) and the AI Ethics Guideline prepared by the Ministry of Digital Economy and Society of Thailand (URL: https://www.etda.or.th/getattachment/9d370f25-f37a-4b7c-b661-48d2d730651d/Digital-Thailand-AI-Ethics-Principle-and-Guideline.pdf.aspx?lang=th-TH) were adopted.

Ethical requirements for AI technologies, and accordingly for the products and services that use them, are intended to secure the information technology sphere and to prevent violations of the rights and legitimate interests of those involved. AI is undoubtedly used to facilitate and accelerate technical processes and to minimize production costs, which benefits AI operators and users of products and services alike. On the other hand, there are risks of abuse of these technologies both by operators and by third parties unlawfully interfering in these processes. Nor should we forget the risk of discriminatory or unfair AI decisions in the event of flawed algorithms or compromised process integrity. These factors affect the overall security of information systems. The ethical regulation of AI therefore ultimately affects the information security of a particular sector, whether finance and banking or healthcare. Moreover, the use of AI technologies at critical information infrastructure facilities requires a cautious approach in both technical and organizational-legal respects. The above determines the relevance of our research.

The purpose of the study is to identify the features of Thailand's approach to the ethical regulation of the use of artificial intelligence technologies from the standpoint of information security. To obtain the most reliable scientific results, a number of general scientific methods (system-structural and formal-logical) and special legal methods of cognition (comparative-legal and formal-legal) were used. The subject of the study comprises Thailand's main programmatic and regulatory legal acts in the field of AI regulation, as well as a number of scholarly studies on the topic.

The topic chosen for this study has not yet been addressed in the Russian scholarly literature and is a logical continuation of our research under the RFBR grant "Ensuring the rights of investors in the banking and financial sectors in the conditions of digitalization of the economy in the Russian Federation and the leading financial centers of East Asia: a comparative legal aspect". Among foreign studies, two key works deserve mention. An article published in 2008 examined the ethical issues and policy implications of introducing information technologies into Thailand's national security system [5]. Its value lies in being the first attempt to shed light on a "forbidden" topic in Thailand by comparing the experience of the United States and Thailand. According to the author, the American model of national security significantly influences the approach of Thai intelligence and its use of information technology, while both intelligence communities face similar problems and provoke ethical debates about human rights violations. The second article, published in 2021, analyzed the AI Ethics Guideline in light of political events and social transformations [6]. According to its author, Thailand is trying to join the ranks of more developed economies while the legislator retains a strong desire to preserve its own traditional values; Thailand has not yet resolved this dilemma, and this is visible in the content of the document.

The main part. Artificial intelligence is changing labor markets in developed and emerging economies alike, raising fears of unprecedented job losses. AI technologies are already widely used across sectors of the economy: from law and medicine to accounting, digital manufacturing and tourism. Some industries are particularly vulnerable to automation, such as industrial manufacturing and business process outsourcing (BPO). In emerging economies with a significant workforce in these areas, there are growing calls for proactive action to minimize job losses and to seize the job-creation opportunities that technology provides. Governments should therefore be interested in the proper use of AI technologies and related automation tools to strengthen the international competitiveness of national producers, improve the quality of public services and develop new types of business.

In Thailand, the authorities are developing measures that will allow them to exploit these opportunities and minimize the risks. The Digital Government Development Plan 2020-2022 (URL: https://www.dga.or.th/wp-content/uploads/2021/09/-?.?.-2563-2565-.pdf) identifies several strategic technological directions for the digitalization of public administration:
(1) the use of virtual reality (VR) and augmented reality (AR) technologies to model environments or situations for public safety management, telemedicine, and new formats of education and tourism;
(2) processing big data and producing forecasts and estimates for business using Internet of Things (IoT) and Smart Machine technologies to analyze user responses in real time;
(3) applying Advanced Geographic Information System technology to the management of geographic data, including agricultural resources, the transport system and other areas;
(4) disclosing information and providing data to users by updating databases and websites to ensure wider public access to information;
(5) applying Smart Machine technology to manage and respond to automated services (the Smart Machine system will be developed gradually and will thus be able to assess and solve problems across the entire service delivery chain);
(6) addressing cybersecurity problems by establishing cybersecurity standards and revising the relevant rules to improve them;
(7) using blockchain technology for data storage and network use in order to perform verification and reduce the number of intermediaries in a trusted security environment;
(8) using cloud computing for data storage to simplify system installation, reduce maintenance costs and save investment in networking;
(9) using IoT technology to help transform public services into digital formats, while also supporting government work in communications, mobile technologies, big data analysis and cooperation with the business sector;
(10) training personnel with digital skills and training civil servants to improve public services through the effective use of digital technologies and effective management.

The AI Ethics Guideline (hereinafter, the Guideline) establishes six ethical principles of AI: (1) competitiveness and sustainable development; (2) compliance with laws, ethics and international standards; (3) transparency and accountability; (4) security and privacy; (5) equality, diversity, inclusiveness and fairness; (6) reliability. All actors involved with AI technologies are divided into three groups: government (regulatory) bodies; AI technology operators (developers, designers, researchers, manufacturers, suppliers); and users. For the first two groups, separate prescriptions have been developed for implementing these principles in their activities. Let us consider how each ethical principle is to be implemented.

Competitiveness and sustainable development. Government (regulatory) bodies should analyze and evaluate the research, development and application of AI in terms of benefits for people, society and the environment, subsequently supporting the most promising projects and stimulating cooperation between stakeholders.

It is necessary to create a digital infrastructure that accumulates knowledge about AI technologies and mechanisms, and to monitor and control the use of AI technologies to prevent negative impacts on processes. Regulators should develop policies that support both public and private entities and stimulate innovation in AI, allowing a new industry to emerge, and should then promote and support it. Particular attention should be paid to educational projects that spread knowledge of AI among the general population. To counter potential AI threats, it is important to build partnerships with national, regional and international actors and to participate in developing AI regulatory frameworks at the appropriate scale.

AI technology operators must possess the necessary knowledge and understanding of the AI technology processes for which they are responsible. It is important to present the advantages that a specific AI technology brings to individuals and to the social, economic and environmental spheres of life, and to reduce the various risks that may arise from its use. The design, development and deployment of AI should take into account the benefits accruing to stakeholders and the environment, as well as the possibility of modifying its operation.

Compliance with laws, ethics and international standards. Government (regulatory) bodies are charged with promoting education to increase awareness and understanding of AI and its impact on users, and with supporting research on AI and human rights.

Regulations should be adopted concerning procurement from developers and suppliers of AI technologies, personal data security, the right to privacy, confidentiality and other human rights. It is necessary to support the development and implementation of internationally recognized standards and best practices, and to ensure certification (licensing) of the activities of AI developers and service providers. Important steps in this direction are supporting the creation of public organizations that exercise public and professional oversight of AI operators, and establishing a system of legal advice for AI operators on the legal consequences of their activities, ethics and human rights.

AI technology operators need to develop measures to assess, reduce and prevent the legal and ethical risks of violating human rights and freedoms and of negative impacts on society and the environment. All actions involving AI technologies should be carried out in accordance with the stated ethical principles. An ethical evaluation of the research, design, development, maintenance and use of AI technologies should be conducted and its results presented to stakeholders, together with an assessment of the risk that these activities violate ethical principles. AI technologies should be designed to allow human intervention at every stage of decision-making and human control over all AI actions, including the possibility of correction.
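The requirement that a human be able to intervene at every decision-making stage is often realized as a review-routing pattern: low-confidence model outputs are escalated to a human who may adjust the outcome. The sketch below is purely illustrative and is not prescribed by the Guideline; all names (`decide`, `review_threshold`) and the threshold value are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Decision:
    subject_id: str
    model_output: str
    confidence: float
    final: Optional[str] = None
    overridden: bool = False

def decide(subject_id: str,
           model: Callable[[str], Tuple[str, float]],
           review_threshold: float = 0.8,
           human_review: Optional[Callable[[Decision], str]] = None) -> Decision:
    """Route low-confidence model outputs to a human reviewer."""
    output, confidence = model(subject_id)
    d = Decision(subject_id, output, confidence)
    if confidence < review_threshold and human_review is not None:
        d.final = human_review(d)               # human may adjust the outcome
        d.overridden = d.final != d.model_output
    else:
        d.final = d.model_output                # confident output passes through
    return d

# A confident prediction passes through; an uncertain one is escalated.
auto = decide("a1", lambda s: ("approve", 0.95))
manual = decide("a2", lambda s: ("approve", 0.55), human_review=lambda d: "deny")
```

The key design point is that the human correction is recorded (`overridden`), so later audits can see where people disagreed with the model.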

Transparency and accountability. AI regulators should review the transparency of the models and algorithms used by AI operators for compliance with the principle of explainability: the algorithms used in a product or service, like the methods used to train the AI, should be understandable, easily explainable and predictable.

This thesis should underpin the policies, technical standards and rules developed by regulators. A mechanism should be developed to ensure the accountability of the AI operator throughout the system's entire life cycle, providing for both internal and external audit. The audit report must contain sections on risk assessment; actions to reduce or prevent risks; the negative impact of AI technologies on people and the environment; and the person responsible for investigating and eliminating the cause of such impact.

AI technology operators should provide access to the data sets used by AI and provide for the possibility of restricting the algorithms and processes used. Where this is not possible, the algorithms behind AI actions should be explained and stakeholders informed of the reasons for a given decision. Users should be told that they are interacting with artificial intelligence and should understand that the outcome of decisions depends on it. Algorithms should be monitored and the results recorded in an audit log. Data collection methods, data sets, algorithms and decision-making processes should be diagnosed periodically.
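The obligations above — flagging a decision as automated, stating its reason, and recording it in an audit log — can be sketched as a simple append-only log. This is a minimal illustration, not an implementation mandated by the Guideline; the class and field names are hypothetical.

```python
import json
import time

class DecisionAuditLog:
    """Append-only record of automated decisions for later audit."""
    def __init__(self):
        self._entries = []

    def record(self, model_version: str, inputs: dict, outcome: str, reason: str) -> dict:
        entry = {
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "outcome": outcome,
            "reason": reason,    # human-readable explanation of the decision
            "automated": True,   # users must be told the decision was automated
        }
        self._entries.append(entry)
        return entry

    def export(self) -> str:
        """Serialize all entries for an internal or external auditor."""
        return json.dumps(self._entries, indent=2)

log = DecisionAuditLog()
log.record("credit-model-v2", {"income": 42000}, "approved",
           "income above configured threshold")
```

Keeping the `reason` field mandatory forces the operator to produce an explanation at decision time rather than reconstructing one later.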

Security and privacy. Government (regulatory) bodies should develop policies and technical standards for security, personal data protection and confidentiality in order to minimize vulnerability to threats arising from AI.

It is important to require disclosure of information relevant to the safety of life, health and the environment. AI regulators should implement a risk management system, define risk management and internal control methods, and review and improve them regularly throughout the system's life cycle, including, if necessary, its decommissioning; such reviews should occur at least once a year or whenever the AI technology changes significantly. At the state level, a long-term plan for supervision, monitoring and risk management should be developed, agreed and implemented: this will make it possible to prepare for the emergence of recursively self-improving AI. Cooperation at different levels should be encouraged and supported, providing for the possibility of developing integrated AI at the level of an organization, a country or a region. It is necessary to identify "undesirable" AI technologies (those capable of harming people) and to support the exchange of knowledge and supervisory experience in order to cope with the negative impact of AI technologies.

AI technology operators should design, develop and provide services so as to ensure the security of information systems and their protection against threats and unwanted applications. Special attention should be paid to protecting personal data, human life and health, and the environment. AI technologies used to process personal data must adhere to the principles of lawfulness and fairness, pursue a specific processing purpose, and comply with the requirements of proportionality and confidentiality. Particular emphasis is placed on compliance with the Personal Data Protection Act 2018, especially regarding the volume of data, informing the data subject, and the anonymization and deletion of personal data. Critical information infrastructure entities working with AI technologies should follow the requirements of the Cybersecurity Act 2019 and ensure the resilience of their systems to attacks, maintaining a fallback plan for restoring systems after incidents.
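The proportionality and anonymization requirements mentioned above are commonly approximated in code by data minimization (keeping only fields needed for the stated purpose) plus pseudonymization of the direct identifier. The sketch below is illustrative only and does not by itself establish compliance with the Personal Data Protection Act 2018; the field names and salt are hypothetical, and a salted hash is pseudonymization, not full anonymization.

```python
import hashlib

# Only the fields needed for the stated processing purpose (an assumption).
ALLOWED_FIELDS = {"age_band", "region"}

def pseudonymize(record: dict, salt: str = "rotate-me") -> dict:
    """Drop fields outside the processing purpose and replace the direct
    identifier with a salted hash so records stay linkable but not named."""
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    digest = hashlib.sha256((salt + record["citizen_id"]).encode()).hexdigest()
    out["subject_ref"] = digest[:16]
    return out

raw = {"citizen_id": "1234567890123", "name": "Somchai",
       "age_band": "30-39", "region": "Bangkok"}
clean = pseudonymize(raw)
```

Rotating or destroying the salt is what turns linkable pseudonyms into effectively anonymized references; deletion requests then reduce to removing the mapping.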

Equality, diversity, inclusiveness and fairness.

Government (regulatory) bodies should establish, promote and maintain guidelines for the use of AI technologies and encourage their diversity across types and scenarios of use. The AI regulator should encourage the creation of an open AI platform to avoid monopolies, which slow development. Anti-monopoly legislation should counteract the use of AI technologies to suppress competition at both regional and industry scale. Equal opportunities for access to education, products, services and related technologies in the field of AI should be promoted and supported, and potential users encouraged to make more active use of AI-equipped products and services.

AI technology operators should research, design and develop AI technologies with the needs and expectations of users in mind, covering all social categories, including vulnerable groups (minorities, the elderly, people with disabilities). Stakeholders whose interests may be affected by biased or unfair AI decisions should be involved in the design and development of such technologies. Systematic errors that lead to biased decisions should be monitored and corrected promptly. Separate data sets should be used for AI training, testing and validation: this makes it possible to detect systematic errors in each data set. AI development should comply with existing international standards and recommendations, such as ISO/IEC 40500:2012 (W3C Web Content Accessibility Guidelines 2.0) and WCAG 2.1. Bias detection tools should be used to test an AI system's propensity for biased decisions, and testing should check results on data that was not included in the training data sets. People with disabilities should be involved in testing the tasks and processes of AI technologies, which will yield results that meet the needs and expectations of such users. The development and maintenance of AI should rest on the principle of social justice: all users should have equal access to the AI technologies used in products and services, regardless of age, gender, race or other differences.
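One of the simplest checks a bias detection tool performs is comparing outcome rates across groups (the demographic parity gap). The sketch below is an illustration under assumed data, not a tool named in the Guideline; real bias audits use richer metrics and statistical tests.

```python
def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved: bool).
    Returns the largest difference in approval rate between any two groups."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical sample: group A approved 2/3 of the time, group B 1/3.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(sample)
```

A gap near zero does not prove fairness (groups may legitimately differ), but a large gap flags a model for the human review the Guideline requires.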

Reliability. Government (regulatory) bodies should develop regulations and establish criteria and processes for continuously assessing the quality and reliability of data sets and AI models by analyzing data and user feedback, with subsequent improvement of the AI model.

Policies and procedures should be developed to verify communication channels with users, and regular checks should be carried out to obtain the necessary information. In parallel, the research, design, development, maintenance and use of reliable artificial intelligence technologies should be supported. Guidelines for assessing the quality and reliability of AI should be developed jointly with interested representatives of the public and private sectors.

AI technology operators should define their AI research methods: design methods, development methods and methods of use should be clearly specified for each stage and applied systematically and comprehensively. It is necessary to know and understand the factors affecting the quality of the data sets used in AI, such as accuracy, completeness, veracity, currency, relevance, integrity, usability and the extent of human intervention. Particular attention should be paid to technologies that access confidential data (medical secrecy, personal data, financial and banking secrecy, data on legal and law enforcement activities). AI models should be tested under conditions reflecting actual operating dynamics to ensure safety for users and the environment. AI technology providers should organize feedback channels through which users can report problems, as well as a channel for requesting review of decisions made by the system.
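Data-quality factors such as completeness and currency can be checked mechanically before a data set is used for training. The following sketch is illustrative only: the field names, fixed reference date and one-year staleness window are assumptions, not requirements of the Guideline.

```python
from datetime import datetime, timedelta

REQUIRED = {"id", "value", "updated_at"}   # assumed schema for the sketch

def quality_report(rows, max_age_days=365):
    """Count records that are incomplete or stale relative to a cutoff."""
    now = datetime(2024, 1, 1)  # fixed reference date so the sketch is deterministic
    incomplete = sum(1 for r in rows if not REQUIRED <= r.keys())
    stale = sum(1 for r in rows
                if "updated_at" in r
                and now - r["updated_at"] > timedelta(days=max_age_days))
    return {"rows": len(rows), "incomplete": incomplete, "stale": stale}

rows = [
    {"id": 1, "value": 10, "updated_at": datetime(2023, 6, 1)},
    {"id": 2, "value": 20, "updated_at": datetime(2021, 1, 1)},  # stale
    {"id": 3, "value": 30},                                      # incomplete
]
report = quality_report(rows)
```

Such a report gives the operator a concrete artifact to attach to the periodic diagnostics and audits the Guideline calls for.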

The Guideline provides a set of guarantees for the rights of users of products and services equipped with AI technologies. These guarantees correlate with AI operators' responsibilities for the ethical use of AI and protect the rights of the technologically vulnerable party. In particular, users should improve their knowledge and skills in using AI products and services and understand the advantages of AI solutions. It is important to follow news about AI technologies in order to be aware of possible threats, to check the reliability of AI products and services, and to verify certificates issued by authorized organizations. Users should familiarize themselves with the design and principles of the AI and may request detailed information from service providers about the ethical principles applied in AI products and services. If problems arise in using products or services, users should give the service provider all the information needed to improve, correct and develop the AI technology. Users may request explanations from researchers, designers and service providers about the algorithm and principles of the technology in order to rule out erroneous or objectionable results. Where personal data are used for the research, design, development or provision of AI services, users have the right under the Personal Data Protection Act 2018 to request and receive information about the data sets used by AI operators. The right to revoke consent to the collection, use or disclosure of information is guaranteed, as is the right to object to such actions by the operator, including demanding the deletion/destruction or temporary suspension of the use of personal data.
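The user rights described above — revoking consent and demanding deletion of personal data — imply that an operator tracks consent state and gates processing on it. A minimal sketch, with hypothetical names and no claim of PDPA compliance on its own:

```python
class ConsentRegistry:
    """Track per-subject consent and honour revocation/deletion requests."""
    def __init__(self):
        self._consent = {}  # subject_id -> bool
        self._data = {}     # subject_id -> personal data held under that consent

    def give_consent(self, subject_id: str, data: dict) -> None:
        self._consent[subject_id] = True
        self._data[subject_id] = data

    def revoke(self, subject_id: str, delete: bool = False) -> None:
        """Revoke consent; optionally delete the stored personal data too."""
        self._consent[subject_id] = False
        if delete:
            self._data.pop(subject_id, None)

    def may_process(self, subject_id: str) -> bool:
        """Processing is allowed only while consent is on record."""
        return self._consent.get(subject_id, False)

reg = ConsentRegistry()
reg.give_consent("u1", {"email": "u1@example.com"})
reg.revoke("u1", delete=True)
```

The design choice to check `may_process` before every use, rather than at ingestion only, is what makes revocation take effect "quickly", as the Thai model expects of operator-user interaction.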

Conclusions. Summing up, we note the following features of Thailand's approach to the ethical regulation of artificial intelligence technologies from the standpoint of information security. Thailand implements a state-centric model in which the state defines the basic ethical principles of AI and regulates in detail the activities of public- and private-sector entities under each of these principles. Unlike the two-tier model of ethical AI regulation developed in China, the Thai model contains only one set of general principles; given the compressed (two-year) plans for digitalizing public administration and the economy, with strategically important industries singled out, such a model seems justified. A distinctive feature of the Thai model is its emphasis on the training and professional development of civil servants capable of using digital technologies in administration, which will allow them to implement the ethical principles of AI effectively in the performance of their functions. Designating government (regulatory) bodies as the entities responsible for the deployment of AI technologies, and assigning them corresponding responsibilities in ethical regulation, is an additional guarantee of information security. Thailand's inclusion of users of AI-equipped products and services in the circle of actors responsible for implementing ethical requirements is noteworthy: users are involved as active participants in regulatory processes who can quickly influence the content of algorithms by providing the necessary information to AI operators. The Thai model is designed to cultivate an active rather than passive user of digital technologies, one encouraged to raise awareness and improve skills in their use, which strengthens the legal status of users through "soft law" tools.
It can be expected that Thailand's implementation of this model of ethical regulation of artificial intelligence will have a positive impact on information security.

The research was carried out with the financial support of the RFBR within the framework of the scientific project 20-011-00454 "Ensuring the rights of investors in the banking and financial sectors in the conditions of digitalization of the economy in the Russian Federation and the leading financial centers of East Asia: a comparative legal aspect".

References
1. Gorian, E.V. (2022). Ethical regulation of artificial intelligence as a security factor for the financial and banking sector: experience of China. Security Issues, 2, 41-52. DOI: 10.25136/2409-7543.2022.2.38380.
2. Smuha, N.A. (2021). From a ‘race to AI’ to a ‘race to AI regulation’: regulatory competition for artificial intelligence. Law, Innovation and Technology, 13(1), 57-84.
3. Gorian, E.V. (2020). Artificial Intelligence in the financial and banking sector: experience of Singapore. The Territory of New Opportunities. The Herald of Vladivostok State University of Economics and Service, 3(12), 86–99.
4. Gorian, E.V. (2021). Thailand Cyber Security Regulatory Mechanism. Security Issues, 3, 1-20. DOI: 10.25136/2409-7543.2021.3.36255.
5. Kitiyadisai, K. (2008). Information systems for national security in Thailand: ethical issues and policy implications. Journal of Information, Communication and Ethics in Society, 6(2), 141-160.
6. Hongladarom, S. (2021). The Thailand national AI ethics guideline: an analysis. Journal of Information, Communication and Ethics in Society, 19(4), 480-491.

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The subject of the study. The subject of the peer-reviewed article "Ethical Regulation of Artificial Intelligence as a Factor of Information Security: the Experience of Thailand" is the ethical standards applied in Thailand that regulate the use of artificial intelligence and ensure information security in society.

Research methodology. In preparing the article, the author employed a range of modern methods of scientific inquiry, the principal one being comparative analysis. Without studying foreign experience and drawing on comparative jurisprudence to avoid repeating mistakes in domestic legislation and judicial practice, it is difficult to assess how effectively problems of law and law enforcement concerning the legal status (regime) of artificial intelligence are being addressed in Russia. The author also combined theoretical and empirical material. On the whole, the use of modern methods of scientific inquiry allowed the author to present the material consistently, competently and clearly.

Relevance of the research. The study of progressive provisions of foreign law can indeed contribute to improving domestic legislation: it allows one not only to discern identity and historical continuity, but also to establish whether provisions on the legal status (regime) of artificial intelligence, as a new legal phenomenon, share common features characterizing it as a special value in the information society, and to what extent foreign borrowing is acceptable in shaping one's own legal system without prejudice to its uniqueness and individuality. The study of foreign experience reveals many issues and conflicts that require resolution in national law, as the author of the reviewed article points out. The various digital technologies used in the information society call for an appropriate approach to legal regulation, and the global digitalization of public relations needs adequate legal regulation. It is precisely for these reasons that foreign experience of legal regulation is important.

Scientific novelty. The article on the topic "Ethical Regulation of Artificial Intelligence as a Factor of Information Security: the Experience of Thailand" is undoubtedly new for Russian legal science; there is no fundamental research on this topic. This is explained by the novelty of the institution of artificial intelligence itself and by the fact that the formation of legal norms governing public relations involving artificial intelligence has only recently begun. Moreover, scientific disputes over whether artificial intelligence can be recognized as a subject of law add further uncertainty to this question.

Style, structure, content. The article is written in a scientific style and is well structured; its content covers the topic stated by the author.

Bibliography. In our opinion, the author should also have studied the works of Russian scholars who address the legal status of artificial intelligence (A.V. Minbaleev, T.A. Polyakova, V.B. Naumov, E.V. Vinogradov, T.Ya. Khabrieva, and others); the article would only benefit from coverage of general theoretical issues. This remark, however, does not detract from the work done by the author. We believe the topic of the article is quite narrow, which is why the number of sources is small, although the generally accepted requirement for scientific articles is at least 15 sources, including publications of recent years.

Appeal to opponents. The author's engagement with opponents is entirely correct.

Conclusions, interest for the readership. The article "Ethical Regulation of Artificial Intelligence as a Factor of Information Security: the Experience of Thailand" as a whole (apart from the remark on the bibliography) meets the requirements for publications of this kind and is recommended for publication in the scientific journal "Security Issues". Given the relevance of its topic, it may be of interest not only to specialists in information law and information security, but also to a wide range of readers.