Reference:
Sheinman, V., Starikov, D.D., Tiumentsev, D.V., & Vavilov, G.D. (2024). Improving the Efficiency of Software Development Processes: Container Technologies. Software Systems and Computational Methods, 4, 151–161. https://doi.org/10.7256/2454-0714.2024.4.72015
Improving the Efficiency of Software Development Processes: Container Technologies
DOI: 10.7256/2454-0714.2024.4.72015
EDN: JXRTJC
Received: 17-10-2024
Published: 05-01-2025

Abstract: The article examines the impact of container technologies on software development processes. It focuses on the role of containerization in optimizing the deployment and management of applications and in increasing the flexibility and scalability of software systems. The study analyzes key aspects of containerization, including application isolation, improved portability of software between different environments, and lower operating costs through more efficient use of computing resources. Modern tools such as Docker and Kubernetes, which make it possible to standardize and automate the deployment and management of infrastructure, are considered. To analyze the effectiveness of container technologies, benchmarking techniques were used to evaluate their impact on infrastructure flexibility and on the performance of software systems; scientific publications served as the data sources. The novelty of the research lies in examining container technologies in the context of modern software development practices, which makes it possible to significantly accelerate the development, testing, and deployment of software products. The results show that containerization improves system performance, simplifies application management, and reduces operational costs. Examples of the practical use of Docker and Kubernetes in large companies demonstrate that containerization significantly increases infrastructure flexibility and the scalability of solutions, allowing developers to adapt easily to changing conditions and market requirements. In conclusion, it is emphasized that container technologies play a key role in modern software development processes, and their further development will bring even greater improvements in automation and in the infrastructure management of software systems.

Keywords: containerization, software development, Docker, Kubernetes, scalability, resource optimization, software operation, automation, platforms, process isolation

Introduction

Container technologies play an important role in software development by providing effective solutions for managing applications and infrastructure. With the increasing complexity of software systems and the rapid pace of change in the IT industry, containerization provides the tools needed to optimize the processes of creating, testing, and deploying software. Container platforms such as Docker, as well as container orchestration systems such as Kubernetes, have become the de facto standard in development. These technologies allow applications and their dependencies to be isolated in independent, lightweight environments, which increases the flexibility and portability of software between different computing environments, eliminates problems caused by configuration and infrastructure incompatibilities, and simplifies software lifecycle management. The purpose of this article is to analyze how the use of container technologies helps improve the efficiency of software development processes.

The main part. The concept and features of containerization

Containerization is the methodology of creating and deploying software applications in isolated environments.
Containers provide the means to package software and all its dependencies into self-sufficient units isolated from the host system, which greatly simplifies the portability and reproducibility of applications across different computing platforms [1]. Unlike traditional virtualization, containers share the host operating system kernel while providing full isolation of processes, the file system, and network resources for each container. Containers rely on isolation mechanisms such as cgroups and namespaces, which restrict access to resources and create isolated namespaces for processes, network interfaces, and file systems [2]. This ensures that applications running in containers do not affect one another, providing a high level of security and isolation without the need for full-fledged virtualization.
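As a minimal sketch of the cgroup-backed resource limits just described (not part of the original article), the following Python snippet assumes a local Docker installation and simply shells out to the standard Docker CLI; the image name, container name, and limit values are illustrative assumptions.

```python
import subprocess


def run_limited_container() -> str:
    """Start a detached container constrained by cgroup-backed limits."""
    cmd = [
        "docker", "run", "--rm", "--detach",
        "--name", "demo-limited",   # hypothetical container name
        "--memory", "256m",         # memory cap enforced via the memory cgroup
        "--cpus", "1.0",            # CPU cap enforced via the cpu cgroup
        "--pids-limit", "100",      # process-count cap enforced via the pids cgroup
        "nginx:alpine",             # illustrative image
    ]
    # `docker run --detach` prints the new container's ID on stdout.
    container_id = subprocess.run(
        cmd, check=True, capture_output=True, text=True
    ).stdout.strip()
    print(f"Started isolated container {container_id[:12]}")
    return container_id


if __name__ == "__main__":
    run_limited_container()
```

The container also receives its own process, network, and mount namespaces automatically, so the workload inside it cannot see or interfere with processes on the host.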
Containerization provides many advantages for software development and operation, covering various aspects of working with applications, from greater flexibility to more efficient resource usage (Table 1).

Table 1. Advantages of containerization [3, 4]

The advantages of containerization presented in the table underline its importance for modern software development and operation. More efficient use of resources and faster deployment help optimize infrastructure and reduce application maintenance costs. Taken together, this makes containerization a powerful tool for accelerating innovation and improving overall efficiency in the IT industry.

The main technologies and tools of containerization

According to an online survey conducted by the American company Statista in 2024, Docker is one of the most popular and sought-after containerization technologies: 59% of developers reported using it [5]. Docker provides tools for creating, managing, and deploying containers, allowing software products to be packaged together with their dependencies into standard container images (Fig. 1).

Figure 1. Basic states of containers in Docker

Docker's components include the Docker Engine, Docker Hub, and Docker Compose [6]. The Docker Engine is the key element of the platform, responsible for creating and executing containers and for interacting with the operating system. Docker Hub is a cloud-based platform for storing and distributing container images; it allows developers to share ready-made software components with other users and to download the products they need from a shared repository. Docker Compose is a tool that simplifies working with multi-container applications: configurations are described as YAML files, and Compose manages how the containers are launched and how they interact.

The container orchestration system built into Docker is called Docker Swarm [7]. It integrates easily with existing Docker environments and distributes containers across multiple nodes, supporting load balancing and automatic recovery.

Kubernetes is an open-source container orchestration system designed to automate the deployment, management, and scaling of container-based software. It is built to work efficiently with large distributed systems and offers powerful tools for managing large numbers of containers in a variety of computing environments [8]. This makes Kubernetes an indispensable tool for companies seeking a high level of automation and flexibility in software development (Fig. 2).

Figure 2. Kubernetes system architecture

Developers and operators interact with the system through an API server to manage containerized applications. Worker nodes host pods, which are groups of containers, together with the components used for monitoring and management: Kubelet, cAdvisor, and Kube-Proxy. The nodes are connected through network plug-ins that allow containers in the cluster to communicate.

According to Statista research, in 2023, 38% of developers identified improved system performance, availability, and fault tolerance as the main advantage of Kubernetes. Key features of the technology include automatic scaling, network and storage orchestration, self-healing, and load balancing [9]. Automatic scaling lets the system dynamically change the number of containers depending on the current load on the application, which contributes to efficient resource usage. Network and storage orchestration manages the interaction between containers and provides access to external data stores, simplifying work with distributed data. Self-healing automatically restarts or replaces containers when they fail, keeping the system running without interruption. Load balancing distributes incoming traffic among containers, which helps maintain optimal product performance even under high load.
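To make the deployment and scaling mechanics concrete, the sketch below (an illustration added by the editor, not the article's own material) assumes the official Kubernetes Python client and an existing kubeconfig; it declares a small Deployment and then patches its replica count, the same operation an autoscaler performs automatically. The deployment name, labels, image, and replica numbers are hypothetical.

```python
from kubernetes import client, config


def main() -> None:
    # Assumption: a reachable cluster and a local kubeconfig already exist.
    config.load_kube_config()
    apps = client.AppsV1Api()

    labels = {"app": "web-demo"}  # hypothetical label set
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web-demo"),
        spec=client.V1DeploymentSpec(
            replicas=2,  # initial number of pod replicas
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.27",  # illustrative image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]),
            ),
        ),
    )

    # Create the Deployment; Kubernetes keeps the declared number of replicas
    # running and replaces failed containers (self-healing).
    apps.create_namespaced_deployment(namespace="default", body=deployment)

    # Scale out by patching the desired replica count; a Horizontal Pod
    # Autoscaler would apply the equivalent change based on observed load.
    apps.patch_namespaced_deployment_scale(
        name="web-demo",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )


if __name__ == "__main__":
    main()
```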
OpenShift is a containerization platform based on Kubernetes. It provides additional tools for managing the security, development, and operation of applications in containers (Fig. 3).

Figure 3. OpenShift scheme

OpenShift is aimed at the corporate segment and includes application development tools [10]. The user initiates backups through a command that interacts with the Kubernetes API; a backup controller manages the process, saving configurations to storage and uploading data, together with disk snapshots, to a cloud provider. The platform offers improved container security and authentication management mechanisms, as well as integration with CI/CD (continuous integration and continuous deployment), making it an effective tool for large organizations with complex infrastructure requirements.

CRI-O is an open-source project designed specifically for use with Kubernetes. It provides a minimalistic yet efficient container runtime that allows Kubernetes to manage containers directly, without more complex solutions such as Docker (Fig. 4).

Figure 4. Comparison of Kubernetes interaction with containers via dockershim and cri-containerd

The diagram shows two approaches to how Kubernetes interacts with containers. In the first, dockershim is used: an intermediate layer that lets Kubernetes work with containers through Docker. In this architecture, the Kubelet component sends commands through the CRI (Container Runtime Interface) to Docker, which in turn manages containers through the low-level containerd component; containers are thus created and managed by Docker, adding an extra layer of abstraction. In the second approach, Kubelet talks to a CRI-compatible runtime (cri-containerd in the figure; CRI-O follows the same pattern) directly over the CRI interface, eliminating the need for Docker, which simplifies the architecture, improves performance, and removes redundant layers. CRI-O deliberately limits its functionality to the tasks required by Kubernetes, which makes it a more lightweight and efficient alternative for Kubernetes clusters [11].

Podman is an open-source tool with functionality similar to Docker, but with one important difference: Podman does not require a background daemon to manage containers. This makes it a more secure alternative to Docker, especially in environments with strict security requirements (Fig. 5).

Figure 5. Comparison of the Docker and Podman architectures

Podman integrates with related tools such as Buildah, for building container images, and Skopeo, for managing them, making it a reliable option for working with containers where security requirements are high. According to a study carried out at the National Energy Research Scientific Computing Center (NERSC) in collaboration with the American software vendor Red Hat, Podman has been adapted for use in high-performance computing (HPC). It offers advantages such as the ability to run containers without root privileges, which raises the level of security, and the ability to build container images directly on the nodes of the Perlmutter supercomputer [12]. Tests of various Podman operating modes, including podman-exec and the container-per-process mode, demonstrated performance comparable to using the hardware directly, which makes Podman a promising tool.
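As a hedged illustration of the daemonless model described above (assuming Podman is installed locally; the image and command are arbitrary examples added here, not taken from the cited study), the sketch below runs a rootless container directly from an unprivileged Python process, with no background service involved.

```python
import subprocess


def run_rootless(image: str = "docker.io/library/alpine:latest") -> str:
    """Run a throwaway rootless container with Podman and return its output."""
    # Podman has no central daemon: the container is a child process of this
    # call, started under the invoking user's own privileges.
    result = subprocess.run(
        ["podman", "run", "--rm", image,
         "echo", "hello from a rootless container"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    print(run_rootless())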
The use of containers at various stages of software development

Containerization allows developers to manage software systems effectively at every stage of the lifecycle, from development to operation. Containers make it possible to create isolated environments that can be deployed and tested quickly, which is especially important for large companies [13].

During the development phase, containers are used to create standardized environments that ensure the application runs identically on different machines. For example, the American company Google uses containerization to develop its services, including Google Search and Gmail [14]. Containers let developers build applications in environments identical to production, eliminating problems caused by differences in operating system and library configurations, and this greatly simplifies writing code and testing it afterwards. Google Cloud provides the ecosystem needed for faster software development and deployment without compromising security; according to reports, 98% of customers using Google's containerization tools deploy an application successfully on the first attempt in under 5 minutes.

Software testing is a critical stage of development, and container technologies make it possible to create isolated, reproducible test environments. At the American streaming service Netflix, containers play a key role in speeding up testing and improving the quality of new versions of applications and services; for these purposes, Netflix uses Docker and Kubernetes [15]. With Docker, the company creates containers that isolate different versions of its applications, so tests run in identical environments and configuration-related problems are avoided. Kubernetes helps Netflix scale its test environments to simulate different streaming scenarios under different load levels, which allows performance and resilience to be verified under high load, potential errors to be found sooner, and the risks associated with application updates to be reduced.

The Russian company Yandex uses containerization at the testing stage to verify new functionality in its products, also relying on Docker and Kubernetes. Docker is used to create isolated containers in which individual service components are developed and tested [16]; these containers reproduce test environments identical to the real ones, which reduces the likelihood of errors. Kubernetes automates the deployment and management of containers on server clusters, allowing Yandex to scale its test environments and run load tests that check how new features behave under different load levels. Thanks to this combination of Docker and Kubernetes, the company can test multiple versions of its products in parallel, which speeds up the development cycle and reduces the risks of rolling updates out to production.
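The sketch below is a generic, hedged example of the testing pattern just described, not the pipeline of any company named above. It assumes the Docker SDK for Python and a local Docker daemon; the base image, project path, and test command are hypothetical placeholders.

```python
import docker


def run_tests_in_container(project_dir: str = "/path/to/project") -> str:
    """Run a project's test suite inside a disposable, reproducible container."""
    client = docker.from_env()
    # The container provides an identical environment for every run and is
    # removed afterwards, so test runs cannot contaminate each other.
    logs = client.containers.run(
        image="python:3.12-slim",  # reproducible base environment (illustrative)
        command="sh -c 'pip install -r requirements.txt && pytest -q'",
        working_dir="/app",
        volumes={project_dir: {"bind": "/app", "mode": "rw"}},
        remove=True,
    )
    return logs.decode()


if __name__ == "__main__":
    print(run_tests_in_container())
```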
During the software deployment and operation phase, containerization provides significant benefits by giving applications flexibility and scalability. An example is the practice of American Express (USA), which actively uses containerization in software development and operation [17]. The company uses Docker and Kubernetes to automate the deployment of financial services, which allows applications to be scaled flexibly as requirements and system load change. Once deployed, containers continue to play an important role in operating and scaling the software, providing automatic resource management and flexibility in configuration changes.

Conclusion

Container technologies increase the efficiency of software development by isolating applications and their dependencies, which makes software portable between different environments. This eliminates compatibility issues and speeds up development, testing, and deployment. Using containers also optimizes resource allocation, reducing the load on infrastructure and improving the scalability of software products. Platforms such as Docker and Kubernetes provide automation and flexibility in managing applications, which reduces the time needed to deliver new features and minimizes infrastructure maintenance costs.

Funding. The study had no sponsorship.
Authors' contributions. All authors contributed equally to this article.
Conflict of interest. The authors declare no conflict of interest.

References
1. Beloded, N.I., & Demidenko, K.G. (2023). Development and application of containerization technology in software development. Actual Problems of Scientific Research: Theoretical, 57.
2. Bondarenko, A.S., & Zaytsev, K.S. (2023). Using container management systems to build distributed cloud information systems with microservice architecture. International Journal of Open Information Technologies, 11(8), 7-23.
3. Mozharovsky, E.A. (2024). Mobile application development: from idea to market. Modern Scientific Research and Innovation, 1.
4. Aluev, A. (2024). Scalable web applications: a cost-effectiveness study using microservice architecture. Cold Science, 8, 32-38.
5. Muzumdar, P., Bhosale, A., Basyal, G., & Kurian, G. (2024). Navigating the Docker ecosystem: a comprehensive taxonomy and survey. arXiv preprint arXiv:2403.17940.
6. Christudas, B.A. (2024). Introducing Docker. In Java Microservices and Containers in the Cloud: with Spring Boot, Kafka, PostgreSQL, Kubernetes, Helm, Terraform, and AWS EKS (pp. 281-343). Berkeley, CA: Apress.
7. Higgins, T., Jha, D.N., & Ranjan, R. (2024). Swarm Storm: an automated chaos tool for Docker Swarm applications. Proceedings of the 33rd International Symposium on High-Performance Parallel and Distributed Computing, 367-369.
8. Mironov, T.O. (2023). Building an information architecture for a software release automation cycle system. Enterprise Engineering and Knowledge Management.
9. Dudak, A.A. (2024). Comparative analysis of development tools for project management systems: advantages of TypeScript and React technology stacks. New Science: From Idea to Result, 9, 32-40.
10. Glumov, K.S. (2024). Patterns underlying Java, Kubernetes, and modern distributed systems. Bread-Baking in Russia, 68(1), 6-12.
11. Poggiani, L., Puliafito, C., Virdis, A., & Mingozzi, E. (2024). Live migration of multi-container Kubernetes pods in multi-cluster serverless edge systems. Proceedings of the 1st Workshop on Serverless at the Edge, 9-16.
12. Stephey, L., Canon, S., Gaur, A., Fulton, D., & Younge, A. (2022). Scaling Podman on Perlmutter: Embracing a community-supported container ecosystem. 2022 IEEE/ACM 4th International Workshop on Containers and New Orchestration Paradigms for Isolated Environments in HPC (CANOPIE-HPC), IEEE, 25-35.
13. Sidorov, D. (2024). Leveraging web components for scalable and maintainable development. Sciences of Europe, 150, 87-89.
14. Cherednikov, K.A., Lavrova, E.D., & Marukhlenko, A.L. (2023). A modern view on containerization. Modern Information Technologies and Information Security, 115-119.
15. Erdenebat, B., Bud, B., & Kozsik, T. (2023). Challenges in service discovery for microservices deployed in a Kubernetes cluster: a case study. Infocommunications Journal, 15(SI), 69-75.
16. Makarova, N.V., & Savichev, D.E. (2023). Application of artificial intelligence methods in software maintenance. Actual Problems of Economics and Management, 1, 17.
17. Kosarev, V.E., & Dobridnik, S.L. (2023). Practical aspects of developing and implementing a digital ruble in banking information systems. Innovations and Investments, 2, 143-149.