Software systems and computational methods
Reference:
Smirnov, A.A., Podolskiy, E.A., Cherenkov, A.V., Gosudarev, I.B. (2024). A comparative analysis of the performance of JavaScript code execution environments: Node.js, Deno and Bun. Software systems and computational methods, 4, 109–123. https://doi.org/10.7256/2454-0714.2024.4.72206
A comparative analysis of the performance of JavaScript code execution environments: Node.js, Deno and Bun
DOI: 10.7256/2454-0714.2024.4.72206
EDN: KAZINO
Received: 04-11-2024
Published: 05-01-2025

Abstract: The subject of the study is the performance of JavaScript program execution in the modern runtime environments Node.js, Deno and Bun. These platforms are used to develop server applications and differ significantly in architecture, functionality and performance. Node.js is the most mature and widespread solution and is actively used in most modern web applications. Deno is a newer environment developed by the creator of Node.js, offering improved security, TypeScript support and other innovations. Bun, in turn, is a modern, high-performance alternative focused on the speed of server-side applications. The purpose of the study is to identify the performance differences between the major modern runtime environments (Node.js, Deno and Bun) to inform their use in web application development. The study used a computer experiment in Docker containers, with the process automated using Ansible. The execution time of different scenarios was measured in each runtime environment. The scientific novelty of this study is that, for the first time, a holistic and valid methodology for measuring and comparing JavaScript code performance in modern runtime environments is proposed, which will allow researchers to build on this approach in further experiments and extend it to new runtime environments. The results show that Bun performs best in synchronous computations (sorting, JSON processing), but lags behind Node.js and Deno in primality checking. Deno showed high performance in asynchronous operations, thanks to its use of Rust and the Tokio library. Node.js, despite lower results in synchronous tasks, showed stable performance across all tests and remains a solid choice for large projects.
In the course of the study, recommendations were developed for selecting the appropriate server-side JavaScript execution environment for various tasks.

Keywords: JavaScript, Node.js, Deno, Bun, Performance, Computer experiment, Backend, Web, Server, Docker

This article is automatically translated.

Introduction

JavaScript was announced in 1995. The purpose of the language was to add interactivity to static web pages. Much time has passed since then: JavaScript has become one of the most popular programming languages, and its scope has expanded. The key expansion of the language's reach was the appearance in 2009 of Node.js, a server-side JavaScript execution environment. Node.js made it possible to use one language for both client development and server creation. This led to the active development of JavaScript-based technologies such as server applications, microservices, and APIs. The imperfections of Node.js and the demand for server-side JavaScript execution environments pushed the industry to develop new solutions. In 2018, the creator of Node.js, Ryan Dahl, introduced a new technology, Deno, intended to solve the problems of its predecessor. Following Deno, a new environment, Bun, was announced in 2022. Each of these technologies offers its own approach to security, performance, and ease of development. However, despite the growing number of studies and articles comparing JavaScript runtimes, there has not yet been a comprehensive analysis of all three technologies that takes all key criteria into account. The object of this research is the JavaScript language and the execution of program code in this language. The subject of the study is the performance of JavaScript programs in modern environments.
The purpose of the study is to identify differences in the performance of the main modern execution environments (Node.js, Deno and Bun) for subsequent application of these environments in backend web development. The scientific novelty of this study lies in the fact that, for the first time, a holistic and well-founded methodology for measuring and comparing the performance of JavaScript code in modern execution environments has been proposed, which will allow researchers to build on this approach in further experiments and extend it to new execution environments.

Prospects

The field of software systems on the web platform is developing extremely dynamically and is accompanied by the emergence of new execution environments. So far, these environments have been developed abroad. In line with import substitution, it is planned to develop a system of criteria for evaluating the performance of JavaScript code execution and to apply it in developing domestic solutions in this area. Among other things, further research will focus on the development of Russian cloud infrastructure components that provide secure and effective methods for supporting server solutions based on the JavaScript ecosystem.

Literature analysis

The topic of comparing the performance of JavaScript code execution environments has not been sufficiently studied in the scientific literature. Most works focus on comparing the performance of different programming languages or compare only two execution environments [1, 2]. The official websites of Node.js, Deno, and Bun claim the superiority of their products, but often without detailed experimental data. In [3], Deno and Bun were compared, and it was shown that Bun excels in array processing and recursive computation, while Deno excels in asynchronous tasks.
The article [4] compares Node.js and Bun, noting that Bun promises significantly faster startup, which makes it attractive for serverless functions. The article [5] notes that although Bun is still new and more comprehensive testing is needed, initial tests show promising results. Bun's lightweight coroutine model and optimized garbage collector can improve performance for I/O-bound workloads. However, Bun is relatively new, and its smaller community compared to Node.js may hinder its widespread adoption. In [6], the performance of Node.js and Deno was compared: they show approximately the same level of performance in the tasks of coordinate transformation and detecting straight lines in images, which is explained by their use of the same V8 engine. The book [7] describes the basic requirements for professional benchmarking: repeatability, verifiability and portability, the principle of non-interference, an acceptable level of accuracy, and honesty. The author also considers the risks of benchmarking, such as inaccurate time measurements and compiler optimization. All these requirements and recommendations are taken into account in this study. The article [8] recommends using a box-and-whisker plot (boxplot) to visually represent the distribution of data. This plot shows the distribution of data across quartiles and identifies potential outliers, which is especially useful for comparing distributions between different groups or conditions. The M/G/m queue model studied in [9] is based on the fact that modern web servers have a request queue and a pool of threads for processing incoming requests. In JavaScript execution environments, instead of a thread pool, the Event Loop mechanism ensures asynchronous operation execution. Synchronous operations, however, are performed in the main thread, which exists in a single instance.
If this thread is busy performing resource-intensive calculations, the response time will increase many times over. The article [10] presents an experimental study measuring and comparing the performance of Node.js and Jetty in processing REST requests. In that study, the server performs a number of operations for each request: JSON parsing, object creation, destructuring, processing, data collection, and response generation in JSON format. These operations, an integral part of processing each request, were selected for performance analysis in this work.

Experimental methodology

To ensure reproducibility and portability of the experimental calculations, all operations were performed in the isolated environment of Docker containers. Docker images were created based on standard images available in the Docker Hub repository (Table 1). Alpine images, characterized by minimalism and improved security, were chosen as the base. Table 1 – Base Docker images used
The studies were conducted in cloud computing environments using virtual machines (Table 2). This approach makes it possible to bring the experimental conditions closer to real production conditions. Table 2 – Virtual machine characteristics
The experiment process is shown in Figure 1. Figure 1 – The process of conducting the experiment

The experiment measured the execution time of the following scenarios, each of which was run 1000 times to ensure a sufficient sample size: quicksort of a randomly filled array, JSON processing, object destructuring, Object.assign, a primality check, and promise processing.
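The per-scenario measurement loop described above can be sketched as follows. This is a minimal sketch, not the authors' published harness: the function name `benchmark` and the reported statistics are assumptions, and `performance.now()` is used because it is available in Node.js, Deno, and Bun alike.

```javascript
// Run a scenario repeatedly, recording the wall-clock time of each run,
// then derive the summary statistics reported in the article's tables.
function benchmark(scenario, runs = 1000) {
  const times = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now(); // high-resolution timer, ms
    scenario();
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  const mean = times.reduce((sum, t) => sum + t, 0) / runs;
  return {
    mean,
    min: times[0],
    max: times[runs - 1],
    median: times[Math.floor(runs / 2)],
  };
}
```

Because the same script runs unmodified in all three environments, differences in the reported statistics can be attributed to the runtime rather than the workload.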
Experimental results

Based on the results of the experiment, comparative tables with the main characteristics of the distributions were compiled and plots were drawn. During the experiment, the average load of the dual-core CPU was recorded at 58%. This can be explained as follows: one processor core experienced maximum load from the tasks of the single-threaded engines, while the other core ran background operating system processes. Over the course of the experiment, RAM usage gradually grew from 400 MB to 500 MB; the allocated 2 GB of virtual memory proved more than sufficient. Deno showed the best performance in the quicksort scenario, with the lowest average execution time of 17.3 ms and a minimum of 11.5 ms. Bun took second place with an average of 19.1 ms and a minimum of 13.2 ms. Node.js showed the worst result, with an average of 23.5 ms. The numerical indicators are presented in Table 3, and their visualization is shown in Figure 2. Table 3 – Results of quick array sorting
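The article does not publish the sorting code itself; a plain recursive quicksort over a randomly filled array, of the kind this scenario times, might look like the following sketch (array size and pivot choice are assumptions):

```javascript
// Recursive quicksort: first element as pivot, partition into smaller
// and not-smaller halves, sort each half, and concatenate.
function quicksort(arr) {
  if (arr.length <= 1) return arr;
  const [pivot, ...rest] = arr;
  return [
    ...quicksort(rest.filter((x) => x < pivot)),
    pivot,
    ...quicksort(rest.filter((x) => x >= pivot)),
  ];
}

// Randomly generated input, as in the experiment (size is illustrative).
const data = Array.from({ length: 10000 }, () => Math.random());
const sorted = quicksort(data);
```

Since the pivot quality depends on the random input, run time varies from run to run, which is consistent with the large standard deviation the authors observe for this scenario.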
Figure 2 – Box-and-whisker plot of the quicksort results

The standard deviation, large relative to the other experiments, is explained by the fact that the execution time of the quicksort algorithm depends on how the randomly generated elements are distributed in the array. To keep the scale of the main distribution visible, outliers are not shown in the plots, since their presence makes it difficult to perceive the distribution visually. Bun demonstrated the best performance in JSON processing, with an average execution time of 5.8 ms and a minimum of 5.3 ms. Deno took second place with an average of 13.4 ms, and Node.js showed the worst result with an average of 15.0 ms. The standard deviation was lowest for Bun (0.3 ms), indicating highly stable performance. The maximum time differs most from the median (by 56%) for Node.js, indicating a significant outlier. The numerical indicators are presented in Table 4, and their visualization is shown in Figure 3. Table 4 – JSON processing results
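A JSON-processing workload of the kind timed here, following the request-handling steps borrowed from [10] (parse, process, serialise a response), could be sketched as follows; the payload shape and function name are illustrative, not taken from the article:

```javascript
// Parse a JSON request body, compute over it, and serialise a JSON
// response, mirroring the REST-request operations described in [10].
function processJson(payload) {
  const order = JSON.parse(payload); // parse the incoming body
  const total = order.items.reduce((sum, item) => sum + item.price * item.qty, 0);
  return JSON.stringify({ id: order.id, total }); // build the response
}
```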
Figure 3 – Box-and-whisker plot of the JSON processing results

Bun significantly outperforms Deno and Node.js in the destructuring scenario, with an average time of 3.5 ms and a minimum of 3.1 ms. Deno took second place with an average of 15.5 ms, and Node.js showed the worst result with an average of 17.0 ms. Bun's standard deviation was the lowest (0.2 ms), indicating stable performance. The numerical indicators are presented in Table 5, and their visualization is shown in Figure 4. Table 5 – Destructuring results
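The destructuring scenario presumably exercises object and array destructuring such as the sketch below; the field names and structure are assumptions for illustration only:

```javascript
// Pull named fields and rest-properties out of an object, and split a
// string into first element plus rest via array destructuring.
function destructure(user) {
  const { id, name, ...extras } = user;          // object destructuring
  const [first = '', ...others] = name.split(' '); // array destructuring
  return { id, first, others, extraKeys: Object.keys(extras) };
}
```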
Figure 4 – Box-and-whisker plot of the destructuring results

Bun demonstrated the best performance in the Object.assign scenario, with an average time of 3.3 ms and a minimum of 3.0 ms. Deno showed an average execution time of 16.9 ms, and Node.js 18.3 ms. Bun's standard deviation was the lowest (0.2 ms), indicating stable performance. The numerical indicators are presented in Table 6, and their visualization is shown in Figure 6. Table 6 – Object.assign results
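A typical use of Object.assign of the kind this scenario measures is merging defaults with per-request overrides into a fresh object; the property names below are illustrative:

```javascript
// Merge defaults and overrides into a new object; the empty {} target
// keeps the shared defaults object unmodified.
const defaults = { timeout: 30, retries: 3, secure: true };

function withOverrides(overrides) {
  return Object.assign({}, defaults, overrides);
}
```

Passing a fresh `{}` as the target matters: `Object.assign(defaults, overrides)` would mutate the shared defaults.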
Figure 5 – Box-and-whisker plot of the Object.assign results

Node.js and Deno showed similar results in the primality check, with averages of 16.7 ms and 16.0 ms, respectively. Bun performed significantly worse, with an average execution time of 119.5 ms. Bun's standard deviation was the highest (2.7 ms), indicating less stability. The numerical indicators are presented in Table 7, and their visualization is shown in Figure 6. Table 7 – Results of the primality check
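A plausible form of this workload (the article's exact code is not given) is a trial-division primality test, whose inner loop is dominated by the remainder operator `%`:

```javascript
// Trial-division primality test: the hot loop repeatedly evaluates n % d,
// so runtime differences in the remainder operation dominate this scenario.
function isPrime(n) {
  if (n < 2) return false;
  if (n % 2 === 0) return n === 2;
  for (let d = 3; d * d <= n; d += 2) {
    if (n % d === 0) return false; // found a divisor
  }
  return true;
}
```

A workload concentrated on a single arithmetic operation like this would explain why one engine's handling of that operation can dominate the result.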
Figure 6 – Box-and-whisker plot of the primality check results

Table 8 – Promise processing results
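The promise-processing scenario is not listed in code form in the article; one plausible shape for such a workload is to schedule many microtasks and await them all, as sketched here (the task count and body are assumptions):

```javascript
// Promise-heavy workload: create many already-resolved promises with a
// chained continuation each, then await the whole batch with Promise.all.
async function processPromises(count = 10000) {
  const tasks = Array.from({ length: count }, (_, i) =>
    Promise.resolve(i).then((v) => v * 2)
  );
  const results = await Promise.all(tasks);
  return results.reduce((sum, v) => sum + v, 0);
}
```

A workload like this exercises the event loop and microtask queue rather than raw computation, which is where the runtimes' asynchronous machinery differs.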
Figure 7 – Box-and-whisker plot of the promise processing results

Deno showed the best performance in promise processing, with an average execution time of 173.1 ms and a minimum of 162.8 ms. Node.js came second with an average of 195.5 ms, while Bun performed worst with an average of 230.1 ms.

Conclusion

This study provides a comprehensive analysis of the performance of the JavaScript code execution environments Node.js, Deno, and Bun in various scenarios, including quicksort, JSON processing, destructuring, Object.assign, primality checking, and promise processing. The experiment was performed in the isolated environment of Docker containers, using Ansible to automate the process, which ensured reproducibility and portability of the results. The results showed that Bun demonstrates the best performance in most synchronous calculations, such as quicksort, JSON processing, destructuring, and Object.assign. However, in the primality check scenario, Bun is significantly inferior to Node.js and Deno, which may be due to the specifics of Bun's implementation of the remainder (modulo) operation. Deno, in turn, demonstrated high efficiency in asynchronous operations such as promise processing: it is developed in Rust and uses the Tokio library for asynchronous tasks, which contributes to its optimized asynchronous architecture. Node.js, although it does not win in most synchronous scenarios, provides stable performance in all tests and remains a reliable choice for large projects where time-tested stability and the availability of specialized libraries are important. Moreover, its widespread adoption and the greater number of developers familiar with Node.js make it preferable where broad compatibility and community support are required.
Bun is recommended for tasks requiring high startup speed and efficiency in synchronous computing, especially in serverless functions. However, when choosing Bun for tasks involving asynchronous operations or specific calculations, additional performance studies should be conducted, as in some scenarios it may be less efficient than Node.js or Deno. Deno, due to its high performance in asynchronous operations and its emphasis on security, is the preferred choice for tasks where a balance between security and performance is important. In addition, Deno proved faster than Node.js in all scenarios. The similar performance of Node.js and Deno may be explained by the fact that both execution environments use the same V8 engine, which confirms the importance of taking the engine into account when evaluating the performance of JavaScript environments. Table 9 shows the relative time characteristics of the median results, where the minimum time is taken as 100%, for convenient comparison. Table 9 – Relative results for medians
In general, the choice of a JavaScript runtime environment should be based on the specific requirements of the project, such as the type of computation (synchronous or asynchronous), security needs, and stability. The results of this study provide developers with sound recommendations for choosing the most appropriate environment depending on the task.

References
1. Martinsen, J., & Grahn, H. (2011). A methodology for evaluating JavaScript execution behavior in interactive web applications. In Proceedings of the 9th IEEE/ACS International Conference on Computer Systems and Applications (pp. 241-248). Retrieved from https://doi.org/10.1109/AICCSA.2011.6126611
2. Rodygina, I. V., & Nalivaiko, A. V. (2021). Comparative analysis of technologies for developing the server side of the sales management system. Proceedings of Southern Federal University. Technical Sciences, 4(221), 256-266. Retrieved from https://doi.org/10.18522/2311-3103-2021-4-256-266
3. Tikhonov, D. S., Cherenkov, A. V., & Dolgov, A. V. (2023). Comparative analysis of server development technologies on Deno and Bun platforms. Scientific and Technical Innovations and Web Technologies, 2(1), 55-59.
4. Kniazev, I., & Fitiskin, A. (2023). Choosing the right JavaScript runtime: An in-depth comparison of Node.js and Bun. Norwegian Journal of Development of the International Science, 9(102), 72-84. Retrieved from https://doi.org/10.5281/zenodo.7945166
5. Koper, D., & Woda, M. (2022). Performance analysis and comparison of acceleration methods in JavaScript environments based on simplified standard Hough Transform algorithm. In J. Kacprzyk (Ed.), Advances in Intelligent Systems and Computing (Vol. 1394, pp. 131-142). Springer. Retrieved from https://doi.org/10.1007/978-3-031-06746-4_13
6. Akinshin, A. (2022). Professional benchmark: The art of performance measurement. Saint Petersburg, Russia: Piter Publishers.
7. Salnikova, K. V. (2021). Analysis of data arrays using the boxplot visualization tool. Universum: Economics and Law, 6(81), 11-17. Retrieved from https://doi.org/10.32743/unilaw.2021.81.6.11778
8. Gregg, B. (2019). BPF performance tools. Boston, MA: Addison-Wesley Professional.
9. Lu, J., & Gokhale, S. S. (2008). Performance analysis of a web server. International Journal of Information Technology and Web Engineering, 3(3), 50-65. Retrieved from https://doi.org/10.4018/jitwe.2008070104
10. Lundar, J. A., Grønli, T. M., & Ghinea, G. (2013). Performance evaluation of a modern web architecture. International Journal of Information Technology and Web Engineering, 8(1), 36-50. Retrieved from https://doi.org/10.4018/jitwe.2013010103
11. Smirnov, A. A., Cherenkov, A. V., & Podolskii, E. A. (2024). Comparative analysis of JavaScript runtime environments Node.js, Deno and Bun for server-side web application development. International Journal of Information Technologies and Energy Efficiency, 9(4), 100-106.
12. Lei, K., Ma, Y., & Tan, Z. (2014). Performance comparison and evaluation of web development technologies in PHP, Python, and Node.js. In Proceedings of the 17th IEEE International Conference on Computational Science and Engineering (pp. 661-668). Retrieved from https://doi.org/10.1109/CSE.2014.142
13. Suvorov, D. A. (2009). Measurement of current processor performance and program quality: Methods for evaluating and improving real performance. Open Education, 6(2), 59-65.
14. Pentkovskii, V. M., Volkonskii, V. Y., Yermolitskii, A. V., & Olenev, A. N. (2012). Methodology for evaluating computing systems performance. Current Problems of Modern Science, 6(4), 358-363.
15. Zhuikov, R., & Sharygin, E. (2015). Methods of preliminary optimization for JavaScript programs. Proceedings of Institute for System Programming of RAS, 27(6), 67-86.
16. Basumatary, B., & Agnihotri, N. (2022). Benefits and challenges of using Node.js. International Journal of Innovative Research in Computer Science & Technology, 10(3), 67-70.