Reference:
Karpovich, V.D., & Gosudarev, I.B. (2025). WebAssembly performance in the Node.js environment. Software Systems and Computational Methods, 2, 12–34. https://doi.org/10.7256/2454-0714.2025.2.74049
WebAssembly performance in the Node.js environment
DOI: 10.7256/2454-0714.2025.2.74049
EDN: QOVQLO
Received: 10-04-2025
Published: 18-04-2025

Abstract: Modern runtime environments such as browsers and Node.js provide developers with tools that go beyond traditional JavaScript. This study focuses on a modern approach to building web applications in which components written in different programming languages can be executed and shared using WebAssembly. The subject of the research is the testing and analysis of performance benchmarks comparing JavaScript and WebAssembly modules in the Node.js runtime. The focus is on evaluating performance in computational tasks, memory interaction, data processing, and cross-language communication. The author thoroughly explores topics such as WebAssembly integration in applications, its advantages for resource-intensive tasks like image processing, and the objectivity, representativeness, and reproducibility of the tests. The work follows an applied, experimental approach and includes performance comparisons between pure JavaScript and WebAssembly modules; metrics such as response time and system resource consumption were used to assess efficiency. The scientific novelty of this work lies in the development and theoretical grounding of testing approaches for web applications using WebAssembly. Unlike most existing studies, which focus on WebAssembly's performance and security in browser environments, this work emphasizes automated testing of WebAssembly modules outside the browser, a relatively unexplored area until now. A methodological approach is proposed for testing WebAssembly modules in Node.js, including principles for test structuring, integration with JavaScript components, and execution analysis. This approach takes into account the specifics of the server environment, where WebAssembly is increasingly used, particularly for high-load computational modules, cross-language logic, and secure isolated execution. The novelty also lies in defining criteria to evaluate whether certain application components are suitable for migration to WebAssembly in terms of testability, providing developers with a tool for making architectural decisions. The proposed ideas are backed by experimental results, including test case implementations for WebAssembly and JavaScript interaction scenarios.

Keywords: WebAssembly, JavaScript, performance optimization, heavy processing tasks, image processing, technology integration, server-side processing, experimental analysis, algorithmic optimization, compilation and interpretation

INTRODUCTION

In modern conditions of web technology development, the performance requirements of web applications are constantly increasing. The growth in the amount of data processed and the complexity of the algorithms executed makes it urgent to find effective solutions in the field of code optimization. JavaScript, as one of the main programming languages for developing web applications, provides developers with powerful tools and capabilities; however, due to its interpreted nature, it is limited in performance compared to compiled languages. With the advent of WebAssembly, new prospects have emerged for improving the performance of web applications. WebAssembly is a low-level binary format that allows code written in programming languages such as C, C++, Rust, and others to be run directly in the JavaScript runtime environment.
This technology gives developers the opportunity to write high-performance code that takes advantage of compilation and thereby avoids JavaScript's shortcomings in tasks that require high computing power. The relevance of studying WebAssembly is due to its growing popularity and adoption in a wide range of applications, from games to video processing and graphics. However, the technology has not yet been sufficiently researched, and there is a gap in the literature regarding WebAssembly testing and its performance compared to traditional approaches. The purpose of this paper is to analyze the possibility of integrating WebAssembly into existing JavaScript applications and to assess its impact on performance.
STAGES

The following stages were identified for performance testing of JavaScript and WebAssembly in the Node.js environment:
‒ Selection of a test application: image processing was determined to be a suitable task for performance evaluation, as it is a computationally intensive operation that requires relatively large resources. In particular, the task of applying a filter via a convolution algorithm was chosen;
‒ Development of a test application: a simple web server was implemented that includes image-processing functionality in both JavaScript and WebAssembly. The WebAssembly image-processing function was implemented in C and compiled into a Wasm file using the Emscripten toolchain;
‒ Setting up the test environment: to create a stable test environment and minimize the impact of external factors, the Docker containerization tool was used; the tools for collecting, storing, and displaying metrics, the web server with the tested functionality, and the load-testing software were run in containers;
‒ Conducting tests: in the test environment described above, tests were run using various images, with requests sent to the server sequentially; during testing, metrics on the operation of the application and the state of the system were collected;
‒ Analysis of the results: after testing was completed, the collected data was analyzed, and conclusions were drawn about the cases in which the use of WebAssembly may be appropriate.
TEST APPLICATION

The research results cannot be objective and reflect reality if the experiments are conducted on an application that is not suitable for the simulated situation. Within the framework of this study, it is assumed that WebAssembly will be used as a complement to JavaScript as the main programming language; WebAssembly will be applied pointwise, where the use of JavaScript is impractical, irrational, or inefficient. It is thus assumed that this tool will expand the range of situations in which it is advisable to use Node.js as the main development platform [1]. It follows that the application being experimented on must contain a section suitable for wrapping in the Wasm format. In cases where the functionality can be implemented effectively using native Node.js tools, we will assume that the use of third-party tools is not justified. Image processing was chosen as the main functionality to be optimized using WebAssembly. Applying a complex filter is a heavily loaded operation involving a large amount of data that is regularly used in real-world applications; such a subject area allows a fairly objective study of the impact of WebAssembly on application performance and optimization [2]. Architecturally, the application is an HTTP server with a public API that allows images to be uploaded, downloaded, and processed. The technology stack includes the Node.js platform, the Express routing library, and TypeScript. The image-processing routes are provided in three versions: an algorithm implemented in JS, and one implemented in C and compiled to Wasm code in single-threaded and multi-threaded modes. The WebAssembly module is instantiated once, after which the exported processing function is called. The source code of the application is published in a GitHub repository and is available at https://github.com/Winter4/image-processor-backend [3].
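To make the interaction between the server and the module concrete, the following is a minimal sketch of how an Emscripten-compiled module can be instantiated once and reused for every request in Node.js. The file name, exported function name, and signature are illustrative assumptions rather than the exact code from the repository, and the sketch presumes the module was built with MODULARIZE=1 and with ccall, malloc, and free exported.

// Minimal illustrative sketch (not the repository code): load the Emscripten
// glue once at startup and reuse the exported C function for each request.
// Assumes a build with -sMODULARIZE=1 and exported ccall/_malloc/_free.
const createModule = require('./convolution.js'); // hypothetical file name

let wasm; // initialized once, shared by all request handlers

async function initWasm() {
  wasm = await createModule();
}

// pixels: Uint8Array of image data, kernel: Float32Array of kernel weights
function applyFilter(pixels, kernel, width, height, kernelSize) {
  // Copy the input into the module's linear memory
  const pixelPtr = wasm._malloc(pixels.length);
  const kernelPtr = wasm._malloc(kernel.length * 4);
  wasm.HEAPU8.set(pixels, pixelPtr);
  wasm.HEAPF32.set(kernel, kernelPtr / 4);

  // Call the exported C function (hypothetical name and signature)
  wasm.ccall('convolve', null,
    ['number', 'number', 'number', 'number', 'number'],
    [pixelPtr, kernelPtr, width, height, kernelSize]);

  // Copy the result back out and release the buffers
  const result = wasm.HEAPU8.slice(pixelPtr, pixelPtr + pixels.length);
  wasm._free(pixelPtr);
  wasm._free(kernelPtr);
  return result;
}

module.exports = { initWasm, applyFilter };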
IMAGE PROCESSING ALGORITHM

Image processing is performed using an algorithm based on convolution. Convolution is a mathematical operation built around the concept of a kernel (also called a filter or mask). The kernel is a square matrix with a necessarily odd side length, containing numbers that determine the pixel weights [4]. The convolution is performed as follows: the center of the kernel is aligned with the current pixel, each element of the kernel is multiplied by the pixel it covers, and the results are summed. The resulting sum replaces the current (central) pixel. The kernel is then shifted to the next pixel, and the process repeats. As a result, a new image is formed in which each pixel is the result of a convolution. All calculations must be performed on the original image, which does not include the already convolved pixels. The convolution formula for an image with one channel per pixel is:

I'(x, y) = \sum_{i=-k}^{k} \sum_{j=-k}^{k} I(x + i, y + j) \cdot M(i, j),

where x, y are the coordinates of the current pixel; I'(x, y) is the pixel value of the resulting image at coordinates (x, y); i, j are the offsets along the x and y axes, respectively, relative to the coordinates of the current pixel; k is half the side of the matrix rounded down to an integer; I(x + i, y + j) is the pixel value of the source image at coordinates (x + i, y + j); M(i, j) is the value of the kernel element at coordinates (k + i, k + j). When processing the edges of an image, where the kernel extends beyond the boundaries, various methods are used: zero padding, reflection of the corresponding pixels from the other side of the kernel, repetition of the nearest pixel, or cyclic repetition, in which the edges "close" on each other as if the image were tiled. If a pixel is a collection of numbers with the same range of values, for example the 3 color channels of the RGB system, the convolution is applied to each channel separately. That is, to calculate the new value of the R channel of the current pixel, the calculations are performed with the R channels of the surrounding pixels, then with the G and B channels, respectively. If a pixel is a collection of numbers with different ranges of values, for example RGBL, where L is the brightness from 0 to 9, there are two options: normalization of the values, calculation, and subsequent denormalization, or using a separate kernel to calculate the new brightness value. Applying a single kernel to non-normalized values is highly likely to lead to deviations from the expected result, but it is theoretically acceptable. Convolution is a universal tool underlying image processing and even more complex systems such as convolutional neural networks used in object recognition and other areas of data analysis.
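To illustrate the formula, the following is a minimal single-channel convolution in JavaScript with zero padding at the edges; it is an illustrative sketch rather than the exact implementation from the article's repository.

// Illustrative single-channel convolution with zero padding (not the repository code).
// src: flat array of width*height pixel values; kernel: flat kernelSize*kernelSize matrix.
function convolve(src, width, height, kernel, kernelSize) {
  const k = Math.floor(kernelSize / 2); // half the kernel side, rounded down
  const dst = new Uint8ClampedArray(width * height);

  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let sum = 0;
      for (let ky = -k; ky <= k; ky++) {
        for (let kx = -k; kx <= k; kx++) {
          const yy = y + ky;
          const xx = x + kx;
          // Zero padding: pixels outside the image contribute nothing
          if (yy < 0 || yy >= height || xx < 0 || xx >= width) continue;
          // Always read from the original image, never from already convolved pixels
          sum += src[yy * width + xx] * kernel[(ky + k) * kernelSize + (kx + k)];
        }
      }
      dst[y * width + x] = sum; // Uint8ClampedArray clamps the result to 0..255
    }
  }
  return dst;
}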
TEST ENVIRONMENT

The test environment consists of an application server, software for collecting metrics, software for storing and analyzing metrics, software for visualizing metrics, and software for load testing. cAdvisor, an open-source tool developed by Google for monitoring and analyzing container performance, was used to collect metrics. cAdvisor automatically detects containers running on the host, collects statistics on their performance, and provides data in real time. Although the tool is designed mainly for Docker, it can be used with any containerization platform that supports cgroups [5]. Prometheus was selected as the metrics store. It is a popular open-source monitoring and alerting system designed to collect, store, and analyze metrics from various sources. It was originally created at SoundCloud and has since become the de facto industry standard. Prometheus operates on a pull model, that is, it polls target data sources over HTTP; its metrics format has also become a standard. Prometheus uses PromQL as its query language. Metrics are stored in its own time-series database, where the data is organized into time series with labels in key-value format. For alerts, Prometheus integrates with other software, such as Alertmanager, which allows flexible notification rules to be configured and sent via Slack, email, and other channels [6]. Grafana, an open-source platform for visualizing and working with data, is responsible for metric visualization. It can connect to various data sources such as Prometheus, InfluxDB, Elasticsearch, MySQL, PostgreSQL, and many others to create customizable dashboards. Grafana supports a wide range of graphical components, including graphs, histograms, heat maps, and tables, making it a versatile tool for data analysis. It is possible to set flexible filters, apply templates, and use variables to simplify working with dashboards in complex systems. Notifications are supported: thresholds can be set for metrics, and alerts received when they are exceeded. Grafana is widely used in DevOps to monitor infrastructure, servers, applications, and databases, as well as in business to build analytical reports. Due to its modularity, scalability, and integration with various systems, Grafana is a key tool for data visualization in modern development [7]. Load testing is performed using k6, an open-source tool for load and performance testing developed by the creators of Grafana. k6 is written in Go and provides a convenient JavaScript syntax for configuring tests. Thanks to integration with CI/CD systems such as Jenkins, GitLab CI, and GitHub Actions, k6 can easily be embedded into automation processes. It also supports remote write, i.e., real-time data export. k6 is characterized by flexibility, ease of use, high performance, and the ability to work with large volumes of requests [8]. To minimize the influence of third-party factors on the test results, all elements must be isolated. Containerization technology is well suited for this. The Docker container engine was chosen because of its popularity and prevalence (it is, in fact, an industry standard). To containerize the image-processor application, an image build script, a Dockerfile, must be written:
Figure 1 – Dockerfile for building an application image
The Dockerfile code is presented in text form in Appendix A. A docker-compose file was written for consistent deployment of the infrastructure and test environment [9]; the full file is provided in Appendix B. To increase the objectivity of the study, the application container is limited to 1 CPU core and 2 GB of RAM.
TESTING

We test the application in the test environment described above. Since the bottleneck of the application is the image-processing functionality, and the purpose of the study is to evaluate the performance of precisely this functionality, testing is carried out by sending requests sequentially in a single stream, one after another. This avoids the overhead associated with processing parallel requests, makes the performance assessment as accurate as possible, and makes it possible to track system stability failures such as memory leaks [10]. Testing is conducted in two modes: with a small image and with a large image. The small image is a .jpg file, 314 KB in size, with a resolution of 1080x1050 pixels; the total number of pixels is 1,764,000. The large image is a .jpg file, 10.3 MB in size, with a resolution of 11384x4221 pixels; the total number of pixels is 48,051,864. The test with the small image consists of sending 5,000 requests sequentially; the test with the large image consists of sending 150 requests sequentially. Some of the test files were taken from examplefile.com [11]. The standard deviation of the Gaussian filter (sigma) is 7, and the kernel size is 7x7 [12].
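For reference, a minimal k6 script illustrating this sequential, single-VU load pattern might look as follows; the endpoint path, file name, and form field are assumptions for illustration rather than the exact script from the repository.

// Illustrative k6 script: one virtual user sends requests strictly one after another.
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 1,           // a single stream of requests
  iterations: 5000, // 150 for the large-image scenario
};

// open() runs in the init context; 'b' loads the file as binary
const image = open('./small.jpg', 'b');

export default function () {
  // Hypothetical endpoint; the real route names are defined in the repository
  const res = http.post('http://backend:5001/process/wasm', {
    file: http.file(image, 'small.jpg', 'image/jpeg'),
  });
  check(res, { 'status is 200': (r) => r.status === 200 });
}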
Single-threaded mode

The test results with a small image and the JS implementation are shown in Figure 2:
Figure 2 – Test results with a small image on the JS implementation
The system's performance during the test with a small image and the JS implementation is shown in Figure 3:
Figure 3 – System performance during a test with a small image on a JS implementation
As can be seen, 5,000 requests were processed in 35 minutes and 55 seconds; the average response time was 431 ms, which corresponds to an average RPS of 2.32. The average RAM usage was 40 MB. CPU usage was at the limit for the entire duration of the test.
The test results with a small image and the Wasm implementation are shown in Figure 4:
Figure 4 – Test results with a small image on the Wasm implementation
The system's performance during the test with a small image and the Wasm implementation is shown in Figure 5:
Figure 5 – System performance during the test with a small image on the Wasm implementation
5,000 requests were processed in 48 minutes and 25 seconds; the average response time was 581 ms, which corresponds to an average RPS of 1.72. The average RAM usage was 43.7 MB. CPU usage was at the limit for the entire duration of the test.
The test results with a large image and the JS implementation are shown in Figure 6:
Figure 6 – Test results with a large image on the JS implementation
The system's performance during the test with a large image and the JS implementation is shown in Figure 7:
Figure 7 – System performance during a large-image test on a JS implementation
150 requests were processed in 26 minutes and 21 seconds; the average response time was 10.47 seconds, which corresponds to an average RPS of 0.09. The average RAM usage was 409.4 MB. CPU usage was at the limit for the entire duration of the test.
The test results with a large image and the Wasm implementation are shown in Figure 8:
Figure 8 – Test results with a large image on the Wasm implementation
The system's performance during the test with a large image and the Wasm implementation is shown in Figure 9:
Figure 9 – System performance during the test with a large image on the Wasm implementation
150 requests were processed in 36 minutes and 39 seconds; the average response time was 14.56 seconds, which corresponds to an average RPS of 0.06. The average RAM usage was 539.3 MB. CPU usage was at the limit for the entire duration of the test.
Multithreaded mode

JavaScript lacks native multithreading. The cluster package from the Node.js standard library can replicate an application, but each individual replica is still single-threaded, so the specific (per-replica) performance remains unchanged. Emscripten (the WebAssembly compiler), in turn, gained native multithreading support in 2019 in version 1.38.27. To implement it, emcc uses the Web Workers API [13] and SharedArrayBuffer [14]. At the same time, multithreading is encapsulated at the level of the .wasm file and the compiler settings. That is, when such an application is replicated, multithreading is used in each replica, which potentially increases specific performance. To study the effectiveness of multithreading in emcc, the C source code was modified to use the worker-based API, and the appropriate flags were added to the compiler settings. The module was compiled with a pool of 4 threads. The implementation of the convolution algorithm itself did not change: neither the amount of computation nor its complexity. The source code of this version of the application is available in the same repository. The tests are conducted with different core limits for the application container: 1 core, 2 cores, and 4 cores. The RAM limit remains unchanged at 2 GB.
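As an illustration, a pthread-enabled build and its loading in Node.js might look as follows; the flags shown are standard Emscripten options for threading and modularization, while the file name and the exact wiring of the pool size are assumptions rather than the build configuration used in the repository.

// Assumed build command (standard Emscripten pthread flags; file names are illustrative):
//   emcc convolution.c -O3 -pthread -sPTHREAD_POOL_SIZE=4 -sMODULARIZE=1 -o convolution.js
//
// With -pthread, the generated glue code spawns Node.js worker threads backed by a
// SharedArrayBuffer, so the module is loaded exactly as in the single-threaded case
// and the JavaScript call site of the exported convolution function does not change.
const createModule = require('./convolution.js');

let wasm;

async function initWasm() {
  // The worker pool is created during instantiation and reused for every request
  wasm = await createModule();
}

module.exports = { initWasm };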
The test results with a small image and a limit of 1 CPU core are shown in Figure 10:
Figure 10 – Test results with a small image and a limit of 1 processor core
The system's performance during the test with a small image and a limit of 1 CPU core is shown in Figure 11:
Figure 11 – System performance during a test with a small image and a limit of 1 processor core
5,000 requests were processed in 50 minutes and 14 seconds; the average response time was 602 ms, which corresponds to an average RPS of 1.65. The average RAM usage was 67.2 MB. CPU usage was at the limit for the entire duration of the test.
The test results with a large image and a limit of 1 CPU core are shown in Figure 12:
Figure 12 – Test results with a large image and a limit of 1 processor core
The system's performance during the test with a large image and a limit of 1 CPU core is shown in Figure 13:
Figure 13 – System performance during a test with a large image and a limit of 1 processor core
150 requests were processed in 37 minutes and 37 seconds; the average response time was 14.94 seconds, which corresponds to an average RPS of 0.06. The average RAM usage was 547.2 MB. CPU usage was at the limit for the entire duration of the test.
The test results with a small image and a limit of 2 CPU cores are shown in Figure 14:
Figure 14 – Test results with a small image and a limit of 2 processor cores
The system's performance during the test with a small image and a limit of 2 CPU cores is shown in Figure 15:
Figure 15 – System performance during a test with a small image and a limit of 2 processor cores
5,000 requests were processed in 24 minutes and 55 seconds; the average response time was 299 ms, which corresponds to an average RPS of 3.34. The average RAM usage was 67.1 MB. CPU usage was at the limit for the entire duration of the test.
The test results with a large image and a limit of 2 CPU cores are shown in Figure 16:
Figure 16 – Test results with a large image and a limit of 2 processor cores
The system's performance during the test with a large image and a limit of 2 CPU cores is shown in Figure 17:
Figure 17 – System performance during a test with a large image and a limit of 2 processor cores
150 requests were processed in 19 minutes and 17 seconds; the average response time was 7.65 seconds, which corresponds to an average RPS of 0.13. The average RAM usage was 541.6 MB. CPU usage was at the limit for the entire duration of the test.
The test results with a small image and a limit of 4 CPU cores are shown in Figure 18:
Figure 18 – Test results with a small image and a limit of 4 processor cores
The system's performance during the test with a small image and a limit of 4 CPU cores is shown in Figure 19:
Figure 19 – System performance during a test with a small image and a limit of 4 processor cores
5,000 requests were processed in 14 minutes and 53 seconds; the average response time was 178 ms, which corresponds to an average RPS of 5.6. The average RAM usage was 69.7 MB. CPU usage stayed close to 350% throughout the test, which is equivalent to a full load of 3.5 CPU cores.
The test results with a large image and a limit of 4 processor cores are shown in Figure 20:
Figure 20 – Test results with a large image and a limit of 4 processor cores
The system's performance during the test with a large image and a limit of 4 processor cores is shown in Figure 21:
Figure 21 – System performance during a test with a large image and a limit of 4 processor cores
150 requests were processed in 10 minutes and 44 seconds; the average response time was 4.26 seconds, which corresponds to an average RPS of 0.23. The average RAM usage was 521.8 MB. CPU usage stayed close to 350% throughout the test, which is equivalent to a full load of 3.5 CPU cores.
ANALYSIS

Based on the tests performed, it can be argued that in single-threaded mode the overhead of using the Wasm module does not pay off either with a small image (for which such a result was expected) or with a large one (for which a decrease in response time could have been predicted). The response times for the JS and Wasm solutions are 431 ms versus 581 ms and 10.47 seconds versus 14.56 seconds, respectively; the figures increase by 35% and 39%. A likely reason is the slower memory interaction in WebAssembly, since the specificity of the task is working with relatively large amounts of data. In multithreaded mode the situation changes. The algorithm implemented in Wasm in multithreaded mode, but with a limit of 1 core per container, shows a result close to the single-threaded one: 602 ms versus 581 ms and 14.94 s versus 14.56 s; the increase in response time is ~40% relative to the JS implementation and ~3% relative to the single-threaded Wasm implementation. The increase relative to the single-threaded Wasm implementation can be explained by the overhead of starting the workers. However, as soon as the core limit increases, the results improve in proportion to the added resources: with a limit of 2 cores, the Wasm solution shows 299 ms on the small image and 7.65 seconds on the large image, a reduction in response time of 30% and 27% relative to the JS implementation. With a limit of 4 cores, the Wasm solution begins to use the full potential of its four threads and shows 178 ms and 4.26 s for the small and large images, respectively, which corresponds to a reduction in response time of ~59%. It can also be noted that the Wasm solution consistently consumes more RAM: from 30% to 65% more, depending on the configuration. However, in absolute values the difference is not so significant: 40 MB versus 67 MB and 409 MB versus 547 MB. Given the configurations of modern computing devices, these values can be considered comparable. For convenience, the results of the analysis are summarized in Table 1:
Table 1 – Comparison of results
Final results: in single-threaded mode, WebAssembly shows a deterioration of 38% on average. In multithreaded mode, WebAssembly shows roughly the same result or better, depending on the number of dedicated processor cores: with one core the result is worse by 39% and 42%; with two cores it improves by 28.5% on average; with four cores it improves by 59%. Based on this, we can conclude that JavaScript is the preferred option in single-threaded mode, as it handles the tasks faster. WebAssembly reveals its potential only with multithreading, where an increase in the number of cores leads to significant performance improvements. The use of WebAssembly is justified for computationally intensive tasks and the processing of large images, especially when sufficient computing resources are available.
CONCLUSION

During the research, performance testing of JavaScript and WebAssembly was successfully conducted. An application suitable for testing was developed; a test environment was prepared and configured; performance tests were conducted; metrics were collected on the state of the application and on the CPU and RAM usage of the system during testing; the collected data was analyzed; conclusions were drawn about the feasibility and effectiveness of the various solutions; recommendations on the use of JavaScript and WebAssembly were formulated; and all results were documented and systematized. The outcome of the study is data on the performance of WebAssembly in the Node.js environment, conclusions based on this data, and recommendations for using the technology in a production application. The results of this study can also serve as a basis for further research in several directions. In particular, it is advisable to study in more detail the behavior of WebAssembly modules in various runtime environments, including server, mobile, and embedded platforms, with an emphasis on stability, security, and compatibility in automated testing. This would allow recommendations to be developed for adapting existing approaches to a wider range of tasks, as well as identifying the limitations and potential risks associated with using WebAssembly outside the browser. Of considerable interest is also the development of specialized tools and frameworks that integrate WebAssembly testing into modern CI/CD processes. This area includes not only automating test runs, but also expanding the tooling for analyzing code coverage, tracking performance, detecting memory leaks, and other low-level metrics. It is also promising to explore isolation and sandboxing methods for testing unsafe or untrusted code, which is especially important in the context of microservice architectures and serverless computing. Together, this forms the basis for a systematic approach to quality assurance for applications that use WebAssembly as a full-fledged computing component.
APPENDIX A – Dockerfile for building the application image
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci
RUN npm run tsc
ENV NODE_ENV=production
CMD ["node", "./compiled/source/api.js"]
APPENDIX B – docker-compose file for environment deployment
services:
  app:
    build: .
    image: image-processor-backend:latest
    container_name: backend
    ports:
      - "5001:5001"
    healthcheck:
      test: wget -nv -t1 --spider http://localhost:5001/ping/
      interval: 5s
      timeout: 5s
      retries: 5
    restart: always
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 2048M
        reservations:
          cpus: '1'
          memory: 1024M

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    ports:
      - "9100:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro

  prometheus:
    image: prom/prometheus
    container_name: prometheus
    volumes:
      - ./benchmark/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
    command: --web.enable-remote-write-receiver --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/prometheus
    ports:
      - "9090:9090"
    depends_on:
      - app
      - cadvisor

  grafana:
    image: grafana/grafana
    container_name: grafana
    volumes:
      - ./benchmark/grafana/datasources:/etc/grafana/provisioning/datasources
      - ./benchmark/grafana/dashboards:/etc/grafana/provisioning/dashboards
      - ./benchmark/grafana/dashboards-json:/var/lib/grafana/dashboards
      - grafana-data:/var/lib/grafana
    ports:
      - "3001:3000"
    depends_on:
      - prometheus

  k6:
    image: grafana/k6
    container_name: k6
    environment:
      - K6_PROMETHEUS_RW_SERVER_URL=http://prometheus:9090/api/v1/write
      - K6_PROMETHEUS_RW_TREND_STATS=avg
    volumes:
      - ./benchmark/k6:/k6
    command: run --out experimental-prometheus-rw /k6/k6.js # image entrypoint is 'k6'
    depends_on:
      app:
        condition: service_healthy
        required: true

volumes:
  grafana-data:
    driver: local
    name: image-processor-benchmark-grafana
  prometheus-data:
    driver: local
    name: image-processor-benchmark-prometheus

References
1. Fredriksson, S. (2020, July 25). WebAssembly vs. its predecessors: A comparison of technologies. Diva Portal. https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1460603&dswid=-4940
2. Alevärn, M. (2021, July 27). Server-side image processing in native code compared to client-side image processing in WebAssembly. Diva Portal. https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1587964&dswid=-8296
3. GitHub repository with project source code. https://github.com/Winter4/image-processor-backend
4. Smelyakov, K. (2019, September 8). Efficiency of image convolution. IEEE. https://doi.org/10.1109/CAOL46282.2019.9019450
5. Shanmugam, A. S. (2017, September 13). Docker container reactive scalability and prediction of CPU utilization based on proactive modelling. National College of Ireland. https://norma.ncirl.ie/2884/1/aravindsamyshanmugam.pdf
6. Xu, X., & Xu, A. (2020, September 27). Research on security issues of Docker and container monitoring system in edge computing system. IOPscience. https://doi.org/10.1088/1742-6596/1673/1/012067
7. Holopainen, M. (2021, May 3). Monitoring container environment with Prometheus and Grafana. Theseus. https://www.theseus.fi/bitstream/handle/10024/497467/Holopainen_Matti.pdf
8. Sepide, B. (2024, July 9). Automated performance testing in ephemeral environments. University of Padua. https://thesis.unipd.it/handle/20.500.12608/66511
9. Reis, D. (2021, December 22). Developing Docker and Docker-Compose specifications: A developers' survey. IEEE. https://doi.org/10.1109/ACCESS.2021.3137671
10. Ehsan, A. (2022, April 26). RESTful API testing methodologies: Rationale, challenges, and solution directions. MDPI. https://doi.org/10.3390/app12094369
11. Web resource, collection of various test files. https://examplefile.com
12. van der Wilk, M. (2017). Convolutional Gaussian processes. NeurIPS Proceedings. https://proceedings.neurips.cc/paper/2017/hash/1c54985e4f95b7819ca0357c0cb9a09f-Abstract.html
13. Hellberg, L. (2022, July 6). Performance evaluation of Web Workers API and OpenMP. Diva Portal. https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1681349&dswid=-9566
14. Lassen, P. (2021, June 29). WebAssembly backends for Futhark. Futhark. https://futhark-lang.org/student-projects/philip-msc-thesis.pdf