
Software systems and computational methods

Automating third-party library migrations

Zorchenkov Alexey Mikhailovich

Leading Engineer, Huawei Tech Company LLC

141002, Russia, Moscow Region, Mytishchi, Beloborodova St., 15, apt. 81

zorchenkov@gmail.com

DOI: 10.7256/2454-0714.2022.1.34337

Received: 16-11-2020

Published: 03-04-2022


Abstract: Manual migration between third-party libraries is a problem for software developers. Developers usually need to study the application programming interfaces (APIs) of both libraries and read their documentation to find suitable mappings between the replaced and replacement methods. In this article, I present a new machine-learning approach (MIG) that recommends mappings between the methods of two library APIs. The model learns from data on manually identified, previously completed migrations and extracts a set of features related to method-signature similarity and textual documentation. I evaluated the model on 8 popular migrations compiled from 57,447 open-source Java projects. The results show that the model can recommend appropriate library API mappings with an average accuracy of 87%. This study thus examines the problem of recommending method mappings when migrating between third-party libraries. The approach recommends mappings between the methods of two previously unseen libraries using features extracted from the lexical similarity of method names and the textual similarity of method documentation. I evaluated the result by checking how this approach and the three most commonly used alternatives recommend method mappings for migrations of 8 popular libraries, and showed that the proposed approach achieves much better accuracy and performance. Qualitative and quantitative analysis of the results shows an accuracy improvement of 39.52% over other well-known approaches.


Keywords:

library migrations, machine learning, natural language processing, term frequency, support vector machine, inverse document frequency, feature engineering, library documentation, information extraction, vector space model


Accepted abbreviations

| Abbreviation | Full name | Description |
|---|---|---|
| NLP | Natural Language Processing | Computational processing and analysis of natural-language text. |
| TF-IDF | Term Frequency – Inverse Document Frequency | A statistical measure used to assess the importance of a word in a document that is part of a collection or corpus. The weight of a word is proportional to its frequency in the document and inversely proportional to its frequency across all documents of the collection. |
| SVM | Support Vector Machine | A supervised machine-learning method used for classification. |
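As a minimal illustration of the TF-IDF measure defined above, the following sketch computes the weight of a term directly from the definition. The token lists and corpus are invented for the example; real implementations typically add smoothing for terms absent from the corpus.

```python
import math

def tf_idf(term, doc, corpus):
    """Weight of `term` in `doc`: term frequency times inverse document
    frequency. Assumes `term` occurs in at least one corpus document."""
    tf = doc.count(term) / len(doc)  # relative frequency in the document
    n_docs_with_term = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / n_docs_with_term)  # rarer term -> higher weight
    return tf * idf

docs = [
    ["parse", "json", "string"],
    ["format", "json", "output"],
    ["parse", "xml", "node"],
]
# "json" occurs in 2 of 3 documents, so its IDF (and weight) is lower
# than that of "string", which occurs in only one document.
print(tf_idf("json", docs[0], docs))
print(tf_idf("string", docs[0], docs))
```

This captures the intuition used throughout the article: words shared by nearly all documentation (e.g. "returns", "object") carry little weight, while distinctive words dominate the similarity computation.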

 

1      Introduction

Modern software systems rely heavily on third-party libraries to save time, reduce implementation costs, and improve software quality by offering extensive, reliable, and state-of-the-art functionality [1],[2],[3]. However, since software systems evolve rapidly, developers need appropriate tools that support reliable and effective decisions when replacing old and outdated libraries with modern ones. This process of replacing one library with another while preserving the same code behavior is known as library migration [4],[5]. It is widely known that migration between libraries is a difficult, error-prone, and time-consuming process [6],[2],[3],[1]. Developers need to study the API of the new library and the related documentation to find the correct API method(s) to replace the corresponding functionality in the current implementation, which belongs to the API of the outdated library. Developers often also spend considerable time making sure that the new functionality has no side effects. For example, previous work has shown that developers usually spend up to 42 days migrating between libraries [7].

Typically, software companies assign migration tasks to more experienced developers in order to reduce the risk of adverse side effects in the product. For example, [Figure 1] shows that developers with more than ten years of experience perform migrations more often than less experienced developers.

Figure 1

The data is based on migrations collected by Alrubaye et al. [6], containing information about developers who previously performed migration tasks: names, email addresses, years of experience, and migration dates. In addition, I found that 95.3% of the 57,447 Java projects used at least one third-party library (API), and each project undergoes 65 API update or migration processes on average. A number of migration approaches and methods have previously been proposed to determine the replacement of an outdated API with a newer version of the same API [8],[9],[10],[11].

Other studies recommend which library to choose as a replacement [12],[13],[14],[15],[16]. However, such approaches do not give developers concrete guidance on how to perform a flexibly configurable migration at the method level. In fact, method-level recommendations have been the focus of many studies, but only for the same library across different programming languages and operating systems [17],[18],[19].

Clearly, there is a need for a more comprehensive recommendation methodology that depends on neither the library nor the programming language: one that takes two different libraries as input and shows how to replace one with the other at the method level.

In this article, I present a new machine-learning approach: a model trained on migrations previously performed by developers that recommends API-level migrations for similar migration contexts. The model accepts two different libraries as input and outputs candidate mappings between their API methods. The main idea is to reuse the valuable migration knowledge embedded in migrations performed manually by developers in other open-source projects, i.e., to learn from the "wisdom of the crowd". The model builds its algorithm on predefined features related to the similarity of method signatures and of the corresponding API documentation.
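The article does not enumerate the full feature set at this point, but the idea of scoring a candidate (source, target) method pair can be sketched as follows. All method records, feature choices, and weights here are hypothetical illustrations of signature- and documentation-based similarity, not the paper's exact features.

```python
from difflib import SequenceMatcher

def candidate_features(src, tgt):
    """Illustrative feature vector for one (source, target) method pair:
    name similarity, return-type match, parameter-count difference,
    and Jaccard overlap of documentation words (all hypothetical)."""
    name_sim = SequenceMatcher(None, src["name"], tgt["name"]).ratio()
    same_return = 1.0 if src["returns"] == tgt["returns"] else 0.0
    param_diff = abs(len(src["params"]) - len(tgt["params"]))
    src_doc, tgt_doc = set(src["doc"].split()), set(tgt["doc"].split())
    doc_overlap = len(src_doc & tgt_doc) / max(len(src_doc | tgt_doc), 1)
    return [name_sim, same_return, param_diff, doc_overlap]

src = {"name": "assertEquals", "returns": "void", "params": ["Object", "Object"],
       "doc": "asserts that two objects are equal"}
tgt = {"name": "assertEquals", "returns": "void", "params": ["Object", "Object"],
       "doc": "asserts that two objects are equal"}
print(candidate_features(src, tgt))  # identical pair -> [1.0, 1.0, 0, 1.0]
```

A trained classifier then consumes such vectors to decide whether the pair is a valid mapping; the key design point is that the features are library-agnostic, so the model transfers to library pairs it has never seen.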

2      Methodology

A migration rule is a pair of the source (removed) library and the target (added) library, written ⟨source, target⟩. For example, ⟨easymock, mockito⟩ is the migration rule in which the easymock library is replaced by the newer mockito library. For a given migration rule, let M_s = {m_s1, ..., m_sn} denote the set of methods belonging to the source library, and M_t = {m_t1, ..., m_tk} the set of methods belonging to the target library. The goal is to find correspondences between M_s and M_t such that each source method is mapped to the appropriate target method; this process is called method mapping.
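The method-mapping process for a migration rule can be sketched as a simple greedy assignment: each source method is paired with the target method that maximizes some pairwise score. The score below is a toy character-set similarity standing in for the learned model, and the method subsets are hand-picked examples.

```python
def map_methods(source_methods, target_methods, score):
    """Greedy method mapping: pair each source method with its
    highest-scoring target method under a pairwise similarity `score`."""
    return {m_s: max(target_methods, key=lambda m_t: score(m_s, m_t))
            for m_s in source_methods}

# Migration rule <easymock, mockito>: small subsets of each API's methods.
source = ["createMock", "expect", "replay"]
target = ["mock", "when", "verify"]

# Toy score: Jaccard similarity of character sets (stands in for MIG).
score = lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))

mapping = map_methods(source, target, score)
print(mapping["createMock"])  # -> mock
print(mapping["replay"])      # -> verify
```

In the actual approach, the pairwise score comes from the trained classifier rather than a lexical heuristic, but the mapping step itself has this shape.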

Figure 2: The method-mapping recommendation approach

Figure 3

Methods that require large amounts of data at the training and validation stages (logistic regression and neural networks) showed the worst accuracy. In contrast, boosted decision trees are known for excellent performance on relatively small datasets, owing to their use of an ensemble of decision trees and a weighted voting mechanism.
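The weighted voting mechanism mentioned above can be illustrated with a toy ensemble of one-feature threshold "stumps": each casts a vote scaled by its weight, and the sign of the weighted sum decides the class. The features, thresholds, and weights here are invented for the example; in boosting they would be learned from data.

```python
def ensemble_predict(stumps, x):
    """Weighted vote of threshold stumps: each (feature, threshold, weight)
    votes +1 if x[feature] exceeds the threshold, else -1."""
    total = sum(w * (1 if x[feat] > thr else -1) for feat, thr, w in stumps)
    return 1 if total > 0 else 0  # 1 = "methods match", 0 = "no match"

# Hypothetical stumps over [name similarity, return-type match, doc similarity].
stumps = [
    (0, 0.5, 1.2),  # name similarity above 0.5 suggests a match
    (1, 0.5, 0.8),  # matching return type supports it
    (2, 0.3, 0.5),  # documentation similarity adds evidence
]
print(ensemble_predict(stumps, [0.9, 1.0, 0.7]))  # all stumps agree -> 1
print(ensemble_predict(stumps, [0.1, 0.0, 0.1]))  # all disagree -> 0
```

Because each stump only needs to be slightly better than chance, the ensemble can reach good accuracy even on the relatively small labeled migration dataset available here.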

Figure 4

2.9 Parameter tuning.

Figure 5

Parameter tuning significantly affects learning in this task [30]. I therefore tuned the training to improve accuracy. Since training is based on a two-class boosted decision tree, I started tuning from the following parameters: maximum number of leaves 20, minimum number of instances per leaf 10, learning rate 0.2, number of trees 100. The training parameters were then varied until the error stabilized at 0.5% [Figure 4], with 6 leaves, a minimum of 47 instances per leaf, a learning rate of 0.14, and 233 trees. [Figure 5] compares training with and without parameter tuning. The tuned training curve lies further out, and accuracy increases by 2.3%: from 90.7% to 93.0%.

3      Known methods of recommending method mappings for library API migrations

3.1 Considered approaches.

For comparison with MIG, I consider implementations of the three best-known alternative approaches.

·      Learning to Rank (LTR). LTR uses the same feature set and the same dataset, and scores each pair of methods belonging to the source and target APIs. The scoring function is a linear combination of the features, with weights trained on previously labeled method correspondences. For fairness when comparing LTR with the other algorithms, only the method receiving the maximum score is taken into account.

·      Text-mining-based approach (TMAP). TMAP [19] ranks method correspondences based on five textual attributes: TF-IDF vectors are built from the class descriptions and from the descriptions of the source and target methods, and the cosine similarity between the corresponding vectors is computed.

·      Method Signature (MS). This method calculates the similarity of method signatures for all possible combinations [28]: it computes the token-level similarity [8] between the two return types and the length of the longest common subsequence between the two method names [29].

3.2 Results.

The accuracy of the mappings produced by MIG and by the best previously known approaches (LTR, TMAP [19] and MS [20]) is presented in [Table 1]. MIG has the best accuracy among all approaches across all migration rules, ranging from 80% to 98%, which exceeds the other methods by 39.52% on average (the difference between MIG's average accuracy over the 8 library migrations, 86.98%, and the best average among the other methods, 47.46%).
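The MS approach's two ingredients can be sketched directly: a longest-common-subsequence length for method names [29] and a return-type comparison. The equal weighting of the two terms is an assumption for illustration; the original formula's exact weights are not reproduced here.

```python
from functools import lru_cache

def lcs_len(a, b):
    """Length of the longest common subsequence of two strings [29]."""
    @lru_cache(maxsize=None)
    def rec(i, j):
        if i == len(a) or j == len(b):
            return 0
        if a[i] == b[j]:
            return 1 + rec(i + 1, j + 1)
        return max(rec(i + 1, j), rec(i, j + 1))
    return rec(0, 0)

def signature_similarity(src_name, src_ret, tgt_name, tgt_ret):
    """MS-style score: return-type match plus normalized name LCS
    (the 0.5/0.5 weighting is an assumption for this sketch)."""
    ret_sim = 1.0 if src_ret == tgt_ret else 0.0
    name_sim = lcs_len(src_name, tgt_name) / max(len(src_name), len(tgt_name))
    return 0.5 * ret_sim + 0.5 * name_sim

print(signature_similarity("getAsString", "String", "getAsString", "String"))  # 1.0
print(lcs_len("addProperty", "add"))  # 3
```

Because the score depends only on names and return types, MS is cheap to compute but blind to parameter lists and documentation, which is exactly where it loses accuracy in the results below.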

Table 1

| Migration rule | LTR | TMAP | MS | MIG |
|---|---|---|---|---|
| logging->slf4j | 28.26% | 21.73% | 26.08% | 85.00% |
| comm-lang->slf4j | 33.33% | 33.33% | 33.33% | 89.90% |
| easymock->mockito | 26.66% | 46.66% | 46.66% | 80.00% |
| testing->junit | 51.72% | 51.72% | 37.93% | 98.00% |
| slf4j->log4j | 77.77% | 66.66% | 77.77% | 85.00% |
| json->gson | 35.29% | 47.05% | 41.17% | 85.00% |
| json-simple->gson | 60.00% | 60.00% | 40.00% | 92.90% |
| gson->jackson | 66.66% | 50.00% | 50.00% | 80.00% |
| Average accuracy | 47.46% | 47.14% | 44.12% | 86.98% |

 

To explain the difference in accuracy among the approaches under consideration, the result of the migration from json to gson is presented in [Figure 6]. In [Figure 6] (A), all four approaches recommended the correct replacement method. MS found the correct replacement because the return types of both methods are the same and the method names are very similar. TMAP recommended the correct target method because the two methods have similar descriptions and names. LTR gave the correct result due to the similarity of the descriptions, the input-parameter signatures, and the return types; these three features have relatively high weights compared with the other features, which significantly increases the accuracy of the ranking algorithm. In contrast, in [Figure 6] (B), only MIG was able to recommend the correct target method "void addProperty(String property, Number value)". LTR recommends "void addProperty(String property, String value)" because the input parameter "String value" is more similar to that of the source method than the "Number value" of the correct target method, while the remaining attributes have the same values for both target methods. The error is due to the polymorphic nature of the method: LTR recommends the method with the correct name, but not the variant with the correct parameters. TMAP recommended "JsonElement parse(JsonReader json)" because its description is more similar to that of the source method than the correct target's is. MS recommended "JsonElement get(String memberName)" because its signature is more similar to that of the original "put" method than the correct target's is. In both cases, [Figure 6] (A) and (B), MIG recommended the correct method mapping because it "learned" these types of patterns through its many decision trees.
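The failure mode in [Figure 6] (B) can be reproduced with any purely lexical signature comparison: the overloaded variant taking a `String value` is textually closer to the source signature than the correct `Number value` variant. The similarity function and the simplified source signature below are illustrative stand-ins, not the ranked features of LTR itself.

```python
from difflib import SequenceMatcher

def sig_sim(src, tgt):
    """Plain lexical similarity over full signature strings."""
    return SequenceMatcher(None, src, tgt).ratio()

# Source method from json (signature simplified for the example)
# and the two overloads of gson's addProperty.
src     = "Object put(String key, String value)"
correct = "void addProperty(String property, Number value)"
wrong   = "void addProperty(String property, String value)"

# The wrong overload shares the literal text "String value" with the
# source, so a purely lexical ranker prefers it over the correct one.
print(sig_sim(src, wrong) > sig_sim(src, correct))
```

A learned model can escape this trap because it weighs many features jointly instead of relying on one lexical score, which is the behavior attributed to MIG above.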

Figure 6

The analysis of the results showed that the following contexts are difficult for all approaches:

·        Method overloading. Two methods that have the same name but differ in the number, type, or order of parameters.

·        Polymorphic methods. Methods overridden in the class hierarchy, where the child-class method has the same name and number of parameters as the base-class method, but with different types.

·        Generic methods. Methods whose input and output parameters allow them to operate on objects of various types.

Cases where the source and target methods differ in name, return type, and even input parameters are also very difficult. Finally, there are methods without proper documentation, which present difficulties for TMAP, LTR and MIG. [Figure 7] shows how the accuracy of MIG's recommendations depends on the amount of data used for training. As the amount of training data increases, accuracy rises from 83.3% with 10% of the data to 92% with 90% of the data used for training. This confirms the independence of the features and the sufficiency of even a small amount of data for good MIG accuracy.

Figure 7

4      Conclusion

This study examined the problem of recommending method mappings when migrating between third-party libraries. A new approach was described that recommends method mappings between two previously unseen libraries using features extracted from the lexical similarity of method names and the textual similarity of method documentation. I evaluated the result by checking how this approach and the three most commonly used alternatives [19],[28],[33] recommend method mappings for migrations of 8 popular libraries, and showed that the proposed approach achieves much better accuracy and performance than the other three methods. Qualitative and quantitative analysis of the results shows an accuracy improvement of 39.52% over other well-known approaches. In future work, I plan to significantly expand the number of migrations used, along with a wider set of binary classifiers. I also plan to extend the feature set by including the context in which methods are used in program code.

References
1. Hussein Alrubaye and Mohamed Wiem Mkaouer. Variability in library evolution. Software Engineering for Variability Intensive Systems: Foundations and Applications, page 295, 2019.
2. Raula Gaikovina Kula, Daniel M German, Ali Ouni, Takashi Ishio, and Katsuro Inoue. Do developers update their library? Empirical Software Engineering, 23(1):384–417, 2018.
3. Bradley E Cossette and Robert J Walker. Seeking the ground truth: a retroactive study on the evolution and migration of software libraries. In Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering, page 55. ACM, 2012.
4. Cedric Teyton, Jean-Rémy Falleri, and Xavier Blanc. Mining library migration graphs. In 2012 19th Working Conference on Reverse Engineering (WCRE), pages 289–298. IEEE, 2012.
5. Cedric Teyton, Jean-Rémy Falleri, and Xavier Blanc. Automatic discovery of function mappings between similar libraries. In 2013 20th Working Conference on Reverse Engineering (WCRE), pages 192–201. IEEE, 2013.
6. Hussein Alrubaye and Mohamed Wiem Mkaouer. Automating the detection of third-party java library migration at the function level. In Proceedings of the 28th Annual International Conference on Computer Science and Software Engineering, pages 60–71. IBM Corp., 2018.
7. Hussein Alrubaye, Mohamed Wiem Mkaouer, and Ali Ouni. On the use of information retrieval to automate the detection of third-party java library migration at the function level. In 27th IEEE/ACM International Conference on Program Comprehension. IEEE, 2019.
8. Miryung Kim, David Notkin, and Dan Grossman. Automatic inference of structural changes for matching across program versions. In ICSE, volume 7, pages 333–343. Citeseer, 2007.
9. Thorsten Schafer, Jan Jonas, and Mira Mezini. Mining framework usage changes from instantiation code. In Proceedings of the 30th international conference on Software engineering, pages 471–480. ACM, 2008.
10. Barthélemy Dagenais and Martin P Robillard. Recommending adaptive changes for framework evolution. ACM Transactions on Software Engineering and Methodology (TOSEM), 20(4):19, 2011.
11. Wei Wu, Yann-Gaël Guéhéneuc, Giuliano Antoniol, and Miryung Kim. AURA: a hybrid approach to identify framework evolution. In 2010 ACM/IEEE 32nd International Conference on Software Engineering, volume 1, pages 325–334. IEEE, 2010.
12. Hao Zhong, Tao Xie, Lu Zhang, Jian Pei, and Hong Mei. Mapo: Mining and recommending api usage patterns. In European Conference on Object-Oriented Programming, pages 318–343. Springer, 2009.
13. Collin McMillan, Mark Grechanik, Denys Poshyvanyk, Qing Xie and Chen Fu. Portfolio: finding relevant functions and their usage. In Proceedings of the 33rd International Conference on Software Engineering, pages 111–120. ACM, 2011.
14. Ali Ouni, Raula Gaikovina Kula, Marouane Kessentini, Takashi Ishio, Daniel M German, and Katsuro Inoue. Search-based software library recommendation using multi-objective optimization. Information and Software Technology, 83:55–75, 2017.
15. Johannes Härtel, Hakan Aksu, and Ralf Lämmel. Classification of APIs by hierarchical clustering. In Proceedings of the 26th Conference on Program Comprehension, pages 233–243. ACM, 2018.
16. Daiki Katsuragawa, Akinori Ihara, Raula Gaikovina Kula, and Kenichi Matsumoto. Maintaining third-party libraries through domain-specific category recommendations. In 2018 IEEE/ACM 1st International Workshop on Software Health (SoHeal), pages 2–9. IEEE, 2018.
17. Amruta Gokhale, Vinod Ganapathy, and Yogesh Padmanaban. Inferring likely mappings between apis. In Proceedings of the 2013 International Conference on Software Engineering, pages 82–91. IEEE Press, 2013.
18. Rahul Pandita, Raoul Praful Jetley, Sithu D Sudarsan, and Laurie Williams. Discovering likely mappings between apis using text mining. In Source Code Analysis and Manipulation (SCAM), 2015 IEEE 15th International Working Conference on, pages 231–240. IEEE, 2015.
19. Rahul Pandita, Raoul Jetley, Sithu Sudarsan, Timothy Menzies and Laurie Williams. Tmap: Discovering relevant api methods through text mining of api documentation. Journal of Software: Evolution and Process, 29(12), 2017.
20. Ferdian Thung, David Lo, and Julia Lawall. Automated library recommendation. In 2013 20th Working Conference on Reverse Engineering (WCRE), pages 182–191. IEEE, 2013.
21. Ferdian Thung, Richard J Oentaryo, David Lo, and Yuan Tian. Webapirec: Recommending web apis to software projects via personalized ranking. IEEE Transactions on Emerging Topics in Computational Intelligence, 1(3):145–156, 2017.
22. Collin McMillan, Denys Poshyvanyk, Mark Grechanik, Qing Xie, and Chen Fu. Portfolio: Searching for relevant functions and their usages in millions of lines of code. ACM Transactions on Software Engineering and Methodology (TOSEM), 22(4):37, 2013.
23. Collin McMillan, Mark Grechanik, and Denys Poshyvanyk. Detecting similar software applications. In Proceedings of the 34th International Conference on Software Engineering, pages 364–374. IEEE Press, 2012.
24. Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and TAMT Meyarivan. A fast and elitist multiobjective genetic algorithm: Nsga-ii. IEEE transactions on evolutionary computation, 6(2):182–197, 2002.
25. Amit Singhal et al. Modern information retrieval: A brief overview. IEEE Data Eng. Bull., 24(4):35–43, 2001.
26. Lei Yu and Huan Liu. Feature selection for high-dimensional data: A fast correlation-based filter solution. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 856–863, 2003.
27. Tom Mitchell, Bruce Buchanan, Gerald DeJong, Thomas Dietterich, Paul Rosenbloom, and Alex Waibel. Machine learning. Annual review of computer science, 4(1):417–433, 1990.
28. Hoan Anh Nguyen, Tung Thanh Nguyen, Gary Wilson Jr, Anh Tuan Nguyen, Miryung Kim, and Tien N Nguyen. A graph-based approach to API usage adaptation. In ACM SIGPLAN Notices, volume 45, pages 302–321. ACM, 2010.
29. James W Hunt and Thomas G Szymanski. A fast algorithm for computing longest common subsequences. Communications of the ACM, 20(5):350–353, 1977.
30. Andrea Arcuri and Gordon Fraser. Parameter tuning or default values? an empirical investigation in search-based software engineering. Empirical Software Engineering, 18(3):594–623, 2013.
31. Alessandro Del Sole. Introducing microsoft cognitive services. In Microsoft Computer Vision APIs Distilled, pages 1–4. Springer, 2018.
32. Edward Loper and Steven Bird. Nltk: the natural language toolkit. arXiv preprint cs/0205028, 2002.
33. Xin Ye, Razvan Bunescu, and Chang Liu. Learning to rank relevant files for bug reports using domain knowledge. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, pages 689–699. ACM, 2014.