Developing software with the help of a virtual assistant

Antonio Mastropaolo (PhD student), Professor Gabriele Bavota, Matteo Ciniselli (PhD student) and Rosalia Tufano (PhD student)

Institutional Communication Service

10 October 2022

A virtual assistant that can help programmers by performing non-trivial, even complex, tasks: with this proposal, in 2019 Gabriele Bavota, currently an associate professor at the USI Faculty of Informatics, was awarded an ERC Starting Grant, the research funding assigned by the European Research Council to the most promising young researchers.

Working on the DEVINTA project is a team that includes, in addition to Professor Bavota, three PhD students - Rosalia Tufano, Matteo Ciniselli and Antonio Mastropaolo - and postdoctoral researchers Emad Aghajani, who has since moved to the private sector, and Luca Pascarella, who is about to join the Swiss Federal Institute of Technology in Zurich.


Professor Bavota, where did the idea of creating an "artificial assistant" to develop software come from? 

The idea comes from three observations. The first is that software systems are among the most complex constructs created by humans and, as such, pose substantial challenges both during their development and in the subsequent maintenance phase. These challenges must be met in an increasingly competitive market, requiring software developers to create high-quality products in the shortest possible time. Maximising developers' productivity and supporting them in complex tasks is, therefore, a priority in software engineering.

However, and this is the second observation, the support provided by existing development tools is very limited, especially for complex tasks such as understanding code or correcting errors.

Finally, the advances in Artificial Intelligence (AI) and the amount of data available in open-source projects make applying AI to problems relevant to developers a clear opportunity. For example, it is possible to train AI models to learn how to automatically correct errors in code (bug-fixing). This can be done by learning from millions of bug-fixing activities performed by software developers in thousands of open-source projects.
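To give a flavour of the idea (this is an illustrative sketch, not DEVINTA's actual pipeline; the data and function names are hypothetical), bug-fixing can be framed as a translation task: each mined bug-fixing commit yields a (buggy, fixed) code pair, and a neural model is then trained to "translate" buggy code into its fixed version.

```python
# Illustrative sketch only: mining (buggy, fixed) code pairs from commit
# history to build training data for a bug-fixing model. All names and
# data here are hypothetical, not taken from the DEVINTA project.

def mine_training_pairs(commits):
    """Turn bug-fixing commits into (buggy, fixed) source-code pairs."""
    pairs = []
    for commit in commits:
        # Naive filter: treat commits mentioning "fix" as bug fixes.
        if "fix" in commit["message"].lower():
            pairs.append((commit["code_before"], commit["code_after"]))
    return pairs

# Toy example of mined commit data.
commits = [
    {"message": "Fix off-by-one in loop bound",
     "code_before": "for (int i = 0; i <= n; i++)",
     "code_after":  "for (int i = 0; i < n; i++)"},
    {"message": "Refactor variable names",
     "code_before": "int a = 0;",
     "code_after":  "int count = 0;"},
]

pairs = mine_training_pairs(commits)
# Each pair is one training instance for a sequence-to-sequence model:
# the model learns to map the buggy version onto the fixed one.
print(len(pairs))  # prints 1: only the commit flagged as a fix is kept
```

In practice, real mining pipelines use far more careful heuristics to identify genuine bug fixes, and the resulting millions of pairs are fed to large neural models rather than inspected by hand.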


What are the advantages over tools that do not use artificial intelligence, such as software libraries?

We are talking about different types of support. A software library provides the developer with already implemented functionality that they can reuse, saving time and lowering costs. We focus on supporting developers in complex problems for which the only path to automation is to learn from what real developers have done in the past. For example, in one of our lines of research, we are automating the code review process, the activity by which a team of developers (reviewers) analyses code written by another developer, identifying any problems and suggesting how to improve the quality of the code. This process is costly in terms of time (it can take several iterations between the reviewers and the developer who wrote the code) and substantially increases development costs, as several developers are allocated to the same task. In DEVINTA, we have trained AI models that can partially replace reviewers in this process, providing automated and immediate feedback to the developer, just as a "human" reviewer would. These models have learned "how to review code" from the work of tens of thousands of reviewers in open-source projects.


Is the purpose of DEVINTA to build a concrete tool, or is it more theoretical work? 

The project does not aim to build a concrete product but a set of approaches that automate several non-trivial tasks for developers. At the moment, besides the code review automation I have already mentioned, we focus mainly on two areas: automating implementation tasks, i.e. recommending to the developer the code needed to implement a specific feature, and automatically documenting code to support its understanding.


Are the developed solutions ready to be used yet?

Not yet, but we expect at least some of them to be ready by the end of the project. What still holds them back is the accuracy of the recommendations generated by the approaches we develop. In the case of code review, for example, the AI model behaves like a human reviewer in about 20 per cent of the cases on which we have tested it. This indicates the need for more research before the model can be considered usable.