VSSE 2016

ETAPS 2016

Program

9.00 - 9.15 Welcome to VSSE
Grigory Fedyukovich (University of Washington), Antti Hyvärinen (University of Lugano), Arie Gurfinkel (SEI/CMU)
9.15 - 10.00 Building Modular Program Verifiers using the Viper Infrastructure
Peter Mueller (ETH Zurich)
10.00 - 11.00 Coffee break
11.00 - 11.45 Building a Witnessing Compiler
Kedar Namjoshi (Bell Labs, Nokia)
11.45 - 12.30 Towards Correct Composition and Transformation of Concurrent System Semantics
Anton Wijs (Eindhoven University of Technology)
12.30 - 14.15 Lunch
14.15 - 15.00 Feedback Generation for Performance Problems in Introductory Programming Assignments
Florian Zuleger (TU Wien)
15.00 - 16.00 Coffee break
16.00 - 16.45 Automating Regression Verification
Philipp Ruemmer (Uppsala University)
16.45 - 17.30 Towards a Uniform Analysis of Models and Code
Temesghen Kahsai (NASA Ames / CMU)



Abstracts

Peter Mueller (ETH Zurich)

Building Modular Program Verifiers using the Viper Infrastructure

Modular program verification decouples the correctness proof of a component from the implementations of the other components in a program. Modularity makes verification scalable and allows one to give correctness guarantees for individual components such as libraries. Moreover, it supports software evolution by reducing the impact of program extensions and changes and, therefore, minimizing the amount of code that needs to be re-verified when the program evolves. This talk gives an overview of the state of the art in modular verification of heap-manipulating programs. It explains how modular verification techniques are supported by Viper, an infrastructure that facilitates both the prototyping and the development of automated program verifiers.

Philipp Ruemmer (Uppsala University)

Automating Regression Verification

Regression verification is an approach that complements regression testing with formal verification. The goal is to formally prove that two versions of a program behave either equally or differently in a precisely specified way. In recent work, we have presented an automatic approach for regression verification that reduces the equivalence of two related imperative integer programs to the satisfiability of a set of Horn constraints, which can then be solved using state-of-the-art model checkers for Horn clauses (such as Z3, HSF, Spacer, or Eldarica). The method is able to show, fully automatically, the equivalence of non-trivial programs over integer arithmetic. In the presentation, after a general introduction to regression verification and the encoding as Horn clauses, I will outline how the approach can be extended to datatypes other than integer arithmetic, and to programs operating on the heap.
The talk will present joint work with Dennis Felsing, Sarah Grebing, Vladimir Klebanov, and Mattias Ulbrich.
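
To make the encoding concrete, here is a minimal sketch in Python using Z3's built-in Horn-clause engine. The two toy program versions, the predicate name Inv, and the lockstep encoding are illustrative assumptions for this example, not the authors' tool:

from z3 import (Ints, Function, IntSort, BoolSort, ForAll,
                Implies, And, Not, Xor, SolverFor, sat)

# Version 1: r = 0; i = 0; while (i < n) { r = r + 2; i = i + 1; }
# Version 2: r = 0; i = n; while (i > 0) { r = r + 2; i = i - 1; }
# Inv(n, i1, r1, i2, r2) is the unknown relational (coupling) invariant
# of the lockstep product of the two versions.
Inv = Function('Inv', IntSort(), IntSort(), IntSort(),
               IntSort(), IntSort(), BoolSort())
n, i1, r1, i2, r2 = Ints('n i1 r1 i2 r2')
V = [n, i1, r1, i2, r2]

s = SolverFor('HORN')
# Initial states: both versions start with r = 0.
s.add(ForAll([n], Inv(n, 0, 0, n, 0)))
# Lockstep step: both loops execute one iteration together.
s.add(ForAll(V, Implies(And(Inv(*V), i1 < n, i2 > 0),
                        Inv(n, i1 + 1, r1 + 2, i2 - 1, r2 + 2))))
# The loop guards must never disagree, otherwise lockstep would be unsound.
s.add(ForAll(V, Implies(And(Inv(*V), Xor(i1 < n, i2 > 0)), False)))
# On exit, the two results must coincide.
s.add(ForAll(V, Implies(And(Inv(*V), Not(i1 < n), Not(i2 > 0), r1 != r2),
                        False)))

res = s.check()
print(res)            # sat: a coupling invariant exists, so the
if res == sat:        # two versions are equivalent
    print(s.model())  # e.g. i1 + i2 == n and r1 == r2

A model for Inv is a coupling invariant certifying equivalence; an unsat answer would instead come with a derivation exhibiting a difference between the two versions.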

Florian Zuleger (TU Wien)

Feedback Generation for Performance Problems in Introductory Programming Assignments

Providing feedback on programming assignments manually is a tedious, error-prone, and time-consuming task. In this talk, I will motivate and address the problem of generating feedback on performance aspects of introductory programming assignments. We studied a large number of functionally correct student solutions to introductory programming assignments and observed: (1) there are different algorithmic strategies, with varying levels of efficiency, for solving a given problem, and these different strategies merit different feedback; (2) the same algorithmic strategy can be implemented in countless different ways that are irrelevant for reporting feedback on the student program. We propose a lightweight programming-language extension that allows a teacher to define an algorithmic strategy by specifying certain key values that should occur during the execution of an implementation. We describe an approach based on dynamic analysis that tests whether a student's program matches a teacher's specification. Our experimental results illustrate the effectiveness of both our specification language and our dynamic analysis.
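
As a rough illustration of the key-value idea, here is a hypothetical Python sketch; the names teacher_spec, observe, and matches are invented for the example and do not reflect the actual language extension or tool:

def teacher_spec(a):
    """Linear-scan strategy for 'find the maximum': the running maximum
    must take exactly these key values, in this order."""
    key_values, best = [], a[0]
    for x in a[1:]:
        best = max(best, x)
        key_values.append(best)
    return key_values

def student_program(a, observe):
    """A student's implementation, instrumented with an observe() hook."""
    m = a[0]
    for x in a[1:]:
        if x > m:
            m = x
        observe(m)           # key value: the running maximum
    return m

def matches(spec, program, test_inputs):
    """Dynamic analysis: run the student program and compare its trace of
    key values against the teacher's specification."""
    for a in test_inputs:
        trace = []
        program(a, trace.append)
        if trace != spec(a):
            return False     # different (possibly less efficient) strategy
    return True

print(matches(teacher_spec, student_program, [[3, 1, 4, 1, 5], [2, 2]]))

A mismatch signals that the student followed a different strategy, which can then be mapped to strategy-specific feedback.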

Kedar Namjoshi (Bell Labs, Nokia)

Building a Witnessing Compiler

Much of the research in software verification is on programs that operate on scalar data. How does one verify a program such as a compiler, which operates on other programs? How can one provide a guarantee that the final executable object is as correct and as secure as the high-level program from which it is derived?

The traditional approach to compiler correctness is to produce a proof showing that every transformation preserves the behavior of all legal input programs. Despite many advances in theorem-proving methods, this is still a time-consuming task requiring enormous human effort, one that is essentially infeasible for the verification of production compilers, which are written in languages such as C or C++. My talk is about a different, more practical mechanism, which trades human effort for computing effort. In this view, every compiler transformation is written so that it generates a "witness" justifying its correctness on the specific program being compiled. Witnesses are presented in a format that can be validated independently, ideally using off-the-shelf constraint solvers. I will talk about our experience with implementing witnessing versions of common compiler transformations, such as dead-code elimination and SSA (static single assignment) conversion, from the LLVM framework. The necessary augmentations are small, a few hundred lines of simple code. The (trusted) witness validator is implemented in about 3000 lines of OCaml with support from SMT solvers. I will also discuss intriguing open questions and describe applications of witnesses beyond correctness.
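
To illustrate the validation step, here is a minimal Python sketch using the Z3 SMT solver; the straight-line example and the encoding are illustrative assumptions, not the actual LLVM-based implementation described in the talk:

from z3 import Ints, Solver, unsat

x, t, y_orig, y_opt = Ints('x t y_orig y_opt')

s = Solver()
# Original program:     t = x * x; y = x + 1   (t is dead)
s.add(t == x * x, y_orig == x + 1)
# Transformed program:             y = x + 1   (the write to t was removed)
s.add(y_opt == x + 1)
# Witness obligation: no input makes the observable outputs differ.
s.add(y_orig != y_opt)

# unsat = no input distinguishes the two programs: the step is validated.
print('validated' if s.check() == unsat else 'refuted')

For real transformations the witness is richer, typically a relation between program points of the two versions, but the validation step has the same shape: one solver query per transformation.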

Anton Wijs (Eindhoven University of Technology)

Towards Correct Composition and Transformation of Concurrent System Semantics

We present a technique to define and apply behavioural transformations on formal models of concurrent systems. Such transformations make it possible to change the potential behaviour of system models, and they can be verified in isolation. In particular, whether a transformation preserves safety and liveness properties can be determined without involving any concrete input model. This is particularly useful for model-driven development approaches, where systems are designed by first developing an abstract model and then iteratively modifying it until it is concrete enough to automatically generate source code from it. The main benefits of using our approach are:

1. Verified transformations can be safely applied to any input model, making it possible to construct a repository of reusable, correct transformations.
2. Constructing models incrementally via verified transformations allows one to reason indirectly about the correctness of those models, even if they are too large to verify directly with conventional tools.

The technique is implemented in a tool called REFINER, which integrates with the existing explicit-state model checking toolsets mCRL2 and CADP.
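
As a toy illustration of checking a transformation rule in isolation, consider the following Python sketch; it drastically simplifies the actual theory (REFINER's checks are based on notions such as branching bisimulation), and all names are invented for the example:

# An LTS pattern as a set of (source, action, target) transitions.
# A rule rewrites occurrences of the left-hand pattern into the right-hand
# pattern in any input model.
lhs = {('p0', 'req', 'p1'), ('p1', 'ack', 'p0')}
rhs = {('p0', 'req', 'p1'), ('p1', 'log', 'p2'), ('p2', 'ack', 'p0')}

def preserves_absence(lhs, rhs, forbidden):
    """Toy safety check: the rule may only mention the forbidden action
    if its left-hand side already contained it.  The check inspects only
    the rule's two patterns, never a concrete input model."""
    lhs_actions = {a for (_, a, _) in lhs}
    rhs_actions = {a for (_, a, _) in rhs}
    return forbidden not in rhs_actions - lhs_actions

# Safety property: the action 'error' never occurs.
print(preserves_absence(lhs, rhs, 'error'))   # True: preserved by the rule

The point of the example is only that the verification obligation mentions the rule itself, so a rule verified once can be reused on arbitrarily large models.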

Temesghen Kahsai (NASA Ames / CMU)

Towards a Uniform Analysis of Models and Code

Model-based design is widely used in the development of critical control software for embedded systems. Using models is key to keeping development costs manageable, since correcting bugs in models is much cheaper than correcting them in the final implementation. This paradigm can be effective, however, only when it is accompanied by powerful tools for simulation, verification, synthesis, and code generation. The design of embedded control software can be assisted by a variety of modeling languages and tools, among which synchronous languages have become a de facto standard. Specifically, Simulink has long been used as the standard modeling language for embedded systems. Simulink models can often call external C/C++ functions, e.g., legacy code that needs to be integrated into the model. For verification, this means that tools need to reason not only about the Simulink blocks but also about the C/C++ functions. In this talk I will present CoCoSim, a modular integrated verification framework for Simulink models. The key distinguishing features of CoCoSim are: (i) its modular design, (ii) its tight integration with existing simulation and analysis tools for Simulink, and (iii) its ability to reason in a uniform way about Simulink models and C/C++ code.
