GRASCOMP Graduate School in Computing Science

Fall 2011 Doctoral School Day

COMP051 - Charles Pecheur (coordinator)

Lecture slides are available in the Documents and Links section via the menu on the left.

Videos of the lectures are available on the INGI streaming server.


GRASCOMP Doctoral School

INGI Fall 2011 Doctoral School Day

Tuesday 29 November 2011
Université catholique de Louvain

The Computing Science and Engineering pole (INGI) of the Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM) at the Université catholique de Louvain is organizing a full-day session of doctoral lectures on the broad theme of Software Engineering and Programming Systems.  The program will consist of extended, didactic presentations (about one hour each) on selected topics in computer science, presented by an international panel of established researchers.  The presentations will be given in English.

This seminar allows Belgian doctoral researchers to learn about the latest advances and trends in the broad field of Software Engineering and Programming Systems and to meet fellow researchers from Belgium and abroad, all within a single day.  It also offers the presenters an opportunity to disseminate their research achievements to, and to meet, their peers in the Belgian academic community.

Location

The lectures will take place in auditorium BARB94, Sainte-Barbe Bldg, Place Sainte Barbe 1, 1348 Louvain-la-Neuve (Google map).

Participation

The Doctoral School Day is targeted at all Belgian doctoral students but is open to all.  Participation will be credited to doctoral students upon request, as part of the GRASCOMP doctoral school programme.  Attendance is free but registration is required by subscribing to the course website (see instructions below).  Lunch will not be included in the registration (there are plenty of opportunities in the city of Louvain-la-Neuve to get a good and cheap lunch).

To register:
  • Visit the GRASCOMP Campus Website at http://icampus.grascomp.be.
  • If you have not yet done so, create a user account for yourself.  Make sure you provide a valid e-mail address.
  • Enrol in course COMP051 with the key "fall2011".

Programme

09:00–10:00
Measuring and mining evolution of software projects
Alexander Serebrenik (Assistant Professor, TU Eindhoven, NL)
10:15–11:15
Clara: Proving safety and security properties by evaluating runtime monitors ahead of time
Eric Bodden (Research Group Leader, European Center for Security and Privacy by Design, DE)
11:30–12:30
High-level Abstractions for Instrumentation-based Dynamic Program Analysis
Walter Binder (Assistant Professor, University of Lugano, CH)

12:30–14:00
Lunch break

14:00–15:00
Adopting MDE to support the evolution of component-based FOSS systems
Davide Di Ruscio (Assistant Professor, University of L'Aquila, IT)
15:15–16:15
Statistical Model Checking: An Overview
Axel Legay (Associate Professor at Aalborg University, DK and full-time researcher at INRIA Rennes, FR)
16:30–17:30
Maintaining Source Code Quality: Tools and Techniques
Andy Kellens (Post-doctoral researcher, Vrije Universiteit Brussel, BE)

Contact

For any questions, please contact Kim Mens (Kim.Mens (at) uclouvain.be) and/or Charles Pecheur (Charles.Pecheur (at) uclouvain.be).



Abstracts

Measuring and mining evolution of software projects

Alexander Serebrenik (TU Eindhoven, NL)

Software maintenance is an area of software engineering with deep financial implications. Indeed, maintenance and evolution costs were forecasted to account for more than half of North American and European software budgets in 2010. Similar or even higher figures were reported for countries such as Norway and Chile. In this talk we discuss recent advancements in two popular approaches to assessing the evolution of software projects: measuring and mining software. Software metrics, commonly used to measure software, are usually defined at micro level (method, class, package), while the analysis of maintainability and evolution requires insights at macro (system) level. Metrics should, therefore, be aggregated. We discuss recent work on software metrics aggregation techniques, and advocate the use of econometric inequality indices to perform aggregation. A complementary approach to studying software evolution consists in mining software repositories, e.g., version control systems, bug trackers and mail archives. While abundant information is usually present in such repositories, successful information extraction is often challenged by the necessity to simultaneously analyze different repositories and to combine the information obtained. We propose to apply process mining techniques, originally developed for business process analysis, to address this challenge. However, in order for process mining to become applicable, different software repositories should be combined, and “related” software development events should be matched: e.g., mails sent about a file, modifications of the file and bug reports that can be traced back to it. In this talk we discuss the approach proposed, as well as a series of case studies addressing such aspects of the development process as the roles of different developers, the way bug reports are handled and conformance to software engineering standards.
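As a minimal illustration of the aggregation idea (not the speaker's actual tooling), one of the econometric inequality indices mentioned above, the Gini coefficient, can lift method-level metric values to a single system-level score; the metric values below are hypothetical:

```python
def gini(values):
    """Gini coefficient of non-negative metric values.

    0 means the metric is spread perfectly evenly; values near 1 mean
    a few entities dominate the total.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula over the ascending-ordered values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Hypothetical method-level lines-of-code measurements for one system:
# one outlier method dominates, which the index makes visible.
method_loc = [5, 7, 6, 8, 120, 5, 9]
print(round(gini(method_loc), 3))  # 0.634
```

A low aggregate value suggests the metric is evenly distributed across methods, while a high one flags the skewed distributions that are often the interesting maintainability signal.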

Clara: Proving safety and security properties by evaluating runtime monitors ahead of time

Eric Bodden (TU Darmstadt, DE)

A runtime monitor observes events during a program's execution and validates these events against the specification of a safety or security property. When detecting a property violation, the monitor can log the violation or even prevent the violating event from actually occurring. In this talk we focus on the Clara system for evaluating runtime monitors ahead of time. Clara statically evaluates runtime monitors expressed as "aspects" in the aspect-oriented programming language AspectJ. Monitors expressed as aspects are easy to write, read, maintain and analyze. This allows Clara to use syntactic, pointer-based and control-flow-based analysis techniques to partially evaluate runtime monitors already at compile-time. Partial ahead-of-time evaluation is a powerful concept: For many programs, Clara can prove the absence of property violations on all possible executions. For other programs, Clara typically restricts the program instrumentation for runtime monitoring to a necessary minimum, speeding up the runtime monitoring process by orders of magnitude.
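Stripped of the AspectJ machinery, a runtime monitor is essentially a finite-state machine over program events. The sketch below (illustrative only, not Clara's API) checks the classic "no read after close" property; Clara's contribution is to prove statically, where possible, that the violating transition can never fire, so that the monitor need not be woven into the program at all:

```python
class SafetyMonitor:
    """Finite-state monitor for the property 'no read after close'.

    States: 'open' --close--> 'closed'; a 'read' event in state
    'closed' violates the property.
    """

    def __init__(self):
        self.state = "open"
        self.violations = []

    def on_event(self, event):
        # Transition on 'close'; record a violation on a late 'read'.
        if event == "close":
            self.state = "closed"
        elif event == "read" and self.state == "closed":
            self.violations.append("read after close")

m = SafetyMonitor()
for ev in ["read", "close", "read"]:
    m.on_event(ev)
print(m.violations)  # ['read after close']
```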

High-level Abstractions for Instrumentation-based Dynamic Program Analysis

Walter Binder (University of Lugano, CH)

Dynamic program analysis tools support numerous software engineering tasks, including profiling, debugging, and reverse engineering. Prevailing techniques for building dynamic analysis tools are based on low-level abstractions that make tool development tedious, error-prone, and expensive. To simplify the development of dynamic analysis tools, some researchers promoted the use of aspect-oriented programming (AOP). However, as mainstream AOP languages have not been designed to meet the requirements of dynamic analysis, the success of using AOP in this context remains limited. For example, in AspectJ, join points that are important for dynamic program analysis (e.g., the execution of bytecodes or basic blocks of code) are missing; access to reflective dynamic join point information is expensive; data passing between woven advice in local variables is not supported; the generated woven code violates current constraints on class redefinition in production JVMs; and the mixing of low-level bytecode instrumentation and high-level AOP code is not foreseen. In this talk, I introduce DiSL, a new domain-specific language for instrumentation. DiSL allows representing any bytecode instrumentation. It uses Java annotation syntax such that standard Java compilers can be used for compiling DiSL code. The language offers an open join point model, synthetic local variables, efficient processing of method arguments, and comprehensive static and dynamic context information. The DiSL weaver guarantees complete bytecode coverage and conforms to current class redefinition constraints. We have implemented several dynamic analysis tools in DiSL, including profilers for the inter- and intra-procedural control flow, debuggers, dynamic metrics collectors integrated in the Eclipse IDE to augment the static source views with dynamic information, and a tool for dynamic symbolic execution of bytecode. The tools are concise and perform as well as implementations using low-level techniques. DiSL has also been conceived as an intermediate language for future domain-specific analysis languages, as well as for AOP languages.
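As a loose, language-shifted analogy to instrumentation-based dynamic analysis (DiSL itself targets JVM bytecode, not Python), the sketch below uses Python's built-in sys.settrace hook to count executed source lines per function, the kind of low-level event stream that profilers and dynamic metrics collectors build on:

```python
import sys
from collections import Counter

line_counts = Counter()

def tracer(frame, event, arg):
    # The global trace function receives a 'call' event per new frame;
    # returning it enables per-line tracing inside that frame.
    if event == "line":
        line_counts[frame.f_code.co_name] += 1
    return tracer

def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

sys.settrace(tracer)   # instrument everything from here on
fib(5)
sys.settrace(None)     # stop tracing before reporting

print(dict(line_counts))
```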

 

Adopting MDE to support the evolution of component-based FOSS systems

Davide Di Ruscio (University of L'Aquila, IT)

FOSS (Free and Open Source Software) systems present interesting challenges in system evolution. On one hand, most FOSS systems are based on fine-grained units of software deployment – called packages – which promote system evolution; on the other hand, FOSS systems are among the largest software systems known and require sophisticated static and dynamic conditions to be verified, in order to successfully deploy upgrades on users’ machines. The slightest error in one of these conditions can turn a routine upgrade into a system administrator’s nightmare. In this presentation I will describe EVOSS, a model-based approach to support the upgrade of FOSS systems. The approach promotes the simulation of upgrades to predict failures before affecting the real system. Both fine-grained static aspects (e.g. configuration incoherences) and dynamic aspects (e.g. the execution of configuration scripts) are taken into account, improving over the state of the art of upgrade planners. The effectiveness of the approach is validated by instantiating it on widely-used FOSS distributions.
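The core idea of simulating an upgrade before applying it can be caricatured in a few lines. The sketch below is a toy illustration (not EVOSS, and the package names and constraints are hypothetical) that checks only static version constraints, whereas EVOSS also models dynamic aspects such as the behaviour of configuration scripts:

```python
def simulate_upgrade(installed, deps, package, new_version):
    """Return the dependency constraints an upgrade would break.

    `installed` maps package -> version tuple; `deps` maps package ->
    {dependency: minimum required version tuple}.
    """
    state = dict(installed)      # simulate on a copy, not the real system
    state[package] = new_version
    broken = []
    for pkg, requires in deps.items():
        if pkg not in state:
            continue
        for dep, min_version in requires.items():
            if dep not in state or state[dep] < min_version:
                broken.append((pkg, dep, min_version))
    return broken

# Hypothetical system state and constraints.
installed = {"libfoo": (1, 0), "app": (2, 1)}
deps = {"app": {"libfoo": (1, 0)}}

print(simulate_upgrade(installed, deps, "libfoo", (2, 0)))  # [] -- safe
print(simulate_upgrade(installed, deps, "libfoo", (0, 9)))  # downgrade breaks app
```

Only after the simulated state satisfies every constraint would the real upgrade proceed; the original system state is never touched during simulation.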

Statistical Model Checking: An Overview

Axel Legay (Aalborg University, DK & INRIA Rennes, FR)

Quantitative properties of stochastic systems are usually specified in logics that allow one to compare the measure of executions satisfying certain temporal properties with thresholds. The model checking problem for stochastic systems with respect to such logics is typically solved by a numerical approach that iteratively computes (or approximates) the exact measure of paths satisfying relevant subformulas; the algorithms themselves depend on the class of systems being analyzed as well as the logic used for specifying the properties. Another approach to solve the model checking problem is to simulate the system for finitely many runs, and use hypothesis testing to infer whether the samples provide statistical evidence for the satisfaction or violation of the specification. In this tutorial, we survey the statistical approach, and outline its main advantages in terms of efficiency, uniformity, and simplicity.
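As a toy illustration of the simulation-based approach (not any specific tool), the sketch below estimates, by Monte Carlo sampling, the probability that a hypothetical stochastic system, a symmetric random walk, satisfies a bounded safety property. A real statistical model checker would feed such samples into a sequential hypothesis test, such as Wald's SPRT, to decide against a threshold rather than report a point estimate:

```python
import random

def simulate(rng, steps=10):
    # Hypothetical stochastic system: a symmetric random walk.
    # Bounded property: the walk never drops below -3 within `steps`.
    pos = 0
    for _ in range(steps):
        pos += 1 if rng.random() < 0.5 else -1
        if pos < -3:
            return False
    return True

def estimate_probability(num_runs=10_000, seed=42):
    # Fixed seed makes the sampling experiment reproducible.
    rng = random.Random(seed)
    hits = sum(simulate(rng) for _ in range(num_runs))
    return hits / num_runs

p_hat = estimate_probability()
# A statistical model checker would now test a hypothesis such as
# P(property) >= theta on these samples instead of printing p_hat.
print(p_hat)
```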

Maintaining Source Code Quality: Tools and Techniques

Andy Kellens (Vrije Universiteit Brussel, BE)

Despite several decades of research and experience, software systems still tend to be rewritten from scratch - on average - every seven years. On one hand, this problem is caused by the fact that software systems are amongst the most complex artifacts being produced by mankind today, and that these systems are not governed by classic engineering principles. On the other hand, software constantly needs to be changed in order to fulfill new requirements. This constant evolution results in a degradation of internal quality attributes of the structure and source-code of the software (for example maintainability, comprehensibility, consistency, modularity, and so on) up to a point where change is no longer possible and a complete rewrite becomes inevitable. In this presentation we address the topic of maintaining such internal quality attributes of software systems. We discuss some of the properties of the structure and the source-code of software systems, and how these properties impact the life expectancy of such software systems. Furthermore, we provide a bird's eye overview of the various techniques and tools that assist software engineers in preventing the degradation of the internal quality attributes of software systems. As a practical illustration of the application of such tools, we present the outcome of an on-going collaboration between academia and industry. We discuss why maintaining source-code quality is important for our industrial partner, and we relate our experiences in building a set of pragmatic tools for maintaining this source-code quality.