Grascomp's Day, November 3rd, 2011, ULB, Brussels
This day aims to bring together Ph.D. students and researchers involved in the GRASCOMP doctoral school in computer science and engineering.
Presentations of their work by Ph.D. students should improve the mutual awareness of the participants and foster future initiatives across all research teams related to computer science in the French-speaking Community of Belgium.
The Day will consist of:
- 2 invited talks given by Post-Doc researchers
- 5 contributions, one by a representative (Dr. or nearly graduated) of each of the 5 participating institutions (FUNDP, UCL, ULB, ULg, UMons), proposed by the Grascomp Committee
- Poster sessions
Ph.D. students are invited to present an A0 poster about their research. Please send the poster's author and title to Pierre.Manneback@umons.ac.be by October 27th, 2011. Two rewards of 125€ each will be granted! One by the Grascomp Committee, and one by the participants themselves.
Location: ULB, Salle Solvay, 5th Floor, Building NO, Campus de la Plaine, Brussels
Maps and Access:
Agenda
• 9h00-9h15: Opening, Stefan Langerman and Pierre Manneback
• 9h15-10h15: An Energy-aware Hybrid Metaheuristic for Scheduling Precedence-Constrained Parallel Applications, Mohand Mezmaz, Post-Doc, UMONS
• 10h15-10h45: Coffee Break and Posters
• 10h45-11h15: Recent Advances in Batch Mode Reinforcement Learning, Raphaël Fonteneau, Post-Doc, ULg
• 11h15-11h45: Stable Feature Selection in Empty Spaces: Applications to Gene Profiling and Diagnosis from DNA Microarrays, Thibault Helleputte, Post-Doc, UCL
• 11h45-12h15: A Transformation-Based Approach to Context Aware Modeling, Sylvain Degrandsart, UMONS
• 12h15-13h30: Lunch and Posters
• 13h30-14h30: Mendel: Source Code Recommendation based on a Genetic Metaphor, Angela Lozano, Post-Doc UCL
• 14h30-15h00: Coffee Break and Posters
• 15h00-15h30: Quality evaluation and improvement framework for database schemas, Jonathan Lemaitre, FUNDP
• 15h30-16h00: Efficient in-Memory Object Graph Versioning, Frédéric Pluquet, ULB
• 16h00-17h00: Cocktail
An Energy-aware Hybrid Metaheuristic for Scheduling Precedence-Constrained Parallel Applications
The presentation focuses on the problem of scheduling precedence-constrained parallel applications on heterogeneous computing systems (HCSs) such as cloud computing infrastructures. Applications of this kind have been studied and used in many research works, most of which propose algorithms that minimize the completion time (makespan) without paying much attention to energy consumption.
We propose a new parallel bi-objective hybrid genetic algorithm that takes into account not only the makespan but also energy consumption. We particularly focus on the island parallel model and the multi-start parallel model. Our method relies on dynamic voltage scaling (DVS) to minimize energy consumption. In terms of energy consumption, the obtained results show that our approach outperforms previous scheduling methods by a significant margin. In terms of completion time, the obtained schedules are also shorter than those of other algorithms. Furthermore, our study demonstrates the potential of DVS.
Mohand Mezmaz, UMONS
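The makespan/energy trade-off that DVS exposes can be sketched with a deliberately simplified model (our own illustration, not the authors' algorithm): assuming dynamic power scales with f³ and execution time with 1/f, running a task at relative frequency s = f/f_max multiplies its time by 1/s and its energy by s².

```python
# Illustrative bi-objective evaluation under a toy DVS model (assumption:
# time = t0/s and energy = e0*s^2 for relative frequency s); the real
# algorithm is a parallel hybrid genetic algorithm, not shown here.

def evaluate(schedule):
    """schedule: list of (base_time, base_energy, s) tasks on one processor."""
    makespan = sum(t0 / s for t0, e0, s in schedule)
    energy = sum(e0 * s ** 2 for t0, e0, s in schedule)
    return makespan, energy

def dominates(a, b):
    """Pareto dominance for (makespan, energy) minimisation."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

full_speed = evaluate([(4.0, 4.0, 1.0), (2.0, 2.0, 1.0)])  # (6.0, 6.0)
scaled = evaluate([(4.0, 4.0, 0.8), (2.0, 2.0, 1.0)])      # ≈ (7.0, 4.56)
# Neither point dominates the other: DVS trades completion time for energy,
# which is why a bi-objective (Pareto) search is needed.
```

A genetic algorithm would evolve both the task order and the per-task frequencies, keeping the non-dominated schedules.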
Recent Advances in Batch Mode Reinforcement Learning Raphaël Fonteneau, ULg
Batch mode reinforcement learning (BMRL) is a field of research which focuses on the inference of high-performance control policies when the only information on the control problem is gathered in a set of trajectories. When the (state, action) spaces are large or continuous, most of the techniques proposed in the literature for solving BMRL problems combine value or policy iteration schemes from Dynamic Programming (DP) theory with function approximators representing (state, action) value functions. While successful in many studies, the use of function approximators for solving BMRL problems also has drawbacks. In particular, it makes performance guarantees difficult to obtain and does not systematically take advantage of optimal trajectories. In this talk, I will present a new line of research for solving BMRL problems based on the synthesis of "artificial trajectories", which opens avenues for designing new BMRL algorithms. In particular, it avoids the two above-mentioned drawbacks of function approximators.
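As background, the classical scheme the talk contrasts with can be sketched in a toy tabular setting (our illustration, not the artificial-trajectories method): value iteration applied to a fixed batch of one-step transitions, which is the only information available about the control problem.

```python
# Minimal batch-mode value iteration on a toy discrete problem (an
# assumption-level sketch; real BMRL uses function approximators for
# large or continuous spaces).

F = [  # fixed batch of (state, action, reward, next_state) transitions
    (0, 'right', 0.0, 1),
    (1, 'right', 1.0, 2),
    (1, 'left',  0.0, 0),
    (2, 'stay',  0.0, 2),
]
gamma = 0.9
Q = {(s, a): 0.0 for s, a, _, _ in F}

def max_q(s):
    vals = [q for (s2, _), q in Q.items() if s2 == s]
    return max(vals) if vals else 0.0  # states absent from the batch default to 0

for _ in range(50):  # sweeps use only the batch, never the real system
    Q = {(s, a): r + gamma * max_q(s2) for s, a, r, s2 in F}

policy = {s: max((a for (s2, a) in Q if s2 == s), key=lambda a: Q[(s, a)])
          for s in {s for s, _, _, _ in F}}
```

The inferred policy is only as good as the coverage of the batch, which is precisely where performance guarantees become delicate.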
Stable Feature Selection in Empty Spaces: Applications to Gene Profiling and Diagnosis from DNA Microarrays Thibault Helleputte, UCL
In many technological or industrial fields, the amount of high dimensional data is steadily growing. The number of dimensions is, however, often growing much faster than the number of points available. The field of genomics is a typical illustration of this trend. This setting makes many machine learning applications subject to the curse of dimensionality, making it difficult to estimate models that will generalize well.
New techniques have been designed, achieving highly sparse feature selection while allowing the estimation of models with good classification performance in a context where only a few points are available, and those points lie in a high dimensional space. They make use of adequate inductive biases, among which several means of regularization, to mitigate the lack of extra samples in that high dimensional setting. Those biases can rely either on internal information from the data only, taking many different "views" of the data (ensemble methods), or on the use of external extra information. This extra information can be either expert prior knowledge (from biologists, for example) or contained in other datasets about related tasks (transfer learning or multi-task learning).
All those methods have been tested over several gene expression microarray datasets for diagnosis and biomarker discovery tasks. Microarrays measure at once the rate of transcription (the expression) of thousands of genes into mRNA, the intermediate messengers between genes and protein production. Those datasets are typically made of a few tens of samples (patients) and thousands of dimensions (gene expression levels).
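The notion of selection *stability* can be illustrated on invented data (this is our own toy sketch, not the talk's method): select the k features with the largest class-mean difference on several subsamples, then measure how much the selected sets agree.

```python
import random

# Toy stability measurement on synthetic "few samples, many dimensions" data.
# All names and parameters here are illustrative assumptions.

random.seed(0)
n, d, k = 60, 100, 5
y = [i % 2 for i in range(n)]
# Features 0..4 carry signal (mean shift for class 1); the other 95 are noise.
X = [[random.gauss(1.5 * y[i] if j < 5 else 0.0, 1.0) for j in range(d)]
     for i in range(n)]

def top_k(rows):
    """Select the k features with the largest class-mean difference on rows."""
    def mean(j, c):
        vals = [X[i][j] for i in rows if y[i] == c]
        return sum(vals) / len(vals)
    scores = sorted(((abs(mean(j, 1) - mean(j, 0)), j) for j in range(d)),
                    reverse=True)
    return {j for _, j in scores[:k]}

def jaccard(a, b):
    return len(a & b) / len(a | b)

cls0 = [i for i in range(n) if y[i] == 0]
cls1 = [i for i in range(n) if y[i] == 1]
sets = [top_k(random.sample(cls0, 15) + random.sample(cls1, 15))
        for _ in range(5)]
stability = sum(jaccard(a, b) for i, a in enumerate(sets)
                for b in sets[i + 1:]) / 10  # mean pairwise overlap in [0, 1]
```

An unstable selector returns very different gene lists on each subsample, which undermines biomarker discovery even when classification accuracy is good; regularization and the other biases mentioned above aim to push this overlap towards 1.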
A Transformation-Based Approach to Context Aware Modeling Sylvain Degrandsart, UMONS
Context-aware computing is a paradigm for governing the numerous mobile devices surrounding us. In this computing paradigm, software applications continuously and dynamically adapt to different "contexts" implying different software configurations of such devices. Unfortunately, modeling a context-aware application for all possible contexts is only feasible in the simplest of cases. Hence, tool support for verifying certain properties is required. We develop the Context-Aware Application model (CAA), in which context adaptations are specified explicitly as model transformations. By mapping this model to graphs and graph transformations, we can exploit graph transformation techniques such as critical pair analysis to find contexts for which the resulting application model is ambiguous. We validate our approach through an example of a mobile city guide, demonstrating that we can identify subtle context interactions that might otherwise go unnoticed.
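The intuition behind critical pair analysis can be sketched with invented adaptation rules (a hypothetical example, not the CAA tooling): two rules conflict on a context when applying one disables the other, so the adapted model depends on application order and is ambiguous.

```python
# Hypothetical conflict detection between context-adaptation rules.
# A rule needs some configuration elements, adds some, and removes some.

def applicable(rule, config):
    return rule['needs'] <= config

def apply_rule(rule, config):
    return (config - rule['removes']) | rule['adds']

def conflict(r1, r2, config):
    """Both rules match `config`, but one disables the other."""
    if not (applicable(r1, config) and applicable(r2, config)):
        return False
    return (not applicable(r2, apply_rule(r1, config))
            or not applicable(r1, apply_rule(r2, config)))

# Invented rules for a mobile guide: indoor positioning disables GPS,
# while the low-battery adaptation still requires it.
indoor = {'needs': {'gps_weak'}, 'adds': {'wifi_pos'}, 'removes': {'gps'}}
battery = {'needs': {'low_batt', 'gps'}, 'adds': {'coarse_pos'}, 'removes': set()}
ctx = {'gps_weak', 'low_batt', 'gps'}
assert conflict(indoor, battery, ctx)  # order-dependent: the model is ambiguous
```

Critical pair analysis performs this kind of check symbolically over graph transformation rules, finding such contexts without enumerating them.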
Mendel: Source Code Recommendation based on a Genetic Metaphor Angela Lozano, Post-Doc UCL
When evolving or maintaining software systems, developers spend a considerable amount of time understanding existing source code. To successfully implement new or alter existing behavior, developers need to answer questions such as: “Which types and methods can I use to solve this task?”, “Should my implementation follow particular naming or structural conventions?”, “How is similar behavior implemented in the system?”. This presentation describes Mendel, a source code recommendation tool that aids developers in answering such questions. Based on the entity the developer currently browses, the tool employs a genetics-inspired metaphor to analyze source-code entities related to the current working context and provides its user with a number of recommended properties (naming conventions, used types, invoked messages, etc.) that the source code entity currently being worked on should exhibit. To validate our approach, we analyze to which extent Mendel is able to provide meaningful recommendations, by comparing our recommendations with the actual implementation of five open-source systems. The results seem to confirm the potential of our approach.
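The recommendation idea can be sketched on invented data (Mendel's actual analysis is more elaborate): properties shared by most entities related to the current working context, but missing from the entity being edited, become recommendations.

```python
from collections import Counter

# Hypothetical example: properties of entities "related" to the one the
# developer currently browses (all names are invented for illustration).
kin = [
    {'prefix:Test', 'extends:TestCase', 'calls:assertEquals'},
    {'prefix:Test', 'extends:TestCase', 'calls:setUp'},
    {'prefix:Test', 'extends:TestCase', 'calls:assertEquals'},
]
current = {'extends:TestCase'}  # properties the current entity already has

counts = Counter(p for props in kin for p in props)
threshold = 2 / 3  # recommend properties shared by at least 2/3 of the kin
recommended = sorted(p for p, c in counts.items()
                     if c / len(kin) >= threshold and p not in current)
# → ['calls:assertEquals', 'prefix:Test']
```

In the genetic metaphor, the frequent properties play the role of dominant traits the current entity is expected to inherit from its kin.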
Quality evaluation and improvement framework for database schemas Jonathan Lemaitre, FUNDP
The quality of data schemas is a critical concern, not only in the development of databases and information systems in general, but also in such subsequent processes as system maintenance, evolution and migration. The study of schema quality is often addressed in the literature by means of frameworks, many of them dedicated to the evaluation of schema quality while ignoring quality improvement issues. In our research, we are developing a methodology based on a framework dealing with schema quality through the identification and scoring of schema defects and their modification according to requirements related to usage and evolution. The original aspects of this framework are the use of classes of schema patterns that express the same semantics (through semantics-preserving transformations) and its parametrization to fit specific contexts such as data models, abstraction levels and quality criteria.
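The "identification and scoring of schema defects" step can be sketched with invented detection rules and weights (an illustration only, not the framework's actual catalogue): each detected defect contributes a weighted penalty, and transformations would then rewrite the flagged patterns into semantically equivalent, cleaner ones.

```python
# Toy defect detection on a dict-based schema description (hypothetical
# rules and weights; the real framework parametrizes these per context).

schema = {
    'person':  {'columns': ['id', 'name', 'name2'], 'primary_key': ['id']},
    'address': {'columns': ['street', 'city'],      'primary_key': []},
}

def defects(schema):
    found = []
    for table, t in schema.items():
        if not t['primary_key']:
            found.append(('missing_primary_key', table))
        if any(c.rstrip('0123456789') != c for c in t['columns']):
            # numbered columns (name, name2, ...) hint at a repeating group
            found.append(('numbered_columns', table))
    return found

weights = {'missing_primary_key': 3, 'numbered_columns': 2}
score = sum(weights[kind] for kind, _ in defects(schema))
# a higher score flags more improvement opportunities for this schema
```

Parametrizing the rule set and the weights is what lets the same machinery serve different data models, abstraction levels and quality criteria.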
Efficient in-Memory Object Graph Versioning Frédéric Pluquet, ULB
Object versioning refers to how an application can access previous states of its objects. Implementing this mechanism is hard because it needs to be efficient in space and time, and be well integrated with the programming language.
We present HistOOry, an object versioning system that uses an efficient data structure to store and retrieve past states. HistOOry supports three kinds of versioning: linear versioning (only the last version can be modified while old versions can only be browsed in a read-only mode), backtracking versioning (a set of versions can be deleted) and branching versioning (any version can be modified, resulting in a version tree with concurrent versions).
HistOOry needs only three primitives, and the existing code does not need to be modified to be versioned. It provides fine-grained control over what parts of objects are versioned and when. It stores all states, past and present, in memory. Code can be executed in the past of the system and will see the complete system as it was at that point in time.
We have implemented our model in Java and Smalltalk. We used it for several applications that need versioning, such as checked postconditions, stateful execution tracing and a planar point location implementation.
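The linear flavour of the idea can be sketched in a few lines (our own minimal illustration, not HistOOry's data structure): each field keeps a timestamped history, so reads can be executed "in the past".

```python
import bisect

# Minimal in-memory linear versioning: appends only at the end, reads at any
# past timestamp via binary search over the history.

class Versioned:
    def __init__(self):
        self._hist = {}          # field -> ([timestamps], [values])

    def write(self, field, value, now):
        ts, vs = self._hist.setdefault(field, ([], []))
        ts.append(now)
        vs.append(value)         # linear versioning: only the end is mutable

    def read(self, field, at):
        """Return the value of `field` as it was at time `at`."""
        ts, vs = self._hist[field]
        i = bisect.bisect_right(ts, at) - 1
        if i < 0:
            raise KeyError(f"{field} did not exist at time {at}")
        return vs[i]

p = Versioned()
p.write('x', 1, now=10)
p.write('x', 5, now=20)
assert p.read('x', at=15) == 1   # the object as it was at time 15
assert p.read('x', at=25) == 5
```

Backtracking and branching versioning extend this with deletion of version suffixes and with per-branch histories, respectively.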
List of Posters
- Model Based Testing of Software Product Lines
- a Better Exploitation of Heterogeneous Architectures (Multi-CPU/Multi-GPU) in Multimedia Processing Algorithms
- faster exact multiprocessor schedulability test for sporadic tasks
- Programming meets the Semantic Web, Castor: a specialized CP solver for
- for ACTL Model Checking
- Evolution in Dynamic Software Systems
- availability and authenticity for secure routing in MANETs
- Technique for AES Implementation on FPGA
- How to Secure Implementations Against Side-Channel Attack via Aspects
- Sustaining QoS in Web services using
- DRM scheme to protection of data in privacy-sensitive environments
- reconstruction of the spine from multi-planar radiographs
- Configuration Interfaces from Feature Models
- Management of graphic accelerators in virtualization
- Empirical Approach for Cognitively Effective Visual Symbols
- Steps Towards Two-level Mixtures of Markov Trees
- multi-task sequence labeling for predicting structural properties of proteins