
SoSyM Awards of Previous Years


2018 Reviewer awards

SoSyM appreciates the hard work and effort reviewers put into delivering the critical feedback that helps us evaluate author submissions. Scientific progress depends at its heart on the volunteer efforts of the research community. We would like to recognize the specific efforts of the following reviewers, who received the "2018 Best Reviewer Award":
  • Anthony Anjorin
  • Gabor Bergmann
  • Neil Ernst
  • Kathrin Figl
  • Mario Gleirscher
  • Paul Ralph
  • Manfred Reichert
  • Christoph Seidl
  • Stefan Stanciulescu
  • Eric Walkingshaw
  • Matthias Weidlich

2018 The ten year most influential regular paper award

... was selected by the Editors-in-Chief from the regular papers that had the most influence over the last ten years (2008 through 2017).
Certificate-Atkinson

2018 The ten year most influential theme section paper award

... was selected by the Editors-in-Chief from the theme section papers that had the most influence over the last ten years (2008 through 2017).
Certificate-Dingel

2018 Best paper awards

... go to the best four papers published in the selection period from July 2017 to June 2018 that have not been presented in any form before. The authors of these papers are also given the opportunity to present their most important findings in the "SoSyM Journal-First Papers" session at the MODELS 2018 conference.

Certificate-Petrenko
Certificate-Cremona
  • [CLB+17] Fabio Cremona, Marten Lohstroh, David Broman, Edward A. Lee, Michael Masin, Stavros Tripakis
    Hybrid co-simulation: it's about time
    In: Journal on Software and Systems Modeling (SoSyM), doi: 10.1007/s10270-017-0633-6, Springer. 2017.
Certificate-Ross
Certificate-Voelter

2017 The eight year most influential regular paper award

... was selected by the Editors-in-Chief from the regular papers that had the most influence over the last eight years (2009 through 2016).
Certificate-Harel

Abstract

Live Sequence Charts (LSC) extend Message Sequence Charts (MSC), mainly by distinguishing possible from necessary behavior. They thus enable the specification of rich multi-modal scenario-based properties, such as mandatory, possible and forbidden scenarios. The sequence diagrams of UML 2.0 enrich those of previous versions of UML by two new operators, assert and negate, for specifying required and forbidden behaviors, which appear to have been inspired by LSC. The UML 2.0 semantics of sequence diagrams, however, being based on pairs of valid and invalid sets of traces, is inadequate, and prevents the new operators from being used effectively.

We propose an extension of, and a different semantics for this UML language—Modal Sequence Diagrams (MSD)—based on the universal/existential modal semantics of LSC. In particular, in MSD assert and negate are really modalities, not operators. We define MSD as a UML 2.0 profile, thus paving the way to apply formal verification, synthesis, and scenario-based execution techniques from LSC to the mainstream UML standard.
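The universal/existential distinction at the heart of this modal semantics can be conveyed with a small illustration. The sketch below is not the formal LSC/MSD trace semantics; it treats a scenario simply as a message sequence that either occurs as a subsequence of a run or does not, and the scenario names and example runs are invented for the illustration.

# Toy illustration of the modalities, not the formal LSC/MSD semantics:
# a universal (assert-like) scenario must occur in every run, an existential
# (possible) scenario in at least one run, and a negated (forbidden) scenario
# in no run. "Occurs" here just means "appears as a subsequence of the run".

def occurs(scenario, run):
    it = iter(run)
    return all(message in it for message in scenario)

def check(runs, universal=(), existential=(), forbidden=()):
    return {
        "universal":   all(occurs(s, r) for s in universal for r in runs),
        "existential": all(any(occurs(s, r) for r in runs) for s in existential),
        "forbidden":   not any(occurs(s, r) for s in forbidden for r in runs),
    }

runs = [
    ["request", "authorize", "dispense"],
    ["request", "reject"],
]
print(check(
    runs,
    universal=[["request"]],                  # mandatory: every run contains a request
    existential=[["authorize", "dispense"]],  # possible: some run authorizes, then dispenses
    forbidden=[["reject", "dispense"]],       # forbidden: never dispense after a rejection
))  # {'universal': True, 'existential': True, 'forbidden': True}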


2017 The eight year most influential theme section paper award

... was selected by the Editors-in-Chief from the theme section papers that had the most influence over the last eight years (2009 through 2016).
Certificate-Winkler

Abstract

Traceability—the ability to follow the life of software artifacts—is a topic of great interest to software developers in general, and to requirements engineers and model-driven developers in particular. This article aims to bring those stakeholders together by providing an overview of the current state of traceability research and practice in both areas. As part of an extensive literature survey, we identify commonalities and differences in these areas and uncover several unresolved challenges which affect both domains. A good common foundation for further advances regarding these challenges appears to be a combination of the formal basis and the automated recording opportunities of MDD on the one hand, and the more holistic view of traceability in the requirements engineering domain on the other hand.


2017 Best paper awards

... go to the best six papers published in the selection period from July 2016 to June 2017 that have not been presented in any form before. The authors of these papers are also given the opportunity to present their most important findings in the "SoSyM Journal-First Papers" session at the MODELS 2017 conference.

Certificate-Voelter
Certificate-Tikhonova
Certificate-Ross
Certificate-Famelis
Certificate-Egea
Certificate-Hartmann

2016 The eight year most influential regular paper award

... was selected by the Editors-in-Chief from the regular papers that had the most influence over the last eight years (2008 through 2015).
Certificate-Aalst

Abstract

Motivation and original contribution: The most challenging problem in process mining remains the automated discovery of end-to-end process models from event logs. One of the essential problems in process mining is that one cannot assume to have seen all possible behavior. At best, one has seen a representative subset. Therefore, classical synthesis techniques are not suitable, as they aim at finding a model that is able to exactly reproduce the log (language equivalence). Process mining techniques typically try to avoid such "overfitting" by generalizing the model to allow for more behavior. This generalization is often driven by the representation language and very crude assumptions about completeness. As a result, parts of the model are "overfitting" (allow only what has actually been observed) while other parts may be "underfitting" (allow for much more behavior without strong support for it). The two-step approach proposed in this paper was the first to systematically control the balance between "overfitting" and "underfitting". First, using a highly configurable approach, a transition system is constructed. Then, using the "theory of regions", the model is synthesized. The approach was implemented in the context of ProM, and the first step in particular is still frequently used, also in conjunction with other synthesis approaches.
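To make the first step more concrete, here is a minimal sketch of how a transition system can be derived from an event log using a configurable prefix abstraction. It is only an illustration of the idea, not the ProM implementation; the toy log, the function names, and the particular abstraction (the set of the last k activities of a prefix) are assumptions chosen for brevity.

# Sketch only: derive a transition system from an event log using a
# configurable prefix abstraction (here: the set of the last k activities).
# Coarser or finer abstractions shift the balance between overfitting and
# underfitting. This is not the ProM implementation.

from collections import defaultdict

def state_of(prefix, k=2):
    """Abstract a trace prefix into a state: the set of its last k activities."""
    return frozenset(prefix[-k:])

def build_transition_system(log, k=2):
    """Return {state: {activity: set of successor states}} from an event log."""
    transitions = defaultdict(lambda: defaultdict(set))
    for trace in log:
        prefix = []
        for activity in trace:
            source = state_of(prefix, k)
            prefix.append(activity)
            transitions[source][activity].add(state_of(prefix, k))
    return {state: dict(outgoing) for state, outgoing in transitions.items()}

# Toy event log: each trace is a sequence of activity names.
log = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject", "archive"],
    ["register", "approve", "archive"],
]
for state, outgoing in build_transition_system(log).items():
    for activity, targets in outgoing.items():
        for target in targets:
            print(sorted(state), "--" + activity + "-->", sorted(target))

The second step, synthesizing a Petri net from this transition system via the theory of regions, is what controls how the observed behavior is generalized; it is deliberately omitted from this sketch.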

Enhancements: Our approach of first building a transition system based on a configurable abstraction (before discovering higher-level constructs involving concurrency) has become an integral part of many discovery approaches [1]. All state-based region approaches require such a step [2]. We (and many others) have also worked on language-based regions in the context of process discovery [3]. Here, one also needs to balance overfitting against underfitting [4]. Soon after the publication of our two-step approach, we used similar abstractions for time prediction [5]. Here, we again select a suitable abstraction, but use it to augment the model with time information learned from earlier process instances. The time-annotated model is then used to predict, for example, the completion time.
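The time prediction idea mentioned above can be sketched in the same style. The following is an illustrative toy example, not the approach of [5] itself; the data layout, function names, and the use of a simple mean are assumptions made for the sketch.

# Sketch only (not the implementation from [5]): annotate each abstract state
# with the remaining times observed in completed cases, then predict the
# remaining time of a running case as the mean over its current state.

from collections import defaultdict
from statistics import mean

def state_of(activities):
    # Same kind of configurable abstraction as before: set of the last 2 activities.
    return frozenset(activities[-2:])

def annotate_with_remaining_time(log):
    """log: list of traces, each a list of (activity, timestamp) pairs."""
    remaining = defaultdict(list)
    for trace in log:
        activities = [activity for activity, _ in trace]
        end_time = trace[-1][1]
        for i, (_, timestamp) in enumerate(trace):
            remaining[state_of(activities[:i + 1])].append(end_time - timestamp)
    return remaining

def predict_remaining_time(remaining, running_prefix):
    observations = remaining.get(state_of([a for a, _ in running_prefix]), [])
    return mean(observations) if observations else None

history = [
    [("register", 0), ("check", 2), ("approve", 5), ("archive", 6)],
    [("register", 0), ("check", 3), ("reject", 4), ("archive", 8)],
]
annotations = annotate_with_remaining_time(history)
print(predict_remaining_time(annotations, [("register", 0), ("check", 1)]))  # 4.5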

Future directions: Process discovery remains a challenging problem. Currently, the state of the art is formed by the so-called inductive mining approaches [1,6]. The advantage of these approaches is that they are scalable, can handle noise, infrequent behavior, and incompleteness, and also provide formal guarantees. However, there are still constructs that can be discovered using the classical two-step approach but not by the inductive mining techniques. Moreover, it seems that process discovery should also aim for hybrid models in which structured and unstructured parts can be combined. Future research should aim to bridge the gap between the informal models used in commercial process mining tools and the formal models that receive the lion's share of attention in academic research.

1. W.M.P. van der Aalst. Process Mining: Data Science in Action. Springer-Verlag, Berlin, 2016.

2. M. Solé and J. Carmona. Process Mining from a Basis of State Regions. Proceedings of the 31st International Conference on Applications and Theory of Petri Nets, volume 6128 of Lecture Notes in Computer Science, pages 226-245. Springer-Verlag, Berlin, 2010.

3. J.M.E.M. van der Werf, B.F. van Dongen, C.A.J. Hurkens, and A. Serebrenik. Process Discovery Using Integer Linear Programming. Fundamenta Informaticae, 94 (3-4): 387-412. 2009.

4. S.J. van Zelst, B.F. van Dongen, and W.M.P. van der Aalst. Avoiding Over-Fitting in ILP-Based Process Discovery. In H.R. Motahari-Nezhad, J. Recker, and M. Weidlich, editors, International Conference on Business Process Management (BPM 2015), volume 9253 of Lecture Notes in Computer Science, pages 163-171. Springer-Verlag, Berlin, 2015.

5. W.M.P. van der Aalst, M.H. Schonenberg, and M. Song. Time Prediction Based on Process Mining. Information Systems, 36(2):450-475, 2011.

6. S.J.J. Leemans, D. Fahland, and W.M.P. van der Aalst. Scalable Process Discovery and Conformance Checking. Software and Systems Modeling, 2016.


2016 The eight year most influential theme section paper award

... was selected by the Editors-in-Chief from the theme section papers that had the most influence over the last eight years (2008 through 2015).
Certificate-Anastasakis

Abstract

In this paper we proposed a formalization of UML with the help of Alloy and presented a mapping from the UML Class Diagram and OCL metamodels to the Alloy metamodel. Due to the fundamental differences between UML and Alloy, the semantics of certain UML metamodel elements differs from the semantics of Alloy metamodel elements. Some of the challenges arising from these differences are explored in the article. Additionally, a UML profile for Alloy was introduced to forbid the definition of UML models that cannot be transformed to Alloy and to provide execution semantics for UML. The transformation rules are implemented in a tool called UML2Alloy. The tool allows the automated transformation of UML Class Diagrams that conform to our profile, along with OCL constraints, into Alloy.

The transformation rules were used to validate whether models of Role-Based Access Control (RBAC) systems satisfy certain properties. We later used Alloy to analyze dynamic aspects of UML models in addition to static aspects. In particular, UML Sequence Diagrams were used to model the dynamic aspects of systems. The work was also used in the analysis of models of secure applications, which showed that SSL is susceptible to a man-in-the-middle attack if an active attacker can participate in an SSL protocol handshake that uses minimal certificate checking.
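As a rough illustration of the kind of mapping described above, the sketch below turns a tiny class-diagram fragment into Alloy signature declarations. It is not the actual UML2Alloy transformation rules: the input data structure, the multiplicity keywords chosen, and the example classes are assumptions made for the illustration, and OCL constraints (which would map to Alloy facts and predicates) are left out entirely.

# Toy sketch, not the UML2Alloy rules: map classes, generalizations and
# attribute-like associations to Alloy signatures and fields.

classes = {
    "User":  {"extends": None,   "fields": {"roles": ("Role", "set")}},
    "Admin": {"extends": "User", "fields": {}},
    "Role":  {"extends": None,   "fields": {"parent": ("Role", "lone")}},
}

def to_alloy(classes):
    sigs = []
    for name, spec in classes.items():
        header = "sig " + name
        if spec["extends"]:
            header += " extends " + spec["extends"]
        body = ",\n".join("  " + field_name + ": " + mult + " " + target
                          for field_name, (target, mult) in spec["fields"].items())
        sigs.append((header + " {\n" + body + "\n}") if body else (header + " {}"))
    return "\n\n".join(sigs)

print(to_alloy(classes))
# Prints, for example:
# sig User {
#   roles: set Role
# }
# sig Admin extends User {}
# ...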


2016 Best paper awards

... go to the best four papers published in the selection period from July 2015 to June 2016 that have not been presented in any form before. The authors of these papers are also given the opportunity to present their most important findings in the "SoSyM Journal-First Papers" session at the MODELS 2016 conference.

Certificate-Kalenkova
Certificate-Ivanchikj
Certificate-Acretoaie
Certificate-Falkner

2015 Best paper awards

... go to the best four papers published in the selection period from January 2014 to June 2015 that have not been presented in any form before. The authors of these papers are also given the opportunity to present their most important findings in the "SoSyM Journal-First Papers" session at the MODELS 2015 conference.

Certificate-Rago
Certificate-Eichler
Certificate-Farwick
Certificate-Song

2015 The eight year most influential regular paper award

... was selected by the Editors-in-Chief from the regular papers that had the most influence over the last eight years (2007 through 2014).
Certificate-Immonen

Abstract

Motivation and original contribution: Today's software systems are complex and distributed, as users' needs have become more demanding, especially due to the growth in mobile devices and ubiquitous computing environments. These systems must provide a variety of services for users in their everyday life. Reliability of these services becomes highly important, as they are expected to work correctly and be available on demand. The traditional reliability views and measures do not scale up to the needs placed on today's complex systems, which involve several stakeholders. Reliability and availability must be approached from a global perspective, starting with the collection of the stakeholders' requirements. The software architecture design phase is the first stage of software development in which it is possible to evaluate reliability and availability. We provide a comparison of the existing analysis methods and techniques that can be used for reliability and availability prediction at the architectural level [1]. We identify suitable methods, together with their major shortcomings and development targets. For the comparison, we define a framework that captures the required characteristics of analysis methods from the context, user, method content and evaluation perspectives, ultimately enabling the selection of the most suitable method for architectural analysis. The comparison framework can be extended and used for other evaluation methods, as well as adapted to future directions.

Enhancements: We have continued our work on reliability prediction and evaluation in several contexts. For software product families, we provide the QRF (Quality Requirements of software Families) method for managing the variability of quality properties and for transforming quality requirements into architectural models [2]. Our integrated evaluation approach for service-oriented systems enhances reliability evaluation at the component and system levels and is supported by a tool chain [3]. We provide a knowledge-based quality modeling and evaluation approach for model-based engineering that enables a semi-automatic software engineering design flow, from quality requirements specification to measuring quality aspects of the source code [4]. For dynamic service compositions, we have developed a set of quality ontologies and an adaptation loop to monitor, analyse, plan and execute changes in service systems, allowing the service systems to adapt their quality properties at run-time [5]. We have also defined the phases required in composite service design and execution to achieve reliable composite services, and surveyed the current status of research on reliable composite service engineering [6].

Future directions: Our future directions include 1) knowledge-based service engineering in digital service ecosystems, and 2) applying autonomic computing to digital services and service ecosystems.

1. Immonen, Anne; Niemelä, Eila. 2007. Survey of reliability and availability prediction methods from the viewpoint of software architecture. Software and Systems Modeling, Vol. 7, No. 1, February 2008, pp. 49-65. http://www.springerlink.com/content/h2604n4724716445

2. Niemelä, Eila; Immonen, Anne. 2007. Capturing quality requirements of product family architecture. Information and Software Technology. 49 (11-12), 2007, pp. 1107-1120. doi:10.1016/j.infsof.2006.11.003.

3. Palviainen, M., Evesti, A., Ovaska, E. 2011. Reliability Estimation, Prediction and Measuring of Component Based Software, Journal of Systems and Software, vol. 84 (6), pp. 1054-1070, doi:10.1016/j.jss.2011.01.048

4. Ovaska, E. Evesti, A., Henttonen, K., Palviainen, M., Aho, P. 2010. Knowledge based quality-driven architecture design and evaluation. Information and Software Technology. Elsevier, 52(6), pp. 577-601, doi:10.1016/j.infsof.2009.11.008

5. Pantsar-Syväniemi, S.; Purhonen, A.; Ovaska, E.; Kuusijärvi, J.; Evesti, A. 2012. Situation-based and self-adaptive applications for the smart environment. Journal of Ambient Intelligence and Smart Environments, Vol. 4, Nr. 6, pp. 491-516. doi:10.3233/AIS-2012-0179

6. Immonen, A., Pakkala, D. 2014. A survey of methods and approaches for reliable dynamic service compositions, Service Oriented Computing and Applications, Volume 8, Issue 2, pp. 129-158, doi: 10.1007/s11761-013-0153-3


2015 The eight year most influential theme section paper award

... was selected by the Editors-in-Chief from the theme section papers that had the most influence over the last eight years (2007 through 2014).
Certificate-Mens

Abstract

This article focuses on the formal foundations of software refactoring, through the use of graph transformations. Refactoring aims to improve the internal structural quality of software code in an automated way. The idea of using graph transformation theory for formalising refactorings had already been explored in earlier work. The novelty of the SoSyM article resided in looking at the problem at a higher level of granularity. Often, the software refactoring process involves a combination of multiple individual refactorings, each of which can be encoded as a graph transformation. To check the applicability of such complex refactorings, one needs to analyse the interrelationships and interdependencies between these transformations. The formal technique of critical pair analysis, implemented in the AGG graph transformation tool, proved well suited to analysing such dependencies. We used it to propose an incremental resolution process for model inconsistencies. We also implemented our ideas in the popular Eclipse software development environment, by making use of the Eclipse Modeling Framework (EMF). This resulted in a model-driven tool called EMF Refactor, which supports model quality analysis through the use of model quality metrics, model smells and model refactorings. We also formally analysed EMF model transformations by means of algebraic graph transformation theory, and support for conflict and dependency analysis was recently added to the EMF tool suite as an analysis technique for the model transformation language Henshin.

In parallel to the use of graph transformation as a formal foundation for model refactoring and model inconsistency management, we investigated other appropriate formalisms. For example, we explored the use of description logics for model refactoring and, more recently, the use of automated regression planning for resolving model inconsistencies.
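To give a flavor of the kind of dependency the article analyses, the sketch below checks two refactoring applications for a delete-use conflict. It is only an illustration: real critical pair analysis, as performed by AGG or Henshin, works on minimal overlaps of transformation rules rather than on concrete applications, and the rule names and element identifiers here are invented.

# Illustration only, not the AGG/Henshin critical pair machinery: two rule
# applications conflict (delete-use conflict) if one deletes a model element
# that the other reads or deletes.

from dataclasses import dataclass, field

@dataclass
class RuleApplication:
    name: str
    reads: set = field(default_factory=set)
    deletes: set = field(default_factory=set)

def delete_use_conflicts(a, b):
    """Return the model elements over which the two applications conflict."""
    return (a.deletes & (b.reads | b.deletes)) | (b.deletes & (a.reads | a.deletes))

pull_up = RuleApplication("PullUpMethod",
                          reads={"Class:Vehicle", "Class:Car"},
                          deletes={"Method:Car.start"})
rename = RuleApplication("RenameMethod",
                         reads={"Method:Car.start"})
print(delete_use_conflicts(pull_up, rename))  # {'Method:Car.start'}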