Alexander Wolf (Imperial College) talked about the automation of experiments on large-scale systems (computer science experiments such as investigating properties of distributed hash tables, or of web applications under different workloads). The challenges in experimenting with such systems are: how to generate realistic workloads, how to ensure that the experiments are repeatable, and how to design the actual experiments (e.g. which properties to measure and how to measure them). Alex then introduced the Weevil tool, which addresses some of these challenges; this will be discussed in more detail in tomorrow's lecture.
Thursday, 23 September 2010
ULSS Doctoral School - Day 4
Carlo Ghezzi (Politecnico di Milano) talked about adaptive, evolvable systems (where adaptation refers to the ability of software to detect changes and react in a self-managed manner, while evolution requires the intervention of a designer). He started with a brief history of early software engineering approaches, which did not follow a precisely formulated process and assumed that organizations are monolithic and stable (so change is avoided). This assumption is usually incorrect, and so maintenance is required. Traditionally, maintenance has been performed offline, and can be corrective maintenance to fix bugs, adaptive maintenance to satisfy changes in the environment, or perfective maintenance to satisfy changes in the requirements. Adaptive systems attempt to reduce the maintenance effort by modelling and reasoning about a system's goals, requirements and assumptions (I guess the underlying idea is related to autonomic computing). One approach that could be taken is to use inheritance (in the object-oriented sense) together with dynamic code generation to create code that extends existing classes. As long as the method interfaces do not change, a method body can be dynamically changed at run-time using inheritance and polymorphism.
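A minimal Python sketch of that last idea (all names here are illustrative, not from the lecture): a new subclass with a replacement method body is generated at run-time, while clients keep seeing the original interface.

```python
class Sorter:
    """Original component with a fixed interface: sort(items)."""
    def sort(self, items):
        return sorted(items)  # baseline behaviour: ascending order

def adapt(base_cls, method_name, new_body):
    """Dynamically generate a subclass of base_cls with method_name replaced.

    The method's name and signature are preserved; only its body changes,
    so polymorphism lets the adapted object stand in for the original.
    """
    return type(base_cls.__name__ + "Adapted", (base_cls,),
                {method_name: new_body})

# A replacement body, e.g. adapting to an environment that now
# requires descending order.
def sort_desc(self, items):
    return sorted(items, reverse=True)

AdaptedSorter = adapt(Sorter, "sort", sort_desc)

component = AdaptedSorter()
print(component.sort([3, 1, 2]))      # [3, 2, 1]
print(isinstance(component, Sorter))  # True: same interface as before
```

The key point is that the adaptation happens without touching the original class: the generated subclass overrides only the behaviour, so any client coded against `Sorter` keeps working.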