Tuesday 5 July 2011

IEEE CLOUD 2011 Conference, Day 1

Conference Opening Session
The IEEE Computer Society President, Sorel Reisman, opened the conference and provided an overview of the IEEE's cloud computing initiative. In addition to its ongoing conferences and publications, the IEEE is going to focus on standards for the cloud. An interesting announcement was made just before the first keynote: from next year, the conference will have a "Journal" track, where authors can submit 14-page papers. The idea of a conference with a "Journal" track sounds a bit strange, but let's see how it turns out…

Keynote 1: Data, Data, Data: The Core of Cloud/Services Computing
Peter Chen, Louisiana State University (LSU) & Carnegie-Mellon University (CMU)
Peter started by putting up the Wikipedia definition of cloud computing; only a few people in the audience agreed with it, so he went on to describe the high-level pros and cons of the cloud.

His keynote argued that thinking about the cloud from a computational viewpoint is wrong, and that we should instead take a data viewpoint. We should think about data-explosion problems and view clouds as data warehouses, not just compute-cycle generators. The research questions here are: how do we store such large amounts of data? How do we retrieve it efficiently? How should data security be managed? And how should data be preserved for long-term archival purposes?

Peter concluded by stressing that the ultimate vision of cloud computing should be an "information utility", which he defined as:
Anybody should be able to get any information (based on access rights), organised in any specified presentation form, in any place, at any time, on any device, in a timely manner, at reasonable cost (or free).


Presentations

Decision Support Tools for Cloud Migration in the Enterprise
Ali Khajeh-Hosseini, Ian Sommerville, Jurgen Bogaerts, Pradeep Teregowda
My talk went well, apart from me bumping into a 2-meter IEEE logo and tipping it behind the projector screen (this classic clip has been recorded and will probably find its way onto YouTube).

The main questions from my talk were cost-related. Someone asked if the Cost Modelling Tool can be used for private clouds; the answer is yes, we can add pricing models for private clouds alongside public clouds. Another person asked whether we can use the tool to study hybrid cloud deployments; again the answer is yes, we can define different groups in a model, where one group uses a public-cloud option and another a private-cloud option, and study the overall hybrid costs.
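To make the hybrid-cost idea concrete, here is a minimal sketch of my own (the group names, VM counts, and prices are hypothetical, and this is not the Cost Modelling Tool's actual data model):

```python
# Minimal sketch of costing a hybrid deployment by groups.
# All pricing figures and group definitions are hypothetical.

HOURS_PER_MONTH = 730

groups = [
    # (name, number of VMs, hourly price, pricing model)
    ("web-tier", 10, 0.12, "public cloud on-demand"),
    ("database",  4, 0.45, "private cloud amortised"),
]

total = 0.0
for name, vms, hourly_price, model in groups:
    monthly = vms * hourly_price * HOURS_PER_MONTH
    total += monthly
    print(f"{name:10s} ({model}): ${monthly:,.2f}/month")

print(f"hybrid total: ${total:,.2f}/month")
```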

A non-cost-related question was whether our premise that "cloud = organisational change" also holds for enterprises that have already experienced IT outsourcing, since for them migrating to the cloud might be simpler: they have already gone through some of the risk-assessment exercises that are relevant to cloud migration. The organisations we've worked with so far have not raised this issue, and it would be interesting to do cloud migration case studies with organisations that have IT outsourcing experience.

MADMAC: Multiple Attribute Decision Methodology for Adoption of Clouds
Prasad Saripalli, Gopal Pingali (IBM T.J. Watson Research, USA)
Prasad talked about IBM's Multi-Attribute Decision-Making (MADM) based approach to helping CIOs make rational decisions when migrating IT systems to the cloud. The decision area is a choice of public/private IaaS/PaaS/SaaS clouds; once a platform has been selected, a vendor needs to be chosen from that category (although Prasad did not present this part of the research, as IBM would obviously be biased and recommend IBM's cloud).

A brief overview of MADMAC: given an existing system and a set of migration options (one of which is simply to do nothing and keep the legacy system), they ask an expert, or a group of experts, to weight the importance of each decision attribute (e.g. cost, security), and each option is then scored against each attribute. If the attribute under investigation were cost, the value would be the cost of that option; if it were latency, the value would be the latency of that option in milliseconds. For attributes that are not easily measured (e.g. security), they ask the expert to pick from a Likert scale. The weighted sum of attribute values is then calculated for each option to judge which one is best for that system. I asked how they handle the socio-technical aspects of migration decisions, such as workplace politics or the hidden agendas of IT managers, and Prasad said that they hold a meeting with the group of stakeholders and use the Wide-Band Delphi method to arrive at a consensus after several iterations.
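To illustrate the weighted-sum idea, here is a minimal sketch of my own (the attribute weights, options, and normalised scores are all invented, not taken from the paper):

```python
# Simple additive weighting over migration options.
# Weights and scores are invented for illustration; in MADMAC they
# would come from experts (measured values or Likert-scale ratings).

weights = {"cost": 0.4, "security": 0.35, "latency": 0.25}

# Each option's score per attribute, normalised to [0, 1] where
# higher is better (so cheap/secure/fast options score high).
options = {
    "keep legacy system": {"cost": 0.5, "security": 0.9, "latency": 0.8},
    "public IaaS":        {"cost": 0.9, "security": 0.5, "latency": 0.6},
    "private PaaS":       {"cost": 0.6, "security": 0.8, "latency": 0.7},
}

ranked = sorted(
    options.items(),
    key=lambda kv: sum(weights[a] * v for a, v in kv[1].items()),
    reverse=True,
)
for name, scores in ranked:
    total = sum(weights[a] * v for a, v in scores.items())
    print(f"{name:20s} weighted score = {total:.3f}")
```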

Cost-wait Trade-offs in Client-side Resource Provisioning with Elastic Clouds
Stéphane Genaud, Julien Gossa (Université de Strasbourg)
Stéphane described their work on the following problem: given a stream of requests (or jobs), when should a new VM be started to serve a request, and when should an existing VM be reused? The cheapest option is to have one VM serve all requests, while the fastest is to start a new VM for each request that comes in when no VMs are free. They studied the cost vs. performance optimisation using a bin-packing algorithm, evaluating different strategies (first-fit, best-fit, worst-fit), and found a very small difference (a few percent) between the cost savings of the studied strategies. As he acknowledged, they did not consider different instance types, nor the memory and storage I/O requirements of the requests and their effect on performance.
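To make the cost side of the trade-off concrete, here is a minimal first-fit sketch under my own assumptions (hour-granularity billing and jobs with known durations; none of this reflects the paper's actual experimental setup):

```python
# First-fit assignment of jobs to VMs billed by the hour.
# All parameters are illustrative; the paper also evaluates
# best-fit and worst-fit variants.

import math

BILLING_UNIT = 60  # minutes per billed hour

def first_fit(jobs):
    """jobs: list of (arrival_minute, duration_minutes)."""
    vms = []  # each VM tracked as [start, busy_until]
    for arrival, duration in sorted(jobs):
        for vm in vms:
            if vm[1] <= arrival:   # an idle VM exists: reuse it (cheap)
                vm[1] = arrival + duration
                break
        else:                      # no free VM: start a new one (fast, costly)
            vms.append([arrival, arrival + duration])
    # cost = billed hours across the lifetimes of all VMs
    return sum(math.ceil((end - start) / BILLING_UNIT) for start, end in vms)

jobs = [(0, 50), (10, 30), (70, 45)]
print("billed hours:", first_fit(jobs))  # -> 3
```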

Real Time Collaborative Video Annotation using Google App Engine and XMPP Protocol
Abbas Attarwala, Deepak Jagdish, Ute Fischer (University of Toronto, Canada; Nokia Research Center; Georgia Tech, USA)
Abbas gave a technical overview of their video annotation application that is deployed on Google AppEngine.

DIaaS: Data Integrity as a Service in the Cloud
Surya Nepal, Shiping Chen, Jinhui Yao, Danan Thilakanathan (CSIRO ICT Centre, Australia)
I missed the main talk but caught the end of the questions, and it was pretty intense. The questioner argued that the paper's contributions were irrelevant to cloud computing as they dealt with networking and data-transfer issues, which raises the question: what counts as relevant research in cloud computing?

A Home Healthcare System in the Cloud – Addressing Security and Privacy Challenges
Mina Deng, Milan Petkovic, Marco Nalin, Ilaria Baroni (Philips Research Europe, The Netherlands; Scientific Institute Hospital San Raffaele, Italy)
Mina talked about the TClouds project, an EU project with a €10 million budget that started in Oct 2010 and runs until Oct 2013. The project aims to architect internet-scale ICT infrastructures for different business domains. Mina's talk focused on the healthcare domain, where Philips is one of the industry partners. Philips' healthcare monitoring devices, currently wrist-watch-like devices, are being used to collect health data, and the group has set up a private cloud (based on CloudStack) to store and process the data.

Efficient Bidding for Virtual Machine Instances in Clouds
Sharrukh Zaman, Daniel Grosu (Wayne State University, USA)
I missed the main part of this talk but spoke to Sharrukh afterwards. He described their research into alternative markets for IaaS clouds. An interesting question asked at the end of the talk was: why would providers have markets for computing resources? Markets are for scarce resources; if resources are plentiful, why would cloud providers need to develop such complicated market mechanisms rather than keep the simple pricing models they currently have? Sharrukh argued that although these market mechanisms might not be needed now, they are likely to be needed in the future when demand for clouds increases.

Multi-Dimensional SLA-Based Resource Allocation for Multi-Tier Cloud Computing Systems
Hadi Goudarzi, Massoud Pedram (University of Southern California, USA)
Hadi talked about their resource allocation model, which takes into account the heterogeneous nature of datacenters and the operational cost of servers (a fixed cost when a server is on, plus a cost proportional to CPU utilisation, which corresponds to energy use). They are interested in the placement of applications in IaaS clouds where SLAs have to be considered to ensure that performance metrics are not violated.
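As I understood it, the per-server cost is roughly of the form sketched below (my own paraphrase with invented coefficients, not the paper's exact formulation):

```python
# Illustrative server operating-cost model: a fixed cost while the
# server is powered on, plus a term proportional to CPU utilisation
# (the energy-related component). Coefficients are invented.

def server_cost_per_hour(powered_on: bool, cpu_utilisation: float,
                         fixed=0.10, per_util=0.08) -> float:
    if not powered_on:
        return 0.0
    return fixed + per_util * cpu_utilisation  # cpu_utilisation in [0, 1]

print(server_cost_per_hour(True, 0.75))  # -> 0.16
```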

Modelling Contract Management for Cloud Services
Mario A. Bochicchio, Antonella Longo (University of Salento, Italy)
Antonella argued that public clouds are currently black boxes: you don't know much about their location, security levels, who has access to them, etc. This is not very comforting to businesses that want to use clouds. The aim of this research is to develop a tool to support contract management between providers and consumers. I guess their underlying assumption is that cloud providers will offer tailored contracts, but I have not come across a public cloud provider that does this. Antonella presented some preliminary work on the requirements for such a tool and an information model of the data it would need to capture.

Panel: Science of Cloud Computing
Co-Moderators: Ling Liu, Georgia Institute of Technology, USA
Manish Parashar, Rutgers University, USA
Panelists: Geoffrey Charles Fox, Indiana University, USA
Robert Grossman, University of Chicago, USA
Jean-Francois Huard, CTO, Netuitive, Inc.
Vanish Talwar, HP Labs, USA

This panel discussed some of the main research challenges for cloud computing. Each panelist was asked to give a 10-minute talk on what they consider important for cloud research, followed by an open-floor discussion of the viewpoints and research challenges. Some of the challenges discussed by the panel were:
- long-term data preservation (e.g. for scientific experiments)
- management of large-scale datacenters (e.g. over 1M servers)
- simple and elegant languages and platforms for large-scale science simulations (e.g. something better than High-Performance Fortran)
- training students for application development in the cloud (e.g. distributed and parallel algorithms that can deal with node failures)
