Sunday 10 July 2011

IEEE CLOUD 2011 Conference, Day 4

Day 4 included a panel, a keynote and a number of research presentations:

Panel: Opportunities of Services Business in Cloud Age
This panel was a bit mixed; the speakers mostly talked about their own research/industry projects and interests, and I don’t think anything new was said here. One of the panelists argued for standards in the cloud and mentioned that there are 45 groups working on this. He pointed out the following two IEEE standards working groups where cloud standards are being developed:
P2301 - Guide for Cloud Portability and Interoperability Profiles (CPIP)
P2302 - Standard for Intercloud Interoperability and Federation (SIIF)

Keynote: Web Services in the Scientific Wilds
Carole Goble, University of Manchester, UK
Carole discussed the use of web services in the sciences (in particular the biological sciences). Most scientists are not trained software engineers and have a hacking attitude towards software development. This, among other reasons, has led to a mess of services emerging from these fields: different data formats; services that are thin wrappers around command-line tools; inconsistent APIs; etc.

Carole also predicted “the death of scientific papers”: scientists would instead publish web services exposing their experiments, data, algorithms, etc. She pointed to the idea of Executable Journals and the use of VMs to package papers. For this to happen, research funding bodies should give scientists credit for providing web services that are used by the scientific community, not just for publications.

Towards Pay-As-You-Consume Cloud Computing
Shadi Ibrahim, Bingsheng He, Hai Jin (Huazhong University of Science and Technology, China; Nanyang Technological University, Singapore)
Our case studies demonstrate significant variations in user costs, indicating significant unfairness among different users from a micro-economic perspective. Further studies reveal that the reason for these variations is interference among concurrent virtual machines. The cost of interference depends on various factors, including workload characteristics, the number of concurrent VMs, and scheduling in the cloud. In this paper, we adopt the concept of pricing fairness from micro-economics and quantitatively analyze the impact of interference on pricing fairness. To resolve the unfairness caused by interference, we propose a pay-as-you-consume pricing scheme, which charges users according to their effective resource consumption, excluding interference. The key idea behind the pay-as-you-consume pricing scheme is a machine-learning-based prediction model of the relative cost of interference.
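
To make the idea concrete, here is a toy sketch of how such a pricing scheme might look. The predictor, its features, and all the numbers are my own assumptions, not the authors’ implementation:

```python
# Sketch of a pay-as-you-consume charge (my reading of the paper's idea).
# A learned model predicts a VM's slowdown factor due to interference from
# co-located VMs; the user is billed only for the effective,
# interference-free time.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: (number of co-located VMs, I/O intensity,
# CPU intensity) -> observed slowdown factor (measured_time / solo_time).
X_train = np.array([[1, 0.2, 0.8], [4, 0.7, 0.5], [8, 0.9, 0.3]])
y_train = np.array([1.02, 1.35, 1.80])

model = LinearRegression().fit(X_train, y_train)

def effective_charge(measured_hours, features, hourly_rate):
    """Charge for interference-free time only."""
    slowdown = max(1.0, float(model.predict([features])[0]))
    return hourly_rate * measured_hours / slowdown

# 10 hours at $0.10/h, run alongside 4 noisy neighbors:
print(effective_charge(10.0, [4, 0.7, 0.5], 0.10))  # ~0.74 instead of 1.00
```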

Price Heuristics for Highly Efficient Profit Optimization of Service Composition
Xianzhi Wang, Zhongjie Wang, Xiaofei Xu (Harbin Institute of Technology, China)
As the de facto provider of composite services, the broker charges the consumers; on the other hand, it pays the providers whose services are involved in the composite services. Besides the traditional quality-oriented optimization from the consumers’ point of view, the profit that a broker can earn from the composition is another objective to be optimized. But just like quality optimization, service selection for profit optimization suffers from a dramatic decline in efficiency as the number of candidate services grows. On the premise that the expected quality is guaranteed, this paper presents a “divide and select” approach for high-efficiency profit optimization, with price as the heuristic. This approach can be applied to both static and dynamic pricing scenarios of service composition. Experiments demonstrate the feasibility of the approach.
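
The actual “divide and select” algorithm is more elaborate than this, but a greedy toy version conveys the flavor of using price as the heuristic. The quality floor, the candidate sets, and the numbers below are mine:

```python
# Simplified price-guided selection sketch (not the paper's algorithm).
# For each task in the composition, candidates are first pruned by a
# quality threshold, then the cheapest survivor is chosen; since the
# consumer price is fixed, minimizing provider cost maximizes broker profit.

def select_services(tasks, consumer_price, quality_floor=0.9):
    total_cost = 0.0
    plan = {}
    for task, candidates in tasks.items():
        # candidates: list of (provider, cost, quality)
        feasible = [c for c in candidates if c[2] >= quality_floor]
        provider, cost, _ = min(feasible, key=lambda c: c[1])  # price heuristic
        plan[task] = provider
        total_cost += cost
    return plan, consumer_price - total_cost  # (selection, broker profit)

tasks = {
    "flight": [("A", 30.0, 0.95), ("B", 25.0, 0.92), ("C", 20.0, 0.80)],
    "hotel":  [("D", 50.0, 0.97), ("E", 45.0, 0.91)],
}
print(select_services(tasks, 120.0))  # ({'flight': 'B', 'hotel': 'E'}, 50.0)
```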

Differentiated Service Pricing on Social Networks Using Stochastic Optimization
Alexei A. Gaivoronski, Denis Becker (Norwegian University of Science and Technology, Norway)
This paper develops a combined simulation and optimization model that makes it possible to optimize different service pricing strategies defined on social networks under uncertainty. As a reference problem, we consider a telecom service provider whose customers are connected in such a network. Besides the service price, the acceptance of the service by a given customer depends on its popularity among the customer’s neighbors in the network. One strategy the service provider can pursue in this situation is to stimulate demand by offering price incentives to the most connected customers, whose opinion can influence many other participants in the social network. We develop a simulation model of such a social network and show how it can be integrated with stochastic optimization to obtain the optimal pricing strategy.
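
A toy version of the setup might look like the following sketch. The adoption model, the hub discount, and the network are all my own assumptions, not the paper’s:

```python
# Toy simulation of price incentives on a social network: adoption
# probability rises with the fraction of adopting neighbors and falls with
# price; the provider discounts the price for the best-connected node and
# averages revenue over many stochastic runs.
import random
random.seed(1)

BASE_PRICE = 10.0

def simulate(adjacency, price, rounds=20):
    """One stochastic adoption run; returns total revenue."""
    adopted = set()
    for _ in range(rounds):
        for node, nbrs in adjacency.items():
            if node in adopted:
                continue
            social = sum(n in adopted for n in nbrs) / max(len(nbrs), 1)
            p_adopt = 0.05 + 0.6 * social - 0.3 * (price[node] / BASE_PRICE - 1.0)
            if random.random() < p_adopt:
                adopted.add(node)
    return sum(price[n] for n in adopted)

# a small network with node 0 as the hub
adjacency = {0: [1, 2, 3, 4, 5], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4],
             4: [0, 3, 5], 5: [0, 4]}
hubs = sorted(adjacency, key=lambda n: -len(adjacency[n]))[:1]

for discount in (0.0, 0.5):
    price = {n: BASE_PRICE * (1 - discount) if n in hubs else BASE_PRICE
             for n in adjacency}
    revenue = sum(simulate(adjacency, price) for _ in range(200)) / 200
    print(f"hub discount {discount:.0%}: expected revenue {revenue:.1f}")
```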

Energy-efficient Management of Virtual Machines in Eucalyptus
Pablo Graubner, Matthias Schmidt, Bernd Freisleben (University of Marburg, Germany)
In this paper, an approach for improving the energy efficiency of infrastructure-as-a-service clouds is presented. The approach is based on performing live migrations of virtual machines to save energy. In contrast to related work, the energy costs of live migrations, including their pre- and post-processing phases, are taken into account, and the approach has been implemented in the Eucalyptus open-source cloud computing system by efficiently combining a multi-layered file system and distributed replication block devices. To evaluate the proposed approach, several short- and long-term tests based on virtual machine workloads produced with common operating system benchmarks, web-server emulations, and different MapReduce applications have been conducted.
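
The core decision, as I read it, is that a migration only pays off when the energy it saves exceeds its full cost, pre- and post-processing included. A back-of-the-envelope sketch, with all figures hypothetical:

```python
# Energy-aware migration decision (my formulation, not the authors' code):
# migrate a VM off an under-utilized host only if the energy saved by
# powering that host down exceeds the full cost of the live migration,
# including its pre- and post-processing phases.

def worth_migrating(host_idle_watts, expected_off_seconds,
                    pre_joules, copy_joules, post_joules):
    saved = host_idle_watts * expected_off_seconds      # joules not burned
    migration_cost = pre_joules + copy_joules + post_joules
    return saved > migration_cost

# A 100 W idle host expected to stay off for 10 minutes, versus a
# migration costing 20 kJ end to end:
print(worth_migrating(100, 600, 5_000, 12_000, 3_000))  # True (60 kJ > 20 kJ)
```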

Exploiting Spatio-Temporal Tradeoffs for Energy-aware MapReduce in the Cloud
Michael Cardosa, Aameek Singh, Himabindu Pucha, Abhishek Chandra (University of Minnesota; IBM Research, Almaden, USA)
MapReduce is a distributed computing paradigm widely used for building large-scale data processing applications. When used in cloud environments, MapReduce clusters are dynamically created using virtual machines (VMs) and managed by the cloud provider. In this paper, we study the energy efficiency problem for such MapReduce clusters in private cloud environments, which are characterized by the repeated, batch execution of jobs. We describe a unique spatio-temporal tradeoff that includes efficient spatial fitting of VMs on servers to achieve high utilization of machine resources, as well as balanced temporal fitting of servers with VMs having similar runtimes to ensure a server runs at high utilization throughout its uptime. We propose VM placement algorithms that explicitly incorporate these tradeoffs. Our algorithms achieve energy savings over existing placement techniques.
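
A compact sketch of the spatio-temporal idea (not the paper’s actual algorithms): co-locate VMs with similar expected runtimes, and pack each server tightly by resource demand, so a server stays well utilized until it can be powered down. The 25% runtime-similarity window and the example workload are my inventions:

```python
# Spatio-temporal placement sketch: temporal balance (similar runtimes per
# server) plus spatial fitting (tightest-fit packing by CPU demand).

def place(vms, server_capacity):
    """vms: list of (vm_id, cpu_demand, expected_runtime_seconds)."""
    servers = []  # each: {"free": cpu, "vms": [...], "runtime": seconds}
    for vm_id, cpu, runtime in sorted(vms, key=lambda v: v[2]):
        best = None
        for s in servers:
            # temporal fit: only join servers with similar runtimes
            if s["free"] >= cpu and abs(s["runtime"] - runtime) <= 0.25 * runtime:
                # spatial fit: prefer the tightest-fitting server
                if best is None or s["free"] < best["free"]:
                    best = s
        if best is None:
            best = {"free": server_capacity, "vms": [], "runtime": runtime}
            servers.append(best)
        best["free"] -= cpu
        best["vms"].append(vm_id)
    return [s["vms"] for s in servers]

vms = [("a", 2, 60), ("b", 2, 55), ("c", 4, 300), ("d", 2, 310), ("e", 2, 58)]
print(place(vms, 8))  # short jobs share one server, long jobs another:
                      # [['b', 'e', 'a'], ['c', 'd']]
```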

Low Carbon Virtual Private Clouds
Fereydoun Farrahi Moghaddam, Mohamed Cheriet, Kim Khoa Nguyen (Ecole de technologie superieure, Canada)
With the introduction of live WAN VM migration, however, the challenge of energy efficiency extends from a single data center to a network of data centers. In this paper, intelligent live migration of VMs within a WAN is used as a reallocation tool to minimize the overall carbon footprint of the network. We provide a formulation to calculate the carbon footprint and energy consumption of the whole network and its components, which will be helpful to customers of cleaner-energy cloud service providers. Simulation results show that using the proposed Genetic Algorithm (GA)-based method for live VM migration can significantly reduce the carbon footprint of a cloud network compared to the consolidation of individual data center servers. In addition, the WAN data center consolidation results show that an optimum solution for carbon reduction is not necessarily optimal for energy consumption, and vice versa. Also, the simulation platform was tested under heavy and light VM loads, with the results showing the levels of improvement in carbon reduction under different loads.
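
A bare-bones sketch of a GA for this kind of problem; the carbon intensities, energy figures, and the absence of migration-cost constraints are my simplifications, not the paper’s model:

```python
# GA sketch for carbon-aware VM placement across data centers: a chromosome
# assigns each VM to a data center, and fitness is the total footprint,
# i.e. each VM's energy weighted by the carbon intensity of its data
# center's local power grid.
import random
random.seed(0)

CARBON = [0.8, 0.1, 0.4]        # kg CO2 per kWh per data center (hypothetical)
VM_KWH = [1.0, 2.0, 1.5, 0.5]   # energy demand of each VM (hypothetical)

def footprint(assign):
    return sum(VM_KWH[v] * CARBON[dc] for v, dc in enumerate(assign))

def evolve(pop_size=20, generations=50):
    pop = [[random.randrange(len(CARBON)) for _ in VM_KWH]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=footprint)                 # keep the greenest half
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]                   # mutate one VM's placement
            child[random.randrange(len(child))] = random.randrange(len(CARBON))
            children.append(child)
        pop = survivors + children
    return min(pop, key=footprint)

best = evolve()
print(best, footprint(best))  # converges on the lowest-carbon data center
```

Without a migration-cost or latency term this toy converges on putting everything in the cleanest grid; the paper’s point is precisely that the real optimum trades that off against energy and migration overheads.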

Portability and Interoperability in Cloud Computing
Lee Badger, Tim Grance, Bill MacGregor (NIST, USA)
Three presentations by NIST, whose goal is to accelerate the Federal government’s adoption of cloud computing by building a roadmap and leading efforts to develop standards and guidelines.

Exploring Alternative Approaches to Implement an Elasticity Policy
Hamoun Ghanbari, Bradley Simmons, Marin Litoiu, Gabriel Iszlai (York University; IBM Toronto Lab, Canada)
An elasticity policy governs how and when resources (e.g., application server instances at the PaaS layer) are added to and/or removed from a cloud environment. The elasticity policy can be implemented as a conventional control loop or as a set of heuristic rules. In the control-theoretic approach, complex constructs such as tracking filters, estimators, regulators, and controllers are utilized. In the heuristic, rule-based approach, various alerts (e.g., events) are defined on instance metrics (e.g., CPU utilization), which are then aggregated at a global scale in order to make provisioning decisions for a given application tier. This work provides an overview of our experiences designing and working with both approaches to construct an autoscaler for simple applications. We enumerate different criteria, such as design complexity, ease of comprehension, and maintenance, on which we base an informal comparison of the two approaches. We conclude with a brief discussion of how these approaches can be used in the governance of resources to better meet a high-level goal over time.
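
For the rule-based flavor, a minimal autoscaler step might look like this; the thresholds and cooldown are my assumptions, not the authors’ values:

```python
# Minimal rule-based autoscaler step: aggregate per-instance CPU metrics
# for the tier, then add or remove an instance when the average crosses an
# alert threshold, with a cooldown to avoid oscillation.
import time

def autoscale_step(cpu_utils, n_instances, last_action_ts,
                   up=0.80, down=0.30, cooldown=300):
    """cpu_utils: latest CPU utilization (0..1) of each instance in the tier."""
    now = time.time()
    if now - last_action_ts < cooldown:       # still in cooldown: do nothing
        return n_instances, last_action_ts
    avg = sum(cpu_utils) / len(cpu_utils)     # global aggregation of the alerts
    if avg > up:
        return n_instances + 1, now           # scale out
    if avg < down and n_instances > 1:
        return n_instances - 1, now           # scale in
    return n_instances, last_action_ts

print(autoscale_step([0.90, 0.85, 0.95], 3, 0))  # -> (4, <timestamp>)
```

A control-theoretic version would replace the thresholds with an estimator and a regulator tracking a target utilization; the talk’s point was comparing the two on design complexity, comprehensibility, and maintenance.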
