Chameleon

Purpose

A persistent problem facing academic cloud research is the lack of infrastructure and data to perform experimental research: large-scale hardware is needed to investigate the scalability of cloud infrastructure and applications; heterogeneous hardware is needed to investigate algorithmic and implementation tradeoffs; fully configurable software environments are needed to investigate the performance of virtualization techniques and the differences between cloud software stacks; and data about how clouds are used is needed to evaluate virtual machine scheduling and data placement algorithms. Chameleon addresses these needs by providing a large-scale, fully configurable experimental testbed driven by the needs of the cloud research and education communities.

The project is led by the University of Chicago and includes partners from the Texas Advanced Computing Center (TACC), Northwestern University, the Ohio State University, and the University of Texas at San Antonio, comprising a highly qualified and experienced team that blends research leaders from the cloud and networking communities with providers of production-quality cyberinfrastructure. The team includes members from the NSF-supported FutureGrid project and from the GENI community, both forerunners of the NSFCloud solicitation under which this project is funded.

The Chameleon testbed is deployed at the University of Chicago (UC) and the Texas Advanced Computing Center (TACC) and consists of 650 multi-core cloud nodes and 5 PB of total disk space, connected by a 100 Gbps link between the sites. To support experiments with a broad range of requirements, the project provides a graduated configuration system allowing full user configurability of the stack, from provisioning of bare metal and network interconnects to delivery of fully functioning cloud environments. In addition, to facilitate experiments, Chameleon supports services designed to meet researchers' needs, including support for experiment management, reproducibility, and repositories of trace and workload data from production cloud workloads.

Contributors

Dan Stanzione
Executive Director