Last month, our CTO, Robert Jenkins, spoke at Cordis’ Research in Future Cloud Computing event in Brussels, Belgium. During the event, Robert presented to a diverse audience on three key areas of research that could have a major impact on the future of the cloud – operating systems, ecosystems and scalable storage. Be sure to let us know what you think in the comments section, and see the link at the end of this post for Robert’s full presentation from the event.
Hardware virtualization is a very well-established technology with a number of hypervisor options, as, by and large, is cloud management. Customers, however, install standard operating systems that aren’t really aware of their virtual environment, which leads to many restrictions and, in many cases, sub-optimal performance. For example, operating systems can’t recognize ‘hot’ changes to resources: while the hypervisor can allocate additional RAM or CPU to a virtual machine, the operating system can’t see it. The result is that vertical scaling in the cloud (i.e. making individual virtual machines bigger or smaller) requires power cycling the virtual machine, which is disruptive and, in many cases, not practical. Instead, people largely engage in horizontal scaling by clustering their computing over many virtual machines and monitoring load. This is wasteful: operating system resources are duplicated across those machines, and clusters add communication and management overhead. This is one key area that needs addressing in order to continue to make computing in the cloud more efficient and relevant.
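To make the hot-plug gap concrete: Linux guests actually expose hot-added CPUs through sysfs (`/sys/devices/system/cpu/offline`) in the kernel’s “CPU list” format, e.g. `"2-3,5"`, and a guest agent could bring each one online by writing `1` to `/sys/devices/system/cpu/cpuN/online` (root required) – but today that cooperation between hypervisor and guest is rarely automated. The helper below is a minimal sketch, not production code; `parse_cpu_list` is a hypothetical name for parsing that kernel format:

```python
def parse_cpu_list(spec):
    """Parse the kernel's CPU-list format (e.g. "2-3,5") into a list of CPU ids.

    This is the format found in files like /sys/devices/system/cpu/offline;
    an empty string means no CPUs are in that state.
    """
    cpus = []
    for part in spec.split(","):
        part = part.strip()
        if not part:
            continue  # empty spec: nothing to online
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

print(parse_cpu_list("2-3,5"))  # [2, 3, 5]
```

A guest agent would loop over the result and write `1` to each corresponding `online` file, letting the VM absorb hypervisor-granted CPUs without a reboot.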
Companies and institutions in today’s connected economies have wide and varied ecosystems of suppliers and customers surrounding them. Each industry, in turn, has unique workflows and requirements. Moving individual companies and individual components of those workflows to the cloud isn’t effective and often destroys many of the economic benefits of using the cloud through time delays, accessibility issues and increased data transfer costs.
Ecosystems have a vital role to play in delivering widespread cloud adoption within industries. Building ecosystems in the cloud means working within an industry or product area to move supply chains and workflows into the cloud in a more holistic way. This keeps work and infrastructure within the cloud, super-charging coordination, reducing lead times and speeding up iteration cycles, both within companies and between collaborating entities.
A great example of an ecosystem being built is the Helix Nebula consortium. CloudSigma has been working as part of the Helix Nebula consortium to build ecosystems around the big data and processing requirements of leading scientific institutions. Using the cloud as a hub, large data transfers travel internally over 10GigE links within the cloud, and results from new data are quickly pushed out to a wider audience. Likewise, outside entities can draw on that data for their own needs at little or no cost, all within the cloud.
Similar to the workflow and big data requirements of the scientific community, the digital media sector would greatly benefit from collaborative cloud-based hubs, which is precisely what we’ve created with our Media Services Ecosystem. CloudSigma’s Media Services Ecosystem allows service providers and production companies working in the film, music and other media industries to easily collaborate and share data, with access to CloudSigma’s powerful compute and storage capabilities. We’re essentially giving media companies one roof under which to work together more efficiently, regardless of their geographic location. Eliminating the long transfer times and high data transfer costs that plague most productions is critical to offering a viable alternative to the slow, high-cost in-house solutions in use today. Ecosystems will therefore be a pivotal feature in driving cloud adoption across many industries, and customers and cloud vendors need to work together to create them.
Currently, at CloudSigma, we are working to optimize storage systems to go beyond the performance levels of dedicated hardware. Far from merely matching the performance of dedicated in-house solutions, we believe public cloud as a delivery mechanism can offer higher performance to customers at a lower cost.
Storage for public clouds has to date been largely dominated by dealing with the problems created by increasingly random-looking I/O requests caused by multi-tenancy. In principle, the cloud can offer individual customers higher and less variable performance by spreading storage loads across a greater bed of hardware. When a customer runs a dedicated SAN environment, for example, the load on the system is 100 percent correlated to that particular company’s usage. Moving to the cloud can spread that load over a wider install base, drawing on additional resource capacity during peak times and making the impact of any one customer on any one piece of infrastructure minimal.
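The load-spreading argument is statistical multiplexing: independent bursty demands partially cancel each other out, so the pooled load is proportionally much smoother than any single tenant’s. The simulation below is an illustrative sketch with made-up numbers (tenants are mostly idle at 100 IOPS with occasional 2000 IOPS bursts), assuming tenant demands are independent:

```python
# Sketch of statistical multiplexing across storage tenants.
# Numbers are hypothetical; assumes independent, bursty per-tenant demand.
import random
import statistics

def coefficient_of_variation(xs):
    """Relative variability: standard deviation divided by the mean."""
    return statistics.pstdev(xs) / statistics.mean(xs)

random.seed(42)
tenants, samples = 50, 1000

# Each tenant's IOPS demand per interval: mostly quiet, occasional bursts.
loads = [[random.choice([100, 100, 100, 2000]) for _ in range(samples)]
         for _ in range(tenants)]

cv_single = coefficient_of_variation(loads[0])
aggregate = [sum(t[i] for t in loads) for i in range(samples)]
cv_pool = coefficient_of_variation(aggregate)

print(f"single-tenant CV: {cv_single:.2f}, pooled CV: {cv_pool:.2f}")
# The pooled load is far less variable, so shared hardware can be sized
# closer to the aggregate average instead of every tenant's peak.
```

For independent tenants, the pooled coefficient of variation shrinks roughly with the square root of the tenant count, which is why a shared storage bed needs far less per-customer headroom than a dedicated SAN.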
To take advantage of this load spreading, storage systems used in the cloud need to evolve from the SAN and local storage systems of today to distributed systems with modular, scalable, high-availability designs. Some of the components are now coming together; however, a lot more work remains to be done.
Object storage is an area seeing astronomical growth and uptake as customers take advantage of the total scalability and convenience of an outsourced storage arrangement. As the amount of data in the world continues to grow at an accelerating rate, the use and importance of object-based storage will continue to increase exponentially. To date, outside of Amazon’s S3, full-featured object storage environments have been very limited, and a lot of work remains to build open source components and standards that can widen the install base for customers of object storage who don’t want to get locked in to any one proprietary platform.
Overall, a significant number of areas for innovation remain in the IaaS space that will deliver year-on-year improvements in price/performance for the foreseeable future. As these innovations mature, it’s clear that the public cloud delivery mechanism will continue to gain ground and make legacy systems more and more uncompetitive.
To see Robert’s full presentation from the event, follow this link http://cordis.europa.eu/fp7/ict/ssai/docs/future-cc-2may-jenkins-presentation.pdf, and let us know what you think in the comments section below!