
Open Source + Hosted Containers: A recipe for workload mobility

Last Tuesday, we announced the availability of the Google Container Engine Alpha, our new service offering based on Kubernetes, the open source project we announced in June. One of the advantages of using Kubernetes and Docker containers as the underpinnings of Google Container Engine is the level of portability they offer our customers, as both are designed to run in multiple clouds.

Our customers told us they need a multi-cloud strategy, whether that means mixing public and private deployments or running in multiple public clouds, so we made mobility a design goal for our next generation computing service. We also wanted these advantages to benefit both developers who need to run their workloads in multiple clouds indefinitely and those just getting started and looking to move to the cloud. That’s why Google Cloud Platform is an ideal environment for customers who are in the process of moving to the cloud, want to run only part of an application in the cloud, or need to run an application in multiple clouds. Here are some common hybrid cloud use cases we hear from our customers:

  • Develop and perform scale out testing in the cloud, but deploy to an on-premises production data center. A huge benefit to many of our customers is being able to do high throughput scale out testing of an application on resources that are paid for by the minute, because it reduces iteration time and improves team productivity. Because Google Compute Engine is billed in minute quanta, the incremental cost of accelerated scale out testing is low: you pay what you would for sequential testing, it just runs on more cores and finishes much more quickly. This only works if the framework that runs your application is available in both your production and test environments. It also helps to have a management framework that makes it easy to deploy, orchestrate and wire together individual tests. Google Container Engine with Docker containers provides a framework that supports the easy deployment and management of an app, and also lets you easily integrate test management and orchestration frameworks.
  • Migrate a new piece of an application to the cloud, but keep parts of it on-premises. With Google Cloud Platform’s newly announced direct peering and carrier interconnect network features, it’s now easier to connect the parts of an application deployed in Google’s data centers with the parts that remain on-premises. With 70+ peering locations in 33 countries, you can get high-throughput, low-latency access to your cloud resources. Many of our customers also highly value a common toolchain and management paradigm: it makes sense to build modern applications using the same tools, packaging format and management services, while letting the pieces that need to stay on-premises remain there.
  • Burst to the cloud during peak load. The cloud offers the ability to quickly and easily spin up a large number of VM instances that are charged on a per-minute basis. Compute Engine instances tend to boot in around 30 seconds, giving our customers the ability to react quickly to unexpected demand spikes.
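The cost argument behind the first use case is easy to sketch. Under per-minute billing, the total spend depends only on core-minutes consumed, not on how many machines you spread the work across. A back-of-the-envelope sketch (the price, test count, and duration below are hypothetical, not actual Compute Engine rates):

```python
# Compare sequential vs. scaled-out testing under per-minute billing.
# All numbers here are hypothetical and purely illustrative.

PRICE_PER_CORE_MINUTE = 0.001  # assumed rate in $/core-minute
TESTS = 600                    # number of test cases
MINUTES_PER_TEST = 1           # each test occupies one core for one minute

def cost_and_wall_time(cores):
    """Total cost and wall-clock time to run the full suite on `cores` cores."""
    total_core_minutes = TESTS * MINUTES_PER_TEST
    wall_minutes = total_core_minutes / cores
    return total_core_minutes * PRICE_PER_CORE_MINUTE, wall_minutes

seq_cost, seq_time = cost_and_wall_time(cores=1)    # one core, 600 minutes
par_cost, par_time = cost_and_wall_time(cores=100)  # 100 cores, 6 minutes

assert seq_cost == par_cost  # same spend either way; only the wall time shrinks
print(f"sequential: ${seq_cost:.2f} in {seq_time:.0f} min")
print(f"scaled out: ${par_cost:.2f} in {par_time:.0f} min")
```

The spend is identical in both scenarios; the scaled-out run just returns results two orders of magnitude sooner, which is what shortens the iteration loop.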

Kubernetes and Container Engine were designed from the ground up to meet the needs of those looking to benefit from application mobility. The following properties ensure our customers receive high levels of portability:

  • Container based. Docker has created a highly portable application container framework and is committed to the vision of making it run everywhere. The natural decoupling of application pieces from the OS and infrastructure environment is a really important ingredient in achieving high levels of portability.
  • Modular to the core. To become broadly adopted, it was important to allow providers to adapt and extend pieces of the stack without invalidating the core API. We focused on rigorous and principled modularity in the design, and pretty much everything in Kubernetes can be unplugged and replaced by other technologies.
  • Decoupled services. A key insight behind portability and mobility is that different pieces of an application may move to a different cloud at different times. Kubernetes is built around a micro-services architecture and ensures that the pieces of an application are not tied together. The beauty of Kubernetes is that its naturally decoupled model makes independently deployed pieces feel as if they were co-deployed; you don’t have to jump through hoops to get a decoupled deployment.
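One concrete way Kubernetes delivers that decoupling is service discovery: each container is told where its dependencies live through environment variables of the form NAME_SERVICE_HOST and NAME_SERVICE_PORT, so application code never hard-codes a cloud-specific address. A minimal sketch (the "backend" service name and the localhost fallback are illustrative assumptions, not part of the Kubernetes API):

```python
import os

def service_address(name, default_host="localhost", default_port=8080):
    """Resolve a dependency via Kubernetes-style service environment
    variables, falling back to a local default when run outside a
    cluster. The fallback behavior is this sketch's own convention."""
    prefix = name.upper().replace("-", "_")
    host = os.environ.get(f"{prefix}_SERVICE_HOST", default_host)
    port = int(os.environ.get(f"{prefix}_SERVICE_PORT", default_port))
    return host, port

# Inside a cluster, BACKEND_SERVICE_HOST/PORT are injected; on a laptop,
# the same code quietly talks to a local instance instead.
host, port = service_address("backend")
print(f"connecting to backend at {host}:{port}")
```

Because the lookup is indirect, the "backend" piece can move between environments without the callers changing a line of code.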

To achieve unprecedented levels of portability of applications, the community has pulled together to support integration from the start. Some of the biggest names in technology have stepped up to help bring Kubernetes to their technology stacks, including Microsoft, IBM, VMware, and HP. Beyond basic integration, a set of our partners have been working hand-in-glove with us on the core product to strengthen the platform, and add new capabilities and abstractions that offer even higher levels of portability. For example, Red Hat has contributed tirelessly to almost every component of the stack and has been instrumental in shaping and improving the overall production readiness of Kubernetes.

In addition, we have relied on CoreOS technologies in Kubernetes for some time, such as using etcd for distributed state management. Looking forward, CoreOS is working to deliver new technology that raises the portability of Kubernetes and has started developing new capabilities for the platform, most prominently Flannel. Because Kubernetes relies on virtualized networking capabilities, some of our early customers found it challenging to move to environments that don’t run the same virtualized network technology Google offers (based on Andromeda). With Flannel, we now have a more portable network layer for Kubernetes.
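The core idea behind an overlay like Flannel can be sketched in a few lines: carve one flat pod network into a per-host subnet, so any pod IP can be routed by looking up which host owns its subnet, regardless of what the underlying cloud network looks like. The address range, /24 split, and host names below are assumed examples, not Flannel defaults or its actual implementation:

```python
import ipaddress

# Carve an assumed flat pod network into one /24 subnet per host, the
# per-host allocation scheme an overlay network uses to make pod IPs
# routable on any underlying infrastructure.
POD_NETWORK = ipaddress.ip_network("10.1.0.0/16")
subnet_pool = POD_NETWORK.subnets(new_prefix=24)

host_subnets = {host: next(subnet_pool)
                for host in ["node-a", "node-b", "node-c"]}  # hypothetical hosts

def host_for_pod(pod_ip):
    """Find which host's subnet contains this pod IP (i.e., where to
    forward the encapsulated traffic)."""
    ip = ipaddress.ip_address(pod_ip)
    for host, subnet in host_subnets.items():
        if ip in subnet:
            return host
    raise LookupError(f"{pod_ip} not in any allocated subnet")

# node-a was allocated 10.1.0.0/24, so 10.1.0.7 routes to node-a.
print(host_for_pod("10.1.0.7"))
```

The mapping from subnet to host is the only state that has to be shared between machines, which is why a small coordination store like etcd is enough to run the network layer anywhere.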

CoreOS also just contributed code to ensure that Kubernetes works well on Amazon Web Services and has signed up to qualify our binary releases, ensuring high levels of mobility between Google and Amazon. Alex Polvi, CEO of CoreOS, said, “We really respect the architecture behind Kubernetes. CoreOS stands behind the project and is working to provide support across cloud and on-premises environments to encourage interoperability. You can run Kubernetes in any environment CoreOS supports, which includes AWS and all other major cloud providers.”

We openly invite others to join us on this journey. You can find us on IRC, and the open source project is hosted on GitHub. You can take Container Engine for a free test drive and find everything you need to get started in our technical docs.

-Posted by Craig McLuckie, Product Manager

Feed Source: Google Cloud Platform Blog