Ship faster, operate with ease, and scale confidently

Deploy your web applications to Gcloud Kubernetes for easier scaling, higher availability, and lower costs. Start small at just $10 per month, then scale up and save with our free control plane and inexpensive bandwidth.

Deploy and manage containerised applications more easily with a fully managed Kubernetes service. Gcloud Kubernetes Service offers serverless Kubernetes, an integrated continuous integration and continuous delivery (CI/CD) experience, and enterprise-grade security and governance. Unite your development and operations teams on a single platform to rapidly build, deliver and scale applications with confidence.

Gcloud Kubernetes Flow
Global availability

Spin up a cluster in locations such as New York, San Francisco, London, Frankfurt, or Bangalore.

Intelligent scheduling

Automatically deploy containers on compute hosts based on available resources across the cluster.

Auto scaling

Ensure fast performance and control costs by letting Gcloud Kubernetes automatically adjust the number of nodes in your cluster.

Increased operational efficiency

Rely on built-in automated provisioning, repair, monitoring, and scaling. Get up and running quickly and minimise infrastructure maintenance.

Kubernetes-managed Docker deployment

Orchestrate any type of workload running in the environment of your choice with GKS.



Develop and iterate more rapidly with easy application deployment, release updates, and management of your apps and services.

Resource efficiency

Kubernetes knows the compute, memory, and storage resources each application needs and schedules instances across the cluster to maximize resource efficiency.
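The scheduling idea above can be sketched in a few lines. This is a toy model, not the real kube-scheduler, and all node and pod numbers are made up for illustration:

```python
# Toy sketch of resource-aware scheduling: place a pod on the node
# with the most free CPU and memory that can still fit its request.
def schedule(pod, nodes):
    """Return the name of the best-fitting node, or None if nothing fits."""
    candidates = [
        n for n in nodes
        if n["free_cpu"] >= pod["cpu"] and n["free_mem"] >= pod["mem"]
    ]
    if not candidates:
        return None
    # Prefer the node with the most headroom left after placement.
    best = max(
        candidates,
        key=lambda n: (n["free_cpu"] - pod["cpu"]) + (n["free_mem"] - pod["mem"]),
    )
    best["free_cpu"] -= pod["cpu"]
    best["free_mem"] -= pod["mem"]
    return best["name"]

nodes = [
    {"name": "node-a", "free_cpu": 2.0, "free_mem": 4.0},
    {"name": "node-b", "free_cpu": 4.0, "free_mem": 8.0},
]
print(schedule({"cpu": 1.0, "mem": 2.0}, nodes))  # node-b (most headroom)
```

The real scheduler weighs many more signals (affinity, taints, topology), but the core idea is the same: filter nodes that fit, then score the remainder.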

Streamline operations

Automate application deployments, monitoring, instance replication, cluster scheduling, and seamless application releases.

High availability

Kubernetes routinely checks the health of your applications to detect and replace instances that are not responsive.


Use the Kubernetes Horizontal Pod Autoscaler to add instances of your application services as needed to meet demand.
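The Horizontal Pod Autoscaler's scaling rule can be illustrated with a short sketch of its documented formula, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric); the 10% tolerance band shown is the controller's common default, assumed here:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric, tolerance=0.1):
    """Simplified HPA rule: scale to ceil(current * currentMetric / targetMetric),
    skipping the change when the ratio is already within the tolerance band."""
    ratio = current_metric / target_metric
    if abs(1.0 - ratio) <= tolerance:
        return current_replicas  # close enough to target: no scaling
    return math.ceil(current_replicas * ratio)

# 4 pods averaging 90% CPU against a 50% target -> scale out to 8.
print(desired_replicas(4, 90, 50))  # 8
```

The ceiling means the HPA rounds up, so it scales out slightly aggressively rather than leaving pods overloaded.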


Applications deployed to Gcloud Kubernetes can run anywhere Kubernetes is supported, making it easy to deploy across environments and clouds.

Logging and monitoring

Use the built-in logging and metrics service to monitor the performance of clusters and containers.


Portability and flexibility

Kubernetes works with virtually any type of container runtime. It can also run on virtually any type of underlying infrastructure, whether a public cloud, a private cloud, or an on-premises server. In these respects, Kubernetes is highly portable: it can be used across a wide variety of infrastructure and environment configurations.

Multi-cloud capability

Due in part to its portability, Kubernetes can host workloads running on a single cloud as well as workloads that are spread across multiple clouds.

Workload scalability

Kubernetes is efficient in its use of infrastructure resources and offers several useful features for scaling workloads.

High availability

Kubernetes is designed to tackle the availability of both applications and infrastructure, making it indispensable when deploying containers in production.

Increased developer productivity

Kubernetes, with its declarative constructs and ops-friendly approach, has fundamentally changed deployment methodologies and allows teams to use GitOps. Teams can scale and deploy faster than they ever could in the past. Instead of one deployment a month, teams can now deploy multiple times a day.

Designed for deployment

One of the main benefits of containerisation is the ability to speed up the process of building, testing, and releasing software. Kubernetes is designed for deployment.

Streamline the installation and management of Kubernetes applications by declaring, provisioning and configuring your cloud native applications on Gcloud Kubernetes.

Use Cases

Lift and shift to containers with Gcloud Kubernetes service

Easily migrate existing applications to containers and run them in a fully managed Kubernetes service with GKS.

Microservices with GKS

Use GKS to simplify the deployment and management of microservices-based architectures. GKS streamlines horizontal scaling, self-healing, load balancing, and secret management.

Secure DevOps for GKS

Kubernetes and DevOps are better together. Achieve the balance between speed and security, and deliver code faster at scale, by implementing secure DevOps with Kubernetes on Gcloud.

Bursting from GKS with GCI (Gcloud Container Instance)

Use the GKS virtual node to provision pods inside GCI that start in seconds. This enables GKS to run with just enough capacity for your average workload.

Frequently Asked Questions
How do services work within a cluster?

Within the cluster, most Kubernetes services are implemented as a virtual IP called a ClusterIP. A ClusterIP fronts the list of pods that belong to the service; traffic sent to the ClusterIP is forwarded to a randomly selected pod in the service, so the ClusterIP isn't actually directly routable, even from Kubernetes nodes.
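The forwarding behaviour described above can be mimicked with a tiny sketch. The ClusterIP and pod IPs below are made up, and in a real cluster the forwarding is done at the network layer by kube-proxy, not in application code:

```python
import random

# A ClusterIP is a stable virtual IP; traffic addressed to it is
# forwarded to a randomly chosen pod backing the service.
service_endpoints = {
    "10.96.0.10": ["10.244.1.5", "10.244.2.7", "10.244.3.9"],
}

def route(cluster_ip, rng=random):
    """Pick one backing pod for a packet addressed to the ClusterIP."""
    return rng.choice(service_endpoints[cluster_ip])

print(route("10.96.0.10"))  # one of the three pod IPs
```

Because the pod list can change as pods come and go, clients always address the stable ClusterIP and let the cluster pick the backend.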
How do containers within a pod communicate?

Containers within a pod share networking space and can reach each other on localhost. For instance, if you have two containers within a pod, a MySQL container running on port 3306 and a PHP container running on port 80, the PHP container could access the MySQL one through localhost:3306.
What happens when a node fails?

Kubernetes is designed to be resilient to any individual node failure, master or worker. When a master fails, the nodes of the cluster keep operating, but no changes (including pod creation or service membership updates) can occur until a master is available again. When a worker fails, the master stops receiving status updates from it, and the node is marked as NotReady. If a node stays NotReady for 5 minutes, the master reschedules all pods that were running on the dead node onto other available nodes.
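The failure handling described above can be sketched as a simple timing rule. The 40-second NotReady threshold used here reflects a common default grace period and is an assumption; both it and the 5-minute eviction timeout are configurable in a real cluster:

```python
NOT_READY_AFTER = 40   # seconds without a heartbeat before marking NotReady
EVICT_AFTER = 5 * 60   # seconds spent NotReady before pods are rescheduled

def node_action(seconds_since_heartbeat):
    """What the control plane does as a worker stops reporting in."""
    if seconds_since_heartbeat < NOT_READY_AFTER:
        return "Ready"
    if seconds_since_heartbeat < NOT_READY_AFTER + EVICT_AFTER:
        return "NotReady"
    return "NotReady: reschedule pods to other nodes"

print(node_action(10))    # Ready
print(node_action(120))   # NotReady
print(node_action(400))   # NotReady: reschedule pods to other nodes
```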
How is a highly available cluster set up?

The only stateful part of a Kubernetes cluster is etcd. The master servers run the controller manager, the scheduler, and the API server, all of which can be run as replicas. The controller manager and scheduler use a leader-election system, so only one controller manager and one scheduler are active for the cluster at any time. A highly available cluster therefore generally consists of an etcd cluster of three or more nodes plus multiple master nodes.
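The leader election mentioned above can be sketched as a lease that only one replica holds at a time. This is an in-memory toy with hypothetical replica names; real controller managers coordinate through a lease object stored via the API server:

```python
class Lease:
    """Toy lease: the first replica to acquire it becomes the active
    leader; the others stay on standby until the holder releases it."""
    def __init__(self):
        self.holder = None

    def try_acquire(self, candidate):
        if self.holder is None:
            self.holder = candidate
        return self.holder == candidate

    def release(self, candidate):
        if self.holder == candidate:
            self.holder = None

lease = Lease()
print(lease.try_acquire("controller-manager-0"))  # True: becomes leader
print(lease.try_acquire("controller-manager-1"))  # False: stays on standby
lease.release("controller-manager-0")
print(lease.try_acquire("controller-manager-1"))  # True: takes over
```

In a real cluster the lease also carries a renewal deadline, so a standby replica can take over if the leader crashes without releasing it.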
How does service discovery work?

There is a DNS server called skydns which runs in a pod in the cluster, in the kube-system namespace. That DNS server reads from etcd and serves DNS entries for Kubernetes services to all pods. You can reach any service with the name <service>.<namespace>.svc.cluster.local. The resolver automatically searches <namespace>.svc.cluster.local, so you can call one service from another in the same namespace with just <service>.
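The naming scheme above can be sketched as a resolver that applies the namespace search domain. This is a simplified model of the behaviour described, not the cluster DNS implementation:

```python
def qualify(name, namespace="default"):
    """Expand a short service name the way the in-cluster search list does:
    <service> -> <service>.<namespace>.svc.cluster.local."""
    if name.endswith(".svc.cluster.local"):
        return name                                 # already fully qualified
    parts = name.split(".")
    if len(parts) == 1:                             # just <service>
        return f"{name}.{namespace}.svc.cluster.local"
    if len(parts) == 2:                             # <service>.<namespace>
        return f"{name}.svc.cluster.local"
    return name

print(qualify("mysql", namespace="prod"))
# mysql.prod.svc.cluster.local
```

This is why a pod in the prod namespace can reach its database simply as mysql, while a pod elsewhere would use mysql.prod.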