This document is intended for database administrators, cloud architects, and operations professionals interested in deploying a highly available MySQL topology on Google Kubernetes Engine.
Follow this tutorial to learn how to deploy a MySQL InnoDB Cluster and a MySQL InnoDB ClusterSet, along with MySQL Router middleware, on your GKE cluster, and how to perform upgrades.
Objectives
In this tutorial, you will learn how to:
- Create and deploy a stateful Kubernetes service.
- Deploy a MySQL InnoDB Cluster for high availability.
- Deploy Router middleware for database operation routing.
- Deploy a MySQL InnoDB ClusterSet for disaster tolerance.
- Simulate a MySQL cluster failover.
- Perform a MySQL version upgrade.
The following sections describe the architecture of the solution you will build in this tutorial.
MySQL InnoDB Cluster
In your regional GKE cluster, using a StatefulSet, you deploy a MySQL database instance with the necessary naming and configuration to create a MySQL InnoDB Cluster. To provide fault tolerance and high availability, you deploy three database instance Pods. This ensures that a majority of Pods, spread across different zones, are available at any given time for a successful primary election using a consensus protocol, and makes your MySQL InnoDB Cluster tolerant of single-zone failures.
Figure 1: Example architecture of a single MySQL InnoDB Cluster
Once deployed, you designate one Pod as the primary instance to serve both read and write operations. The other two Pods are secondary read-only replicas. If the primary instance experiences an infrastructure failure, you can promote one of these two replica Pods to become the primary.
In a separate namespace, you deploy three MySQL Router Pods to provide connection routing for improved resilience. Instead of directly connecting to the database service, your applications connect to MySQL Router Pods. Each Router Pod is aware of the status and purpose of each MySQL InnoDB Cluster Pod, and routes application operations to the respective healthy Pod. The routing state is cached in the Router Pods and updated from the cluster metadata stored on each node of the MySQL InnoDB Cluster. In the case of an instance failure, the Router adjusts the connection routing to a live instance.
MySQL InnoDB ClusterSet
You can create a MySQL InnoDB ClusterSet from an initial MySQL InnoDB Cluster. A ClusterSet increases disaster tolerance by letting you promote a replica cluster if the primary cluster is no longer available.
Figure 2: Example multi-region ClusterSet architecture, which contains one primary cluster and one replica cluster
If the MySQL InnoDB Cluster primary instance is no longer available, you can promote a replica cluster in the ClusterSet to primary. When using MySQL Router middleware, your application does not need to track the health of the primary database instance. Routing is adjusted to send connections to the new primary after the election has occurred. However, it is your responsibility to ensure that applications connecting to your MySQL Router middleware follow best practices for resilience, so that connections are retried if an error occurs during cluster failover.
Costs
This tutorial uses the following billable components of Google Cloud:
- Google Kubernetes Engine
- Compute Engine
To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.
When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see the Clean up section of this tutorial.
Before you begin
Set up your project
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
In the Google Cloud console, on the project selector page, click Create project to begin creating a new Google Cloud project.
Go to project selector
Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.
Enable the GKE API.
Enable the API
Set up your environment
In this tutorial, you use Cloud Shell to manage resources hosted on Google Cloud. Cloud Shell comes preinstalled with Docker and the kubectl and gcloud CLI.
To use Cloud Shell to set up your environment:
Set environment variables.
export PROJECT_ID=PROJECT_ID
export CLUSTER_NAME=gkemulti-west
export REGION=COMPUTE_REGION
Replace the following values:
- PROJECT_ID: your Google Cloud project ID.
- COMPUTE_REGION: your Compute Engine region, for example us-west1.
Set the default project and region for the gcloud CLI.
gcloud config set project PROJECT_ID
gcloud config set compute/region COMPUTE_REGION
Clone the code repository.
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples
Change to the working directory.
cd kubernetes-engine-samples/gke-stateful-mysql/kubernetes
Create a GKE cluster
In this section, you create a regional GKE cluster. Unlike a zonal cluster, a regional cluster has its control plane replicated across several zones, so an outage in a single zone doesn't make the control plane unavailable.
To create a GKE cluster, follow these steps:
Autopilot
In Cloud Shell, create a GKE Autopilot cluster in the region you set in the REGION environment variable.
gcloud container clusters create-auto $CLUSTER_NAME \
--region=$REGION
Get the GKE cluster credentials.
gcloud container clusters get-credentials $CLUSTER_NAME \
--region=$REGION
Deploy a placeholder workload across three zones.
gke-stateful-mysql/kubernetes/prepare-for-ha.yaml
View on GitHub
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prepare-three-zone-ha
  labels:
    app: prepare-three-zone-ha
spec:
  replicas: 3
  selector:
    matchLabels:
      app: prepare-three-zone-ha
  template:
    metadata:
      labels:
        app: prepare-three-zone-ha
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - prepare-three-zone-ha
            topologyKey: "topology.kubernetes.io/zone"
      containers:
      - name: prepare-three-zone-ha
        image: busybox:latest
        command:
        - "/bin/sh"
        - "-c"
        - "while true; do sleep 3600; done"
        resources:
          limits:
            cpu: "500m"
            ephemeral-storage: "10Mi"
            memory: "0.5Gi"
          requests:
            cpu: "500m"
            ephemeral-storage: "10Mi"
            memory: "0.5Gi"
kubectl apply -f prepare-for-ha.yaml
By default, Autopilot provisions resources in two zones. The Deployment defined in prepare-for-ha.yaml ensures that Autopilot provisions nodes across three zones in your cluster by setting replicas: 3 and podAntiAffinity with requiredDuringSchedulingIgnoredDuringExecution and topologyKey: "topology.kubernetes.io/zone".
Check the status of the Deployment.
kubectl get deployment prepare-three-zone-ha --watch
When you see three Pods in the ready state, cancel this command with Ctrl+C. The output is similar to the following:
NAME READY UP-TO-DATE AVAILABLE AGE
prepare-three-zone-ha 0/3 3 0 9s
prepare-three-zone-ha 1/3 3 1 116s
prepare-three-zone-ha 2/3 3 2 119s
prepare-three-zone-ha 3/3 3 3 2m16s
Run this script to validate that your Pods have been deployed across three zones.
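The validation script itself is not reproduced on this page. A minimal sketch that produces equivalent output, assuming the app=prepare-three-zone-ha label from the manifest above:
for p in $(kubectl get pods -l app=prepare-three-zone-ha -o name); do
  # Look up the node each Pod landed on, then the node's zone label.
  node=$(kubectl get "$p" -o jsonpath='{.spec.nodeName}')
  zone=$(kubectl get node "$node" -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}')
  echo "${p#pod/} $zone"
done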
Each line of the output corresponds to a Pod, and the second column indicates the cloud zone.
Standard
In Cloud Shell, create a GKE Standard cluster in the region you set in the REGION environment variable.
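The exact creation command is not reproduced on this page. A minimal sketch, in which the machine type and node count are assumptions:
gcloud container clusters create $CLUSTER_NAME \
    --region=$REGION \
    --machine-type="e2-standard-2" \
    --num-nodes="5"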
Get the GKE cluster credentials.
gcloud container clusters get-credentials $CLUSTER_NAME \
--region=$REGION
Deploy MySQL StatefulSets
In this section, you deploy a MySQL StatefulSet. The StatefulSet consists of three MySQL replicas.
To deploy the MySQL StatefulSet, follow these steps:
Create a namespace for the StatefulSet.
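The command is not reproduced on this page; a sketch, assuming the namespace name mysql1 used throughout the remaining sketches in this tutorial:
kubectl create namespace mysql1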
Create the MySQL secret.
gke-stateful-mysql/kubernetes/secret.yaml
View on GitHub
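The secret manifest and apply command are not reproduced on this page. A plausible sketch, in which the secret name and key are assumptions:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret   # assumed name
type: Opaque
stringData:
  password: "change-me"   # sample value; replace before use
Apply it with:
kubectl apply -n mysql1 -f secret.yaml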
The password is deployed with each Pod, and is used by management scripts and commands for MySQL InnoDB Cluster and ClusterSet deployment in this tutorial.
Create the StorageClass.
gke-stateful-mysql/kubernetes/storageclass.yaml
View on GitHub
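The manifest and apply command are not reproduced on this page. A plausible sketch consistent with the description below; the StorageClass name is an assumption:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storageclass   # assumed name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
volumeBindingMode: WaitForFirstConsumer
Apply it with:
kubectl apply -f storageclass.yaml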
This storage class uses the pd-balanced Persistent Disk type, which balances performance and cost. The volumeBindingMode field is set to WaitForFirstConsumer, meaning that GKE delays provisioning of a PersistentVolume until the Pod is created. This setting ensures that the disk is provisioned in the same zone where the Pod is scheduled.
Deploy the StatefulSet of MySQL instance Pods.
gke-stateful-mysql/kubernetes/c1-mysql.yaml
View on GitHub
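The apply command is not reproduced on this page; a sketch, assuming the mysql1 namespace:
kubectl apply -n mysql1 -f c1-mysql.yaml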
This command deploys the StatefulSet consisting of three replicas. In this tutorial, the primary MySQL cluster is deployed across three zones in your selected region.
In this tutorial, the resource limits and requests are set to minimal values to save cost. When planning for a production workload, make sure to set these values appropriately for your organization's needs.
Verify the StatefulSet is created successfully.
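The verification command is not reproduced on this page; a sketch, assuming the mysql1 namespace:
kubectl get statefulset -n mysql1 --watch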
When all three Pods are in a ready state, exit the command using Ctrl+C. If you see scheduling errors due to insufficient CPU or memory, wait a few minutes for the control plane to resize to accommodate the larger workload.
To inspect the placement of your Pods on the GKE cluster nodes, run this script:
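A sketch of such a script, assuming the mysql1 namespace and an app=mysql Pod label (both assumptions):
for p in $(kubectl get pods -n mysql1 -l app=mysql -o name); do
  # Print the node, its zone label, and the Pod name.
  node=$(kubectl get "$p" -n mysql1 -o jsonpath='{.spec.nodeName}')
  zone=$(kubectl get node "$node" -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}')
  echo "$node $zone ${p#pod/}"
done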
The columns in the output represent the GKE node hostname, the cloud zone where the node is provisioned, and the Pod name, respectively.
The topologySpreadConstraints policy in the StatefulSet specification (c1-mysql.yaml) directs the scheduler to place the Pods evenly across the failure domain (topology.kubernetes.io/zone).
The podAntiAffinity policy enforces the constraint that Pods must not be placed on the same GKE cluster node (kubernetes.io/hostname). For the MySQL instance Pods, this policy results in the Pods being deployed evenly across the three zones in the Google Cloud region. This placement enables high availability of the MySQL InnoDB Cluster by placing each database instance in a separate failure domain.
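A sketch of how these two policies might appear in c1-mysql.yaml; the app: mysql label and the exact field values are assumptions:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: "topology.kubernetes.io/zone"
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: mysql        # assumed Pod label
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: mysql    # assumed Pod label
            topologyKey: "kubernetes.io/hostname"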
Prepare the primary MySQL InnoDB Cluster
To configure a MySQL InnoDB Cluster, follow these steps:
In the Cloud Shell terminal, set the group replication configurations for the MySQL instances to be added to your cluster.
gke-stateful-mysql/scripts/c1-clustersetup.sh
View on GitHub
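The invocation is not reproduced on this page; from the kubernetes working directory, a plausible sketch:
bash ../scripts/c1-clustersetup.sh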
The script remotely connects to each of the three MySQL instances to set and persist the group replication configuration, including the group_replication_ip_allowlist system variable.
In MySQL versions earlier than 8.0.22, use group_replication_ip_whitelist instead of group_replication_ip_allowlist.
Open a second terminal, so that you do not need to create a shell for each Pod.
Connect to MySQL Shell on the first Pod of the StatefulSet (ordinal 0).
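The connection command is not reproduced on this page. A sketch, assuming a StatefulSet named dbc1 in the mysql1 namespace (both assumptions), so that the first Pod is dbc1-0:
kubectl -n mysql1 exec -it dbc1-0 -- mysqlsh -u root -p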
Verify the MySQL group replication allowlist for connecting to other instances.
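A sketch of this check, run from MySQL Shell's SQL mode (\sql):
SELECT @@group_replication_ip_allowlist;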
Verify that the server_id is unique on each of the instances.
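A sketch of this check, again in SQL mode:
SELECT @@server_id;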
Configure each instance for MySQL InnoDB Cluster usage and create an administrator account on each instance. All instances must have the same username and password in order for the MySQL InnoDB Cluster to function properly.
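A sketch using the standard AdminAPI call; the administrator account name icadmin and the host names (which assume a headless Service named mysql in the mysql1 namespace) are assumptions:
dba.configureInstance('root@dbc1-0.mysql.mysql1.svc.cluster.local', {clusterAdmin: 'icadmin', clusterAdminPassword: 'password'});
Repeat for the instances with ordinals 1 and 2.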
Verify that the instance is ready to be used in a MySQL InnoDB Cluster.
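A sketch using the standard AdminAPI check against the instance that the current session is connected to:
dba.checkInstanceConfiguration();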
Optionally, you can connect to each of the other MySQL instances and repeat this command to check their status.
Create the primary MySQL InnoDB Cluster
Next, create the MySQL InnoDB Cluster using the MySQL Shell AdminAPI dba.createCluster() command. Start with the first instance, which will be the primary instance for the cluster, then add two additional replicas to the cluster.
To initialize the MySQL InnoDB Cluster, follow these steps:
Create the MySQL InnoDB Cluster.
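A sketch of this step; the cluster name mycluster is an assumption:
var cluster = dba.createCluster('mycluster');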
Running the dba.createCluster() command triggers these operations:
- Deploy the metadata schema.
- Verify that the configuration is correct for Group Replication.
- Register it as the seed instance of the new cluster.
- Create necessary internal accounts, such as the replication user account.
- Start Group Replication.
This command initializes a MySQL InnoDB Cluster with the first instance as the primary. The cluster reference is stored in the cluster variable.
9Add the second instance to the cluster.
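A sketch of this step, using the same assumed host names; recoveryMethod: 'clone' is a common choice but also an assumption:
cluster.addInstance('icadmin@dbc1-1.mysql.mysql1.svc.cluster.local', {recoveryMethod: 'clone'});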
Add the remaining instance to the cluster.
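A sketch of this step, with the same assumptions:
cluster.addInstance('icadmin@dbc1-2.mysql.mysql1.svc.cluster.local', {recoveryMethod: 'clone'});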
Verify the cluster's status.
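A sketch of the standard AdminAPI status call:
cluster.status();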
This command shows the status of the cluster. The topology consists of three hosts, one primary and two secondary instances. Optionally, you can call cluster.status({extended: 1}) to obtain additional status details.
Create a sample database
To create a sample database, follow these steps:
Create a database and load data into the database.
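The SQL is not reproduced on this page. A hypothetical example with made-up database and table names, run in SQL mode:
CREATE DATABASE loanapplication;
USE loanapplication;
CREATE TABLE loan (loan_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY, firstname VARCHAR(40), lastname VARCHAR(40), status VARCHAR(40));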
Insert sample data into the database. To insert data, you must be connected to the primary instance of the cluster.
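A matching hypothetical example that inserts three rows:
INSERT INTO loan (firstname, lastname, status) VALUES ('Fred', 'Flintstone', 'pending');
INSERT INTO loan (firstname, lastname, status) VALUES ('Betty', 'Rubble', 'approved');
INSERT INTO loan (firstname, lastname, status) VALUES ('Barney', 'Rubble', 'denied');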
Verify that the table contains the three rows inserted in the previous step.
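A matching check:
SELECT * FROM loan;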
Create a MySQL InnoDB ClusterSet
You can create a MySQL InnoDB ClusterSet to manage replication from your primary cluster to replica clusters, using a dedicated ClusterSet replication channel.
A MySQL InnoDB ClusterSet provides disaster tolerance for MySQL InnoDB Cluster deployments by linking a primary MySQL InnoDB Cluster with one or more replicas of itself in alternate locations, such as multiple zones and multiple regions.
If you closed MySQL Shell, create a new shell by running this command in a new Cloud Shell terminal:
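A sketch of this command, with the same assumed Pod and namespace names as earlier:
kubectl -n mysql1 exec -it dbc1-0 -- mysqlsh -u root -p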
To create a MySQL InnoDB ClusterSet, follow these steps:
In your MySQL Shell terminal, obtain a cluster object.
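A sketch using the standard AdminAPI call:
var cluster = dba.getCluster();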
Initialize a MySQL InnoDB ClusterSet with the existing MySQL InnoDB Cluster stored in the cluster object as the primary.
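A sketch; the ClusterSet domain name clusterset is an assumption:
var clusterset = cluster.createClusterSet('clusterset');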
Check the status of your MySQL InnoDB ClusterSet.
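A sketch of the standard status call:
clusterset.status();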
Optionally, you can call clusterset.status({extended: 1}) to obtain additional status details, including information about the cluster.
Exit MySQL Shell.
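To exit MySQL Shell:
\q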
Deploy a MySQL Router
You can deploy a MySQL Router to direct client application traffic to the proper clusters. Routing is based on the connection port of the application issuing a database operation:
- Writes are routed to the primary Cluster instance in the primary ClusterSet.
- Reads can be routed to any instance in the primary Cluster.
When you start a MySQL Router, it is bootstrapped against the MySQL InnoDB ClusterSet deployment. The MySQL Router instances connected with the MySQL InnoDB ClusterSet are aware of any controlled switchovers or emergency failovers and direct traffic to the new primary cluster.
To deploy a MySQL Router, follow these steps:
In the Cloud Shell terminal, deploy the MySQL Router.
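The deploy command is not reproduced on this page; a sketch, assuming a manifest named c1-router.yaml and a dedicated Router namespace named mysqlrouter (both assumptions):
kubectl apply -n mysqlrouter -f c1-router.yaml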
Check the readiness of the MySQL Router deployment.
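A sketch of this check, with the same assumed namespace:
kubectl -n mysqlrouter get deployments --watch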
When all three Pods are ready, the Deployment reports them as available.
If you see a Pod scheduling error in the console, wait a minute or two while GKE provisions more nodes. Refresh, and you should see the Pods reach the ready state.
Start MySQL Shell on any member of the existing cluster.
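A sketch, with the same assumed names as earlier:
kubectl -n mysql1 exec -it dbc1-0 -- mysqlsh -u root -p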
This command connects to the first Pod of the StatefulSet, then starts a shell connected to its MySQL instance.
Verify the router configuration.
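A sketch using the standard AdminAPI call for listing registered routers:
var clusterset = dba.getClusterSet();
clusterset.listRouters();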
Exit MySQL Shell.
Run this script to inspect the placement of the MySQL Router Pods.
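A sketch of such a script, assuming the mysqlrouter namespace from earlier and listing every Pod in it:
for p in $(kubectl get pods -n mysqlrouter -o name); do
  # Print the node, its zone label, and the Pod name.
  node=$(kubectl get "$p" -n mysqlrouter -o jsonpath='{.spec.nodeName}')
  zone=$(kubectl get node "$node" -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}')
  echo "$node $zone ${p#pod/}"
done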
The script shows the node and Cloud Zone placement of all of the Pods in the MySQL Router namespace.
You can observe that the MySQL Router Pods are distributed equally across the zones; that is, a Router Pod is not placed on the same node as a MySQL Pod, nor on the same node as another MySQL Router Pod.
Manage GKE and MySQL InnoDB Cluster upgrades
Updates for both MySQL and Kubernetes are released on a regular schedule. Follow operational best practices to update your software environment regularly. By default, GKE manages cluster and node pool upgrades for you. Kubernetes and GKE also provide additional features to facilitate MySQL software upgrades.
Plan for GKE upgrades
You can take proactive steps and set configurations to mitigate risk and facilitate a smoother cluster upgrade when you are running stateful services, including:
Standard clusters: Follow GKE best practices for upgrading clusters. Choose an appropriate upgrade strategy to ensure that the upgrades happen during your maintenance window:
- Choose surge upgrades if cost optimization is important and if your workloads can tolerate a graceful shutdown in less than 60 minutes.
- Choose blue-green upgrades if your workloads are less tolerant of disruptions, and a temporary cost increase due to higher resource usage is acceptable.
To learn more, see the GKE documentation on cluster upgrade strategies. Autopilot clusters are automatically upgraded, based on the release channel you selected.
Use maintenance windows to ensure upgrades happen when you intend them. Before the maintenance window, ensure your database backups are successful.
Before allowing traffic to the upgraded MySQL nodes, use Readiness Probes and Liveness Probes to ensure they are ready for traffic.
Create Probes that assess whether replication is in sync before accepting traffic. This can be done through custom scripts, depending on the complexity and scale of your database.
Set a Pod Disruption Budget (PDB) policy
When a MySQL InnoDB Cluster is running on GKE, there must be a sufficient number of instances running at any time to meet the quorum requirement.
In this tutorial, given a MySQL cluster of three instances, two instances must be available to form a quorum. A PodDisruptionBudget policy allows you to limit the number of Pods that can be terminated at any given time. This is useful for both steady state operations of your stateful services and for cluster upgrades.
To ensure that a limited number of Pods are concurrently disrupted, you set the PDB for your workload to maxUnavailable: 1. This ensures that at any point in the service operation, no more than one Pod is not running.
Note: You can also set the minAvailable value to ensure that a minimum number of Pods are running. However, if using minAvailable alone, to guarantee cluster availability, make sure that the value is increased if the size of the cluster increases. In contrast, the maxUnavailable value provides quorum protection for the cluster without any changes; the tradeoff is that only one instance can be disrupted for upgrade at a time.
The following PodDisruptionBudget policy manifest sets the maximum unavailable Pods to one for your MySQL application.
gke-stateful-mysql/kubernetes/mysql-pdb-maxunavailable.yaml
View on GitHub
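The manifest is not reproduced on this page. A sketch consistent with the description above; the name and label are assumptions:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: mysql-pdb      # assumed name
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: mysql       # assumed Pod label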
To apply the PDB policy to your cluster, follow these steps:
Apply the PDB policy using kubectl.
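A sketch, assuming the mysql1 namespace:
kubectl apply -n mysql1 -f mysql-pdb-maxunavailable.yaml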
View the status of the PDB.
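A sketch; the PDB name is an assumption:
kubectl get poddisruptionbudget -n mysql1 mysql-pdb -o yaml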
In the status section of the output, see the currentHealthy and desiredHealthy Pod counts.
Plan for MySQL binary upgrades
Kubernetes and GKE provide features to facilitate upgrades for the MySQL binary. However, you need to perform some operations to prepare for the upgrades.
Keep the following considerations in mind before you begin the upgrade process:
- Upgrades should first be carried out in a test environment. For production systems, you should perform further testing in a pre-production environment.
- For some binary releases, you cannot downgrade the version once an upgrade has been performed. Take the time to understand the implications of an upgrade.
- Replication sources can replicate to replicas running a newer version. However, replicating from a newer version to an older version is typically not supported.
- Make sure you have a complete database backup before deploying the upgraded version.
- Keep in mind the ephemeral nature of Kubernetes Pods. Any configuration state stored by the Pod that is not on the persistent volume will be lost when the Pod is redeployed.
- For MySQL binary upgrades, use the same PDB, node pool update strategy, and Probes as described earlier.
In a production environment, you should follow these best practices:
- Create a container image with the new version of MySQL.
- Persist the image build instructions in a source control repository.
- Use an automated image build and testing pipeline such as Cloud Build, and store the image binary in an image registry such as Artifact Registry.
To keep this tutorial simple, you will not build and persist a container image; instead, you use the public MySQL images.
Deploy the upgraded MySQL binary
To perform the MySQL binary upgrade, you issue a declarative command that modifies the image version of the StatefulSet resource. GKE performs the necessary steps to stop the current Pod, deploy a new Pod with the upgraded binary, and attach the persistent disk to the new Pod.
Verify that the PDB was created.
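A sketch of this check, assuming the mysql1 namespace:
kubectl get poddisruptionbudgets -n mysql1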
Get the list of stateful sets.
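A sketch:
kubectl get statefulsets -n mysql1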
Get the list of running Pods using the app label.
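A sketch, assuming the app=mysql label:
kubectl get pods -n mysql1 -l app=mysql -o wide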
Update the MySQL image in the StatefulSet.
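A sketch; the StatefulSet name, container name, image, and tag are all assumptions:
kubectl -n mysql1 set image statefulset/dbc1 mysql=mysql/mysql-server:8.0.30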
Check the status of the terminating Pods and the new Pods by re-running the Pod listing command.
Validate the MySQL binary upgrade
During the upgrade, you can verify the status of the rollout, the new Pods, and the existing Service.
Confirm the upgrade by running the kubectl rollout status command.
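A sketch, with the assumed StatefulSet name:
kubectl rollout status statefulset/dbc1 -n mysql1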
Confirm the image version by inspecting the StatefulSet.
Check the status of the cluster.
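A sketch: reconnect with MySQL Shell as before, then run the standard AdminAPI status call:
var cluster = dba.getCluster();
cluster.status();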
For each cluster instance, look for the status and version values in the output.
Roll back the last app deployment rollout
Warning: Some binary versions cannot be downgraded. Understand the implications and constraints before performing a binary upgrade.
When you revert the deployment of an upgraded binary version, the rollout process is reversed and a new set of Pods is deployed with the previous image version.
To revert the deployment to the previous working version, use the kubectl rollout undo command:
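A sketch, with the assumed StatefulSet name:
kubectl rollout undo statefulset/dbc1 -n mysql1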
Scale your database cluster horizontally
To scale your MySQL InnoDB Cluster horizontally, you add additional nodes to the GKE cluster node pool (only required if you are using Standard), deploy additional MySQL instances, then add each instance to the existing MySQL InnoDB Cluster.
Add nodes to your Standard cluster
This operation is not needed if you are using an Autopilot cluster.
To add nodes to your Standard cluster, follow the instructions below for Cloud Shell or the Google Cloud console. For detailed steps, see the GKE documentation on resizing node pools.
gcloud
In Cloud Shell, resize the default node pool to eight instances in each managed instance group.
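A sketch of the resize command; default-pool is the name GKE gives the default node pool, and --num-nodes applies per zone in a regional cluster:
gcloud container clusters resize $CLUSTER_NAME \
    --node-pool default-pool \
    --num-nodes=8 \
    --region=$REGION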
Console
To add nodes to your Standard cluster:
- Open the Google Kubernetes Engine Clusters page in the Google Cloud console.
- Select Nodes, and click on default pool.
- Scroll down to Instance groups.
- For each instance group, resize the Number of nodes value from 5 to 8 nodes.
Add MySQL Pods to the primary cluster
To deploy additional MySQL Pods to scale your cluster horizontally, follow these steps:
In Cloud Shell, update the number of replicas in the MySQL StatefulSet from three to five.
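A sketch, with the assumed StatefulSet name:
kubectl scale statefulset dbc1 -n mysql1 --replicas=5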
Verify the progress of the deployment.
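A sketch, with the same assumed names:
kubectl get statefulset dbc1 -n mysql1 --watch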
To determine whether the Pods are ready, use the --watch flag to watch the deployment. If you are using Autopilot clusters and see scheduling errors, this might indicate that GKE is provisioning nodes to accommodate the additional Pods.
Configure the group replication settings for the new MySQL instances to add to the cluster.
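The invocation is not reproduced on this page; a plausible sketch, assuming the script accepts the Pod ordinals as arguments:
bash ../scripts/c1-clustersetup.sh 3 4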
The script submits the commands to the instances running on the Pods with ordinals 3 through 4.
Open MySQL Shell.
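A sketch, with the same assumed names as earlier:
kubectl -n mysql1 exec -it dbc1-0 -- mysqlsh -u root -p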
Configure the two new MySQL instances.
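A sketch, with the same assumed account and host names:
dba.configureInstance('icadmin@dbc1-3.mysql.mysql1.svc.cluster.local');
dba.configureInstance('icadmin@dbc1-4.mysql.mysql1.svc.cluster.local');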
The commands check whether the instance is configured properly for MySQL InnoDB Cluster usage and perform the necessary configuration changes.
Add one of the new instances to the primary cluster.
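A sketch, with the same assumptions:
var cluster = dba.getCluster();
cluster.addInstance('icadmin@dbc1-3.mysql.mysql1.svc.cluster.local', {recoveryMethod: 'clone'});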
Add a second new instance to the primary cluster.
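A sketch:
cluster.addInstance('icadmin@dbc1-4.mysql.mysql1.svc.cluster.local', {recoveryMethod: 'clone'});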
Obtain the ClusterSet status, which also includes the Cluster status.
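A sketch of the standard call:
dba.getClusterSet().status({extended: 1});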
Exit MySQL Shell.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the project
The easiest way to avoid billing is to delete the project you created for the tutorial.
Caution: Deleting a project removes all of the resources in that project and cannot be undone.
If you plan to explore multiple tutorials and quickstarts, reusing projects can help you avoid exceeding project quota limits.