
Deploy TiDB on Google Cloud GKE

This document describes how to deploy a Google Kubernetes Engine (GKE) cluster and deploy a TiDB cluster on GKE.

To deploy TiDB Operator and the TiDB cluster in a self-managed Kubernetes environment, refer to Deploy TiDB Operator and Deploy TiDB on General Kubernetes.

Prerequisites

Before deploying a TiDB cluster on GKE, make sure the following requirements are satisfied:

  • Install Helm 3: used for deploying TiDB Operator.

  • Install gcloud: a command-line tool used for creating and managing Google Cloud services.

  • Complete the operations in the Before you begin section of GKE Quickstart.

    This guide includes the following contents:

    • Enable Kubernetes APIs
    • Configure enough quota
  • Instance types: to gain better performance, the following is recommended:
    • PD nodes: n2-standard-4
    • TiDB nodes: n2-standard-16
    • TiKV or TiFlash nodes: n2-standard-16
  • Storage: For TiKV or TiFlash, it is recommended to use the pd-ssd disk type.
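Before proceeding, you can optionally verify that the tools are installed and that gcloud is authenticated. This is a minimal check; the exact output depends on your environment:


helm version
gcloud --version
gcloud auth list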

Configure the Google Cloud service

Configure your Google Cloud project and default region:


              
gcloud config set core/project ${your-project}
gcloud config set compute/region ${your-region}
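To confirm that the settings took effect, you can list the active configuration (an optional quick check):


gcloud config list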

Create a GKE cluster and node pool

  1. Create a GKE cluster and a default node pool:

    
                    
    gcloud container clusters create tidb --region us-east1 --machine-type n1-standard-4 --num-nodes=1
    • The command above creates a regional cluster.
    • The --num-nodes=1 option indicates that one node is created in each zone. So if there are three zones in the region, there are three nodes in total, which ensures high availability.
    • It is recommended to use regional clusters in production environments. For other types of clusters, refer to Types of GKE clusters.
    • The command above creates a cluster in the default network. If you want to specify a network, use the --network/subnet option. For more information, refer to Creating a regional cluster.
  2. Create separate node pools for PD, TiKV, and TiDB:

    
                    
    gcloud container node-pools create pd --cluster tidb --machine-type n2-standard-4 --num-nodes=1 \
        --node-labels=dedicated=pd --node-taints=dedicated=pd:NoSchedule
    gcloud container node-pools create tikv --cluster tidb --machine-type n2-highmem-8 --num-nodes=1 \
        --node-labels=dedicated=tikv --node-taints=dedicated=tikv:NoSchedule
    gcloud container node-pools create tidb --cluster tidb --machine-type n2-standard-8 --num-nodes=1 \
        --node-labels=dedicated=tidb --node-taints=dedicated=tidb:NoSchedule

    The process might take a few minutes.
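    To verify that the node pools were created and that the nodes carry the expected dedicated labels, you can run the following optional checks (the region matches the cluster created above):

    gcloud container node-pools list --cluster tidb --region us-east1
    kubectl get nodes -L dedicated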

Configure StorageClass

After the GKE cluster is created, the cluster contains three StorageClasses of different disk types.

  • standard: pd-standard disk type (default)
  • standard-rwo: pd-balanced disk type
  • premium-rwo: pd-ssd disk type (recommended)

To improve I/O write performance, it is recommended to configure nodelalloc and noatime in the mountOptions field of the StorageClass resource. For details, see TiDB Environment and System Configuration Check.

It is recommended to use the default pd-ssd storage class premium-rwo or to set up a customized storage class:


              
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: pd-custom
provisioner: kubernetes.io/gce-pd
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-ssd
mountOptions:
  - nodelalloc,noatime
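If you use a customized StorageClass such as the pd-custom example above, save it to a file and apply it before creating the TiDB cluster, then reference its name in the storageClassName fields of the TidbCluster CR. The file name below is only an example:


kubectl apply -f pd-custom-storageclass.yaml
kubectl get storageclass pd-custom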

Use local storage

For the production environment, use zonal persistent disks.

If you need to simulate bare-metal performance, some Google Cloud instance types provide additional local store volumes. You can choose such instances for the TiKV node pool to achieve higher IOPS and lower latency.

  1. Create a node pool with local storage for TiKV:

    
                    
    gcloud container node-pools create tikv --cluster tidb --machine-type n2-highmem-8 --num-nodes=1 --local-ssd-count 1 \
        --node-labels dedicated=tikv --node-taints dedicated=tikv:NoSchedule

    If the TiKV node pool already exists, you can either delete the old pool and then create a new one, or change the pool name to avoid conflict.

  2. Deploy the local volume provisioner.

    You need to use the local-volume-provisioner to discover and manage the local storage. Executing the following command deploys and creates a local-storage storage class:

    
                    
    kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/gke/local-ssd-provision/local-ssd-provision.yaml
  3. Use the local storage.

    After the steps above, the local volume provisioner can discover all the local NVMe SSD disks in the cluster.

    Modify tikv.storageClassName in the tidb-cluster.yaml file to local-storage, as shown in the sketch below.
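    For example, the relevant part of tidb-cluster.yaml would look like the following sketch (all other fields omitted):

    spec:
      tikv:
        storageClassName: local-storage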

Deploy TiDB Operator

To deploy TiDB Operator on GKE, refer to Deploy TiDB Operator.

Deploy a TiDB cluster and the monitoring component

This section describes how to deploy a TiDB cluster and its monitoring component on GKE.

Create namespace

To create a namespace to deploy the TiDB cluster, run the following command:


              
kubectl create namespace tidb-cluster

Deploy

First, download the sample TidbCluster, TidbMonitor, and TidbDashboard configuration files:


              
curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/gcp/tidb-cluster.yaml && \
curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/gcp/tidb-monitor.yaml && \
curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/gcp/tidb-dashboard.yaml

Refer to Configure the TiDB cluster to further customize and configure the CR before applying.
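For example, if you created a customized StorageClass or the local-storage class earlier, you might point the components at them in tidb-cluster.yaml before applying. The values below are illustrative only, not required settings:


spec:
  version: v7.1.1
  pd:
    storageClassName: premium-rwo
  tikv:
    storageClassName: local-storage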

To deploy the TidbCluster and TidbMonitor CR in the GKE cluster, run the following command:


              
kubectl create -f tidb-cluster.yaml -n tidb-cluster && \
kubectl create -f tidb-monitor.yaml -n tidb-cluster

After the YAML files above are applied to the Kubernetes cluster, TiDB Operator creates the desired TiDB cluster and its monitoring component according to those files.
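As an optional sanity check, you can confirm that the custom resources were created before inspecting the Pods:


kubectl get tidbcluster -n tidb-cluster
kubectl get tidbmonitor -n tidb-cluster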

View the cluster status

To view the status of the starting TiDB cluster, run the following command:


              
kubectl get pods -n tidb-cluster

When all the Pods are in the Running or Ready state, the TiDB cluster is successfully started. For example:


              
NAME                              READY   STATUS    RESTARTS   AGE
tidb-discovery-5cb8474d89-n8cxk   1/1     Running   0          47h
tidb-monitor-6fbcc68669-dsjlc     3/3     Running   0          47h
tidb-pd-0                         1/1     Running   0          47h
tidb-pd-1                         1/1     Running   0          46h
tidb-pd-2                         1/1     Running   0          46h
tidb-tidb-0                       2/2     Running   0          47h
tidb-tidb-1                       2/2     Running   0          46h
tidb-tikv-0                       1/1     Running   0          47h
tidb-tikv-1                       1/1     Running   0          47h
tidb-tikv-2                       1/1     Running   0          47h

Access the TiDB database

After you deploy a TiDB cluster, you can access the TiDB database via the MySQL client.

Prepare a bastion host

The LoadBalancer created for your TiDB cluster is an intranet LoadBalancer. You can create a bastion host in the cluster VPC to access the database.


              
gcloud compute instances create bastion \
    --machine-type=n1-standard-4 \
    --image-project=centos-cloud \
    --image-family=centos-7 \
    --zone=${your-region}-a

Install the MySQL client and connect

After the bastion host is created, you can connect to the bastion host via SSH and access the TiDB cluster via the MySQL client.

  1. Connect to the bastion host via SSH:

    
                    
    gcloud compute ssh tidb@bastion
  2. Install the MySQL client:

    
                    
    sudo yum install mysql -y
  3. Connect the client to the TiDB cluster:

    
                    
    mysql --comments -h ${tidb-nlb-dnsname} -P 4000 -u root

    ${tidb-nlb-dnsname} is the LoadBalancer IP of the TiDB service. You can view the IP in the EXTERNAL-IP field of the kubectl get svc basic-tidb -n tidb-cluster execution result.

    For example:

    
                    
    $ mysql --comments -h 10.128.15.243 -P 4000 -u root
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MySQL connection id is 7823
    Server version: 5.7.25-TiDB-v7.1.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

    Copyright (c) 2000, 2022, Oracle and/or its affiliates.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MySQL [(none)]> show status;
    +--------------------+--------------------------------------+
    | Variable_name      | Value                                |
    +--------------------+--------------------------------------+
    | Ssl_cipher         |                                      |
    | Ssl_cipher_list    |                                      |
    | Ssl_verify_mode    | 0                                    |
    | Ssl_version        |                                      |
    | ddl_schema_version | 22                                   |
    | server_id          | 717420dc-0eeb-4d4a-951d-0d393aff295a |
    +--------------------+--------------------------------------+
    6 rows in set (0.01 sec)

Access the Grafana monitoring dashboard

Obtain the LoadBalancer IP of Grafana:


              
kubectl -n tidb-cluster get svc basic-grafana

For example:


              
$ kubectl -n tidb-cluster get svc basic-grafana
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
basic-grafana   LoadBalancer   10.15.255.169   34.123.168.114   3000:30657/TCP   35m

In the output above, the EXTERNAL-IP column is the LoadBalancer IP.

You can access the ${grafana-lb}:3000 address using your web browser to view monitoring metrics. Replace ${grafana-lb} with the LoadBalancer IP.
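If the dashboard does not load, you can first check from the bastion host that the Grafana endpoint is reachable; /api/health is Grafana's standard health endpoint:


curl http://${grafana-lb}:3000/api/health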

Access TiDB Dashboard Web UI

Obtain the LoadBalancer address of TiDB Dashboard by running the following command:


              
kubectl -n tidb-cluster get svc basic-tidb-dashboard-exposed

The following is an example:


              
$ kubectl -n tidb-cluster get svc basic-tidb-dashboard-exposed
NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)           AGE
basic-tidb-dashboard-exposed   LoadBalancer   10.15.255.169   34.123.168.114   12333:30657/TCP   35m

You can access the TiDB Dashboard Web UI by visiting ${EXTERNAL-IP}:12333 in your web browser.

Upgrade

To upgrade the TiDB cluster, execute the following command:


              
kubectl patch tc basic -n tidb-cluster --type merge -p '{"spec":{"version":"${version}"}}'

The upgrade process does not finish immediately. You can watch the upgrade progress by executing kubectl get pods -n tidb-cluster --watch.
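After the rolling upgrade finishes, one way to confirm that all Pods are running the new image is to print each Pod's first container image (a quick check, not a required step):


kubectl get pods -n tidb-cluster -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'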

Scale out

Before scaling out the cluster, you need to scale out the corresponding node pool so that the new instances have enough resources for operation.

This section describes how to scale out the GKE node pool and TiDB components.

Scale out GKE node pool

The following example shows how to scale out the tikv node pool of the tidb cluster to 6 nodes. Because the cluster is regional, setting --num-nodes to 2 creates two nodes in each of the three zones, which is 6 nodes in total:


              
gcloud container clusters resize tidb --node-pool tikv --num-nodes 2

Scale out TiDB components

After that, execute kubectl edit tc basic -n tidb-cluster and modify each component's replicas to the desired number of replicas. The scaling-out process is then completed.
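If you prefer a non-interactive command over kubectl edit, the same change can be made with a merge patch; the replica count below is only an example:


kubectl patch tc basic -n tidb-cluster --type merge -p '{"spec":{"tikv":{"replicas":6}}}'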

For more information on managing node pools, refer to GKE Node pools.

Deploy TiFlash and TiCDC

TiFlash is the columnar storage extension of TiKV.

TiCDC is a tool for replicating the incremental data of TiDB by pulling TiKV change logs.

These two components are not required in the deployment. This section shows a quick start example.

Create new node pools

  • Create a node pool for TiFlash:

    
                    
    gcloud container node-pools create tiflash --cluster tidb --machine-type n1-highmem-8 --num-nodes=1 \
        --node-labels dedicated=tiflash --node-taints dedicated=tiflash:NoSchedule
  • Create a node pool for TiCDC:

    
                    
    gcloud container node-pools create ticdc --cluster tidb --machine-type n1-standard-4 --num-nodes=1 \
        --node-labels dedicated=ticdc --node-taints dedicated=ticdc:NoSchedule

Configure and deploy

  • To deploy TiFlash, configure spec.tiflash in tidb-cluster.yaml. For example:

    
                    
    spec:
      ...
      tiflash:
        baseImage: pingcap/tiflash
        maxFailoverCount: 0
        replicas: 1
        storageClaims:
          - resources:
              requests:
                storage: 100Gi
        nodeSelector:
          dedicated: tiflash
        tolerations:
          - effect: NoSchedule
            key: dedicated
            operator: Equal
            value: tiflash

    To configure other parameters, refer to Configure a TiDB Cluster.

  • To deploy TiCDC, configure spec.ticdc in tidb-cluster.yaml. For example:

    
                    
    spec:
      ...
      ticdc:
        baseImage: pingcap/ticdc
        replicas: 1
        nodeSelector:
          dedicated: ticdc
        tolerations:
          - effect: NoSchedule
            key: dedicated
            operator: Equal
            value: ticdc

    Modify replicas according to your needs.

Finally, execute kubectl -n tidb-cluster apply -f tidb-cluster.yaml to update the TiDB cluster configuration.
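After the updated configuration is applied, new TiFlash and TiCDC Pods should appear. The following optional check assumes TiDB Operator's conventional app.kubernetes.io/component labels on the Pods:


kubectl get pods -n tidb-cluster -l app.kubernetes.io/component=tiflash
kubectl get pods -n tidb-cluster -l app.kubernetes.io/component=ticdc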

For detailed CR configuration, refer to API references and Configure a TiDB Cluster.
