
Get Started with TiDB on Kubernetes

This document introduces how to create a simple Kubernetes cluster and use it to deploy a basic test TiDB cluster using TiDB Operator.

To deploy TiDB Operator and a TiDB cluster, follow these steps:

  1. Create a test Kubernetes cluster
  2. Deploy TiDB Operator
  3. Deploy a TiDB cluster and its monitoring services
  4. Connect to a TiDB cluster
  5. Upgrade a TiDB cluster
  6. Destroy the TiDB cluster and the Kubernetes cluster


Step 1: Create a test Kubernetes cluster

This section describes two methods for creating a simple Kubernetes cluster. After creating a Kubernetes cluster, you can use it to test TiDB clusters managed by TiDB Operator. Choose the method that best suits your environment.

Alternatively, you can deploy a Kubernetes cluster on Google Kubernetes Engine on Google Cloud using the Google Cloud Shell.

Method 1: Create a Kubernetes cluster using kind

This section explains how to deploy a Kubernetes cluster using kind.

kind is a popular tool for running local Kubernetes clusters using Docker containers as cluster nodes. For available node image tags, see Docker Hub. The latest version of kind is used by default.

Before deployment, ensure that the following requirements are met:

  • Docker: installed and running
  • kubectl: version >= 1.12
  • kind: installed

Here is an example using kind v0.8.1:


              
kind create cluster

Expected output

Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.18.2)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind!

Check whether the cluster is successfully created:


              
kubectl cluster-info

Expected output

Kubernetes master is running at https://127.0.0.1:51026
KubeDNS is running at https://127.0.0.1:51026/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

You are now ready to deploy TiDB Operator.
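By default, kind creates a single-node cluster. If you want the test cluster to look a bit more like a real deployment, kind accepts a cluster configuration file. The following is a minimal sketch of a three-node layout (one control plane, two workers); the final command is commented out because it requires Docker and kind to be installed:

```shell
# Write a kind cluster config with one control-plane node and two workers.
cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF
# Then create the cluster from it (requires Docker and kind):
# kind create cluster --config kind-config.yaml
```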

Method 2: Create a Kubernetes cluster using minikube

You can create a Kubernetes cluster in a VM using minikube, which supports macOS, Linux, and Windows.

Before deployment, ensure that the following requirements are met:

  • minikube: version 1.0.0 or later (v1.24 or later is recommended). minikube requires a compatible hypervisor; for details, refer to the minikube installation instructions.
  • kubectl: version >= 1.12

Start a minikube Kubernetes cluster

After installing minikube, run the following command to start a minikube Kubernetes cluster:


              
minikube start

Expected output

You should see output like this, with some differences depending on your OS and hypervisor:

minikube v1.24.0 on Darwin 12.1
Automatically selected the docker driver. Other choices: hyperkit, virtualbox, ssh
Starting control plane node minikube in cluster minikube
Pulling base image ...
Downloading Kubernetes v1.22.3 preload ...
    > gcr.io/k8s-minikube/kicbase: 355.78 MiB / 355.78 MiB  100.00% 4.46 MiB p/
    > preloaded-images-k8s-v13-v1...: 501.73 MiB / 501.73 MiB  100.00% 5.18 MiB
Creating docker container (CPUs=2, Memory=1985MB) ...
Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: storage-provisioner, default-storageclass
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Use kubectl to interact with the cluster

To interact with the cluster, you can use kubectl, which is included as a sub-command in minikube. To make the kubectl command available, either add the following alias definition to your shell profile or run it in each new shell session.


              
alias kubectl='minikube kubectl --'
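To make the alias permanent, you can append it to your shell profile. The following is a sketch that assumes bash and ~/.bashrc (adjust the profile path for zsh or other shells); the grep guard keeps the line from being appended more than once:

```shell
# Append the alias to the profile only if it is not already there.
PROFILE="${PROFILE:-$HOME/.bashrc}"
ALIAS_LINE="alias kubectl='minikube kubectl --'"
grep -qxF "$ALIAS_LINE" "$PROFILE" 2>/dev/null || echo "$ALIAS_LINE" >> "$PROFILE"
```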

Run the following command to check the status of Kubernetes and ensure that kubectl can connect to it:


              
kubectl cluster-info

Expected output

Kubernetes master is running at https://192.168.64.2:8443
KubeDNS is running at https://192.168.64.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

You are now ready to deploy TiDB Operator.

Step 2: Deploy TiDB Operator

To deploy TiDB Operator, you need to follow these steps:

Install TiDB Operator CRDs

First, install the Custom Resource Definitions (CRDs) that TiDB Operator requires. These CRDs define the Kubernetes resources for the components that TiDB Operator manages, such as TiDB clusters, backups, and monitors.

To install the CRDs, run the following command:


              
kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/crd.yaml

Expected output

customresourcedefinition.apiextensions.k8s.io/tidbclusters.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/backups.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/restores.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/backupschedules.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbmonitors.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbinitializers.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbclusterautoscalers.pingcap.com created

Install TiDB Operator

To install TiDB Operator, you can use Helm 3. Follow these steps:

  1. Add the PingCAP repository:

    
                    
    helm repo add pingcap https://charts.pingcap.org/

    Expected output

    "pingcap" has been added to your repositories
  2. Create a namespace for TiDB Operator:

    
                    
    kubectl create namespace tidb-admin

    Expected output

    namespace/tidb-admin created
  3. Install TiDB Operator:

    
                    
    helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.5.1

    Expected output

    NAME: tidb-operator
    LAST DEPLOYED: Mon Jun 1 31:43 2020
    NAMESPACE: tidb-admin
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    Make sure tidb-operator components are running:

        kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator

To confirm that the TiDB Operator components are running, run the following command:


              
kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator

Expected output

NAME                                       READY   STATUS    RESTARTS   AGE
tidb-controller-manager-6d8d5c6d64-b8lv4   1/1     Running   0          2m22s
tidb-scheduler-644d59b46f-4f6sb            2/2     Running   0          2m22s

Once all the Pods are in the "Running" state, you can proceed to the next step.
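Instead of checking by eye, you can script the readiness check. The following helper is a sketch: it exits non-zero if any pod in standard `kubectl get pods` output is not in the Running state, so you can pipe real output into it, for example `kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator | all_running`:

```shell
# Exit non-zero if any listed pod is not Running.
all_running() {
  # Skip the header row; column 3 is STATUS in default kubectl output.
  awk 'NR > 1 && $3 != "Running" { bad = 1 } END { exit bad }'
}
```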

Step 3: Deploy a TiDB cluster and its monitoring services

This section describes how to deploy a TiDB cluster and its monitoring services.

Deploy a TiDB cluster


              
kubectl create namespace tidb-cluster && \
    kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic/tidb-cluster.yaml

Expected output

namespace/tidb-cluster created
tidbcluster.pingcap.com/basic created

If you need to deploy a TiDB cluster on an ARM64 machine, refer to Deploying a TiDB Cluster on ARM64 Machines.

Deploy TiDB Dashboard independently


              
kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic/tidb-dashboard.yaml

Expected output

tidbdashboard.pingcap.com/basic created

Deploy TiDB monitoring services


              
kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic/tidb-monitor.yaml

Expected output

tidbmonitor.pingcap.com/basic created

View the Pod status


              
watch kubectl get po -n tidb-cluster

Expected output

NAME                              READY   STATUS    RESTARTS   AGE
basic-discovery-6bb656bfd-xl5pb   1/1     Running   0          9m9s
basic-monitor-5fc8589c89-gvgjj    3/3     Running   0          8m58s
basic-pd-0                        1/1     Running   0          9m8s
basic-tidb-0                      2/2     Running   0          7m14s
basic-tikv-0                      1/1     Running   0          8m13s

Wait until all Pods for each service are started. Once you see that the Pods of each type (-pd, -tikv, and -tidb) are in the "Running" state, you can press Ctrl+C to return to the command line and proceed with connecting to your TiDB cluster.

Step 4: Connect to TiDB

To connect to TiDB, you can use the MySQL client since TiDB supports the MySQL protocol and most of its syntax.

Install the MySQL client

Before connecting to TiDB, make sure you have a MySQL-compatible client installed on the host where kubectl is installed. This can be the mysql executable from an installation of MySQL Server, MariaDB Server, or Percona Server, or a standalone client from your operating system's package repository.

Forward port 4000

To connect to TiDB, you need to forward a port from the local host to the TiDB service on Kubernetes.

First, get a list of services in the tidb-cluster namespace:


              
kubectl get svc -n tidb-cluster

Expected output

NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
basic-discovery          ClusterIP   10.101.69.5      <none>        10261/TCP            10m
basic-grafana            ClusterIP   10.106.41.250    <none>        3000/TCP             10m
basic-monitor-reloader   ClusterIP   10.99.157.225    <none>        9089/TCP             10m
basic-pd                 ClusterIP   10.104.43.232    <none>        2379/TCP             10m
basic-pd-peer            ClusterIP   None             <none>        2380/TCP             10m
basic-prometheus         ClusterIP   10.106.177.227   <none>        9090/TCP             10m
basic-tidb               ClusterIP   10.99.24.91      <none>        4000/TCP,10080/TCP   8m40s
basic-tidb-peer          ClusterIP   None             <none>        10080/TCP            8m40s
basic-tikv-peer          ClusterIP   None             <none>        20160/TCP            9m39s

In this case, the TiDB service is called basic-tidb. Run the following command to forward this port from the local host to the cluster:


              
kubectl port-forward -n tidb-cluster svc/basic-tidb 14000:4000 > pf14000.out &

If port 14000 is already occupied, you can replace it with an available port. This command runs in the background and writes its output to a file named pf14000.out, so you can continue to run commands in the current shell session.
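If you want to check up front whether a port is free instead of discovering a conflict after the fact, a small shell test works. This is a sketch that uses bash's /dev/tcp pseudo-device (so it needs bash rather than plain sh); a refused connection means nothing is listening on the port:

```shell
# Return success if nothing is listening on the given local port.
port_free() {
  ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_free 14000; then
  echo "port 14000 is free"
fi
```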

Connect to the TiDB service


              
mysql --comments -h 127.0.0.1 -P 14000 -u root

Expected output

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 76
Server version: 5.7.25-TiDB-v4.0.0 MySQL Community Server (Apache License 2.0)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

After connecting to the cluster, you can run the following commands to verify that some features are available in TiDB. Note that some commands require TiDB 4.0 or later versions. If you have deployed an earlier version, you need to upgrade the TiDB cluster.

Create a hello_world table

mysql> use test;
mysql> create table hello_world (id int unsigned not null auto_increment primary key, v varchar(32));
Query OK, 0 rows affected (0.17 sec)

mysql> select * from information_schema.tikv_region_status where db_name=database() and table_name='hello_world'\G
*************************** 1. row ***************************
       REGION_ID: 2
       START_KEY: 7480000000000000FF3700000000000000F8
         END_KEY:
        TABLE_ID: 55
         DB_NAME: test
      TABLE_NAME: hello_world
        IS_INDEX: 0
        INDEX_ID: NULL
      INDEX_NAME: NULL
  EPOCH_CONF_VER: 5
   EPOCH_VERSION: 23
   WRITTEN_BYTES: 0
      READ_BYTES: 0
APPROXIMATE_SIZE: 1
APPROXIMATE_KEYS: 0
1 row in set (0.03 sec)
Query the TiDB version

mysql> select tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v7.1.1
Edition: Community
Git Commit Hash: cf441574864be63938524e7dfcf7cc659edc3dd8
Git Branch: heads/refs/tags/v7.1.1
UTC Build Time: 2023-07-19 10:16:40
GoVersion: go1.20.6
Race Enabled: false
TiKV Min Version: 6.2.0-alpha
Check Table Before Drop: false
Store: tikv
1 row in set (0.01 sec)
Query the TiKV store status

mysql> select * from information_schema.tikv_store_status\G
*************************** 1. row ***************************
            STORE_ID: 4
             ADDRESS: basic-tikv-0.basic-tikv-peer.tidb-cluster.svc:20160
         STORE_STATE: 0
    STORE_STATE_NAME: Up
               LABEL: null
             VERSION: 5.2.1
            CAPACITY: 58.42GiB
           AVAILABLE: 36.18GiB
        LEADER_COUNT: 3
       LEADER_WEIGHT: 1
        LEADER_SCORE: 3
         LEADER_SIZE: 3
        REGION_COUNT: 21
       REGION_WEIGHT: 1
        REGION_SCORE: 21
         REGION_SIZE: 21
            START_TS: 2020-05-28 22:48:21
   LAST_HEARTBEAT_TS: 2020-05-28 22:52:01
              UPTIME: 3m40.598302151s
1 row in set (0.01 sec)
Query the TiDB cluster information

This command is effective only in TiDB 4.0 or later versions. If your TiDB does not support the command, you need to upgrade the TiDB cluster.

mysql> select * from information_schema.cluster_info\G
*************************** 1. row ***************************
           TYPE: tidb
       INSTANCE: basic-tidb-0.basic-tidb-peer.tidb-cluster.svc:4000
 STATUS_ADDRESS: basic-tidb-0.basic-tidb-peer.tidb-cluster.svc:10080
        VERSION: 5.2.1
       GIT_HASH: 689a6b6439ae7835947fcaccf329a3fc303986cb
     START_TIME: 2020-05-28T22:50:11Z
         UPTIME: 3m21.459090928s
*************************** 2. row ***************************
           TYPE: pd
       INSTANCE: basic-pd:2379
 STATUS_ADDRESS: basic-pd:2379
        VERSION: 5.2.1
       GIT_HASH: 56d4c3d2237f5bf6fb11a794731ed1d95c8020c2
     START_TIME: 2020-05-28T22:45:04Z
         UPTIME: 8m28.459091915s
*************************** 3. row ***************************
           TYPE: tikv
       INSTANCE: basic-tikv-0.basic-tikv-peer.tidb-cluster.svc:20160
 STATUS_ADDRESS: 0.0.0.0:20180
        VERSION: 5.2.1
       GIT_HASH: 198a2cea01734ce8f46d55a29708f123f9133944
     START_TIME: 2020-05-28T22:48:21Z
         UPTIME: 5m11.459102648s
3 rows in set (0.01 sec)

Access the Grafana dashboard

To access the Grafana dashboard locally, you need to forward the port for Grafana:


              
kubectl port-forward -n tidb-cluster svc/basic-grafana 3000 > pf3000.out &

You can access the Grafana dashboard at http://localhost:3000 on the host where you run kubectl. The default username and password in Grafana are both admin.

Note that if you run kubectl in a Docker container or on a remote host instead of your local host, you cannot access the Grafana dashboard at http://localhost:3000 from your browser. In this case, you can run the following command to listen on all addresses:


              
kubectl port-forward --address 0.0.0.0 -n tidb-cluster svc/basic-grafana 3000 > pf3000.out &

Then access Grafana through http://${remote-server-IP}:3000.

For more information about monitoring the TiDB cluster in TiDB Operator, refer to Deploy Monitoring and Alerts for a TiDB Cluster.

Access the TiDB Dashboard web UI

To access the TiDB Dashboard web UI locally, you need to forward the port for TiDB Dashboard:


              
kubectl port-forward -n tidb-cluster svc/basic-tidb-dashboard-exposed 12333 > pf12333.out &

You can access the TiDB Dashboard panel at http://localhost:12333 on the host where you run kubectl.

Note that if you run kubectl port-forward in a Docker container or on a remote host instead of your local host, you cannot access TiDB Dashboard using localhost from your local browser. In this case, you can run the following command to listen on all addresses:


              
kubectl port-forward --address 0.0.0.0 -n tidb-cluster svc/basic-tidb-dashboard-exposed 12333 > pf12333.out &

Then access TiDB Dashboard through http://${remote-server-IP}:12333.

Step 5: Upgrade a TiDB cluster

TiDB Operator simplifies the process of performing a rolling upgrade of a TiDB cluster. This section describes how to upgrade your TiDB cluster to the "nightly" release.

Before proceeding, familiarize yourself with the kubectl patch sub-command, which applies changes directly to running cluster resources. There are different patch strategies available, each with its own capabilities, limitations, and allowed formats. For more information, refer to the Kubernetes Patch document.
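As a sketch of working with such a patch safely, you can keep the merge-patch JSON in a shell variable so it can be inspected, and preview the change with a server-side dry run before applying it. The kubectl line is commented out because it requires a live cluster:

```shell
# The JSON merge patch that sets the cluster version to "nightly".
PATCH='{"spec": {"version": "nightly"}}'
echo "$PATCH"
# Preview without applying, then apply for real by dropping --dry-run:
# kubectl patch tc basic -n tidb-cluster --type merge -p "$PATCH" --dry-run=server
```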

Modify the TiDB cluster version

To update the version of the TiDB cluster to "nightly," you can use a JSON merge patch. Execute the following command:


              
kubectl patch tc basic -n tidb-cluster --type merge -p '{"spec": {"version": "nightly"}}'
Expected output

               
tidbcluster.pingcap.com/basic patched

Wait for Pods to restart

To monitor the progress of the cluster upgrade and observe the restart of its components, run the following command. You should see some Pods transitioning from Terminating to ContainerCreating and finally to Running.


              
watch kubectl get po -n tidb-cluster
Expected output

               
NAME                              READY   STATUS        RESTARTS   AGE
basic-discovery-6bb656bfd-7lbhx   1/1     Running       0          24m
basic-pd-0                        1/1     Terminating   0          5m31s
basic-tidb-0                      2/2     Running       0          2m19s
basic-tikv-0                      1/1     Running       0          4m13s

Forward the TiDB service port

Once all Pods have been restarted, you can verify that the cluster's version number has been updated.

Note that if you had previously set up port forwarding, you will need to reset it because the Pods it forwarded to have been destroyed and recreated.


              
kubectl port-forward -n tidb-cluster svc/basic-tidb 24000:4000 > pf24000.out &

If port 24000 is already in use, you can replace it with an available port.

Check the TiDB cluster version

To confirm the TiDB cluster's version, execute the following command:


              
mysql --comments -h 127.0.0.1 -P 24000 -u root -e 'select tidb_version()\G'
Expected output

Note that nightly is not a fixed version and the version might vary depending on the time the command is run.


               
*************************** 1. row ***************************
tidb_version(): Release Version: v7.1.1
Edition: Community
Git Commit Hash: cf441574864be63938524e7dfcf7cc659edc3dd8
Git Branch: heads/refs/tags/v7.1.1
UTC Build Time: 2023-07-19 10:16:40
GoVersion: go1.20.6
Race Enabled: false
TiKV Min Version: 6.2.0-alpha
Check Table Before Drop: false
Store: tikv

Step 6: Destroy the TiDB cluster and the Kubernetes cluster

After you finish testing, you can destroy the TiDB cluster and the Kubernetes cluster.

Destroy the TiDB cluster

To destroy the TiDB cluster, follow these steps:

Stop kubectl port forwarding

If you have any running kubectl processes that are forwarding ports, find them by running the following command, and then terminate them:


              
pgrep -lfa kubectl
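Once you have reviewed the pgrep listing, you can terminate the matching processes. The helper below is a sketch: pkill -f matches the pattern against each process's full command line, so `kill_matching 'kubectl port-forward'` ends all the background forwards at once.

```shell
# Terminate every process whose command line matches the given pattern.
kill_matching() {
  pkill -f "$1" 2>/dev/null || true
}
# Example: kill_matching 'kubectl port-forward'
```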

Delete the TiDB cluster

To delete the TiDB cluster, use the following command:


              
kubectl delete tc basic -n tidb-cluster

In this command, tc is short for tidbclusters.

Delete TiDB monitoring services

To delete the TiDB monitoring services, run the following command:


              
kubectl delete tidbmonitor basic -n tidb-cluster

Delete PV data

If your deployment includes persistent data storage, deleting the TiDB cluster does not remove the data in the cluster. If you do not need the data, you can clean it by running the following commands:


              
kubectl delete pvc -n tidb-cluster -l app.kubernetes.io/instance=basic,app.kubernetes.io/managed-by=tidb-operator && \
kubectl get pv -l app.kubernetes.io/namespace=tidb-cluster,app.kubernetes.io/managed-by=tidb-operator,app.kubernetes.io/instance=basic -o name | xargs -I {} kubectl patch {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

Delete namespaces

To ensure that there are no remaining resources, delete the namespace used for your TiDB cluster by running the following command:


              
kubectl delete namespace tidb-cluster

Destroy the Kubernetes cluster

The method for destroying a Kubernetes cluster depends on how it was created.

If you created the Kubernetes cluster using kind, use the following command to destroy it:


               
kind delete cluster

If you created the Kubernetes cluster using minikube, use the following command to destroy it:


               
minikube delete

See also

If you are interested in deploying a TiDB cluster in production environments, refer to the following documents:

On public clouds:

In a self-managed Kubernetes cluster:
