# Deploy TiDB Operator on Kubernetes
This document describes how to deploy TiDB Operator on Kubernetes.
## Prerequisites
Before deploying TiDB Operator, make sure the following items are installed on your machine:
- Kubernetes >= v1.12
- DNS addons
- PersistentVolume
- RBAC enabled (optional)
- Helm 3
## Deploy the Kubernetes cluster
TiDB Operator runs in the Kubernetes cluster. You can refer to the document of how to set up Kubernetes to set up a Kubernetes cluster. Make sure that the Kubernetes version is v1.12 or higher. If you want to deploy a very simple Kubernetes cluster for testing purposes, consult the Get Started document.
For some public cloud environments, refer to the following documents:
TiDB Operator uses Persistent Volumes to persist the data of the TiDB cluster (including the database, monitoring data, and backup data), so the Kubernetes cluster must provide at least one kind of persistent volume.
It is recommended to enable RBAC in the Kubernetes cluster.
## Install Helm
Refer to Use Helm to install Helm and configure it with the official PingCAP chart repository.
## Deploy TiDB Operator
### Create CRD
TiDB Operator uses Custom Resource Definitions (CRDs) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the `TidbCluster` CRD, which is a one-time job in your Kubernetes cluster.
```shell
kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.0/manifests/crd.yaml
```
If the server cannot access the Internet, you need to download the `crd.yaml` file on a machine with Internet access before installing:

```shell
wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.0/manifests/crd.yaml
kubectl create -f ./crd.yaml
```
If the following message is displayed when you run `kubectl get crd`, the CRD installation is successful:

```shell
kubectl get crd
```

```
NAME                                   CREATED AT
backups.m.rzhenli.com                  2020-06-11T07:59:40Z
backupschedules.m.rzhenli.com          2020-06-11T07:59:41Z
restores.m.rzhenli.com                 2020-06-11T07:59:40Z
tidbclusterautoscalers.m.rzhenli.com   2020-06-11T07:59:42Z
tidbclusters.m.rzhenli.com             2020-06-11T07:59:38Z
tidbinitializers.m.rzhenli.com         2020-06-11T07:59:42Z
tidbmonitors.m.rzhenli.com             2020-06-11T07:59:41Z
```
### Customize TiDB Operator deployment
To deploy TiDB Operator quickly, you can refer to Deploy TiDB Operator. This section describes how to customize the deployment of TiDB Operator.
After creating CRDs in the step above, there are two methods to deploy TiDB Operator on your Kubernetes cluster: online and offline.
When you use TiDB Operator, `tidb-scheduler` is not mandatory. Refer to tidb-scheduler and default-scheduler to confirm whether you need to deploy `tidb-scheduler`. If you do not need `tidb-scheduler`, you can configure `scheduler.create: false` in the `values.yaml` file, so `tidb-scheduler` is not deployed.
#### Online deployment
1. Get the `values.yaml` file of the `tidb-operator` chart you want to deploy:

    ```shell
    mkdir -p ${HOME}/tidb-operator && \
    helm inspect values pingcap/tidb-operator --version=${chart_version} > ${HOME}/tidb-operator/values-tidb-operator.yaml
    ```

2. Configure TiDB Operator.

    TiDB Operator manages all TiDB clusters in the Kubernetes cluster by default. If you only need it to manage clusters in a specific namespace, you can set `clusterScoped: false` in `values.yaml`. You can modify other items such as `limits`, `requests`, and `replicas` as needed.

3. Deploy TiDB Operator:

    ```shell
    helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=${chart_version} -f ${HOME}/tidb-operator/values-tidb-operator.yaml && \
    kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator
    ```

4. Upgrade TiDB Operator.

    If you need to upgrade TiDB Operator, modify the `${HOME}/tidb-operator/values-tidb-operator.yaml` file, and then execute the following command to upgrade:

    ```shell
    helm upgrade tidb-operator pingcap/tidb-operator --namespace=tidb-admin -f ${HOME}/tidb-operator/values-tidb-operator.yaml
    ```
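The namespace-scoped option mentioned above can be sketched as a `values.yaml` fragment (illustrative; `clusterScoped` is the chart key named in this document):

```yaml
# values.yaml fragment: manage only TiDB clusters in the
# namespace TiDB Operator is deployed in, not the whole cluster
clusterScoped: false
```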
#### Offline installation
If your server cannot access the Internet, install TiDB Operator offline by the following steps:
1. Download the `tidb-operator` chart.

    If the server has no access to the Internet, you cannot configure the Helm repository to install the TiDB Operator component and other applications. At this time, you need to download the chart file needed for cluster installation on a machine with Internet access, and then copy it to the server.

    Use the following command to download the `tidb-operator` chart file:

    ```shell
    wget http://charts.pingcap.org/tidb-operator-v1.5.0.tgz
    ```

    Copy the `tidb-operator-v1.5.0.tgz` file to the target server and extract it to the current directory:

    ```shell
    tar zxvf tidb-operator-v1.5.0.tgz
    ```

2. Download the Docker images used by TiDB Operator.

    If the server has no access to the Internet, you need to download all Docker images used by TiDB Operator on a machine with Internet access, upload them to the server, and then use `docker load` to install the Docker images on the server.

    The Docker images used by TiDB Operator are:

    ```
    pingcap/tidb-operator:v1.5.0
    pingcap/tidb-backup-manager:v1.5.0
    bitnami/kubectl:latest
    pingcap/advanced-statefulset:v0.3.3
    k8s.gcr.io/kube-scheduler:v1.16.9
    ```

    Among them, `k8s.gcr.io/kube-scheduler:v1.16.9` should be consistent with the version of your Kubernetes cluster. You do not need to download it separately.

    Next, download all these images using the following command:

    ```shell
    docker pull pingcap/tidb-operator:v1.5.0
    docker pull pingcap/tidb-backup-manager:v1.5.0
    docker pull bitnami/kubectl:latest
    docker pull pingcap/advanced-statefulset:v0.3.3

    docker save -o tidb-operator-v1.5.0.tar pingcap/tidb-operator:v1.5.0
    docker save -o tidb-backup-manager-v1.5.0.tar pingcap/tidb-backup-manager:v1.5.0
    docker save -o bitnami-kubectl.tar bitnami/kubectl:latest
    docker save -o advanced-statefulset-v0.3.3.tar pingcap/advanced-statefulset:v0.3.3
    ```

    Next, upload these Docker images to the server, and execute `docker load` to install these Docker images on the server:

    ```shell
    docker load -i tidb-operator-v1.5.0.tar
    docker load -i tidb-backup-manager-v1.5.0.tar
    docker load -i bitnami-kubectl.tar
    docker load -i advanced-statefulset-v0.3.3.tar
    ```

3. Configure TiDB Operator.

    TiDB Operator embeds a `kube-scheduler` to implement a custom scheduler. If you need to deploy `tidb-scheduler`, modify the `./tidb-operator/values.yaml` file to configure the Docker image's name and version of this built-in `kube-scheduler` component. For example, if `kube-scheduler` in your Kubernetes cluster uses the image `k8s.gcr.io/kube-scheduler:v1.16.9`, set `./tidb-operator/values.yaml` as follows:

    ```yaml
    ...
    scheduler:
      serviceAccount: tidb-scheduler
      logLevel: 2
      replicas: 1
      schedulerName: tidb-scheduler
      resources:
        limits:
          cpu: 250m
          memory: 150Mi
        requests:
          cpu: 80m
          memory: 50Mi
      kubeSchedulerImageName: k8s.gcr.io/kube-scheduler
      kubeSchedulerImageTag: v1.16.9
    ...
    ```

    You can modify other items such as `limits`, `requests`, and `replicas` as needed.

4. Install TiDB Operator.

    Install TiDB Operator using the following command:

    ```shell
    helm install tidb-operator ./tidb-operator --namespace=tidb-admin
    ```

5. Upgrade TiDB Operator.

    If you need to upgrade TiDB Operator, modify the `./tidb-operator/values.yaml` file, and then execute the following command to upgrade:

    ```shell
    helm upgrade tidb-operator ./tidb-operator --namespace=tidb-admin
    ```
## Customize TiDB Operator
To customize TiDB Operator, modify `${HOME}/tidb-operator/values-tidb-operator.yaml`. The rest of the sections in this document use `values.yaml` to refer to `${HOME}/tidb-operator/values-tidb-operator.yaml`.
TiDB Operator contains two components:
- tidb-controller-manager
- tidb-scheduler
These two components are stateless and deployed via `Deployment`. You can customize resource `limit`, `request`, and `replicas` in the `values.yaml` file.
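By analogy with the `scheduler` section shown earlier, such a customization might look like the following sketch (the `controllerManager` key mirrors the chart's layout for the `tidb-controller-manager` component; the specific resource values are illustrative assumptions, not recommendations):

```yaml
# values.yaml fragment: example resource customization
controllerManager:
  replicas: 1
  resources:
    limits:
      cpu: 250m
      memory: 150Mi
    requests:
      cpu: 80m
      memory: 50Mi
```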
After modifying `values.yaml`, run the following command to apply this modification:

```shell
helm upgrade tidb-operator pingcap/tidb-operator --version=${chart_version} --namespace=tidb-admin -f ${HOME}/tidb-operator/values-tidb-operator.yaml
```