
Deploy TiDB on Alibaba Cloud Kubernetes

This document describes how to deploy a TiDB cluster on Alibaba Cloud Kubernetes (ACK) from your laptop (Linux or macOS) for development or testing.

To deploy TiDB Operator and the TiDB cluster in a self-managed Kubernetes environment, refer to Deploy TiDB Operator and Deploy TiDB on General Kubernetes.

Prerequisites

  • aliyun-cli >= 3.0.15 and configure aliyun-cli

  • kubectl >= 1.12

  • Helm 3

  • jq >= 1.6

  • terraform 0.12.*

You can use Cloud Shell of Alibaba Cloud to perform operations. All the tools have been pre-installed and configured in the Cloud Shell of Alibaba Cloud.
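
If you run the tools locally instead, a quick sanity check of the installed versions might look like the following sketch (the exact output format varies by tool):

# Check that each required tool is installed and meets the minimum version.
aliyun version          # expect >= 3.0.15
kubectl version --client
helm version
jq --version            # expect >= 1.6
terraform version       # expect 0.12.x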

Required privileges

To deploy a TiDB cluster, make sure you have the following privileges:

  • AliyunECSFullAccess
  • AliyunESSFullAccess
  • AliyunVPCFullAccess
  • AliyunSLBFullAccess
  • AliyunCSFullAccess
  • AliyunEIPFullAccess
  • AliyunECIFullAccess
  • AliyunVPNGatewayFullAccess
  • AliyunNATGatewayFullAccess

Overview of things to create

In the default configuration, you will create:

  • A new VPC

  • An ECS instance as the bastion machine

  • A managed ACK (Alibaba Cloud Kubernetes) cluster with the following ECS instance worker nodes:

    • An auto-scaling group of 2 * instances (2 cores, 2 GB RAM). The default auto-scaling group of managed Kubernetes must have at least two instances to host system services such as CoreDNS
    • An auto-scaling group of 3 * ecs.g5.large instances for deploying the PD cluster
    • An auto-scaling group of 3 * ecs.i2.2xlarge instances for deploying the TiKV cluster
    • An auto-scaling group of 2 * ecs.c5.4xlarge instances for deploying the TiDB cluster
    • An auto-scaling group of 1 * ecs.c5.xlarge instance for deploying monitoring components
    • A 100 GB cloud disk used to store monitoring data

All the instances except ACK mandatory workers are deployed across availability zones (AZs) to provide cross-AZ high availability. The auto-scaling group ensures the desired number of healthy instances, so the cluster can auto-recover from node failure or even AZ failure.

Deploy

Deploy ACK, TiDB Operator, and the node pool for the TiDB cluster

  1. Configure the target region and Alibaba Cloud key (you can also set these variables in the terraform command prompt):

    
                    
    export TF_VAR_ALICLOUD_REGION=${REGION} && \
    export TF_VAR_ALICLOUD_ACCESS_KEY=${ACCESS_KEY} && \
    export TF_VAR_ALICLOUD_SECRET_KEY=${SECRET_KEY}

    The variables.tf file contains default settings of variables used for deploying the cluster. You can change it or use the -var option to override a specific variable to fit your needs.
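
    For example, a later terraform apply run can override a single variable from the command line without editing any file (tikv_count is one of the variables shown in the example below):

    # Override the number of TiKV nodes for this run only.
    terraform apply -var "tikv_count=5"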

  2. Use Terraform to set up the cluster.

    
                    
    git clone --depth=1 https://github.com/pingcap/tidb-operator && \
    cd tidb-operator/deploy/aliyun

    You can create or modify terraform.tfvars to set the values of the variables, and configure the cluster to fit your needs. You can view the configurable variables and their descriptions in variables.tf. The following is an example of how to configure the ACK cluster name, the TiDB cluster name, the TiDB Operator version, and the number of PD, TiKV, and TiDB nodes.

    
                    
    cluster_name = " testack " tidb_cluster_name = "测试db" tikv_count = 3 tidb_count = 2 pd_count = 3 operator_version = "v1.5.0"
    • To deploy TiFlash in the cluster, set create_tiflash_node_pool = true in terraform.tfvars. You can also configure the node count and instance type of the TiFlash node pool by modifying tiflash_count and tiflash_instance_type. By default, the value of tiflash_count is 2, and the value of tiflash_instance_type is ecs.i2.2xlarge.

    • To deploy TiCDC in the cluster, set create_cdc_node_pool = true in terraform.tfvars. You can also configure the node count and instance type of the TiCDC node pool by modifying cdc_count and cdc_instance_type. By default, the value of cdc_count is 3, and the value of cdc_instance_type is ecs.c5.2xlarge. A combined terraform.tfvars sketch with both optional node pools enabled follows this list.
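
    As a sketch, enabling both optional node pools in terraform.tfvars with their documented default sizes could look like this (adjust the counts and instance types to your needs):

    create_tiflash_node_pool = true
    tiflash_count = 2
    tiflash_instance_type = "ecs.i2.2xlarge"
    create_cdc_node_pool = true
    cdc_count = 3
    cdc_instance_type = "ecs.c5.2xlarge"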

    After the configuration, execute the following commands to initialize and deploy the cluster:

    
                    
    terraform init

    Input "yes" to confirm execution when you run the followingapplycommand:

    
                    
    terraform apply

    If you get an error while running terraform apply, fix the error (for example, lack of permission) according to the error description and run terraform apply again.

    It takes 5 to 10 minutes to create the whole stack using terraform apply. Once the installation is complete, the basic cluster information is printed:

    
                    
    Apply complete! Resources: 3 added, 0 changed, 1 destroyed.

    Outputs:

    bastion_ip = 47.96.174.214
    cluster_id = c2d9b20854a194f158ef2bc8ea946f20e
    kubeconfig_file = /tidb-operator/deploy/aliyun/credentials/kubeconfig
    monitor_endpoint = not_created
    region = cn-hangzhou
    ssh_key_file = /tidb-operator/deploy/aliyun/credentials/my-cluster-keyZ.pem
    tidb_endpoint = not_created
    tidb_version = v3.0.0
    vpc_id = vpc-bp1v8i5rwsc7yh8dwyep5
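
    If you need any of these values again later, terraform output re-prints them from the saved state without re-running the apply. For example:

    # Re-print individual output values by name.
    terraform output bastion_ip
    terraform output ssh_key_file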
  3. You can then interact with the ACK cluster using kubectl or helm:

    
                    
    export KUBECONFIG=$PWD/credentials/kubeconfig
    
                    
    kubectl version
    
                    
    helm ls
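
    To confirm that TiDB Operator is running before you deploy the TiDB cluster, you can list its pods. This is a sketch that assumes the chart's standard app.kubernetes.io/name=tidb-operator label:

    # TiDB Operator pods should all be in the Running state.
    kubectl get pods --all-namespaces -l app.kubernetes.io/name=tidb-operator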

Deploy the TiDB cluster and monitor

  1. Prepare the TidbCluster, TidbDashboard, and TidbMonitor CR files:

    
                    
    cp manifests/db.yaml.example db.yaml && \
    cp manifests/db-monitor.yaml.example db-monitor.yaml && \
    cp manifests/dashboard.yaml.example tidb-dashboard.yaml

    To complete the CR file configuration, refer to TiDB Operator API documentation and Configure a TiDB Cluster.

    • To deploy TiFlash, configure spec.tiflash in db.yaml as follows:

      
                        
      spec:
        ...
        tiflash:
          baseImage: pingcap/tiflash
          maxFailoverCount: 0
          nodeSelector:
            dedicated: TIDB_CLUSTER_NAME-tiflash
          replicas: 1
          storageClaims:
          - resources:
              requests:
                storage: 100Gi
            storageClassName: local-volume
          tolerations:
          - effect: NoSchedule
            key: dedicated
            operator: Equal
            value: TIDB_CLUSTER_NAME-tiflash

      To configure other parameters, refer to Configure a TiDB Cluster.

      Modify replicas, storageClaims[].resources.requests.storage, and storageClassName according to your needs.

    • To deploy TiCDC, configure spec.ticdc in db.yaml as follows:

      
                        
      spec:
        ...
        ticdc:
          baseImage: pingcap/ticdc
          nodeSelector:
            dedicated: TIDB_CLUSTER_NAME-cdc
          replicas: 3
          tolerations:
          - effect: NoSchedule
            key: dedicated
            operator: Equal
            value: TIDB_CLUSTER_NAME-cdc

      Modify replicas according to your needs.

  2. Create a Namespace:

    
                    
    kubectl --kubeconfig credentials/kubeconfig create namespace ${namespace}
  3. Deploy the TiDB cluster:

    
                    
    kubectl --kubeconfig credentials/kubeconfig create -f db.yaml -n ${namespace} && \
    kubectl --kubeconfig credentials/kubeconfig create -f db-monitor.yaml -n ${namespace}
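
    It takes several minutes for all components to start. A quick way to watch the rollout, assuming the same namespace variable as above:

    # Watch the TiDB cluster pods (PD, TiKV, TiDB, monitor) until they are all Running.
    kubectl --kubeconfig credentials/kubeconfig get pods -n ${namespace} --watch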

Access the database

You can connect to the TiDB cluster via the bastion instance. All the necessary information is in the output printed after the installation is finished (replace the ${} parts with values from the output):


              
ssh -i credentials/${cluster_name}-key.pem root@${bastion_ip}

              
mysql --comments -h ${tidb_lb_ip} -P 4000 -u root

tidb_lb_ip is the LoadBalancer IP of the TiDB service.
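
Once connected, a quick way to verify that the cluster responds is to run a simple query from the bastion (tidb_version() is a built-in TiDB function):

# Print the TiDB version to confirm the connection works.
mysql --comments -h ${tidb_lb_ip} -P 4000 -u root -e "SELECT tidb_version()\G"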

Access Grafana

Visit <monitor-lb>:3000 to view the Grafana dashboards. monitor-lb is the LoadBalancer IP of the Monitor service.

The initial login user account and password:

  • User: admin
  • Password: admin

Access TiDB Dashboard Web UI

You can access the TiDB Dashboard web UI by visiting <tidb-dashboard-exposed>:12333 in your browser.

tidb-dashboard-exposed is the LoadBalancer IP of the TiDB Dashboard service.

Upgrade

To upgrade the TiDB cluster, modify the spec.version variable by executing kubectl --kubeconfig credentials/kubeconfig patch tc ${tidb_cluster_name} -n ${namespace} --type merge -p '{"spec":{"version":"${version}"}}'.
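
After the patch, you can confirm that the cluster CR carries the target version. A minimal check, using the same tc short name as in the command above:

# Print the version currently set in the TidbCluster spec.
kubectl --kubeconfig credentials/kubeconfig get tc ${tidb_cluster_name} -n ${namespace} -o jsonpath='{.spec.version}'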

This may take a while to complete. You can watch the process using the following command:


              
kubectl get pods --namespace ${namespace} -o wide --watch

Scale out the TiDB cluster

To scale out the TiDB cluster, modify tikv_count, tiflash_count, cdc_count, or tidb_count in the terraform.tfvars file, and then run terraform apply to scale out the number of nodes for the corresponding components.

After the nodes scale out, modify the replicas of the corresponding components by running kubectl --kubeconfig credentials/kubeconfig edit tc ${tidb_cluster_name} -n ${namespace}.
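
If you prefer a non-interactive command over kubectl edit, a merge patch on the same CR achieves the same result. A sketch that sets TiKV to five replicas (adjust the component and count to match your change):

# Equivalent to editing spec.tikv.replicas in the TidbCluster CR.
kubectl --kubeconfig credentials/kubeconfig patch tc ${tidb_cluster_name} -n ${namespace} --type merge -p '{"spec":{"tikv":{"replicas":5}}}'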

Configure

Configure TiDB Operator

You can set the variables in terraform.tfvars to configure TiDB Operator. Most configuration items can be modified after you understand the semantics based on the comments of the variables. Note that the operator_helm_values configuration item can provide a customized values.yaml configuration file for TiDB Operator. For example:

  • Set operator_helm_values in terraform.tfvars:

    
                    
    operator_helm_values = "./my-operator-values.yaml"
  • Set operator_helm_values in main.tf:

    
                    
    operator_helm_values = file("./my-operator-values.yaml")

In the default configuration, the Terraform script creates a new VPC. To use an existing VPC, set vpc_id in variables.tf. In this case, Kubernetes nodes are not deployed in AZs where no vSwitch is configured.

Configure the TiDB cluster

See TiDB Operator API Documentation and Configure a TiDB Cluster.

Manage multiple TiDB clusters

To manage multiple TiDB clusters in a single Kubernetes cluster, you need to edit ./main.tf and add the tidb-cluster declarations based on your needs. For example:


              
module "tidb-cluster-dev" { source = "../modules/aliyun/tidb-cluster" providers = { helm = helm.default } cluster_name = "dev-cluster" ack = module.tidb-operator pd_count = 1 tikv_count = 1 tidb_count = 1 } module "tidb-cluster-staging" { source = "../modules/aliyun/tidb-cluster" providers = { helm = helm.default } cluster_name = "staging-cluster" ack = module.tidb-operator pd_count = 3 tikv_count = 3 tidb_count = 2 }

All the configurable parameters in tidb-cluster are as follows:

| Parameter | Description | Default value |
| --- | --- | --- |
| ack | The structure that wraps the target Kubernetes cluster information (required) | nil |
| cluster_name | The TiDB cluster name (required and unique) | nil |
| tidb_version | The TiDB cluster version | v3.0.1 |
| tidb_cluster_chart_version | The tidb-cluster Helm chart version | v1.0.1 |
| pd_count | The number of PD nodes | 3 |
| pd_instance_type | The PD instance type | ecs.g5.large |
| tikv_count | The number of TiKV nodes | 3 |
| tikv_instance_type | The TiKV instance type | ecs.i2.2xlarge |
| tiflash_count | The number of TiFlash nodes | 2 |
| tiflash_instance_type | The TiFlash instance type | ecs.i2.2xlarge |
| cdc_count | The number of TiCDC nodes | 3 |
| cdc_instance_type | The TiCDC instance type | ecs.c5.2xlarge |
| tidb_count | The number of TiDB nodes | 2 |
| tidb_instance_type | The TiDB instance type | ecs.c5.4xlarge |
| monitor_instance_type | The instance type of monitoring components | ecs.c5.xlarge |
| override_values | The values.yaml configuration file of the TiDB cluster. You can read it using the file() function | nil |
| local_exec_interpreter | The interpreter that executes the command line instruction | ["/bin/sh", "-c"] |
| create_tidb_cluster_release | Whether to create the TiDB cluster using Helm | false |

Manage multiple Kubernetes clusters

It is recommended to use a separate Terraform module to manage a specific Kubernetes cluster. (A Terraform module is a directory that contains the .tf scripts.)

deploy/aliyun combines multiple reusable Terraform scripts in deploy/modules. To manage multiple clusters, perform the following operations in the root directory of the tidb-operator project:

  1. Create a directory for each cluster. For example:

    
                    
    mkdir -p deploy/aliyun-staging
  2. Refer to main.tf in deploy/aliyun and write your own script. For example:

    
                    
    provider "alicloud" { region = ${REGION} access_key = ${ACCESS_KEY} secret_key = ${SECRET_KEY} } module "tidb-operator" { source = "../modules/aliyun/tidb-operator" region = ${REGION} access_key = ${ACCESS_KEY} secret_key = ${SECRET_KEY} cluster_name = "example-cluster" key_file = "ssh-key.pem" kubeconfig_file = "kubeconfig" } provider "helm" { alias = "default" insecure = true install_tiller = false kubernetes { config_path = module.tidb-operator.kubeconfig_filename } } module "tidb-cluster" { source = "../modules/aliyun/tidb-cluster" providers = { helm = helm.default } cluster_name = "example-cluster" ack = module.tidb-operator } module "bastion" { source = "../modules/aliyun/bastion" bastion_name = "example-bastion" key_name = module.tidb-operator.key_name vpc_id = module.tidb-operator.vpc_id vswitch_id = module.tidb-operator.vswitch_ids[0] enable_ssh_to_worker = true worker_security_group_id = module.tidb-operator.security_group_id }

You can customize this script. For example, you can remove the module "bastion" declaration if you do not need the bastion machine.
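
After the script is ready, initialize and apply it from the new directory, just as you did for deploy/aliyun:

# Deploy the additional cluster from its own module directory.
cd deploy/aliyun-staging
terraform init
terraform apply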

Destroy

  1. Refer to Destroy a TiDB cluster to delete the cluster.

  2. Destroy the ACK cluster by running the following command:

    
                    
    terraform destroy

If the Kubernetes cluster is not successfully created, the destroy operation might return an error and fail. In such cases, manually remove the Kubernetes resources from the local state:


              
terraform state list

              
terraform state rm module.ack.alicloud_cs_managed_kubernetes.k8s

It may take a long time to finish destroying the cluster.

Limitation

You cannot change pod cidr, service cidr, and worker instance types once the cluster is created.
