
Deploy a TiDB Cluster Using TiUP

TiUP is a cluster operation and maintenance tool introduced in TiDB 4.0. TiUP provides TiUP cluster, a cluster management component written in Golang. By using TiUP cluster, you can easily perform daily database operations, including deploying, starting, stopping, destroying, scaling, and upgrading a TiDB cluster, as well as managing TiDB cluster parameters.
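
For reference, the following sketch maps those daily operations to the corresponding tiup cluster subcommands (illustrative only; the cluster name, version, and topology file names are placeholders):

tiup cluster deploy <cluster-name> <version> topology.yaml    # deploy a new cluster
tiup cluster start <cluster-name>                             # start the cluster
tiup cluster stop <cluster-name>                              # stop the cluster
tiup cluster scale-out <cluster-name> scale-out.yaml          # scale out the cluster
tiup cluster scale-in <cluster-name> --node <node-id>         # scale in the cluster
tiup cluster upgrade <cluster-name> <version>                 # upgrade the cluster
tiup cluster edit-config <cluster-name>                       # manage cluster parameters
tiup cluster destroy <cluster-name>                           # destroy the cluster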

TiUP supports deploying TiDB, TiFlash, TiDB Binlog, TiCDC, and the monitoring system. This document introduces how to deploy TiDB clusters of different topologies.

Step 1. Prerequisites and precheck

Make sure that you have read the following documents:

Step 2. Deploy TiUP on the control machine

You can deploy TiUP on the control machine in either of two ways: online deployment and offline deployment.

Deploy TiUP online

Log in to the control machine using a regular user account (take the tidb user as an example). Subsequent TiUP installation and cluster management can be performed by the tidb user.

  1. Install TiUP by running the following command:

    
                    
    curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.m.rzhenli.com/install.sh | sh
  2. Set TiUP environment variables:

    1. Redeclare the global environment variables:

      
                        
      source .bash_profile
    2. Confirm whether TiUP is installed:

      
                        
      which tiup
  3. Install the TiUP cluster component:

    
                    
    tiup cluster
  4. If TiUP is already installed, update the TiUP cluster component to the latest version:

    
                    
    tiup update --self && tiup update cluster

    If Update successfully! is displayed, the TiUP cluster is updated successfully.

  5. Verify the current version of your TiUP cluster:

    
                    
    tiup --binary cluster

Deploy TiUP offline

Perform the following steps in this section to deploy a TiDB cluster offline using TiUP:

Prepare the TiUP offline component package

Method 1: On the official download page, select the offline mirror package (TiUP offline package included) of the target TiDB version. Note that you need to download the server package and toolkit package at the same time.

Method 2: Manually pack an offline component package using tiup mirror clone. The detailed steps are as follows:

  1. Install the TiUP package manager online.

    1. Install the TiUP tool:

      
                        
      curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.m.rzhenli.com/install.sh | sh
    2. Redeclare the global environment variables:

      
                        
      source .bash_profile
    3. Confirm whether TiUP is installed:

      
                        
      which tiup
  2. Pull the mirror using TiUP.

    1. Pull the needed components on a machine that has access to the Internet:

      
                        
      tiup mirror clone tidb-community-server-${version}-linux-amd64 ${version} --os=linux --arch=amd64

      The command above creates a directory named tidb-community-server-${version}-linux-amd64 in the current directory, which contains the component package necessary for starting a cluster.

    2. Pack the component package by using the tar command and send the package to the control machine in the isolated environment:

      
                        
      tar czvf tidb-community-server-${version}-linux-amd64.tar.gz tidb-community-server-${version}-linux-amd64

      tidb-community-server-${version}-linux-amd64.tar.gz is an independent offline environment package.

  3. Customize the offline mirror, or adjust the contents of an existing offline mirror.

    If you want to adjust an existing offline mirror (such as adding a new version of a component), take the following steps:

    1. When pulling an offline mirror, you can get an incomplete offline mirror by specifying specific information via parameters, such as the component and version information. For example, you can pull an offline mirror that includes only the offline mirror of TiUP v1.11.3 and TiUP Cluster v1.11.3 by running the following command:

      
                        
      tiup mirror clone tiup-custom-mirror-v1.11.3 --tiup v1.11.3 --cluster v1.11.3

      If you only need the components for a particular platform, you can specify them using the --os or --arch parameters.

    2. Refer to step 2 of "Pull the mirror using TiUP", and send this incomplete offline mirror to the control machine in the isolated environment.

    3. Check the path of the current offline mirror on the control machine in the isolated environment. If your TiUP tool is of a recent version, you can get the current mirror address by running the following command:

      
                        
      tiup mirror show

      If the output of the above command indicates that the show command does not exist, you might be using an older version of TiUP. In this case, you can get the current mirror address from $HOME/.tiup/tiup.toml. Record this mirror address. In the following steps, ${base_mirror} is used to refer to this address.

    4. Merge an incomplete offline mirror into an existing offline mirror:

      First, copy the keys directory in the current offline mirror to the $HOME/.tiup directory:

      
                        
      cp -r ${base_mirror}/keys $HOME/.tiup/

      Then use the TiUP command to merge the incomplete offline mirror into the mirror in use:

      
                        
      tiup mirror merge tiup-custom-mirror-v1.11.3
    5. When the above steps are completed, check the result by running the tiup list command. In this document's example, the outputs of both tiup list tiup and tiup list cluster show that the corresponding components of v1.11.3 are available.

Deploy the offline TiUP component

After sending the package to the control machine of the target cluster, install the TiUP component by running the following commands:


              
tar xzvf tidb-community-server-${version}-linux-amd64.tar.gz && \
sh tidb-community-server-${version}-linux-amd64/local_install.sh && \
source /home/tidb/.bash_profile

The local_install.sh script automatically runs the tiup mirror set tidb-community-server-${version}-linux-amd64 command to set the current mirror address to tidb-community-server-${version}-linux-amd64.
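
To quickly confirm which mirror TiUP currently points at, you can run the tiup mirror show command (also mentioned later in this document); this is only a verification sketch, and the printed path depends on where you extracted the package:

tiup mirror show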

Merge offline packages

If you download the offline packages from the official download page, you need to merge the server package and the toolkit package into an offline mirror. If you manually package the offline component packages using the tiup mirror clone command, you can skip this step.

Run the following commands to merge the offline toolkit package into the server package directory:


              
tar xf tidb-community-toolkit-${version}-linux-amd64.tar.gz
ls -ld tidb-community-server-${version}-linux-amd64 tidb-community-toolkit-${version}-linux-amd64
cd tidb-community-server-${version}-linux-amd64/
cp -rp keys ~/.tiup/
tiup mirror merge ../tidb-community-toolkit-${version}-linux-amd64

To switch the mirror to another directory, run the tiup mirror set command. To switch the mirror to the online environment, run the tiup mirror set https://tiup-mirrors.m.rzhenli.com command.
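
For example (a sketch; the local directory path is a placeholder that you would replace with your own mirror directory):

# switch to another local mirror directory (placeholder path)
tiup mirror set /path/to/local/mirror
# switch back to the online mirror
tiup mirror set https://tiup-mirrors.m.rzhenli.com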

Step 3. Initialize cluster topology file

Run the following command to create a cluster topology file:


              
tiup cluster template > topology.yaml

For the following two common scenarios, you can generate the recommended topology templates by running the corresponding commands:

  • For hybrid deployment: Multiple instances are deployed on a single machine. For details, see Hybrid Deployment Topology.

    
                    
    tiup cluster template --full > topology.yaml
  • For geo-distributed deployment: TiDB clusters are deployed in geographically distributed data centers. For details, see Geo-Distributed Deployment Topology.

    
                    
    tiup cluster template --multi-dc > topology.yaml

Run vi topology.yaml to see the configuration file content:


              
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
server_configs: {}
pd_servers:
  - host: 10.0.1.4
  - host: 10.0.1.5
  - host: 10.0.1.6
tidb_servers:
  - host: 10.0.1.7
  - host: 10.0.1.8
  - host: 10.0.1.9
tikv_servers:
  - host: 10.0.1.1
  - host: 10.0.1.2
  - host: 10.0.1.3
monitoring_servers:
  - host: 10.0.1.4
grafana_servers:
  - host: 10.0.1.4
alertmanager_servers:
  - host: 10.0.1.4

The following examples cover seven common scenarios. You need to modify the configuration file (named topology.yaml) according to the topology description and templates in the corresponding links. For other scenarios, edit the configuration template accordingly.

Each entry below lists the application, the configuration task, the configuration file templates, and the topology description:

  • OLTP — Configuration task: Deploy minimal topology. Templates: Simple minimal configuration template; Full minimal configuration template. Description: This is the basic cluster topology, including tidb-server, tikv-server, and pd-server.
  • HTAP — Configuration task: Deploy the TiFlash topology. Templates: Simple TiFlash configuration template; Full TiFlash configuration template. Description: This is to deploy TiFlash along with the minimal cluster topology. TiFlash is a columnar storage engine, and gradually becomes a standard cluster topology.
  • Replicate incremental data using TiCDC — Configuration task: Deploy the TiCDC topology. Templates: Simple TiCDC configuration template; Full TiCDC configuration template. Description: This is to deploy TiCDC along with the minimal cluster topology. TiCDC supports multiple downstream platforms, such as TiDB, MySQL, Kafka, MQ, and storage services.
  • Replicate incremental data using TiDB Binlog — Configuration task: Deploy the TiDB Binlog topology. Templates: Simple TiDB Binlog configuration template (MySQL as downstream); Simple TiDB Binlog configuration template (Files as downstream); Full TiDB Binlog configuration template. Description: This is to deploy TiDB Binlog along with the minimal cluster topology.
  • Use OLAP on Spark — Configuration task: Deploy the TiSpark topology. Templates: Simple TiSpark configuration template; Full TiSpark configuration template. Description: This is to deploy TiSpark along with the minimal cluster topology. TiSpark is a component built for running Apache Spark on top of TiDB/TiKV to answer the OLAP queries. Currently, TiUP cluster's support for TiSpark is still experimental.
  • Deploy multiple instances on a single machine — Configuration task: Deploy a hybrid topology. Templates: Simple configuration template for hybrid deployment; Full configuration template for hybrid deployment. Description: The deployment topologies also apply when you need to add extra configurations for the directory, port, resource ratio, and label.
  • Deploy TiDB clusters across data centers — Configuration task: Deploy a geo-distributed deployment topology. Templates: Configuration template for geo-distributed deployment. Description: This topology takes the typical architecture of three data centers in two cities as an example. It introduces the geo-distributed deployment architecture and the key configuration that requires attention.

For more configuration description, see the following configuration examples:

Step 4. Run the deployment command

Before you run the deploy command, use the check and check --apply commands to detect and automatically repair potential risks in the cluster:

  1. Check for potential risks:

    
                    
    tiup cluster check ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
  2. Enable automatic repair:

    
                    
    tiup cluster check ./topology.yaml --apply --user root [-p] [-i /home/root/.ssh/gcp_rsa]
  3. Deploy a TiDB cluster:

    
                    
    tiup cluster deploy tidb-test v7.1.2 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]

In the tiup cluster deploy command above:

  • tidb-test is the name of the TiDB cluster to be deployed.
  • v7.1.2 is the version of the TiDB cluster to be deployed. You can see the latest supported versions by running tiup list tidb.
  • topology.yaml is the initialization configuration file.
  • --user root indicates logging in to the target machine as the root user to complete the cluster deployment. The root user is expected to have ssh and sudo privileges to the target machine. Alternatively, you can use other users with ssh and sudo privileges to complete the deployment.
  • [-i] and [-p] are optional. If you have configured login to the target machine without password, these parameters are not required. If not, choose one of the two parameters. [-i] is the private key of the root user (or other users specified by --user) that has access to the target machine. [-p] is used to input the user password interactively.

At the end of the output log, you will see Deployed cluster `tidb-test` successfully. This indicates that the deployment is successful.

Step 5. Check the clusters managed by TiUP


              
tiup cluster list

TiUP supports managing multiple TiDB clusters. The preceding command outputs information of all the clusters currently managed by TiUP, including the cluster name, deployment user, version, and secret key information:

Step 6. Check the status of the deployed TiDB cluster

For example, run the following command to check the status of the tidb-test cluster:


              
tiup cluster display tidb-test

Expected output includes the instance ID, role, host, listening port, status (because the cluster is not started yet, the status is Down/inactive), and directory information.

Step 7. Start a TiDB cluster

Since TiUP cluster v1.9.0, safe start has been introduced as a new start method. Starting a database using this method improves database security. It is recommended that you use this method.

After safe start, TiUP automatically generates a password for the TiDB root user and returns the password in the command-line interface.

Method 1: Safe start


              
tiup cluster start tidb-test --init

If the output is as follows, the start is successful:


              
Started cluster `tidb-test` successfully. The root password of TiDB database has been changed. The new password is: 'y_+3Hwp=*AWz8971s6'. Copy and record it to somewhere safe, it is only displayed once, and will not be stored. The generated password can NOT be got again in future.
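
You can then connect to TiDB with the generated password. The following is a minimal sketch, assuming the MySQL client is installed on the control machine and TiDB listens on its default port 4000 on one of the tidb_servers hosts from the template above:

mysql -h 10.0.1.7 -P 4000 -u root -p
# when prompted, enter the password printed by `tiup cluster start tidb-test --init`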

Method 2: Standard start


              
tiup cluster start tidb-test

If the output log includes Started cluster `tidb-test` successfully, the start is successful. After standard start, you can log in to a database using a root user without a password.
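
For example, a minimal passwordless login sketch, again assuming the MySQL client is available and TiDB listens on the default port 4000 on one of the tidb_servers hosts from the template:

mysql -h 10.0.1.7 -P 4000 -u root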

Step 8. Verify the running status of the TiDB cluster


              
tiup cluster display tidb-test

If the output log shows the Up status, the cluster is running properly.

See also

If you have deployed TiFlash along with the TiDB cluster, see the following documents:

If you have deployed TiCDC along with the TiDB cluster, see the following documents:
