Quick Start Guide for the TiDB Database Platform

This guide walks you through the quickest way to get started with TiDB. For non-production environments, you can deploy your TiDB database by either of the following methods:

Deploy a local test cluster

  • Scenario: Quickly deploy a local TiDB cluster for testing using a single macOS or Linux server. By deploying such a cluster, you can learn the basic architecture of TiDB and the operation of its components, such as TiDB, TiKV, PD, and the monitoring components.

As a distributed system, a basic TiDB test cluster usually consists of 2 TiDB instances, 3 TiKV instances, 3 PD instances, and optional TiFlash instances. With TiUP Playground, you can quickly build the test cluster by taking the following steps:

  1. Download and install TiUP:

    curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.m.rzhenli.com/install.sh | sh

    If the following message is displayed, you have installed TiUP successfully:

    Successfully set mirror to https://tiup-mirrors.m.rzhenli.com
    Detected shell: zsh
    Shell profile:  /Users/user/.zshrc
    /Users/user/.zshrc has been modified to add tiup to PATH
    open a new terminal or source /Users/user/.zshrc to use it
    Installed path: /Users/user/.tiup/bin/tiup
    ===============================================
    Have a try:     tiup playground
    ===============================================

    Note the Shell profile path in the output above. You need to use the path in the next step.

  2. Declare the global environment variable:

    source ${your_shell_profile}
  3. Start the cluster in the current session:

    • If you want to start a TiDB cluster of the latest version with 1 TiDB instance, 1 TiKV instance, 1 PD instance, and 1 TiFlash instance, run the following command:

      tiup playground
    • If you want to specify the TiDB version and the number of instances of each component, run a command like this:

      tiup playground v6.5.1 --db 2 --pd 3 --kv 3

      The command downloads a cluster of the specified version (such as v6.5.1) to the local machine and starts it. To view the latest version, run tiup list tidb.

      This command returns the access methods of the cluster:

      CLUSTER START SUCCESSFULLY, Enjoy it ^-^
      To connect TiDB: mysql --comments --host 127.0.0.1 --port 4001 -u root -p (no password)
      To connect TiDB: mysql --comments --host 127.0.0.1 --port 4000 -u root -p (no password)
      To view the dashboard: http://127.0.0.1:2379/dashboard
      PD client endpoints: [127.0.0.1:2379 127.0.0.1:2382 127.0.0.1:2384]
      To view Prometheus: http://127.0.0.1:9090
      To view Grafana: http://127.0.0.1:3000
  4. Start a new session to access TiDB:

    • Use the TiUP client to connect to TiDB.

      tiup client
    • You can also use the MySQL client to connect to TiDB.

      mysql --host 127.0.0.1 --port 4000 -u root
  5. Access the Prometheus dashboard of TiDB at http://127.0.0.1:9090.

  6. Access the TiDB Dashboard at http://127.0.0.1:2379/dashboard. The default username is root, with an empty password.

  7. Access the Grafana dashboard of TiDB through http://127.0.0.1:3000. Both the default username and password are admin.

  8. (Optional) Load data to TiFlash for analysis (see the example after this procedure).

  9. Clean up the cluster after the test deployment:

    1. Stop the above TiDB service by pressing Control+C.

    2. Run the following command after the service is stopped:

      tiup clean --all
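
Before cleaning up, you can verify the connection from step 4 and try the optional TiFlash load from step 8. The following is only a sketch: it uses the MySQL client from step 4, the demo database and table names are illustrative, and replication progress is read from the INFORMATION_SCHEMA.TIFLASH_REPLICA table:

    # Create a small sample table and request a TiFlash replica for it
    # (the demo database and table names are illustrative only).
    mysql --comments --host 127.0.0.1 --port 4000 -u root -e "
      CREATE DATABASE IF NOT EXISTS demo;
      CREATE TABLE IF NOT EXISTS demo.t (id INT PRIMARY KEY, v INT);
      INSERT INTO demo.t VALUES (1, 10), (2, 20);
      ALTER TABLE demo.t SET TIFLASH REPLICA 1;"

    # Check replication status; PROGRESS reaches 1 once the TiFlash replica is ready.
    mysql --comments --host 127.0.0.1 --port 4000 -u root -e "
      SELECT TABLE_SCHEMA, TABLE_NAME, AVAILABLE, PROGRESS
      FROM information_schema.tiflash_replica WHERE TABLE_SCHEMA = 'demo';"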

Simulate production deployment on a single machine

  • Scenario: Experience the smallest TiDB cluster with the complete topology and simulate the production deployment steps on a single Linux server.

This section describes how to deploy a TiDB cluster using a YAML file of the smallest topology in TiUP.

Prepare

Prepare a target machine that meets the following requirements:

  • CentOS 7.3 or a later version is installed
  • The Linux OS has access to the Internet, which is required to download TiDB and related software installation packages
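
If you want to confirm these prerequisites quickly, the following is a minimal sketch, assuming a CentOS-family host; it checks the OS release, the CPU architecture, and access to the TiUP mirror used in the deployment steps below:

    # Check the OS release and CPU architecture of the target machine.
    cat /etc/redhat-release
    uname -m

    # Confirm that the machine can reach the TiUP mirror over the Internet.
    curl -sI https://tiup-mirrors.m.rzhenli.com/install.sh | head -n 1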

The smallest TiDB cluster topology is as follows:

Instance   Count   IP                              Configuration
TiKV       3       10.0.1.1, 10.0.1.1, 10.0.1.1    Avoid conflict between the port and the directory
TiDB       1       10.0.1.1                        The default port; global directory configuration
PD         1       10.0.1.1                        The default port; global directory configuration
TiFlash    1       10.0.1.1                        The default port; global directory configuration
Monitor    1       10.0.1.1                        The default port; global directory configuration

Other requirements for the target machine:

  • The root user and its password are required

  • Stop the firewall service of the target machine, or open the ports needed by the TiDB cluster nodes (see the example after this list)

  • Currently, the TiUP cluster supports deploying TiDB on the x86_64 (AMD64) and ARM architectures:

    • It is recommended to use CentOS 7.3 or later versions on AMD64
    • It is recommended to use CentOS 7.6 1810 on ARM
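
On CentOS 7, the firewall service mentioned above is typically firewalld. The following sketch stops and disables it for the test deployment; adapt it to your environment, or open only the required ports instead:

    # Run as root; stopping the firewall is only suitable for test environments.
    systemctl stop firewalld
    systemctl disable firewalld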

Deploy

  1. Download and install TiUP:

    curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.m.rzhenli.com/install.sh | sh
  2. Declare the global environment variable.

    source ${your_shell_profile}
  3. Install the cluster component of TiUP:

    tiup cluster
  4. If the TiUP cluster is already installed on the machine, update the software version:

    tiup update --self && tiup update cluster
  5. Use the root user privilege to increase the connection limit of the sshd service. This is because TiUP needs to simulate deployment on multiple machines.

    1. Modify /etc/ssh/sshd_config, and set MaxSessions to 20.

    2. Restart the sshd service:

      service sshd restart
  6. Create and start the cluster:

    Edit the configuration file according to the following template, and name it as topo.yaml:

    # # Global variables are applied to all deployments and used as the default value of
    # # the deployments if a specific deployment value is missing.
    global:
     user: "tidb"
     ssh_port: 22
     deploy_dir: "/tidb-deploy"
     data_dir: "/tidb-data"

    # # Monitored variables are applied to all the machines.
    monitored:
     node_exporter_port: 9100
     blackbox_exporter_port: 9115

    server_configs:
     tidb:
       log.slow-threshold: 300
     tikv:
       readpool.storage.use-unified-pool: false
       readpool.coprocessor.use-unified-pool: true
     pd:
       replication.enable-placement-rules: true
       replication.location-labels: ["host"]
     tiflash:
       logger.level: "info"

    pd_servers:
     - host: 10.0.1.1

    tidb_servers:
     - host: 10.0.1.1

    tikv_servers:
     - host: 10.0.1.1
       port: 20160
       status_port: 20180
       config:
         server.labels: { host: "logic-host-1" }
     - host: 10.0.1.1
       port: 20161
       status_port: 20181
       config:
         server.labels: { host: "logic-host-2" }
     - host: 10.0.1.1
       port: 20162
       status_port: 20182
       config:
         server.labels: { host: "logic-host-3" }

    tiflash_servers:
     - host: 10.0.1.1

    monitoring_servers:
     - host: 10.0.1.1

    grafana_servers:
     - host: 10.0.1.1
    • user: "tidb": Use the tidb system user (automatically created during deployment) to perform the internal management of the cluster. By default, use port 22 to log in to the target machine via SSH.
    • replication.enable-placement-rules: This PD parameter is set to ensure that TiFlash runs normally.
    • host: The IP of the target machine.
  7. Execute the cluster deployment command:

    tiup cluster deploy <cluster-name> <version> ./topo.yaml --user root -p
    • <cluster-name>: Set the cluster name

    • <version>: Set the TiDB cluster version, such as v6.5.1. You can see all the supported TiDB versions by running the tiup list tidb command

    • -p: Specify the password used to connect to the target machine.

    Enter "y" and therootuser's password to complete the deployment:

    Do you want to continue? [y/N]: y
    Input SSH password:
  8. Start the cluster:

    tiup cluster start <cluster-name>
  9. Access the cluster:

    • Install the MySQL client. If it is already installed, skip this step.

      yum -y install mysql
    • Access TiDB. The password is empty:

      mysql -h 10.0.1.1 -P 4000 -u root
    • Access the Grafana monitoring dashboard at http://{grafana-ip}:3000. The default username and password are both admin.

    • Access the TiDB Dashboard at http://{pd-ip}:2379/dashboard. The default username is root, and the password is empty.

    • To view the currently deployed cluster list:

      tiup cluster list
    • To view the cluster topology and status:

      tiup cluster display <cluster-name>
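
Putting steps 7 through 9 together, the following sketch uses a hypothetical cluster name (tidb-test) and the example version v6.5.1; substitute your own values:

    # Deploy, start, and inspect the cluster (tidb-test and v6.5.1 are example values).
    tiup cluster deploy tidb-test v6.5.1 ./topo.yaml --user root -p
    tiup cluster start tidb-test
    tiup cluster display tidb-test

    # Connect and confirm the TiDB version (the root password is empty by default).
    mysql -h 10.0.1.1 -P 4000 -u root -e "SELECT tidb_version()\G"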
