
Back up Data to S3-Compatible Storage Using BR

This document describes how to back up the data of a TiDB cluster on AWS Kubernetes to AWS storage. There are two backup types:

  • Snapshot backup. With snapshot backup, you can restore a TiDB cluster to the time point of the snapshot backup using full restoration.
  • Log backup. With snapshot backup and log backup, you can restore a TiDB cluster to any point in time. This is also known as Point-in-Time Recovery (PITR).

The backup method described in this document is implemented based on CustomResourceDefinition (CRD) in TiDB Operator. For the underlying implementation, BR is used to get the backup data of the TiDB cluster and then send the data to the AWS storage. BR stands for Backup & Restore, which is a command-line tool for distributed backup and recovery of TiDB cluster data.

Usage scenarios

If you have the following backup needs, you can use BR's snapshot backup method to make an ad-hoc backup or scheduled snapshot backup of the TiDB cluster data to S3-compatible storages.

  • To back up a large volume of data (more than 1 TB) at a fast speed
  • To get a direct backup of data as SST files (key-value pairs)

If you have the following backup needs, you can use BR log backup to make an ad-hoc backup of the TiDB cluster data to S3-compatible storages (you can combine log backup and snapshot backup to restore data more efficiently):

  • To restore data of any point in time to a new cluster
  • To keep the recovery point objective (RPO) within several minutes

For other backup needs, refer to Backup and Restore Overview to choose an appropriate backup method.

Ad-hoc backup

Ad-hoc backup includes snapshot backup and log backup. For log backup, you can start or stop a log backup task and clean log backup data.

To get an ad-hoc backup, you need to create a Backup Custom Resource (CR) object to describe the backup details. Then, TiDB Operator performs the specific backup operation based on this Backup object. If an error occurs during the backup process, TiDB Operator does not retry, and you need to handle this error manually.
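
If a backup fails, the error is recorded in the status of the Backup CR rather than retried. A minimal sketch of inspecting a failed backup with standard kubectl commands (the CR name demo1-full-backup-s3 matches the snapshot backup example later in this document):

kubectl get backup -n backup-test                              # the STATUS column shows whether the backup is Complete or Failed
kubectl describe backup demo1-full-backup-s3 -n backup-test    # the Conditions and Events sections record the failure reason

After addressing the cause, you can delete the failed Backup CR and create a new one to run the backup again.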

This document provides an example about how to back up the data of the demo1 TiDB cluster in the test1 Kubernetes namespace to the AWS storage. The following are the detailed steps.

Prerequisites: Prepare for an ad-hoc backup

  1. Create a namespace for managing backups. The following example creates a backup-test namespace:

    kubectl create namespace backup-test

  2. Download backup-rbac.yaml, and execute the following command to create the role-based access control (RBAC) resources in the backup-test namespace:

    kubectl apply -f backup-rbac.yaml -n backup-test

  3. Grant permissions to the remote storage for the created backup-test namespace.

    • If you are using Amazon S3 to back up your cluster, you can grant permissions in three methods. For more information, refer to AWS account permissions.
    • If you are using other S3-compatible storage (such as Ceph and MinIO) to back up your cluster, you can grant permissions by using AccessKey and SecretKey. A sketch of creating such a secret follows this list.

  4. For a TiDB version earlier than v4.0.8, you also need to complete the following preparation steps. For TiDB v4.0.8 or a later version, skip these preparation steps.

    1. Make sure that you have the SELECT and UPDATE privileges on the mysql.tidb table of the backup database so that the Backup CR can adjust the GC time before and after the backup.

    2. Create the backup-demo1-tidb-secret secret to store the account and password to access the TiDB cluster:

      kubectl create secret generic backup-demo1-tidb-secret --from-literal=password=${password} --namespace=test1
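
If you choose the AccessKey and SecretKey method in step 3, the Backup CRs in this document reference a secret named s3-secret through spec.s3.secretName. A minimal sketch of creating it, assuming the access_key and secret_key key names conventionally used by TiDB Operator (verify the exact key names against the permission-granting documentation linked above):

kubectl create secret generic s3-secret \
  --from-literal=access_key=${access_key} \
  --from-literal=secret_key=${secret_key} \
  --namespace=backup-test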

Snapshot backup

Depending on which method you used to grant permissions to the remote storage when preparing for the ad-hoc backup, export your data to the S3-compatible storage by doing one of the following:

  • Method 1: If you grant permissions by importing AccessKey and SecretKey, create the Backup CR to back up cluster data as described below:

    kubectl apply -f full-backup-s3.yaml

    The content of full-backup-s3.yaml is as follows:

    ---
    apiVersion: m.rzhenli.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-full-backup-s3
      namespace: backup-test
    spec:
      backupType: full
      br:
        cluster: demo1
        clusterNamespace: test1
        # logLevel: info
        # statusAddr: ${status_addr}
        # concurrency: 4
        # rateLimit: 0
        # timeAgo: ${time}
        # checksum: true
        # sendCredToTikv: true
        # options:
        # - --lastbackupts=420134118382108673
      # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      from:
        host: ${tidb_host}
        port: ${tidb_port}
        user: ${tidb_user}
        secretName: backup-demo1-tidb-secret
      s3:
        provider: aws
        secretName: s3-secret
        region: us-west-1
        bucket: my-bucket
        prefix: my-full-backup-folder

  • Method 2: If you grant permissions by associating IAM with Pod, create the Backup CR to back up cluster data as described below:

    kubectl apply -f full-backup-s3.yaml

    The content of full-backup-s3.yaml is as follows:

    ---
    apiVersion: m.rzhenli.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-full-backup-s3
      namespace: backup-test
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::123456789012:role/user
    spec:
      backupType: full
      br:
        cluster: demo1
        sendCredToTikv: false
        clusterNamespace: test1
        # logLevel: info
        # statusAddr: ${status_addr}
        # concurrency: 4
        # rateLimit: 0
        # timeAgo: ${time}
        # checksum: true
        # options:
        # - --lastbackupts=420134118382108673
      # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      from:
        host: ${tidb_host}
        port: ${tidb_port}
        user: ${tidb_user}
        secretName: backup-demo1-tidb-secret
      s3:
        provider: aws
        region: us-west-1
        bucket: my-bucket
        prefix: my-full-backup-folder

  • Method 3: If you grant permissions by associating IAM with ServiceAccount, create the Backup CR to back up cluster data as described below:

    kubectl apply -f full-backup-s3.yaml

    The content of full-backup-s3.yaml is as follows:

    ---
    apiVersion: m.rzhenli.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-full-backup-s3
      namespace: backup-test
    spec:
      backupType: full
      serviceAccount: tidb-backup-manager
      br:
        cluster: demo1
        sendCredToTikv: false
        clusterNamespace: test1
        # logLevel: info
        # statusAddr: ${status_addr}
        # concurrency: 4
        # rateLimit: 0
        # timeAgo: ${time}
        # checksum: true
        # options:
        # - --lastbackupts=420134118382108673
      # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      from:
        host: ${tidb_host}
        port: ${tidb_port}
        user: ${tidb_user}
        secretName: backup-demo1-tidb-secret
      s3:
        provider: aws
        region: us-west-1
        bucket: my-bucket
        prefix: my-full-backup-folder

When configuring full-backup-s3.yaml, note the following:

  • Since TiDB Operator v1.1.6, if you want to back up data incrementally, you only need to specify the last backup timestamp --lastbackupts in spec.br.options (see the sketch after this list). For the limitations of incremental backup, refer to Use BR to Back up and Restore Data.
  • You can ignore the acl, endpoint, and storageClass configuration items of Amazon S3. For more information about S3-compatible storage configuration, refer to S3 storage fields.
  • Some parameters in spec.br are optional, such as logLevel and statusAddr. For more information about BR configuration, refer to BR fields.
  • For v4.0.8 or a later version, BR can automatically adjust tikv_gc_life_time. You do not need to configure the spec.tikvGCLifeTime and spec.from fields in the Backup CR.
  • For more information about the Backup CR fields, refer to Backup CR fields.
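
As a hypothetical illustration of the incremental backup note above, you could uncomment the options block in full-backup-s3.yaml and set --lastbackupts to the COMMITTS value of an earlier snapshot backup (the next section shows how to obtain that value). This is only a sketch of the relevant spec fragment, not a complete CR:

spec:
  backupType: full
  br:
    cluster: demo1
    clusterNamespace: test1
    options:
    # COMMITTS of the earlier snapshot backup; this value is the example shown later in this document
    - --lastbackupts=436979621972148225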

View the snapshot backup status

After you create the Backup CR, TiDB Operator starts the backup automatically. You can view the backup status by running the following command:

kubectl get backup -n backup-test -o wide

From the output, you can find the following information for the Backup CR named demo1-full-backup-s3. The COMMITTS field indicates the time point of the snapshot backup:

NAME                   TYPE   MODE       STATUS     BACKUPPATH                              COMMITTS             ...
demo1-full-backup-s3   full   snapshot   Complete   s3://my-bucket/my-full-backup-folder/   436979621972148225   ...
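
If you only need the snapshot timestamp, for example to reuse it as --lastbackupts in an incremental backup, a small sketch reads it directly from the CR status (assuming the status.commitTs field behind the COMMITTS column; check the Backup CR fields reference if your version differs):

kubectl get backup demo1-full-backup-s3 -n backup-test -o jsonpath='{.status.commitTs}'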

Log backup

You can use a Backup CR to describe the start and stop of a log backup task and to manage the log backup data. Log backup grants permissions to remote storages in the same way as snapshot backup. This section shows log backup operations using a Backup CR named demo1-log-backup-s3 as an example. These operations assume that permissions to remote storages are granted using AccessKey and SecretKey. See the following detailed steps.

Start log backup

  1. In the backup-test namespace, create a Backup CR named demo1-log-backup-s3:

    kubectl apply -f log-backup-s3.yaml

    The content of log-backup-s3.yaml is as follows:

    ---
    apiVersion: m.rzhenli.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-log-backup-s3
      namespace: backup-test
    spec:
      backupMode: log
      br:
        cluster: demo1
        clusterNamespace: test1
        sendCredToTikv: true
      s3:
        provider: aws
        secretName: s3-secret
        region: us-west-1
        bucket: my-bucket
        prefix: my-log-backup-folder

  2. Wait for the start operation to complete:

    kubectl get jobs -n backup-test

    NAME                                   COMPLETIONS   ...
    backup-demo1-log-backup-s3-log-start   1/1           ...

  3. View the newly created Backup CR:

    kubectl get backup -n backup-test

    NAME                  MODE   STATUS    ....
    demo1-log-backup-s3   log    Running   ....

View the log backup status

You can view the log backup status by checking the information of the Backup CR:

kubectl describe backup -n backup-test

From the output, you can find the following information for the Backup CR named demo1-log-backup-s3. The Log Checkpoint Ts field indicates the latest point in time that can be recovered:

Status:
  Backup Path:  s3://my-bucket/my-log-backup-folder/
  Commit Ts:    436568622965194754
  Conditions:
    Last Transition Time:  2022-10-10T04:45:20Z
    Status:                True
    Type:                  Scheduled
    Last Transition Time:  2022-10-10T04:45:31Z
    Status:                True
    Type:                  Prepare
    Last Transition Time:  2022-10-10T04:45:31Z
    Status:                True
    Type:                  Running
  Log Checkpoint Ts:       436569119308644661

Stop log backup

Because you already created a Backup CR named demo1-log-backup-s3 when you started log backup, you can stop the log backup by modifying the same Backup CR. The priority of all operations is: stop log backup > delete log backup data > start log backup.

kubectl edit backup demo1-log-backup-s3 -n backup-test

In the last line of the CR, append spec.logStop: true. Then save and quit the editor. The modified content is as follows:

---
apiVersion: m.rzhenli.com/v1alpha1
kind: Backup
metadata:
  name: demo1-log-backup-s3
  namespace: backup-test
spec:
  backupMode: log
  br:
    cluster: demo1
    clusterNamespace: test1
    sendCredToTikv: true
  s3:
    provider: aws
    secretName: s3-secret
    region: us-west-1
    bucket: my-bucket
    prefix: my-log-backup-folder
  logStop: true
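
If you prefer a non-interactive command over kubectl edit, the same change can be made with kubectl patch; a sketch is shown below (it is equivalent to appending spec.logStop: true as above, and the same approach works for spec.logTruncateUntil in the next section):

kubectl patch backup demo1-log-backup-s3 -n backup-test --type merge -p '{"spec":{"logStop":true}}'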

You can see the STATUS of the Backup CR named demo1-log-backup-s3 change from Running to Stopped:

kubectl get backup -n backup-test

NAME                  MODE   STATUS    ....
demo1-log-backup-s3   log    Stopped   ....

Clean log backup data

  1. Because you already created a Backup CR named demo1-log-backup-s3 when you started log backup, you can clean the log backup data by modifying the same Backup CR. The priority of all operations is: stop log backup > delete log backup data > start log backup. The following example shows how to clean log backup data generated before 2022-10-10T15:21:00+08:00.

    kubectl edit backup demo1-log-backup-s3 -n backup-test

    In the last line of the CR, append spec.logTruncateUntil: "2022-10-10T15:21:00+08:00". Then save and quit the editor. The modified content is as follows:

    ---
    apiVersion: m.rzhenli.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-log-backup-s3
      namespace: backup-test
    spec:
      backupMode: log
      br:
        cluster: demo1
        clusterNamespace: test1
        sendCredToTikv: true
      s3:
        provider: aws
        secretName: s3-secret
        region: us-west-1
        bucket: my-bucket
        prefix: my-log-backup-folder
      logTruncateUntil: "2022-10-10T15:21:00+08:00"

  2. Wait for the clean operation to complete:

    kubectl get jobs -n backup-test

    NAME                                      COMPLETIONS   ...
    ...
    backup-demo1-log-backup-s3-log-truncate   1/1           ...

  3. View the Backup CR information:

    kubectl describe backup -n backup-test

    ...
    Log Success Truncate Until:  2022-10-10T15:21:00+08:00
    ...

    You can also view the information by running the following command:

    kubectl get backup -n backup-test -o wide

    NAME                  MODE   STATUS    ...   LOGTRUNCATEUNTIL
    demo1-log-backup-s3   log    Stopped   ...   2022-10-10T15:21:00+08:00

Backup CR examples

Back up data of all clusters

---
apiVersion: m.rzhenli.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: backup-test
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  br:
    cluster: demo1
    sendCredToTikv: false
    clusterNamespace: test1
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
  #   host: ${tidb_host}
  #   port: ${tidb_port}
  #   user: ${tidb_user}
  #   secretName: backup-demo1-tidb-secret
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder

Back up data of a single database

The following example backs up data of the db1 database.

---
apiVersion: m.rzhenli.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: backup-test
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  tableFilter:
  - "db1.*"
  br:
    cluster: demo1
    sendCredToTikv: false
    clusterNamespace: test1
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
  #   host: ${tidb_host}
  #   port: ${tidb_port}
  #   user: ${tidb_user}
  #   secretName: backup-demo1-tidb-secret
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder

Back up data of a single table

The following example backs up data of the db1.table1 table.

---
apiVersion: m.rzhenli.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: backup-test
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  tableFilter:
  - "db1.table1"
  br:
    cluster: demo1
    sendCredToTikv: false
    clusterNamespace: test1
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
  #   host: ${tidb_host}
  #   port: ${tidb_port}
  #   user: ${tidb_user}
  #   secretName: backup-demo1-tidb-secret
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder

Back up data of multiple tables using the table filter

The following example backs up data of the db1.table1 table and the db1.table2 table.

---
apiVersion: m.rzhenli.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: backup-test
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  tableFilter:
  - "db1.table1"
  - "db1.table2"
  # ...
  br:
    cluster: demo1
    sendCredToTikv: false
    clusterNamespace: test1
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
  #   host: ${tidb_host}
  #   port: ${tidb_port}
  #   user: ${tidb_user}
  #   secretName: backup-demo1-tidb-secret
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder
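
Because tableFilter values are passed to BR as table filter rules, exclusion rules should also work here. As a hypothetical fragment (verify the syntax against the table filter documentation for your version), the following backs up everything in db1 except db1.table3:

spec:
  backupType: full
  tableFilter:
  - "db1.*"
  - "!db1.table3"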

Scheduled snapshot backup

You can set a backup policy to perform scheduled backups of the TiDB cluster, and set a backup retention policy to avoid excessive backup items. A scheduled snapshot backup is described by a custom BackupSchedule CR object. A snapshot backup is triggered at each backup time point. Its underlying implementation is the ad-hoc snapshot backup.

Prerequisites: Prepare for a scheduled snapshot backup

The steps to prepare for a scheduled snapshot backup are the same as those of Prepare for an ad-hoc backup.

Perform a scheduled snapshot backup

Depending on which method you used to grant permissions to the remote storage, perform a scheduled snapshot backup by doing one of the following:

  • Method 1: If you grant permissions by importing AccessKey and SecretKey, create the BackupSchedule CR, and back up cluster data as described below:

    kubectl apply -f backup-scheduler-aws-s3.yaml

    The content of backup-scheduler-aws-s3.yaml is as follows:

    ---
    apiVersion: m.rzhenli.com/v1alpha1
    kind: BackupSchedule
    metadata:
      name: demo1-backup-schedule-s3
      namespace: backup-test
    spec:
      #maxBackups: 5
      #pause: true
      maxReservedTime: "3h"
      schedule: "*/2 * * * *"
      backupTemplate:
        backupType: full
        # Clean outdated backup data based on maxBackups or maxReservedTime. If not configured, the default policy is Retain
        # cleanPolicy: Delete
        br:
          cluster: demo1
          clusterNamespace: test1
          # logLevel: info
          # statusAddr: ${status_addr}
          # concurrency: 4
          # rateLimit: 0
          # timeAgo: ${time}
          # checksum: true
          # sendCredToTikv: true
        # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
        from:
          host: ${tidb_host}
          port: ${tidb_port}
          user: ${tidb_user}
          secretName: backup-demo1-tidb-secret
        s3:
          provider: aws
          secretName: s3-secret
          region: us-west-1
          bucket: my-bucket
          prefix: my-folder

  • Method 2: If you grant permissions by associating IAM with the Pod, create the BackupSchedule CR, and back up cluster data as described below:

    kubectl apply -f backup-scheduler-aws-s3.yaml

    The content of backup-scheduler-aws-s3.yaml is as follows:

    ---
    apiVersion: m.rzhenli.com/v1alpha1
    kind: BackupSchedule
    metadata:
      name: demo1-backup-schedule-s3
      namespace: backup-test
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::123456789012:role/user
    spec:
      #maxBackups: 5
      #pause: true
      maxReservedTime: "3h"
      schedule: "*/2 * * * *"
      backupTemplate:
        backupType: full
        # Clean outdated backup data based on maxBackups or maxReservedTime. If not configured, the default policy is Retain
        # cleanPolicy: Delete
        br:
          cluster: demo1
          sendCredToTikv: false
          clusterNamespace: test1
          # logLevel: info
          # statusAddr: ${status_addr}
          # concurrency: 4
          # rateLimit: 0
          # timeAgo: ${time}
          # checksum: true
        # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
        from:
          host: ${tidb_host}
          port: ${tidb_port}
          user: ${tidb_user}
          secretName: backup-demo1-tidb-secret
        s3:
          provider: aws
          region: us-west-1
          bucket: my-bucket
          prefix: my-folder

  • Method 3: If you grant permissions by associating IAM with ServiceAccount, create the BackupSchedule CR, and back up cluster data as described below:

    kubectl apply -f backup-scheduler-aws-s3.yaml

    The content of backup-scheduler-aws-s3.yaml is as follows:

    ---
    apiVersion: m.rzhenli.com/v1alpha1
    kind: BackupSchedule
    metadata:
      name: demo1-backup-schedule-s3
      namespace: backup-test
    spec:
      #maxBackups: 5
      #pause: true
      maxReservedTime: "3h"
      schedule: "*/2 * * * *"
      backupTemplate:
        backupType: full
        serviceAccount: tidb-backup-manager
        # Clean outdated backup data based on maxBackups or maxReservedTime. If not configured, the default policy is Retain
        # cleanPolicy: Delete
        br:
          cluster: demo1
          sendCredToTikv: false
          clusterNamespace: test1
          # logLevel: info
          # statusAddr: ${status_addr}
          # concurrency: 4
          # rateLimit: 0
          # timeAgo: ${time}
          # checksum: true
        # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
        from:
          host: ${tidb_host}
          port: ${tidb_port}
          user: ${tidb_user}
          secretName: backup-demo1-tidb-secret
        s3:
          provider: aws
          region: us-west-1
          bucket: my-bucket
          prefix: my-folder

In the above example of backup-scheduler-aws-s3.yaml, the backupSchedule configuration consists of two parts. One is the unique configuration of backupSchedule, and the other is backupTemplate.

  • For the unique configuration of backupSchedule, refer to BackupSchedule CR fields.
  • backupTemplate specifies the configuration related to the cluster and remote storage, which is the same as the spec configuration of the Backup CR.

After creating the scheduled snapshot backup, use the following command to check the backup status:

kubectl get bks -n backup-test -o wide

During cluster recovery, you need to specify the backup path. You can use the following command to check all the backup items. The names of these backups are prefixed with the scheduled snapshot backup name:

kubectl get bk -l tidb.m.rzhenli.com/backup-schedule=demo1-backup-schedule-s3 -n backup-test
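
To pull just the backup path of each item for use during a restore, a small jsonpath sketch (assuming the status.backupPath field behind the BACKUPPATH column; verify against the Backup CR fields reference for your TiDB Operator version):

kubectl get bk -l tidb.m.rzhenli.com/backup-schedule=demo1-backup-schedule-s3 -n backup-test \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.backupPath}{"\n"}{end}'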

Integrated management of scheduled snapshot backup and log backup

You can use the BackupSchedule CR to integrate the management of scheduled snapshot backup and log backup for TiDB clusters. By setting the backup retention time, you can regularly recycle outdated snapshot backups and log backup data, while ensuring that you can still perform PITR within the retention period using the retained snapshot and log backups.

The following example creates a BackupSchedule CR named integrated-backup-schedule-s3. In the example, AccessKey and SecretKey are used to access the remote storage. For more information about the authorization method, refer to AWS account permissions.

Prerequisites: Prepare for a scheduled snapshot backup environment

The steps to prepare for a scheduled snapshot backup are the same as those of Prepare for an ad-hoc backup.

Create BackupSchedule

  1. Create a BackupSchedule CR named integrated-backup-schedule-s3 in the backup-test namespace:

    kubectl apply -f integrated-backup-schedule-s3.yaml

    The content of integrated-backup-schedule-s3.yaml is as follows:

    ---
    apiVersion: m.rzhenli.com/v1alpha1
    kind: BackupSchedule
    metadata:
      name: integrated-backup-schedule-s3
      namespace: backup-test
    spec:
      maxReservedTime: "3h"
      schedule: "* */2 * * *"
      backupTemplate:
        backupType: full
        cleanPolicy: Delete
        br:
          cluster: demo1
          clusterNamespace: test1
          sendCredToTikv: true
        s3:
          provider: aws
          secretName: s3-secret
          region: us-west-1
          bucket: my-bucket
          prefix: my-folder-snapshot
      logBackupTemplate:
        backupMode: log
        br:
          cluster: demo1
          clusterNamespace: test1
          sendCredToTikv: true
        s3:
          provider: aws
          secretName: s3-secret
          region: us-west-1
          bucket: my-bucket
          prefix: my-folder-log

    In the above example of integrated-backup-schedule-s3.yaml, the backupSchedule configuration consists of three parts: the unique configuration of backupSchedule, the configuration of the snapshot backup backupTemplate, and the configuration of the log backup logBackupTemplate.

    For the field description of backupSchedule, refer to BackupSchedule CR fields.

  2. After creating backupSchedule, use the following command to check the backup status:

    kubectl get bks -n backup-test -o wide

    A log backup task is created together with backupSchedule. You can check the name of the log backup task in the status.logBackup field of the backupSchedule CR (see the jsonpath sketch after this list):

    kubectl describe bks integrated-backup-schedule-s3 -n backup-test

  3. To perform data restoration for a cluster, you need to specify the backup path. You can use the following command to check all the backup items under the scheduled snapshot backup.

    kubectl get bk -l tidb.m.rzhenli.com/backup-schedule=integrated-backup-schedule-s3 -n backup-test

    The MODE field in the output indicates the backup mode. snapshot indicates the scheduled snapshot backup, and log indicates the log backup.

    NAME                                                MODE       STATUS     ....
    integrated-backup-schedule-s3-2023-03-08t02-45-00   snapshot   Complete   ....
    log-integrated-backup-schedule-s3                   log        Running    ....
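
To read just the name of the log backup task mentioned in step 2, a jsonpath sketch against the status.logBackup field (the field name is taken from the description above; verify it against the BackupSchedule CR fields reference):

kubectl get bks integrated-backup-schedule-s3 -n backup-test -o jsonpath='{.status.logBackup}'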

Delete the backup CR

If you no longer need the backup CR, refer to Delete the Backup CR.

Troubleshooting

If you encounter any problem during the backup process, refer to Common Deployment Failures.
