Install TiDB

Suyash Sambhare

TiDB is an open-source distributed SQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL-compatible and features horizontal scalability, strong consistency, and high availability. The goal of TiDB is to provide users with a one-stop database solution that covers OLTP (Online Transactional Processing), OLAP (Online Analytical Processing), and HTAP services. TiDB is suitable for various use cases that require high availability and strong consistency with large-scale data.

Software and Hardware Recommendations

As an open-source distributed SQL database with high performance, TiDB can be deployed on Intel architecture servers, ARM architecture servers, and major virtualization environments, and it runs well. TiDB supports most major hardware networks and mainstream Linux operating systems.

Installation

Step 1. Prerequisites and precheck
Ensure that the hardware and software requirements are met and that the environment and system configuration checks pass before deployment.
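
Before the formal TiUP checks later in this guide, a quick manual look at each machine can catch obvious gaps. The commands below are ordinary Linux utilities and only a rough sketch; the authoritative limits are in the TiDB hardware and software requirements:

cat /etc/os-release   # confirm a supported Linux distribution and version
nproc                 # CPU core count
free -g               # available memory in GiB
df -hT                # file system types and free space (ext4 is recommended for data disks)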

Step 2. Deploy TiUP on the control machine
You can deploy TiUP on the control machine in either of two ways: online deployment or offline deployment. The steps below use online deployment.

Deploy TiUP online
Log in to the control machine using a regular user account (this guide uses the tidb user as an example). Subsequent TiUP installation and cluster management can be performed by the tidb user.
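
If the tidb user does not exist on the control machine yet, it can be created and granted passwordless sudo first. This is a minimal sketch run as root; the sudoers rule follows the convention used in the TiDB deployment documentation:

useradd tidb && passwd tidb   # create the tidb user and set its password
visudo                        # append the line: tidb ALL=(ALL) NOPASSWD: ALL
su - tidb                     # switch to the tidb user for the following steps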

Install TiUP by running the following command:

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

Set the TiUP environment variables by redeclaring the global environment variables:
source .bash_profile

Confirm whether TiUP is installed:
which tiup

Install the TiUP cluster component:
tiup cluster

If TiUP is already installed, update the TiUP cluster component to the latest version:
tiup update --self && tiup update cluster

If Update successfully! is displayed, the TiUP cluster component has been updated successfully.
Verify the current version of your TiUP cluster:
tiup --binary cluster

To switch the mirror to another directory, run the tiup mirror set <mirror-dir> command. To switch the mirror to the online environment, run the tiup mirror set https://tiup-mirrors.pingcap.com command.

Step 3. Initialize cluster topology file
Run the following command to create a cluster topology file:
tiup cluster template > topology.yaml

In the following two common scenarios, you can generate recommended topology templates by running commands:
For hybrid deployment: Multiple instances are deployed on a single machine. For details, see Hybrid Deployment Topology.
tiup cluster template --full > topology.yaml

For geo-distributed deployment: TiDB clusters are deployed in geographically distributed data centers. For details, see Geo-Distributed Deployment Topology.
tiup cluster template --multi-dc > topology.yaml

Run vi topology.yaml to see the configuration file content:

global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
server_configs: {}
pd_servers:
  - host: 10.0.1.4
  - host: 10.0.1.5
  - host: 10.0.1.6
tidb_servers:
  - host: 10.0.1.7
  - host: 10.0.1.8
  - host: 10.0.1.9
tikv_servers:
  - host: 10.0.1.1
  - host: 10.0.1.2
  - host: 10.0.1.3
monitoring_servers:
  - host: 10.0.1.4
grafana_servers:
  - host: 10.0.1.4
alertmanager_servers:
  - host: 10.0.1.4

The TiDB documentation provides templates for seven common deployment scenarios. Modify the configuration file (named topology.yaml) according to the topology description and template for your scenario; for other scenarios, edit the configuration template accordingly.
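
For example, a hybrid deployment that runs two TiDB instances on the same host only needs distinct ports per instance. The snippet below is illustrative (the host 10.0.1.7 and port numbers are assumptions based on the template above); the full hybrid template adds directory and NUMA tuning as well:

tidb_servers:
  - host: 10.0.1.7
    port: 4000
    status_port: 10080
  - host: 10.0.1.7
    port: 4001
    status_port: 10081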

Step 4. Run the deployment command
You can use secret keys or interactive passwords for security authentication when you deploy TiDB using TiUP:

If you use secret keys, specify the path of the keys through -i or --identity_file.
If you use passwords, add the -p flag to enter the password interaction window.
If password-free login to the target machine has been configured, no authentication is required.
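
If you want the password-free option, mutual trust can be prepared with standard OpenSSH tooling from the control machine. A minimal sketch, assuming 10.0.1.1 is one of the target hosts from the topology above and root is the deployment user; repeat for each host:

ssh-keygen -t rsa             # generate a key pair for the tidb user if none exists
ssh-copy-id root@10.0.1.1     # copy the public key to the target host
ssh root@10.0.1.1 "whoami"    # verify that login no longer prompts for a password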
In general, TiUP creates the user and group specified in the topology.yaml file on the target machine, with the following exceptions:

  • The user name configured in topology.yaml already exists on the target machine.
  • You have used the --skip-create-user option in the command line to explicitly skip the step of creating the user.

Before you run the deploy command, use the check and check --apply commands to detect and automatically repair potential risks in the cluster:

Check for potential risks:
tiup cluster check ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]

Enable automatic repair:
tiup cluster check ./topology.yaml --apply --user root [-p] [-i /home/root/.ssh/gcp_rsa]

Deploy a TiDB cluster:
tiup cluster deploy tidb-test v7.1.2 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]

In the tiup cluster deploy command above:
tidb-test is the name of the TiDB cluster to be deployed.
v7.1.2 is the version of the TiDB cluster to be deployed. You can see the latest supported versions by running tiup list tidb.
topology.yaml is the initialization configuration file.
--user root indicates logging in to the target machines as the root user to complete the cluster deployment. The root user is expected to have SSH and sudo privileges on the target machines.
At the end of the output log, you will see Deployed cluster tidb-test successfully. This indicates that the deployment is successful.


Step 5. Check the clusters managed by TiUP
tiup cluster list
TiUP supports managing multiple TiDB clusters. The preceding command outputs information on all the clusters currently managed by TiUP, including the cluster name, deployment user, version, and secret key information.

Step 6. Check the status of the deployed TiDB cluster
For example, run the following command to check the status of the tidb-test cluster:
tiup cluster display tidb-test
Expected output includes the instance ID, role, host, listening port, status (because the cluster is not started yet, the status is Down/inactive), and directory information.

Step 7. Start a TiDB cluster
Since TiUP cluster v1.9.0, safe start has been introduced as a new start method. Starting a database using this method improves database security, and it is the recommended method.
After a safe start, TiUP automatically generates a password for the TiDB root user and returns the password in the command-line interface.
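
Assuming the cluster deployed above is named tidb-test, the safe start maps to the --init flag of the start command; omitting --init performs a standard start without generating a root password:

tiup cluster start tidb-test --init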

After the safe start of a TiDB cluster, you cannot log in to TiDB using a root user without a password. Therefore, you need to record the password returned in the command output for future logins.

The password is generated only once. If you do not record it or you forget it, refer to Forget the root password to change the password.

Step 8. Verify the running status of the TiDB cluster
tiup cluster display tidb-test
If the output log shows Up status, the cluster is running properly.
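
Because TiDB is MySQL-compatible, you can now connect with any MySQL client. A minimal example, assuming the default SQL port 4000, one of the tidb_servers hosts from the topology above, and the root password generated by the safe start:

mysql --comments -h 10.0.1.7 -P 4000 -u root -p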

Ref: https://docs.pingcap.com/tidb/stable/production-deployment-using-tiup
