Setting Up MemSQL On-Premises

This topic describes how to set up MemSQL in an on-premises environment.

Note

If you are upgrading an existing MemSQL installation, follow the instructions in Upgrading to the Latest MemSQL Version instead.

Prerequisites

This section describes MemSQL software, hardware, and network requirements.

Software Requirements

MemSQL requires the following software.

Component Requirement
Operating System 64-bit Linux-based operating system. MemSQL will not run on 32-bit Linux.
Client Utilities MySQL client tools. Must be installed before running MemSQL (see http://dev.mysql.com/downloads/).
Compiler g++ compiler (if needed for your Linux distribution). If it is not already installed, the MemSQL installation process will attempt to install it for you.

The following Linux distributions are officially supported.

Distribution Minimum Version Appropriate g++ Command
Amazon AMI 2012.03 sudo yum install gcc-c++
CentOS 6.0 sudo yum install gcc-c++
Debian 6.0 sudo apt-get install g++
Fedora 14 sudo yum install gcc-c++
OpenSUSE 11.0 sudo zypper install gcc-c++
Red Hat 6.0 sudo yum install gcc-c++
Ubuntu 8.04 sudo apt-get install g++

Note

MemSQL can run on older versions of Linux (such as CentOS 5.*). However, these distributions offer only an incompatibly old version of the assembler. To run MemSQL on these distributions, download the latest version of binutils and compile it from source.

It is recommended that you create a swap file so that the operating system does not immediately start killing processes if MemSQL runs out of memory. Here is a sample shell script to create the swap file:

snapshot_dir="/var/lib/memsql/data/snapshots"
snapshot_file="$snapshot_dir/memsql.swp"
mkdir -p $snapshot_dir
dd if=/dev/zero of=$snapshot_file bs=1M count=4096
chmod 600 $snapshot_file
mkswap $snapshot_file
swapon $snapshot_file
echo "$snapshot_file swap swap defaults 0 0" | tee -a /etc/fstab

For more information on configuring swap files, see: http://www.cyberciti.biz/faq/linux-add-a-swap-file-howto/.
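As a rough sizing aid, a swap file equal to installed RAM is a common rule of thumb (our assumption, not a MemSQL requirement; the script above uses a fixed 4 GB). This sketch reads the installed RAM and prints the matching dd count to substitute:

```shell
# Compute installed RAM in MB from /proc/meminfo (Linux only) and print the
# dd count that would size the swap file equal to RAM. This only prints a
# suggestion; it does not create the swap file.
ram_mb=$(awk '/^MemTotal:/ {print int($2 / 1024)}' /proc/meminfo)
echo "suggested swap size: use count=$ram_mb with the dd command above"
```

Adjust the rule to your workload; very large-memory hosts often use less swap than RAM.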

Hardware Requirements

MemSQL recommends the following hardware.

Component Requirement
CPU Two socket x86-based server with 8 cores. At a minimum, Intel Core i3 or better processor.
Memory Recommended: 64-96GB. Minimum: 8GB
Hard Drive SSD or SATA 7200RPM hard drive with 3X memory capacity

Considerations:

  • MemSQL will run on machines with less than 8GB RAM, but this is not recommended.
  • MemSQL storage capacity is limited by the amount of RAM on the host machine. Increasing RAM increases the amount of available data storage.
  • MemSQL is optimized for architectures supporting SSE4.2, but it will run successfully on most earlier architectures.

Network Requirements

MemSQL requires routing and firewall settings to be configured to:

  • allow the client host to connect to the MemSQL server host
  • prevent other hosts on the Internet from connecting to the MemSQL server
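As a sketch of matching firewall settings, the following builds (but does not apply) a pair of hypothetical iptables rules that admit one client subnet on port 3306 and drop everyone else; the subnet is an example, so substitute your own and run the printed commands as root:

```shell
# Build iptables rules for the two requirements above: allow the client
# network to reach port 3306, drop everything else. Rules are printed only;
# apply them manually as root after reviewing.
client_net="192.168.1.0/24"   # hypothetical client subnet
allow="iptables -A INPUT -p tcp --dport 3306 -s $client_net -j ACCEPT"
drop="iptables -A INPUT -p tcp --dport 3306 -j DROP"
printf '%s\n%s\n' "$allow" "$drop"
```

Rule order matters: the ACCEPT rule must precede the DROP rule in the INPUT chain.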

Installing MemSQL on a Node

This section describes how to install MemSQL on a single node using an RPM, a .deb package, or a direct binary.

Note

If you are setting up a MemSQL cluster, you will repeat these instructions for every node in the cluster.

Installing MemSQL from an RPM

  1. Using the information provided in the license confirmation email, run wget to download the RPM onto the server. This email also contains your license key.
$ wget http://download.memsql.com/[license_key]/memsql-[version_#].x86_64.rpm
  2. Run the RPM Package Manager.
$ rpm -i memsql-[version_#].x86_64.rpm
  3. Start the MemSQL service.
$ service memsql start

MemSQL’s RPM supports a standard set of commands (start, stop, restart, and status) as well as a check-system command to validate the underlying system.

Installing MemSQL from a .deb Package

  1. Using the information provided in the license confirmation email, run wget to download the deb file onto the server. This email also contains your license key.
$ wget http://download.memsql.com/[license_key]/memsql-[version_#].x86_64.deb
  2. Make sure the OS is up to date and install the g++ compiler.
$ sudo apt-get update
$ sudo apt-get install -y g++
  3. Run the Debian Package Manager.
$ sudo dpkg -i memsql-[version_#].x86_64.deb
  4. Start the MemSQL service.
$ sudo service memsql start

MemSQL’s .deb package supports a standard set of commands (start, stop, restart, and status) as well as a check-system command to validate the underlying system.

Installing MemSQL from a Direct Binary

  1. Using the information provided in the license confirmation email, run wget to download the binary onto the server. This email also contains your license key.
$ wget http://download.memsql.com/[license_key]/memsqlbin_amd64.tar.gz
  2. Expand the downloaded tar.gz archive.
$ tar -xzf memsqlbin_amd64.tar.gz
$ cd memsqlbin
  3. Run the check_system script, which verifies that the target system is compatible with MemSQL, and then installs any necessary dependencies. The script will guide you through additional steps, if needed.
$ ./check_system
  4. Start the server.
$ ./memsqld

To stop the MemSQL database, send SIGTERM to the MemSQL process. Example:

$ killall memsqld

MemSQL has disk-durability turned on by default. After a server restart, MemSQL will read from the snapshot and log files to recover the data that was present before the shutdown.
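Following the SIGTERM advice above, a clean-shutdown sketch waits for the process to actually exit before, say, rebooting the machine. The stop_and_wait helper name is ours, not MemSQL's:

```shell
# Send SIGTERM to a named process and block until it has exited.
# Returns immediately (success) if no such process is running.
stop_and_wait() {
  local pid
  pid=$(pidof "$1") || return 0          # nothing to stop
  kill -TERM $pid                        # unquoted: pidof may return several pids
  while kill -0 $pid 2>/dev/null; do sleep 1; done
}
# stop_and_wait memsqld
```

Waiting for exit avoids racing a reboot against the server's final log writes.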

Verifying Your Installation

To confirm that MemSQL was installed successfully, try connecting to the MemSQL server. MemSQL is protocol-compatible with MySQL, which means that the open source MySQL command-line client can be used to connect to a MemSQL server.

  1. Start the server, if it is not already running.
  2. On the host where MemSQL is running, use the MySQL client to connect to the MemSQL server with the appropriate command-line options.
$ mysql -u root -h 127.0.0.1 -P 3306 --prompt="memsql> "

Warning

The string "localhost" will not work in place of the IP address 127.0.0.1 in the commands above. The MySQL client connects to localhost via a Unix-domain socket rather than via the specified TCP port, so the host must always be specified explicitly as an IP address on the command line.

Running this command, with these options, will connect to the MemSQL server running on the same host, at TCP port 3306 (the standard network port for the MySQL protocol), running as MemSQL user root, with no password. Installation succeeded if you are able to connect to the database.

Note

Throughout this MemSQL documentation, MemSQL is assumed to be running locally on port 3306. The switches -h 127.0.0.1 and -P 3306, respectively, are used in doc examples. If you run MemSQL on a different host/port combination, just substitute your own hostname or IP address values. However, do not use localhost in place of the IP address.
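If you connect to a non-default host/port combination often, the substitution described above can be wrapped in a small shell function (the function name and its defaults are ours, for convenience only):

```shell
# Wrap the documented client invocation; arguments default to the local
# server on the standard port, matching the examples in this guide.
memsql_client() {
  mysql -u root -h "${1:-127.0.0.1}" -P "${2:-3306}" --prompt="memsql> "
}
# memsql_client                    # local server, port 3306
# memsql_client 192.168.1.110 3307 # remote server, alternate port
```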

Troubleshooting Tips

Port Conflicts

If MySQL is already running on the system and occupying port 3306, you will see an error similar to:

120501  3:04:15 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use
120501  3:04:15 [ERROR] Do you already have another mysqld server running on port: 3306 ?
120501  3:04:15 [ERROR] Aborting

The workaround is to add -P 3307 to the ./memsqld startup command. Thereafter, connect with mysql -u root -h 127.0.0.1 -P 3307 --prompt="memsql> ".

Access Denied Error

If you see an error similar to:

ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

then a default password is probably configured in the global my.cnf file. The MySQL client refers to this file for default parameters to use when connecting to the database. If the global my.cnf file contains a line resembling:

password = XXXXXX

then the MySQL client will default to connecting to MemSQL using that password. MemSQL comes installed with a root user and no password. Use either of the following workarounds to resolve this issue:

  • Remove this line from my.cnf
  • Specify --password='' as an argument to the connection command: mysql -u root --password='' -h 127.0.0.1 -P 3306 --prompt="memsql> "

Example: Running SQL Commands Against the MemSQL Database

After you have successfully connected to the MemSQL server, you can watch MemSQL in action by running SQL commands against the MemSQL database (just as you would in MySQL). Feel free to experiment. The SQL statements currently supported by MemSQL are documented in SQL Statements. If you have any questions or issues, contact MemSQL support at support@memsql.com.

Note

When you try a new kind of query for the first time, it might take longer to run while MemSQL compiles the SQL command to optimized native code (see code generation). Every time MemSQL sees a new query skeleton (the query without numbers and strings), it generates and compiles C++ code, and then loads the compiled query plan into the database. Subsequent executions, however, complete more quickly because when MemSQL sees a query with the same skeleton, it runs the query as a tightly-optimized sequence of native instructions. To see a list of compiled plans, run SHOW PLANCACHE.

Example SQL Commands

> create database test;
> use test;
> create table T (id int primary key);
> insert into T values (1);
> insert into T values (2), (3);
> select * from T;
> update T set id = id + 3;
> select * from T;

Example MySQL Command Line Interaction

The following is an example interaction session with MemSQL using the MySQL command-line client.

memsql> SHOW DATABASES;
Empty set (0.00 sec)
memsql> CREATE DATABASE MemSQL_tutorial;
Query OK, 0 rows affected (8.93 sec)
memsql> SHOW DATABASES;
+-----------------+
| Database        |
+-----------------+
| MemSQL_tutorial |
+-----------------+
1 row in set (0.00 sec)
memsql> USE MemSQL_tutorial;
Database changed
memsql> CREATE TABLE t(id INT PRIMARY KEY AUTO_INCREMENT, v VARCHAR(10) NOT NULL);
Query OK, 0 rows affected (1.42 sec)
memsql> SHOW TABLES;
+---------------------------+
| Tables_in_MemSQL_tutorial |
+---------------------------+
| t                         |
+---------------------------+
1 row in set (0.00 sec)
memsql> DESCRIBE t;
+-------+-------------+------+-----+---------+----------------+
| Field | Type        | Null | Key | Default | Extra          |
+-------+-------------+------+-----+---------+----------------+
| id    | int(11)     | NO   | PRI | NULL    | auto_increment |
| v     | varchar(10) | NO   |     | NULL    |                |
+-------+-------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)
memsql> INSERT INTO t (v) VALUES ('hello');
Query OK, 1 row affected (0.84 sec)
memsql> INSERT INTO t (v) VALUES ('goodbye');
Query OK, 1 row affected (0.00 sec)
memsql> SELECT * FROM t;
+----+---------+
| id | v       |
+----+---------+
|  1 | hello   |
|  2 | goodbye |
+----+---------+
2 rows in set (0.78 sec)

Note

The first INSERT statement may run more slowly while the command is compiled to optimized native code. Subsequently, similar INSERTs will run more quickly.

Installing MemSQL on a Cluster

Once you have learned how to install MemSQL on a single node, you can install on multiple nodes, and then connect these nodes in a MemSQL cluster. MemSQL’s distributed environment provides the additional capabilities of fault-tolerance, high availability, and high scalability by parallelizing both storage and compute work across a cluster of machines.

MemSQL Cluster Architecture

[Image: MemSQL cluster architecture diagram]

A MemSQL cluster consists of two tiers:

  • Aggregator nodes are cluster-aware query routers. An aggregator stores only metadata and acts as a gateway into the distributed system. An aggregator queries the leaves, aggregates the results, and returns them to the client. The master aggregator is a specialized aggregator responsible for cluster monitoring and failover. All DDL operations are orchestrated by the master aggregator, along with basic cluster operations.
  • Leaf nodes function as storage and compute nodes. To optimize performance, the system automatically distributes data across leaf nodes into partitions.

The minimal setup for a MemSQL cluster is just one aggregator (the master aggregator) and one leaf. You can add more aggregators, which will read metadata from the master aggregator, and can run DML (Data Manipulation Language) commands on the leaves. Aggregators and leaves share the same binary, so you can deploy the same build to every machine in the cluster.

The number of deployed aggregator and leaf nodes determines the storage size and performance of a cluster. Typical deployments have a 5:1 ratio of leaf:aggregator nodes. In a well-designed cluster:

  • applications that require higher connection capabilities from application servers have a higher aggregator-to-leaf node ratio
  • applications with larger storage requirements have a higher leaf-to-aggregator node ratio
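As a back-of-envelope illustration of the typical 5:1 split, this sketch divides a hypothetical 12-node cluster (the node count and the round-up rule are our assumptions):

```shell
# Split a total node count on the typical 5:1 leaf:aggregator ratio,
# rounding the aggregator count up so there is always at least one.
total=12
aggregators=$(( (total + 5) / 6 ))
leaves=$(( total - aggregators ))
echo "$aggregators aggregators, $leaves leaves"
```

Shift the split toward aggregators for connection-heavy applications, or toward leaves for storage-heavy ones, per the considerations above.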

Setting Up a Cluster

Complete the following steps to set up a MemSQL cluster.

Step 1: Install MemSQL on All Nodes in the Cluster

Begin by installing MemSQL on all the nodes in the cluster, according to the instructions in Installing MemSQL on a Node.

Step 2: Start Up a Master Aggregator

Pick a node in the cluster that will serve as the master aggregator. Append the line master-aggregator (with no arguments) to memsql.cnf and then start the server.

Alternatively, if you’re running the MemSQL binary directly, you can use the --master-aggregator switch.

Step 3: Add More Aggregators to the Cluster

To add more aggregators to a cluster, start memsqld on a node with the --master-aggregator=host:port switch, which points at the master aggregator's location.

./memsqld --master-aggregator=host:port

Step 4: Add Leaves to the Cluster

Note

Leaf nodes must be able to communicate with aggregators via the MySQL protocol.

Repeat the following instructions for each leaf:

  1. Start a MemSQL instance on a node using the following command.
./memsqld
  2. Run ADD LEAF to add the node (as a leaf) to the cluster.
memsql> ADD LEAF 'root'@'192.168.1.110':3306;
Query OK, 1 row affected (0.00 sec)

Example Setup

The following example assumes that MemSQL is running on two nodes (192.168.1.110:3306 and 192.168.1.111:3306). To add them to the cluster, use the MySQL client to connect to the master aggregator, and then run the ADD LEAF command (see the ADD LEAF reference).

memsql> ADD LEAF 'root'@'192.168.1.110':3306;
Query OK, 1 row affected (0.00 sec)

memsql> ADD LEAF 'root'@'192.168.1.111':3306;
Query OK, 1 row affected (0.00 sec)

memsql> SHOW LEAVES;
+---------------+-------+--------------------+-----------+-----------+--------+--------------------+---------------------------+
| Host          | Port  | Availability_Group | Pair_Host | Pair_Port | State  | Opened_Connections | Average_Roundtrip_Latency |
+---------------+-------+--------------------+-----------+-----------+--------+--------------------+---------------------------+
| 192.168.1.110 |  3306 |                  1 | NULL      |      NULL | online |                  9 |                     0.290 |
| 192.168.1.111 |  3306 |                  1 | NULL      |      NULL | online |                  0 |                     0.302 |
+---------------+-------+--------------------+-----------+-----------+--------+--------------------+---------------------------+
2 rows in set (0.00 sec)

The Availability_Group and Leaf Pair concepts relate to MemSQL’s high availability capabilities, which you will learn about later in this topic.

Creating Databases and Tables (DDL)

Running DDL (Data Definition Language) statements in a MemSQL cluster resembles running them on a single node. DDL statements are executed only on the master aggregator, however, which then automatically propagates (via replication) any changes to the other aggregators in the cluster.

Supported Table Types

A MemSQL cluster supports two types of tables:

  • Distributed tables (default) are sharded across the cluster.
  • Reference tables are mirrored on every aggregator and leaf via asynchronous replication. Applications can write data into reference tables from the master aggregator. MemSQL distributes joins against reference tables on leaves for efficient computation.

The following example creates:

  • two reference tables: pages to store every page on the MemSQL website, and users to store user information
  • one distributed table (clicks) to track every click that occurs on a specific web page
memsql> CREATE DATABASE web;
Query OK, 1 row affected (0.41 sec)

memsql> USE web;
Database changed
memsql> CREATE REFERENCE TABLE pages(
    ->     page_id INT PRIMARY KEY AUTO_INCREMENT,
    ->     page_url VARCHAR(1000)
    -> );
Query OK, 0 rows affected (4.21 sec)

memsql> CREATE REFERENCE TABLE users(
    ->     user_id INT PRIMARY KEY AUTO_INCREMENT,
    ->     user_name VARCHAR(1000)
    -> );
Query OK, 0 rows affected (4.19 sec)

memsql> CREATE TABLE clicks(
    ->     click_id BIGINT PRIMARY KEY AUTO_INCREMENT,
    ->     user_id INT,
    ->     page_id INT,
    ->     ts TIMESTAMP
    -> );
Query OK, 0 rows affected (8.32 sec)

memsql> SHOW TABLES EXTENDED;
+------------------+-------------+
| Tables_in_web    | distributed |
+------------------+-------------+
| clicks           |           1 |
| pages            |           0 |
| users            |           0 |
+------------------+-------------+
3 rows in set (0.00 sec)

The pages and users tables are reference tables, which means that they are replicated to every aggregator and leaf in the distributed environment. The clicks table is distributed, which means that it is split into a finite number of partitions that are distributed evenly across the leaves. Depending on the cluster’s redundancy level (described later in this topic), partitions can live on multiple machines to support MemSQL’s high availability capabilities.

Showing the Distribution of Data

To show how data is distributed in the cluster, run SHOW PARTITIONS ON web.

memsql> SHOW PARTITIONS ON web;
+---------+---------------+-------+--------+
| Ordinal | Host          | Port  | Role   |
+---------+---------------+-------+--------+
|       0 | 192.168.1.110 |  3306 | Master |
|       1 | 192.168.1.110 |  3306 | Master |
|       2 | 192.168.1.110 |  3306 | Master |
|       3 | 192.168.1.110 |  3306 | Master |
|       4 | 192.168.1.110 |  3306 | Master |
|       5 | 192.168.1.110 |  3306 | Master |
|       6 | 192.168.1.110 |  3306 | Master |
|       7 | 192.168.1.110 |  3306 | Master |
+---------+---------------+-------+--------+
8 rows in set (0.00 sec)

In this example, all 8 partitions map to the same leaf.

Running SQL Commands (DML)

To run DML (Data Manipulation Language, see http://en.wikipedia.org/wiki/Data_manipulation_language) statements, connect to any of the MemSQL aggregators. The following example inserts some records into pages, users, and clicks.

memsql> INSERT INTO pages (page_url) VALUES ('memsql.com');
memsql> INSERT INTO pages (page_url) VALUES ('developers.memsql.com');
memsql> INSERT INTO pages (page_url) VALUES ('http://www.memsql.com/download');

memsql> INSERT INTO users (user_name) VALUES ('jake');
memsql> INSERT INTO users (user_name) VALUES ('john');
memsql> INSERT INTO users (user_name) VALUES ('peter');

memsql> INSERT INTO clicks (user_id, page_id) VALUES (1, 1);
memsql> INSERT INTO clicks (user_id, page_id) VALUES (1, 1);
memsql> INSERT INTO clicks (user_id, page_id) VALUES (1, 1);
memsql> INSERT INTO clicks (user_id, page_id) VALUES (1, 1);
memsql> INSERT INTO clicks (user_id, page_id) VALUES (1, 2);
memsql> INSERT INTO clicks (user_id, page_id) VALUES (2, 1);
memsql> INSERT INTO clicks (user_id, page_id) VALUES (2, 3);
memsql> INSERT INTO clicks (user_id, page_id) VALUES (3, 3);
memsql> INSERT INTO clicks (user_id, page_id) VALUES (3, 3);

Once data resides in both reference and distributed tables, run an analytical query.

memsql> SELECT COUNT(*), user_name, page_url from clicks, users, pages
    ->     WHERE clicks.user_id = users.user_id AND pages.page_id = clicks.page_id
    ->     GROUP BY users.user_id, pages.page_id
    ->     ORDER BY COUNT(*) DESC;
+----------+-----------+--------------------------------+
| COUNT(*) | user_name | page_url                       |
+----------+-----------+--------------------------------+
|        4 | jake      | memsql.com                     |
|        2 | peter     | http://www.memsql.com/download |
|        1 | jake      | developers.memsql.com          |
|        1 | john      | memsql.com                     |
|        1 | john      | http://www.memsql.com/download |
+----------+-----------+--------------------------------+
5 rows in set (0.00 sec)

In this example, the query executes as a distributed aggregation on the leaves, with a final merge step on the aggregators.

Note

For information about the SQL surface area that is currently supported by an aggregator, see surface area.

Setting Up High Availability

This section introduces how to configure a MemSQL cluster for high availability.

About High Availability and Failover

MemSQL’s high availability capabilities provide maximum system uptime using replication and automatic fail-over to redundant components.

Redundancy Level

In order to sustain high availability, a distributed MemSQL environment can be configured to store data redundantly. The redundancy_level variable (either 1 or 2) represents the number of copies of each partition to maintain in the cluster. When redundancy_level = 2, then every partition has one master instance and one slave instance, which are hooked together via replication.

Note

In the event of a node failure, the master aggregator performs automatic failover, promoting slave partitions on the paired node to the Master role. MemSQL will not automatically rebalance data if a failure occurs: it promotes only enough slave partitions to bring the table back online, which avoids further destabilizing the system with expensive data movement operations.

If a MemSQL leaf node becomes unavailable, running SHOW LEAVES will display the node in an offline state (not online). When the machine comes back up and MemSQL is again reachable on that machine, the leaf node will show in a detached state. You can reintroduce the leaf node to the cluster using the ATTACH LEAF command, which will trigger a local rebalancing of the data across the leaf and its pair (if the redundancy level is 2).

After adding or removing nodes from the cluster, restore the data redundancy property by running REBALANCE PARTITIONS ON db, which distributes data among the servers you have provisioned. When adding new machines, run ADD LEAF before calling REBALANCE PARTITIONS so that MemSQL includes the new leaves.
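The ordering (ADD LEAF first, then REBALANCE PARTITIONS) can be scripted as a dry run that prints the statements to feed into the mysql client on the master aggregator; the hostnames here are hypothetical:

```shell
# Print the scale-out statements in the required order: register each new
# leaf, then rebalance. Nothing is executed against the cluster here.
new_leaves="192.168.1.114 192.168.1.115"
for host in $new_leaves; do
  printf "ADD LEAF 'root'@'%s':3306;\n" "$host"
done
printf 'REBALANCE PARTITIONS ON web;\n'
# e.g. pipe the output into: mysql -u root -h <master_aggregator> -P 3306
```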

Availability Groups

MemSQL categorizes leaves together in availability groups, numbered 1, 2, ... The number of availability groups matches the redundancy level of the cluster (currently up to 2). Every machine has a corresponding pair in the other availability group. If there is an odd number of machines, then the remaining machine is left as a standby.

Within a pair, both leaves contain the same set of partitions. Master and slave partitions are split evenly between the two leaves.

Step 1: Add Leaves

Continuing with the previous cluster example, add two more leaves (192.168.1.112:3306 and 192.168.1.113:3306):

memsql> SELECT @@global.redundancy_level;
+---------------------------+
| @@global.redundancy_level |
+---------------------------+
|                         1 |
+---------------------------+
1 row in set (0.00 sec)

memsql> SET GLOBAL redundancy_level=2;
Query OK, 0 rows affected (0.00 sec)

memsql> ADD LEAF 'root'@'192.168.1.112':3306;
Query OK, 1 row affected (0.60 sec)

memsql> ADD LEAF 'root'@'192.168.1.113':3306;
Query OK, 1 row affected (0.27 sec)

memsql> SHOW LEAVES;
+---------------+-------+--------------------+---------------+-----------+--------+--------------------+---------------------------+
| Host          | Port  | Availability_Group | Pair_Host     | Pair_Port | State  | Opened_Connections | Average_Roundtrip_Latency |
+---------------+-------+--------------------+---------------+-----------+--------+--------------------+---------------------------+
| 192.168.1.110 |  3306 |                  1 | 192.168.1.112 |      3306 | online |                  5 |                     0.299 |
| 192.168.1.111 |  3306 |                  1 | 192.168.1.113 |      3306 | online |                  5 |                     0.293 |
| 192.168.1.112 |  3306 |                  2 | 192.168.1.110 |      3306 | online |                  1 |                     0.271 |
| 192.168.1.113 |  3306 |                  2 | 192.168.1.111 |      3306 | online |                  1 |                     0.264 |
+---------------+-------+--------------------+---------------+-----------+--------+--------------------+---------------------------+
4 rows in set (0.00 sec)

Each of the new leaves is in Availability_Group 2, and each is paired with one of the old leaves.

Step 2: Rebalance the Cluster

The next step is to rebalance the cluster in order to:

  • balance load across the leaves
  • back up the data on 192.168.1.110:3306 and 192.168.1.111:3306 to the new leaves
memsql> REBALANCE PARTITIONS ON web;
Query OK, 1 row affected (8.18 sec)

memsql> SHOW PARTITIONS ON web;
+---------+---------------+-------+--------+
| Ordinal | Host          | Port  | Role   |
+---------+---------------+-------+--------+
|       0 | 192.168.1.112 |  3306 | Master |
|       0 | 192.168.1.110 |  3306 | Slave  |
|       1 | 192.168.1.113 |  3306 | Master |
|       1 | 192.168.1.111 |  3306 | Slave  |
|       2 | 192.168.1.113 |  3306 | Master |
|       2 | 192.168.1.111 |  3306 | Slave  |
|       3 | 192.168.1.111 |  3306 | Master |
|       3 | 192.168.1.113 |  3306 | Slave  |
|       4 | 192.168.1.112 |  3306 | Master |
|       4 | 192.168.1.110 |  3306 | Slave  |
|       5 | 192.168.1.111 |  3306 | Master |
|       5 | 192.168.1.113 |  3306 | Slave  |
|       6 | 192.168.1.110 |  3306 | Master |
|       6 | 192.168.1.112 |  3306 | Slave  |
|       7 | 192.168.1.110 |  3306 | Master |
|       7 | 192.168.1.112 |  3306 | Slave  |
+---------+---------------+-------+--------+
16 rows in set (0.00 sec)

Note that each partition is present on a leaf and its pair. Masters are split evenly between the leaves in both pairs.

Step 3: Kill a Leaf

Next, forcibly kill the leaf on 192.168.1.110:3306 by sending SIGKILL to the memsqld process.

$ killall -9 memsqld

Step 4: Check the State of the Cluster

After a couple of seconds, check the state of the cluster.

memsql> SHOW LEAVES;
+---------------+-------+--------------------+---------------+-----------+---------+--------------------+---------------------------+
| Host          | Port  | Availability_Group | Pair_Host     | Pair_Port | State   | Opened_Connections | Average_Roundtrip_Latency |
+---------------+-------+--------------------+---------------+-----------+---------+--------------------+---------------------------+
| 192.168.1.110 |  3306 |                  1 | 192.168.1.112 |      3306 | offline |                  0 |                     0.299 |
| 192.168.1.111 |  3306 |                  1 | 192.168.1.113 |      3306 | online  |                  5 |                     0.292 |
| 192.168.1.112 |  3306 |                  2 | 192.168.1.110 |      3306 | online  |                  5 |                     0.280 |
| 192.168.1.113 |  3306 |                  2 | 192.168.1.111 |      3306 | online  |                  5 |                     0.274 |
+---------------+-------+--------------------+---------------+-----------+---------+--------------------+---------------------------+
4 rows in set (0.00 sec)

memsql> SHOW PARTITIONS ON web;
+---------+---------------+-------+--------+
| Ordinal | Host          | Port  | Role   |
+---------+---------------+-------+--------+
|       0 | 192.168.1.112 |  3306 | Master |
|       1 | 192.168.1.113 |  3306 | Master |
|       1 | 192.168.1.111 |  3306 | Slave  |
|       2 | 192.168.1.113 |  3306 | Master |
|       2 | 192.168.1.111 |  3306 | Slave  |
|       3 | 192.168.1.111 |  3306 | Master |
|       3 | 192.168.1.113 |  3306 | Slave  |
|       4 | 192.168.1.112 |  3306 | Master |
|       5 | 192.168.1.111 |  3306 | Master |
|       5 | 192.168.1.113 |  3306 | Slave  |
|       6 | 192.168.1.112 |  3306 | Master |
|       7 | 192.168.1.112 |  3306 | Master |
+---------+---------------+-------+--------+
12 rows in set (0.00 sec)

Note that 192.168.1.110:3306 does not show up in the partition map, and that every partition on 192.168.1.112:3306 is a Master. Furthermore, failover did not automatically restore redundancy, yet the table is online for reads and writes.

Step 5: Restart the Server

Next, restart the server on 192.168.1.110:3306. The leaf automatically transitions to the detached state (you may need to wait a few seconds). Run the ATTACH LEAF command to restore the state of the cluster.

memsql> SHOW LEAVES;
+---------------+-------+--------------------+---------------+-----------+----------+--------------------+---------------------------+
| Host          | Port  | Availability_Group | Pair_Host     | Pair_Port | State    | Opened_Connections | Average_Roundtrip_Latency |
+---------------+-------+--------------------+---------------+-----------+----------+--------------------+---------------------------+
| 192.168.1.110 |  3306 |                  1 | 192.168.1.112 |      3306 | detached |                  0 |                     0.292 |
| 192.168.1.111 |  3306 |                  1 | 192.168.1.113 |      3306 | online   |                  5 |                     0.309 |
| 192.168.1.112 |  3306 |                  2 | 192.168.1.110 |      3306 | online   |                  5 |                     0.277 |
| 192.168.1.113 |  3306 |                  2 | 192.168.1.111 |      3306 | online   |                  5 |                     0.289 |
+---------------+-------+--------------------+---------------+-----------+----------+--------------------+---------------------------+
4 rows in set (0.00 sec)

memsql> ATTACH LEAF '192.168.1.110':3306;
Query OK, 1 row affected (1.11 sec)

memsql> SHOW LEAVES;
+---------------+-------+--------------------+---------------+-----------+--------+--------------------+---------------------------+
| Host          | Port  | Availability_Group | Pair_Host     | Pair_Port | State  | Opened_Connections | Average_Roundtrip_Latency |
+---------------+-------+--------------------+---------------+-----------+--------+--------------------+---------------------------+
| 192.168.1.110 |  3306 |                  1 | 192.168.1.112 |      3306 | online |                  5 |                     0.286 |
| 192.168.1.111 |  3306 |                  1 | 192.168.1.113 |      3306 | online |                  5 |                     0.275 |
| 192.168.1.112 |  3306 |                  2 | 192.168.1.110 |      3306 | online |                  1 |                     0.305 |
| 192.168.1.113 |  3306 |                  2 | 192.168.1.111 |      3306 | online |                  1 |                     0.299 |
+---------------+-------+--------------------+---------------+-----------+--------+--------------------+---------------------------+
4 rows in set (0.00 sec)

memsql> SHOW PARTITIONS ON web;
+---------+---------------+-------+--------+
| Ordinal | Host          | Port  | Role   |
+---------+---------------+-------+--------+
|       0 | 192.168.1.110 |  3306 | Master |
|       0 | 192.168.1.112 |  3306 | Slave  |
|       1 | 192.168.1.113 |  3306 | Master |
|       1 | 192.168.1.111 |  3306 | Slave  |
|       2 | 192.168.1.113 |  3306 | Master |
|       2 | 192.168.1.111 |  3306 | Slave  |
|       3 | 192.168.1.111 |  3306 | Master |
|       3 | 192.168.1.113 |  3306 | Slave  |
|       4 | 192.168.1.110 |  3306 | Master |
|       4 | 192.168.1.112 |  3306 | Slave  |
|       5 | 192.168.1.111 |  3306 | Master |
|       5 | 192.168.1.113 |  3306 | Slave  |
|       6 | 192.168.1.112 |  3306 | Master |
|       6 | 192.168.1.110 |  3306 | Slave  |
|       7 | 192.168.1.112 |  3306 | Master |
|       7 | 192.168.1.110 |  3306 | Slave  |
+---------+---------------+-------+--------+
16 rows in set (0.00 sec)

Note that 192.168.1.110:3306 has been reattached to the cluster, and that each of the 8 partitions has both a master and a slave.

Shutting Down and Restarting the Cluster

To avoid triggering automatic failure detection during a clean shutdown, stop and restart MemSQL instances in the correct order.

Shutting Down the Cluster

To avoid triggering failover detection:

  1. Shut down the master aggregator first.
  2. Shut down the remaining aggregators and leaves (in any order).
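The shutdown order above can be sketched as a dry run that prints one stop command per host, master aggregator first; the hostnames and the ssh-based approach are hypothetical:

```shell
# Print stop commands in shutdown order: the master aggregator host comes
# first, then every other aggregator and leaf. Commands are printed only.
master="192.168.1.100"
others="192.168.1.101 192.168.1.110 192.168.1.111 192.168.1.112 192.168.1.113"
for host in $master $others; do
  echo "ssh $host 'sudo service memsql stop'"
done
```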

Restarting the Cluster

To bring the cluster back up:

  1. Restart all of the leaves.
  2. Verify that all the leaves are reachable (using a SELECT 1 query).
  3. Turn on the master aggregator.
  4. Turn on the remaining aggregators.
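The SELECT 1 reachability check from step 2 can be wrapped in a small helper (the name leaf_up is ours; it assumes the mysql client is installed and that leaves accept root with no password, as elsewhere in this guide):

```shell
# Return success if a leaf answers a trivial query within 5 seconds.
leaf_up() {
  mysql -u root -h "$1" -P "$2" --connect-timeout=5 -e 'SELECT 1' >/dev/null 2>&1
}
# for h in 192.168.1.110 192.168.1.111; do
#   leaf_up "$h" 3306 || echo "$h is not reachable"
# done
```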