
MySQL-cluster how-to (with load-balancing & failover)

Since I recently configured and installed a MySQL cluster, I thought I’d share the procedure. A lot of the examples out there explain how to set it all up on the same machine for “testing purposes”, which, in theory, is the same as setting it up on different machines. I’ll be explaining the latter, that is, installing it onto different machines.

To achieve true redundancy in a MySQL cluster, you need at least 3 separate, physical machines: two data-nodes, and one management-node. The latter can be a virtual machine, as long as it doesn’t run on the two data-nodes (which means you still need at least 3 physical machines). You can also use the management-node as a mysql-proxy for transparent failover/load-balancing for the clients.

My setup was done using two physical machines (db0 and db1) running Ubuntu 8.04 (Hardy Heron), and one virtual machine (mysql-mgmt) running Debian 6 (Squeeze). The VM does not run on the two physical machines. db0 and db1 are the actual data-nodes/servers, and mysql-mgmt is going to be used as the management-node for the cluster. In addition, mysql-mgmt is also going to be configured with mysql-proxy, so that we have transparent failover/load-balancing for the clients.

Update 2011-10-26: I’ve changed the setup a bit, compared to my original walkthrough. I hit some memory limits when using the NDB engine. This caused MySQL to fail when inserting new rows (stating that the table was full). There are some variables you can set (DataMemory and IndexMemory) to increase the memory available to the ndb-process (which was what caused the issues). Since I had a limited amount of memory available on the mysql-mgmt virtual machine (and lots on db0/1), I decided to run ndb_mgmd on db0 + db1. Apparently, you can do this, and it’s still redundant. The post has been changed to reflect this.

My setup was done using two physical machines (db0 and db1) running Ubuntu 8.04 (Hardy Heron), and one virtual machine (mysql-proxy) running Debian 6 (Squeeze). Previously, the virtual machine ran ndb_mgmd, but due to the above-mentioned issues, both db0 and db1 now run their own ndb_mgmd-processes. The virtual machine is now only used to run mysql-proxy (and its hostname has been changed to reflect this).
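For reference, the resulting topology looks roughly like this (hostnames are the ones used throughout this post):

                  clients
                     |
             mysql-proxy (VM)
              /             \
      db0.internal       db1.internal
      (ndbd, ndb_mgmd,   (ndbd, ndb_mgmd,
       mysqld)            mysqld)

The clients only ever talk to the proxy; everything behind it is transparent to them.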

Update 2012-01-30: morphium pointed out that /etc/my.cnf needs its own [mysql_cluster]-section, so that ndbd and ndb_mgmd connect to something other than localhost (which is the default if no explicit host is defined). The post has been updated to reflect this.

Update 2013-01-03: Dave Weddell pointed out that newer versions of mysql-proxy had a different syntax for the proxy-backend-addresses parameter. Instead of having multiple proxy-backend-addresses-parameters (one for each backend), it wants all the backends in one parameter (comma separated). The post has been updated to reflect this.

Update 2013-05-15: Richard pointed out that he had to alter two more tables to use the ‘ndbcluster’-engine in order for it to work. It was not needed when I originally set this up, but recent versions might have introduced more tables. I’ve updated the post to reflect this.

Prepare db0 + db1

Go to MySQL’s homepage, and find the download-link for the latest MySQL Cluster-package, available here. Then proceed as shown below (changing the ‘datadir’ to your liking);

root@db0:~# cd /usr/local/
root@db0:/usr/local# wget -q http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.1/mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23.tar.gz/from/http://mirror.switch.ch/ftp/mirror/mysql/
root@db0:/usr/local# mv index.html mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23.tar.gz
root@db0:/usr/local# tar -xzf mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23.tar.gz
root@db0:/usr/local# rm -f mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23.tar.gz
root@db0:/usr/local# ln -s mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23 mysql
root@db0:/usr/local# ls -ald mysql*
lrwxrwxrwx  1 root root    45 2011-03-04 18:30 mysql -> mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23
drwxr-xr-x 13 root mysql 4096 2011-03-05 00:42 mysql-cluster-gpl-7.1.10-linux-x86_64-glibc23
root@db0:/usr/local# cd mysql
root@db0:/usr/local/mysql# mkdir /opt/oracle/disk/mysql_cluster
root@db0:/usr/local/mysql# mkdir /opt/oracle/disk/mysql_cluster/mysqld_data
root@db0:/usr/local/mysql# groupadd mysql
root@db0:/usr/local/mysql# useradd -g mysql mysql
root@db0:/usr/local/mysql# chown mysql:root /opt/oracle/disk/mysql_cluster/mysqld_data
root@db0:/usr/local/mysql# scripts/mysql_install_db --user=mysql --no-defaults --datadir=/opt/oracle/disk/mysql_cluster/mysqld_data/
root@db0:/usr/local/mysql# cp support-files/mysql.server /etc/init.d/
root@db0:/usr/local/mysql# chmod +x /etc/init.d/mysql.server
root@db0:/usr/local/mysql# update-rc.d mysql.server defaults

Repeat on db1. Do not start the MySQL-server yet.

root@db0:/usr/local/mysql# vim /etc/my.cnf

Put the following in the my.cnf-file;

[mysqld]
basedir=/usr/local/mysql
datadir=/opt/oracle/disk/mysql_cluster/mysqld_data
event_scheduler=on
default-storage-engine=ndbcluster
ndbcluster
ndb-connectstring=db0.internal,db1.internal     # IP/host of the NDB_MGMD-nodes

key_buffer = 512M
key_buffer_size = 512M
sort_buffer_size = 512M
table_cache = 1024
read_buffer_size = 512M

[mysql_cluster]
ndb-connectstring=db0.internal,db1.internal     # IP/host of the NDB_MGMD-nodes

Repeat on db1. And (again), do not start the MySQL-server yet.
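If you want to sanity-check that both sections of the my.cnf actually get read (and aren’t silently ignored because of a typo), you can dump them with the my_print_defaults tool that ships in the cluster package; each option should be echoed back in –option=value form:

root@db0:~# /usr/local/mysql/bin/my_print_defaults mysqld mysql_cluster

This is just a convenience check; nothing in the rest of the setup depends on it.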

Prepare ndb_mgmd

We can now prepare ndb_mgmd as follows;

root@db0:~# cd /usr/local/mysql
root@db0:/usr/local/mysql# chmod +x bin/ndb_mgm*
root@db0:/usr/local/mysql# mkdir /var/lib/mysql-cluster
root@db0:/usr/local/mysql# vim /var/lib/mysql-cluster/config.ini

Put the following into the config.ini-file;

[NDBD DEFAULT]
NoOfReplicas=2
DataDir=/var/lib/mysql-cluster
DataMemory=8G
IndexMemory=4G

[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]

# 2 Management Servers
[NDB_MGMD]
HostName=db0.internal    # IP/host of first NDB_MGMD-node
NodeId=1

[NDB_MGMD]
HostName=db1.internal    # IP/host of second NDB_MGMD-node
NodeId=2

# 2 Storage Engines
[NDBD]
HostName=db0.internal    # IP/host of first NDBD-node
NodeId=3
[NDBD]
HostName=db1.internal    # IP/host of second NDBD-node
NodeId=4

# 2 MySQL Clients
# Leave this blank to allow rapid changes of the mysql clients.
[MYSQLD]
[MYSQLD]

There are two variables in the above config you’ll want to pay attention to; DataMemory and IndexMemory. These values need to be set according to how large your tables are. If you don’t set them, they default to 80MB (DataMemory) and 18MB (IndexMemory), which is not much (after around 200,000 rows, you’ll get messages stating that the table is full when trying to insert new rows). My values are probably way too high for most cases, but since we have a few tables with a lot of rows, and lots of RAM, I just set them a bit high to avoid issues. Keep in mind that ndbd will allocate/reserve the amount of memory you set for DataMemory (so with my config above, ndbd uses 8GB of memory from the second the service is started).
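Once the cluster is up and running, you can check how much of the configured DataMemory/IndexMemory is actually in use from the management client, which makes it easier to tune these values instead of guessing:

root@db0:~# ndb_mgm -e "all report memoryusage"

Each data node then reports its current data and index memory usage as a percentage of the configured values.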

Now we’re ready to start the management-server for the first time. Please notice the use of the ‘--initial’ parameter. This should only be used the first time you start it. Once you’ve started it for the first time, you remove the ‘--initial’ parameter.

root@db0:/usr/local/mysql/bin# ndb_mgmd -f /var/lib/mysql-cluster/config.ini --initial --config-dir=/var/lib/mysql-cluster/
MySQL Cluster Management Server mysql-5.1.51 ndb-7.1.10

Repeat on db1.

When done, we go back to the storage-servers.

Finalize db0 + db1

Now we make the ndb data-dirs, and start the ndb-service;

root@db0:/usr/local/mysql# mkdir /var/lib/mysql-cluster
root@db0:/usr/local/mysql# cd /var/lib/mysql-cluster
root@db0:/var/lib/mysql-cluster# /usr/local/mysql/bin/ndbd --initial
2011-03-04 22:51:54 [ndbd] INFO     -- Angel connected to 'localhost:1186'
2011-03-04 22:51:54 [ndbd] INFO     -- Angel allocated nodeid: 3

Repeat on db1.

We’re now going to alter some of the tables, so that they use the ‘ndbcluster’-engine. This is to ensure that user/host-privileges also get synced (so that if you add a user on one server, it gets replicated to the other).

root@db0:/var/lib/mysql-cluster# /etc/init.d/mysql.server start
Starting MySQL.. * 
root@db1:/var/lib/mysql-cluster# /etc/init.d/mysql.server start
Starting MySQL.. *
root@db0:/usr/local/mysql# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.51-ndb-7.1.10-cluster-gpl MySQL Cluster Server (GPL)
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed

mysql> ALTER TABLE mysql.user ENGINE=NDBCLUSTER;
Query OK, 6 rows affected (0.25 sec)
Records: 6  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE mysql.db ENGINE=NDBCLUSTER;
Query OK, 2 rows affected (0.16 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE mysql.host ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.18 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE mysql.tables_priv ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.16 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE mysql.columns_priv ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.16 sec)
Records: 0  Duplicates: 0  Warnings: 0

Richard pointed out that he had to alter two more tables to use the ‘ndbcluster’-engine in order for it to work. It was not needed when I originally set this up, but recent versions might have introduced more tables. If you encounter errors, alter the following two tables as well;

mysql> ALTER TABLE mysql.procs_priv ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.19 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE mysql.proxies_priv ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.31 sec)
Records: 0  Duplicates: 0  Warnings: 0

Do the following regardless of whether you altered the two tables above;

mysql> SET GLOBAL event_scheduler=1;
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE EVENT `mysql`.`flush_priv_tables` ON SCHEDULE EVERY 30 second ON COMPLETION PRESERVE DO FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

mysql> exit

Now we’ll stop the MySQL-service on both servers, and copy the data on db0 over to db1. Then we’ll start the servers again.

root@db0:/usr/local/mysql# /etc/init.d/mysql.server stop
root@db1:/usr/local/mysql# /etc/init.d/mysql.server stop
root@db0:/opt/oracle/disk/mysql_cluster# scp -r mysqld_data/ db1:/opt/oracle/disk/mysql_cluster/
root@db0:/usr/local/mysql# /etc/init.d/mysql.server start
Starting MySQL.. *
root@db1:/usr/local/mysql# /etc/init.d/mysql.server start
Starting MySQL.. *

Testing

So far, so good. We’re now going to check that the management/control of the cluster works as it should.

root@db0:~# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: db0.internal:1186
Cluster Configuration
---------------------
[ndbd(NDB)]	2 node(s)
id=3	@10.10.10.10  (mysql-5.1.51 ndb-7.1.10, Nodegroup: 0)
id=4	@10.10.10.20  (mysql-5.1.51 ndb-7.1.10, Nodegroup: 0, Master)

[ndb_mgmd(MGM)]	2 node(s)
id=1	@10.10.10.10  (mysql-5.1.51 ndb-7.1.10)
id=2	@10.10.10.20  (mysql-5.1.51 ndb-7.1.10)

[mysqld(API)]	2 node(s)
id=5	@10.10.10.10  (mysql-5.1.51 ndb-7.1.10)
id=6	@10.10.10.20  (mysql-5.1.51 ndb-7.1.10)

Seems to be working just fine! However, we also want to make sure that replication works. We’re going to populate a database with some data, and check that it replicates to the other server. We’re also going to shut down one server, alter some data, and start it again, to see if the data synchronizes.

root@db0:/var/lib/mysql-cluster# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.51-ndb-7.1.10-cluster-gpl MySQL Cluster Server (GPL)
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,and you are welcome to modify and redistribute it under the GPL v2 license
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test
Database changed

mysql> CREATE TABLE loltest (i INT) ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.15 sec)

mysql> INSERT INTO loltest () VALUES (1); 
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM loltest;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.00 sec)

We now move over to db1 to check if it got replicated.

root@db1:/var/lib/mysql-cluster# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.51-ndb-7.1.10-cluster-gpl MySQL Cluster Server (GPL)
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,and you are welcome to modify and redistribute it under the GPL v2 license
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed

mysql> SELECT * FROM loltest;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.00 sec)

That seems to be working just fine, as well. However, we also want to test that the servers synchronize when a server comes back up after being down (so that changes made to the other server get synchronized).

root@db0:/var/lib/mysql-cluster# /etc/init.d/mysql.server stop
Shutting down MySQL..... *


root@db0:~# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: db0.internal:1186
Cluster Configuration
---------------------
[ndbd(NDB)]	2 node(s)
id=3	@10.10.10.10  (mysql-5.1.51 ndb-7.1.10, Nodegroup: 0)
id=4	@10.10.10.20  (mysql-5.1.51 ndb-7.1.10, Nodegroup: 0, Master)

[ndb_mgmd(MGM)]	2 node(s)
id=1	@10.10.10.10  (mysql-5.1.51 ndb-7.1.10)
id=2	@10.10.10.20  (mysql-5.1.51 ndb-7.1.10)

[mysqld(API)]	2 node(s)
id=5 (not connected, accepting connect from any host)
id=6	@10.10.10.20  (mysql-5.1.51 ndb-7.1.10)


root@db1:/var/lib/mysql-cluster# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.51-ndb-7.1.10-cluster-gpl MySQL Cluster Server (GPL)

Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed

mysql> INSERT INTO loltest () VALUES (99);
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO loltest () VALUES (999);
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO loltest () VALUES (100);
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM loltest;
+------+
| i    |
+------+
|    1 |
|  100 |
|   99 |
|  999 |
+------+
4 rows in set (0.00 sec)

root@db0:/var/lib/mysql-cluster# /etc/init.d/mysql.server start
Starting MySQL. * 
root@db0:/var/lib/mysql-cluster# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.51-ndb-7.1.10-cluster-gpl MySQL Cluster Server (GPL)
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed

mysql> SELECT * FROM loltest;
+------+
| i    |
+------+
|   99 |
|  999 |
|    1 |
|  100 |
+------+
4 rows in set (0.00 sec)

That also seems to be working just fine, which concludes the configuration of the cluster itself.

Failover/redundancy/load-balancing

The only thing that’s left is to set up a mysql-proxy, which all the clients will use as their MySQL hostname. This mysql-proxy is then ‘the middleman’, completely transparent to both the servers and the clients. Should a server go down, the clients won’t notice it. It also does automatic load-balancing. If you proceed with this, keep in mind that this mysql-proxy becomes a single point of failure in the setup (which somewhat defeats the purpose of the whole MySQL cluster). In my setup, I chose to run mysql-proxy on its own virtual machine. Since this is virtualized, it’s also redundant should something happen to the underlying hardware. You could also use two physical machines with Linux-HA etc., however that’s considerably more complex than using a VM (at least if you already have virtualization available).

root@mysql-proxy:~# apt-get install mysql-proxy
root@mysql-proxy:~# mkdir /etc/mysql-proxy
root@mysql-proxy:~# cd /etc/mysql-proxy
root@mysql-proxy:/etc/mysql-proxy# vim mysql-proxy.conf

Add the following to the mysql-proxy.conf-file;

[mysql-proxy]
daemon = true
keepalive = true
proxy-address = mysql-proxy.internal:3306

# db0
proxy-backend-addresses = db0.internal:3306

# db1
proxy-backend-addresses = db1.internal:3306

Depending on what version of mysql-proxy you use, it might complain about the above syntax. On newer versions of mysql-proxy, you have to put all the backend addresses into one proxy-backend-addresses-parameter. The above config would then look like this;

[mysql-proxy]
daemon = true
keepalive = true
proxy-address = mysql-proxy.internal:3306

# db0 + db1
proxy-backend-addresses = db0.internal:3306,db1.internal:3306

Then you can start the mysql-proxy service;

root@mysql-proxy:/etc/mysql-proxy# mysql-proxy --defaults-file=/etc/mysql-proxy/mysql-proxy.conf

Now point your clients to use the hostname of the mysql-proxy server, and you’re good to go!
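A quick way to verify that the proxy actually forwards connections is to connect through it from a client and ask the backend which host answered (the username here is just a placeholder for an account you’ve granted access from the client’s host):

client:~$ mysql -h mysql-proxy.internal -P 3306 -u someuser -p -e "SHOW VARIABLES LIKE 'hostname';"

Run it a few times; with both backends up, you should see connections being distributed between db0 and db1.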

Init-scripts (automatic startup at boot)

The ‘ndbd’- and ‘ndb_mgmd’-services need init-scripts in order to be started automatically at boot. Since they don’t seem to be provided in the MySQL Cluster-package, I made them myself.

The init-script that’s included with the mysql-proxy service didn’t work for me, so I wrote my own for that as well.

Copy them, and save them in /etc/init.d/. Then make them executable (chmod +x /etc/init.d/<filename>). Finally, add them to rc.d, so that they’re loaded at boot; update-rc.d <filename> defaults.
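On db0 and db1, that boils down to the following (the mysql-proxy script goes on the proxy machine instead, with the corresponding filename):

root@db0:~# chmod +x /etc/init.d/ndbd /etc/init.d/ndb_mgmd
root@db0:~# update-rc.d ndbd defaults
root@db0:~# update-rc.d ndb_mgmd defaults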

/etc/init.d/ndbd

#!/bin/bash
# Linux Standard Base comments
### BEGIN INIT INFO
# Provides:          ndbd
# Required-Start:    $local_fs $network $syslog $remote_fs
# Required-Stop:     $local_fs $network $syslog $remote_fs
# Should-Start:
# Should-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: mysql cluster manager client
# Description:       mysql cluster manager client
### END INIT INFO

ndbd_bin=/usr/local/mysql/bin/ndbd

if ! test -x $ndbd_bin; then
	echo "Can't execute $ndbd_bin";
	exit;
fi

start_ndbd(){
	number_of_ndbd_pids=`ps aux|grep -iv "grep"|grep -i "/usr/local/mysql/bin/ndbd"|wc -l`
	if [ $number_of_ndbd_pids -eq 0 ]; then
		$ndbd_bin
		echo "ndbd started."
	else
		echo "ndbd is already running."
	fi
}

stop_ndbd(){
	number_of_ndbd_pids=`ps aux|grep -iv "grep"|grep -i "/usr/local/mysql/bin/ndbd"|wc -l`
	if [ $number_of_ndbd_pids -ne 0 ]; then
		for ndbd_pid in `pgrep ndbd`; do
			kill $ndbd_pid 2> /dev/null
		done

		# give the processes a moment to exit before re-checking
		sleep 2
		number_of_ndbd_pids=`ps aux|grep -iv "grep"|grep -i "/usr/local/mysql/bin/ndbd"|wc -l`

		if [ $number_of_ndbd_pids -eq 0 ]; then
			echo "ndbd stopped."
		else
			echo "Could not stop ndbd."
		fi
	else
		echo "ndbd is not running."
	fi
}


case "$1" in
    'start' )
        start_ndbd
        ;;
    'stop' )
	stop_ndbd
        ;;
    'restart' )
        stop_ndbd
	start_ndbd
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}" >&2
        ;;
esac

/etc/init.d/ndb_mgmd

#!/bin/bash
# Linux Standard Base comments
### BEGIN INIT INFO
# Provides:          ndb_mgmd
# Required-Start:    $local_fs $network $syslog $remote_fs
# Required-Stop:     $local_fs $network $syslog $remote_fs
# Should-Start:
# Should-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: mysql cluster manager
# Description:       mysql cluster manager
### END INIT INFO

ndb_mgmd=/usr/local/mysql/bin/ndb_mgmd
config_file=/var/lib/mysql-cluster/config.ini
config_dir=/var/lib/mysql-cluster

if ! test -x $ndb_mgmd; then
	echo "Can't execute $ndb_mgmd"	
	exit;
fi

start_ndb_mgmd(){
	number_of_ndb_mgmd_pids=`ps aux|grep -iv "grep"|grep -i "$ndb_mgmd"|wc -l`
	if [ $number_of_ndb_mgmd_pids -eq 0 ]; then
		$ndb_mgmd -f $config_file --config-dir=$config_dir
		echo "ndb_mgmd started."
	else
		echo "ndb_mgmd is already running."
	fi
}

stop_ndb_mgmd(){
	number_of_ndb_mgmd_pids=`ps aux|grep -iv "grep"|grep -i "$ndb_mgmd"|wc -l`
	if [ $number_of_ndb_mgmd_pids -ne 0 ]; then
		for ndb_mgmd_pid in `pgrep ndb_mgmd`; do
			kill $ndb_mgmd_pid 2> /dev/null
		done

		# give the processes a moment to exit before re-checking
		sleep 2
		number_of_ndb_mgmd_pids=`ps aux|grep -iv "grep"|grep -i "$ndb_mgmd"|wc -l`

		if [ $number_of_ndb_mgmd_pids -eq 0 ]; then
			echo "ndb_mgmd stopped."
		else
			echo "Could not stop ndb_mgmd."
		fi
	else
		echo "ndb_mgmd is not running."
	fi
}


case "$1" in
    'start' )
        start_ndb_mgmd
        ;;
    'stop' )
	stop_ndb_mgmd
        ;;
    'restart' )
        stop_ndb_mgmd
	start_ndb_mgmd
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}" >&2
        ;;
esac

/etc/init.d/mysql-proxy

#! /bin/bash
### BEGIN INIT INFO
# Provides:             mysql-proxy
# Required-Start:       $local_fs $network $syslog $remote_fs
# Required-Stop:        $local_fs $network $syslog $remote_fs
# Should-Start:
# Should-Stop:
# Default-Start:        2 3 4 5
# Default-Stop:         0 1 6
# Short-Description:    MySQL Proxy
# Description:          MySQL Proxy
### END INIT INFO

mysql_proxy=/usr/bin/mysql-proxy
config_file=/etc/mysql-proxy/mysql-proxy.conf

if ! test -x $mysql_proxy; then
	echo "Can't execute $mysql_proxy"	
	exit;
fi

start_mysql_proxy(){
	number_of_mysql_proxy_pids=`ps aux|grep -iv "grep"|grep -i "/usr/bin/mysql-proxy"|wc -l`
	if [ $number_of_mysql_proxy_pids -eq 0 ]; then
		$mysql_proxy --defaults-file=$config_file
		echo "mysql-proxy started."
	else
		echo "mysql-proxy is already running."
	fi
}

stop_mysql_proxy(){
	number_of_mysql_proxy_pids=`ps aux|grep -iv "grep"|grep -i "/usr/bin/mysql-proxy"|wc -l`
	if [ $number_of_mysql_proxy_pids -ne 0 ]; then
		for mysql_proxy_pid in `pgrep mysql-proxy`; do
			kill $mysql_proxy_pid 2> /dev/null
		done

		# give the processes a moment to exit before re-checking
		sleep 2
		number_of_mysql_proxy_pids=`ps aux|grep -iv "grep"|grep -i "/usr/bin/mysql-proxy"|wc -l`

		if [ $number_of_mysql_proxy_pids -eq 0 ]; then
			echo "mysql-proxy stopped."
		else
			echo "Could not stop mysql-proxy."
		fi
	else
		echo "mysql-proxy is not running."
	fi
}


case "$1" in
    'start' )
        start_mysql_proxy
        ;;
    'stop' )
	stop_mysql_proxy
        ;;
    'restart' )
        stop_mysql_proxy
	start_mysql_proxy
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}" >&2
        ;;
esac

102 Comments

  1. Romaric

    Thanks a lot !!!

  2. Randy Reedc

    This is great I will be testing this.. I have one question.
    How do i enable remote connections and if i were installing wordpress or joomla would i connect to the management servers ip?
    Thanks in advance…

    • jocke

      Ehm, if you’re also using the mysql-proxy, you would use the IP-address to the server running the mysql-proxy service. In my example above, this would indeed be the management servers IP, since mysql-proxy runs on that one.

  3. Randy Reed

    Okay i figured it out.. By the way great tutorial.
    [code]
    mysql> GRANT ALL ON foo.* TO bar@'202.54.10.20' IDENTIFIED BY 'PASSWORD';
    [/code]
    where foo is your database name bar is the username
    and the ip of the cluster proxy and your password in the password field..

    this grants access to mysql from a remote webserver.

    • jocke

      Thanks! Good that someone found the use for the tutorial (-:

      You can also use wildcards and hostnames as the criteria for the permissions;

      GRANT ALL ON foo TO 'user'@'%' IDENTIFIED BY 'secretpassword';

      GRANT ALL ON foo TO 'user'@'192.168.0.%' IDENTIFIED BY 'secretpassword';

      GRANT ALL ON foo TO 'user'@'%.mysite.com' IDENTIFIED BY 'secretpassword';

      GRANT ALL ON foo TO 'user'@'mysql.mysite.%' IDENTIFIED BY 'secretpassword';

      Etc…

  4. Ivan Rancati

    Thanks for the tutorial.
    I have downloaded the latest (7.1.15) and have an odd problem with the binary executables in the bin directory:
    bash always reports a “no such file or directory”
    It looks like they are built for a 2.4 kernel
    root@Vdb1:/usr/local/mysql/bin# file my_print_defaults
    my_print_defaults: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.4.0, not stripped

    which does not match the kernel in Debian 6 64 bit
    2.6.32-5-amd64 #1 SMP Mon Jun 13 05:49:32 UTC 2011 x86_64 GNU/Linux

    the executable bit is set for all files in /usr/local/mysql/bin (I can execute the non binary scripts, for example mysql_secure_installation). I was wondering whether “file” reports 2.6.x or 2.4 for the files that you downloaded for 7.1.10

    • Ivan Rancati

      Well, it turns out the minimal install of Debian 6 does not include glib and libc6-amd64. That was the problem

      • I’m having the same issue. I installed libc6 but have glib already, but it’s still happening. What version of glib did you install?

        BTW thanks for posting your answer after figuring it out

  5. Dave

    Jocke,

    Is the management node required? Is it not possible to just run the two clustered MySQL servers and use ucarp for failover? It would not load balance, but failover would be in place.

    Thoughts?

    • jocke

      The cluster-nodes won’t know about each other without the management-node. Each of the MySQL-servers run ndbd, which connects to ndb_mgmd running on the management-node. So at least you need ndb_mgmd when the cluster initializes.

      However, if the cluster only consists of two MySQL-nodes, I’m not really sure if ndb_mgmd is needed after the cluster has initialized. I don’t know the specific role of ndb_mgmd if one of the nodes should fail/go offline when there are only two nodes, but I’m pretty sure that if you have more than two nodes, you need ndb_mgmd to keep things in order when/if nodes fail/go offline.

      Maybe you’d be able to run ndb_mgmd on one of the MySQL-nodes (just to make the two MySQL-nodes aware of each other during initialization). If one of the nodes fail, maybe it won’t matter, even if it’s the node running ndb_mgmd…

      If you try it out, let us know how it turns out.

  6. mvaxdx

    Thank you Jocke for let us know this!
    Very Helpfull! it worked fine to me.
    im tryin to make a bash script to quickly deploy such servers

  7. Carlo

    First of all thanks for the excellent tutorial.
    I see that a /etc/init.d helper is provided for mysql, but not for ndbd.
    How do you make sure that ndbd is started automatically when the (virtual)server is restarted?

    thanks

  8. Carlo

    Sorry, ignore my question, that’s section 6. Why it did not display previously in my browser I am not sure

  9. tiffanyhong

    hi,

    im facing a problem. mySQL proxy cant seems to start. when i run ‘net start proxy’, it displays this error message:

    ‘the MySQL proxy service is starting.
    the MySQL proxy service could not be started.
    the service did not report an error.’

    plsssss help :(

    • jocke

      It looks like this is on Windows?

      If so, I’m afraid I’m not much of a help (-:

  10. jocke

    On a general note; I’ve changed the setup a bit, compared to my original walkthrough. I hit some memory-limits when using the NDB-engine. This caused MySQL to fail inserting new rows (stating that the table was full). There are some variables that you can set, to increase the memory-consumption for the ndb-process (which was what caused the issues). Since I had limited amount of memory available on the mysql-mgmt virtual machine (and lots on db0/1), I decided to run ndb_mgmd on db0 + db1. Apparently, you can do this, and it’s still redundant.

    I’ve updated the post to reflect these changes.

  11. Benjamin

    First, thanks for the how-to. the msql-cluster is working great. but the mysql-proxy isn’t working fine. in the log is an error like

    (debug) chassis.c:979: trying to set new RLIMIT_NOFILE = 8192 (hard: 1024)

    (critical) chassis.c:981: could not raise RLIMIT_NOFILE to 8192, Invalid argument (22). Current limit still 1024.

    an i can not connect to the server from outside.

    thanks

    benjamin

    (sorry for my english)

    • jocke

      Try running

      ulimit -n 8192

      as root. Does this make the issue go away?

  12. Benjamin

    thanks, no errors in the log :) . but the connection problem still persist.

    • jocke

      No rules in iptables that blocks it, or something similar? And you’ve also made sure that the user you’re trying to connect with, is allowed from that IP/host within MySQL?

  13. mvaxdx

    Jocke, firstly i would like to thank you very much for your great work.
    and i have a question: is it safe to run the “ndb_mgmd” within the “ndbd” at same box?
    Actually i kept my old configuration with 2 db nodes and 1 mgm node (not running the mysqld(API). would you suggest to take one of the db nodes and run another ndb_mgmd (with the ndbd)?

    • jocke

      Hi,

      Good to see that people could use this guide for something useful (-:

      Regarding your question; it _should_ work. I’ve been running it for some months now without any issues. The machines (db0 + db1) has been restarted once or twice (one at a time, of course) due to upgrading other software running on them. This was all done while the MySQL-cluster was in production. However, this being said, I found no official “yes, this should work”-answers in the MySQL-cluster documentation.

      If you run ndb_mgmd on the two machines that also run ndbd, the output from “show” in ndb_mgm is a bit weird. After reading some emails on a MySQL-mailinglist somewhere, I got confirmation that this output was normal, and that it should work fine.

      I can’t see any reasons for why this should fail. I mean, if you run only 1 ndb_mgmd-node, what happens if that one crashes/becomes unavailable? It would be logic that one can run ndb_mgmd in a redundant way.

  14. Hi and thanks for your good tutorial!

    You should add 2 lines to your my.cnf:
    [mysql_cluster]
    ndb-connectstring=host.or.ip.of.management.node[,second.host]

    Else, if you run your mgmt node somewhere else, ndbd will complain that it can’t connect to 127.0.0.1 (it obviously doesn’t read the connectstring from the mysqld section).

And it wouldn’t be harmful to add LSB headers to your first init script, too.

    Best regards!
    morphium

    • jocke jocke

      Hey, morphium!

      Thanks for the feedback.

      Indeed, you’re correct regarding the ndb-connectstring. I’ve updated the post accordingly.

      I added the LSB-headers, along with a few changes (mostly cosmetic) on all three of the init scripts.

  15. Kody Kody

    Does Mysql Server Have to be installed on db0 and db1 first???

    I am getting the following error when trying to run the command:

ndb_mgmd -f /var/lib/mysql-cluster/config.ini --initial --config-dir=/var/lib/mysql-cluster/

    ERROR:
    The program ‘ndb_mgmd’ is currently not installed. You can install it by typing:
    apt-get install mysql-server-5.0

    Thanks so much!

    • Kody Kody

      I got it working… Although, I can’t seem to figure out how to install the mysqli extension without breaking the setup. Any advice?

      • jocke jocke

Well, when you install mysql-packages after following this guide, it probably overwrites my.cnf, or adds a new my.cnf, which again probably has some conflicting information. Even binaries could end up duplicated (and since cluster-binaries differ from normal binaries, this can cause issues). I suggest you look into this, and remove all my.cnf-files except the one you set up in the first place. In addition, I’d suggest you manually remove duplicate binaries (make sure you remove the non-cluster ones). As an alternative to removing the binaries, you could probably try using full paths, pointing to the correct binaries.

        Other than this, I don’t really know why it doesn’t work. Haven’t read anything that suggests that MySQLi won’t support cluster.

  16. dave dave

nice tutorial, however is it not correct that if you bind a single IP address to two machines, you can then quite easily run mysql-proxy without the single point of failure, as one host will always respond to that IP? Much like NLB in Windows, where a single IP is assigned to both servers; the NLB manager works as a proxy to load-balance and redirect to the correct server, and also monitors both servers for liveness.

    this is what I am going to try and achieve, what’s your thought?

    thanks

    dave

    • jocke jocke

      Yes, if you set up mysql-proxy on two different machines, sharing a virtual IP, you don’t have a single point of failure anymore.

      I’ve never dealt much with NLB, but there are equivalents for Linux/BSD; CARP or Keepalived. Both utilize the VRRP protocol.
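As a sketch of the Keepalived approach (untested here; the interface, router-id, and virtual IP are hypothetical), each of the two proxy machines would carry a vrrp_instance along these lines:

```text
vrrp_instance mysql_proxy_vip {
    state MASTER              # set to BACKUP on the second proxy machine
    interface eth0            # interface holding the machine's real IP
    virtual_router_id 51      # must match on both machines
    priority 100              # use a lower value on the BACKUP machine
    virtual_ipaddress {
        192.168.0.50/24       # the shared IP that MySQL clients connect to
    }
}
```

Clients then point at the shared IP; when the MASTER dies, the BACKUP claims the address via VRRP.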

  17. Hi!

    when i edit:

    [mysqld]
    user = mysql
    pid-file = /var/run/mysqld/mysqld.pid
    socket = /var/run/mysqld/mysqld.sock
    port = 3306
    basedir = /mysqlc/my_cluster/
    datadir = /var/lib/mysql
    tmpdir = /tmp
    ###########################################
    default-storage-engine=ndbcluster
    ndbcluster
    ndb-connectstring=localhost,172.19.7.10

    the mysql service don’t start! do you know something about this? =[

    if i comment:

    default-storage-engine=ndbcluster
    ndbcluster
    ndb-connectstring=localhost,172.19.7.10

    the mysql service start -_-

    • jocke jocke

What do the logs say? You should be able to pinpoint the issue from those. Or post them here, and I’ll have a look at it.

A few other things that come to mind;
      – Are you sure you’re running the cluster-version of MySQL?
      – Does ‘localhost’ resolve? (do ‘nslookup localhost’ or something).

      • :X

        i’m using ubuntu 12.04 with “mysql Ver 14.14 Distrib 5.5.24, for debian-linux-gnu (x86_64) using readline 6.2”

how can i see the correct version?

        i guess that we found the problem :P you use ubuntu 8.04… =[

      • jocke jocke

        It has nothing to do with the Ubuntu-version.

        Your primary issue, I believe, is that you’re not running the Cluster-version of MySQL. I’m not sure if there exists a MySQL Cluster package in APT/Ubuntu.

        Download the latest MySQL Cluster here; http://dev.mysql.com/downloads/cluster/

  18. Nguyen Trong Tri Nguyen Trong Tri

Thank jocke. Mysql cluster worked, but mysql proxy does not work.
    I have 2 sql node
    sqlnode1: 192.168.2.53
    sqlnode2: 192.168.2.54
    I install mysql-proxy on manager node. (manager node: 192.168.2.52)
    I config file mysql-proxy.conf

    [mysql-proxy]
    daemon = true
    keepalive = true
    proxy-address = 192.168.2.52:3306
    # db0
    proxy-backend-addresses = 192.168.2.53:3306
    # db1
    proxy-backend-addresses = 192.168.2.54:3306

When i shutdown sqlnode2 (sqlnode1 running), i can connect to the proxy-address.
But when i shutdown sqlnode1 (sqlnode2 running), i can’t connect to the proxy-address. Error:
    Error: EMyError:
    #07000(proxy) all backends are down

    Please help me, thank.
    Sorry for English not good.

    • Nguyen Trong Tri Nguyen Trong Tri

      And how to shutdown or restart mysql-proxy ?

      • jocke jocke

        mysql-proxy sure comes with a manual, yes? The init-script I provided in this blogpost can also be used (depending on what OS you’re using, of course).

    • jocke jocke

      This seems strange. If you shut down sqlnode1 (when sqlnode2 is running), can you connect just fine to sqlnode2 directly? (that is, not via mysql-proxy). If you can’t, then you need to figure out that first, before troubleshooting mysql-proxy. However, if you can connect to sqlnode2 directly just fine, I’m not really sure why it’s not working via the mysql-proxy. Your config seems to be correct (assuming the information you provided is correct, of course).
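For what it’s worth, a later comment in this thread reports that some mysql-proxy versions only honor the last proxy-backend-addresses line, so the config quoted in the question may need both backends combined on one comma-separated line (no spaces after the comma):

```ini
[mysql-proxy]
daemon = true
keepalive = true
proxy-address = 192.168.2.52:3306
# db0 and db1 on one line, comma-separated, no spaces
proxy-backend-addresses = 192.168.2.53:3306,192.168.2.54:3306
```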

  19. Noneee Noneee

    please explain more detail of mysql-proxy, i don’t understand that, please explain its steps.

  20. David David

    Hello,
Thanks for your tutorial. It worked first time. Well, until I rebooted. I’ve installed it on 2 Centos VMs, and if I start everything at startup, it’s a bit hit and miss. But if I start ndb_mgmd at boot, then log in, start ndbd, wait, and then start mysql, it all works.
I’m guessing, maybe incorrectly, that everything is starting all at once when it needs to start in order. I did change the order of startup, but I don’t know how to start ndbd while making sure that the ndb_mgmd script has run first.
Any ideas?

    Thanks,

    David

    • jocke jocke

      Hi David,

      I can’t say for sure if I ever ran across the issue you have. I don’t have a working setup of a MySQL-cluster anymore, so I cannot test what works/what doesn’t.

      However, if you want to delay ndbd from starting, until ndb_mgmd has started, you could probably replace some parts in the init-script of ndbd.

      Replace (in start_ndbd-function):

      		$ndbd_bin
      

      with:

      while true; do
      	# check if ndb_mgmd is running
      	number_of_ndb_mgmd_pids=`ps aux|grep -iv "grep"|grep -i "$ndb_mgmd"|wc -l`
      
      	if [ $number_of_ndb_mgmd_pids -eq 0 ]; then
      		# ndb_mgmd not running -- wait
      		echo "ndb_mgmd is not running. Waiting..."
      		sleep 5
      	else
      		# ndb_mgmd running -- lets start
      		$ndbd_bin
      		break
      	fi
      done
      

      Keep in mind that the above code is untested, so you might have to tweak it a bit.

      • David David

        Thanks for replying, I’ve managed to sort it out.
        Someone pointed me in the direction of Arbitration. It would appear that because there are only 2 nodes, it all gets a bit confused if you shut one down.

        If you add the line:
        Arbitration=disabled
        in the [NDBD DEFAULT] section of config.ini, when you shut one node down, the other node can take over as Master. This works fine in the system I’m building.
        The downside with this option is that a ‘split brain’ can occur. That may be a nightmare for some systems.
Thanks again for your help.
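In config.ini, David’s tweak would sit roughly like this (a sketch only; NoOfReplicas=2 is assumed for a two-node setup, and his split-brain warning applies):

```ini
[NDBD DEFAULT]
NoOfReplicas = 2
# Let the surviving data node take over when the other goes down.
# Caveat (per David): without an arbitrator, a network partition
# between the two nodes can lead to split brain.
Arbitration = disabled
```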

      • jocke jocke

        Hey again,

        Good to hear that you sorted it out, and thanks for letting us know what you did to resolve it! (-:

However, I cannot remember that I had to do any tweaking to make it work redundantly with only two nodes. I could shut one of them off, and everything worked nicely. Whenever I turned it back on, it would replicate whatever changes were made to the other node while it was offline.

  21. Stefan Stefan

    Woah! I’m really digging the template/theme of this website. It’s simple, yet
    effective. A lot of times it’s difficult to get that “perfect balance” between superb usability and appearance. I must say that you’ve done a superb job
    with this. In addition, the blog loads very fast for me on Internet explorer.
    Superb Blog!

    • jocke jocke

      Hey,

      Thanks for the positive feedback. Indeed, I’m somewhat a minimalist, so the theme felt appealing — even for me (-:

  22. Dave Weddell Dave Weddell

Thanks a lot for this excellent description of mysql-cluster. I don’t think I’d have got my test system up and running without it. Certainly not in the timescale I had available.
    I’m still having a few teething problems with mysql-proxy, but the rest of it is working like a dream.

    Thanks again :)

    • jocke jocke

      Good to hear that it’s of use to people (-:

      What issues are you experiencing with mysql-proxy? I might be able to help you out…

      • Dave Weddell Dave Weddell

        Hi Jocke,

It appears with the current version that the mysql-proxy.conf file won’t accept multiple proxy-backend-addresses entries. It seems to just use the last it reads in the file. I have solved the problem, as it turns out that you put all the proxy-backends on the same line, comma-separated.

In addition it doesn’t even cope with spaces after the commas on that line, so you need a line like:

        proxy-backend-addresses = db0.internal:3306,db1.internal:3306

        to make it work.

        Anyway that’s put out there for anyone in a similar boat. And thanks again for a great description :)

      • jocke jocke

        Thanks for the heads-up. I’ve updated the post to reflect this change.

  23. Ed Ed

    I’m kinda stuck.
    When use :
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --initial --config-dir=/var/lib/mysql-cluster/

    It says: No such file or directory

    I’ve used a fresh install of debian6 with only ssh and basic installation.
    Downloaded: mysql-cluster-gpl-7.2.9-linux2.6-x86_64.tar.gz

    I see in /usr/local/mysql/bin ndb_mgmd. But for some reason its not working for me.

    At first i had a problem with libaio1 but i solved it with:
    apt-get install libaio1 libaio-dev

    Regards,

    Ed

    • jocke jocke

      Are you sure of the following two things?

      1. You’re in the /usr/local/mysql/bin-folder
      2. You have chmod’ed /usr/local/mysql/bin/ndb*
      • Ed Ed

1) I’m sure, as i could see the ndb_mgm and ndb_mgmd.
2) i’ve chmod’ed: chmod +x bin/ndb_mgm*

        I’ll try chmod +x bin/ndb* and see if that makes a difference

      • jocke jocke

        Are you sure that the folders/files you use in the parameters for ndb_mgmd, exists? (/var/lib/mysql-cluster/config.ini and /var/lib/mysql-cluster/)
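A quick shell sketch of jocke’s two checks (the paths are the ones used in this guide; the helper name is made up):

```shell
# check_path: print OK/MISSING depending on whether the given test
# flag (-x executable, -f regular file, -d directory) holds for the path.
check_path() {
    if test "$1" "$2"; then
        echo "OK: $2"
    else
        echo "MISSING: $2"
    fi
}

check_path -x /usr/local/mysql/bin/ndb_mgmd     # binary present and executable?
check_path -f /var/lib/mysql-cluster/config.ini  # config file exists?
check_path -d /var/lib/mysql-cluster             # config dir exists?
```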

  24. Hello,
    I managed to sort out my start-up problems. Now when rebooting everything starts as it should. I’m using Centos 6.3:
1. Change the start-up order so that ndbd starts after ndb_mgmd.
    2. Add this line in config.ini to every section that has the nodeid
    DataDir= /var/lib/mysql-cluster
This sets the data directory. Otherwise the directory is set to wherever you start the programs from. I had log files and file systems all over the place.
    3. In my.cnf add these lines:
    ndb-wait-connected=600
    ndb-wait-setup=600
It will make MySQL wait until the cluster is up and running before starting. Wait time is in seconds. It doesn’t need to be that high, but it works fine for me.
    May help some…….

    David
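Put together, David’s points 2 and 3 would look roughly like this in the respective files (the values are his, not verified defaults):

```ini
# config.ini -- add to every section that has a NodeId
DataDir = /var/lib/mysql-cluster

# my.cnf -- under [mysqld]: how long (in seconds) mysqld waits for the
# cluster before it starts serving
ndb-wait-connected = 600
ndb-wait-setup = 600
```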

    • jocke jocke

      Hello,

1. I made a “fix” for this a few comments above.

      2. The config-examples I used in my posts, all have DataDir or BaseDir.

      3. Seems like newer versions have these set at a default of 30 seconds. Was that too low in your case? 600 seems like a really high value for these variables.

  25. tdldp tdldp

    Hello,

    Following tutorial, i’ve tried to setup the 2 ndb_mgmd and ndbd on 2 debian 6 boxes.
    Things go pretty well (after checking that libaio1 package is installed, else errors), until i come to the ndb_mgmd and ndbd initial start.
    For information i used following configuration :
    db0 :
    host : infra.db1.domain.ext
    mysql datadir : /srv/dat_infr_db_001/mysql_cluster/mysqld_data (instead of /opt/oracle/disk/mysql_cluster/mysqld_data)
    mysql-cluster datadir : /srv/dat_infr_db_001/mysql_cluster/mysql-cluster (instead of /var/lib/mysql-cluster)

    db1 : infra.db2.domain.ext
    mysql datadir : /srv/dat_infr_db_002/mysql_cluster/mysqld_data (instead of /opt/oracle/disk/mysql_cluster/mysqld_data)
    mysql-cluster datadir : /srv/dat_infr_db_002/mysql_cluster/mysql-cluster (instead of /var/lib/mysql-cluster)

The personalised directories are due to NAS drive allocation on the server, which runs with minimal space on the system disks (3 GB, not enough for holding database data), whereas /srv/disks can be allocated virtually unlimited space.
    I get the following messages :

    for ndb_mgmd :
    MySQL Cluster Management Server mysql-5.5.29 ndb-7.2.10
    2013-01-09 12:12:23 [MgmtSrvr] WARNING — at line 32: Cluster configuration warning:
    arbitrator with id 1 and db node with id 3 on same host infra.db01.domain.ext
    arbitrator with id 2 and db node with id 4 on same host infra.db02.domain.ext
    Running arbitrator on the same host as a database node may
    cause complete cluster shutdown in case of host failure.
This is, i think, a warning that can be safely ignored.

    for ndbd :
    On db0 :
    [ndbd] INFO — Angel connected to ‘infra.db01.domain.ext:1186’
    [ndbd] INFO — Angel allocated nodeid: 3

    on db1 :
    I first had :
    [ndbd] INFO — Angel connected to ‘infra.db01.domain.ext:1186’
    [ndbd] INFO — Angel allocated nodeid: 4
    [ndbd] WARNING — Cannot change directory to ‘/srv/dat_infr_db_001/mysql_cluster/mysql-cluster’, error: 2
    [ndbd] ERROR — Couldn’t start as daemon, error: ‘Failed to open logfile ‘/srv/dat_infr_db_001/mysql_cluster/mysql-cluster/ndb_4_out.log’ for write, errno: 2′

    What was weird : connected to db0 host instead of db1 host …
    After changing my.cnf on db1 to :
    ndb-connectstring=infra.db02.domain.ext,infra.db01.domain.ext
    and on db0 to :
    ndb-connectstring=infra.db01.domain.ext,infra.db02.domain.ext

    I now get :
    [ndbd] INFO — Angel connected to ‘infra.db02.domain.ext:1186’
    [ndbd] INFO — Angel allocated nodeid: 4
    [ndbd] WARNING — Cannot change directory to ‘/srv/dat_infr_db_001/mysql_cluster/mysql-cluster’, error: 2
    [ndbd] ERROR — Couldn’t start as daemon, error: ‘Failed to open logfile ‘/srv/dat_infr_db_001/mysql_cluster/mysql-cluster/ndb_4_out.log’ for write, errno: 2′

    Weirdly : Fatal error points on db1 on the wrong directory /srv/dat_infr_db_001 instead of db_002 that is configured locally on this server…

    Any idea on how to solve this…
    Thnks by advance for any help

    Tdldp
    Ps : I’d recommend for normal debian installation to add to your tutorial the libaio1 installation command (apt-get install libaio1)

    • tdldp tdldp

      UPDATE :
      after googling, and reading comments here, i add DataDir=/srv/dat_infr_db_001/mysql_cluster/mysql-cluster and
      DataDir=/srv/dat_infr_db_002/mysql_cluster/mysql-cluster
      to NDB_MGMD entries of config.ini file.
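What tdldp describes would look something like this in config.ini (hostnames, paths and node ids are taken from his comments; a sketch, not a verified config):

```ini
[NDB_MGMD]
NodeId = 1
HostName = infra.db01.domain.ext
DataDir = /srv/dat_infr_db_001/mysql_cluster/mysql-cluster

[NDB_MGMD]
NodeId = 2
HostName = infra.db02.domain.ext
DataDir = /srv/dat_infr_db_002/mysql_cluster/mysql-cluster
```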

      With this corrected, apparently i do not have any more errors on ndb_mgmd and ndbd starts (–initial)
      Yet, i think something is still not working :

      — NDB Cluster — Management Client —
      ndb_mgm> show
      Connected to Management Server at: infra.db01.emsvo.fr:1186
      Cluster Configuration
      ———————
      [ndbd(NDB)] 2 node(s)
      id=3 (not connected, accepting connect from infra.db01.domain.ext)
      id=4 (not connected, accepting connect from infra.db02.domain.ext)

      [ndb_mgmd(MGM)] 2 node(s)
      id=1 @XX.XXX.X.XX (mysql-5.5.29 ndb-7.2.10)
      id=2 (not connected, accepting connect from infra.db02.domain.ext)

      [mysqld(API)] 2 node(s)
      id=5 (not connected, accepting connect from any host)
      id=6 (not connected, accepting connect from any host)

      when continuing tutorial with mysql.server start, servers do start (Your MySQL connection id is 5 and Your MySQL connection id is 6)
      but when trying to change to ndbcluster engine, i get this :
      mysql> ALTER TABLE mysql.user ENGINE=NDBCLUSTER;
      ERROR 157 (HY000): Could not connect to storage engine

      Wassup ? ;)

      Tdldp

      • jocke jocke

        Hello,

        Good to see that you resolved the path-issue. I guess you encountered that issue because you had different paths for db0 and db1 (whereas I had the same path on both, so I could use the same configuration).

        Regarding your second issue, it seems like you have some “connection” issues. Either because of firewalls (as Puneet pointed out), or because of other network issues.

        When you do the “show” command in ndb_mgm, you see that you’re missing one ndb_mgmd-node, and both mysqld-nodes. This needs to be resolved, before you can move on to the altering of the ENGINE type.

      • tdldp tdldp

        Hello all … Little update here …
        Mysql cluster is up and running smoothly … at last…
Nothing to do with path or firewall problems, but you need at least 6 GB of RAM for your basic cluster setup config (i tried your config with only 4 GB of RAM, and mysql was crashing). Fixed that, rebooted the server, and no more problems…

        Oh and i also added :
        DataDir=/srv/dat_infr_db_001/mysql_cluster/mysql-cluster
        DataDir=/srv/dat_infr_db_002/mysql_cluster/mysql-cluster
        to NDBD configuration file.

Looking now at the mysql proxy thing, setting this up on an nginx front server, but i was wondering one thing : the init scripts, on which server do you set them up ? You don’t state this, and the 3 init scripts are not all on the same servers (mgm / ndbd on db0 and db1, mysql-proxy on the 3rd machine)

      • jocke jocke

        Hi,

        4GB of RAM worked fine in my setup. Maybe newer versions have other requirements. In any case, thanks for sharing what the solution to your problem was, and that it now seems to be working fine.

        Those dirs you entered in the NDBD-config, was probably needed since you had different folders on the different nodes. If they had been the same on both, you wouldn’t need it.

        Regarding the init-scripts. They are set up on the respective nodes running the respective software. If node0 runs ndb and ndb_mgmd, you should have both those init-scripts there. If node1 runs ndb only, you only need the init-script of ndb. If you run mysql-proxy on node2, you only need the init-script of mysql-proxy there.

  26. Puneet Puneet

    Hey buddy!!

Thanx for the nice walk through, u really saved me some effort.
But i always come across a peculiar problem. Once I’ve made the management server up and running and started both individual nodes, the mysql server starts fine on both nodes. But once i change the table structure as told by you on one of the nodes, shut down both nodes, and copy the data from one to the other, then when i try starting the mysql server on the second system (to which i copied the data) it always shows the error “Starting MySQL………………………….. ERROR! The server quit without updating PID file (/opt/oracle/disk/mysql_cluster/mysqld_data/localhost.localdomain.pid).”

    the processes running are as under

root 1355 1 0 04:07 ? 00:00:00 /usr/local/mysql/bin/ndbd --initial
root 1356 1355 1 04:07 ? 00:00:09 /usr/local/mysql/bin/ndbd --initial
    root 2124 1250 0 04:21 pts/0 00:00:00 grep mysql

    and the “ndb_3.pid” file under /var/lib/mysql-cluster/ shows the pid 1355

    Please help me with this
    thanks in advance…….

    • jocke jocke

      Did you try to just delete the PID-file? That is, delete the PID-file, and then try to start mysqld again. Let me know if that works or not.

      • Puneet Puneet

even if i deleted the pid file it didn’t help. Then i thought of setting the system up right from the start, deleted all the setup and set it up from the beginning. It worked !!! but I’m still wondering why this issue happened?

      • jocke jocke

        Good to hear that it works. I’m not really sure why you got that error. I might recall that I had a similar issue, but as far as I can remember, I solved it without having to redo the whole setup.

      • Puneet Puneet

exactly how did u sort that issue out without redoing the entire setup? do u remember???

      • jocke jocke

        I’m sorry, but I don’t really recall it. I’m not even sure I had the same issue(s) you had. If I should guess, I probably just tried stuff, restarting here and there, wiggling the setup until it worked. (-:

      • Puneet Puneet

        Hmmm thats ok !!!! And ever since i did the setup again, there are no problems at all. So lets forget about it, untill the issue creeps in again :) moving to production environment now !!!!!

  27. Puneet Puneet

    @Tdldp
    Hey guess this is possibly because of your firewall, in case u are using CentOS try doing this
    /etc/init.d/iptables stop
    I guess this will solve your problem :)

    • tdldp tdldp

nothing to do with firewall, ports were correctly opened, this was due to mysql cluster crashing for lack of memory … Once corrected, no more problems…

      • Puneet Puneet

        glad to hear that u got it sorted out :)

  28. tdldp tdldp

    Little wonders, following the how to gives me several questions :
    – regarding security :
Nodes db0 and db1 do not have a mysql root password, how did you add one? (method, more than syntax)

    – Installing a phpmyadmin :
    Is it possible ? where should it be installed from (i’d say db2 where mysql-proxy is staying)

    – mysql-proxy : gives me this message :
    2013-01-21 16:43:48: (message) mysql-proxy 0.8.0 started
    2013-01-21 16:43:48: (debug) chassis-limits.c:75: current RLIMIT_NOFILE = 1024 (hard: 1024)
    2013-01-21 16:43:48: (debug) chassis-limits.c:79: trying to set new RLIMIT_NOFILE = 8192 (hard: 1024)
    2013-01-21 16:43:48: (critical) chassis-limits.c:81: could not raise RLIMIT_NOFILE to 8192, Invalid argument (22). Current limit still 1024.

yet it seems it’s created 49 threads correctly, how do i test correct functioning ?

    Thanks for your return…

    • tdldp tdldp

oh and i’d add, for security concerns and phpmyadmin access, mustn’t i add both hosts to each mysql node’s user access?

      • tdldp tdldp

@myself : security concerns are no longer an issue: Root access is now successfully password-protected and access-restricted.
Phpmyadmin is set up and running, whether it accesses through mysql-proxy or directly to the nodes (with the security that comes with that). It is possible to set up phpmyadmin, but under debian you must ignore the pma autoconfig. It doesn’t work (not planned for cluster usage)

        @all : still seeing these mysql-proxy error messages.
        I have a further problem : Yesterday proxy was running ok on both sql nodes, (starting/stopping) them, and today, node db1 is not answering though it is up and running… Will investigate and run tests

      • tdldp tdldp

@all : mysql-proxy errors still occurring, but :
db1 node problem solved : Mysql-cluster locked this node on a big query of 200 000 data lines. Had to do a mysqladmin flush-hosts, and added for safety :
max_connect_errors=10000 in my.cnf inside the [mysqld] section on both data nodes.
It is apparently a common problem; maybe you should add this to your conf so newbs don’t slam their heads against the wall…

    • jocke jocke

      Could you please post your mysql-proxy config? What arguments do you start mysql-proxy with?

  29. Craig Craig

    How many computers do i need to set this up ?

    • tdldp tdldp

@craig : Personally i’ve set this up on 3 servers, all virtualised.
      I wouldn’t try with less (mainly for HA/Failover)
      I’m facing a little problem with mysql-proxy that can’t see db1 node (but yesterday evening everything was ok and it could access all nodes)

      • how did you manage to set this up with 2 nodes, i still get

        MySQL Cluster Management Server mysql-5.5.29 ndb-7.2.10
        2013-02-02 02:49:42 [MgmtSrvr] WARNING — at line 31: Cluster configuration warning:
        arbitrator with id 1 and db node with id 3 on same host 10.0.100.141
        arbitrator with id 2 and db node with id 4 on same host 10.0.100.142
        Running arbitrator on the same host as a database node may
        cause complete cluster shutdown in case of host failure.

  30. Hello,

    should it require a symlink to mysql, so it is recognized ?
    > ln -s /usr/local/mysql/bin/mysql /usr/bin/mysql

    Rch

    • jocke jocke

      This makes sense if MySQL cluster is the only MySQL version you have.

      In my case, I had a non-clustered MySQL version installed via apt. This is why I use full paths in my init-scripts, so that I was sure the correct versions where loaded.

  31. Thank you very much for this great article.

    A few remarks:

    1) I think these tables have to be as well put on NDB engine (otherwise there is an inconsistency error in the log: /opt/oracle/disk/mysql_cluster/mysql_data/%server_name%.err)

    mysql> ALTER TABLE mysql.procs_priv ENGINE=NDBCLUSTER;
    mysql> ALTER TABLE mysql.proxies_priv ENGINE=NDBCLUSTER;

2) why did you copy the files from db0 to db1? I didn’t do it, and it didn’t even work when I tried (you need to re-set the permissions on the copied folders)

    3) probably would make sense to create symlink
    > ln -s /usr/local/mysql/bin/ndb_mgm* -t /usr/bin/

    Rch

    • jocke jocke

      You’re welcome.

1) If you got errors, then you either did something wrong, or there must have been a change since I originally wrote this article. Back then, it ran fine with the alterations I did. Nevertheless; I’ve updated the article to reflect that one might have to alter those two tables as well. Thanks for the feedback.

2) Because it wouldn’t work properly if I didn’t do this. The reason for this, was that the tables on both (or all, if more than two) nodes need to have the correct ENGINE set. If you change the ENGINE on both nodes manually, it might work, but since I wanted to be 100% sure that both sides of the database were 100% in sync, I made all changes on one side, and copied them over, so that it would be a 100% identical setup on the other node. Not sure if they’ve changed this, since you’re getting errors, but it worked perfectly fine in my case.

      3) Again, this would only make sense if MySQL cluster is the only MySQL version you’re running.

      • – oh indeed, makes sense for 1) & 3)
– concerning 2) => well, I followed the instructions, and when I changed the ENGINE types on the first node it was automatically replicated; on the 2nd node, a “SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = ‘mysql’;” correctly showed the changed ENGINEs. (i use the latest NDB 7.2.x – maybe it differed prior?) (so, i didn’t copy the files over the servers, and the replication works fine, as your test shows)

        Rch

      • jocke jocke

        So you’re saying that it actually replicates _without_ having to set ENGINE to ‘ndbcluster’ on the other node? That is; it’s enough to change it on one of the nodes? That doesn’t make much sense? I thought the point of changing the ENGINE, was that you could actually host non-replicated databases if you wanted to (by not changing ENGINE to ‘ndbcluster’).

        But maybe I’m wrong?

  32. tdldp tdldp

    Mysql Cluster has been running for a few months correctly…
    But Provider has had a power / Disk outage, that has broken mysql cluster…

    Since then, each time i do a /etc/init.d/mysql.server start, i get this on all mysql cluster nodes:

    130529 14:09:04 mysqld_safe Starting mysqld daemon with databases from /dat_db_001/mysql_cluster/mysqld_data
    130529 14:09:04 [Note] Plugin ‘FEDERATED’ is disabled.
    130529 14:09:04 InnoDB: The InnoDB memory heap is disabled
    130529 14:09:04 InnoDB: Mutexes and rw_locks use GCC atomic builtins
    130529 14:09:04 InnoDB: Compressed tables use zlib 1.2.3
    130529 14:09:04 InnoDB: Using Linux native AIO
    130529 14:09:04 InnoDB: Initializing buffer pool, size = 128.0M
    130529 14:09:04 InnoDB: Completed initialization of buffer pool
    130529 14:09:04 InnoDB: highest supported file format is Barracuda.
    130529 14:09:04 InnoDB: Waiting for the background threads to start
    130529 14:09:05 InnoDB: 1.1.8 started; log sequence number 1595675
    130529 14:09:05 [Note] NDB: NodeID is 5, management server ‘infra.db01.domain.ext:1186’
    130529 14:09:06 [Note] NDB[0]: NodeID: 5, all storage nodes connected
    130529 14:09:06 [Warning] NDB: server id set to zero – changes logged to bin log with server id zero will be logged with another server id by slave mysqlds
    130529 14:09:06 [Note] Starting Cluster Binlog Thread
    130529 14:09:06 [Note] Server hostname (bind-address): ‘0.0.0.0’; port: 3306
    130529 14:09:06 [Note] – ‘0.0.0.0’ resolves to ‘0.0.0.0’;
    130529 14:09:06 [Note] Server socket created on IP: ‘0.0.0.0’.
    130529 14:09:06 [ERROR] /usr/local/mysql/bin/mysqld: Incorrect information in file: ‘./mysql/host.frm’
    130529 14:09:06 [ERROR] Fatal error: Can’t open and lock privilege tables: Table ‘mysql.host’ doesn’t exist
    130529 14:09:06 mysqld_safe mysqld from pid file /dat_db_001/mysql_cluster/mysqld_data/infra-001.pid ended

    Any ideas on how to correct this…
Many google posts point to corrupted data in the mysql table… I had it on the ndbcluster engine…

    • jocke jocke

      Hi,

I’ve never had the issue you describe, so I don’t really know off-hand what’s causing this.

      1) What is the content of the ‘./mysql/host.frm’ file?

      2) Have you moved/changed any paths after the power/disk outage?

      3) It might be corruption, as you mention. One solution might be to try the ‘mysql_install_db’ command again (as used in my guide above). More specifically;

      mysql_install_db --user=mysql --no-defaults --datadir=/path/to/your/mysql/data/dir

      Please keep in mind to take backups first — the above command should initialize the missing tables, but I’m not sure what else it does, so you might mess up the database even more.

  33. skraw skraw

    Is there a way to tell the cluster to automatically use the ndb engine on all old and newly created databases?
    IOW how to make the ndbengine completely transparent to all the databases?

    • jocke jocke

Not that I’m aware of; I believe some of the MySQL-databases aren’t meant to be replicated (i.e. they are of local relevance to each of the MySQL servers). In any case it’s pretty easy to just make a script that lists all databases, and then iterates through them to change the engine.
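Such a script is not part of the original guide; a rough sketch (the emit_alters helper name is made up, and the mysql paths/credentials are assumptions) could separate building the statements from actually executing them:

```shell
#!/bin/sh
# emit_alters: read "database table" pairs on stdin and print one
# ALTER TABLE ... ENGINE=NDBCLUSTER statement per pair.
emit_alters() {
    while read -r db tbl; do
        printf 'ALTER TABLE `%s`.`%s` ENGINE=NDBCLUSTER;\n' "$db" "$tbl"
    done
}

# In a real run, the pairs would come from information_schema, e.g.:
#   /usr/local/mysql/bin/mysql -N -B -e "SELECT TABLE_SCHEMA, TABLE_NAME \
#     FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN \
#     ('information_schema','performance_schema')" \
#   | emit_alters | /usr/local/mysql/bin/mysql -u root -p
```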

  34. Amar Amar

I shut down the cluster and then started it with the ‘--initial’ flag on both the management node and the data nodes, just to notice a little later that ‘--initial’ does a completely different thing on the data nodes than it does on the management node.

    After that that my tables disappeared.

Is it true that “it happens due to ndbd --initial on the nodes”?

    • jocke jocke

I’m not really sure I understand what you’re asking here. If the question is if ‘--initial’ should be part of the command when running ndbd and ndb_mgmd for the first time, the answer is “yes”. I do not believe this has changed since I wrote this article.

  35. Amar Amar

    My MySQL cluster was running fine, but I started getting “table is full” errors. So I increased the DataMemory size in config.ini and started the management node with ‘--initial’. Then I started ndbd with ‘--initial’ on both data nodes, and all my tables that were using the NDB engine disappeared.

    That’s why I asked whether ndbd ‘--initial’ does something different in the cluster.

    Could you please tell me how many concurrent users mysql-proxy can accept? I have traffic of 30k queries per second. Can I use mysql-proxy?

    • jocke jocke

      You only use ‘--initial’ the first time you set up the ndb-cluster. If you change settings in the config.ini-file (or anywhere else), you are not supposed to start the ndb-cluster with ‘--initial’ again. My article above highlights the “first time” part, since it should only be used the first time (-:

      Regarding mysql-proxy, I never did any benchmarks. Try it out in a non-production environment, and see how it works out. I would suppose the specs of the server running mysql-proxy would be of relevance.

  36. Jesus Jesus

    “Before you install mysql-cluster you need to have installed mysql-server?”

    • jocke jocke

      Not sure what you mean, but mysql-cluster is a different distribution than the normal mysql-server, so you don’t have to install mysql-server to get mysql-cluster working (in essence, mysql-cluster ships many of the same binaries as the normal mysql-server).

  37. Marcio Oliveira Marcio Oliveira

    Hello, I’m having difficulty getting the management node to start. Running ‘ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/’ returns the error “bash: /usr/bin/ndb_mgmd: no such file or directory”, even though I copied the binaries and gave them the right permissions. What can it be?

    • jocke jocke

      Hi,

      “bash: /usr/bin/ndb_mgmd: no such file or directory” means basically what it says: there is no file/binary in that location. If you followed the guide to the letter, that file should be located here: /usr/local/mysql/bin/ndb_mgmd.

  38. Tim Tim

    Hi,

    One issue we’re seeing from time to time (even on 7.3) :

    We have a working cluster: 1 Management Node, 2 Data nodes and 2 SQL nodes, all on different servers.

    If something occurs with networking, e.g. the port on the switch that 1 data node is connected to gets temporarily disabled, the NDBD process on that data node will exit. There seems to be no elegant way to automatically let it reconnect (or re-launch in this case?). We should have something like:
    – no more network on data node, go into ‘disabled’ mode, but not quit NDBD!
    – check for network availability every x minutes
    – if OK, reconnect data node

    Does something like this exist? If not, perhaps Nagios can do such a thing?


  40. driggs driggs

    How do I start ndb_mgmd after modifying the config.ini (like adding another data node) without using ‘--initial’? The ‘--initial’ flag deletes all the table data that were created using the previous ndbd, right?

    • jocke jocke

      By using the init-scripts provided in my post above (/etc/init.d/ndb_mgmd). You could also start them manually; just leave out the ‘--initial’ part.
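      Concretely, a rolling restart after a config.ini change might look like the sketch below. This is not the author’s exact procedure: the data-node IDs (2 and 3) are assumptions (check yours with ‘ndb_mgm -e SHOW’), and ndb_mgmd’s --reload flag tells it to pick up the changed config.ini:

```shell
#!/bin/sh
# Sketch: rolling restart after editing config.ini (e.g. raising DataMemory),
# WITHOUT --initial, so the NDB table data survives. Paths follow the guide;
# the data-node IDs (2 and 3) are assumptions. Set DRYRUN=echo to only print.
rolling_restart() {
    RUN="${DRYRUN:-}"
    # 1) Restart ndb_mgmd with --reload so it reads the changed config.ini.
    $RUN ndb_mgmd -f /var/lib/mysql-cluster/config.ini \
        --configdir=/var/lib/mysql-cluster --reload
    # 2) Restart the data nodes one at a time; note: no --initial here.
    for NODE in 2 3; do
        $RUN ndb_mgm -e "$NODE RESTART"
    done
}
```

      Restarting the data nodes one at a time keeps the cluster available throughout, since the surviving node serves traffic while its peer restarts.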

  41. niko niko

    helped me so much! thanks a lot!!
