Saturday, September 7, 2013
Using a loadbalancer with MySQL
A loadbalancer is used in many MySQL setups. Loadbalancing can be done with a hardware loadbalancer or with software; software solutions tend to be a fair bit cheaper.
Loadbalancing is often implemented as an extra layer: the application connects to the loadbalancer via the network and the loadbalancer connects to the database via the network. The loadbalancing can also be implemented on the application machines themselves to get rid of the extra layer. This is better for latency (each layer adds latency), and because each application server has its own loadbalancer it can also remove bottlenecks. The drawbacks are that you lose a single point of control and that with multiple loadbalancers the number of health check queries will probably increase. The two-layer setup is often used with Connector/J and haproxy.
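As an illustration, a minimal haproxy fragment for the two-layer setup might look like the sketch below. It runs on each application server, the backend addresses are made up, and the mysql-check expects a passwordless haproxy_check user to exist on the MySQL servers:

    # haproxy.cfg fragment on an application server: the application
    # connects to 127.0.0.1:3306 and haproxy forwards to the databases.
    listen mysql
        bind 127.0.0.1:3306
        mode tcp
        balance leastconn
        option mysql-check user haproxy_check
        server db1 192.0.2.11:3306 check inter 5s
        server db2 192.0.2.12:3306 check inter 5s backup

The backup keyword keeps all traffic on db1 until it fails, which is what you want with a single master.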
There are simple and more intelligent loadbalancers. Some loadbalancers can split read and write traffic, for example. Of course read/write splitting only works reliably for single statements, as a transaction might start with reads and then start writing; this changes if the loadbalancer can detect read-only transactions (START TRANSACTION READ ONLY). Connector/J can send read-only transactions to a slave if they are properly marked as read-only in the Java code.
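A rough sketch of how that looks with Connector/J, assuming a recent Connector/J with the replication URL syntax; the hostnames, schema and credentials are made up:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ReadOnlyRouting {
        public static void main(String[] args) throws Exception {
            // Replication-aware connection: the first host is the master,
            // the remaining hosts are slaves (hostnames are hypothetical).
            Connection conn = DriverManager.getConnection(
                "jdbc:mysql:replication://master.example.com,slave1.example.com/appdb",
                "appuser", "apppassword");

            // While the connection is marked read-only, Connector/J can
            // route the statements to one of the slaves.
            conn.setReadOnly(true);
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT id, name FROM customers");
            while (rs.next()) {
                System.out.println(rs.getLong("id") + " " + rs.getString("name"));
            }

            // Back to read-write: statements go to the master again.
            conn.setReadOnly(false);
            stmt.executeUpdate("UPDATE customers SET name = 'renamed' WHERE id = 1");
            conn.close();
        }
    }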
The simple loadbalancers might only check whether MySQL still listens on the configured TCP port by connecting to it and then disconnecting (which can increase counters such as aborted connects). The more intelligent loadbalancers can log on to MySQL, query a table and check the result; the F5 BigIP loadbalancers are capable of doing this. But keep in mind that executing a query every x seconds can generate quite some load and might cause locking issues.
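Such a check table can be very simple; the lbcheck table below is a made-up example. The loadbalancer logs on, runs the SELECT and only marks the server up when the result is 'online':

    -- One row per database server; table name and columns are hypothetical.
    CREATE TABLE lbcheck (
        hostname VARCHAR(64) NOT NULL PRIMARY KEY,
        state ENUM('online', 'maintenance') NOT NULL DEFAULT 'online'
    ) ENGINE=InnoDB;
    INSERT INTO lbcheck VALUES ('db1', 'online'), ('db2', 'online');

    -- The query the loadbalancer runs every x seconds:
    SELECT state FROM lbcheck WHERE hostname = @@hostname;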
To mark a machine for maintenance (or as failed) there are some options:
- Reject traffic from the loadbalancer using a firewall (iptables); see the sketch after this list.
  - Rejecting all traffic might make it hard to back up or restore the machine or to connect with a remote client like MySQL Workbench, so only reject the loadbalancer's traffic.
  - Dropping traffic will also work, but a reject might cause the loadbalancer to respond faster.
- Change the contents of the table which the loadbalancer checks (a sketch for this also follows below).
  - But be careful with replication.
- Connect to the loadbalancer via an API (e.g. F5 iControl) and change the status.
  - Not every solution has an API.
- Manually change the status in the loadbalancer.
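For the firewall option, a sketch with iptables could look like this; 203.0.113.10 is a made-up loadbalancer address, and rejecting only that source keeps backups and remote clients working:

    # Take the server out of the pool: reject only the loadbalancer's
    # traffic; a TCP reset makes the loadbalancer fail the check quickly.
    iptables -I INPUT -s 203.0.113.10 -p tcp --dport 3306 \
        -j REJECT --reject-with tcp-reset
    # And put it back after maintenance:
    iptables -D INPUT -s 203.0.113.10 -p tcp --dport 3306 \
        -j REJECT --reject-with tcp-reset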
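For the check table option, the replication caveat means the change must not end up in the binary log, otherwise it replicates to the slaves and takes them out of the pool as well. With the hypothetical lbcheck table from above (this needs the SUPER privilege):

    -- Keep the change out of the binary log so it stays on this server.
    SET SESSION sql_log_bin = 0;
    UPDATE lbcheck SET state = 'maintenance' WHERE hostname = @@hostname;
    SET SESSION sql_log_bin = 1;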
There are some great tools to manage failover in a replication setup, like mysqlfailover from the MySQL Utilities and MHA. These utilities can generally run a pre-failover and a post-failover script when a failover happens; this can be used to change the contents of a check table or to make some TCP port available or unavailable.
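For example, mysqlfailover can be pointed at scripts that update the check table or talk to the loadbalancer API; the logins, hostnames and script paths below are all hypothetical:

    # --exec-before/--exec-after run around the failover itself.
    mysqlfailover --master=root:secret@db1:3306 \
        --discover-slaves-login=root:secret \
        --exec-before=/usr/local/bin/lb-drain.sh \
        --exec-after=/usr/local/bin/lb-update.sh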
I tend to favor the two-layer setup over the three-layer setup (with a separate loadbalancer), but in practice I use the three-layer setup more often. The integration between failover tools and loadbalancers could be a lot better.
And I don't like all the health checks: the database pool in the application is doing health checks, the loadbalancer is doing health checks, the failover utility is doing health checks and the monitoring is doing health checks. Some of those should be combined. Better health checks generate more load and can be run less frequently, while simple/fast health checks might not catch a failure.
Please add a comment about how you're using loadbalancers and MySQL and how you automate it.