Loadbalancing is often implemented as an extra layer: the application connects to the loadbalancer over the network and the loadbalancer connects to the database over the network. The loadbalancing can also be implemented on the application machines to get rid of the extra layer. This is better for latency (each layer adds latency) and, as each application server has its own loadbalancer, it can also remove bottlenecks. The drawbacks are that you lose a single point of control and that with multiple loadbalancers the number of health check queries will probably increase. The two-layer setup is often built with Connector/J or haproxy.
There are simple and more intelligent loadbalancers. Some loadbalancers can split read and write traffic, for example. Of course read/write splitting only works reliably for single statements, as a transaction might start with reads and then start writing; this changes if the loadbalancer can detect read-only transactions (START TRANSACTION READ ONLY). Connector/J can send read-only transactions to a slave if they are properly marked as read-only in the Java code.
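As a minimal sketch of what that looks like with Connector/J 5.1's ReplicationDriver: the first host in the URL is treated as the master and the rest as slaves, and marking the connection read-only lets the driver route the work to a slave. Host names, credentials and the counters table below are placeholders for the example.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Properties;

    import com.mysql.jdbc.ReplicationDriver;

    public class ReadWriteSplitExample {
        public static void main(String[] args) throws Exception {
            ReplicationDriver driver = new ReplicationDriver();

            Properties props = new Properties();
            props.put("autoReconnect", "true");
            // Load balance the read-only work over the slaves.
            props.put("roundRobinLoadBalance", "true");
            props.put("user", "app");
            props.put("password", "secret");

            // First host is the master, the rest are slaves.
            Connection conn = driver.connect(
                    "jdbc:mysql:replication://master,slave1,slave2/appdb", props);

            // Read/write work goes to the master.
            conn.setReadOnly(false);
            conn.setAutoCommit(false);
            Statement write = conn.createStatement();
            write.executeUpdate("UPDATE counters SET value = value + 1 WHERE id = 1");
            conn.commit();

            // Marking the connection read-only lets the driver pick a slave.
            conn.setReadOnly(true);
            Statement read = conn.createStatement();
            ResultSet rs = read.executeQuery("SELECT value FROM counters WHERE id = 1");
            while (rs.next()) {
                System.out.println(rs.getLong("value"));
            }
            conn.close();
        }
    }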
A simple loadbalancer might only check whether MySQL still listens on the configured TCP port by connecting and then disconnecting (which can increase the counters for aborted connects, etc.). A more intelligent loadbalancer can log in to MySQL, query a table and check the result; the F5 BigIP loadbalancers are capable of doing this. But keep in mind that executing a query every x seconds can generate quite a bit of load and might cause locking issues.
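To illustrate the kind of query such an intelligent check runs, here is a small JDBC sketch that queries a check table and treats any error or unexpected result as "down". The meta.lb_check table and its available column are made up for the example, and a real F5 monitor would of course not be written in Java.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CheckTableProbe {
        // Returns true only if the server answers and is marked as available.
        static boolean isAvailable(String jdbcUrl, String user, String password) {
            try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT available FROM meta.lb_check LIMIT 1")) {
                return rs.next() && rs.getInt("available") == 1;
            } catch (Exception e) {
                // A refused connection, a timeout or a missing row all count as "down".
                return false;
            }
        }

        public static void main(String[] args) {
            System.out.println(isAvailable("jdbc:mysql://db1:3306/meta", "monitor", "secret"));
        }
    }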
To mark a machine for maintenance (or as failed) there are some options:
- Reject traffic from the loadbalancer using a firewall (iptables)
  - Rejecting all traffic might make it hard to back up/restore the machine or to connect with a remote client like MySQL Workbench.
  - Dropping traffic also works, but a reject might cause the loadbalancer to respond faster.
- Change the contents of the check table the loadbalancer queries (see the sketch after this list)
  - But be careful with replication: a change made on the master will replicate to the slaves unless it is kept out of the binary log.
- Connect to the loadbalancer via an API (e.g. F5 iControl) and change the status
  - Not every solution has an API.
- Manually change the status in the loadbalancer
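As a sketch of the check-table option, the snippet below flips the (again hypothetical) available flag; SET sql_log_bin = 0 keeps the change out of the binary log so it does not replicate to the other servers (it requires the SUPER privilege). Run it with false before maintenance and with true afterwards; something similar could also be called from the failover scripts mentioned below.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MaintenanceToggle {
        // Marks this server as (un)available in the check table the loadbalancer queries.
        static void setAvailable(String jdbcUrl, String user, String password,
                                 boolean available) throws Exception {
            try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
                 Statement stmt = conn.createStatement()) {
                // Keep the change out of the binary log so it does not replicate
                // to the other servers in the topology (requires SUPER).
                stmt.execute("SET sql_log_bin = 0");
                stmt.executeUpdate("UPDATE meta.lb_check SET available = "
                        + (available ? 1 : 0));
            }
        }

        public static void main(String[] args) throws Exception {
            // Take db1 out of the pool before maintenance.
            setAvailable("jdbc:mysql://db1:3306/meta", "admin", "secret", false);
        }
    }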
There are some great tools to manage failovers in a replication setup, like mysqlfailover from the MySQL Utilities and MHA. Those utilities can generally run a pre-failover and a post-failover script when a failover happens; this can be used to change the contents of a check table or to make a TCP port available or unavailable.
I tend to favor the two-layer setup over the three-layer setup (with a separate loadbalancer), but I do use the three-layer setup more often. The integration between failover tools and loadbalancers could be a lot better.
And I don't like all the health checks: the database pool in the application is doing health checks, the loadbalancer is doing health checks, the failover utility is doing health checks, the monitoring is doing health checks. Some of those should be combined. And more thorough health checks generate more load and can only be run less frequently, while simple/fast health checks might not catch a failure.
Please add a comment about how you're using loadbalancers and MySQL and how you automate it.
Hi Daniël!
Since you explicitly mention Connector/J, can I ask why you would choose to use haproxy instead of the load-balancing support natively baked into Connector/J? This can be done without any additional software, and provides an interface to manage changing topologies (temporary host maintenance, bringing new hosts into the load-balance target pool) without any application downtime or restarts:
http://mysqlblog.fivefarmers.com/2010/08/03/managing-load-balanced-connectorj-deployments-2/
I would use the loadbalancing in Connector/J if possible. I would use haproxy if the application and/or the connector does not support loadbalancing.
Thanks for clarifying, Daniël! I think I was a little confused by the classification of Connector/J (and haproxy) as a two-layer solution.
FWIW, there are a few other connectors which also offer load-balancing support to some extent now. Leaving these breadcrumbs for others looking into load-balanced deployments where they may be applicable:
Connector/Net 6.7 and PHP (via mysqlnd).
It's really a big topic. I would really like to hear some stories about it.
For HA, usually two BigIPs are necessary, and the configuration shows which one is the 'Active' one and whether there are 'Changes Pending'.
There is a lot to code to make MHA, for example, work smoothly with the LB layer:
- the script needs to detect which one is the active BigIP
- detect which pools are available
- detect which members in the pools are active while MHA is doing its magic
- maybe disable all the pools until MHA has 'confirmed' the new changes
- depending on the topology in place, open/enable the pools according to the changes applied by MHA
- possibly kill some MySQL connections (persistent connections? Java pool?), depending also on how the BigIP is configured
Some applications already implement the split between reads and writes, and this often allows defining a list of masters (active/passive or multi-master) and a list of slaves that can also serve just one side of the topology (very handy when it is necessary to scale reads). MHA has a lot of options and it's great, but sometimes it is very hard to change all the pool definitions and members 'on the fly' in the loadbalancer configuration and be sure about it; also, MHA may have promoted a slave to be the new master and so on, so the only option is to integrate iControl with specific code, I guess.