Monday, July 20, 2015
So I started testing this. I wrote a Python script with MySQL Connector/Python and used MySQL Sandbox to bring up an instance of MySQL 5.6.25.
One of the first settings I had to change was max_allowed_packet, which was expected. I set it to 1GB.
The next setting was less expected: innodb_log_file_size. The server enforces that a transaction has to fit in 10% of the combined size of the InnoDB log files. So I had to configure two log files of 5GB each to be able to insert one record of (almost) 1GB.
So that worked for a row of a bit less than 1GB; the limit is slightly under 1GB because there is some overhead in the packet and the total has to fit within 1GB.
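The combination above would look roughly like this in my.cnf (a sketch; the sizes follow the 1GB packet limit and the 10% redo log rule described above):

```ini
[mysqld]
# Maximum size of one client/server packet; 1G is the protocol maximum
max_allowed_packet        = 1G
# A transaction must fit in 10% of the total redo log size,
# so two 5G files give 10G total: enough for a row of (almost) 1GB
innodb_log_file_size      = 5G
innodb_log_files_in_group = 2
```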
For the next step (>1GB) I switched from Python to C so I could use mysql_stmt_send_long_data() which allows you to upload data in multiple chunks.
I expected that to work, but it didn't. This is because in MySQL 5.6 and up max_long_data_size was replaced by max_allowed_packet. But max_allowed_packet can only be set to a maximum of 1GB, while max_long_data_size could be set to 4GB.
So I switched from MySQL 5.6 to MariaDB 10.1, because MariaDB still has max_long_data_size. That worked, now I could upload rows of up to (almost) 4GB.
I also noticed that InnoDB complains in the error log about large tuples:
InnoDB: Warning: tuple size very big: 1100000025
So you can insert CD ISO images into your database. Small DVD images could also work if your connector uses the COM_STMT_SEND_LONG_DATA command.
But it's best to avoid this and keep the size of rows smaller.
The scripts I used for my tests (and references to the bugs I found):
Wednesday, January 28, 2015
This is a benchmark which can be used to secure your MySQL installation.
Saturday, January 10, 2015
• 18:30 - 19:00: Welcome
• 19:00 - 19:30: Easy Replication Hierarchy Management with Pseudo-GTID
(by Shlomi Noach)
• 19:30 - 20:00: Food & Discussion
• 20:00 - 20:30: Entry level Sphinx Search
(by Art van Scheppingen)
• 20:30 - 22:00: Discussion & Drinks
For more information and to RSVP you can visit the event page on the MySQL User Group NL website.
Monday, January 5, 2015
In this blog post I will describe different ways of using SSL with the MySQL database server.
What does SSL give you?
You might use MySQL replication over the internet or connect to MySQL over the internet.
Another possibility is that you connect over an enterprise network to which just too many people have access. This is especially an issue if you use a BYOD network.
SSL helps here by encrypting the network traffic to protect against eavesdropping. It also validates that you're talking to the correct server, which prevents man-in-the-middle attacks.
And you can also use SSL client certificates together with a password as two-factor authentication.
SSL is not the only option: you could use SSH tunnels, and many MySQL GUI clients like MySQL Workbench support this. But you can't easily use SSH with a Python script or mysqldump.
Things that could go wrong
Using SSL is almost always better than not using SSL at all, so there is not much you can do wrong.
But there are a few things that could go wrong:
A false sense of security
You think you are protected by using SSL, but you forgot some option somewhere and the connection falls back to a non-SSL connection. Make sure SSL is set as required so this can't happen, and use Wireshark or a similar tool to inspect the traffic and verify that it is really encrypted.
Not renewing certificates in time
You should have some mechanism in place to notify you that your certificate is about to expire. This could be a Nagios check, a note in your calendar or an email alert from your CA. If you let your certificates expire, your clients will stop with an error.
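Such a check is easy to script with OpenSSL (a sketch; the throw-away self-signed certificate below only stands in for your real server certificate):

```shell
# Generate a throw-away certificate valid for 90 days, purely for demonstration;
# in practice CERT would point at your real server certificate.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
    -subj "/CN=db1.example.com" \
    -keyout "$DIR/server-key.pem" -out "$DIR/server-cert.pem" 2>/dev/null
CERT="$DIR/server-cert.pem"

# -checkend N exits non-zero if the certificate expires within N seconds
if openssl x509 -checkend $((30*24*3600)) -noout -in "$CERT" >/dev/null; then
    echo "OK: certificate is valid for at least 30 more days"
else
    echo "WARNING: certificate expires within 30 days"
fi
```

The same check can be wired into a Nagios plugin or a cron job that emails you.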
Performance
If performance is important then you should benchmark with and without SSL to see what the impact is. You could try a different cipher, and try both OpenSSL and YaSSL to see which performs best.
Tools which use traffic inspection
If you are using a tool like VividCortex or pt-query-digest based on a pcap file, then you should make sure these continue to work after you've deployed SSL. This can be done by giving them your key and using a cipher which doesn't use Diffie-Hellman (DH), or by letting them use a different source like performance_schema and/or your slow query log. But this depends on what is supported by that application. This might also include some load balancer setups.
How SSL with MySQL differs from SSL in your browser
The browser by default has a big list of trusted CAs. MySQL doesn't trust anything by default, so that's a big difference. This is more similar to how OpenVPN uses SSL.
For both a MySQL server and a Web server you have to enable SSL. And with both it's possible to require client certificates, but that's much more common with MySQL.
There are some other minor differences, like protocol support: MySQL only uses TLS v1.0. And MySQL doesn't do hostname validation by default, so a certificate for db1.example.com might be used on the host db2.example.com. Your browser might use OCSP, CRLs or CRLSets to verify whether a certificate is still valid; MySQL is only capable of using a CRL since MySQL 5.6.
Setup 1: Server/Client with Internal CA
This is the most basic setup, and the one I created mysslgen for.
You should create a key for the server in PKCS#1 format, as PKCS#8 won't work. Then create a CSR (Certificate Signing Request) which can be signed by your CA. The end result is a CA certificate, a server certificate and a server key. I always use PEM formatted certificates.
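A minimal version of these steps with OpenSSL could look like this (a sketch; all file names and subjects are placeholders):

```shell
DIR=$(mktemp -d); cd "$DIR"

# 1. Create the CA key and a self-signed CA certificate
openssl req -new -x509 -nodes -days 3650 -subj "/CN=MyInternalCA" \
    -keyout ca-key.pem -out ca-cert.pem 2>/dev/null

# 2. Create the server key and a CSR. Note: recent OpenSSL versions may
#    write the key in PKCS#8 format; 'openssl rsa -traditional' can then
#    convert it back to PKCS#1 ("BEGIN RSA PRIVATE KEY") for MySQL.
openssl genrsa -out server-key.pem 2048 2>/dev/null
openssl req -new -key server-key.pem -subj "/CN=db1.example.com" \
    -out server-req.pem 2>/dev/null

# 3. Let the CA sign the CSR, producing the server certificate
openssl x509 -req -in server-req.pem -days 365 \
    -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 \
    -out server-cert.pem 2>/dev/null

# The server certificate should now verify against the CA certificate
openssl verify -CAfile ca-cert.pem server-cert.pem
```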
What you need to configure on the server:
- SSL server certificate and key (ssl-cert and ssl-key)
- SSL CA certificate (ssl-ca)
What you need to configure on the client:
- SSL CA certificate (ssl-ca)
To require the use of SSL for an account you should use REQUIRE SSL in your GRANT statement.
Setup 2: Server/Client with Internal CA and client certificate
For this you need to enable SSL on the server and create a certificate for the client.
On the client you need to specify the client certificate and key with ssl-cert and ssl-key.
Now you must use REQUIRE X509 to require a valid certificate. This won't work if you use the CA for more than this one MySQL server, as other people might be able to get a valid certificate.
You might use REQUIRE SUBJECT '/CN=myclient.example.com' to limit the certificates which can be used.
In this setup each certificate is issued by your CA, which gives you a lot of control over which certificates are issued. The result is that all certificates from your CA are trusted.
Setup 3: Server/Client with Public CA
This is similar to setup 2. But a public CA will issue much more certificates and will probably also revoke some of them.
Here you should use --ssl-verify-server-cert to make sure the server has a certificate which matches the hostname. Otherwise someone might use a certificate which is valid, just not for that hostname, to mount a man-in-the-middle attack.
Also you should use REQUIRE X509 and require a SUBJECT, and optionally an ISSUER, for each user.
Setup 4: Server/Client with multiple public CA's
This looks almost like what web browsers use.
You could either use a certificate bundle (one file with multiple certificates) and specify this file with ssl-ca. On Ubuntu such a bundle is made available by the ca-certificates package as /etc/ssl/certs/ca-certificates.crt.
Another option is to specify ssl-capath instead of ssl-ca and set it to /etc/ssl/certs. If you want to use a custom directory with CAs then you need to run the c_rehash utility from OpenSSL on it to generate the correct symlinks. This only works if you use OpenSSL.
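What c_rehash does can be illustrated by hand (a sketch with a throw-away certificate): each CA certificate gets a symlink named after its subject hash, which is how OpenSSL looks up CAs in an ssl-capath directory. Newer OpenSSL versions also ship this as 'openssl rehash'.

```shell
DIR=$(mktemp -d); cd "$DIR"
# A throw-away CA certificate, just for demonstration
openssl req -new -x509 -nodes -subj "/CN=MyInternalCA" \
    -keyout ca-key.pem -out ca-cert.pem 2>/dev/null

# The subject hash is the file name OpenSSL expects in a capath directory
HASH=$(openssl x509 -hash -noout -in ca-cert.pem)
ln -s ca-cert.pem "$HASH.0"

# OpenSSL can now find the CA via the directory instead of a bundle file
openssl verify -CApath . ca-cert.pem
```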
Setup 5: Replication
This works by setting the MASTER_SSL_* parameters in the CHANGE MASTER TO command.
Make sure to use MASTER_SSL_VERIFY_SERVER_CERT if possible to enable hostname verification.
The only way to be really sure that the connection is using SSL is to make SSL required for the replication user. The output of SHOW SLAVE STATUS will only show whether SSL is allowed, not whether it's in active use.
One CA per server or a centralized CA?
You could use a CA per server and then issue a client certificate for each client. But then you would need a different client certificate for each server you want to connect to, and you would have many more CA certificates to distribute.
So you might want to use one CA instead of a per-server CA.
What to do if things go wrong
If one of the private keys (client, server or CA) gets compromised, there are a few things you can do:
Nothing; just wait till the certificate expires. This only works if you are using short-lived certificates (e.g. certificates which are valid for 7 days). It still leaves a window in which you are vulnerable, though you might have another control in place which mitigates this risk. The drawback is that you need to restart MySQL every 7 days and you need some mechanism to distribute the certificates in an automated fashion.
Re-create all certificates and keys (including the CA). This might only be feasible if you have a very small CA with just a few users and one or more servers.
Use a CRL. This is a Certificate Revocation List. This only works if your certificates have proper serial numbers. Also, if you use multiple CAs then you must have a CRL for each: either bundle them in one file and specify that with ssl-crl, or put them in a directory with symlinks created by c_rehash and specify that with ssl-crlpath (OpenSSL only). If you use CRLs then you need MySQL 5.6 or newer and some method for updating the CRL, also for all clients.
Note: You must always re-create the private key if it leaked; re-using the old CSR (Certificate Signing Request) is not enough.
Note on client configuration
The ssl-ca parameter might be set in the [client] section of your my.cnf. However, mysqlbinlog then stops working, so use loose-ssl-ca in the client section instead.
ssl-cert should be in the [client] section of your ~/.my.cnf, and that file should be properly protected.
It is currently not possible to add SSL settings to a 'login-path' as created by mysql_config_editor.
Note on ciphers
With ssl-cipher you can set a certain cipher, but with YaSSL this is quite limited. When you use OpenSSL you could use a cipher suite, which is a certain set of ciphers with some filters and an order.
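With OpenSSL you can expand such a cipher suite string on the command line to see which ciphers it actually selects (the filter string here is just an example):

```shell
# List the ciphers matched by a suite string: "high" strength ciphers,
# excluding anonymous (unauthenticated) key exchange and MD5
openssl ciphers -v 'HIGH:!aNULL:!MD5' | head -n 4
```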
Note on YaSSL, OpenSSL and status variables
It's hard to determine whether your MySQL server uses OpenSSL or YaSSL. There are some methods, like running ldd /path/to/mysqld. By default MySQL Community Edition uses YaSSL and MySQL Enterprise Edition uses OpenSSL.
Depending on the SSL implementation some status variables might not get updated correctly.
To make sure you're fully protected against any SSL vulnerabilities you should follow the Oracle Critical Patch Update and keep your MySQL and OpenSSL packages up to date.
Note on running a CA
This is not as easy as it seems. It is easy to get started, but doing it right is hard. Make sure you understand how to issue a CRL, what X509 extensions are, and which of them you must use.
One of the many websites which tell you how to set up your own CA: How to act as your own certificate authority (CA) - jamielinux
Also many of the public CA's offer to host your private CA for you.
Books on SSL/TLS
There is a book about SSL I can recommend: Bulletproof SSL and TLS
And there is a very entertaining book about the history of cryptography: The Code Book
Thursday, January 1, 2015
First: Don't panic.
Often a password is required in addition to a certificate to connect to the server, so someone with the certificate can't use it without the password. The certificate itself might be protected by a passphrase, but that's really rare. Also, access to MySQL and/or your account should be limited to certain IPs.
The next step is to revoke the certificate. This is possible since MySQL 5.6.3 by using a Certificate Revocation List (CRL).
A CRL is a list of the serials of the revoked certificates, signed by the CA. So this will only work if the certificates have unique serials.
To get the serial of a certificate with OpenSSL:
$ openssl x509 -in client-cert.pem -noout -text | grep 'Serial Number'
Serial Number: 3 (0x3)
To get the serial of a certificate with GnuTLS:
$ certtool --certificate-info --infile client-cert.pem | grep 'Serial Number'
Serial Number (hex): 03
If all certificates have the same serial, then you can't revoke them and you have to create a new CA and all other certificates.
If you do have unique serials for your certificates then you can use GnuTLS to create the CRL:
$ certtool --generate-crl --load-ca-privkey=CAkey.pem \
Then you will be asked in how many days the next CRL will be issued and what the CRL number is.
With OpenSSL you can use the 'ca' command with the -gencrl option, but that requires a bit of a setup.
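That setup is roughly the following (a sketch: a throw-away CA plus a minimal config file with the certificate database and CRL number file that 'openssl ca' expects; all names are examples):

```shell
DIR=$(mktemp -d); cd "$DIR"
# A throw-away CA, just for demonstration
openssl req -new -x509 -nodes -subj "/CN=MyInternalCA" \
    -keyout CAkey.pem -out CAcert.pem 2>/dev/null

touch index.txt     # the CA's certificate database (empty: nothing revoked yet)
echo 01 > crlnumber # the number of the next CRL to be issued

cat > ca.cnf <<'EOF'
[ ca ]
default_ca = myca
[ myca ]
database         = index.txt
crlnumber        = crlnumber
default_md       = sha256
default_crl_days = 30
EOF

openssl ca -config ca.cnf -gencrl \
    -keyfile CAkey.pem -cert CAcert.pem -out CAcrl.pem 2>/dev/null

# Inspect the resulting (still empty) revocation list
openssl crl -in CAcrl.pem -noout -text | head -n 5
```

Revoking a certificate would then be 'openssl ca -config ca.cnf -revoke cert.pem' followed by re-running the -gencrl step.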
Once you have the CRL file you can use the --ssl-crl option for the server and clients to let them know which certificates are revoked. It looks like MySQL Workbench can't be configured to use CRLs, but it should work for most other tools and connectors.
The major drawback of using a CRL is that you have to update the file for all servers and clients if you revoke a certificate.
With OpenSSL you can use openssl crl -in CAcrl.pem -noout -text to view the CRL contents, and with GnuTLS you can use certtool --infile=CAcrl.pem --crl-info.
If you use a public CA for your MySQL certificates then the CA can revoke the certificate, but then you still need to get the latest CRL from your CA and distribute it to all servers and clients. This is because MySQL doesn't fetch the CRL from the 'CRL Distribution Points' URL's which may be embedded in the certificates.
Saturday, December 27, 2014
The output of the XA RECOVER command to list transactions was hard to read because of the representation of the data column:
The good news is that 5.7 has transaction information in performance_schema:
mysql> select trx_id, isolation_level, state, xid, xa_state, access_mode
    -> from performance_schema.events_transactions_current;
+-----------------+-----------------+--------+--------+----------+-------------+
| trx_id          | isolation_level | state  | xid    | xa_state | access_mode |
+-----------------+-----------------+--------+--------+----------+-------------+
| NULL            | REPEATABLE READ | ACTIVE | x-1    | PREPARED | READ WRITE  |
| 421476507015704 | REPEATABLE READ | ACTIVE | NULL   | NULL     | READ WRITE  |
| NULL            | REPEATABLE READ | ACTIVE | foo-1  | ACTIVE   | READ WRITE  |
| NULL            | REPEATABLE READ | ACTIVE | NULL   | NULL     | READ ONLY   |
| NULL            | REPEATABLE READ | ACTIVE | 0x0101 | PREPARED | READ WRITE  |
+-----------------+-----------------+--------+--------+----------+-------------+
5 rows in set (0.00 sec)
This is not only more readable but allows you to use functions on the output. It is also much more complete as all transactions are listed here, not only XA transactions.
Update: It's not only possible with performance_schema; in 5.7 there is also a new option to print the xid in hex for XA RECOVER:
mysql> XA RECOVER;
+----------+--------------+--------------+------+
| formatID | gtrid_length | bqual_length | data |
+----------+--------------+--------------+------+
|        1 |            2 |            0 |      |
+----------+--------------+--------------+------+
1 row in set (0.00 sec)

mysql> XA RECOVER CONVERT XID;
+----------+--------------+--------------+--------+
| formatID | gtrid_length | bqual_length | data   |
+----------+--------------+--------------+--------+
|        1 |            2 |            0 | 0x0001 |
+----------+--------------+--------------+--------+
1 row in set (0.00 sec)
Tuesday, November 4, 2014
The mysqlbackup command has settings for the number of read, write and process threads. The defaults are 1 read, 1 write and 6 process threads, so that isn't really useful for throttling as I was already using the defaults.
Using the ionice utility wouldn't work, as that requires the CFQ I/O scheduler.
I found a solution in this blog post: use cgroups on Linux. I had used cgroups before to test how a Galera setup works when one of the three servers has a much slower CPU.
# mkdir /cgroup/blkio
# mount -t cgroup -o blkio non /cgroup/blkio
# cgcreate -g blkio:/mysqlbackup
# ls -lh /dev/mapper/vgdb01-lvdb01
lrwxrwxrwx 1 root root 7 Sep 26 14:22 /dev/mapper/vgdb01-lvdb01 -> ../dm-2
# ls -lh /dev/dm-2
brw-rw---- 1 root disk 253, 2 Sep 26 14:22 /dev/dm-2
# cgset -r blkio.throttle.read_iops_device="253:2 20" mysqlbackup
# cgset -r blkio.throttle.write_iops_device="253:2 20" mysqlbackup
# echo $$ > /cgroup/blkio/mysqlbackup/tasks
# cat /proc/$$/cgroup
1:blkio:/mysqlbackup
# mysqlbackup --user=root --password=xxxxxx --with-timestamp --backup-dir=/data/backup backup
This worked exactly as I had hoped and expected. Both read and write operations were limited to 20 iops.
It turned out to be an issue with the storage system, so I won't have to use this in production, but I hope this will be of value to someone. It can also be useful in other situations.