Sunday, August 31, 2014

Putting MySQL Cluster in a container

To get more familiar with Docker and to create a test setup for MySQL Cluster, I created Docker images for the various components of MySQL Cluster (a.k.a. NDB Cluster).

At first I created a Fedora 20 container and ran all components in one container. That worked and is quite easy to set up, but that's not how one is supposed to use Docker.

So I created Dockerfiles for all components, plus one base image.

The base image:
  • contains the MySQL Cluster software
  • has libaio installed
  • has a mysql user and group 
  • serves as a base for the other images (a rough Dockerfile sketch follows below)
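
A rough sketch of what the base image's Dockerfile might look like (the Fedora tag, tarball name and paths are assumptions, not the published Dockerfile):

FROM fedora:20

# libaio is needed by the MySQL Cluster binaries
RUN yum -y install libaio && yum clean all

# unprivileged user and group to run the daemons
RUN groupadd mysql && useradd -r -g mysql mysql

# unpack the MySQL Cluster tarball (assumed to be present in the build context)
ADD mysql-cluster-gpl-7.2.17-linux2.6-x86_64.tar.gz /opt/
RUN ln -s /opt/mysql-cluster-gpl-7.2.17-linux2.6-x86_64 /opt/mysql
ENV PATH /opt/mysql/bin:$PATH
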
The management node (ndb_mgmd) image:
  • Has ndb_mgmd as entrypoint
  • Has a config.ini for the cluster config
  • Should be started with "--name=mymgm01" (example below)
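
For illustration, the cluster config and the run command could look roughly like this (the addresses, number of nodes and the image tag are assumptions, not copied from the published Dockerfiles):

# config.ini sketch baked into the image
[ndbd default]
NoOfReplicas=2

[ndb_mgmd]
hostname=172.17.0.2

[ndbd]
hostname=172.17.0.3
[ndbd]
hostname=172.17.0.4

[mysqld]
[mysqld]

# start the management node container
docker run -d --name=mymgm01 dveeden/mysqlcluster72:mgmd
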
The data node (ndbmtd) image:
  • Has ndbmtd as entrypoint
  • Uses the connect string: "host=${MGM01_PORT_1186_TCP_ADDR}:1186"
  • Should be started with "--link mymgm01:mgm01" to allow it to connect to the management node.
  • You should start two containers of this type to form a node group of two nodes, as shown below.
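
Two data node containers could then be started like this (the image tag is an assumption):

docker run -d --name=myndb01 --link mymgm01:mgm01 dveeden/mysqlcluster72:ndbmtd
docker run -d --name=myndb02 --link mymgm01:mgm01 dveeden/mysqlcluster72:ndbmtd

The mgm01 alias in --link is what makes the MGM01_PORT_1186_TCP_ADDR environment variable available inside the container.
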
The API node (mysqld) image:
  • has a my.cnf
  • Runs mysqld_safe
  • Should be started with "--link mymgm01:mgm01" to allow it to connect to the management node.
  • The ndb-connectstring is given as a parameter to mysqld_safe because it comes from an environment variable; it's not possible to use environment variables from within my.cnf. Docker is supposed to also update /etc/hosts, but that didn't work for me.
  • You should expose port 3306 for your application (see the run command below)
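
A possible way to start the SQL node (the image tag and host port mapping are assumptions):

docker run -d --name=mysql01 --link mymgm01:mgm01 -p 3306:3306 dveeden/mysqlcluster72:sql
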
The management client (ndb_mgm) image:
  • Runs ndb_mgm as entrypoint
  • Should be started with "--link mymgm01:mgm01" to allow it to connect to the management node.
  • Running ndb_mgm in a container removes the need to publish port 1186 on the management server.
  • You can override the entrypoint to run other NDB utilities like ndb_desc or ndb_select_all (see the example below).
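
For example (the image tag is an assumption, and the connect string, database and table names are placeholders):

# interactive management client
docker run -it --rm --link mymgm01:mgm01 dveeden/mysqlcluster72:mgmclient

# override the entrypoint to run another NDB utility
# (assuming ndb_desc is on the container's PATH)
docker run -it --rm --link mymgm01:mgm01 --entrypoint ndb_desc \
  dveeden/mysqlcluster72:mgmclient -c 172.17.0.2:1186 -d test t1
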
The images can be found on https://registry.hub.docker.com/u/dveeden/mysqlcluster72/
The Dockerfiles can be found on https://github.com/dveeden/dve-docker  

Possible improvements
  • Use hostnames in the config.ini instead of IPv4 addresses. This makes it more dynamic, but it means updating /etc/hosts or fiddling with DNS.
  • Use VOLUME in the Dockerfiles to make working with data easier.

Sunday, August 17, 2014

The new cloud backup option of MySQL Enterprise Backup

MySQL Enterprise Backup 3.10 supports backups to the cloud. The only supported cloud service is Amazon S3.

When the cloud destination is used, mysqlbackup will upload the backup as an image file.

You can specify all options on the command line:
mysqlbackup --cloud-service=s3 --cloud-aws-region=eu-west-1 \
--cloud-access-key-id=AKIAJLGCPXEGVHCQD27B \
--cloud-secret-access-key=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 \
--cloud-bucket=meb_myserver --cloud-object-key=firstbackup --cloud-trace=0 \
--backup-dir=/tmp/firstbackup --backup-image=- --with-timestamp backup-to-image

But you can also put the settings in my.cnf:
[mysqlbackup_cloud]
cloud-service=s3
cloud-aws-region=eu-west-1
cloud-access-key-id=AKIAJLGCPXEGVHCQD27B
cloud-secret-access-key=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
cloud-bucket=meb_myserver
cloud-trace=0
backup-dir=/data/cloudbackup
backup-image=-
with-timestamp

The with-timestamp option is important, as the backup won't start if the backup-dir already exists. This is because mysqlbackup leaves the backup directory in place after uploading the backup. The backup directory will only contain meta info and the log file, not the actual backup.

By using a group suffix like _cloud you can put settings for multiple types of backups in one cnf file.

mysqlbackup --defaults-group-suffix='_cloud' \
--cloud-object-key=backup_2014081701 backup-to-image
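
For example, a second group for plain local backups could live in the same my.cnf (the group suffix and path here are just an illustration):

[mysqlbackup_local]
backup-dir=/data/localbackup
with-timestamp

You would then run mysqlbackup --defaults-group-suffix='_local' backup, which reads the [mysqlbackup] and [mysqlbackup_local] groups but skips the _cloud settings.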

The account you're using should have this policy to be allowed to read from and write to the S3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1408302840000",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket", 
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::meb_myserver/*"
      ]
    }
  ]
}

This looks like a good option to me if you're already using mysqlbackup and Amazon. It would be nice if the next version supported other cloud providers (e.g. OpenStack Swift, Ceph). Implementing this should be easy for those with an S3 compatibility layer, but will probably take more time for others.

I did find some bugs (just search for tag=cloud on http://bugs.mysql.com if you're interested).

Thursday, August 7, 2014

MySQL User Group Meetup in Amsterdam

This Tuesday Markus Winand will talk at the MySQL User Group NL meetup about "Indexes: The neglected performance all-rounder".
Markus is known for the http://use-the-index-luke.com website and the SQL Performance Explained book.

Date: Tuesday August 12
Location: Marktplaats/eBay Office Amsterdam
Registration: http://www.meetup.com/MySQL-User-Group-NL/events/196440532/