When the cloud destination is used, mysqlbackup uploads the backup as a single image file.
You can specify all options on the command line:
mysqlbackup --cloud-service=s3 --cloud-aws-region=eu-west-1 \
--cloud-access-key-id=AKIAJLGCPXEGVHCQD27B \
--cloud-secret-access-key=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 \
--cloud-bucket=meb_myserver --cloud-object-key=firstbackup --cloud-trace=0 \
--backup-dir=/tmp/firstbackup --backup-image=- --with-timestamp backup-to-image
But you can also put the settings in my.cnf:
[mysqlbackup_cloud]
cloud-service=s3
cloud-aws-region=eu-west-1
cloud-access-key-id=AKIAJLGCPXEGVHCQD27B
cloud-secret-access-key=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
cloud-bucket=meb_myserver
cloud-trace=0
backup-dir=/data/cloudbackup
backup-image=-
with-timestamp
The with-timestamp option is important, as the backup won't start if the backup-dir already exists. This is because mysqlbackup leaves the backup directory in place after uploading the backup; the directory contains only meta information and the log file, not the actual backup.
By using a group suffix like _cloud, you can keep the settings for multiple types of backups in one cnf file.
mysqlbackup --defaults-group-suffix='_cloud' \
--cloud-object-key=backup_2014081701 backup-to-image
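The same cloud options can be reused for a restore. As a sketch (the --backup-dir and --datadir paths here are example values, not from this post; check the mysqlbackup manual for your version), a restore streamed straight from the bucket could look like:

```shell
# Hypothetical restore: stream the image back from S3 using the same
# [mysqlbackup_cloud] group, then copy it into an (empty) datadir.
mysqlbackup --defaults-group-suffix='_cloud' \
  --cloud-object-key=backup_2014081701 \
  --backup-dir=/tmp/restore --datadir=/var/lib/mysql \
  --backup-image=- copy-back-and-apply-log
```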
The account you're using should have a policy like this to be allowed to read from and write to the s3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1408302840000",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::meb_myserver/*"
      ]
    }
  ]
}
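A quick way to catch typos before attaching the policy is to run it through a JSON validator. A minimal sketch (meb-policy.json is a hypothetical filename):

```shell
# Write the policy to a file and check that it is valid JSON.
cat > meb-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1408302840000",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": ["arn:aws:s3:::meb_myserver/*"]
    }
  ]
}
EOF
# json.tool exits non-zero on malformed JSON, so a broken policy fails here.
python3 -m json.tool meb-policy.json
```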
This looks like a good option to me if you're already using mysqlbackup and Amazon. It would be nice if a future version supported other cloud providers (e.g. OpenStack Swift, Ceph). Implementing this should be easy for providers with an S3 compatibility layer, but will probably take more time for others.
I did find some bugs (just search for tag=cloud on http://bugs.mysql.com if you're interested).