
Posts

Showing posts from 2013

Looking back on 2013

The year is almost at its end. Looking back at the past year I think it was a good year for MySQL. The 5.6 release arrived in February and has proven to be a very good release. There were many folks who reported bugs, which is good as it means they care about the product. But MySQL is not only a product, it's a large ecosystem. One of the big players in the ecosystem is Oracle, and this year they really participated in the MySQL community by having their engineers attend conferences which were not organized by Oracle (like Percona Live and FOSDEM). This year I couldn't attend the MySQL Connect and Percona Live conferences, but I hope to be able to attend again in 2014. I did attend FOSDEM, which is a really nice (and different) conference. For MariaDB it was also an interesting year, as a number of Linux distributions and customers switched from MySQL to MariaDB (and sometimes back again). I wonder what 2014 will bring for MariaDB. The TokuDB storage engine w...

The importance of multi source replication

One of the latest labs releases of Oracle MySQL brings multi-source replication. This lifts the limitation found in earlier releases that a MySQL slave can only have one master. To be fair, there were other ways of doing this already: using a time-based switch as described in MySQL High Availability, using the multi-source feature in the yet-to-be-released MariaDB 10, or using Tungsten Replicator. There are many good uses for multi-source replication. You could use it to combine data from multiple shards or applications. If MySQL is used with a loadbalancer, the easiest setup to build is a 2-way multi-master. This makes it possible to use the InnoDB storage engine. Using MySQL Cluster is another alternative, but MySQL Cluster uses the NDB storage engine, which might not be a supported option for your application. A MySQL Cluster setup also needs at least 4 machines to be fully redundant, while MySQL multi-master only needs two machines. There is little intelligence required in the load...
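In the labs release each master gets its own replication channel on the slave. A rough sketch of what such a setup could look like (hostnames, credentials and channel names are invented, and the labs syntax may differ from what eventually ships):

```sql
-- On the slave: one CHANGE MASTER per source, each with its own channel.
CHANGE MASTER TO
    MASTER_HOST='master1.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='repl_password',
    MASTER_AUTO_POSITION=1
    FOR CHANNEL 'shard1';

-- Start replication for just that channel:
START SLAVE FOR CHANNEL 'shard1';
```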

MariaDB's RETURNING feature.

There is a new feature in the MariaDB 10 Beta which caught my eye: support for returning a result set on DELETE. With a 'regular' DELETE operation you only get to know the number of affected rows. To get more info or actions you have to use a trigger or a foreign key. Another possibility is doing a SELECT and then a DELETE; with the correct transaction isolation and transactional support this will work. With support for the RETURNING keyword this has become easier to do, and it will probably benefit performance and save you a few round trips and a few lines of code. There is already support for RETURNING in PostgreSQL. And PostgreSQL has another nifty feature for which RETURNING really helps: CTEs, or common table expressions (the WITH keyword). I really hope to see CTE support in MySQL or MariaDB some day. An example of RETURNING and CTE in PostgreSQL: demo=# select * from t1; id | name ----+------- 1 | test1 2 | test2 3 | test3 4 | test1 5 | test2 ...
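The basic shape of the PostgreSQL feature is easy to show. A minimal sketch against the t1 table from the excerpt (which row values come back depends on the table's actual contents):

```sql
-- PostgreSQL: delete rows and get the deleted rows back in one round trip,
-- instead of a separate SELECT followed by a DELETE.
DELETE FROM t1
 WHERE name = 'test1'
RETURNING id, name;
```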

Using the PAM authentication plugin

The procedure for using the PAM authentication plugin as documented doesn't work flawlessly on Ubuntu. So here is how it works on Ubuntu (and probably also on other Debian-based systems). Please note that the PAM authentication plugin is an enterprise feature. 1. Make sure the plugin is loaded. This can be done by adding the following to the mysqld section of my.cnf (don't forget to restart). You could also use INSTALL PLUGIN to load it without a restart. plugin-load=authentication_pam.so 2. Add a user which will use the plugin: mysql> CREATE USER 'dveeden'@'localhost' IDENTIFIED WITH authentication_pam; Query OK, 0 rows affected (0.00 sec) 3. Add a PAM config file for 'mysql': create /etc/pam.d/mysql with the following contents: @include common-auth @include common-account @include common-session-noninteractive 4. Log in with the user: mysql -p --enable-cleartext-plugin 5. Verify that you're really connected as the correct user. mysql...

time for standards 2

I was a bit wrong in my previous post. MySQL 5.6 does allow you to supply an fsp (fractional seconds precision) with CURRENT_TIMESTAMP (thanks Roy). mysql> SELECT CURRENT_TIMESTAMP,CURRENT_TIMESTAMP(6); +---------------------+----------------------------+ | CURRENT_TIMESTAMP | CURRENT_TIMESTAMP(6) | +---------------------+----------------------------+ | 2013-10-27 10:38:59 | 2013-10-27 10:38:59.182530 | +---------------------+----------------------------+ 1 row in set (0.00 sec) It feels a bit weird to me, however, as CURRENT_TIMESTAMP is often used without () and doesn't look like a function. So when I tried to use CURRENT_TIMESTAMP with an fsp of 6 it did not behave the way I expected: mysql> CREATE TABLE t1 (ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP(6)); ERROR 1067 (42000): Invalid default value for 'ts' mysql> CREATE TABLE t1 (ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP(0)); Query OK, 0 rows affected (0.01 sec) mysql> INSERT INTO t1 VALUES(CURRENT_TIMESTAMP(6)); Query OK,...

time for standards

MySQL 5.6 includes support for microsecond timestamp resolution, which is a great new feature. To get the current timestamp in MySQL 5.5 you could use NOW(), SYSDATE() or CURRENT_TIMESTAMP. mysql_5.5> SELECT NOW(),SYSDATE(),CURRENT_TIMESTAMP; +---------------------+---------------------+---------------------+ | NOW() | SYSDATE() | CURRENT_TIMESTAMP | +---------------------+---------------------+---------------------+ | 2013-10-26 15:46:24 | 2013-10-26 15:46:24 | 2013-10-26 15:46:24 | +---------------------+---------------------+---------------------+ 1 row in set (0.01 sec) If we run the same statement in MySQL 5.6 the output is the same. This is great for compatibility, but what if we want those microsecond timestamps? mysql_5.6> SELECT NOW(),SYSDATE(),CURRENT_TIMESTAMP; +---------------------+---------------------+---------------------+ | NOW() | SYSDATE() | CURRENT_TIMESTAMP | +---------------------+----------------...
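In 5.6 the microseconds have to be requested explicitly, both in queries and in column definitions. A small sketch of how that looks (table and column names are illustrative):

```sql
-- Ask for microseconds explicitly with an fsp argument of up to 6:
SELECT NOW(6), SYSDATE(6);

-- Columns need an fsp too, otherwise the fractional part is dropped;
-- the default's fsp has to match the column's fsp:
CREATE TABLE t (ts TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP(6));
```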

Persistent statistics and partitions

Today I was studying for the MySQL 5.6 exams, focusing on these two items: Create and utilize table partitioning; Obtain MySQL metadata from INFORMATION_SCHEMA tables. The first step is to create a table and partition it by hash. mysql> CREATE TABLE pfoo (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255)) -> PARTITION BY HASH(id) PARTITIONS 4; Query OK, 0 rows affected (0.04 sec) mysql> INSERT INTO pfoo(name) VALUES('test01'),('test02'),('test03'),('test04'), -> ('test05'),('test06'),('test07'),('test08'),('test09'),('test10'),('test11'); Query OK, 11 rows affected (0.00 sec) Records: 11 Duplicates: 0 Warnings: 0 mysql> SELECT * FROM pfoo; +----+--------+ | id | name | +----+--------+ | 4 | test04 | | 8 | test08 | | 1 | test01 | | 5 | test05 | | 9 | test09 | | 2 | test02 | | 6 | test06 | | 10 | test10 | | 3 | test03 | | 7 | test07 | | 11...
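Both exam topics meet in INFORMATION_SCHEMA.PARTITIONS, which exposes per-partition metadata for the table created above. A minimal query (the schema name 'test' is an assumption):

```sql
-- Inspect the hash partitions of pfoo via INFORMATION_SCHEMA:
SELECT PARTITION_NAME, PARTITION_METHOD, TABLE_ROWS
  FROM INFORMATION_SCHEMA.PARTITIONS
 WHERE TABLE_SCHEMA = 'test'
   AND TABLE_NAME = 'pfoo';
```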

MySQL Certification Study: Write a basic .NET application that uses MySQL

I've registered for the MySQL 5.6 beta exams, just like many more on Planet MySQL have done. One of the topics for the MySQL 5.6 Developer exam is: "Write a basic .NET application that uses MySQL". I have used Connector/Net a few times over the last few years, but the last time I looked at code involving Connector/Net was when I took the MySQL 5 Developer exam, which was long ago. I'm using Linux on both my laptop and on servers for most projects, so I don't use or see much .NET code. So I need a short refresher. To use Connector/Net on Linux you will need to install Mono. The base for .NET is CIL, which exists in at least three versions: 1.0, 2.0 and 4.0. To compile C# code with Mono you need the command which corresponds to the CIL version you are using. CIL 1.0: mcs; CIL 2.0: gmcs; CIL 4.0: dmcs. For Ubuntu you'll need these packages: libmono-system-data4.0-cil libmysql-cil-dev mono-runtime mono-dmcs pkg-config So now we should be able...

Using a loadbalancer with MySQL

A loadbalancer is used in many MySQL setups. Loadbalancing can be done with a hardware loadbalancer or with software. Software solutions tend to be a fair bit cheaper. Loadbalancing is often implemented as an extra layer: the application connects to the loadbalancer via the network and the loadbalancer connects to the database via the network. The loadbalancing can also be implemented on the application machines to get rid of the extra layer; this is better for latency (each layer adds latency) and, as each application server has its own loadbalancer, it can also remove bottlenecks. The drawbacks are that you'll lose a single point of control and that with multiple loadbalancers the number of health check queries will probably increase. The two-layer setup is often used with Connector/J and haproxy. There are simple and more intelligent loadbalancers. There are loadbalancers which can split read and write traffic, for example. Of course read/write splitting will only work for single st...

Playing hide-and-seek with databases

As far as I know there isn't a well-accepted set of best practices for MySQL, but there are many best practices known and used by most MySQL DBAs. One of those best practices is that the datadir must not be equal to a mountpoint; it has to be a subdirectory of a mountpoint. I learned this the hard way a few years ago when I used a NetApp Filer via NFS as a data directory. The NetApp filer exposed the snapshots via a .snapshot directory. A database in MySQL is a directory, so MySQL thought that the .snapshot directory was a database. This resulted in some issues with our monitoring scripts, so we had to create a mysql_data directory and move all the databases to that directory. For other setups, directories like lost+found, .zfs, etc. gave similar issues. In MySQL 5.6 a new feature was introduced to make it possible to hide some databases. To do this an ignore-db-dir option needs to be specified for each directory. On a running server the list of ignored database direc...
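A sketch of what this looks like in practice (directory names taken from the examples above; the option is repeated once per directory):

```sql
-- In my.cnf, under [mysqld]:
--   ignore-db-dir=.snapshot
--   ignore-db-dir=lost+found

-- On a running server the resulting list is exposed as a read-only
-- system variable:
SELECT @@global.ignore_db_dirs;
```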

Calculating the InnoDB free space - part 2

This is part 2, you can find part 1 here . So in part 1 we learned how to calculate the free space within InnoDB. But unfortunately that won't always work perfectly. The first issue: the DATA_FREE column in the INFORMATION_SCHEMA.TABLES table will not show a sum of the free space of each partition. This means that if you have innodb_file_per_table disabled and are using partitioning then you must divide DATA_FREE by the number of partitions. This is Bug #36312 . Example: mysql> SELECT CONCAT(T.TABLE_SCHEMA,'.',T.TABLE_NAME) AS TABLE_NAME, -> P.PARTITION_NAME AS PART,IBT.SPACE,IBD.PATH,T.DATA_FREE AS T_DATA_FREE, -> P.DATA_FREE AS P_DATA_FREE FROM INFORMATION_SCHEMA.TABLES T -> LEFT JOIN INFORMATION_SCHEMA.PARTITIONS P ON P.TABLE_SCHEMA=T.TABLE_SCHEMA -> AND P.TABLE_NAME=T.TABLE_NAME -> LEFT JOIN INFORMATION_SCHEMA.INNODB_SYS_TABLES IBT -> ON IBT.NAME=CONCAT(T.TABLE_SCHEMA,'/',T.TABLE_NAME) -> OR IBT.NA...

Calculating the InnoDB free space

Recently someone asked me if it's possible to find the total free space within InnoDB. I thought this would be very easy, as the INFORMATION_SCHEMA.TABLES table has a DATA_FREE column. So we could just use SELECT SUM(DATA_FREE) FROM INFORMATION_SCHEMA.TABLES, couldn't we? So what does the DATA_FREE column tell us? It tells us the free space within InnoDB for that particular table. A table can share a tablespace with multiple other tables. The tablespace which is used by a table depends on whether innodb_file_per_table was enabled during table creation and/or at the last time the table was rebuilt (e.g. by OPTIMIZE TABLE). If innodb_file_per_table was always disabled then this query probably reports the correct free space: SELECT DATA_FREE FROM INFORMATION_SCHEMA.TABLES WHERE ENGINE='InnoDB' LIMIT 1; This is because all tables will share one tablespace. If innodb_file_per_table was always enabled (new default for 5.6!) th...
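The two cases described above lead to two different queries. A sketch (estimates only, since DATA_FREE itself is approximate):

```sql
-- innodb_file_per_table always enabled: every table has its own
-- tablespace, so summing DATA_FREE per table is a reasonable estimate.
SELECT SUM(DATA_FREE)
  FROM INFORMATION_SCHEMA.TABLES
 WHERE ENGINE = 'InnoDB';

-- innodb_file_per_table always disabled: all tables share the system
-- tablespace and report the same DATA_FREE, so take a single row.
SELECT DATA_FREE
  FROM INFORMATION_SCHEMA.TABLES
 WHERE ENGINE = 'InnoDB'
 LIMIT 1;
```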

MySQL User Group NL Meetup May 31 at Snow

The third meeting for the MySQL User Group NL will be hosted by Snow B.V. in the Snow office in Geldermalsen .   The Agenda: Choosing the Best Sharding Policy - Doran Levari ( ScaleBase , using a video link) Performance Monitoring with Statsd and Graphite - Art van Scheppingen ( Spil Games ) Basic MySQL performance tuning for sysadmins - Daniël van Eeden ( Snow ) Please RSVP on the meetup.com page . The user group now has more than 100 members!

MySQL: Every detail matters

Some bugs can have a high impact if they cause data corruption, security issues or simply make MySQL crash. But what about small bugs and small usability issues? This entry from the MySQL 5.6.8 release notes is an interesting one: InnoDB: On startup, MySQL would not start if there was a mismatch between the value of the innodb_log_file_size configuration option and the actual size of the ib_logfile* files that make up the redo log. This behavior required manually removing the redo log files after changing the value of innodb_log_file_size. The fix causes MySQL to write all dirty pages to disk and re-create the redo log files during startup if it detects a size mismatch. (Bug #14596550) So previously you had to shut down MySQL without fast shutdown enabled, then move the ib_logfiles away and start MySQL again. This is not too hard or complicated. But I have seen some seriou...

Instant InnoDB

I've reviewed the Instant InnoDB book. It's a good starting point if you're learning about InnoDB, which is used for most MySQL installations. As a consultant I've seen many situations where some basic InnoDB setting (buffer pool size, log file size, file per table) can have a huge effect on the performance of the database and make it easier to manage. The website of the book can be found here .

Running Percona XtraDB Cluster in a Sandbox on Ubuntu

I wanted to do some experimentation with Percona XtraDB Cluster (Galera) and I didn't want to use virtual machines. I'm already using MySQL Sandbox for many other projects so that's the natural choice. I've downloaded the tarball for Percona XtraDB Cluster 5.5.29 and I've extracted it to ~/opt/mysql/5.5.29-pxc. Then I've installed 3 nodes: make_sandbox 5.5.29-pxc -- --sandbox_port 4551 \ --sandbox_directory msb_5_5_29-pxc01 make_sandbox 5.5.29-pxc -- --sandbox_port 4552 \ --sandbox_directory msb_5_5_29-pxc02 make_sandbox 5.5.29-pxc -- --sandbox_port 4553 \ --sandbox_directory msb_5_5_29-pxc03 But when I try to start a node this error happens: 130327 14:21:03 [Note] WSREP: wsrep_load(): loading provider library '/home/dveeden/mysql/5.5.29-pxc/lib/libgalera_smm.so' 130327 14:21:03 [ERROR] WSREP: wsrep_load(): dlopen(): libssl.so.10: cannot open shared object file: No such file or directory 130327 14:21:03 [ERROR] WSREP: wsrep_load(/home/...

MyISAM in a MySQL 5.6 InnoDB Only instance

With MySQL 5.5 the default storage engine was changed to InnoDB. This was a good step, as most users expected MySQL to support transactions, row-level locking and all the other InnoDB features, but with 5.1 they sometimes forgot to choose the right storage engine. As most databases don't have multiple storage engines and many MySQL users changed the default storage engine to InnoDB, this made the switch to MySQL easier and the default behaviour more in line with what people expect from a relational database. Changing the storage engine can of course be done on a per-table or per-instance basis (default-storage-engine in my.cnf). The temporary tables created with 'CREATE TEMPORARY TABLE ...' should not be forgotten. The performance of InnoDB or MyISAM for temporary tables can have quite an impact, especially with slow storage, a buffer pool which is too small to hold the complete dataset, or very small temporary tables. In MySQL 5.6 there is a new variable introduced to se...
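The excerpt is cut off before naming the new variable; it is presumably default_tmp_storage_engine (an assumption on my part). A sketch of how such a setting would be used:

```sql
-- Presumed 5.6 variable controlling the engine for CREATE TEMPORARY TABLE;
-- setting it globally affects new sessions, session scope affects only
-- the current one.
SET SESSION default_tmp_storage_engine = MyISAM;

-- This temporary table would now be created with MyISAM, while regular
-- tables still follow default_storage_engine (InnoDB by default):
CREATE TEMPORARY TABLE tmp1 (id INT);
```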

How to lose your (foreign) key?

One of the many reasons to switch from MyISAM to InnoDB (or NDB) is the support of foreign keys which helps to guarantee referential integrity. This is how it's supposed to work: mysql> CREATE TABLE parent (id INT AUTO_INCREMENT NOT NULL, -> pname VARCHAR(100) NOT NULL, PRIMARY KEY(`id`)); Query OK, 0 rows affected (0.01 sec) mysql> CREATE TABLE child (id INT AUTO_INCREMENT NOT NULL, cname VARCHAR(100) NOT NULL, -> pid INT NOT NULL, PRIMARY KEY(`id`), FOREIGN KEY `fk_parent_child` (`id`) REFERENCES parent (`id`)); Query OK, 0 rows affected (0.01 sec) mysql> SHOW CREATE TABLE child\G *************************** 1. row *************************** Table: child Create Table: CREATE TABLE `child` ( `id` int(11) NOT NULL AUTO_INCREMENT, `cname` varchar(100) NOT NULL, `pid` int(11) NOT NULL, PRIMARY KEY (`id`), CONSTRAINT `child_ibfk_1` FOREIGN KEY (`id`) REFERENCES `parent` (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 1 row in set (0.01 sec...
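Note that the child table in the excerpt declares its foreign key on its own id column rather than on pid, so the constraint does not actually link child rows to their parent. A sketch of the presumably intended definition:

```sql
-- Foreign key on pid, the column that actually references the parent:
CREATE TABLE child (
  id INT AUTO_INCREMENT NOT NULL,
  cname VARCHAR(100) NOT NULL,
  pid INT NOT NULL,
  PRIMARY KEY (`id`),
  FOREIGN KEY `fk_parent_child` (`pid`) REFERENCES parent (`id`)
) ENGINE=InnoDB;
```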

MySQL 5.5's new features

The recently released MySQL 5.6 gets a lot of attention, but for those who are still on 5.5 there is also good news: There are two new features in 5.5. The first feature is that there are more INFORMATION_SCHEMA tables for InnoDB. This means that it's possible to 'see' what's in the buffer pool. It also makes it possible to get more information about the LRU list. From the 5.5.28 changelog : InnoDB: Certain information_schema tables originally introduced in MySQL 5.6 are now also available in MySQL 5.5 and MySQL 5.1: INNODB_BUFFER_PAGE , INNODB_BUFFER_PAGE_LRU , and INNODB_BUFFER_POOL_STATS . (Bug #13113026) This is in the "Bugs Fixed" section instead of the "Functionality Added or Changed" section, which is a bit weird in my opinion. The second feature is a variable which makes it possible to get more information about the deadlocks. From the 5.5.30 changelog : InnoDB: The innodb_print_all_deadlocks configuration option from MySQL 5.6 ...
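With the back-ported INNODB_BUFFER_PAGE table you can indeed 'see' what's in the buffer pool. A sketch of the kind of query this enables (note that on a busy server querying this table is itself expensive):

```sql
-- Which tables occupy the most buffer pool pages?
SELECT TABLE_NAME, COUNT(*) AS pages
  FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE
 GROUP BY TABLE_NAME
 ORDER BY pages DESC
 LIMIT 10;
```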

How MySQL 5.6 handles passwords more securely

There are many things changed in MySQL 5.6 which are related to passwords: there is a new password hash algorithm (SHA-256); there is obfuscation for passwords with the .mylogin.cnf file; there is the option to store slave passwords in a database table; and it's now possible to supply a password to START SLAVE. But that's not what this blog post is about. This blog post is about a great new feature: hiding passwords from your log files, automatically. MySQL 5.6 will by default hide passwords in the general log. This is not just obfuscation, as only the one-way hash will be put in the log files. By setting log-raw=ON you can disable password hiding for the general log. The log-raw setting only influences the general log, so the passwords in the slow query log and the binary logs will still be hidden. With MySQL 5.5 this could be done manually by first storing the hash in a variable, but like many other workarounds this is no longer needed. But it doesn't stop there. Accor...

MySQL Events

The last MySQL User Group NL meeting was last Friday. It's always nice to learn about MySQL and meet other MySQL users. There were two presentations: one about MySQL User Defined Functions (UDFs) and one about MySQL TCO. The slides are available from the meetup page. There are already a number of MySQL events announced for the next few months. I'll only list events in the Netherlands and virtual events. MySQL Virtual Developer Days: a virtual event which will take place on March 19 (EMEA region; the NA event is on March 12). There are many interesting topics: Performance Schema, new 5.6 features, replication, MySQL Enterprise Monitor. The eVite SkySQL and MariaDB roadshow: 21 March 2013 in Amsterdam. This event has presentations by Monty Widenius, Seppo Jaakola and Anders Karlsson. The main topics are MariaDB and Galera. The event page MySQL User Group NL, Q2 Meeting: 31 May 2013 in Geldermalsen. The event page

Avoid clear text passwords in MySQL logging.

What happens when you use the PASSWORD() function to insert a password hash into a table? The hash will be written to the table The password might be written in clear text to the binlog The password might be written in clear text to the general log The password might be written in clear text to the slow query log The query mysql [localhost] {msandbox} (test) > INSERT INTO testpwd(pwd) VALUES(PASSWORD(' secret_password ')); Query OK, 1 row affected (0.00 sec) The General log 130128 16:04:41 1 Query INSERT INTO testpwd(pwd) VALUES(PASSWORD(' secret_password ')) The Slow query log # Time: 130128 16:04:41 # User@Host: msandbox[msandbox] @ localhost [] # Query_time: 0.004887 Lock_time: 0.001043 Rows_sent: 0 Rows_examined: 0 SET timestamp=1359385481; INSERT INTO testpwd(pwd) VALUES(PASSWORD(' secret_password ')); The binlog: # at 219 #130128 16:04:41 server id 1 end_log_pos 287 Query thread_id=1 exec_time=0 error_code=0 SET TIMESTAMP=1...
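As mentioned elsewhere on this blog, a manual workaround on pre-5.6 servers is to put the hash in a user variable first, so that the statement-based binary log records the variable's value (the hash) rather than the clear-text password. A sketch, reusing the testpwd table from the example (the SET statement itself still reaches the general log, so this only protects the binlog):

```sql
-- Compute the hash into a user variable; the binlog stores @pwd_hash
-- as a user-variable event containing the hash, not the password:
SET @pwd_hash = PASSWORD('secret_password');

-- The logged INSERT now references only the variable:
INSERT INTO testpwd(pwd) VALUES (@pwd_hash);
```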

Installing Multicorn on RHEL6

The Multicorn project makes it possible to write Foreign Data Wrappers for PostgreSQL in Python. To install Multicorn on RHEL6 the following is needed: PostgreSQL 9.2, Python 2.7, and make, GCC, etc. Installing PostgreSQL 9.2 is easy as it's available in the PostgreSQL Yum repository. Unfortunately Python 2.7 is not included in RHEL6, and replacing the 'system' Python is a bad idea. The solution is to do an 'altinstall' of Python. The "--shared" and ucs4 options are required. The altinstall will install a Python binary named python2.7 instead of just python. This allows you to have multiple Python versions on one system. wget http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tgz tar zxf Python-2.7.3.tgz cd Python-2.7.3 ./configure --shared --enable-unicode=ucs4 make make altinstall This will result in a /usr/local/bin/python2.7 which doesn't work. This is due to the fact that the libraries are installed in /usr/local/lib, which is...

How to install PGXN on RHEL6

Installing PGXN on RHEL 6.3 is not as easy as it might sound. First you need to install the PostgreSQL yum repo: rpm -ivh http://yum.postgresql.org/9.2/redhat/rhel-6.3-x86_64/pgdg-redhat92-9.2-7.noarch.rpm Then you need to install pgxnclient: yum install pgxnclient The pgxn client has 2 dependencies which are not listed in the package: setuptools simplejson 2.1 To satisfy the first dependency we need to install python-setuptools yum install python-setuptools The second one is not that easy as the simplejson version in RHEL6.3 is 2.0, which is too old. We can use PIP to install a newer version: yum remove python-simplejson yum install python-pip python-devel python-pip install simplejson And now the pgxn command will work.

MySQL version history (updated)

I've created a graph about the MySQL version history. It's mysql-graph-history on github. Please let me know if this is correct or if I'm forgetting some versions.

Untrusted downloads and MySQL

When the MySQL version from your distribution isn't good enough you need to download the latest Oracle MySQL. There are more possibilities, like Percona Server and MariaDB, but that's not what this post is about. The download site for MySQL is https://www.mysql.com/downloads/mysql/ and contains a mirror.php script which asks you if you'd like to log in with an Oracle Web Account and then chooses a mirror for you. You don't have to log in. Then you will be redirected to the chosen mirror. In my case this is https://cdn.mysql.com Firefox will give you a "This Connection is Untrusted" dialog. If you click on "Technical details" it will show the following error: cdn.mysql.com uses an invalid security certificate. The certificate is only valid for the following names: a248.e.akamai.net, *.akamaihd.net, *.akamaihd-staging.net (Error code: ssl_error_bad_cert_domain) The Qualys SSL Labs confirm the mismatch between the site name and the certificate...

How to install MySQL successfully on a Raspberry Pi

For those starting to learn MySQL who want to install it on a Raspberry Pi: there is a little gotcha you should be aware of. Disclaimer: only run commands if you know what they do. Always make sure you have backups of your important data. First you need to put the official Raspbian image on your SD card and then boot the Raspberry Pi. Then if you try to install MySQL with "sudo apt-get install mysql-server-5.5", this will fail. The reason for this is that the filesystem on the SD card will become full. This can be seen by running the "df -h" command. The SD card is probably 4GB, while the filesystem will be around 1.9GB. This was done to make the image fit on 2GB cards. To stretch the filesystem to the complete 4GB you need to run "sudo raspi-config" and choose the "expand_rootfs" option. Then you need to reboot. Now "df -h" should tell you that the filesystem has much more free space. But your MySQL installation failed. So you run ...