Tuesday, November 1, 2011

Fixed in Drizzle or just different?

In a previous post about different output for the same query there were 3 databases (MySQL, PostgreSQL and SQLite) and 3 different results.

I attended the "Fixed in Drizzle: No more GOTCHA's" talk during Percona Live London. The talk was full of issues which I've encountered many times, all of which are fixed in Drizzle. So I wondered whether this one was already fixed too.

Here are the results for Drizzle:
drizzle> select version();
| version()  |
| 2011.03.13 | 
1 row in set (0.000418 sec)

drizzle> create database test;
Query OK, 1 row affected (0.000622 sec)

drizzle> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Schema changed
drizzle> create table t1 (id serial, t time);
Query OK, 0 rows affected (0.001479 sec)

drizzle> insert into t1(t) values ('00:05:10');
Query OK, 1 row affected (0.001717 sec)

drizzle> select t,t*1.5 from t1;
| t        | t*1.5 |
| 00:05:10 |   465 | 
1 row in set (0.000691 sec)

And here is the result for MySQL:
mysql> select version();
| version() |
| 5.6.4-m6  |
1 row in set (0.00 sec)

mysql> create database test;
Query OK, 1 row affected (0.00 sec)

mysql> use test;
Database changed
mysql> create table t1 (id serial, t time);
Query OK, 0 rows affected (0.00 sec)

mysql> insert into t1(t) values ('00:05:10');
Query OK, 1 row affected (0.01 sec)

mysql> select t,t*1.5 from t1;
| t        | t*1.5 |
| 00:05:10 |   765 |
1 row in set (0.00 sec)

I guess that the logic is the same as for MySQL and the difference is due to the microsecond support which is already in Drizzle.
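For reference, here is my reading of the arithmetic behind the two results (an observation about the numbers, not confirmed against either server's source):

```sql
-- Drizzle: 465 equals the TIME value expressed in seconds, times 1.5
SELECT (5 * 60 + 10) * 1.5;  -- 310 * 1.5 = 465.0
-- MySQL: 765 equals the hhmmss digits read as a plain number, times 1.5
SELECT 000510 * 1.5;         -- 510 * 1.5 = 765.0
```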

I've reported this to the Drizzle developers as Bug 884687.

Wednesday, October 12, 2011

Same query, 3 databases, 3 different results

The SQL standard leaves a lot of room for different implementations. This is a little demonstration of one of such differences.

SQLite  3.7.4
sqlite> create table t1 (id serial, t time);
sqlite> insert into t1(t) values ('00:05:10');
sqlite> select t,t*1.5 from t1;

MySQL 5.6.4-m5
mysql> create table t1 (id serial, t time);
Query OK, 0 rows affected (0.01 sec)

mysql> insert into t1(t) values ('00:05:10');
Query OK, 1 row affected (0.00 sec)

mysql> select t,t*1.5 from t1;
| t        | t*1.5 |
| 00:05:10 |   765 |
1 row in set (0.00 sec)

PostgreSQL 9.0.3
test=# create table t1 (id serial, t time);
NOTICE:  CREATE TABLE will create implicit sequence "t1_id_seq" for serial column "t1.id"
test=# insert into t1(t) values ('00:05:10');
test=# select t,t*1.5 from t1;
    t     | ?column?
 00:05:10 | 00:07:45
(1 row)

I think that the behaviour from PostgreSQL is the correct one. MySQL just removes the colons to get the string '000510' and then multiplies that value. The behaviour from SQLite is even stranger.

Of course for MySQL you could use the time_to_sec and sec_to_time functions.

mysql> select t,sec_to_time(time_to_sec(t)*1.5) from t1;
| t        | sec_to_time(time_to_sec(t)*1.5) |
| 00:05:10 | 00:07:45                      |
1 row in set (0.00 sec)

But those functions are not available on SQLite.

sqlite> select t,sec_to_time(time_to_sec(t)*1.5) from t1;
Error: no such function: sec_to_time

Sunday, October 9, 2011

How not to grant permissions

I went to EuroBSDcon in Maarssen, the Netherlands. It was a great conference and I might write another blog about it.

In the conference bag there was a copy of the Dutch Linux Magazine. The magazine is very nice and covers a broad range of topics.

One article about LogicalDOC caught my attention. The LogicalDOC software uses a MySQL database, and the magazine listed some SQL code showing how to create the database user:
create user logicaldoc;
set password FOR logicaldoc@´%´=
grant all privileges on logical-
to logicaldoc@´%´ identified by 
create database logicaldoc;
flush privileges;

These statements won't work as some of the quotes are wrong. But let's ignore that.

After executing the first line the grant is like this:
GRANT USAGE ON *.* TO 'logicaldoc'@'%'

So the user logicaldoc is allowed to connect from ANY host WITHOUT password. And yes that does work.

After the second statement the grant looks like this:
GRANT USAGE ON *.* TO 'logicaldoc'@'%' IDENTIFIED BY PASSWORD '*0CD1AE57344BB752E3A08B733916948E5A4BF96C'

Now the user is protected by a password, that's much better!

After the third statement the grants look like this:
GRANT USAGE ON *.* TO 'logicaldoc'@'%' IDENTIFIED BY PASSWORD '*0CD1AE57344BB752E3A08B733916948E5A4BF96C'
GRANT ALL PRIVILEGES ON `logicaldoc`.* TO 'logicaldoc'@'%'

Any user who managed to get a connection after the first statement will still be connected and will now be granted ALL PRIVILEGES including SUPER.
Update: This will not include the SUPER privilege (Thanks to Shlomi Noach for noticing this.)

And of course the flush privileges statement was not necessary because they didn't directly modify the mysql.* tables.

It should have been like this:
CREATE USER 'logicaldoc'@'localhost' IDENTIFIED BY 'password';
GRANT SELECT, UPDATE, INSERT,… ON `logicaldoc`.* TO 'logicaldoc'@'localhost'; 

This will continue to work if the NO_AUTO_CREATE_USER sql_mode is set.
It assumes that the database connection will use a UNIX socket.
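The sql_mode mentioned above can be set at runtime as well as in my.cnf. A minimal sketch; note that this replaces any modes already set, so in practice merge it with your existing sql_mode:

```sql
-- Prevent GRANT from implicitly creating users without a password.
-- This overwrites the current global sql_mode; append to the existing
-- value instead if you already have other modes configured.
SET GLOBAL sql_mode = 'NO_AUTO_CREATE_USER';
```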

Granting specific rights instead of a simple GRANT ALL is better for security and allows much more functionality to work, like the read_only setting for slaves, reserved connections for emergencies and the init_connect setting.

The statements seem to have been copy-pasted from the LogicalDOC online documentation. There they also suggest supplying the root password on the command line with -ppassword instead of using -p and letting MySQL prompt for the password.

This doesn't mean that LogicalDOC is insecure. It only means that the person writing the documentation probably didn't fully understand MySQL permissions.

The OurSQL podcast episode 59 has much more information about MySQL and security.

Tuesday, July 12, 2011

MyISAM Key Buffer Usage

For MyISAM one of the most important variables is the Key Buffer.  The Key Buffer is sometimes called the Key Cache. It's used as a buffer for the indices of MyISAM tables. There is some overhead in the buffer depending on the configured key block size.

The official way to calculate the key buffer usage as documented in the MySQL Reference manual:
1 - ((Key_blocks_unused * key_cache_block_size) / key_buffer_size)
This will return a factor, so you have to multiply it by 100 to get the percentage. Key_blocks_unused is used instead of the more obvious Key_blocks_used because Key_blocks_used is a high-water mark: it records the maximum number of key blocks ever used and will not return to 0 after a FLUSH TABLES.
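The formula can be evaluated directly in SQL. A sketch, assuming MySQL 5.1 or later where INFORMATION_SCHEMA.GLOBAL_STATUS is available:

```sql
-- Key buffer usage as a percentage, using the official formula.
SELECT (1 - (gs.VARIABLE_VALUE * @@key_cache_block_size)
            / @@key_buffer_size) * 100 AS key_buffer_used_pct
FROM INFORMATION_SCHEMA.GLOBAL_STATUS gs
WHERE gs.VARIABLE_NAME = 'Key_blocks_unused';
```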

This calculation does not take the overhead into account. The key buffer efficiency can be calculated if the key buffer is empty or (has been) completely full.

If the key buffer is full:
key_buffer_coefficient = key_cache_block_size/(key_buffer_size/Key_blocks_used)

If the key buffer is empty:
key_buffer_coefficient = key_cache_block_size/(key_buffer_size/Key_blocks_unused)

The 'fixed' formula:
1 - ((Key_blocks_unused * key_cache_block_size) / (key_buffer_size * key_buffer_coefficient))

The result is shown below:

The overhead in the key buffer will be allocated on startup, so the larger overhead for a larger key buffer will result in more memory usage, even if MyISAM is only used for the internal mysql tables.

Tuesday, July 5, 2011

Working with IP's in MySQL and MariaDB - Part 2

Use DNS directly from your database

mysql> SELECT lookup('localhost');
| lookup('localhost') |
|           |
1 row in set (0.00 sec)

mysql> SELECT reverse_lookup('');
| reverse_lookup('') |
| localhost                   |
1 row in set (0.00 sec)

This is how you install these functions.
  1. Build udf_example.so which is in your mysql source. (make udf_example.so)
  2. Copy the udf_example.so file from your build directory to your plugin_dir.
  3. Create the lookup and reverse_lookup functions
mysql> CREATE FUNCTION lookup RETURNS STRING SONAME 'udf_example.so';
Query OK, 0 rows affected (0.00 sec) 
mysql> CREATE FUNCTION reverse_lookup RETURNS STRING SONAME 'udf_example.so';
Query OK, 0 rows affected (0.00 sec)

I've created a feature request for IPv6 support to these functions.

Monday, July 4, 2011

Working with IP's in MySQL and MariaDB

For MySQL it's a best practice to store IP addresses in an INT column rather than in a VARCHAR. This allows for more compact storage. This best practice is specific to MySQL, as PostgreSQL has dedicated data types like inet and cidr.

MySQL has been equipped with the INET_ATON() and INET_NTOA() functions since version 3.23.15. In MySQL 5.6.3 the INET6_ATON() and INET6_NTOA() functions were added to finally add IPv6 address support. To make it easier to work with IPv4 and IPv6 addresses the IS_IPV4_COMPAT(), IS_IPV4_MAPPED(), IS_IPV4() and IS_IPV6() functions were added. The difference between the new and old functions is that the old functions use INT UNSIGNED to store IP addresses, while the new functions use VARBINARY(16) for IPv6 addresses and VARBINARY(4) for IPv4 addresses.
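A few quick examples of the old and new functions (the addresses here are my own examples; the IPv6 functions assume MySQL 5.6.3 or later):

```sql
SELECT INET_ATON('');               -- 3232235777
SELECT INET_NTOA(3232235777);                  -- ''
SELECT HEX(INET6_ATON('::1'));                 -- 00000000000000000000000000000001
SELECT IS_IPV4(''), IS_IPV6('::1'); -- 1, 1
```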

Here are some examples about how you could do nifty things with IP addresses in MySQL.

Store hosts and networks in MySQL
CREATE TABLE `hosts` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`hostname` varchar(255) DEFAULT NULL,
`domainname` varchar(255) DEFAULT NULL,
`ip` int(10) unsigned NOT NULL,
PRIMARY KEY (`id`),
KEY `ip` (`ip`)
);

CREATE TABLE `networks` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`network` int(10) unsigned NOT NULL,
`mask` int(10) unsigned NOT NULL,
`name` varchar(255) DEFAULT '',
PRIMARY KEY (`id`),
KEY `network` (`network`),
KEY `mask` (`mask`)
);

Insert and retrieve entries
mysql> INSERT INTO `hosts`(`hostname`,`domainname`,`ip`) VALUES('test6','example.com',INET_ATON(''));
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM `hosts` WHERE `id`=LAST_INSERT_ID();
| id | hostname | domainname  | ip         |
|  6 | test6    | example.com | 3232235778 |
1 row in set (0.00 sec)

mysql> SELECT hostname,domainname,INET_NTOA(ip) ip FROM `hosts` WHERE `id`=LAST_INSERT_ID();
| hostname | domainname  | ip          |
| test6    | example.com | |
1 row in set (0.00 sec)

Convert a subnetmask to CIDR notation.

mysql> SET @mask='';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT @mask,mask_to_cidr(@mask);
| @mask         | mask_to_cidr(@mask) |
| |                  22 |
1 row in set (0.00 sec)

And convert a CIDR mask back to dot-decimal notation

mysql> SELECT cidr_to_mask(22);
| cidr_to_mask(22) |
|    |
1 row in set (0.00 sec)

Use the & operator to find the network address using an IP and mask.

mysql> SET @netmask='';
Query OK, 0 rows affected (0.00 sec)

mysql> SET @ipaddr = '';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT CONV(INET_ATON(@ipaddr),10,2) ip,CONV(INET_ATON(@netmask),10,2) mask,CONV(INET_ATON(@ipaddr) & INET_ATON(@netmask),10,2) network\G
*************************** 1. row ***************************
ip: 11000000101010000000001000000110
mask: 11111111111111111111111100000000
network: 11000000101010000000001000000000
1 row in set (0.00 sec)

This can be used to join a table with host IPs with a table of networks. (Remember the netmasks table in Solaris and/or NIS?)
mysql> SELECT CONCAT(h.hostname,'.',h.domainname) fqdn, INET_NTOA(h.ip) ip, INET_NTOA(n.mask & h.ip) network, INET_NTOA(n.mask) mask FROM hosts h INNER JOIN networks n ON (n.mask & h.ip) = n.network;
| fqdn              | ip          | network     | mask          |
| test1.example.com | | | |
| test2.example.com | | | |
| test3.example.com | | | |
| test4.example.com |    |    |     |
| test5.example.com |    |    |     |
5 rows in set (0.00 sec)

With the ~ operator the mask can be inverted and then be used to find the broadcast address using the OR operator |.
mysql> SELECT CONV(INET_ATON(@ipaddr),10,2) ip,CONV(INET_ATON(@netmask),10,2) mask,CONV(INET_ATON(@ipaddr) & INET_ATON(@netmask),10,2) network, CONV(CONV(SUBSTRING(CONV(~INET_ATON(''),10,2),-32),2,10) | INET_ATON(@ipaddr),10,2) broadcast\G
*************************** 1. row ***************************
ip: 11000000101010000000001000000110
mask: 11111111111111111111111100000000
network: 11000000101010000000001000000000
broadcast: 11000000101010000000001011111111
1 row in set (0.00 sec)
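Spelled out in dot-decimal form, the same bit operations give the network and broadcast addresses directly. A sketch assuming the example addresses and; note that ~ yields a 64-bit value in MySQL, hence the & 0xFFFFFFFF:

```sql
SET @ipaddr  = INET_ATON('');
SET @netmask = INET_ATON('');
SELECT INET_NTOA(@ipaddr & @netmask)                 AS network,   -- 
       INET_NTOA(@ipaddr | (~@netmask & 0xFFFFFFFF)) AS broadcast; -- 
```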

You could use a view to make it easier for users to select the rows they want without using functions. But that will result in a full scan if you search using an IP: MySQL does not optimize this by first converting the search value to an INT, but instead converts all rows to dot-decimal notation and then compares them. MariaDB has a nifty feature called persistent virtual columns, which can be indexed. For the example below this means that instead of scanning all four rows it just uses the index to find the row we need.

mysql> CREATE TABLE `iptest`(`ip_n` int(11) UNSIGNED NOT NULL, `ip_a` CHAR(15) AS (INET_NTOA(`ip_n`)) PERSISTENT, PRIMARY KEY(`ip_n`), KEY `ip_a` (`ip_a`));
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO `iptest`(`ip_n`) VALUES (INET_ATON('')),(INET_ATON('')),(INET_ATON('')),(INET_ATON(''));
Query OK, 4 rows affected (0.00 sec)
Records: 4  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM `iptest`;
| ip_n       | ip_a        |
| 3232235777 | |
| 3232235778 | |
| 3232235779 | |
| 3232235780 | |
4 rows in set (0.00 sec)

mysql> EXPLAIN SELECT * FROM `iptest` WHERE `ip_a`='';
| id | select_type | table  | type | possible_keys | key  | key_len | ref   | rows | Extra       |
|  1 | SIMPLE      | iptest | ref  | ip_a          | ip_a | 16      | const |    1 | Using where |
1 row in set (0.00 sec)

mysql> CREATE VIEW v_iptest AS SELECT ip_n,INET_NTOA(ip_n) ip_a FROM `iptest`;
Query OK, 0 rows affected (0.00 sec)

mysql> EXPLAIN SELECT * FROM `v_iptest` WHERE `ip_a`='';
| id | select_type | table  | type  | possible_keys | key     | key_len | ref  | rows | Extra                    |
|  1 | SIMPLE      | iptest | index | NULL          | PRIMARY | 4       | NULL |    4 | Using where; Using index |
1 row in set (0.00 sec)

Wednesday, June 22, 2011

MySQL privileges and replication

This is a response on MySQL security: inconsistencies and Less known facts about MySQL user grants.

As far as I know the privilege to grant PROXY privileges is also not very well understood. I blogged about that some time ago.

In addition to the already highlighted issues with GRANT, replication and grants can very well create an unwanted situation:

master> SHOW GRANTS FOR 'user'@'host'\G
*************************** 1. row ***************************
Grants for user@host: GRANT USAGE ON *.* TO 'user'@'host' IDENTIFIED BY PASSWORD '*4994A78AFED55B0F529C11C436F85458C1F8D4C2'
*************************** 2. row ***************************
Grants for user@host: GRANT SELECT, INSERT, UPDATE, DELETE ON `somedb`.* TO 'user'@'host'
2 rows in set (0.00 sec)

master> GRANT SELECT,INSERT,UPDATE,DELETE ON anotherdb.* TO 'user'@'host';
Query OK, 0 rows affected (0.00 sec)

master> SHOW GRANTS FOR 'user'@'host'\G
*************************** 1. row ***************************
Grants for user@host: GRANT USAGE ON *.* TO 'user'@'host' IDENTIFIED BY PASSWORD '*4994A78AFED55B0F529C11C436F85458C1F8D4C2'
*************************** 2. row ***************************
Grants for user@host: GRANT SELECT, INSERT, UPDATE, DELETE ON `somedb`.* TO 'user'@'host'
*************************** 3. row ***************************
Grants for user@host: GRANT SELECT, INSERT, UPDATE, DELETE ON `anotherdb`.* TO 'user'@'host'
3 rows in set (0.00 sec)

And on the slave:
slave> SHOW GRANTS FOR 'user'@'host'\G
*************************** 1. row ***************************
Grants for user@host: GRANT USAGE ON *.* TO 'user'@'host'
*************************** 2. row ***************************
Grants for user@host: GRANT SELECT, INSERT, UPDATE, DELETE ON `anotherdb`.* TO 'user'@'host'
2 rows in set (0.00 sec)

This could have happened if the user was dropped on the slave to prevent it from inserting on a read-only slave. The cure for this issue is setting sql_mode to NO_AUTO_CREATE_USER.

Another issue is mysql-proxy. MySQL proxy can be used for simple read/write splitting, failover and a lot more. But it could also make your database less secure:
  • root is limited to localhost to prevent remote logins
  • mysql proxy is installed on the database machine
  • if you connect from a remote machine to mysql-proxy this will be proxied to mysql and the host will be localhost.
And the questions for the readers are:
  • Do you allow access on your read-only slaves?
  • Do you replicate mysql.* or not?

Sunday, June 19, 2011

Regularly flushing the MySQL Query Cache without cron

This is a reply on Regularly flushing the MySQL Query Cache.

The original article is about regularly flushing the MySQL Query Cache as it will fragment over time.

There are some drawbacks for the cron method for flushing the query cache:
  • It will only work on UNIX like platforms as MS Windows uses the task scheduler to schedule tasks.
  • It needs credentials to login to the database.
  • It's not included in your database backup.
There is another method, which is native to MySQL: the event scheduler.

Step 1: Enable the event scheduler:
mysql> SET GLOBAL event_scheduler=ON;
Query OK, 0 rows affected (0.00 sec)

And don't forget to set/change this in your my.cnf or my.ini

Step 2: Create the event:
Query OK, 0 rows affected (0.00 sec)
*************************** 1. row ***************************
Db: test
Name: flush_query_cache
Definer: msandbox@localhost
Time zone: SYSTEM
Execute at: NULL
Interval value: 1
Interval field: HOUR
Starts: 2011-06-19 12:57:46
Ends: NULL
Originator: 0
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)
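An event matching the SHOW EVENTS output above (same name, 1-hour interval) can be created with a statement like this; a minimal sketch:

```sql
-- FLUSH QUERY CACHE defragments the query cache without removing entries.
CREATE EVENT flush_query_cache
    ON SCHEDULE EVERY 1 HOUR
    DO FLUSH QUERY CACHE;
```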

Please keep in mind that the query cache doesn't always give you a performance benefit due to mutex contention. See also the query cache tuner from Domas Mituzas.

You should create a stored procedure for multi-statement and/or complex logic and call the procedure from your event, instead of putting the statements directly in the event.

Wednesday, June 15, 2011

OSS-DB Database certification

What will be first? The new and updated MySQL certification or the new OSS-DB exam which is announced by LPI in Japan?

The OSS-DB is only for PostgreSQL for now, but will cover more opensource databases in the future.

There seem to be two levels:

  • Silver: engineers who can manage and improve large-scale databases

  • Gold: engineers who can design, develop, implement and operate databases

The Google Translate version can be found here. I found this info on Tatsuo Ishii's blog.

Tuesday, June 14, 2011

RE: A bit on SANs and system dependencies

This is a reply on A bit on SANs and system dependencies by Eric Bergen.

Let's first make a distinction between entry-level, midrange and high-end SANs.

Entry level:
This is basically a bunch of disks with a network connection. The Oracle/Sun/StorageTek 2540 is an example of this category. This storage is aimed at low-cost shared storage.

Midrange:
This kind of storage generally features replication and more disks than entry level. HP EVA is what comes to mind for this category. This storage is aimed at a good price/performance ratio.

High-end:
This is mainframe-style storage which is made usable for open systems. Hitachi Data Systems VSP is a good example of this category. This category is for extreme performance and reliability.

Please make sure not to (mis)judge high-end SANs by the experiences you had with entry-level storage.

Why should we use SAN/NAS storage?
  • SANs can offer more reliable storage than local storage.
  • SANs offer all kinds of extra services, like replication for disaster recovery.
  • SANs can be used to make machines easily replaceable (diskless, boot from SAN).
  • SANs offer shared storage which can be used for cluster setups.
  • SANs offer server-less backups and snapshots.
  • SANs offer storage pooling.
  • Most SANs offer online upgrades of firmware.
  • SANs offer online adding and replacement of disks.
Why shouldn't we use SAN/NAS storage?

There are many issues which can arise with shared storage. I've encountered issues with all of the above categories.
  • Multipath setups which fail due to human error or due to firmware issues.
  • Storage becomes slow because one machine is hammering the SAN. This bottleneck can arise on the network, on the storage controller or on the disks.
  • FC controllers are expensive, and so are iSCSI controllers.
  • Using Ethernet might require a separate network or an upgrade of your current network.
  • Using SAN/NAS will require a storage administrator in many cases.
Most MySQL setups can offer the same features as a SAN:
  • Use MySQL (semisync) replication for disaster recovery instead of replication on the SAN level. This offers more efficient replication, as statement-based (or mixed) replication can be used.
  • Use DRBD for replication of disks in cluster setups, or use NDB Cluster.
  • MySQL Enterprise Backup and XtraBackup can provide you with a low-impact backup which can be used instead of a server-less backup solution. It can even be a no-impact backup when done on a slave. Using server-free backups with MySQL is not always possible.
  • Using LVM will allow for flexible storage assignment and snapshots, but most filesystems only support growing and do not offer shrinking. Think of the ever-growing ibdata file you get when not using innodb_file_per_table.

A typical NAS storage array like a NetApp Filer can be accessed using FC nowadays. And most SAN storage vendors do offer NAS heads. So this is not as big of a difference as it once was. SAN is block level storage which uses FC or iSCSI and NAS is file level storage using NFS or CIFS.
MySQL can be used on FC, NFS, iSCSI or any other type of storage. But when sync_binlog=1 and innodb_flush_log_at_trx_commit=1 are used, MySQL will do a lot of fsyncs. This will slow down the storage a lot. A local battery-backed RAID array will be much quicker as the network roundtrip is eliminated. (Beware of RAID auto-learning!)

Some virtualization software like VMware and VirtualBox offers virtualized disks which can be located on a SAN. Make sure your virtualization layer is configured correctly to do a proper fsync.

There is enough software in the MySQL ecosystem which can bring you SAN-like features with cheap, fast local storage. However, if MySQL is not the only software which should be replicated, it might be a good choice to use a more generic way of managing your storage, which could include a SAN. Using a SAN might allow you to do things like upgrading the firmware of your disks and controllers without doing a failover on your master.

Do you use SAN/NAS for MySQL? Why? Why not? Do you use SAN features like replication and/or snapshots?

Monday, May 16, 2011

MySQL Certification

According to the Oracle website the following certifications are available for MySQL:

Two new versions have been released since MySQL 5.0, and MySQL 5.0 itself came out in 2005, more than 5 years ago. And "MySQL 5.1 Cluster Database"? According to Wikipedia the NDB release in the 5.1 source tree is old and not maintained, while Cluster 7.1 has many new features like multithreaded data nodes and disk-based data.

So Oracle, please update the certifications.

    Friday, April 29, 2011

    My MySQL wishlist (revised, again)

     Just like I did in 2007 and 2009, this is my updated wishlist for MySQL.

    My 2007 List:
    Per user and/or per database quota
    I guess that this will be implemented together with catalog support.
    External Authentication 
    Got it in 5.5! Thanks a lot! And the new MySQL Cluster even has support for sharing user credentials.
    Database Locator
    There is still no TNSnames like support.

    Saving extra metadata about the database.
    Using the comment field of tables for things like svn release, customer number and more still just feels wrong. And a database still can't have a comment... Using I_S is possible, but it's not supported to create FK's for that, so consistency is not guaranteed.

    Better protection against run-away queries
    With mk-kill this is now easy.

    Restore manager
    I still have to use things like awk for this...

    My 2009 List:
    SNMP for statistics
    no changes
    SNMP for alerting
    MySQL Enterprise Manager is quite good at this.

    There is much more possible with the new plugin features in 5.5

    And now the new entries:

    Sequences
    This could make single-row UUID tables redundant
    Check constraints
    See this blog entry from endpoint.com for the reason why.
    Multiple triggers with the same action time and event

    Host cache info
    As explained in Bug #59404

    Crash-safe UDFs
    A badly written UDF can crash the whole server. This will make the use of UDF's a bit too risky for some.

    What's on your wishlist? A scalable query cache? JSON features? Multithreaded slave? Custom datatypes? IPv6?

      Tuesday, April 19, 2011

      Explaining what the default PROXY privilege for root does

      In a default MySQL 5.5.8 installation there is one PROXY privilege:

      GRANT PROXY ON ''@'' TO 'root'@'localhost' WITH GRANT OPTION

      What this does is:
      If USER() and CURRENT_USER() don't match, root is still allowed to grant the proxy privilege.

      So if you connect as someuser@localhost using LDAP and LDAP tells the server you're root, then you're still allowed to grant proxy privileges. This will only work if your user has the privilege to proxy to root.

      The documentation for PROXY is here.

      Tuesday, April 12, 2011

      MySQL Enterprise Monitor 2.3.2

      It's the second point release of MySQL Enterprise Monitor 2.3. And there are some nice new features and bugfixes.

      It's now possible to enable or disable event blackout for specific servers easily.

      It should now graph InnoDB with multiple buffer pools correctly. It won't work for 5.5.8 however, see Bug #60777 and Bug #60831 for more info on that.

      Read the complete changelogs for all the details.

      The Advisors/Rules were also updated. But there is still room for improvement:
      • Bug #60828 Add rule to detect duplicate foreign keys
      • Bug #60697 False Positives for Root Account Can Login Remotely
      • Bug #60695 False positives for tables w/o keys for 5.5 with perf schema
      • Bug #60677 "User Has Rights To Database That Does Not Exist" gives false positives
      • Bug #60676 Add rule to monitor if the timezone info is loaded
      • Bug #60587 Advice for Root Account Without Password is not correct
      • Bug #60586 key buffer size check gives false positives
      See also my previous blog posts:
      MySQL Enterprise Monitor is very easy to install, update and configure. Extending it is quite possible using custom SQL queries and Lua scripting, but not really easy. Adding more graphs is also quite hard.

      The graphs look way better than many other tools available. The size is easily configurable so that the graph is as wide as your screen.

      The combination with the Query Analyzer is nice, but I'm not really using it very often as it requires a MySQL Proxy instance to capture the query info. They're fixing that by integrating query capture in Connector/J, Connector/NET and the PHP connector (the latter is only a beta). There are also other ways to capture queries in MySQL, such as the sqlstats plugin, so there are lots of possibilities for the Query Analyzer to become much more useful.

      The downside of MySQL Enterprise Monitor is that the tool itself is not free or opensource. It does use an opensource environment: Tomcat, Lua and many opensource Java classes.

      Reporting bugs and feature requests works really well, and lots and lots of issues which I encountered were fixed.

      MySQL Enterprise Monitor is one of the top reasons to choose the MySQL Enterprise Subscription.

      And some questions for the readers of this post:
      • What are you using for monitoring?
      • Are you using MEM? Are you satisfied with it? Why? Why not?
      • If you are using Nagios/Cacti: Which checks/templates do you use?

      Friday, April 8, 2011

      MySQL and Packaging

      The MySQL Server from Oracle comes in two different flavours: Community Edition and Enterprise Edition. The first is under the GPLv2 license and the latter is under the GPLv2 or a commercial license.

      The Enterprise Edition was always available from https://enterprise.mysql.com (which now has an expired SSL certificate) under the GPLv2 license. This download page was restricted to paying customers. Since the Enterprise downloads were moved to https://edelivery.oracle.com the downloads are available for everyone (as long as you're not restricted by export regulations and you accept the trial license agreement). The license is now 'Commercial'. The download will be named V24071-01.zip or something like that, which is annoying. The latest version for the Enterprise release on edelivery is 5.5.8 while the latest Community version is 5.5.11. Previously there were two Enterprise releases: Advanced (with partitioning) and Professional (without partitioning). If you use https://support.oracle.com you can search for patch 12332498, which claims to contain MySQL Advanced Server 5.5.11. That download is named 'p12332498_55_Linux-x86-64.zip', which is a bit better than the filenames which edelivery generates.

      Oracle does provide RPM's and generic tarballs, but no Debian packages. And Debian/Ubuntu is used a lot, even in bigger companies. There is no yum repository available for MySQL Enterprise, so there are no auto-updates and easy installs. There is an official and public yum repository available, but that's only for Oracle Enterprise Linux and Oracle VM. And their Solaris repository also doesn't have the latest MySQL versions (only 5.0 and 5.1).

      The MySQL Enterprise Subscription also contains MySQL Enterprise Monitor, MySQL Enterprise Backup and MySQL Workbench. There were RPM's available for the MySQL Enterprise Agent, but it turns out that was a mistake. The official way is to use the installer (yes, unattended installation is possible). But there is no update via yum/apt or any other method; just run the agent update installer on every machine.

      For the Community version there is also no YUM or APT repository, and that might be one of the reasons why everyone sticks to the old versions which are shipped with their Linux distribution of choice. It would be much easier to install packages with a hard dependency on MySQL together with the latest official MySQL if there were deb packages available. For more info about when not to use yum/apt read Shlomi Noach's blog post.

      There are deb's for MySQL Workbench available and there are also RPM's, but unfortunately the dependencies are not complete for RHEL6.

      Percona is a good example of how it should be done. Their repositories are accessible via APT and YUM. There are no repositories for Solaris and Windows yet, but I don't know if anyone is really interested in those.

      How do most folks keep their MySQL servers updated? How do you monitor whether any security-related updates are available? Do you use an in-house YUM/APT/WSUS repository?

      Monday, April 4, 2011

      How the MySQL Monitor Agent broke Sendmail

      For MySQL Enterprise Monitor an agent is required. I've downloaded
      V23981-01.zip from https://edelivery.oracle.com which contains mysqlmonitoragent-

      When I was testing the email alerting for my backup script I got an error:
      /etc/mail/submit.cf: line 544: fileclass: cannot open '/etc/mail/trusted-users': Group
      writable directory

      Luckily RPM was able to tell me that mysqlmonitoragent had set / to the wrong permissions.
      # rpm -qf /
      # rpm -qvl mysqlmonitoragent- | head -1
      drwxrwxr-x    2 root    root                0 Nov 25 01:51 /
      # rpm -qvl filesystem-2.4.0-3.el5 | head -1
      drwxr-xr-x    2 root    root                0 Oct  1  2009 /
      # echo test | mail -s test user@example.com
      # /etc/mail/submit.cf: line 544: fileclass: cannot open '/etc/mail/trusted-users': Group
      writable directory

      I've filed Bug #60752 for this. So watch out for this bug if you're using mysqlmonitoragent RPM's!

      The work-around is to do a "chmod 755 /".
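The check that sendmail performs can be reproduced with find's permission test. The scratch directory below is made up for the demonstration; on a broken system you would run the same test against / itself:

```shell
# Reproduce sendmail's complaint: a group-writable directory gets flagged.
# /tmp/permdemo is a scratch directory used only for this demonstration.
mkdir -p /tmp/permdemo
chmod 775 /tmp/permdemo
find /tmp/permdemo -maxdepth 0 -perm -g+w   # prints the path: group-writable
chmod 755 /tmp/permdemo
find /tmp/permdemo -maxdepth 0 -perm -g+w   # prints nothing after the fix
```

Running `find / -maxdepth 0 -perm -g+w` is a quick way to verify whether your root directory is affected.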

      Monday, March 28, 2011

      MySQL Timezone support: Updating and Monitoring

      How old is your timezone info?
As the timezone database gets updated frequently, it's important to update the internal timezone database in MySQL as well. This is needed for the CONVERT_TZ() function to work properly.

It's not easy to determine whether the data in MySQL is older than the data from the tzdata package. It gets a bit better if you use a little hack before importing the zone database.
      I've created Bug #60675 to get the system database updated.
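For reference, importing the zone database is done with the mysql_tzinfo_to_sql utility, as described in the MySQL manual. This assumes a running server and the usual Linux zoneinfo location; adjust the path for your system:

```shell
# Convert the system zoneinfo database to SQL and load it into the mysql schema
mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql
```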

Which database servers still need timezone info?
      Use MySQL Enterprise Monitor with custom rules.

      1. Add data query
Open /opt/mysql/enterprise/agent/share/mysql-monitor-agent/items/custom.xml in an editor and add this query (note the CDATA section):
<query><![CDATA[SELECT CONVERT_TZ(NOW(),'GMT','MET') IS NOT NULL]]></query>

      2. Restart the mysql-monitor-agent

      3. Create a custom rule in the MySQL Enterprise Dashboard (Advisor→Manage Rules→create rule):

      Rule Name: Timezone data is not loaded.
      Expression: %tzdata_avail% == THRESHOLD
      Critical: 0
      Variable Assignment
      Variable: %tzdata_avail%
      Data Item: mysql:tzdata_available:tzdata_available
      Instance: local
      Default Frequency: 30 minutes

      Problem Description: "There is no timezone info loaded in mysql. This causes the CONVERT_TZ() function to return NULL instead of the requested conversion."

      Advice: "None Specified"

      Recommended Action: "Use the mysql_tzinfo_to_sql utility to convert the zoneinfo database to SQL and load the converted data into the mysql database."
      Links and Further Reading:
      * {moreInfo:mysql_tzinfo_to_sql — Load the Time Zone Tables|http://dev.mysql.com/doc/refman/5.5/en/mysql-tzinfo-to-sql.html}
      * {moreInfo:MySQL Server Time Zone Support|http://dev.mysql.com/doc/refman/5.5/en/time-zone-support.html}

      4. Add the rule to a schedule

      SQL injection on MySQL.com

      For those who didn't know it already: MySQL.com was "hacked" last weekend. There also was a report about SQL injection on 2 Sun.com domains.

      I haven't seen an announcement from Oracle yet.

      More info on:

      Thursday, March 10, 2011

      Restore a full mysqldump file w/o the mysql database

So you've got a database dump of a few gigabytes and you want to restore all databases except one. Using vi won't work, as the file is too large to be edited comfortably. But luckily an awk one-liner does the job.

      gawk '/^CREATE DATABASE.*/ { db=substr($7,2); db=substr(db,1,length(db)-1) } /^/ {if (db !=
      "mysql") { print }}' original_dump.sql > original_dump_wo_mysql.sql

The same awk code, but now formatted for readability:

/^CREATE DATABASE.*/ {
    db = substr($7, 2)
    db = substr(db, 1, length(db) - 1)
}
/^/ {
    if (db != "mysql") {
        print
    }
}

Of course you could pipe it directly into mysql, saving some space on the filesystem. And you could also do the same with perl/python.

      Using SSL with MySQL

      Since MySQL 4.0 it's possible to use SSL to secure the connections to MySQL. Unfortunately this is not used very often.

It can be used for client-server connections and for replication. It will encrypt your traffic and prevent man-in-the-middle attacks if you use ssl-verify-server-cert when connecting with the mysql client.

Securing database communication may also be required to be compliant with some regulations.

      There are other means of preventing anyone from snooping your database traffic like VPN, SSH Tunneling or stunnel.

As usual the documentation is quite good. The only issue with the docs is that the verification step is missing (Bug #59628).

It's quite hard to debug, as the OpenSSL messages are not reported in MySQL's errors. But luckily that's changing, as Bug #21287 has a patch under SCA/OCA.

And you should check your my.cnf for typos when it's not working, as mysqld won't tell you when it can't find the ssl-ca file (Bug #59630).
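For reference, the server-side settings live in the [mysqld] section of my.cnf. The option names are the documented ones; the paths are examples and must point at your own CA and server certificate/key:

```ini
# my.cnf -- server-side SSL settings (paths are examples)
# A typo in these paths fails silently, see Bug #59630
[mysqld]
ssl-ca=/etc/mysql/ssl/ca-cert.pem
ssl-cert=/etc/mysql/ssl/server-cert.pem
ssl-key=/etc/mysql/ssl/server-key.pem
```

After a restart, SHOW VARIABLES LIKE 'have_ssl'; should report YES if the certificates were loaded.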

      I've used tcpdump to capture network traffic when SSL didn't work and loaded it into wireshark. Wireshark has very neat features to decode all kinds of traffic (HTTP, MySQL, SSL and even FC).

      Also a debug build can provide you with some more info.

      MySQL uses the same port for SSL and non-SSL connections. So you don't have to change your firewall rules. This also means that you can't force SSL on your firewall.

There are a few SSL status variables available in the output of SHOW GLOBAL STATUS LIKE 'Ssl_%';. The documentation is here. The only issue is that it doesn't actually work (Bug #59635). This is bad, as statistics about renegotiation could indicate renegotiation vulnerabilities.

To force SSL you should use REQUIRE SSL or REQUIRE X509 in your CREATE USER or GRANT statements. Optionally you could use REQUIRE SUBJECT, but that only works if the RDN order is what the server expects (Bug #59376).
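A minimal example of such a grant, with a made-up account name and password:

```sql
-- The account may only connect over SSL after this grant
GRANT USAGE ON *.* TO 'appuser'@'%' IDENTIFIED BY 'secret' REQUIRE SSL;
-- Stricter: also demand a valid client certificate
GRANT USAGE ON *.* TO 'appuser'@'%' REQUIRE X509;
```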

There are also some known issues with mixing OpenSSL and YaSSL you should be aware of.

      Using SSL is very well possible, but there is room for improvement.

I haven't tested the performance impact of using SSL. It would also be interesting to see if the hardware SSL in the Sun T2 CPU would speed this up (Sun T2 PDF). So Percona/Oracle: start your benchmarks!

As far as I know, the MySQL branches/forks like Percona Server and MariaDB use almost the same code/features for SSL as Oracle.

      Friday, March 4, 2011

      MySQL Workbench on RHEL6

      Yesterday I blogged about some issues with MySQL Workbench. The problem was that CentOS 5.5 is not supported.

      So I installed RHEL6, one of the best known Enterprise Linux distributions. It's a 30 day evaluation, but that should be sufficient.

      After some minor issues with VirtualBox I had a working installation. So I tried to install MySQL Workbench again. I used 'sudo yum localinstall mysql-workbench-gpl-5.2.31a-2el6.x86_64.rpm' which failed because of 2 unmet dependencies: pexpect and libzip.so.1

      There is already a bug report in RHEL for the missing pexpect: #652659
      I used the RHEL6 installation DVD to install pexpect.

      There is a closed/wont-fix bug (#54403) for the libzip.so.1 dependency. It turns out that libzip is in the "RHEL Server Optional" channel. RHN should be used to enable more channels, but apparently the optional channel is not part of the evaluation license, so I used the web interface to download and install the libzip RPM.

The Workbench installation finally completed after complaining that the Workbench RPM is not signed (use --nogpgcheck).

So I started MySQL Workbench (there is a launcher in the menu). Then I clicked the "MySQL Doc Library" and Workbench responded with an error about a missing package: pysqlite2/python-sqlite2. I could find a python-sqlite for RHEL5 on RHN... but nothing for RHEL6. So I filed Bug #60336.

      Thursday, March 3, 2011

      MySQL Workbench on CentOS

Every once in a while I use MySQL Workbench. First it was Windows-only, then it was unstable, and now it is very usable. Or at least that is the situation when you run it on a recent Ubuntu machine.

I'm now using CentOS 5.5 in a VirtualBox VM to do some testing. Unfortunately the download site for MySQL Workbench has RPMs for RedHat 6, Fedora Core 14 and some other platforms. No CentOS, no RHEL5; only the latest Fedora and RHEL.

      So I downloaded the RHEL6 RPMs and tried to install them with yum localinstall, but that failed. So they are not backwards compatible with 5.5.

      Turns out that CentOS/RHEL 5 doesn't have the required libraries to support MySQL Workbench.

And unfortunately CentOS 5.5 is still the latest release, so an upgrade to CentOS 6 is not yet possible.