
Sunday, September 25, 2016

Common Table Expressions in MySQL

In a recent labs release Oracle introduced a new feature, or actually two closely related new features. The first is Common Table Expressions (CTEs), also known as WITH. The second is recursive CTEs, also known as WITH RECURSIVE.

An example of WITH:

WITH non_root_users AS (SELECT User, Host FROM mysql.user WHERE User<>'root')
SELECT Host FROM non_root_users WHERE User = ?

The non-CTE equivalent is this:

SELECT Host FROM 
    (SELECT User, Host FROM mysql.user WHERE User<>'root') non_root_users
WHERE User = ?

This makes it easier to understand the query, especially if there are many subqueries.

Besides using regular subqueries or CTEs you could also put the subquery in a view, but that requires more privileges. It is also difficult to change views later on, as other queries might have started to use them.

But views are still very useful. You can make it easier for others to query data or you can use views to restrict access to certain rows.

So CTEs are basically views which are bound to a query. This makes it easier to write complex queries in a way that they are easy to understand. So don't expect CTEs to replace views.

In the PostgreSQL world CTEs have existed since version 8.4 (2009) and they are used a lot.

There are some cool things PostgreSQL allows you to do with CTEs and MySQL doesn't:

test=# create table t1 (id serial, name varchar(100));
CREATE TABLE
test=# insert into t1(name) values ('foo'),('bar');
INSERT 0 2
test=# with deleted_names as (delete from t1 where id = 2 returning name)
test-# select name from deleted_names;
 name 
------
 bar
(1 row)

The blog post has more details and examples about recursive CTEs, the second new feature.

One of the examples is generating a range of numbers.

If you're familiar with PostgreSQL that will remind you of the generate_series function. This function can be used to generate a series of integers or timestamps. So I tried to write a stored function which, together with the recursive CTE support, would emulate generate_series in MySQL, but no such luck, as you can't return a table from a stored function yet.
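For reference, this is the behaviour generate_series provides, sketched as a Python generator (my own sketch, not MySQL or PostgreSQL code):

```python
def generate_series(start, stop, step=1):
    """Emulate PostgreSQL's generate_series() for integers (stop is inclusive)."""
    value = start
    while (step > 0 and value <= stop) or (step < 0 and value >= stop):
        yield value
        value += step

print(list(generate_series(1, 5)))      # [1, 2, 3, 4, 5]
print(list(generate_series(5, 1, -2)))  # [5, 3, 1]
```

A recursive CTE that generates a range of numbers does essentially the same thing: a seed row plus a recursive step that increments until a bound is reached.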

In the PostgreSQL world CTEs are also used to trick the optimizer but note that this depends on the specific CTE implementation, so don't assume this trick will work in MySQL.

MariaDB has some support for the RETURNING keyword and in MariaDB 10.2 (not yet released) there is CTE support. Support for recursive CTEs is not yet present, see MDEV-9864 for the progress.

If you want to see the progress of MySQL and MariaDB on other modern SQL features check out this page.

Monday, July 20, 2015

Inserting large rows in MySQL and MariaDB

As the maximum storage size for a LONGBLOB in MySQL is 4GB and the maximum max_allowed_packet size is 1GB I was wondering how it is possible to use the full LONGBLOB.

So I started testing this. I wrote a Python script with MySQL Connector/Python and used MySQL Sandbox to bring up an instance of MySQL 5.6.25.

One of the first settings I had to change was max_allowed_packet, which was expected. I set it to 1GB.

The next setting was less expected: innodb_log_file_size. The server enforces that a transaction has to fit in 10% of the combined InnoDB log files. So I had to use 2 log files of 5GB each to be able to insert one record of (almost) 1GB.
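That sizing rule can be written down as a quick back-of-the-envelope helper (my own sketch, not anything shipped with MySQL):

```python
def min_innodb_log_file_size(txn_bytes, n_log_files=2):
    """The server enforces that a transaction fits in 10% of the combined
    redo log, so the total log must be at least 10x the largest transaction;
    divide by the number of log files to size each file."""
    total_needed = txn_bytes * 10
    return total_needed // n_log_files

gib = 2 ** 30
print(min_innodb_log_file_size(1 * gib) // gib)  # 5 -> two 5 GiB log files
```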

So that worked for a row of a bit less than 1GB; there is some overhead in the packet and the total has to fit in 1GB.

For the next step (>1GB) I switched from Python to C so I could use mysql_stmt_send_long_data() which allows you to upload data in multiple chunks.
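The idea behind mysql_stmt_send_long_data() is simply to feed a parameter value to the server in pieces; the chunking itself looks like this (iter_chunks is a hypothetical helper for illustration, not part of any connector):

```python
def iter_chunks(data, chunk_size):
    """Yield successive slices of data, the way a client would feed
    mysql_stmt_send_long_data() one piece at a time."""
    for offset in range(0, len(data), chunk_size):
        yield data[offset:offset + chunk_size]

blob = b'x' * 10
print([len(c) for c in iter_chunks(blob, 4)])  # [4, 4, 2]
```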

I expected that to work, but it didn't. This is because in MySQL 5.6 and up max_long_data_size was replaced by max_allowed_packet. But max_allowed_packet can only be set to a maximum of 1GB, while max_long_data_size can be set to 4GB.

So I switched from MySQL 5.6 to MariaDB 10.1, because MariaDB still has max_long_data_size. That worked, now I could upload rows of up to (almost) 4GB.

I also noticed that InnoDB complains in the error log about large tuples:
InnoDB: Warning: tuple size very big: 1100000025

So you can insert CD ISO images in your database. For small DVD images this could work if your connector uses the COM_STMT_SEND_LONG_DATA command.

But it's best to avoid this and keep the size of rows smaller.

The scripts I used for my tests (and references to the bugs I found):
https://github.com/dveeden/mysql_supersize

Sunday, November 24, 2013

The importance of multi source replication

One of the latest labs releases of Oracle MySQL brings multi source replication. This lifts the limitation found in earlier releases that a MySQL slave can only have one master.

To be fair, there were already other ways of doing this.

There are many good uses for multi source replication. You could use it to combine data from multiple shards or applications.

If MySQL is used with a load balancer, the easiest setup to build is a 2-way multi-master. This makes it possible to use the InnoDB storage engine. Using MySQL Cluster is another alternative, but MySQL Cluster uses the NDB storage engine, which might not be a supported option for your application. A MySQL Cluster setup also needs at least 4 machines to be fully redundant, while MySQL multi-master only needs two.

There is little intelligence required in the load balancer. It should write to one server and read from both servers. If the first server is unavailable then it should write to the second one. The requirement to write to only one server has to do with the fact that replication is not fully synchronous (MySQL Cluster is synchronous, and there it is supported to write to all nodes). While this might seem like a disadvantage, it can actually be helpful when doing online schema changes.

One of the drawbacks of multi-master is that a setup with more than 2 nodes can be a nightmare to maintain and will probably not help you get more performance or reliability.

Another drawback is that it is not easy to add a disaster recovery setup to a multi-master setup. If you have 2 locations with 2 servers each and create a multi-master setup from each pair, then it's not possible to have one pair slave from the other pair, as each server already has a master. You could create a multi-master pair in one location and then have a slave with its own slave in the other location, but then you'll have to change that setup during or after a site failover to get the same setup as you had in the primary location.

With multi source replication you can now create a multi-master setup which is a slave of another multi-master setup.

I did do some basic tests with the labs release for multi source replication and it looks great.

The "FOR CHANNEL='chan1'" syntax works quite well, although I would have gone for "FOR CHANNEL 'chan1'" (without the equal sign). It would have been nice if MariaDB and MySQL had used the same syntax, but unfortunately this isn't the case. (MariaDB uses "CHANGE MASTER 'master1' TO...")

For multi source replication to work you have to set both master_info_repository and relay_log_info_repository to TABLE. I only had one of these set, and the error message was not really clear about which setting was wrong.
2013-11-23T14:37:52.972108Z 1 [ERROR] Slave: Cannot create new master info structure when  repositories are of type FILE. Convert slave  repositories  to TABLE to replicate from Multiple sources.
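Assuming default settings everywhere else, the relevant my.cnf fragment would look like this:

```ini
# Both repositories must be TABLE for multi source replication
[mysqld]
master_info_repository    = TABLE
relay_log_info_repository = TABLE
```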

I built a three-node multi-master with multi source replication. Each server was a slave of the other two servers. The advantage of this setup over a regular 3-node circular setup is that you don't need to enable log-slave-updates, which can save quite some I/O on the binlog files. Also, if one node breaks, the remaining two nodes will still receive updates from each other instead of only one node receiving all updates.

If you use sharding you can use multi source replication to combine two (or more) shards into one. This is similar to how I like to do major version upgrades: first make the new setup a slave of the old setup. This gives you a window for testing the new setup.

So multi source replication is very useful in many setups.

Saturday, November 9, 2013

MariaDB's RETURNING feature.

There is a new feature in the MariaDB 10 Beta which caught my eye: support for returning a result set on delete.

With a 'regular' DELETE operation you only get to know the number of affected rows. To get more info or actions you have to use a trigger or a foreign key. Another possibility is doing a SELECT and then a DELETE; with the correct transaction isolation and transactional support this will work.
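The SELECT-then-DELETE pattern can be sketched like this; I'm using SQLite from the Python standard library purely to illustrate the pattern, not MariaDB:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t1(name) VALUES (?)", [("foo",), ("bar",)])

# SELECT first, then DELETE, inside one transaction so no other
# session can change the rows in between
with conn:
    deleted = conn.execute("SELECT id, name FROM t1 WHERE id = 2").fetchall()
    conn.execute("DELETE FROM t1 WHERE id = 2")

print(deleted)  # [(2, 'bar')]
```

With RETURNING, the two statements and the extra roundtrip collapse into a single DELETE.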

With support for the RETURNING keyword this has become easier to do, and it will probably benefit performance and save you a few roundtrips and a few lines of code.

There is already support for RETURNING in PostgreSQL. And PostgreSQL has another nifty feature for which RETURNING really helps: CTEs, or common table expressions, also known as the WITH keyword. I really hope to see CTE support in MySQL or MariaDB some day.

An example of RETURNING and a CTE in PostgreSQL:
demo=# select * from t1;
 id | name  
----+-------
  1 | test1
  2 | test2
  3 | test3
  4 | test1
  5 | test2
  6 | test3
(6 rows)

demo=# WITH del_res AS (DELETE FROM t1 RETURNING id) 
demo-# SELECT CONCAT('Removed ID ',id) info FROM del_res;
     info     
--------------
 Removed ID 1
 Removed ID 2
 Removed ID 3
 Removed ID 4
 Removed ID 5
 Removed ID 6
(6 rows)

demo=# 

So my conclusion: returning a result set for DELETE is helpful, and it is one step in the direction of CTE support.

The next step is to get the RETURNING keyword to work for UPDATE.

Thursday, February 28, 2013

MySQL Events

The last MySQL User Group NL meeting was last Friday. It's always nice to learn about MySQL and meet other MySQL users. There were two presentations: one about MySQL User Defined Functions (UDFs) and one about MySQL TCO. The slides are available from the meetup page.

There are already a number of MySQL events announced for the next few months. I'll only list events in the Netherlands and Virtual events.

MySQL Virtual Developer Days
This is a virtual event which will take place on March 19 (EMEA region; the NA event is on March 12). There are many interesting topics: Performance Schema, new 5.6 features, replication and MySQL Enterprise Monitor.
The eVite

SkySQL and MariaDB roadshow
21 March 2013 in Amsterdam
This event has presentations by Monty Widenius, Seppo Jaakola and Anders Karlsson. The main topics are MariaDB and Galera.
The event page


MySQL User Group NL, Q2 Meeting
31 May 2013 in Geldermalsen
The event page


Tuesday, January 15, 2013

MySQL version history (updated)

I've created a graph about the MySQL version history.

It's mysql-graph-history on github.


Please let me know if this is correct or if I'm forgetting some versions.

Saturday, August 25, 2012

Fun with Performance Schema

I'm using a very small MariaDB instance as a datastore for my YouLess energy monitor, my own mail server (postfix, roundcube). It's a virtual machine from a commercial VPS provider.

All data fits in memory and the overhead of running with performance_schema on is not an issue.

While I was reading a blog post about performance_schema by Mark Leith I wanted to see what P_S could tell me about my own server.

This is the output from the first query:
mysql> select * from file_summary_by_event_name order by count_read desc,count_write desc limit 10;
+--------------------------------------+------------+-------------+--------------------------+---------------------------+
| EVENT_NAME                           | COUNT_READ | COUNT_WRITE | SUM_NUMBER_OF_BYTES_READ | SUM_NUMBER_OF_BYTES_WRITE |
+--------------------------------------+------------+-------------+--------------------------+---------------------------+
| wait/io/file/sql/FRM                 |      25387 |         548 |                  8795589 |                    117014 |
| wait/io/file/myisam/dfile            |      20102 |         760 |                 35053140 |                     32720 |
| wait/io/file/aria/MAI                |      17392 |      208795 |                  7163684 |                   7756451 |
| wait/io/file/aria/MAD                |       8696 |           0 |                 71237632 |                         0 |
| wait/io/file/sql/binlog              |       6624 |       41964 |                 54217518 |                  58900518 |
| wait/io/file/myisam/kfile            |       2666 |         196 |                   619930 |                      6817 |
| wait/io/file/sql/file_parser         |       2435 |           8 |                  4360082 |                     14128 |
| wait/io/file/innodb/innodb_data_file |       1054 |      243380 |                 17063936 |                7016056832 |
| wait/io/file/innodb/innodb_log_file  |          6 |      220950 |                    69632 |                 799742976 |
| wait/io/file/innodb/innodb_temp_file |          6 |           6 |                  6291456 |                   6291456 |
+--------------------------------------+------------+-------------+--------------------------+---------------------------+
10 rows in set (0.00 sec)

I'm using InnoDB for most (if not all) tables. The data fits in memory so it shouldn't need to read a lot from disk.

The second result is for wait/io/file/myisam/dfile. I know that the tables in the 'mysql' database use MyISAM. But why is it so high in the list? Did I accidentally create some MyISAM tables?

The second query:
mysql> select * from file_summary_by_instance order by count_read desc,count_write desc limit 5;
+----------------------------------------------------------------------+---------------------------+------------+-------------+--------------------------+---------------------------+
| FILE_NAME                                                            | EVENT_NAME                | COUNT_READ | COUNT_WRITE | SUM_NUMBER_OF_BYTES_READ | SUM_NUMBER_OF_BYTES_WRITE |
+----------------------------------------------------------------------+---------------------------+------------+-------------+--------------------------+---------------------------+
| /opt/mariadb-5.5.XX-linux-x86_64/data/mysql/proc.MYD                 | wait/io/file/myisam/dfile |      19411 |           4 |                  3207486 |                      2244 |
| /opt/mariadb-5.5.XX-linux-x86_64/data/serverX-bin.000022             | wait/io/file/sql/binlog   |       6618 |       41739 |                 54184750 |                  58809577 |
| /opt/mariadb-5.5.XX-linux-x86_64/data/XXXXXXX/table1.frm             | wait/io/file/sql/FRM      |        630 |          19 |                    81268 |                      4971 |
| /opt/mariadb-5.5.XX-linux-x86_64/data/XXXXXXX/view1.frm              | wait/io/file/sql/FRM      |        618 |           0 |                   375532 |                         0 |
| /opt/mariadb-5.5.XX-linux-x86_64/data/XXXXXXX/view2.frm              | wait/io/file/sql/FRM      |        617 |           0 |                   433493 |                         0 |
+----------------------------------------------------------------------+---------------------------+------------+-------------+--------------------------+---------------------------+
5 rows in set (0.00 sec)

So the mysql.proc table is the one causing the reads. I do use some stored procedures for reporting, so that's easy to explain.

These reads mostly hit the filesystem cache anyway; as we all know, Linux takes care of this by eating RAM.

So performance_schema can easily give detailed information about your server. I wouldn't have guessed that stored procedures were in the top 10 for read I/O.

This shows some opportunity for improving MySQL on lightly loaded and idle servers. Is that really needed? Yes, it could make MySQL more energy efficient by not waking sleeping disks and leaving the CPU(s) in a sleep state. Drizzle has shown us that it's possible to get rid of the FRMs. If I have some time I might do some tests with Intel's PowerTOP.

I did also run the queries for write I/O, but that goes to InnoDB datafiles, logfiles, binlogs, etc. and is all as you would expect.

Sunday, June 24, 2012

That's not my name! A story about character sets

When computers were still using large black text-oriented screens, or no screens at all, a computer only knew how to store a limited set of characters. Back then it was normal to store a name with the more complicated characters replaced by more basic ones. The ASCII standard was used to make communication between multiple systems (or applications) easier. Storing characters as ASCII needs little space and is quite straightforward.

Then DOS used CP850, CP437 and so on to make it possible to use language/location specific characters.
Later ISO 8859-1, ISO 8859-15 and more of these character sets were defined as standards.

And now there is Unicode: UTF-8, UTF-16, UCS2, etc. which allow you to store many different kinds of characters in the same character set.

But all those character sets only work correctly if you configure all applications correctly. Many of the character sets are very similar and seem to work correctly even if one of the systems is not configured correctly. If this happens most characters will be correct, except the special ones. And my name does contain a 'special' character, the 'ë'.

Below is a picture of two letters I received in the mail recently:





So what went wrong?

This is called Mojibake.

The first one:

mysql> CREATE TABLE t1(v1 VARCHAR(200)) ENGINE=InnoDB DEFAULT CHARACTER SET utf8;
Query OK, 0 rows affected (0.01 sec)

mysql> SHOW SESSION VARIABLES LIKE '%char%';
+--------------------------+----------------------------------------------------+
| Variable_name            | Value                                              |
+--------------------------+----------------------------------------------------+
| character_set_client     | utf8                                               |
| character_set_connection | utf8                                               |
| character_set_database   | latin1                                             |
| character_set_filesystem | binary                                             |
| character_set_results    | utf8                                               |
| character_set_server     | latin1                                             |
| character_set_system     | utf8                                               |
| character_sets_dir       | /home/dveeden/mysql/5.5.22-mariadb/share/charsets/ |
+--------------------------+----------------------------------------------------+
8 rows in set (0.00 sec)

mysql> INSERT INTO t1 VALUES('Daniël van Eeden');
Query OK, 1 row affected (0.01 sec)

mysql> SELECT * FROM t1;
+-------------------+
| v1                |
+-------------------+
| Daniël van Eeden  |
+-------------------+
1 row in set (0.00 sec)

mysql> set session character_set_client=latin1;
Query OK, 0 rows affected (0.00 sec)

mysql> set session character_set_connection=latin1;
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO t1 VALUES('Daniël van Eeden');
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM t1;
+---------------------+
| v1                  |
+---------------------+
| Daniël van Eeden    |
| Daniël van Eeden   |
+---------------------+
2 rows in set (0.00 sec)

mysql> INSERT INTO t1 VALUES('Daniël van Eeden');
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM t1;
+---------------------+
| v1                  |
+---------------------+
| Daniël van Eeden    |
| Daniël van Eeden   |
| Daniël van Eeden   |
+---------------------+
3 rows in set (0.00 sec)

So we can reproduce this issue by setting the client and connection charset to latin1.

mysql> SELECT v1,HEX(v1) FROM t1;
+---------------------+----------------------------------------+
| v1                  | HEX(v1)                                |
+---------------------+----------------------------------------+
| Daniël van Eeden    | 44616E69C3AB6C2076616E20456564656E     |
| Daniël van Eeden   | 44616E69C383C2AB6C2076616E20456564656E |
| Daniël van Eeden   | 44616E69C383C2AB6C2076616E20456564656E |
+---------------------+----------------------------------------+
3 rows in set (0.00 sec)

mysql> SELECT CONVERT(X'C3AB' USING utf8),CONVERT(X'C3AB' USING latin1);
+-----------------------------+-------------------------------+
| CONVERT(X'C3AB' USING utf8) | CONVERT(X'C3AB' USING latin1) |
+-----------------------------+-------------------------------+
| ë                           | ë                            |
+-----------------------------+-------------------------------+
1 row in set (0.00 sec)

The ë is stored as C383 C2AB and is rendered as two latin1 characters.

UTF-8 C3 83 (Ã) is latin1 C3
UTF-8 C2 AB («) is latin1 AB
C3 AB is the UTF-8 encoding of ë
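The same double encoding can be reproduced in a few lines of Python; note that the resulting hex matches the C383C2AB bytes in the HEX(v1) output above:

```python
original = 'ë'                              # U+00EB
utf8_bytes = original.encode('utf-8')       # b'\xc3\xab'
mojibake = utf8_bytes.decode('latin-1')     # 'Ã«' -- bytes read with the wrong charset
double_encoded = mojibake.encode('utf-8')   # what ends up stored in the table
print(double_encoded.hex().upper())         # C383C2AB
```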

The same bytes in each character set:

char   utf8    latin1   cp850
ë      C3 AB   EB       89
Ã      C3 83   C3       C7
«      C2 AB   AB       AE
Ù      C3 99   D9       EB

mysql> SELECT CONVERT(X'C3AB' USING latin1);
+-------------------------------+
| CONVERT(X'C3AB' USING latin1) |
+-------------------------------+
| ë                            |
+-------------------------------+
1 row in set (0.00 sec)

The sender of the first letter is probably storing UTF-8 from the web application in a database, but uses Latin1 when sending the letters.

The second letter renders the ë as Ù.

After some tries it seems like the sender is storing the ë as EB (latin1) and rendering it as CP850.
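That guess is easy to check in Python:

```python
stored = 'ë'.encode('latin-1')   # b'\xeb' in the sender's database
print(stored.decode('cp850'))    # Ù -- rendered with the wrong code page
```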

mysql> SELECT CONVERT(X'EB' USING cp850);
+----------------------------+
| CONVERT(X'EB' USING cp850) |
+----------------------------+
| Ù                          |
+----------------------------+
1 row in set (0.00 sec)

Anders Karlsson recently wrote a nice introduction to Unicode. And Ronald Bradford wrote a blog post about UTF-8 with MySQL and LAMP.

Sunday, April 15, 2012

Backup your sandbox with XtraBackup

Today I tried to make incremental backups of a MariaDB instance in a MySQL sandbox with Percona XtraBackup.
I used the recently released XtraBackup 2.0, and of course there is documentation about making incremental backups.

MySQL sandbox makes it easy to run many different MySQL versions on one machine. It does this by changing the port number, data directory, UNIX socket location and a whole lot more.

So I first started with a full backup and after that I used that backup as a base for the incremental backups. To do that I had to specify the port number, which is 5522, and the username and password for the msandbox account. As MySQL uses a UNIX socket instead of a TCP connection when the hostname is localhost, I specified 127.0.0.1 as the hostname to force a TCP connection. That worked!

Then I created the incremental backup by using the --incremental option and the --incremental-basedir option to specify the location of the full backup. That also worked!

Then I tried to make a backup while putting some load on my database. I used "INSERT INTO test1(col1) SELECT col1 FROM test1" to do this.

The full and incremental backups still worked, or at least that's what the backup script told me. But the size of the incremental backups was quite small, and I noticed that the LSN was very small and not increasing. The xtrabackup_checkpoints file also told me that the backups were all for exactly the same LSN. As the LSN only applies to InnoDB, I verified the table type of my test tables, and they were in fact InnoDB. A "SHOW ENGINE INNODB STATUS\G" told me that the LSN was in fact increasing.

It turned out that XtraBackup was making backups of /var/lib/mysql instead of ~/sandboxes/msb_5_5_22-mariadb/data/. Adding "--defaults-file=~/sandboxes/msb_5_5_22-mariadb/my.sandbox.cnf" to the innobackupex command did correct this.

After specifying the correct config file I tried to make backups under load again. It failed because the log files were too small. So I stopped the database, removed the ib_logfiles and started the database with a larger InnoDB log file size.

Then it all worked flawlessly!

So you should make sure that your backup completes without errors AND that your backup is of the right database. Of course, testing restores regularly would also detect this.

Monday, July 4, 2011

Working with IP's in MySQL and MariaDB

For MySQL it's a best practice to store IP addresses in an INT column rather than in a VARCHAR, as this allows for more compact storage. This best practice is specific to MySQL; PostgreSQL has dedicated data types like inet and cidr.

MySQL has been equipped with the INET_ATON() and INET_NTOA() functions since version 3.23.15. In MySQL 5.6.3 the INET6_ATON() and INET6_NTOA() functions were added to finally support IPv6 addresses. To make it easier to work with IPv4 and IPv6 addresses the IS_IPV4_COMPAT(), IS_IPV4_MAPPED(), IS_IPV4() and IS_IPV6() functions were also added. The difference between the new and old functions is that the old functions use INT UNSIGNED to store IP addresses, while the new functions use VARBINARY(16) for IPv6 addresses and VARBINARY(4) for IPv4 addresses.
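For reference, the behaviour of the old IPv4 functions is easy to emulate in Python (a sketch using the standard ipaddress module; unlike MySQL's INET_ATON() it won't accept short forms such as '127.1'):

```python
import ipaddress

def inet_aton(addr):
    """Like MySQL's INET_ATON(): dotted quad -> unsigned int."""
    return int(ipaddress.IPv4Address(addr))

def inet_ntoa(n):
    """Like MySQL's INET_NTOA(): unsigned int -> dotted quad."""
    return str(ipaddress.IPv4Address(n))

print(inet_aton('192.168.1.2'))  # 3232235778
print(inet_ntoa(3232235778))     # 192.168.1.2
```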

Here are some examples about how you could do nifty things with IP addresses in MySQL.

Store hosts and networks in MySQL
CREATE TABLE `hosts` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`hostname` varchar(255) DEFAULT NULL,
`domainname` varchar(255) DEFAULT NULL,
`ip` int(10) unsigned NOT NULL,
PRIMARY KEY (`id`),
KEY `ip` (`ip`)
)

CREATE TABLE `networks` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`network` int(10) unsigned NOT NULL,
`mask` int(10) unsigned NOT NULL,
`name` varchar(255) DEFAULT '',
PRIMARY KEY (`id`),
KEY `network` (`network`),
KEY `mask` (`mask`)
)

Insert and retrieve entries
mysql> INSERT INTO `hosts`(`hostname`,`domainname`,`ip`) VALUES('test6','example.com',INET_ATON('192.168.1.2'));
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM `hosts` WHERE `id`=LAST_INSERT_ID();
+----+----------+-------------+------------+
| id | hostname | domainname  | ip         |
+----+----------+-------------+------------+
|  6 | test6    | example.com | 3232235778 |
+----+----------+-------------+------------+
1 row in set (0.00 sec)

mysql> SELECT hostname,domainname,INET_NTOA(ip) ip FROM `hosts` WHERE `id`=LAST_INSERT_ID();
+----------+-------------+-------------+
| hostname | domainname  | ip          |
+----------+-------------+-------------+
| test6    | example.com | 192.168.1.2 |
+----------+-------------+-------------+
1 row in set (0.00 sec)

Convert a subnetmask to CIDR notation.
mysql> SELECT INET_ATON('255.255.255.0'),BIT_COUNT(INET_ATON('255.255.255.0'));
+----------------------------+---------------------------------------+
| INET_ATON('255.255.255.0') | BIT_COUNT(INET_ATON('255.255.255.0')) |
+----------------------------+---------------------------------------+
|                 4294967040 |                                    24 |
+----------------------------+---------------------------------------+
1 row in set (0.00 sec)

mysql> CREATE FUNCTION mask_to_cidr (mask CHAR(15)) RETURNS INT(2) DETERMINISTIC RETURN BIT_COUNT(INET_ATON(mask));
Query OK, 0 rows affected (0.00 sec)

mysql> SET @mask='255.255.252.0';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT @mask,mask_to_cidr(@mask);
+---------------+---------------------+
| @mask         | mask_to_cidr(@mask) |
+---------------+---------------------+
| 255.255.252.0 |                  22 |
+---------------+---------------------+
1 row in set (0.00 sec)

And convert a CIDR mask back to dot-decimal notation

mysql> CREATE FUNCTION cidr_to_mask (cidr INT(2)) RETURNS CHAR(15) DETERMINISTIC RETURN INET_NTOA(CONV(CONCAT(REPEAT(1,cidr),REPEAT(0,32-cidr)),2,10));
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT cidr_to_mask(22);
+------------------+
| cidr_to_mask(22) |
+------------------+
| 255.255.252.0    |
+------------------+
1 row in set (0.00 sec)
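Both conversions are easy to mirror in Python, which also makes the bit manipulation explicit (a sketch using the standard ipaddress module):

```python
import ipaddress

def mask_to_cidr(mask):
    """Like BIT_COUNT(INET_ATON(mask)): count the 1-bits in the netmask."""
    return bin(int(ipaddress.IPv4Address(mask))).count('1')

def cidr_to_mask(cidr):
    """cidr 1-bits followed by (32 - cidr) 0-bits, back to dotted quad."""
    return str(ipaddress.IPv4Address(((1 << cidr) - 1) << (32 - cidr)))

print(mask_to_cidr('255.255.252.0'))  # 22
print(cidr_to_mask(22))               # 255.255.252.0
```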

Use the & operator to find the network address using an IP and mask.
mysql> SELECT INET_NTOA(INET_ATON('255.255.255.0') & INET_ATON('192.168.2.3'));
+------------------------------------------------------------------+
| INET_NTOA(INET_ATON('255.255.255.0') & INET_ATON('192.168.2.3')) |
+------------------------------------------------------------------+
| 192.168.2.0                                                      |
+------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> SET @netmask='255.255.255.0';
Query OK, 0 rows affected (0.00 sec)

mysql> SET @ipaddr = '192.168.2.6';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT CONV(INET_ATON(@ipaddr),10,2) ip,CONV(INET_ATON(@netmask),10,2) mask,CONV(INET_ATON(@ipaddr) & INET_ATON(@netmask),10,2) network\G
*************************** 1. row ***************************
ip: 11000000101010000000001000000110
mask: 11111111111111111111111100000000
network: 11000000101010000000001000000000
1 row in set (0.00 sec)

This can be used to join a table with host ip's with a table of networks. (Remember the netmasks table in Solaris and/or NIS?)
mysql> SELECT CONCAT(h.hostname,'.',h.domainname) fqdn, INET_NTOA(h.ip) ip, INET_NTOA(n.mask & h.ip) network, INET_NTOA(n.mask) mask FROM hosts h INNER JOIN networks n ON (n.mask & h.ip) = n.network;
+-------------------+-------------+-------------+---------------+
| fqdn              | ip          | network     | mask          |
+-------------------+-------------+-------------+---------------+
| test1.example.com | 192.168.0.1 | 192.168.0.0 | 255.255.255.0 |
| test2.example.com | 192.168.0.2 | 192.168.0.0 | 255.255.255.0 |
| test3.example.com | 192.168.0.3 | 192.168.0.0 | 255.255.255.0 |
| test4.example.com | 10.0.0.1    | 10.0.0.0    | 255.0.0.0     |
| test5.example.com | 10.0.0.2    | 10.0.0.0    | 255.0.0.0     |
+-------------------+-------------+-------------+---------------+
5 rows in set (0.00 sec)

With the ~ operator the mask can be inverted, and the result can then be used to find the broadcast address with the OR operator |.
mysql> SELECT CONV(INET_ATON(@ipaddr),10,2) ip,CONV(INET_ATON(@netmask),10,2) mask,CONV(INET_ATON(@ipaddr) & INET_ATON(@netmask),10,2) network, CONV(CONV(SUBSTRING(CONV(~INET_ATON('255.255.255.0'),10,2),-32),2,10) | INET_ATON(@ipaddr),10,2) broadcast\G
*************************** 1. row ***************************
ip: 11000000101010000000001000000110
mask: 11111111111111111111111100000000
network: 11000000101010000000001000000000
broadcast: 11000000101010000000001011111111
1 row in set (0.00 sec)
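The same bit operations, sketched in Python with the standard ipaddress module:

```python
import ipaddress

ip   = int(ipaddress.IPv4Address('192.168.2.6'))
mask = int(ipaddress.IPv4Address('255.255.255.0'))

network   = ip & mask                  # AND keeps the network bits
broadcast = ip | (~mask & 0xFFFFFFFF)  # OR in the inverted (host) bits

print(str(ipaddress.IPv4Address(network)))    # 192.168.2.0
print(str(ipaddress.IPv4Address(broadcast)))  # 192.168.2.255
```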

You could use a view to make it easier for users to select the rows they want without using functions. But that will result in a full scan if you search by IP: MySQL does not optimize this by first converting the search value to an INT, but instead converts all rows to dot-decimal notation and then compares. MariaDB has a nifty feature called persistent virtual columns, which can be indexed. For the example below this means that instead of scanning all four rows it just uses the index to find the row we need.

mysql> CREATE TABLE `iptest`(`ip_n` int(11) UNSIGNED NOT NULL, `ip_a` CHAR(15) AS (INET_NTOA(`ip_n`)) PERSISTENT, PRIMARY KEY(`ip_n`), KEY `ip_a` (`ip_a`));
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO `iptest`(`ip_n`) VALUES (INET_ATON('192.168.1.1')),(INET_ATON('192.168.1.2')),(INET_ATON('192.168.1.3')),(INET_ATON('192.168.1.4'));
Query OK, 4 rows affected (0.00 sec)
Records: 4  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM `iptest`;
+------------+-------------+
| ip_n       | ip_a        |
+------------+-------------+
| 3232235777 | 192.168.1.1 |
| 3232235778 | 192.168.1.2 |
| 3232235779 | 192.168.1.3 |
| 3232235780 | 192.168.1.4 |
+------------+-------------+
4 rows in set (0.00 sec)

mysql> EXPLAIN SELECT * FROM `iptest` WHERE `ip_a`='192.168.1.2';
+----+-------------+--------+------+---------------+------+---------+-------+------+-------------+
| id | select_type | table  | type | possible_keys | key  | key_len | ref   | rows | Extra       |
+----+-------------+--------+------+---------------+------+---------+-------+------+-------------+
|  1 | SIMPLE      | iptest | ref  | ip_a          | ip_a | 16      | const |    1 | Using where |
+----+-------------+--------+------+---------------+------+---------+-------+------+-------------+
1 row in set (0.00 sec)

mysql> CREATE VIEW v_iptest AS SELECT ip_n,INET_NTOA(ip_n) ip_a FROM `iptest`;
Query OK, 0 rows affected (0.00 sec)

mysql> EXPLAIN SELECT * FROM `v_iptest` WHERE `ip_a`='192.168.1.2';
+----+-------------+--------+-------+---------------+---------+---------+------+------+--------------------------+
| id | select_type | table  | type  | possible_keys | key     | key_len | ref  | rows | Extra                    |
+----+-------------+--------+-------+---------------+---------+---------+------+------+--------------------------+
|  1 | SIMPLE      | iptest | index | NULL          | PRIMARY | 4       | NULL |    4 | Using where; Using index |
+----+-------------+--------+-------+---------------+---------+---------+------+------+--------------------------+
1 row in set (0.00 sec)