How to find database and table size in MySQL?

As a Database Administrator (DBA), your job requires you to know the most atomic details of the databases on your server. It has happened to me many times: my boss or delivery manager asks what the size of a specific database or a specific table is, and in that kind of situation, producing the right data helps make the right decision. From experience I have learned that it is always better to say, "I will give you the data in a few minutes," than to produce incorrect data, which I had been doing for a long time.

This post is about identifying the size of a database (or databases) and of a table (or tables). I have been using this simple script for quite a long time, if I am not wrong since I started my career as a DBA. You have probably seen this or a similar script on other forums as well, and there are many other methods too. I am reproducing this handy script here to get the work done.

Size of a specific table:

select table_name as "Table Name",
sum(data_length+index_length)/1024/1024 as "Table Size in MB"
from information_schema.tables
where table_schema = 'Database Name' and
table_name = 'Table Name'
group by table_name;
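
For example, with the placeholders replaced by a hypothetical schema and table (employees and salaries are purely illustrative). The data_length and index_length columns are reported in bytes, so dividing twice by 1024 converts the result to MB:

select table_name as "Table Name",
sum(data_length+index_length)/1024/1024 as "Table Size in MB"
from information_schema.tables
where table_schema = 'employees' and
table_name = 'salaries'
group by table_name;
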
Size of a specific database:

select table_schema as "Database Name",
sum(data_length+index_length)/1024/1024 as "Database Size in MB"
from information_schema.tables
where table_schema = 'Database Name'
group by table_schema;

Size of all tables in a database, in descending order of size:

select table_name as "Table Name",
sum(data_length+index_length)/1024/1024 as "Table Size in MB"
from information_schema.tables
where table_schema = 'Database Name'
group by table_name
order by 2 desc;

Size of all databases in descending order:

select table_schema as "Database Name",
sum(data_length+index_length)/1024/1024 as "Database Size in MB"
from information_schema.tables
group by table_schema
order by 2 desc;

These scripts are enough to get things done. For a better understanding, I am providing the description of the information_schema.tables table below; it can be displayed by running the DESCRIBE command.

mysql> desc information_schema.tables;
+-----------------+---------------------+------+-----+---------+-------+
| Field           | Type                | Null | Key | Default | Extra |
+-----------------+---------------------+------+-----+---------+-------+
| TABLE_CATALOG   | varchar(512)        | NO   |     |         |       |
| TABLE_SCHEMA    | varchar(64)         | NO   |     |         |       |
| TABLE_NAME      | varchar(64)         | NO   |     |         |       |
| TABLE_TYPE      | varchar(64)         | NO   |     |         |       |
| ENGINE          | varchar(64)         | YES  |     | NULL    |       |
| VERSION         | bigint(21) unsigned | YES  |     | NULL    |       |
| ROW_FORMAT      | varchar(10)         | YES  |     | NULL    |       |
| TABLE_ROWS      | bigint(21) unsigned | YES  |     | NULL    |       |
| AVG_ROW_LENGTH  | bigint(21) unsigned | YES  |     | NULL    |       |
| DATA_LENGTH     | bigint(21) unsigned | YES  |     | NULL    |       |
| MAX_DATA_LENGTH | bigint(21) unsigned | YES  |     | NULL    |       |
| INDEX_LENGTH    | bigint(21) unsigned | YES  |     | NULL    |       |
| DATA_FREE       | bigint(21) unsigned | YES  |     | NULL    |       |
| AUTO_INCREMENT  | bigint(21) unsigned | YES  |     | NULL    |       |
| CREATE_TIME     | datetime            | YES  |     | NULL    |       |
| UPDATE_TIME     | datetime            | YES  |     | NULL    |       |
| CHECK_TIME      | datetime            | YES  |     | NULL    |       |
| TABLE_COLLATION | varchar(32)         | YES  |     | NULL    |       |
| CHECKSUM        | bigint(21) unsigned | YES  |     | NULL    |       |
| CREATE_OPTIONS  | varchar(255)        | YES  |     | NULL    |       |
| TABLE_COMMENT   | varchar(2048)       | NO   |     |         |       |
+-----------------+---------------------+------+-----+---------+-------+
21 rows in set (0.00 sec)

I hope this post helps you complete your task quickly. Please write your comments on this post and let me know if there are other simple methods to achieve this task. Thanks for your time.

Deleting a huge number of records in MySQL

This is a short post about deleting data from a huge table in MySQL. Most of us have experienced that deleting a huge number of records from a MySQL table takes a long time, sometimes hours, to remove millions of records. On production servers it also blocks other operations on the table. Recently, I deleted around 70 million records from a production database in less than an hour. There are multiple workarounds for this; however, I am writing about the two methods I use most frequently for this operation.
  • Using intermediate table.
  • Delete data in small chunks.
Before proceeding with either of these methods, make sure the table has the required indexes on the WHERE clause columns (for example, as shown below) and that you have a copy of the table as a backup.
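
A minimal sketch of adding such an index, assuming the hypothetical column predicate_col used in the examples below:

ALTER TABLE `tabA` ADD INDEX `idx_predicate_col` (`predicate_col`);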

Using intermediate table:

In this method, create a new table with a similar structure, copy only the required data into it, rename the original table as an archive or backup table, and rename the new table to the original table name.

Example:

Table `tabA` has 10 million records and you need only the roughly 1 million records from the last 30 days. So the model query will be as follows:

CREATE TABLE `tabA_copy` as select * from `tabA` where predicate_col > curdate() - interval 30 day;
Now the table tabA_copy will have only the required data. Note that CREATE TABLE ... AS SELECT does not copy the indexes of the source table, so add the required indexes to the new table before swapping the names.

RENAME TABLE `tabA` TO `tabA_archive`, `tabA_copy` TO `tabA`;
Since RENAME TABLE swaps both names in a single atomic statement, there is no window in which the table is missing.
This method is useful when you want to keep only a small percentage of the huge table's data, since moving a small portion of the data to a new table is much cheaper than deleting a huge amount of data.

Delete data in smaller chunks:

In this method, data is deleted in smaller chunks. Even in the case of a rollback, it is cheaper to roll back smaller data sets. I have been using the following procedure to implement this method. In reality it is a dynamic procedure built with PREPARE statements to satisfy all the requirements; for simplicity, I am writing only the core part of it here.

drop procedure if exists huge_delete;
delimiter //
create procedure huge_delete()
begin
 declare row_cnt int;       -- rows deleted by the last chunk (ROWS is reserved in MySQL 8.0, hence the name)
 declare rows_deleted int;  -- running total of deleted rows
 set row_cnt = 1;
 set rows_deleted = 0;
 while row_cnt > 0
 do
    -- delete one chunk of at most 10000 rows
    delete from db.tabA
    where predicate_col < curdate() - interval 90 day
    order by `id` limit 10000;
    -- row_count() must be read immediately after the delete
    set row_cnt = row_count();
    set rows_deleted = rows_deleted + row_cnt;
    select rows_deleted as "Rows Deleted";
 end while;
end //
delimiter ;
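
Once created, execute the procedure as follows; it keeps deleting 10,000-row chunks until no matching rows remain:

call huge_delete();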

This method is suited for deleting a huge set of data (millions of rows) from a table. Make sure you have an index on the WHERE clause column.

Note:

I am using innodb_flush_log_at_trx_commit = 2 to reduce the total response time. There are certain points to be considered before setting innodb_flush_log_at_trx_commit to 2; take a look at this article before you proceed with this value.

This setting runs with autocommit = 1, which is the default. If you run with explicit transactions instead, make sure you commit the transaction at every iteration, as sketched below.
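
A minimal sketch of the loop body with an explicit commit per chunk (assuming autocommit has been disabled):

-- inside the while loop of huge_delete(), with autocommit = 0
delete from db.tabA
where predicate_col < curdate() - interval 90 day
order by `id` limit 10000;
set row_cnt = row_count();
commit;  -- make the chunk durable and release its locks before the next iteration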

Caution:

If you are running this on a replication master, make sure the slave stays in sync with the master. In my recent implementation, the slave fell a long way behind the master; I fixed it by skipping some transactions and later used a comparison utility to bring it back in sync.

I recommend that you test these steps and confirm they work as you expect before using them on a production server. I hope this post helps you get things done!

Understanding MySQL innodb_flush_log_at_trx_commit variable

The main objective of this post is to understand why the innodb_flush_log_at_trx_commit variable is used, what its applicable values are, how its value impacts MySQL performance and data safety, how to change its value, and how to change the frequency of InnoDB log flushes.

Why innodb_flush_log_at_trx_commit?

In order to understand why we need the innodb_flush_log_at_trx_commit variable, we should know how InnoDB works. That is a huge and complex topic, beyond the scope of this article, but in simple words it comes down to this:

  1. InnoDB performs most of its operations in memory (the InnoDB buffer pool).
  2. It writes all changes from memory to the transaction log (the InnoDB log file).
  3. From the transaction log, it flushes (writes) the data to the storage disk (durable storage).

To achieve durability, we need to store each and every transaction's data on durable disk storage. But consider a busy system: what happens if InnoDB tries to flush (write) to a slow disk for every single transaction commit? So how do we manage the situation where we need to store every transaction's data while at the same time maintaining good system performance?

InnoDB provides a solution for this situation: based on your system, you can tell InnoDB when to flush (write) data to disk. For example, you can tell InnoDB to work in one of the following ways:

  1. Write to the log file and flush to disk at a specified interval, not for every transaction commit.
  2. Write to the log and flush to disk for each transaction commit.
  3. Write to the log for every transaction commit, but flush to disk at an interval, not for each commit.

Based on your application's requirements, you can adjust these settings to maintain a balance between performance and data safety. InnoDB provides a configurable variable to achieve this balance: innodb_flush_log_at_trx_commit. It controls how often the log buffer is flushed.

What are the applicable values for innodb_flush_log_at_trx_commit and what is their impact?

InnoDB supports the following three values for the innodb_flush_log_at_trx_commit variable. Let us see how each value changes the flush-to-disk behaviour:

innodb_flush_log_at_trx_commit=0

When you set innodb_flush_log_at_trx_commit=0, InnoDB writes the modified data (in the InnoDB buffer pool) to the log file (ib_logfile) and flushes the log file (writes to disk) once every second, but it does nothing at transaction commit.

Here, in case of a power failure or system crash, the unflushed data (up to the last second's worth) is not recoverable, since it was neither written to the log file nor stored on disk.

innodb_flush_log_at_trx_commit=1

When you set innodb_flush_log_at_trx_commit=1, InnoDB writes the log buffer to the transaction log and flushes it to durable storage for every transaction commit.

Here, for every transaction commit, InnoDB writes to the log and then to disk. With slow disk storage this badly impacts performance: the number of InnoDB transactions per second is reduced.

innodb_flush_log_at_trx_commit=2

When you set innodb_flush_log_at_trx_commit=2, InnoDB writes the log buffer to the log file at every commit but does not flush it to disk; InnoDB flushes the log to disk once every second.
With option 2, if only the MySQL process crashes, committed data survives, since the writes have already reached the operating system's cache; an operating system crash or power failure, however, can still lose up to one second of transactions.

So which option should be used for your application?

We have seen the behaviour of each value; now we need to choose the value based on the application's requirements.

If performance is the main goal, you can set the value to 2. Since InnoDB flushes to disk only once every second instead of at every transaction commit, performance improves dramatically. After a MySQL process crash, committed data can be recovered from the transaction log; only a power failure or OS crash risks the last second of data.

If data safety is the main goal, you can set the value to 1, so that InnoDB flushes to disk for every transaction commit. Performance may be reduced in this case.

If you set the value to 0, InnoDB writes from the buffer to the log and flushes it to disk once every second, and performs no flush at all at transaction commit. The problem with this option is that in case of a power failure or system crash, you may lose up to one second of data.

Key points:

During the buffer-to-log operation, i.e. writing from the InnoDB buffer pool to the InnoDB transaction log file, the data is simply moved to the operating system's cache and is not actually written to durable storage. So if you set innodb_flush_log_at_trx_commit to either 0 or 2, there is a possibility of losing up to one second of data.

If innodb_flush_log_at_trx_commit is set to 1, InnoDB compels the operating system to flush the data to durable storage. Writing to disk is a slow, blocking I/O operation that ensures the data really reaches the disk, so with this option the number of transactions per second may be reduced.

Note that by default MySQL auto-commits transactions.

Setting innodb_flush_log_at_trx_commit to 2 is useful when restoring huge databases; it can reduce the restoration time dramatically. There are different opinions on this point, so it is better to test it yourself. In my experience, it really did reduce the restoration time.

Data Definition Language (DDL) changes flush the InnoDB log, independent of the innodb_flush_log_at_trx_commit setting.

Crash recovery works regardless of the innodb_flush_log_at_trx_commit setting. I am writing another article on InnoDB crash recovery.

How to configure the innodb_flush_log_at_trx_commit variable?

The scope of innodb_flush_log_at_trx_commit is GLOBAL, and it can be set dynamically without restarting the server.

On the command line, you can set innodb_flush_log_at_trx_commit dynamically as follows:

SET GLOBAL innodb_flush_log_at_trx_commit = 2;

In the configuration file, you can set it as follows:

[mysqld]
innodb_flush_log_at_trx_commit = 2

Note: changing the configuration file requires a server restart. Before making the change permanent there, analyse the impact by setting the value dynamically.

If you try to set innodb_flush_log_at_trx_commit as a session-level variable, you will encounter the following error:

mysql> set session innodb_flush_log_at_trx_commit = 2;
ERROR 1229 (HY000): Variable 'innodb_flush_log_at_trx_commit' is a GLOBAL variable and should be set with SET GLOBAL
mysql>
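
You can verify the value currently in effect at any time; a quick check:

mysql> SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';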

How to change the frequency of InnoDB log file flushing?

The variable innodb_flush_log_at_timeout controls the frequency with which InnoDB flushes its log files. The frequency ranges from 1 to 2700 seconds, with a default value of 1.
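
For example, to flush the log once every 5 seconds (this variable is also GLOBAL and dynamic):

SET GLOBAL innodb_flush_log_at_timeout = 5;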

Note that the higher this number, the higher the chance of losing data in case of a power failure or system crash.

For example, if you set this value to 5 seconds, then in case of a power failure you may lose up to 5 seconds of data.

In a replication topology, to maintain durability and consistency of the data, you can leave the default value, i.e. innodb_flush_log_at_trx_commit = 1.

I hope this article helps you understand how InnoDB flushes data, how that impacts MySQL performance and data safety, and how you can configure this variable to achieve the maximum benefit.

In a future article we will see how InnoDB crash recovery works and its configuration settings. If I missed something, or if you wish to share your thoughts on this article, please mention it in the comment section and I will edit the post after review.

How to store IP (internet protocol) address in MySQL?

An Internet Protocol (IP) address is a special kind of data that does not correspond directly to any of MySQL's built-in data types. I have seen many servers where IP addresses are stored in CHAR(12), VARCHAR(15), TEXT, and so on.

The dotted-quad notation is just a way of writing an address for better readability; it is not the original form of the raw data. An IPv4 address is a 32-bit unsigned integer, not a string.

So the question is: how can we store the dotted IP addresses in integer columns? MySQL provides built-in functions for exactly this conversion. They are given below:

For IPv4 addresses:

INET_ATON()
mysql> select inet_aton('127.0.0.1');

+------------------------+
| inet_aton('127.0.0.1') |
+------------------------+
|             2130706433 | 
+------------------------+
1 row in set (0.11 sec)
INET_NTOA()
mysql> select inet_ntoa('2130706433');

+-------------------------+
| inet_ntoa('2130706433') |
+-------------------------+
| 127.0.0.1               | 
+-------------------------+
1 row in set (0.02 sec)
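
For storage, the column should be an unsigned 32-bit integer. A minimal sketch, assuming a hypothetical user table like the one referred to in the note below:

CREATE TABLE user (
  name      VARCHAR(64)  NOT NULL,
  ipaddress INT UNSIGNED NOT NULL,   -- IPv4 address stored as a 32-bit unsigned integer
  KEY idx_ipaddress (ipaddress)
);

INSERT INTO user (name, ipaddress) VALUES ('localhost user', INET_ATON('127.0.0.1'));
SELECT name, INET_NTOA(ipaddress) AS ip FROM user;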

For IPv6 addresses:

MySQL provides the following built-in functions to store and retrieve IPv6 addresses. IPv6 addresses are not stored as integers, since a numeric-format IPv6 address requires more bytes than an UNSIGNED BIGINT can hold. Instead, the functions below work with VARBINARY(16) values. Let us see with an example.

mysql> select hex(inet6_aton('127.0.0.1'));
+------------------------------+
| hex(inet6_aton('127.0.0.1')) |
+------------------------------+
| 7F000001                     |
+------------------------------+
1 row in set (0.00 sec)

mysql> select hex(inet6_aton('2001:0db8:85a3:0000:0000:8a2e:0370:7334'));
+-------------------------------------------------------------+
| hex(inet6_aton('2001:0db8:85a3:0000:0000:8a2e:0370:7334'))  |
+-------------------------------------------------------------+
| 20010DB885A3000000008A2E03707334                            |
+-------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> select inet6_ntoa(unhex('20010DB885A3000000008A2E03707334'));
+--------------------------------------------------------+
| inet6_ntoa(unhex('20010DB885A3000000008A2E03707334'))  |
+--------------------------------------------------------+
| 2001:db8:85a3::8a2e:370:7334                           |
+--------------------------------------------------------+
1 row in set (0.00 sec)


Note:

Assume that you are writing a query to find the users who connected from the IP address 127.0.0.1. The following query could be derived:

SELECT name FROM user WHERE inet_ntoa(ipaddress) = '127.0.0.1';

Be aware that this query will not use the index created on the ipaddress column, since we are applying a function to the indexed column during execution; MySQL has to convert the stored integer into an IP address row by row. So the workaround is:

SET @ip = inet_aton('127.0.0.1');
SELECT name FROM user WHERE ipaddress = @ip;
(or)

SELECT name FROM user WHERE ipaddress = inet_aton('127.0.0.1');
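
You can confirm the index usage with EXPLAIN; a quick sketch, assuming the hypothetical user table above:

EXPLAIN SELECT name FROM user WHERE ipaddress = inet_aton('127.0.0.1');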

Now if you check the query execution plan, you will see that the query uses the index on the ipaddress column. I hope this post helps you. Please share your views in the comment section, and if you have other, better methods, please share those as well.

[Solved] Can’t connect to MySQL Server

There are a lot of different reasons behind this error. Sometimes you may not see the exact issue in the error message, and you have to dig deeper to identify the exact cause, which can take a lot of time and energy. I have encountered this error many times, so based on my experience and research, I have gathered the following possible reasons and workarounds to fix it.

Error Message:

Can't connect to local MySQL server

Reason 1:
MySQL Server is not running on the server.

Workaround 1:
Check whether the MySQL server is running on the host.

MySQL process name                : mysqld
MySQL default port                : 3306

On Linux:
To check the service status       : systemctl status mysql
To check the process list         : ps ax | grep mysql
To check the listening port       : netstat -anp | grep mysql

On Windows:
To check the service status       : Win + R -> services.msc -> MySQL
To check the process list         : Task Manager -> mysqld

If you are trying to connect to a MySQL service running on a remote server, use telnet:
telnet 127.0.0.1 3306                 (replace the IP and port)

If it is not running, start the MySQL service and try to connect again. If MySQL does not start, check the error log to find the reason and fix it. If this was the cause, the error should be resolved now.

Reason 2:
The MySQL socket file is missing / deleted / in a different location.

Workaround 2:
The MySQL server can be connected to through either the socket file or the TCP/IP protocol. When you connect to a MySQL service running on localhost, the connection goes through the socket file. If the socket file is missing, you will get an error message with the location of the file:

Can’t connect to local MySQL server through socket ‘/var/run/mysqld/mysqld.sock’ (2) when trying to connect.

The socket file is a special-purpose file used as the interface through which clients connect to the MySQL server. The file is created automatically by the server. If the file does not exist, it is because of one of the following issues:

MySQL server was not started.
MySQL created the socket file in a different location.
The socket file was unexpectedly removed / deleted / is missing.
The MySQL service was started with a socket variable, but the client is trying to connect to the default socket.

Example: mysqld --socket=/var/lib/mysqld.sock

If the MySQL server is not started:

Use workaround 1 to verify whether MySQL is running. If the MySQL service is not running, start it using the following method:

On Linux:

sudo systemctl start mysql

On Windows:

Go to services.msc and start MySQL service.

Note: you can also start the MySQL service by other methods.

If MySQL created the socket file in a different location, or the MySQL service was started with a socket variable while the client is trying to connect to the default socket:

You can use the mysqladmin utility to check the current location of the MySQL socket file, as below:

mysqladmin -u root  -h 127.0.0.1 -p version

When you connect from a client, you can specify the socket explicitly using the --socket option, or change the socket file location in the configuration file (this requires a service restart):

[mysqld]
socket=/datadir/mysql.sock
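
For example, a client connection with the socket path given explicitly (the path is illustrative):

mysql -u root -p --socket=/datadir/mysql.sock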

If socket file is missing / removed /deleted:

I have seen some cases where the socket file went missing for no obvious reason. In this case, connect using the TCP/IP protocol and restart the service. The socket file cannot be created manually as a plain empty file; it must be created by the server.

The worst part is that you cannot shut down the server in the usual way: while stopping, it will look for the socket file, and if the file does not exist, it will throw the socket-file-missing error. So the alternative method is as follows:

To shut down the MySQL server     : mysqladmin -h 127.0.0.1 --protocol=tcp -u root -p shutdown
To log in to the MySQL server     : mysql -h 127.0.0.1 --protocol=tcp -u root -p

Reason 3:
Security-Enhanced Linux is protecting the mysqld process.

Workaround 3:
Security-Enhanced Linux (SELinux) is a Linux feature that provides a mechanism for supporting access control security policies.

In simple words, it does not allow MySQL to access directories and ports other than the pre-configured access objects. By default, MySQL is permitted to access /var/lib/mysql, but when you configure the data directory or socket file in a different location, SELinux does not permit the access.

You can disable the SELinux protection for the mysqld process. This can be done as follows:

To set SELinux into permissive mode        : setenforce 0 (it will allow access but log warnings!)

This setting lasts only until the next server restart. To permanently disable SELinux (for private networks), edit the configuration as follows:

To edit the SELinux configuration file      : vi /etc/selinux/config
To disable the SELinux protection             : SELINUX=disabled (restart required!)

If you wish to configure SELinux protection for the mysqld process with the proper privileges (for production environments), you can refer to this page for complete details.

Reason 4:
MySQL server is bound to a single address (bind_address).

Workaround 4:

To edit the bind_address variable    : vi /etc/my.cnf
To change bind_address               : bind_address = 0.0.0.0

Here the problem is that, by default, the MySQL bind_address variable is sometimes set to localhost or to a specific IP address. When you try to connect to the MySQL server from another host, MySQL does not allow it and produces this error message, since the MySQL service is bound only to that bind_address value.

Changing the bind_address value to 0.0.0.0 makes the server accept connections on all of its network interfaces.
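
You can check the value currently in use before editing the configuration:

mysql> SHOW VARIABLES LIKE 'bind_address';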

Reason 5:
The firewall is blocking the MySQL port.

Workaround 5:
If the firewall is blocking MySQL port 3306, open the port on the host machine to fix the issue:

sudo iptables -I INPUT -p tcp --dport 3306 -j ACCEPT
sudo service iptables save

Note: the firewall rules for Windows and the latest versions of CentOS may vary; use the appropriate tool, for example as shown below.
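
On distributions that use firewalld (CentOS 7 and later), the equivalent would be:

sudo firewall-cmd --permanent --add-port=3306/tcp
sudo firewall-cmd --reload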

Reason 6:
Access is restricted to specific server hosts.

Workaround 6:
There is a possibility that access is restricted on the server. You can allow access by editing hosts.allow (Linux) or the hosts file (Windows):

To change in Linux             : vi /etc/hosts.allow
To change in Windows           : %windir%\system32\drivers\etc\hosts
To allow access from all hosts : mysqld: ALL: allow (allowing all hosts is not best practice; allow only the web server / client hosts)

Note: see TCP wrappers for more details (if you are interested!).

Other Possible Reasons:

  • Incorrect host name / IP address – verify that you are giving the correct IP address / hostname.
  • The user may not have access to the database – grant access in MySQL using the GRANT command (see the sketch after this list).
  • The MySQL variable old_passwords = 1. OLD_PASSWORD() is a password-hashing method used to store MySQL passwords, and it is deprecated in MySQL 5.7; setting this variable to 0 will allow you to connect to MySQL. (See more about this here.)
  • The mysqld service is unable to create its PID file – this happens when the mysql user does not have permission to create the file. Once you grant the access, the issue is resolved.
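
A minimal sketch of granting access (the user, host, password, and database names are all hypothetical):

CREATE USER 'appuser'@'192.168.1.%' IDENTIFIED BY 'StrongPassword!';   -- placeholder credentials
GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'192.168.1.%';
FLUSH PRIVILEGES;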

I hope this article helps you fix this error. If you are still unable to fix the issue, please mention it in the comment section and I will get back to you.

Do you know other workarounds for this issue? Please mention them in the comment section; I will update the answer so that it helps others.