How to find database and table size in MySQL?

As a Database Administrator (DBA), you are expected to know the most atomic details of the databases on your server. It has happened to me many times: my boss or delivery manager asks for the size of a specific database or table, and in that situation producing the right data helps them make the right decision. From experience I have learned that it is always better to say "I will give you the data in a few minutes" than to produce incorrect data on the spot, which is what I did for a long time.

This post is about identifying the size of a database or table. I have been using this simple script for quite a long time, if I am not wrong since I started my career as a DBA. You have probably seen this or a similar script on other forums, and there are many other methods too, but I am reproducing this handy script here to get the work done.

Size of a specific table:

select table_name as "Table Name",
sum(data_length+index_length)/1024/1024 as "Table Size in MB"
from information_schema.tables
where table_schema = 'Database Name'
and table_name = 'Table Name'
group by table_name;
Size of a specific database:

select table_schema as "Database Name",
sum(data_length+index_length)/1024/1024 as "Database Size in MB"
from information_schema.tables
where table_schema = 'Database Name'
group by table_schema;

Size of all tables in a database with descending order:

select table_name as "Table Name",
sum(data_length+index_length)/1024/1024 as "Table Size in MB"
from information_schema.tables
where table_schema = 'Database Name'
group by table_name
order by 2 desc;

Size of all databases in descending order:

select table_schema as "Database Name",
sum(data_length+index_length)/1024/1024 as "Database Size in MB"
from information_schema.tables
group by table_schema
order by 2 desc;
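If you prefer rounded figures, or also want to see how much of the allocated space is free, the same columns of information_schema.tables can be combined like this (the ROUND() calls and the DATA_FREE column are just a variation on the script above):

select table_schema as "Database Name",
round(sum(data_length+index_length)/1024/1024, 2) as "Size in MB",
round(sum(data_free)/1024/1024, 2) as "Free Space in MB"
from information_schema.tables
group by table_schema
order by 2 desc;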

This script is enough to get things done. For a better understanding, here is the description of information_schema.tables, which you can display with the DESCRIBE command.

mysql> desc information_schema.tables;
+-----------------+---------------------+------+-----+---------+-------+
| Field           | Type                | Null | Key | Default | Extra |
+-----------------+---------------------+------+-----+---------+-------+
| TABLE_CATALOG   | varchar(512)        | NO   |     |         |       |
| TABLE_SCHEMA    | varchar(64)         | NO   |     |         |       |
| TABLE_NAME      | varchar(64)         | NO   |     |         |       |
| TABLE_TYPE      | varchar(64)         | NO   |     |         |       |
| ENGINE          | varchar(64)         | YES  |     | NULL    |       |
| VERSION         | bigint(21) unsigned | YES  |     | NULL    |       |
| ROW_FORMAT      | varchar(10)         | YES  |     | NULL    |       |
| TABLE_ROWS      | bigint(21) unsigned | YES  |     | NULL    |       |
| AVG_ROW_LENGTH  | bigint(21) unsigned | YES  |     | NULL    |       |
| DATA_LENGTH     | bigint(21) unsigned | YES  |     | NULL    |       |
| MAX_DATA_LENGTH | bigint(21) unsigned | YES  |     | NULL    |       |
| INDEX_LENGTH    | bigint(21) unsigned | YES  |     | NULL    |       |
| DATA_FREE       | bigint(21) unsigned | YES  |     | NULL    |       |
| AUTO_INCREMENT  | bigint(21) unsigned | YES  |     | NULL    |       |
| CREATE_TIME     | datetime            | YES  |     | NULL    |       |
| UPDATE_TIME     | datetime            | YES  |     | NULL    |       |
| CHECK_TIME      | datetime            | YES  |     | NULL    |       |
| TABLE_COLLATION | varchar(32)         | YES  |     | NULL    |       |
| CHECKSUM        | bigint(21) unsigned | YES  |     | NULL    |       |
| CREATE_OPTIONS  | varchar(255)        | YES  |     | NULL    |       |
| TABLE_COMMENT   | varchar(2048)       | NO   |     |         |       |
+-----------------+---------------------+------+-----+---------+-------+
21 rows in set (0.00 sec)
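Keep in mind that for InnoDB tables, DATA_LENGTH and INDEX_LENGTH come from cached statistics and are only approximations. If the numbers look stale, you can refresh the statistics for a table (mydb.tab1 below is just a placeholder name):

mysql> analyze table mydb.tab1;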

I hope this post helps you complete your task quickly. Please leave a comment and let me know if there are other simple methods to achieve this. Thanks for your time.

Deleting a huge number of records in MySQL

This is a short post about deleting data from huge tables in MySQL. Most of us have experienced that deleting millions of records from a MySQL table can take a long time, sometimes hours, and on production servers it blocks other operations on the table as well. Recently I deleted around 70 million records from a production database in less than an hour. There are multiple workarounds for this, but I am writing about the two methods I use most frequently:
  • Using an intermediate table.
  • Deleting data in small chunks.
Before proceeding with either method, make sure the table has the required index on the WHERE-clause column and that you have a backup copy of the table.
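A quick way to do both checks (db.tabA is the example table used in the rest of this post):

show index from db.tabA;   -- verify an index exists on the where-clause column

and, from the shell, a simple backup with mysqldump:

mysqldump db tabA > tabA_backup.sql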

Using an intermediate table:

In this method, you create a new table with the same structure, copy only the required data into it, rename the original table as an archive/backup table, and rename the new table to the original name.

Example:

Table `tabA` has 10 million records and you need to keep only the roughly 1 million records from the last 30 days. The model query is as follows:

CREATE TABLE `tabA_copy` as select * from `tabA` where predicate_col > curdate() - interval 30 day;
Now the table tabA_copy will have only the required data.

RENAME TABLE `tabA` TO `tabA_archive`, `tabA_copy` TO `tabA`;
Both renames happen in one atomic statement, so there is no point at which the table name is unavailable.
This method is useful when you want to keep only a small percentage of a huge table, since moving a small portion of the data to a new table is much cheaper than deleting the bulk of it.
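One thing to watch: CREATE TABLE ... AS SELECT copies the data but not the indexes. If `tabA_copy` must keep the original indexes and column attributes, a two-step variant of the same idea works:

CREATE TABLE `tabA_copy` like `tabA`;
INSERT INTO `tabA_copy` select * from `tabA` where predicate_col > curdate() - interval 30 day;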

Deleting data in small chunks:

In this method, the data is deleted in smaller chunks; even if a chunk has to be rolled back, rolling back a small data set is cheap. I have been using the following procedure to implement this. The real version is a dynamic procedure using a PREPARED statement to satisfy different requirements; for simplicity, I am showing only the core part here.

drop procedure if exists huge_delete;
delimiter //
create procedure huge_delete()
begin
 declare rows int;
 declare rows_deleted int;
 set rows = 1;
 set rows_deleted = 0;
 while rows > 0
 do
    -- delete one chunk; the index on predicate_col keeps each pass cheap
    delete from db.tabA
    where predicate_col < curdate() - interval 90 day
    order by `id` limit 10000;
    -- capture row_count() immediately: the next statement resets it
    set rows = row_count();
    set rows_deleted = rows_deleted + rows;
    select rows_deleted as "Rows Deleted";   -- progress output per chunk
 end while;
end //
delimiter ;
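To run it:

call huge_delete();

Adjust the table name, the predicate, and the LIMIT (the chunk size) to your case; 10,000 rows per chunk has worked well for me, but the sweet spot depends on row size and server load.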

This method is suited to deleting a huge amount of data (millions of rows) from a table. Again, make sure you have an index on the WHERE-clause column.

Note:

I set innodb_flush_log_at_trx_commit = 2 to reduce the total response time. There are trade-offs to consider before setting innodb_flush_log_at_trx_commit to 2: with this value the log is flushed to disk only about once per second, so a crash can lose up to a second of transactions. Make sure you understand this before proceeding with this value.
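The variable is dynamic, so you can change it at runtime and restore it once the delete is finished (SET GLOBAL requires the SUPER privilege):

set global innodb_flush_log_at_trx_commit = 2;
-- run the deletes, then restore the durable default:
set global innodb_flush_log_at_trx_commit = 1;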


This works with autocommit = 1, which is the default. If you run with explicit transactions instead, make sure you commit at every iteration.
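With autocommit disabled, a commit inside the loop keeps each chunk in its own transaction; the core of the procedure changes roughly like this:

while rows > 0
do
   delete from db.tabA where predicate_col < curdate() - interval 90 day order by `id` limit 10000;
   set rows = row_count();
   commit;   -- end the transaction after every chunk
end while;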

Caution:

If you are running this on a replication master, make sure the slave stays in sync with it. In my recent implementation the slave fell a long way behind the master; I fixed it by skipping some transactions and later used a comparison utility to bring the slave back in sync.

I recommend that you test these steps and confirm they work as expected before using them on a production server. I hope this post helps you get things done!