Tuesday, 18 September 2018

[Solved] How to install MySQL Server on CentOS 7?


Recently, while working on setting up MySQL Enterprise Server, I found there is too much information available on the internet, and it is very difficult for a newbie to get exactly what is needed for a direct implementation. So I decided to write a quick reference guide for setting up the server, covering it end to end, from planning to production to maintenance. This is the first post in that direction; in it, we will discuss installing MySQL Enterprise Server on a CentOS 7 machine. Note that the steps are the same for both the Enterprise and Community editions; only the binary files differ, and they are downloaded from different repositories.

If you are looking to install MySQL on the Windows operating system, please visit this page: https://www.rathishkumar.in/2016/01/how-to-install-mysql-server-on-windows.html. I am assuming the hardware and operating system are installed and configured as required. Let us begin with the installation.

Removing MariaDB:

CentOS 7 ships with MariaDB as its default database; if you try to install MySQL on top of it, you will encounter an error message stating that the MySQL library files conflict with the MariaDB library files. Remove MariaDB first to avoid errors and to have a clean installation. Use the statements below to remove MariaDB completely:

sudo yum remove mariadb-server mariadb-libs
sudo rm -rf /var/lib/mysql
sudo rm /etc/my.cnf

(The sudo prefix is needed only if you are not logged in as root.)
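If you are not sure what is installed, you can list the MariaDB packages first (the output varies by system):

rpm -qa | grep -i mariadb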

Downloading RPM files:

The MySQL installation files (RPM packages, on CentOS 7) can be downloaded from the MySQL Yum repository.

For MySQL Community Edition, there is a clear, step-by-step guide available on the MySQL website: https://dev.mysql.com/doc/mysql-yum-repo-quick-guide/en/. The only step it glosses over is downloading the MySQL Yum repository package to your local machine. (This might look like a very simple step, but for most newbies it is very helpful.)
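For reference, downloading and installing the repository package looks roughly like this; the exact file name changes with each release, so copy the current link from the download page (the version below is only an example):

wget https://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm
sudo rpm -Uvh mysql57-community-release-el7-11.noarch.rpm
sudo yum install mysql-community-server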


For MySQL Enterprise Edition, the binary files can be downloaded from Oracle Software Delivery Cloud (http://edelivery.oracle.com) for the latest version; for previous versions, visit My Oracle Support (https://support.oracle.com/).

As mentioned earlier, there is a clear, step-by-step guide on the MySQL website for the Community Edition, so I will continue with installing the Enterprise Edition; the steps are almost the same.

Choosing the RPM file:

For MySQL Community Edition, all the RPM files are included in the downloaded Yum repository; for Enterprise Edition, the files are downloaded separately. (For system administration purposes, all of these files can be kept in a local MySQL repository.)

For newbies it may be confusing to understand the different RPM files and their contents, so I am concentrating only on the files required for a stand-alone MySQL instance. If you need embedded MySQL, or if you are developing plugins for MySQL, you can install the other packages as well; it depends entirely on your requirements. The following table describes the required files and where to install them.

RPM File: mysql-commercial-server-5.7.23-1.1.el7.x86_64.rpm
Description: MySQL Server and related utilities to run and administer a MySQL server.
Location: on the server

RPM File: mysql-commercial-client-5.7.23-1.1.el7.x86_64.rpm
Description: Standard MySQL clients and administration tools.
Location: on the server and on client machines

RPM File: mysql-commercial-common-5.7.23-1.1.el7.x86_64.rpm
Description: Common files needed by the MySQL client library, the MySQL database server, and the MySQL embedded server.
Location: on the server

RPM File: mysql-commercial-libs-5.7.23-1.1.el7.x86_64.rpm
Description: Shared libraries for MySQL client applications.
Location: on the server

Installing MySQL:

Install the RPM files in the following order to avoid dependency errors (common first, then libs, client and server, since each package depends on the ones before it). The following statements install MySQL on the local machine:

sudo yum localinstall mysql-commercial-common-5.7.23-1.1.el7.x86_64.rpm
sudo yum localinstall mysql-commercial-libs-5.7.23-1.1.el7.x86_64.rpm
sudo yum localinstall mysql-commercial-client-5.7.23-1.1.el7.x86_64.rpm
sudo yum localinstall mysql-commercial-server-5.7.23-1.1.el7.x86_64.rpm
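Alternatively, if all four RPM files are in the current directory, you can let yum resolve the installation order itself; a single command that should work on most systems:

sudo yum localinstall mysql-commercial-*.rpm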

Starting the MySQL Service:

On CentOS 7, the MySQL service can be started, and its status checked, as follows:

sudo systemctl start mysqld.service
sudo systemctl status mysqld.service
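Optionally, enable the service so that it starts automatically at boot:

sudo systemctl enable mysqld.service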

Logging in to MySQL Server for the first time:

Once the service has started for the first time, the superuser account 'root'@'localhost' is created and a temporary password is written to the error log file (default: /var/log/mysqld.log). The password can be retrieved with the following command:

sudo grep 'temporary password' /var/log/mysqld.log

As soon as you log in with the temporary password, you must reset the root password; until you do, you cannot run any queries on the MySQL server. Reset the root account password as follows:

mysql -u root -h localhost -p
alter user 'root'@'localhost' identified by 'NewPassword';
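Note that on Yum installations of MySQL 5.7 the validate_password plugin is enabled by default, so the new password must satisfy the policy (minimum length, mixed case, a digit and a special character). For example, with a documentation-style sample password:

alter user 'root'@'localhost' identified by 'MyNewPass4!';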

You can verify the MySQL edition and license by querying the server's version variables; the sample output below is from MySQL 8.0 Community Edition (GPL license) running on a Windows machine.

[Figure: MySQL license status output]
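A quick way to run this check from any client; the version_comment text differs between the Community and Enterprise builds:

select @@version, @@version_comment;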
Notes:

MySQL conflicts with MariaDB: if there is a conflict with MariaDB, you will see an error message like the one below:

file /usr/share/mysql/xxx from install of MySQL-server-xxx conflicts with file from package mariadb-libs-xxx

To resolve this error, remove the MariaDB server and its related files from the CentOS server; refer to the section Removing MariaDB above.

Can't connect to MySQL server: the MySQL server is installed but you are unable to connect from a client.

Check this page for possible causes and solutions: https://www.rathishkumar.in/2017/08/solved-cannot-connect-to-mysql-server.html

Please let me know in the comments section if you face any other errors. I hope this post is helpful.

Friday, 15 June 2018

[Solved] Unable to start SQL Server Service after applying service pack Error: Script level upgrade for database 'master' failed because upgrade step 'ISServer_upgrade.sql' encountered error 5069, state 1, severity 16

Problem:
SQL Server service not starting after service pack / cumulative update installation.
Error Message:
Script level upgrade for database 'master' failed because upgrade step 'ISServer_upgrade.sql' encountered error 5069, state 1, severity 16. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the 'master' database, it will prevent the entire SQL Server instance from starting. Examine the previous error log entries for errors, take the appropriate corrective actions and restart the database so that the script upgrade steps run to completion.
Root Cause:
From initial analysis, the script ISServer_upgrade.sql is embedded in a SQL Server DLL; it is not located anywhere on the file system.
The script upgrade is failing for some reason, which can be identified by starting SQL Server with trace flag 3601 and checking the error log for the exact statement running while the server is being updated:
NET START MSSQLServer /T902 /T3601
Since this script is shipped as part of SQL Server itself, the underlying defect is typically fixed by the next service pack or cumulative update.
Troubleshooting Steps:
Start the SQL Server Service with trace 902.
           NET START MSSQLServer /T902 




Download the latest cumulative update released on top of the installed service pack.

For example, if the SQL Server service does not start after installing Service Pack 2 on SQL Server 2016 RTM, install the latest cumulative update (CU2 at the time of writing) on top of Service Pack 2 while SQL Server is running with trace flag 902.
After successfully updating SQL Server with the cumulative update, remove the 902 trace flag and restart the SQL Server service from SQL Server Configuration Manager:
NET STOP MSSQLServer
NET START MSSQLServer
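Once the instance is back online, the patch level can be confirmed with standard server properties (ProductUpdateLevel is available on recent SQL Server versions):

SELECT SERVERPROPERTY('ProductVersion')     AS BuildVersion,
       SERVERPROPERTY('ProductLevel')       AS ServicePack,
       SERVERPROPERTY('ProductUpdateLevel') AS CumulativeUpdate;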

Saturday, 19 May 2018

How to set up MySQL InnoDB Cluster? Part One


This post is about setting up a MySQL InnoDB Cluster with 5 nodes as a sandbox deployment. Here we focus on the implementation; the core concepts will be explained in separate posts.

Prerequisites:
  • MySQL Engine
  • MySQL Shell
  • MySQL Router
Deploying MySQL InnoDB Cluster involves the following steps (a minimal MySQL Shell sketch follows the list):
  • Deploying MySQL Engine (Sandbox Instance)
  • Creating an InnoDB Cluster
  • Adding nodes to InnoDB Cluster
  • Configuring MySQL Router for High Availability.
  • Testing High Availability.
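To give a feel for what follows, here is a minimal sketch of the first three steps in MySQL Shell's JavaScript mode; the ports (3310, 3320) and the cluster name are arbitrary examples for a sandbox deployment:

mysql-js> dba.deploySandboxInstance(3310)
mysql-js> dba.deploySandboxInstance(3320)
mysql-js> \connect root@localhost:3310
mysql-js> var cluster = dba.createCluster('testCluster')
mysql-js> cluster.addInstance('root@localhost:3320')

The remaining nodes are added the same way, and MySQL Router is then bootstrapped against the cluster.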

Tuesday, 26 December 2017

How to find database and table size in MySQL?

As a Database Administrator (DBA), your job requires you to know the most atomic details of the databases on your server. It has happened to me many times: my boss or delivery manager asks for the size of a specific database or a specific table, and in that situation producing the right data helps to make the right decision. From experience I have learned that it is always better to say "I will give you the data in a few minutes" than to produce incorrect data, which I had been doing for a long time.

This post is about identifying the size of a database or table. I have been using this simple script for quite a long time, if I am not wrong since I started my career as a DBA. You have probably seen this or a similar script on other forums, and there are many other methods too; I am reproducing the handy script here to get the work done.







Size of a specific table:

select table_name as "Table Name",
sum(data_length+index_length)/1024/1024 as "Table Size in MB"
from information_schema.tables
where table_schema = 'Database Name' and
table_name = 'Table Name';
Size of a specific database:

select table_schema as "Database Name",
sum(data_length+index_length)/1024/1024 as "Database Size in MB"
from information_schema.tables
where table_schema = 'Database Name';








Size of all tables in a database with descending order:

select table_name as "Table Name",
sum(data_length+index_length)/1024/1024 as "Table Size in MB"
from information_schema.tables
where table_schema = 'Database Name'
group by table_name
order by 2 desc;

Size of all databases in descending order:

select table_schema as "Database Name",
sum(data_length+index_length)/1024/1024 as "Database Size in MB"
from information_schema.tables
group by table_schema
order by 2 desc;
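Keep in mind that data_length and index_length come from the table statistics, so the sizes are estimates rather than exact byte counts. If the numbers look stale, you can refresh the statistics first; the schema and table names below are placeholders:

analyze table `Database Name`.`Table Name`;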






This script is enough to get things done. For a better understanding, I am providing the table description of information_schema.tables here; it can be displayed by running the DESCRIBE command.

mysql> desc information_schema.tables;
+-----------------+---------------------+------+-----+---------+-------+
| Field           | Type                | Null | Key | Default | Extra |
+-----------------+---------------------+------+-----+---------+-------+
| TABLE_CATALOG   | varchar(512)        | NO   |     |         |       |
| TABLE_SCHEMA    | varchar(64)         | NO   |     |         |       |
| TABLE_NAME      | varchar(64)         | NO   |     |         |       |
| TABLE_TYPE      | varchar(64)         | NO   |     |         |       |
| ENGINE          | varchar(64)         | YES  |     | NULL    |       |
| VERSION         | bigint(21) unsigned | YES  |     | NULL    |       |
| ROW_FORMAT      | varchar(10)         | YES  |     | NULL    |       |
| TABLE_ROWS      | bigint(21) unsigned | YES  |     | NULL    |       |
| AVG_ROW_LENGTH  | bigint(21) unsigned | YES  |     | NULL    |       |
| DATA_LENGTH     | bigint(21) unsigned | YES  |     | NULL    |       |
| MAX_DATA_LENGTH | bigint(21) unsigned | YES  |     | NULL    |       |
| INDEX_LENGTH    | bigint(21) unsigned | YES  |     | NULL    |       |
| DATA_FREE       | bigint(21) unsigned | YES  |     | NULL    |       |
| AUTO_INCREMENT  | bigint(21) unsigned | YES  |     | NULL    |       |
| CREATE_TIME     | datetime            | YES  |     | NULL    |       |
| UPDATE_TIME     | datetime            | YES  |     | NULL    |       |
| CHECK_TIME      | datetime            | YES  |     | NULL    |       |
| TABLE_COLLATION | varchar(32)         | YES  |     | NULL    |       |
| CHECKSUM        | bigint(21) unsigned | YES  |     | NULL    |       |
| CREATE_OPTIONS  | varchar(255)        | YES  |     | NULL    |       |
| TABLE_COMMENT   | varchar(2048)       | NO   |     |         |       |
+-----------------+---------------------+------+-----+---------+-------+
21 rows in set (0.00 sec)






I hope this post helps you complete your task quickly. Please write your comments on this post and let me know if there are other simple methods to achieve this task. Thanks for your time.

Friday, 1 December 2017

Deleting huge number of records in MySQL

This is a short post about deleting data from a huge table in MySQL. Most of us have experienced it: deleting a huge number of records from a MySQL table can take a long time, sometimes hours to delete millions of records, and on production servers it blocks other operations on the table as well. Recently, I deleted around 70 million records from a production database in less than an hour. There are multiple workarounds, but I am writing about the two methods I use most frequently for this operation:
  • Using intermediate table.
  • Delete data in small chunks.
Before proceeding with either of these methods, make sure the table has the required indexes on the WHERE clause columns and that you have a backup copy of the table.

Using intermediate table:

In this method, create a new table with the same structure and copy only the required data into it. Then rename the original table as an archive or backup table, and rename the new table to the original name.








Example:

Table `tabA` has 10 million records and you need only the 1 million records from the last month. The model query is as follows:

CREATE TABLE `tabA_copy` as select * from `tabA` where predicate_col > curdate() - interval 30 day;
Now the table tabA_copy will have only the required data.

RENAME TABLE `tabA` TO `tabA_archive`, `tabA_copy` TO `tabA`;
Performing both renames in a single statement makes the swap atomic, so no client ever sees the table missing in between. Note that CREATE TABLE ... AS SELECT copies only the data and column definitions, so recreate any needed indexes on the new table before the rename.
This method is useful when you want to keep only a small percentage of a huge table's data, since moving a small portion of the data to a new table is much cheaper than deleting the huge remainder.









Delete data in smaller chunks:

In this method, the data is deleted in smaller chunks; even in the case of a rollback, it is cheaper to roll back a small data set. I have been using the following procedure to implement this method. In reality it is a dynamic procedure with a PREPARED statement to satisfy all requirements; for simplicity, I am showing only the core part of it.

drop procedure if exists huge_delete;
delimiter //
create procedure huge_delete()
begin
 -- ROWS is a reserved word from MySQL 8.0 onwards, so the variables are prefixed.
 declare v_rows int default 1;
 declare v_rows_deleted int default 0;
 while v_rows > 0
 do
    delete from db.tabA
    where predicate_col < curdate() - interval 90 day
    order by `id`
    limit 10000;
    -- row_count() is reset by the next statement, so capture it exactly once.
    set v_rows = row_count();
    set v_rows_deleted = v_rows_deleted + v_rows;
    select v_rows_deleted as "Rows Deleted";
 end while;
end //
delimiter ;
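The procedure is then run as usual:

call huge_delete();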






This method is suited to deleting a huge data set (millions of rows) from a table. Make sure you have an index on the WHERE clause column.

Note:

I am using innodb_flush_log_at_trx_commit = 2 to reduce the total response time. There are certain points to consider before setting innodb_flush_log_at_trx_commit to 2; take a look at this article before you proceed with this value.


This procedure runs with autocommit = 1, which is the default. If you run it inside explicit transactions, make sure you commit the transaction at every iteration.

Caution:

If you are running this on a replication master, make sure the slave stays in sync with the master. In my recent implementation, the slave fell a long way behind the master; I fixed it by skipping some transactions and later used a comparison utility to bring it back in sync.

I recommend that you test these steps and confirm they work as expected before using them on a production server. I hope this post helps you get things done!






Friday, 13 October 2017

Understanding MySQL innodb_flush_log_at_trx_commit variable

The main objective of this post is to understand why the innodb_flush_log_at_trx_commit variable is used, what values it accepts, how its value impacts MySQL performance and data safety, how to change its value, and how to change the frequency of the InnoDB log flush.

Why innodb_flush_log_at_trx_commit?


In order to understand why we need the innodb_flush_log_at_trx_commit variable, we should know how InnoDB works. That is a huge and complex topic, beyond the scope of this article, but in simple words it comes down to this:

  1. InnoDB performs most of its operations in memory (the InnoDB buffer pool)
  2. It writes all changes from memory to the transaction log (the InnoDB log file)
  3. From the transaction log, it flushes (writes) the data to the storage disk (durable storage)

To achieve durability, every transaction's data must be stored on disk. But consider a busy system: if InnoDB flushes (writes) to a slow disk for every transaction commit, what happens to throughput? So how do we manage the situation where we need to store every transaction's data and, at the same time, maintain good performance?

InnoDB provides the solution: based on your system, you can tell InnoDB when to flush (write) data to disk. For example, you can tell InnoDB to work in one of the following ways:

  1. Write to the log file and flush to disk at a specified interval, not at every transaction commit.
  2. Write to the log and flush to disk at every transaction commit.
  3. Write to the log at every transaction commit, but flush to disk at an interval rather than at each commit.






Based on your application's requirements, you can adjust these settings to balance performance against data safety. InnoDB provides a configurable variable to achieve this balance: innodb_flush_log_at_trx_commit. This variable controls how often the log buffer is flushed.

What are the applicable values for innodb_flush_log_at_trx_commit and their impact?


InnoDB supports the following three values for the innodb_flush_log_at_trx_commit variable. Let us see how each value changes the flush-to-disk behaviour:

innodb_flush_log_at_trx_commit=0

When you set innodb_flush_log_at_trx_commit = 0, InnoDB writes the modified data (from the InnoDB buffer pool) to the log file (ib_logfile) and flushes the log file (writes it to disk) once every second, but it does nothing at transaction commit.

Here, in case of a power failure or system crash, the unflushed data of the last second is not recoverable, since it was neither written to the log file nor stored on disk.

innodb_flush_log_at_trx_commit=1

When you set innodb_flush_log_at_trx_commit = 1, InnoDB writes the log buffer to the transaction log and flushes it to durable storage for every transaction commit.

Here, for every transaction commit, InnoDB writes to the log and then to disk. With slower disk storage this badly impacts performance: the number of InnoDB transactions per second is reduced.

innodb_flush_log_at_trx_commit=2

When you set innodb_flush_log_at_trx_commit = 2, InnoDB writes the log buffer to the log file at every commit but does not flush it to disk; InnoDB flushes the log to disk once every second.
With option 2, if only the MySQL process crashes, the committed data survives in the log file (via the operating system's cache) and is recoverable; only a power failure or operating system crash can lose up to one second of transactions.






So which option should be used for your application?


We have seen the behaviour of each value; the choice now depends on the application's requirements.

If performance is the main goal, you can set the value to 2. InnoDB then writes to disk once every second rather than on every transaction commit, which improves performance dramatically. If MySQL crashes, committed data is still recoverable from the transaction log; only a power failure or OS crash can lose up to one second of data.

If data safety is the main goal, set the value to 1, so that InnoDB flushes to disk at every transaction commit. Performance may be reduced in this case.

If you set the value to 0, InnoDB writes from the buffer to the log once every second and performs no flush-to-disk operation at transaction commit. The problem with this option is that, in case of a power failure or system crash, you may lose up to one second of data.

Key points:


During the buffer-to-log operation, i.e. writing from the InnoDB buffer pool to the InnoDB transaction log file, the data is simply moved to the operating system's cache and not actually written to durable storage. So if you set innodb_flush_log_at_trx_commit to either 0 or 2, there is a possibility of losing up to one second of data.

If innodb_flush_log_at_trx_commit is set to 1, InnoDB compels the operating system to flush the data to durable storage. Writing to disk is a slow process and a blocking I/O operation, so with this option the number of transactions per second may be reduced.

Note that, by default, MySQL auto-commits transactions.

Setting innodb_flush_log_at_trx_commit to 2 is useful when restoring huge databases: it can reduce the restoration time dramatically. There are different opinions on this point, so it is better to test it yourself; in my experience, it really did reduce the restoration time.

Data Definition Language (DDL) changes flush the InnoDB log, independent of the innodb_flush_log_at_trx_commit setting.

Crash recovery works regardless of the innodb_flush_log_at_trx_commit setting; I am writing another article on InnoDB crash recovery.







How to configure the innodb_flush_log_at_trx_commit variable?


The scope of innodb_flush_log_at_trx_commit is GLOBAL and it can be set dynamically, without restarting the server.

On the command line, you can set innodb_flush_log_at_trx_commit dynamically as follows:

SET GLOBAL innodb_flush_log_at_trx_commit = 2;

In the configuration file, you can set it as follows:

[mysqld]
innodb_flush_log_at_trx_commit = 2

Note: a change in the configuration file takes effect only after a server restart. Before making the change permanent, analyse its impact by setting the variable dynamically.

If you try to set innodb_flush_log_at_trx_commit as a session-level variable, you will encounter the following error:

mysql> set session innodb_flush_log_at_trx_commit = 2;
ERROR 1229 (HY000): Variable 'innodb_flush_log_at_trx_commit' is a GLOBAL variable and should be set with SET GLOBAL
mysql>
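You can verify the current value at any time:

show global variables like 'innodb_flush_log_at_trx_commit';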

How to change the frequency of InnoDB log file flushing?


The variable innodb_flush_log_at_timeout controls the frequency with which InnoDB flushes its log files. The value ranges from 1 to 2700 seconds, with a default of 1.
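For example, to flush the log once every five seconds (dynamic, GLOBAL scope):

SET GLOBAL innodb_flush_log_at_timeout = 5;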






Note that the higher this number, the higher the chance of losing data in case of a power failure or system crash.

For example, if you set this value to 5 seconds, a power failure may cost you up to 5 seconds of data.

In a replication topology, to maintain durability and consistency of data, leave the default value, i.e. innodb_flush_log_at_trx_commit = 1.

I hope this article helps you understand how InnoDB flushes data, how that impacts MySQL performance and data safety, and how you can configure this variable for maximum benefit.

In a future article we will look at how InnoDB crash recovery works and its configuration settings. If I missed something, or if you wish to share your thoughts on this article, please mention it in the comments section and I will edit the post after review.




Tuesday, 22 August 2017

How to store IP (internet protocol) address in MySQL?

An Internet Protocol address is a special kind of data that does not correspond directly to any MySQL built-in data type. I have seen many servers where the IP (Internet Protocol) address is stored in CHAR(12), VARCHAR(15), TEXT and so on.

The dotted-quad notation is just a human-readable way of writing the address; it is not the raw form of the data. An IPv4 address is an UNSIGNED INTEGER, not a string.

So the question is: how can we store dotted IP addresses in integer columns? MySQL provides built-in functions for exactly this conversion:







For IPv4 addresses:

INET_ATON()
mysql> select inet_aton('127.0.0.1');

+------------------------+
| inet_aton('127.0.0.1') |
+------------------------+
|             2130706433 | 
+------------------------+
1 row in set (0.11 sec)
INET_NTOA()
mysql> select inet_ntoa('2130706433');

+-------------------------+
| inet_ntoa('2130706433') |
+-------------------------+
| 127.0.0.1               | 
+-------------------------+
1 row in set (0.02 sec)
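To put this into practice, here is a minimal sketch of a table that stores IPv4 addresses as integers; the table and column names are the hypothetical ones used in the note below:

CREATE TABLE `user` (
  name VARCHAR(50) NOT NULL,
  ipaddress INT UNSIGNED NOT NULL,
  KEY idx_ipaddress (ipaddress)
);

INSERT INTO `user` (name, ipaddress) VALUES ('alice', INET_ATON('127.0.0.1'));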







For IPv6 addresses:

MySQL provides the following built-in functions to store and retrieve IPv6 addresses. IPv6 addresses cannot be stored as integers, since a numeric-format IPv6 address requires more bytes than an UNSIGNED BIGINT can hold; the functions below therefore work with VARBINARY(16) data. Let us see some examples.

mysql> select hex(inet6_aton('127.0.0.1'));
+------------------------------+
| hex(inet6_aton('127.0.0.1')) |
+------------------------------+
| 7F000001                     |
+------------------------------+
1 row in set (0.00 sec)

mysql> select hex(inet6_aton('2001:0db8:85a3:0000:0000:8a2e:0370:7334'));
+------------------------------------------------------------+
| hex(inet6_aton('2001:0db8:85a3:0000:0000:8a2e:0370:7334')) |
+------------------------------------------------------------+
| 20010DB885A3000000008A2E03707334                           |
+------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> select inet6_ntoa(unhex('20010DB885A3000000008A2E03707334'));
+-------------------------------------------------------+
| inet6_ntoa(unhex('20010DB885A3000000008A2E03707334')) |
+-------------------------------------------------------+
| 2001:db8:85a3::8a2e:370:7334                          |
+-------------------------------------------------------+
1 row in set (0.00 sec)


Note:


Assume you are writing a query to find the users who connected from the Internet Protocol address 127.0.0.1. The following query could be derived:

SELECT name FROM user WHERE inet_ntoa(ipaddress) = '127.0.0.1';

Be aware that this query will not use the index created on the ipaddress column: because the function is applied to the indexed column, the server has to convert the stored integer into an IP address row by row. The workaround is:

SET @ip = inet_aton('127.0.0.1');
SELECT name FROM user WHERE ipaddress = @ip;
(or)


SELECT name FROM user WHERE ipaddress = inet_aton('127.0.0.1');
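To confirm, check the execution plan; the exact output depends on your MySQL version and data:

EXPLAIN SELECT name FROM user WHERE ipaddress = inet_aton('127.0.0.1');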



Now if you check the query execution plan, you will see that the query uses the index on the ipaddress column. I hope this post helps you. Please share your views in the comments section, and if you have better methods, please share those as well.