MySQL Notes

Extending MySQL's 4GB table limit to 4.2 billion rows

MySQL is a powerful free database that is widely used in web development. However, as data begins to grow, the most common question I get about MySQL is

  • What are the limits of the MySQL database?

If one looks around the internet, there are many discussions of the 4GB limit on MySQL table size. That is very true. But is that the maximum a MySQL database can do? Recently, I was asked this question. After a bit of surfing, I came across this site http://jeremy.zawodny.com/blog/archives/000796.html which mentions a way to beat the 4GB limit. Apparently, you should be able to get MySQL to hold up to 4.2 BILLION rows. That's right. BILLION. That should take a table way past the 4GB mark, and I am going to try it out.

Infrastructure

The MySQL server sits on a 1GHz Pentium III Linux machine running Fedora Core 2. The MySQL version is Ver 11.18 Distrib 3.23.58, for redhat-linux-gnu (i686).

What does this mean? It means I am using very old and slow hardware, definitely 32-bit, with a rather old version of MySQL (the current version at the time of writing is v5.1). My thinking is that if this works on such a machine, it should work anywhere.

A test script was written in Perl to insert data into a test table with the following columns:

  • 1 UID column with auto-increment
  • 1 BIGINT column with a manually inserted value
  • 10 VARCHAR columns of size 255, with values locked to random md5sum hashes I generated from various files

I also used phpMyAdmin to assist in creating the table and adding fields. It's merely a tool that makes my life a lot easier.
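To make the setup concrete, here is a rough sketch of what the table definition looks like. The column names are my own for illustration (the original table was simply called test and was put together through phpMyAdmin), but the shape matches the description above, and the table type is MyISAM as the status output below confirms. On MySQL 3.23 the table type is given with TYPE= rather than the later ENGINE= syntax.

CREATE TABLE test (
  uid     INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- UID column with auto-increment
  big_val BIGINT NOT NULL,                       -- manually inserted value
  md5_01  VARCHAR(255) NOT NULL,
  md5_02  VARCHAR(255) NOT NULL,
  md5_03  VARCHAR(255) NOT NULL,
  md5_04  VARCHAR(255) NOT NULL,
  md5_05  VARCHAR(255) NOT NULL,
  md5_06  VARCHAR(255) NOT NULL,
  md5_07  VARCHAR(255) NOT NULL,
  md5_08  VARCHAR(255) NOT NULL,
  md5_09  VARCHAR(255) NOT NULL,
  md5_10  VARCHAR(255) NOT NULL,
  PRIMARY KEY (uid)
) TYPE=MyISAM;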

Inserting the data

The Perl script adds 7,500,000 rows at a time, and it took a little over 3 hours to fully populate the table. I constantly used the MySQL shell to monitor the status of the table. Here is the table status when it actually hit the 4GB (4,294,967,295-byte) ceiling. At this point, when I try to add more data, I get an error saying "DBD::mysql::st execute failed: The table 'test' is full at testdb.pl line 24". GREAT!

mysql> show table status like "test" \G
*************************** 1. row ***************************
     Name: test
     Type: MyISAM
     Row_format: Dynamic
     Rows: 6279191
     Avg_row_length: 684
     Data_length: 4294966644
     Max_data_length: 4294967295
     Index_length: 77326336
     Data_free: 0
     Auto_increment: 6279192
     Create_time: 2007-04-17 10:08:47
     Update_time: 2007-04-17 11:22:10
     Check_time: NULL
     Create_options:
     Comment:
1 row in set (0.00 sec)

Take a look at Max_data_length and Data_length. Goodness. This is a database full of junk!
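For the record, the statement the Perl script keeps executing against this table is essentially of the following shape. The column names follow the illustrative schema sketched earlier, and the md5 value shown is just a placeholder; in the real run each of the ten VARCHAR columns was locked to a different md5sum.

INSERT INTO test (big_val, md5_01, md5_02, md5_03, md5_04, md5_05,
                  md5_06, md5_07, md5_08, md5_09, md5_10)
VALUES (1234567890,
        'd41d8cd98f00b204e9800998ecf8427e', 'd41d8cd98f00b204e9800998ecf8427e',
        'd41d8cd98f00b204e9800998ecf8427e', 'd41d8cd98f00b204e9800998ecf8427e',
        'd41d8cd98f00b204e9800998ecf8427e', 'd41d8cd98f00b204e9800998ecf8427e',
        'd41d8cd98f00b204e9800998ecf8427e', 'd41d8cd98f00b204e9800998ecf8427e',
        'd41d8cd98f00b204e9800998ecf8427e', 'd41d8cd98f00b204e9800998ecf8427e');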

Expanding the table

Here comes the test: alter the table so that it can break the 4GB barrier. The command used is simple:

mysql> alter table test max_rows=200000000000;

This command takes a while to complete, but once it is done, the table status looks like this:

mysql> show table status like "test" \G
*************************** 1. row ***************************
     Name: test
     Type: MyISAM
     Row_format: Dynamic
     Rows: 6330038
     Avg_row_length: 684
     Data_length: 4329745992
     Max_data_length: 281474976710655
     Index_length: 91130880
     Data_free: 0
     Auto_increment: 6330039
     Create_time: 2007-04-17 11:26:49
     Update_time: 2007-04-17 11:41:51
     Check_time: NULL
     Create_options: max_rows=4294967295
     Comment:
1 row in set (0.03 sec)

Check out the limit placed on max_rows in the create options. Although the ALTER asked for 200,000,000,000 rows, MySQL capped it at 4,294,967,295 (2^32 - 1): an impressive 4.2 BILLION, and the source of the row-count figure in the title. More importantly, Max_data_length has jumped from 4,294,967,295 bytes to 281,474,976,710,655 bytes (2^48 - 1), so the table is no longer bound by the old 4GB ceiling. Secondary checks on the data in the table also show that nothing gets wiped out; the rows sit there just as they did before. I.e. no data is lost!

When I try to run my test Perl script again... it works! Data continues to get added to the table! Further probes into the database files also show the table's data file growing steadily past the old limit. Isn't that great?!
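For anyone repeating the exercise, a couple of standard commands are enough to confirm that the data survived and that the table keeps growing; these are generic checks rather than a dump of my exact session:

mysql> select count(*) from test;
mysql> check table test;
mysql> show table status like "test" \G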