
MySQL with large table.

Hi,

I am using MySQL for a table which will have 100M+ records, the average
record length being 130 bytes.
When the number of records reaches approx. 25M (and the file size gets
close to 4GB), the insert rate falls drastically, from 800 per second to
30-40 per second.

Details:
* MySQL 3.23.58 on Fedora Core 3
* Table has 4 indices.
* I have got rid of the 4GB file size problem with MAX_ROWS=1000000000
(see the sketch after this list).
* File system: ext3 on a single disk. ext3 could create a 10GB file
without much trouble, so I am convinced that ext3 is not the
bottleneck.
* Tried the InnoDB engine, but it also doesn't meet the requirements.
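
For reference, this is the kind of statement I mean for MAX_ROWS (the
table name here is just a placeholder, not my actual schema):

  -- Raising MAX_ROWS makes MyISAM use larger internal row pointers,
  -- so the data file can grow past the 4GB limit.
  ALTER TABLE event_log MAX_ROWS=1000000000;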

Requirements of database:
* A single table in the database with 100M+ rows, each of size 130
bytes (approx).
* 500-600 inserts per second.
* 200 selects and 200 updates per second (each of these statements
affects only one row).
* 3-5 select statements per minute which can return 10k to 500k
records.
* No foreign keys/ACID transaction requirements.
* Fast recovery in case of crash.

Questions:
* Does MySQL performance drop when the table grows beyond 4GB?
* Which are the most important server variables that need to be
fine-tuned? Currently I have set only key_buffer = 512M (shown below).
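
For context, the relevant part of my.cnf currently looks roughly like
this (nothing else is tuned explicitly; the set-variable form is how I
assume it is typically written on 3.23, while 4.x also accepts
key_buffer_size = 512M directly):

  [mysqld]
  set-variable = key_buffer=512M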

Answers/advice/opinions/pointers much appreciated.

Thanks,

--shashi

Jul 23 '05 #1


Shashikant Kore wrote:
> Hi,
>
> I am using MySQL for a table which will have 100M+ records, the average
> record length being 130 bytes.
> When the number of records reaches approx. 25M (and the file size gets
> close to 4GB), the insert rate falls drastically, from 800 per second to
> 30-40 per second.
You might try using MERGE tables to spread the data over multiple
physical files that are each less than 4G. See:
http://dev.mysql.com/doc/mysql/en/me...ge-engine.html
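
Roughly, the idea looks like this (table names and columns are made up
for illustration, not your schema; note that on 3.23 you insert into the
underlying tables directly, since INSERT_METHOD only arrived in 4.0):

  -- Two underlying MyISAM tables with identical definitions
  CREATE TABLE log_part1 (
    id      INT UNSIGNED NOT NULL,
    payload CHAR(126)    NOT NULL,
    KEY (id)
  ) TYPE=MyISAM;

  CREATE TABLE log_part2 (
    id      INT UNSIGNED NOT NULL,
    payload CHAR(126)    NOT NULL,
    KEY (id)
  ) TYPE=MyISAM;

  -- The MERGE table presents both parts as one logical table for reads
  CREATE TABLE log_all (
    id      INT UNSIGNED NOT NULL,
    payload CHAR(126)    NOT NULL,
    KEY (id)
  ) TYPE=MERGE UNION=(log_part1, log_part2);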
> Details:
> * MySQL 3.23.58 on Fedora Core 3
Why such an old version of MySQL? Have you tried using a more recent
version to see if it makes a difference?
> * I have got rid of the 4GB file size problem with MAX_ROWS=1000000000.
> * File system: ext3 on a single disk. ext3 could create a 10GB file
> without much trouble, so I am convinced that ext3 is not the
> bottleneck.


Here's a posting from someone who says that switching from the ext3
filesystem to xfs or reiserfs helped get better performance from MySQL:
http://lists.mysql.com/benchmarks/126

Also, have you read the following web page?
http://dev.mysql.com/doc/mysql/en/insert-speed.html
There are several good suggestions there for speeding up inserts, such
as using LOAD DATA INFILE.
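
For example, something along these lines (file path and table name are
placeholders):

  -- Bulk-load rows from a tab-separated file; parsing and index
  -- maintenance are amortised over the whole file, which is much
  -- faster than many single-row INSERT statements.
  LOAD DATA INFILE '/tmp/rows.tsv'
  INTO TABLE log_part1
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n';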

Regards,
Bill K.
Jul 23 '05 #2

Thanks for the response, Bill.

Bill Karwin wrote:

> You might try using MERGE tables to spread the data over multiple
> physical files that are each less than 4G. See:
> http://dev.mysql.com/doc/mysql/en/me...ge-engine.html

MERGE tables proved to be of great help; the inserts are now faster. I
am not really looking into the option of LOAD DATA INFILE, as the
inserts are not batched. But if inserts get slower in the future, I will
definitely look into it.
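
For context, the general shape of such a setup (placeholder names along
the lines of the sketch above, not my real schema; on 3.23 the
single-row inserts have to target an underlying MyISAM table directly,
while from 4.0 on they can go through the MERGE table if it is created
with INSERT_METHOD=LAST):

  -- Single-row inserts go to one of the underlying tables...
  INSERT INTO log_part2 (id, payload) VALUES (42, 'example row');

  -- ...while lookups and large range scans use the MERGE table,
  -- which spans all the parts.
  SELECT * FROM log_all WHERE id = 42;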

>> Details:
>> * MySQL 3.23.58 on Fedora Core 3
>
> Why such an old version of MySQL? Have you tried using a more recent
> version to see if it makes a difference?


I cannot say confidently that performance has improved a lot. I need to
go through the change log to figure out the new tunable parameters in
4.1 that will improve performance.

>> * I have got rid of the 4GB file size problem with MAX_ROWS=1000000000.
>> * File system: ext3 on a single disk. ext3 could create a 10GB file
>> without much trouble, so I am convinced that ext3 is not the
>> bottleneck.
>
> Here's a posting from someone who says that switching from the ext3
> filesystem to xfs or reiserfs helped get better performance from MySQL:
> http://lists.mysql.com/benchmarks/126


I will look into the option of using XFS/JFS.

> Also, have you read the following web page?
> http://dev.mysql.com/doc/mysql/en/insert-speed.html
> There are several good suggestions there for speeding up inserts, such
> as using LOAD DATA INFILE.

Yeah, I had scanned through it. Since I run very basic queries, most of
the suggestions were not applicable.

Meanwhile, I also explored the option of using the BDB engine, but it
didn't work out.

If anyone has worked with MERGE tables, please give me some insights
into improving their performance. The manual doesn't say much about this.

Thanks,

--shashi


Jul 23 '05 #3
