
MySQL underperforming - trying to identify bottleneck


Currently we have a database with a main table containing 3 million
records - we want to increase that to 10 million, but that's not a
possibility at the moment.
Nearly all 3 million records are deleted and replaced every day, all
through the day. Currently we're handling this by having 2 sets of
tables: one for inserting, one for searching.

A block of records (10k - 1 million, distinguished by a client
identifier field) is deleted from the 'alt' set of tables, then records
are inserted from CSV files using LOAD DATA INFILE (the CSV file is
created by loading XML or CSV files in proprietary client formats, then
validating and rewriting the data in our format).
To facilitate faster search times, summary tables are updated from the
latest update - i.e. INSERT INTO summarytable SELECT fields FROM
alttable JOIN supportingtables WHERE clientID = $clientID.
Then we LOAD INDEX INTO CACHE for all the relevant tables (key_buffer
is set to 512MB).
Then we switch a flag in an info table to tell the searches to start
pulling from the updated tables, and then we repeat the process on the
table that was previously the search table.
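The cycle above can be sketched roughly as follows. This is only an
illustration of the pattern described - the table names, column names
and file path here are hypothetical, not the actual schema:

```sql
-- 1. Refresh one client's block on the offline ('alt') set
DELETE FROM alt_offers WHERE clientID = 17;
LOAD DATA INFILE '/tmp/client17.csv' INTO TABLE alt_offers
  FIELDS TERMINATED BY ',' ENCLOSED BY '"';

-- 2. Rebuild the matching summary rows from the fresh data
DELETE FROM alt_summary WHERE clientID = 17;
INSERT INTO alt_summary
  SELECT o.clientID, o.fldResort, MIN(o.fldPrice), COUNT(*)
  FROM alt_offers o
  JOIN supporting_table s ON s.id = o.support_id
  WHERE o.clientID = 17
  GROUP BY o.fldResort;

-- 3. Warm the key cache, then flip the flag the searches read
LOAD INDEX INTO CACHE alt_offers, alt_summary;
UPDATE info SET active_set = 'alt';
```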

During this time even simple queries can end up in the slow query log,
and I can't figure out why.

This query benchmarks at approx 0.25s:
SELECT fldResort AS dest_name, fldResort AS ap_destname,
fldDestinationAPC, MIN(fldPrice) AS price, fldCountry AS country,
fldBoardBasis, fldFlyTime, SUM(fldOfferCount) AS offercount
FROM tblSummaryFull WHERE fldStatus = 0 AND fldDepartureDate >=
'2006-12-27' AND fldDepartureDate <= '2007-01-02' AND fldDuration >= 7
AND fldDuration <= 7 AND tblSummaryFull.fldSearchTypes LIKE '%all%'
GROUP BY dest_name, fldBoardBasis ORDER BY price
EXPLAIN shows Using where, Using temporary and Using filesort, with a
key length of 3, and around 23k rows examined.
The log reads:
Query_time: 11 Lock_time: 0 Rows_sent: 267 Rows_examined: 23889
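A key_len of 3 suggests only the first (short) column of whatever index
is chosen is being used, so most of the WHERE clause is filtered row by
row. One direction worth testing is a composite index covering the
equality and range predicates - a sketch only, since the column order
here is a guess at selectivity rather than taken from the real schema:

```sql
-- Hypothetical composite index for the summary query above.
-- Note: the leading-wildcard LIKE '%all%' can never use an index,
-- and grouping on one set of columns while ordering on another will
-- still require a temporary table and filesort regardless.
ALTER TABLE tblSummaryFull
  ADD INDEX idx_status_depart_duration
    (fldStatus, fldDepartureDate, fldDuration);

-- Then re-run EXPLAIN on the query and compare key, key_len and rows.
```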

But even the most basic queries are being affected

SELECT * FROM tblResortInfo WHERE fldClientID=17 AND fldAccomRef='3883'

Benchmarked at 0.02s (there are 0 results for this query).
From the log: # Query_time: 11 Lock_time: 0 Rows_sent: 0 Rows_examined: 1

The site is at very low traffic atm (around 3k visitors per day).

I'm doing everything I can to improve performance and query speeds
before next summer (when we're aiming for around 30k per day), but I
can't seem to do anything about this, and if queries won't run at their
optimal speed then all this work has been for nothing.

It's probably worth noting that our CPU usage is barely at 50% - ditto
with RAM.

Nov 2 '06 #1
On 2 Nov 2006 04:09:45 -0800, in mailing.database.mysql "NancyJ"
<ha***@hazelryan.co.uk> wrote:

| Nearly all 3 million records are deleted and replaced every day

Why?

| During this time even simple queries can end up in the slow query log
| and I can't figure out why.

What indices have you set for the table(s)?

---------------------------------------------------------------
jn******@yourpantsyahoo.com.au : Remove your pants to reply
---------------------------------------------------------------
Nov 2 '06 #2

Jeff North wrote:
| | Nearly all 3 million records are deleted and replaced every day
| Why?

Because they change every day - we have around 30 data suppliers, and
every day they supply us with a new file. Sometimes they want to add to
their current dataset, sometimes they want to replace it with a whole
new data set.
| What indices have you set for the table(s)?

We have nearly 100 tables - it would take all day to list every index.
Under good conditions all our uncached queries are fast. I'm trying to
find the cause of simple queries - ones that aren't locked or being
limited by CPU or memory - going 1000 times slower than they should.
Nov 2 '06 #3
NancyJ wrote:
| Nearly all 3 million records are deleted and replaced every day - all
| through the day - currently we're handling this by having 2 sets of
| tables - 1 for inserting, 1 for searching.
| [snip]
| It's probably worth noting that our CPU usage is barely at 50% - ditto
| with RAM
It shouldn't really matter why a DBA deletes or adds tables or index
fields - the server should be, and is, able to handle this and then
some if you have the right configuration.

Having said that, turn on slow query logging on your server and start
looking at what is causing the bottlenecks with the mysqldumpslow
command. It will give you a somewhat aggregated tally of what is going
on with all of your queries. Using those results, start creating
indexes for the guilty parties. Can't get any simpler than that, eh?

You should set the long_query_time parameter (it's a threshold
parameter) in the my.cnf file. Usually that is set at 5 seconds, or
whatever you feel is the right number.
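The settings described above would look roughly like this in my.cnf.
This is a sketch using the MySQL 4.x/5.0-era option names; the log path
is illustrative:

```ini
[mysqld]
# Log any statement that takes longer than 2 seconds
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 2
# Optionally also log queries that use no index at all
log-queries-not-using-indexes
```

The resulting log can then be summarised with
`mysqldumpslow /var/log/mysql/slow.log`, which groups similar queries
and tallies their counts and times.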
Nov 2 '06 #4
On 2 Nov 2006 08:41:30 -0800, in mailing.database.mysql "NancyJ"
<ha***@hazelryan.co.uk> wrote:

| Because they change every day - we have around 30 data suppliers, and
| every day they supply us with a new file. Sometimes they want to add
| to their current dataset, sometimes they want to replace it with a
| whole new data set.

Just wanting clarification of why it was necessary to delete the
records :-)

What method are you using to delete the records - DELETE FROM or
TRUNCATE TABLE?
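The distinction matters because the two clear a table out very
differently. A short sketch (the table name is illustrative, not from
the actual schema):

```sql
-- DELETE removes matching rows one by one; with MyISAM this leaves
-- holes in the data file until the table is rebuilt, and it is the
-- only option when replacing a single client's block.
DELETE FROM alt_offers WHERE clientID = 17;

-- TRUNCATE drops and recreates the table, which is much faster but
-- can only empty the *entire* table - no WHERE clause is allowed.
TRUNCATE TABLE alt_offers;
```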
| We have nearly 100 tables - it would take all day to list every index.
| Under good conditions all our uncached queries are fast. I'm trying to
| find the cause of simple queries that aren't locked or being limited
| by CPU or memory, going 1000 times slower than they should.

This may not be a database issue. If you are deleting and recreating
tables/files, the actual data may be fragmented on the hard drive. Have
you tried defragging your drive?
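Fragmentation can also build up inside the table files themselves, not
just at the filesystem level - with MyISAM, heavy delete/reload cycles
leave unreclaimed holes in the data file. A sketch of checking and
fixing that from within MySQL (using a table name from earlier in the
thread purely as an example):

```sql
-- A non-zero Data_free in the output indicates unreclaimed space
SHOW TABLE STATUS LIKE 'tblSummaryFull';

-- Rewrites the data file and sorts the indexes; note this locks the
-- table while it runs
OPTIMIZE TABLE tblSummaryFull;
```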
Nov 2 '06 #5

This thread has been closed and replies have been disabled. Please start a new discussion.
