Bytes | Software Development & Data Engineering Community
Trying to optimize a MySQL query.

Okay, well what my query looks like is:

SELECT id, title FROM myTable LIMIT 0, 500
This query is taking a long time, and the time changes each run, anywhere from 0 to 10 seconds.

And I don't see any reason for it. id is my PRIMARY KEY.

The table has 132,000 rows, but I don't think it should be taking this long. Do you think the server is just slow? Or do you see a problem with my SQL query?

Thanks for any response. Also if you think it's a problem with my server, could you recommend a database host that would be faster?
Feb 18 '07 #1

There is nothing wrong with that query.
It is possible that your server is slow, or that your connection is slow.

I'm using a server from bluehost.com that works pretty well and doesn't cost too much.
Feb 18 '07 #2
I don't think it's a connection problem, because when running it from just phpmyadmin I get...

Showing rows 0 - 499 (500 total, Query took 10.9797 sec)

If I use EXPLAIN I'm getting

id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra

(Sorry, that didn't come out well.) But it says it's looking through all 132,000 rows, which seems weird?

What I don't understand is that some of my other queries, like a simple SELECT * FROM myTable WHERE id=5, take 1/100 of a second. I could count the first 500 rows by hand and return them faster than my database is doing it. Does the other data in the row matter? It may be because my host uses a shared MySQL environment, but I suspect it's the query itself, since the other queries perform fine. Any more help would be great.


This query takes less than a second

SELECT * FROM `myTable` WHERE id <= 500

I think there is a problem with the LIMIT?
Feb 18 '07 #3
Okay, I'm just gonna use < and <= instead, because LIMIT seems to perform horribly. If anyone can tell me the reason behind this, please let me know.

And thanks atli for responding =].
Feb 18 '07 #4
I've checked this out, and it appears that the LIMIT clause can suffer from performance issues.

From what I've read, queries that set both an offset and a row count make MySQL read through every row up to the offset before returning anything.

Let's say we have a table with 100,000 entries.
This query will read the first 50 entries and then stop, finishing quickly.
SELECT * FROM mytable LIMIT 50
While this one reads and throws away the first 99,950 records before returning the next 50, which on large tables runs slowly.
SELECT * FROM mytable LIMIT 99950, 50

So I believe it is possible to rewrite your query and avoid the performance issue like so.
SELECT id, title FROM mytable LIMIT 500
Btw, if you use the WHERE clause like you posted, then if you have ever deleted (or ever delete) a record with an id less than 500, that query won't return 500 records.
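One common way around the deep-offset cost (a sketch, assuming the table is paged in ascending id order and the application remembers the last id of the previous page between requests) is to seek on the primary key instead of skipping rows:

```sql
-- Offset form: reads and discards 99,950 rows before returning anything.
SELECT id, title FROM mytable LIMIT 99950, 50;

-- Seek form: jumps straight to the starting row via the PRIMARY KEY.
-- 99950 here stands for the last id returned by the previous page,
-- whatever its actual value was.
SELECT id, title
FROM mytable
WHERE id > 99950
ORDER BY id
LIMIT 50;
```

The seek form stays fast no matter how deep the page is, because the WHERE clause lets the index locate the starting row directly instead of counting rows from the top.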
Feb 18 '07 #5
Well, the reason I used LIMIT 0, 500 is because sometimes I start at 500, 500 or 1000, 500, etc. So the LIMIT was really causing a performance strain. It seems to be okay now; I'll just try not to delete any rows =] (hopefully).
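If deletes do happen, the pages can still come out full (a sketch, assuming pages are fetched in ascending id order) by seeking from the last id actually returned rather than computing fixed id ranges:

```sql
-- First page:
SELECT id, title FROM myTable ORDER BY id LIMIT 500;

-- Next page: feed in the largest id the previous page actually returned
-- (e.g. 512 if some ids under 500 were deleted), instead of assuming
-- the ranges 1-500, 501-1000, and so on still line up:
SELECT id, title FROM myTable WHERE id > 512 ORDER BY id LIMIT 500;
```

This keeps every page at 500 rows regardless of gaps in the id sequence, while avoiding the offset scan.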
Feb 19 '07 #6

