Bytes IT Community

Select Query

P: 43
select c19, name, c5, count(*) as count, sum(c13) as cost
from TableA
where c1 like '%'
  and c5 like '%'
  and name like 'bravo'
  and c19 between '2009-01-01 00:00:00' and '2012-01-01 00:00:00'
group by date(c19), name, c5
limit 10000 offset 0
I am using MySQL version 4.1 and have a table with 2.5 million records; the table structure is given below. If I run the above select query, it takes 19.33 sec.
I have some questions:
1. Is this an acceptable time in terms of MySQL? (For me it's not.)
2. How can I reduce this time to improve performance?

I have tried a multi-column index, but it didn't help. Currently I am using an index on name. The time is proportional to the number of records fetched.

Explain results:

id | select_type | table | type | possible_keys | key | key_len | ref | rows |Extra
1 | SIMPLE | TableA | range| name | name | 15 | NULL | 256903 | Using where; Using temporary; Using filesort

Table structure:

CREATE TABLE TableA (
  msg_id varchar(20) NOT NULL default '',
  name varchar(15) NOT NULL default '',
  c1 varchar(50) NOT NULL default '',
  c2 varchar(15) default NULL,
  c3 varchar(15) default NULL,
  c4 tinyint(3) unsigned NOT NULL default '0',
  c5 tinyint(3) unsigned NOT NULL default '0',
  c6 varchar(15) default NULL,
  c7 tinyint(3) unsigned NOT NULL default '0',
  c8 tinyint(3) unsigned NOT NULL default '0',
  c9 tinyint(3) unsigned NOT NULL default '0',
  c10 tinyint(3) unsigned NOT NULL default '0',
  c11 tinyint(3) unsigned NOT NULL default '0',
  c12 tinyint(3) unsigned NOT NULL default '0',
  c13 decimal(12,5) NOT NULL default '0.00000',
  c14 varchar(15) NOT NULL default '',
  c15 int(5) unsigned default '0',
  c16 int(5) unsigned default '0',
  c17 varchar(20) NOT NULL default '',
  c18 tinyint(3) unsigned default '0',
  c19 varchar(50) NOT NULL default '0000-00-00 00:00:00',
  c20 datetime NOT NULL default '0000-00-00 00:00:00',
  c21 varchar(15) default 'PENDING',
  c22 int(3) unsigned default '0',
  c23 varchar(5) NOT NULL default 'false',
  PRIMARY KEY (msg_id),
  KEY uname (username)
Please suggest how I can reduce the query time.
Thanks in advance.
Mar 4 '09 #1
6 Replies

Expert 100+
P: 801
I cannot see any useful index here. c19 should be of type datetime, not varchar. An index on c19 would be helpful.
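A minimal sketch of that suggestion, assuming the column really holds datetime values and can be converted in place (the index name idx_c19 is invented for illustration):

```sql
-- Hypothetical sketch: convert c19 to datetime, then index it.
ALTER TABLE TableA MODIFY c19 datetime NOT NULL default '0000-00-00 00:00:00';
ALTER TABLE TableA ADD INDEX idx_c19 (c19);
```

With a datetime-typed, indexed c19, the BETWEEN predicate in the original query can use a range scan instead of comparing strings.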
Mar 4 '09 #2

P: 43
Thanks for your quick response. Sorry, my mistake: I changed the field names before submitting the post. c19 is not varchar; it is the same type as c20 (datetime), and the index is on name, not username. I have already tried an index on c19, with the same result.
Mar 4 '09 #3

P: 43
Now I dropped the index on name and added a new index on c19. The same query now returns in about 39 sec; the time has doubled, so that didn't work. Anything else I can try? Also, one question: what would be an acceptable time for a query like this in MySQL?
Mar 4 '09 #4

Expert 100+
P: 785
A multi-column index has no effect here, because your query only makes use of one index. It would make sense and speed up the query if you rewrote the query to reference all the indexed columns.

How to speed up performance:
1.) Rewrite your query:
You shouldn't use "like" everywhere; "like" is very slow. Instead of "c1 like '%'", write "c1 is not null". Instead of "name like 'bravo'", write "name = 'bravo'". I hope you have an index on the "name" column! You can also delete "name" from your group-by clause. Put the most restrictive condition first, i.e. place "name = 'bravo'" right after the where clause.
2.) Use a shadow table: add a trigger that writes to a second, new table (the shadow table) on every update of the big table. The trigger should already apply the sum and the group-by, and the shadow table should contain only the needed columns (c19, name, c5, count, cost), so that it is very small and can be held in memory. Of course the shadow table should also have indexed columns (for example on "name").
Then you query only the shadow table and get results back within a few milliseconds. It's that fast because the shadow table contains only a few hundred rows, as opposed to millions of rows in the original table.
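As a sketch, point 1.) applied to the query from the original post might look like this (equality instead of LIKE, the most selective predicate first; whether the IS NOT NULL predicates are needed at all depends on the data, since c1 and c5 are declared NOT NULL):

```sql
-- Hypothetical rewrite of the original query.
select c19, name, c5, count(*) as count, sum(c13) as cost
from TableA
where name = 'bravo'
  and c19 between '2009-01-01 00:00:00' and '2012-01-01 00:00:00'
  and c1 is not null
  and c5 is not null
group by date(c19), c5
limit 10000 offset 0;
```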
Mar 4 '09 #5

Expert 100+
P: 785
I just noticed that you are using MySQL 4.1, which doesn't have triggers yet.

So upgrade to MySQL 5.0 or later first.
Or produce your shadow table in another way (for example in a nightly batch job, or as an additional SQL statement in your source code at every place where you modify the big table).
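A rough sketch of the batch-job variant, which works on MySQL 4.1 without triggers (the table name ShadowA and its column names are assumptions for illustration):

```sql
-- Hypothetical nightly batch: rebuild the aggregated shadow table.
CREATE TABLE IF NOT EXISTS ShadowA (
  day date NOT NULL,
  name varchar(15) NOT NULL,
  c5 tinyint(3) unsigned NOT NULL,
  count int unsigned NOT NULL,
  cost decimal(12,5) NOT NULL,
  KEY idx_name (name)
);

TRUNCATE TABLE ShadowA;
INSERT INTO ShadowA (day, name, c5, count, cost)
SELECT date(c19), name, c5, count(*), sum(c13)
FROM TableA
GROUP BY date(c19), name, c5;
```

The interactive query then runs against ShadowA only, which holds one row per (day, name, c5) group instead of millions of raw rows.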
Mar 4 '09 #6

Expert 100+
P: 785
Less than 1 second, if a user (for example on a web page) is waiting for the result.
You can always get it that fast with a good query and a good database design. I know that sounds impossible, but I have done it many times. Just imagine how fast Google delivers search results from billions of records (stored web pages)!

Good query:
The user only needs a little data to make a decision, so don't return unnecessary data that you don't display or the user doesn't need to see. It's probably enough for the user to see only name and date; he can then click a record in the list to see its additional info (all the cn, n=1..23 columns).

Good database design:
Don't put all your data in one big table! Split it into two tables, one with the most-used data, the other with the rarely used data. That is, make one table with only the columns c19, name, c5, count, cost, and reference (foreign key) a second table that holds all the other cn, n=1..23 columns.
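A rough sketch of that split (the table names TableA_hot and TableA_cold are invented; the original thread only lists the columns):

```sql
-- Hypothetical split: frequently queried columns in one narrow table,
-- everything else in a companion table keyed by the same msg_id.
CREATE TABLE TableA_hot (
  msg_id varchar(20) NOT NULL,
  c19 datetime NOT NULL,
  name varchar(15) NOT NULL,
  c5 tinyint(3) unsigned NOT NULL,
  c13 decimal(12,5) NOT NULL,
  PRIMARY KEY (msg_id),
  KEY idx_name_c19 (name, c19)
);

CREATE TABLE TableA_cold (
  msg_id varchar(20) NOT NULL,
  -- ... the remaining cn columns from the original table ...
  PRIMARY KEY (msg_id)
  -- msg_id logically references TableA_hot(msg_id)
);
```

The aggregate query then scans only the narrow hot table, while detail lookups join to the cold table by msg_id.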
Mar 4 '09 #7
