Bytes | Developer Community

need help with select and join for multiple tables

I have five tables in my database, and there are actually NO common
fields between them, not even a KEY or ID or anything like that, except
for the "body" of a blob field. And even that text is not identical;
only a portion of it is.

Each table has five fields, all different except the blob, which is
called "message", so normally I use something like:

select * from table1 where message like '%apple%';

to query this table. The same goes for table2, except the blob content
is different; table2 is normally queried like this:

select * from table2 where message like '%customerid=453%';

It's impossible to change the data in these fields (which would be the
best option), but there is one common element between them in the
message blob.
What I want to do is something like this:

select * from table1, table2 where message like '%order=100%';

However, only one table will have that order, either table1 or table2,
but never both, and there's no way to tell which of the tables will
actually have the text.

In other words, I want to search a bunch of tables for common text
without having to actually submit the query five times, because the
list of elements to search is about 25,000 items... I'd rather submit
25,000 queries than 125,000 queries...

Yes, I know it's a stupid design, but I'm stuck with it...
anyone got any workable ideas/suggestions?

B.
Jul 20 '05 #1
Beyonder wrote:
I have five tables in my database, and there are actually NO common
fields between them, not even a KEY or ID or anything like that, except
for the "body" of a blob field. And even that text is not identical;
only a portion of it is.
Yes, I know it's a stupid design, but I'm stuck with it...
anyone got any workable ideas/suggestions?


LOL! You've got to be kidding. At this point, you might as well
uninstall MySQL and just use Perl or something to search through a bunch
of text files. You are effectively not using MySQL as a database. :-P

But I'll try to offer a constructive suggestion: since you can't change
the data, make some new data that helps you find the right records.
Create some tables that you use as a kind of custom index for the
other tables.

For instance:
CREATE TABLE table_token_idx (
  table_name  VARCHAR(32), # which table contains the entry
  table_pk    VARCHAR(32), # which row in that table
  table_token VARCHAR(32), # which token was found, e.g. "customer_id"
  table_value VARCHAR(32)  # which customer_id value was found
);

Tip: use VARCHAR because you can't assume the customer_id or any other
value will be an integer forever. If it's textual data outside of your
control, it's almost certain that someday, someone will require the id
to contain some alpha characters or dashes or something.

Then populate the table in some offline scheduled job, by scanning
through all your text blobs and creating appropriate rows in the idx
tables. Depending on what kind of activity you expect on your blob
data, you can append to the idx table or even flush all rows and
repopulate from scratch every night.
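That offline job doesn't need to be anything fancy. Here's a minimal
Python sketch of the scanning step (the "token=value" blob format and
the function names are assumptions based on the examples in this
thread; adjust the regex to whatever the blobs really contain):

```python
import re

# Assumption: the blobs contain "token=value" pairs such as
# "customerid=453" or "order=100", per the examples in this thread.
TOKEN_RE = re.compile(r'\b(\w+)=([\w-]+)')

def extract_tokens(message):
    """Return all (token, value) pairs found in one message blob."""
    return TOKEN_RE.findall(message)

def index_rows(rows, table_name):
    """Turn (pk, message) rows from one source table into
    table_token_idx tuples ready for a batch INSERT."""
    out = []
    for pk, message in rows:
        for token, value in extract_tokens(message):
            out.append((table_name, str(pk), token, value))
    return out
```

The real job would SELECT the primary key and message columns from each
of the five tables, feed the rows through index_rows(), and batch-INSERT
the resulting tuples into table_token_idx.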

You can then use the idx tables to search for specific order,
customerid, or whatever type of token you need to search for. Query for
a given customer_id using the idx table, and based on the results,
construct a new dynamic SQL statement that queries the correct table and
row where that customer_id is found. You have to build a new SQL
statement from scratch because you can't parameterize table names.

SELECT CONCAT('SELECT * FROM ', I.table_name, ' WHERE ID = \'',
              I.table_pk, '\'')
FROM table_token_idx I
WHERE I.table_token = ? AND I.table_value = ?

Run this, supplying 'customer_id' and '453' as parameters, to use your
example. The result should be a SQL string that you can then prepare
and execute.
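The same two-step lookup can also be assembled client-side rather than
with CONCAT. A hedged sketch (the table whitelist and the `ID` column
name are assumptions, mirroring the example above):

```python
# Table names can't be bound as SQL parameters, so the identifier is
# interpolated only after a whitelist check; the pk stays a real bound
# parameter for the driver (%s is the MySQLdb-style placeholder).
ALLOWED_TABLES = {"table1", "table2", "table3", "table4", "table5"}

def build_lookup_sql(table_name):
    """Build the second-step query for a row found via table_token_idx."""
    if table_name not in ALLOWED_TABLES:
        raise ValueError("unexpected table name: %r" % table_name)
    return "SELECT * FROM %s WHERE ID = %%s" % table_name
```

First query table_token_idx for the token/value pair, then run
build_lookup_sql() on the returned table_name, binding table_pk as the
parameter.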

If you're not permitted to create new tables in this database, then
create another database and keep the idx tables there. But you've got
to create a scalable solution to index this data; you can't just
search it brute-force, because that only gets more expensive as the
database grows.

Regards,
Bill K.

Jul 20 '05 #2
On Tue, 22 Jun 2004 16:35:53 -0700, Bill Karwin <bi**@karwin.com>
wrote:
LOL! You've got to be kidding. At this point, you might as well
uninstall MySQL and just use Perl or something to search through a bunch
of text files. You are effectively not using MySQL as a database. :-P


Thanks Bill,

Actually the database is over 80 GIGS in size, and EACH query takes 30
minutes, no matter how simple or how complex it is. Yes, the hardware
sucks, and the database is HUGE.

I kinda took your advice: I dumped the binary/master logs for the two
days in question using mysqlbinlog, and am using grep to search
through the text files.

Imagine 25,000 queries at 30 minutes each on the database side... yuck.
That's what, 12,500 hours, or almost 520 days (about 1.5 years!)? Uh,
no, I don't think so... lol...

Even the text grep is a bit slow, but it's faster than that... uh, 7
days at best... augh! Someone please shoot me now...
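One thing that might speed up the grep side: search for all 25,000
terms in a single pass over the dump instead of one grep run per term.
A rough Python sketch of that idea, assuming the search terms are
plain literal strings:

```python
import re

def find_tokens(lines, tokens):
    """One pass over the dump: a single regex alternation of all the
    literal search terms, instead of one grep invocation per term."""
    pattern = re.compile("|".join(re.escape(t) for t in tokens))
    hits = {}  # token -> list of line numbers where it appears
    for lineno, line in enumerate(lines, 1):
        for m in pattern.finditer(line):
            hits.setdefault(m.group(0), []).append(lineno)
    return hits
```

From the shell, `grep -F -f terms.txt dump.txt` does much the same
thing: -F treats the patterns as fixed strings and -f reads them all
from a file, so the whole dump is scanned once.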

B.
Jul 20 '05 #3

This discussion thread is closed

Replies have been disabled for this discussion.
