Bytes | Software Development & Data Engineering Community

A Question of design.

Daz
Hello all,

my question is more about advice on a script design. I have about
3600 entries in my database. The user submits a list, which is then
checked against those in the database to confirm whether or not they
already own a particular item. If they do, then it's not added to the
user table, whereas if they don't, then it _is_ added to the user table.
However, if the item is not in the database at all, the user is advised
of this. So basically, I need to figure out a quick way to compare the
user's submitted items (probably 50 to 700 items) with those in an array
that I have created from the items in the database.

I can think of two ways to achieve this. Firstly, I can iterate through
all of the user's items, and use in_array() to see if they are in the
database array of items. Another method I can use is very similar, but
rather than have an array of database items, I can put them all into a
single comma-separated string, and iterate through the array of user
items, using a regex to check whether each item is in the database.
There may be another, more efficient way to achieve the results I am
looking for, but I can't think of anything else.
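To illustrate the two approaches described above, here is a sketch in Python with made-up item names (the PHP equivalents would be in_array() versus preg_match() over a joined string). A hash-based lookup avoids both the linear scan and the string-matching pitfalls:

```python
import re

# Hypothetical data standing in for the 3600-entry database list
# and the user's submitted list.
db_items = ["Baseball Fans", "Cherry Blossoms", "Down by the River"]
user_items = ["Cherry Blossoms", "Moon Palace"]

# Method 1: membership test against a hash set. PHP's in_array()
# scans the whole array on every call; a set (or array_flip() plus
# isset() in PHP) makes each lookup roughly constant time.
db_set = set(db_items)
invalid_1 = [item for item in user_items if item not in db_set]

# Method 2: one comma-separated string, searched per item with a
# regex. Slower, and it can false-match on substrings or break if
# a name contains a comma or regex metacharacters.
db_string = ",".join(db_items)
invalid_2 = [item for item in user_items
             if not re.search(re.escape(item), db_string)]

print(invalid_1, invalid_2)  # both: ['Moon Palace']
```

Both methods find the same invalid item here, but method 2 does a substring scan of the whole catalogue string per submitted item, so it only gets slower as the lists grow.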

I would appreciate it if anyone could tell me which of the two is likely
to be faster, or whether there is a better way altogether. I need to
find the quickest way, as I don't want to overwork the server or for
the processing to cause a server timeout.

All the best.

Daz.

Oct 24 '06 #1
Daz wrote:
I can think of two ways to achieve this. Firstly, I can iterate through
all of the user's items, and use in_array() to see if they are in the
database array of items. Another method I can use is very similar, but
rather than have an array of database items, I can put them all into a
single comma-separated string, and iterate through the array of user
items, using a regex to check whether each item is in the database.
There may be another, more efficient way to achieve the results I am
looking for, but I can't think of anything else.
Typically you would leave the enforcement of unique conditions to the
database, in order to avoid race conditions. The database can also more
quickly find an existing record if the column in question has an index.

Oct 24 '06 #2
Daz

Chung Leong wrote:
Daz wrote:
I can think of two ways to achieve this. Firstly, I can iterate through
all of the user's items, and use in_array() to see if they are in the
database array of items. Another method I can use is very similar, but
rather than have an array of database items, I can put them all into a
single comma-separated string, and iterate through the array of user
items, using a regex to check whether each item is in the database.
There may be another, more efficient way to achieve the results I am
looking for, but I can't think of anything else.

Typically you would leave the enforcement of unique conditions to the
database, in order to avoid race conditions. The database can also more
quickly find an existing record if the column in question has an index.
Hi Chung.

The problem is when I have a few hundred results to compare: should
I really query the database that many times? Could I do it with
a single query, and if so, how would I know which items the user
already owns, and update the database through the PHP-MySQL
layer? To my understanding, you can't execute an UPDATE or INSERT
statement from within a SELECT statement. Nor can you execute several
statements, such as multiple UPDATE statements or several INSERT
statements, all in one.

I know you could use INSERT INTO table_name VALUES ('val1','val2'),
('val3','val4')...;
But this query wouldn't work, especially from within a select
statement:
$query = "INSERT INTO table_name VALUES ('val1','val2'); INSERT INTO
table_name VALUES ('val3','val4');";

It's something I wish was fixed, although I am sure there is a
perfectly valid reason for it not to be, as we both know that executing
these through phpMyAdmin or through the CLI would work fine.

Hopefully you can see where my problem is. Just to recap, it's
essentially how to:
a) Find the items in the user's list that are valid (in the database),
and then:
1) Add them if needed,
OR
2) Let the user know they already have them.
AND
b) Find the items in the user's list that aren't in the database (if
any), and let them know.

I hope this makes sense.

Many thanks for your input.

Daz

Oct 24 '06 #3
Daz wrote:
The problem is when I have a few hundred results to compare: should
I really query the database that many times? Could I do it with
a single query, and if so, how would I know which items the user
already owns, and update the database through the PHP-MySQL
layer? To my understanding, you can't execute an UPDATE or INSERT
statement from within a SELECT statement. Nor can you execute several
statements, such as multiple UPDATE statements or several INSERT
statements, all in one.
No, that still wouldn't remove the race condition. What you want to do
is put a unique constraint on the table, then have your script just
perform the INSERT. If it fails, then you know you have a duplicate.
MySQL also supports the INSERT ... ON DUPLICATE KEY UPDATE syntax, I
believe.

Oct 24 '06 #4
Rik
Chung Leong wrote:
Daz wrote:
>The problem is when I have a few hundred results to compare: should
I really query the database that many times? Could I do it with
a single query, and if so, how would I know which items the user
already owns, and update the database through the PHP-MySQL
layer? To my understanding, you can't execute an UPDATE or INSERT
statement from within a SELECT statement. Nor can you execute
several statements, such as multiple UPDATE statements or several
INSERT statements, all in one.

No, that still wouldn't remove the race condition. What you want to do
is put a unique constraint on the table, then have your script just
perform the INSERT. If it fails, then you know you have a duplicate.
MySQL also supports the INSERT ... ON DUPLICATE KEY UPDATE syntax, I
believe.
Yup, or the shorter REPLACE INTO, which has much the same effect (note
that it deletes and re-inserts the row rather than updating it). And in
that case it can be done in one query, like:
REPLACE INTO table (fields...)
VALUES
(val1.1,val1.2,val1.3,val1.4),
(val2.1,val2.2,val2.3,val2.4),
etc....

People should use unique identifiers more...
--
Rik Wasmus
Oct 24 '06 #5
Following on from Daz's message. . .
>Hello all,

my question is more about advice on a script design. I have about
3600 entries in my database. The user submits a list, which is then
checked against those in the database to confirm whether or not they
already own a particular item. If they do, then it's not added to the
user table, whereas if they don't, then it _is_ added to the user table.
However, if the item is not in the database at all, the user is advised
of this. So basically, I need to figure out a quick way to compare the
user's submitted items (probably 50 to 700 items) with those in an array
that I have created from the items in the database.
As I read this you are simply trying to decide which items in list U are
not in list D (U=user's list D=database list).

Two methods spring to mind.
1 - (Possibly not suitable for PHP)
You set up two arrays of bits with the position in the array being the
'ID'.
So if the U list has items 3,4 and 6 the array looks like 00011010000...
and similarly with the D list and now you can AND (etc) to give set
operations.
2 - (Probably better for PHP)
Sort both lists
Set two pointers to start (lowest) of both lists (call them pU and pD)
repeat until end of both lists reached
Compare the pointed to items
if D[pD] == U[pU] then "U already has this D". Bump both pointers.
if D[pD] < U[pU] then "U doesn't have this D". Bump pD.
if D[pD] > U[pU] then "This U isn't in D". Bump pU.

With any luck your D list should be pre-sorted as a result of the DB
query.

For speed you may want to bulk your updates by doing the logic and all
of the 'what goes in which category' first.
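For what it's worth, method 2 can be written out as a short function (Python used here for illustration; the sample ID lists are made up):

```python
def compare_sorted(d_list, u_list):
    """Classify items by walking two sorted lists in step (pD/pU
    follow the pointer names in the pseudocode above)."""
    both, only_d, only_u = [], [], []
    pD, pU = 0, 0
    while pD < len(d_list) and pU < len(u_list):
        if d_list[pD] == u_list[pU]:      # "U already has this D"
            both.append(d_list[pD]); pD += 1; pU += 1
        elif d_list[pD] < u_list[pU]:     # "U doesn't have this D"
            only_d.append(d_list[pD]); pD += 1
        else:                             # "This U isn't in D"
            only_u.append(u_list[pU]); pU += 1
    only_d.extend(d_list[pD:])            # leftovers once one list ends
    only_u.extend(u_list[pU:])
    return both, only_d, only_u

both, only_d, only_u = compare_sorted([1, 3, 5, 7], [3, 4, 7])
print(both, only_d, only_u)  # [3, 7] [1, 5] [4]
```

Each element is visited once, so after sorting the comparison pass is linear in the combined list length.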

--
PETER FOX Not the same since the porcelain business went down the pan
pe******@eminent.demon.co.uk.not.this.bit.no.html
2 Tees Close, Witham, Essex.
Gravity beer in Essex <http://www.eminent.demon.co.uk>
Oct 24 '06 #6
Daz

Chung Leong wrote:
Daz wrote:
The problem is when I have a few hundred results to compare: should
I really query the database that many times? Could I do it with
a single query, and if so, how would I know which items the user
already owns, and update the database through the PHP-MySQL
layer? To my understanding, you can't execute an UPDATE or INSERT
statement from within a SELECT statement. Nor can you execute several
statements, such as multiple UPDATE statements or several INSERT
statements, all in one.

No, that still wouldn't remove the race condition. What you want to do
is put a unique constraint on the table, then have your script just
perform the INSERT. If it fails, then you know you have a duplicate.
MySQL also supports the INSERT ... ON DUPLICATE KEY UPDATE syntax, I
believe.
I can't use any unique keys on my table, as each user can have 'up to'
3600 items, and a row is added for each item the user has, in the user
table. For example:

+-----+---------+
| uid | item_id |
+-----+---------+
| 3 | 1 |
| 3 | 3 |
| 3 | 5 |
| 3 | 6 |
| 3 | 7 |
| 3 | 9 |
| 3 | 12 |
| 3 | 13 |
| 3 | 15 |
| 3 | 16 |
+-----+---------+

If a row doesn't exist, then a user doesn't own the item.

Oct 24 '06 #7
Daz wrote:
Chung Leong wrote:
>>Daz wrote:
>>>The problem is when I have a few hundred results to compare: should
I really query the database that many times? Could I do it with
a single query, and if so, how would I know which items the user
already owns, and update the database through the PHP-MySQL
layer? To my understanding, you can't execute an UPDATE or INSERT
statement from within a SELECT statement. Nor can you execute several
statements, such as multiple UPDATE statements or several INSERT
statements, all in one.

No, that still wouldn't remove the race condition. What you want to do
is put a unique constraint on the table, then have your script just
perform the INSERT. If it fails, then you know you have a duplicate.
MySQL also supports the INSERT ... ON DUPLICATE KEY UPDATE syntax, I
believe.


I can't use any unique keys on my table, as each user can have 'up to'
3600 items, and a row is added for each item the user has, in the user
table. For example:

+-----+---------+
| uid | item_id |
+-----+---------+
| 3 | 1 |
| 3 | 3 |
| 3 | 5 |
| 3 | 6 |
| 3 | 7 |
| 3 | 9 |
| 3 | 12 |
| 3 | 13 |
| 3 | 15 |
| 3 | 16 |
+-----+---------+

If a row doesn't exist, then a user doesn't own the item.
You have a way of uniquely identifying the row, don't you? You have to
have something to determine if it's a duplicate or not.

And that gives you a unique index.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Oct 24 '06 #8
Daz

Rik wrote:
Chung Leong wrote:
Daz wrote:
The problem is when I have a few hundred results to compare: should
I really query the database that many times? Could I do it with
a single query, and if so, how would I know which items the user
already owns, and update the database through the PHP-MySQL
layer? To my understanding, you can't execute an UPDATE or INSERT
statement from within a SELECT statement. Nor can you execute several
statements, such as multiple UPDATE statements or several INSERT
statements, all in one.
No, that still wouldn't remove the race condition. What you want to do
is put a unique constraint on the table, then have your script just
perform the INSERT. If it fails, then you know you have a duplicate.
MySQL also supports the INSERT ... ON DUPLICATE KEY UPDATE syntax, I
believe.

Yup, or the shorter REPLACE INTO, which has much the same effect (note
that it deletes and re-inserts the row rather than updating it). And in
that case it can be done in one query, like:
REPLACE INTO table (fields...)
VALUES
(val1.1,val1.2,val1.3,val1.4),
(val2.1,val2.2,val2.3,val2.4),
etc....

People should use unique identifiers more...
--
Rik Wasmus
Rik,

That's very useful to know. Thanks for your input. However, I am not
sure I can get a list of the rows that have been REPLACEd (items that
the user already owns), or of the items that aren't valid in the items
reference table. The items added must be in the main reference table
(the table with 3600 items). Each of these has a unique ID, and if it
exists, a row is added to the user table in the format in the post below.

Many thanks.

Daz.

Oct 24 '06 #9
Daz

Jerry Stuckle wrote:
Daz wrote:
Chung Leong wrote:
>Daz wrote:

The problem is when I have a few hundred results to compare: should
I really query the database that many times? Could I do it with
a single query, and if so, how would I know which items the user
already owns, and update the database through the PHP-MySQL
layer? To my understanding, you can't execute an UPDATE or INSERT
statement from within a SELECT statement. Nor can you execute several
statements, such as multiple UPDATE statements or several INSERT
statements, all in one.

No, that still wouldn't remove the race condition. What you want to do
is put a unique constraint on the table, then have your script just
perform the INSERT. If it fails, then you know you have a duplicate.
MySQL also supports the INSERT ... ON DUPLICATE KEY UPDATE syntax, I
believe.

I can't use any unique keys on my table, as each user can have 'up to'
3600 items, and a row is added for each item the user has, in the user
table. For example:

+-----+---------+
| uid | item_id |
+-----+---------+
| 3 | 1 |
| 3 | 3 |
| 3 | 5 |
| 3 | 6 |
| 3 | 7 |
| 3 | 9 |
| 3 | 12 |
| 3 | 13 |
| 3 | 15 |
| 3 | 16 |
+-----+---------+

If a row doesn't exist, then a user doesn't own the item.

You have a way of uniquely identifying the row, don't you? You have to
have something to determine if it's a duplicate or not.

And that gives you a unique index.
At present, I simply pull up a derived table for the user, and my
script iterates through the rows and checks which items the user
owns. Rows are added if they aren't in the user table; however, the
user is advised if the item name they are adding is invalid, and the
item is not added.

I would be happy to give you an example of all of the tables I am using
(three in all), if you'd like.

All the best.

Daz.

Oct 24 '06 #10
Daz

Peter Fox wrote:
As I read this you are simply trying to decide which items in list U are
not in list D (U=user's list D=database list).
That's only part of it. I also need to check if the user has a
corresponding row for each item they submit. If not, I add the
corresponding row pending a check to ensure that the item is in fact a
valid item in the database.
Two methods spring to mind.
1 - (Possibly not suitable for PHP)
You set up two arrays of bits with the position in the array being the
'ID'.
So if the U list has items 3,4 and 6 the array looks like 00011010000...
and similarly with the D list and now you can AND (etc) to give set
operations.
I am not sure that would work, as the user submits a list of items
by name. If it's a valid item in the database, a user row is added,
column 1 containing the user's unique ID, which can occur more than
once in the table (once for each item). The ID of the item is
currently worked out using array_search() on the database items.
If it's in the array, the item ID is returned and a row is added for
the user, for that item. If it doesn't exist, no key is returned, and
the user is advised that it's not a valid item.
2 - (Probably better for PHP)
Sort both lists
Set two pointers to start (lowest) of both lists (call them pU and pD)
repeat until end of both lists reached
Compare the pointed to items
if D[pD] == U[pU] then "U already has this D". Bump both pointers
if D[pD] < U[pU] then "U doesn't have this D". Bump pD.
if D[pD] >U[pU] then "This U isn't in D". Bump pU.
That would probably work with item IDs; however, it wouldn't work using
the item names, which are what the user submits.
With any luck your D list should be pre-sorted as a result of the DB
query.
Yes, the results are sorted, but by the item name.
For speed you may want to bulk your updates by doing the logic and all
of the 'what goes in which category' first.
At present:
A list of items is derived from the user's submitted list, in the form
of an array.
A query is dynamically created, so that only the items in the user's
list are pulled from the database (as well as whether they own each
item or not).
The user's list is then iterated through, and compared to the items
pulled from the database using array_search().
If the item is there, then it's valid, so I check if the user has the
item. If they don't have it, I add it to another array, from which the
INSERT query is made.
If they do have it, then a counter is incremented, which counts how many
items are valid but already owned by the user.
If array_search() returns 'false', then the item is not valid. It is
added to another array, which is iterated through and displayed to the
user, to let them know which items aren't in the main database reference
table.
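As a rough sketch of that flow (Python for illustration; `catalogue`, `owned_ids`, and the names are hypothetical stand-ins for the arrays built from the database):

```python
# Hypothetical reference data: book name -> book_id, plus the set of
# book_ids this user already owns.
catalogue = {"Baseball Fans": 299, "Cherry Blossoms": 471,
             "Down by the River": 665}
owned_ids = {299}

submitted = ["Baseball Fans", "Cherry Blossoms", "Moon Palace"]

to_insert, invalid = [], []
already_owned = 0
for name in submitted:
    book_id = catalogue.get(name)   # plays the role of array_search()
    if book_id is None:
        invalid.append(name)        # not in the reference table
    elif book_id in owned_ids:
        already_owned += 1          # valid, but the user has it already
    else:
        to_insert.append(book_id)   # queued for one bulk INSERT

print(to_insert, already_owned, invalid)
# [471] 1 ['Moon Palace']
```

Collecting `to_insert` first and issuing a single multi-row INSERT at the end keeps the number of round-trips to the database constant, regardless of list size.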

Hope this makes sense.

Oct 24 '06 #11
Daz

Daz wrote:
Hello all,

my question is more about advice on a script design. I have about
3600 entries in my database. The user submits a list, which is then
checked against those in the database to confirm whether or not they
already own a particular item. If they do, then it's not added to the
user table, whereas if they don't, then it _is_ added to the user table.
However, if the item is not in the database at all, the user is advised
of this. So basically, I need to figure out a quick way to compare the
user's submitted items (probably 50 to 700 items) with those in an array
that I have created from the items in the database.

I can think of two ways to achieve this. Firstly, I can iterate through
all of the user's items, and use in_array() to see if they are in the
database array of items. Another method I can use is very similar, but
rather than have an array of database items, I can put them all into a
single comma-separated string, and iterate through the array of user
items, using a regex to check whether each item is in the database.
There may be another, more efficient way to achieve the results I am
looking for, but I can't think of anything else.

I would appreciate it if anyone could tell me which of the two is likely
to be faster, or whether there is a better way altogether. I need to
find the quickest way, as I don't want to overwork the server or for
the processing to cause a server timeout.

All the best.

Daz.
I think that there is a lot of confusion here. Some people are not
getting the right idea, probably because my attempt to explain failed.

=======================================

Here is a small snippet from my main database reference table. They are
virtual books, and each book has an id. The last column is not needed
for this operation:

+---------+--------------------------------+------------+
| book_id | book_name | is_retired |
+---------+--------------------------------+------------+
| 2 | 13 Banlow Street | 1 |
| 299 | Baseball Fans | 1 |
| 471 | Cherry Blossoms | 0 |
| 665 | Down by the River | 1 |
| 1181 | I will always Remember | 0 |
| 1339 | Kimbler: The Beginning | 0 |
| 1433 | Let the Game Begin | 0 |

And so on... There are just over 3600 entries at present.

=======================================

Here is a small extraction from the users table, which shows which
books the user owns.

+-----+---------+
| uid | book_id |
+-----+---------+
| 3 | 3194 |
| 2 | 2947 |
| 3 | 2091 |
| 3 | 307 |
| 3 | 1434 |
| 4 | 3278 |
| 3 | 1288 |
| 2 | 3239 |
| 3 | 2467 |
| 1 | 991 |
+-----+---------+

Remember. Neither of these columns contain unique values.

========================================

Finally, here is an example of my users table:

+---------+-------------+
| user_id | username |
+---------+-------------+
| 1 | Anonymous |
| 2 | user_1 |
| 3 | user_2 |
+---------+-------------+

========================================

As you can see, I didn't want to add more than 3600 columns to the
user's book table, as this would mean I get a lot of NULLs, which isn't
very efficient and also brings with it a few more downsides.

The only other option I have is to create a table for each book. That
would mean more than 3600 tables, and I would have to search through
3600 tables to get the results I am looking for. This is totally
inefficient for what I need to achieve; however, I _would_ be able
to run an index on the user ID. :P

I hope that my explanation as to what I am trying to do, and why I have
done what I have so far, is adequate. It has taken me a long time to
get the database like this (as this was one of my first ever
databases), and it has been reborn several times since I discovered how
database normalization works, along with its benefits.

Best wishes.

Daz

Oct 24 '06 #12
Daz wrote:
Jerry Stuckle wrote:
>>Daz wrote:
>>>Chung Leong wrote:
Daz wrote:
>The problem is when I have a few hundred results to compare: should
>I really query the database that many times? Could I do it with
>a single query, and if so, how would I know which items the user
>already owns, and update the database through the PHP-MySQL
>layer? To my understanding, you can't execute an UPDATE or INSERT
>statement from within a SELECT statement. Nor can you execute several
>statements, such as multiple UPDATE statements or several INSERT
>statements, all in one.

No, that still wouldn't remove the race condition. What you want to do
is put a unique constraint on the table, then have your script just
perform the INSERT. If it fails, then you know you have a duplicate.
MySQL also supports the INSERT ... ON DUPLICATE KEY UPDATE syntax, I
believe.
I can't use any unique keys on my table, as each user can have 'up to'
3600 items, and a row is added for each item the user has, in the user
table. For example:

+-----+---------+
| uid | item_id |
+-----+---------+
| 3 | 1 |
| 3 | 3 |
| 3 | 5 |
| 3 | 6 |
| 3 | 7 |
| 3 | 9 |
| 3 | 12 |
| 3 | 13 |
| 3 | 15 |
| 3 | 16 |
+-----+---------+

If a row doesn't exist, then a user doesn't own the item.

You have a way of uniquely identifying the row, don't you? You have to
have something to determine if it's a duplicate or not.

And that gives you a unique index.


At present, I simply pull up a derived table for the user, and my
script iterates through the rows and checks which items the user
owns. Rows are added if they aren't in the user table; however, the
user is advised if the item name they are adding is invalid, and the
item is not added.

I would be happy to give you an example of all of the tables I am using
(three in all), if you'd like.

All the best.

Daz.
So, validate the names. Then use Chung's process for updating your
table with the ones which are valid.
--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Oct 24 '06 #13
Daz
So, validate the names. Then use Chung's process for updating your
table with the ones which are valid.
Hi Jerry.

Thanks for that.

This brings me back to my original question.

Is the best way to validate the names to have the db names in one
array and the user's books in another, iterating through the user's
array using in_array() to check whether each is a valid book in the db
array? Or would it be better to put all of the database items into a
comma-separated string, and then iterate through the user array using
preg_match() or maybe even strtr()?

Perhaps the difference is negligible. But that is what I'd like to find
out. If it's something that no one knows the answer to, I am happy to
set up a test, but I didn't see any point in 'reinventing the wheel', so
to speak, if someone already knew the answer.

Many thanks for your input.

Daz.

Oct 24 '06 #14
Daz wrote:
I can't use any unique keys on my table, as each user can have 'up to'
3600 items, and a row is added for each item the user has, in the user
table. For example:

+-----+---------+
| uid | item_id |
+-----+---------+
| 3 | 1 |
| 3 | 3 |
| 3 | 5 |
| 3 | 6 |
| 3 | 7 |
| 3 | 9 |
| 3 | 12 |
| 3 | 13 |
| 3 | 15 |
| 3 | 16 |
+-----+---------+

If a row doesn't exist, then a user doesn't own the item.
I'm not terribly familiar with MySQL, but I think it supports
multi-column unique constraints. So in your case, you'd force the
uid + item_id combination to be unique.
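That is easy to demonstrate. The sketch below uses SQLite (via Python) as a stand-in for MySQL, with a table mirroring the example quoted above. With a composite UNIQUE key, the database itself rejects a duplicate (uid, item_id) pair, so the script can simply INSERT and treat the failure as "already owned":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE user_items (
                    uid     INTEGER,
                    item_id INTEGER,
                    UNIQUE (uid, item_id)   -- composite unique key
                )""")
conn.execute("INSERT INTO user_items VALUES (3, 1)")
conn.execute("INSERT INTO user_items VALUES (3, 3)")  # same uid is fine

try:
    conn.execute("INSERT INTO user_items VALUES (3, 1)")  # duplicate pair
except sqlite3.IntegrityError:
    print("duplicate rejected")  # the constraint did the work
```

In MySQL the equivalent constraint can be added with ALTER TABLE ... ADD UNIQUE (uid, item_id) (the table name `user_items` here is just a stand-in), after which INSERT IGNORE or INSERT ... ON DUPLICATE KEY UPDATE becomes usable.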

Oct 24 '06 #15
Rik
Chung Leong wrote:
Daz wrote:
>I can't use any unique keys on my table, as each user can have 'up
to' 3600 items, and a row is added for each item the user has, in
the user table. For example:

+-----+---------+
| uid | item_id |
+-----+---------+
| 3   | 1       |
| 3   | 3       |
| 3   | 5       |
| 3   | 6       |
| 3   | 7       |
| 3   | 9       |
| 3   | 12      |
| 3   | 13      |
| 3   | 15      |
| 3   | 16      |
+-----+---------+

If a row doesn't exist, then a user doesn't own the item.

I'm not terribly familiar with MySQL, but I think it supports
multi-column unique constraints. So in your case, you'd force the
uid + item_id combination to be unique.
Yup, and if it's an ID that's in another table, the InnoDB engine
supports FOREIGN KEY constraints.

Grtz,
--
Rik Wasmus
Oct 24 '06 #16
On 24 Oct 2006 04:18:56 -0700, in comp.lang.php "Daz"
<cu********@gmail.com>
<11**********************@e3g2000cwe.googlegroups.com> wrote:
>|
| Daz wrote:
| Hello all,
| >
| my question is more about advice on a script design. I have about
| 3600 entries in my database. The user submits a list, which is then
| checked against those in the database to confirm whether or not they
| already own a particular item. If they do, then it's not added to the
| user table, whereas if they don't, then it _is_ added to the user table.
| However, if the item is not in the database at all, the user is advised
| of this. So basically, I need to figure out a quick way to compare the
| user's submitted items (probably 50 to 700 items) with those in an array
| that I have created from the items in the database.
| >
| I can think of two ways to achieve this. Firstly, I can iterate through
| all of the user's items, and use in_array() to see if they are in the
| database array of items. Another method I can use is very similar, but
| rather than have an array of database items, I can put them all into a
| single comma-separated string, and iterate through the array of user
| items, using a regex to check whether each item is in the database.
| There may be another, more efficient way to achieve the results I am
| looking for, but I can't think of anything else.
| >
| I would appreciate it if anyone could tell me which of the two is likely
| to be faster, or whether there is a better way altogether. I need to
| find the quickest way, as I don't want to overwork the server or for
| the processing to cause a server timeout.
| >
| All the best.
| >
| Daz.
|
| I think that there is a lot of confusion here. Some people are not
| getting the right idea, probably because my attempt to explain failed.
|
| =======================================
|
| Here is a small snippet from my main database reference table. They are
| virtual books, and each book has an id. The last column is not needed
| for this operation:
|
| +---------+--------------------------------+------------+
| | book_id | book_name | is_retired |
| +---------+--------------------------------+------------+
| | 2 | 13 Banlow Street | 1 |
| | 299 | Baseball Fans | 1 |
| | 471 | Cherry Blossoms | 0 |
| | 665 | Down by the River | 1 |
| | 1181 | I will always Remember | 0 |
| | 1339 | Kimbler: The Beginning | 0 |
| | 1433 | Let the Game Begin | 0 |
|
| And so on... There are just over 3600 entries at present.
|
| =======================================
|
| Here is a small extraction from the users table, which shows which
| books the user owns.
|
| +-----+---------+
| | uid | book_id |
| +-----+---------+
| | 3 | 3194 |
| | 2 | 2947 |
| | 3 | 2091 |
| | 3 | 307 |
| | 3 | 1434 |
| | 4 | 3278 |
| | 3 | 1288 |
| | 2 | 3239 |
| | 3 | 2467 |
| | 1 | 991 |
| +-----+---------+
|
| Remember. Neither of these columns contain unique values.
|
| ========================================
|
| Finally, here is an example of my users table:
|
| +---------+-------------+
| | user_id | username |
| +---------+-------------+
| | 1 | Anonymous |
| | 2 | user_1 |
| | 3 | user_2 |
| +---------+-------------+
|
| ========================================
|
| As you can see, I didn't want to add more than 3600 columns to the
| user's book table, as this would mean I get a lot of NULLs, which isn't
| very efficient and also brings with it a few more downsides.
|
| The only other option I have is to create a table for each book. That
| would mean more than 3600 tables, and I would have to search through
| 3600 tables to get the results I am looking for. This is totally
| inefficient for what I need to achieve; however, I _would_ be able
| to run an index on the user ID. :P
|
| I hope that my explanation as to what I am trying to do, and why I have
| done what I have so far, is adequate. It has taken me a long time to
| get the database like this (as this was one of my first ever
| databases), and it has been reborn several times since I discovered how
| database normalization works, along with its benefits.

First you need to find which books the user already has selected

SELECT GROUP_CONCAT(book_id SEPARATOR ',') AS GCR
FROM user_table
WHERE uid=3
GROUP BY uid;

Save the value of the GCR (GROUP_CONCAT result) field to a variable;
the query above will produce a comma-separated list.

Next if you want to list books that the user HASN'T selected (recently
added books) then you can use:

SELECT *
FROM book_table
WHERE find_in_set(book_id,'".$GCR."')=0;

If you want to list the books the user HAS selected, then use:
SELECT *
FROM book_table
WHERE find_in_set(book_id,'".$GCR."')>0;

If you want to present the user with the entire list, then you can use:
SELECT *, find_in_set(book_id,'".$GCR."') AS FIS
FROM book_table;

In your code you can check the FIS field for 0 or non-zero. If the FIS
is zero then you could add a check box next to the book title to allow
the user to select it. If the FIS > 0 then just present the book title
as plain text, thus visually indicating to the user that they have
already selected this item.

When the user submits the request, then you know only NEW entries have
been selected, and you can simply use an INSERT INTO statement.

HTH
---------------------------------------------------------------
jn******@yourpantsyahoo.com.au : Remove your pants to reply
---------------------------------------------------------------
Oct 24 '06 #17
Rik
Daz wrote:
>So, validate the names. Then use Chung's process for updating your
table with the ones which are valid.

Hi Jerry.

Thanks for that.

This brings me back to my original question.

Is the best way to validate the names to have the db names in one
array and the user's books in another, iterating through the user's
array using in_array() to check whether each is a valid book in the db
array?
Create an array of what to add, and capture all existing items in
another array; then it's a simple question of array_diff() for the
items that are invalid.

Then create an array of already-owned items: array_intersect() will
tell you what to update, and the rest will have to be added.
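A sketch of that bookkeeping in Python, using sets with made-up names (set difference plays the role of array_diff() and intersection the role of array_intersect()):

```python
# Hypothetical lists: what the user submitted, the reference
# catalogue, and what the user already owns.
submitted = {"Baseball Fans", "Cherry Blossoms", "Moon Palace"}
catalogue = {"Baseball Fans", "Cherry Blossoms", "Down by the River"}
owned     = {"Baseball Fans"}

invalid = submitted - catalogue   # array_diff(): not in the database
valid   = submitted & catalogue
already = valid & owned           # array_intersect(): already owned
to_add  = valid - owned           # everything left gets INSERTed

print(sorted(invalid), sorted(already), sorted(to_add))
# ['Moon Palace'] ['Baseball Fans'] ['Cherry Blossoms']
```

Three set operations replace the per-item loop entirely, and each runs in roughly linear time.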
--
Grtz

Rik Wasmus
Oct 24 '06 #18
Rik
Daz wrote:
I think that there is a lot of confusion here. Some people are not
getting the right idea, probably because my attempt to explain failed.

Here is a small snippet from my main database reference table. They
are virtual books, and each book has an id. The last column is not
needed for this operation:

+---------+--------------------------------+------------+
| book_id | book_name                      | is_retired |
+---------+--------------------------------+------------+
|       2 | 13 Banlow Street               |          1 |
|     299 | Baseball Fans                  |          1 |
|     471 | Cherry Blossoms                |          0 |
|     665 | Down by the River              |          1 |
|    1181 | I will always Remember         |          0 |
|    1339 | Kimbler: The Beginning         |          0 |
|    1433 | Let the Game Begin             |          0 |
+---------+--------------------------------+------------+
And so on... There are just over 3600 entries at present.
So, a unique book-id.
Here is a small extraction from the users table, which shows which
books the user owns.

+-----+---------+
| uid | book_id |
+-----+---------+
|   3 |    3194 |
|   2 |    2947 |
|   3 |    2091 |
|   3 |     307 |
|   3 |    1434 |
|   4 |    3278 |
|   3 |    1288 |
|   2 |    3239 |
|   3 |    2467 |
|   1 |     991 |
+-----+---------+

Remember. Neither of these columns contain unique values.
But uid & bookid combined are unique, create a unique index on the TWO
columns at once.
As you can see. I didn't want to add more than 3600 columns to the
users book table, as this would mean I get a lot of NULLs which isn't
very efficient and also bring with it a few more down sides.
Why not add more? With the right indexes, searching will still be fast. You
could even add a foreign key constraint to cascade deletions. You will not
get null-values...
The only other option I have, is to create a table for each book. That
Brrr, do not do that :-)
--
Rik Wasmus
Oct 24 '06 #19
Rik wrote:
Daz wrote:
So, validate the names. Then use Chung's process for updating your
table with the ones which are valid.
Hi Jerry.

Thanks for that.

This brings be back to my original question.

Is the best way to validate the names, by having the db names in one
array, and the users books in another, and iterating through the users
array using in_array() to check if it's a valid book in the db array.

Create an array of what to add, capture all existing items in another array
, then it's a simple question of array_diff() for items that are invalid.

Then create an array of already owned items, array_intersect() will tell
you what to update, the rest will have to be added.
--
Grtz

Rik Wasmus
Then you end up with a race condition, I think even if a transaction is
used. If there are two threads trying to insert the same data running
simultaneously, a transaction would not block the second thread from
fetching the same result set as the first. Thus both threads could
think that a particular record doesn't exist and both would insert it.
You would need to lock the table for the duration of the entire
operation, which is pretty lousy.
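A sketch of the table-level locking described above (workable but heavy-handed, for exactly the reason given; the table and column names follow the thread):

```sql
LOCK TABLES user_books WRITE;
-- Check what the user already owns, decide what is new, then insert:
SELECT book_id FROM user_books WHERE uid = 3;
INSERT INTO user_books (uid, book_id) VALUES (3, 1434);
UNLOCK TABLES;
```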

Oct 24 '06 #20
Rik
Chung Leong wrote:
>Create an array of what to add, capture all existing items in
another array , then it's a simple question of array_diff() for
items that are invalid.

Then create an array of already owned items, array_intersect() will
tell you what to update, the rest will have to be added.
Then you end up with a race condition, I think even if a transaction
is used. If there are two threads trying to insert the same data
running simultaneously, a transaction would not block the second
thread from fetching the same result set as the first. Thus both
threads could
think that a particular record doesn't exist and both would insert it.
You would need to lock the table for the duration of the entire
operation, which is pretty lousy.
My solution was purely to give him back the information he wants (he not
only wants to reject invalids, update already existing, and insert new, but
he wants to know which ones they are).

The actual updating can still be done with the previous simple replace
into, avoiding race condition.
--
Grtz,

Rik Wasmus
Oct 24 '06 #21
Rik wrote:
My solution was purely to give him back the information he wants (he not
only wants to reject invalids, update already existing, and insert new, but
he wants to know which ones they are).

The actual updating can still be done with the previous simple replace
into, avoiding race condition.
I'm afraid the OP is still thinking in terms of loading up an array
with existing records to see which not to insert. Sometimes you can get
away with having a race, but inserting hundreds of records is a long
operation. There is a reasonable chance that a second select could come
along and beat it.

Oct 24 '06 #22
Daz

Jeff North wrote:
First you need to find which books the user already has selected

SELECT GROUP_CONCAT(book_id SEPARATOR ',') AS GCR
FROM user_table
WHERE uid=3
GROUP BY uid;

save the value of GCR (group_concat result) field to a variable
The above result will produce a comma separated list.

Next if you want to list books that the user HASN'T selected (recently
added books) then you can use:

SELECT *
FROM book_table
WHERE find_in_set(book_id,'".$GCR."')=0;

If you want to list the books the user HAS selected then use:
SELECT *
FROM book_table
WHERE find_in_set(book_id,'".$GCR."')>0;

If you want to present the user with the entire list then you can use:
SELECT *, find_in_set(book_id,'".$GCR."') AS FIS
FROM book_table;

In your code you can check the FIS field for 0 or non-zero. If the FIS
is zero then you could add a check box next to the book title to allow
the user to select it. If the FIS > 0 then just present the book title
as plain text, thus visually indicating to the user that they have
already selected this item.

When the user submits the request then you know only NEW entries have
been selected and can simply use an INSERT INTO statement.
Wow! I am most impressed. I had no idea that I could do that with
MySQL. I will be looking into this in more detail over the next few
days.

Many thanks for your input.

Daz.

Oct 24 '06 #23
Daz

Rik wrote:
So, a unique book-id.
The book_id column is currently indexed, and was the first column I
added a primary index to.

Rik wrote:
But uid & bookid combined are unique, create a unique index on the TWO
columns at once.
Perhaps it's just me, but I can't quite see how I can. In the table
containing just the usernames and user ids, naturally, they are
both indexed, but in my user_books table, there is one row for each
user book. Therefore, if a user has 300 books, there are 300 rows for
that user. The same goes for book_id: if more than one user has the
same book, then the book_ids are no longer unique either.

Sorry for not understanding what you mean.

Rik wrote:
Why not add more? With the right indexes, searching will still be fast. You
could even add a foreign key constraint to cascade deletions. You will not
get null-values...
By 'Why not add more?', I assume you mean columns? Perhaps I am missing
something, but it would mean I'd have to search for the book id in multiple
columns, as well as know how many columns there are at any one time, and
their names. I think it would make things far too complex. Also, I
don't understand how you could avoid null values or redundant data, as
if the user doesn't have a book, the column would be set to either NULL
or '0', which would make it redundant as it's not necessary.

I had my database organised like this previously, when I first created
it a few months back while I was learning MySQL (as I still am). But I
soon realized that the database was using more space than needed, and
it was a real pain having to search every single column.

Rik wrote:
>Daz wrote:
The only other option I have, is to create a table for each book. That
Brrr, do not do that :-)
Yeah, I definitely agree with you there! :D

Many thanks for your time.

Daz.

Oct 24 '06 #24
Daz

Rik wrote:
Create an array of what to add, capture all existing items in another array
, then it's a simple question of array_diff() for items that are invalid.

Then create an array of already owned items, array_intersect() will tell
you what to update, the rest will have to be added.
I had no idea that such a function existed. This is yet another item
added to my list of 'things to do'.

Your help is very much appreciated.

Daz.

Oct 24 '06 #25
Daz

Chung Leong wrote:
Then you end up with a race condition, I think even if a transaction is
used. If there are two threads trying to insert the same data running
simultaneously, a transaction would not block the second thread from
fetching the same result set as the first. Thus both threads could
think that a particular record doesn't exist and both would insert it.
You would need to lock the table for the duration of the entire
operation, which is pretty lousy.
I am going to need to create a function that checks for duplicate
entries, I think. However, the only time that there would be any chance
of a race condition would be if the same user was logged on twice and
carried out the same action simultaneously. Bearing in mind that there
is only an approximately 0.1 second window (as this is how long the
transaction takes), I think it's very slim that it would happen. A user
shouldn't need to run the same thing twice simultaneously; however, I
think I may start looking into methods that will log the user out of
one account if they login a second time with another.

Basically, I acknowledge there is a very slim chance of having
duplicate entries, but how else could I get around this?

All the best.

Daz.

Oct 24 '06 #26
Daz

Rik wrote:
My solution was purely to give him back the information he wants (he not
only wants to reject invalids, update already existing, and insert new, but
he wants to know which ones they are).

The actual updating can still be done with the previous simple replace
into, avoiding race condition.
That is indeed a good solution to help me find what needs to happen
with which data. However, I cannot use REPLACE INTO as I am unable to
run any kind of index on the table that I need to update, and from what
I can see, it's a requirement.

Thanks.

Daz.

Oct 24 '06 #27
Daz

Chung Leong wrote:
I'm afraid the OP is still thinking in terms of loading up an array
with existing records to see which not to insert. Sometimes you can get
away with having a race, but inserting hundreds of records is a long
operation. There is a reasonable chance that a second select could come
along and beat it.
I disagree. There is an extremely small chance of having a race
condition, as users can only update their own data, no one else's.
Unless a user is logged on twice, a race condition should never occur.
Even if they did log in twice, they'd have a very narrow window within
which to create the race condition. If they did, they would have to
deal with having duplicated books. Hehe.

Many thanks.

Daz.

Oct 24 '06 #28
Daz
Daz wrote:
Chung Leong wrote:
Then you end up with a race condition, I think even if a transaction is
used. If there are two threads trying to insert the same data running
simultaneously, a transaction would not block the second thread from
fetching the same result set as the first. Thus both threads could
think that a particular record doesn't exist and both would insert it.
You would need to lock the table for the duration of the entire
operation, which is pretty lousy.

I am going to need to create a function that checks for duplicate
entries, I think. However, the only time that there would be any chance
of a race condition would be if the same user was logged on twice and
carried out the same action simultaneously. Bearing in mind that there
is only an approximately 0.1 second window (as this is how long the
transaction takes), I think it's very slim that it would happen. A user
shouldn't need to run the same thing twice simultaneously; however, I
think I may start looking into methods that will log the user out of
one account if they login a second time with another.

Basically, I acknowledge there is a very slim chance of having
duplicate entries, but how else could I get around this?

All the best.

Daz.
Here is another possible idea. Why not have every script add a user ID
to another database when a script is executed that will use the db.
Once the script has finished, the user ID is removed. This can be
indexed quite effectively, and will not allow the script to execute if
their user ID is in that database. There could be a potential problem
with any transaction that never finishes, but part of the table could
be a timestamp, and a cleanup step could remove/update any rows that
are more than, say, 30 seconds old, as this is the timeout limit for
the server.

The idea sounds a little rusty, but I personally feel it could work.

Oct 24 '06 #29
On 24 Oct 2006 12:19:46 -0700, in comp.lang.php "Daz"
<cu********@gmail.com>
<11**********************@m73g2000cwd.googlegroups.com> wrote:
>|
| Jeff North wrote:
| First you need to find which books the user already has selected
| >
| SELECT GROUP_CONCAT(book_id SEPARATOR ',') AS GCR
| FROM user_table
| WHERE uid=3
| GROUP BY uid;
| >
| save the value of GCR (group_concat result) field to a variable
| The above result will produce a comma separated list.
| >
| Next if you want to list books that the user HASN'T selected (recently
| added books) then you can use:
| >
| SELECT *
| FROM book_table
| WHERE find_in_set(book_id,'".$GCR."')=0;
| >
| If you want to list the books the user HAS selected then use:
| SELECT *
| FROM book_table
| WHERE find_in_set(book_id,'".$GCR."')>0;
| >
| If you want to present the user with the entire list then you can use:
| SELECT *, find_in_set(book_id,'".$GCR."') AS FIS
| FROM book_table;
| >
| In your code you can check the FIS field for 0 or non-zero. If the FIS
| is zero then you could add a check box next to the book title to allow
| the user to select it. If the FIS > 0 then just present the book title
| as plain text, thus visually indicating to the user that they have
| already selected this item.
| >
| When the user submits the request then you know only NEW entries have
| been selected and can simply use an INSERT INTO statement.
|
| Wow! I am most impressed. I had no idea that I could do that with
| MySQL. I will be looking into this in more detail over the next few
| days.
|
| Many thanks for your input.
Since you are using a large number of entries the group_concat may
produce a very long field. I suggest you read up on this within the
manual (http://dev.mysql.com/doc/)

"In MySQL, you can get the concatenated values of expression
combinations. You can eliminate duplicate values by using DISTINCT. If
you want to sort values in the result, you should use ORDER BY clause.
To sort in reverse order, add the DESC (descending) keyword to the
name of the column you are sorting by in the ORDER BY clause. The
default is ascending order; this may be specified explicitly using the
ASC keyword. SEPARATOR is followed by the string value that should be
inserted between values of result. The default is a comma (`,'). You
can remove the separator altogether by specifying SEPARATOR ''. You
can set a maximum allowed length with the group_concat_max_len system
variable. The syntax to do this at runtime is as follows, where val is
an unsigned integer:
SET [SESSION | GLOBAL] group_concat_max_len = val;
If a maximum length has been set, the result is truncated to this
maximum length."
---------------------------------------------------------------
jn******@yourpantsyahoo.com.au : Remove your pants to reply
---------------------------------------------------------------
Oct 24 '06 #30
Daz wrote:
I disagree. There is an extremely small chance of having a race
condition, as users can only update their own data, no one elses.
Unless a user is logged on twice, a race condition should never occur.
Even if they did log in twice, they'd have a very narrow window within
which to create the race condition. If they did, they would have to
deal with having duplicated books. Hehe.
Do as you see fit. The scenario I described is not as improbable as you
think. If the user clicks on a button twice, then the same request would
be launched twice.

Oct 24 '06 #31
Rik
Daz wrote:
Rik wrote:
>My solution was purely to give him back the information he wants (he
not only wants to reject invalids, update already existing, and
insert new, but he wants to know which ones they are).

The actual updating can still be done with the previous simple
replace into, avoiding race condition.

That is indeed a good solution to help me find what needs to happen
with which data. However, I cannot use REPLACE INTO as I am unable to
run any kind of index on the table that I need to update, and from
what
I can see, it's a requirement.
I have told you several times an index on 2 columns is perfectly possible.
Make the index:
ALTER TABLE `user_books` ADD UNIQUE (
`user_id` ,
`book_id`
)

Voilà.
--
Rik Wasmus
Oct 24 '06 #32
Daz

Rik wrote:
I have told you several times an index on 2 columns is perfectly possible.
Make the index:
ALTER TABLE `user_books` ADD UNIQUE (
`user_id` ,
`book_id`
)

Voilà.
Hi Rik. Sorry, I believe that I have also mentioned several times that
I can't, as each user has one row per book, meaning there won't be any
unique uids, and if there were, it wouldn't be for very long at all.
The book_ids in the user table will also be duplicated, as more than
one user is likely to have the same book.

Thanks for your help. :)

Daz.

Oct 25 '06 #33
Rik
Daz wrote:
Rik wrote:
>I have told you several times an index on 2 columns is perfectly
possible. Make the index:
ALTER TABLE `user_books` ADD UNIQUE (
`user_id` ,
`book_id`
)

Voilá.

Hi Rik. Sorry, I believe that I have also mentioned several times
that
I can't, as each user has one row per book, meaning there won't be any
unique uids, and if there were, it wouldn't be for very long at all.
The book_ids in the user table will also be duplicated, as more than
one user is likely to have the same book.
You don't seem to get it.
Each user does NOT have one row per book. Each user has ONE row per book
THAT HE OWNS.

Table USERS:
user_id PRIMARY(/UNIQUE)
user_name etc....

Table BOOKS
book_id PRIMARY(/UNIQUE)
book_title etc....

Table USER_BOOKS
user_id ---|
|-UNIQUE
book_id ---|

And those two IDs are combined unique, for the user will have only one
copy of one book; or if he has several, add a field 'amount', and
increment/decrement that as needed. A field does not have to be unique
for such an index; a COMBINATION of fields (usually one, but any number
suffices) has to be.
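As a sketch, the three tables described above might be created like this (the column types and sizes are assumptions, not from the thread):

```sql
-- Hypothetical DDL for the schema sketched above.
CREATE TABLE users (
  user_id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  user_name VARCHAR(64)  NOT NULL
);

CREATE TABLE books (
  book_id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  book_title VARCHAR(128) NOT NULL,
  is_retired TINYINT(1)   NOT NULL DEFAULT 0
);

CREATE TABLE user_books (
  user_id INT UNSIGNED NOT NULL,
  book_id INT UNSIGNED NOT NULL,
  UNIQUE KEY user_book (user_id, book_id)  -- the combined unique index
);
```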

Hopefully this way I have shed some light on the subject, I cannot tell you
any clearer.
--
Grtz,

Rik Wasmus
Oct 25 '06 #34
Daz wrote:
Daz wrote:
>>Chung Leong wrote:
>>>Then you end up with a race condition, I think even if a transaction is
used. If there are two threads trying to insert the same data running
simultaneously, a transaction would not block the second thread from
fetching the same result set as the first. Thus both threads could
think that a particular record doesn't exist and both would insert it.
You would need to lock the table for the duration of the entire
operation, which is pretty lousy.

I am going to need to create a function that checks for duplicate
entries, I think. However, the only time that there would be any chance
of a race condition would be if the same user was logged on twice and
carried out the same action simultaneously. Bearing in mind that there
is only an approximately 0.1 second window (as this is how long the
transaction takes), I think it's very slim that it would happen. A user
shouldn't need to run the same thing twice simultaneously; however, I
think I may start looking into methods that will log the user out of
one account if they login a second time with another.

Basically, I acknowledge there is a very slim chance of having
duplicate entries, but how else could I get around this?

All the best.

Daz.


Here is another possible idea. Why not have every script add a user ID
to another database when a script is executed that will use the db.
Once the script has finished, the user ID is removed. This can be
indexed quite effectively, and will not allow the script to execute if
their user ID is in that database. There could be a potential problem
with any transaction that never finishes, but part of the table could
be a timestamp, and a cleanup step could remove/update any rows that
are more than, say, 30 seconds old, as this is the timeout limit for
the server.

The idea sounds a little rusty, but I personally feel it could work.
Because it's unnecessary overhead, that's why. Read Rik's comments.
You don't need it!

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Oct 25 '06 #35
Daz

Rik wrote:
You don't seem to get it.
Each user does NOT have one row per book. Each user has ONE row per book
THAT HE OWNS.

Table USERS:
user_id PRIMARY(/UNIQUE)
user_name etc....

Table BOOKS
book_id PRIMARY(/UNIQUE)
book_title etc....

Table USER_BOOKS
user_id ---|
|-UNIQUE
book_id ---|

And those 2 id's there are combined unique, for the user will have only one
copy of one book, or if he has several, add a field 'amount', and
increment/decrement that as fit. A field does not have to be unique for
such an index, a COMBINATION of fields (mostly one, but an arbitrary amount
suffices) has to.

Hopefully this way I have shed some light on the subject, I cannot tell you
any clearer.
That's much clearer. Thank you very much. I think I had misunderstood
the nature of UNIQUE indexes, and thought it meant that there could
only be one field in a column containing a particular value. What I was
missing was that a UNIQUE index can work across columns. That's very
good to know. Sorry for wasting more of your time than you felt was
necessary. It was not in vain, and it's very much appreciated. :)

Sincerest regards.

Daz.

Oct 25 '06 #36
Rik
Daz wrote:
That's much clearer. Thank you very much. I think I had misunderstood
the nature of UNIQUE indexes, and thought it meant that there could
only be one field in a column containing a particular value. What I
was missing was that a UNIQUE index can work across columns. That's
very good to
know.
Not only unique indexes; other indexes can also be multi-column.
Sorry for wasting more of your time than you felt was necessary.
It was not in vain, and it's very much appreciated. :)
Well, as long as the message has arrived now, I'm a pleased man :-).
--
Grtz,

Rik Wasmus
Oct 25 '06 #37

Rik wrote:
Chung Leong wrote:
Daz wrote:
The problem is when I have a few hundred results to compare.
Should I really query the database that many times? Could I do it
with a single query, and if so, how would I know which items the user
already owns, and update the database using the
PHP-MySQL layer? To my understanding, you can't execute an UPDATE
or INSERT statement from within a SELECT statement. Nor can you
execute several statements, such as multiple UPDATE statements or
several INSERT statements, all in one.
No, that still wouldn't remove the race condition. What you want to do
is put a unique constraint on the table, then have your script just
perform the INSERT. If it fails, then you know you have a duplicate.
MySQL also support the INSERT ... ON DUPLICATE KEY UPDATE syntax I
believe.

Yup, or the shorter REPLACE INTO which does exactly the same.
REPLACE INTO does not do exactly the same as INSERT ... ON DUPLICATE
KEY UPDATE
REPLACE INTO deletes a row that is already there and replaces it with a
new row containing values only for fields you supply.
INSERT ... ON DUPLICATE KEY UPDATE simply changes any values you ask it
to on an existing row, leaving any fields that you haven't mentioned
intact.
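Illustrating the difference on the user_books-style table from this thread (a sketch; the `amount` column is the one Rik suggested earlier, and the values are made up):

```sql
-- Assume user_books(user_id, book_id, amount)
-- with UNIQUE (user_id, book_id).

-- REPLACE INTO deletes the matching row first, so an existing `amount`
-- falls back to the column default when only two fields are supplied:
REPLACE INTO user_books (user_id, book_id) VALUES (3, 1434);

-- INSERT ... ON DUPLICATE KEY UPDATE leaves unmentioned fields intact:
INSERT INTO user_books (user_id, book_id) VALUES (3, 1434)
  ON DUPLICATE KEY UPDATE amount = amount + 1;
```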

Oct 25 '06 #38
Rik
Captain Paralytic wrote:
Rik wrote:
>Chung Leong wrote:
>>Daz wrote:
The problem is when I have a few hundred results to compare.
Should I really query the database that many times? Could I do it
with a single query, and if so, how would I know which items the
user already owns, and update the database using
the PHP-MySQL layer? To my understanding, you can't execute an
UPDATE or INSERT statement from within a SELECT statement. Nor can
you execute several statements, such as multiple UPDATE statements
or several INSERT statements, all in one.

No, that still wouldn't remove the race condition. What you want to
do is put a unique constraint on the table, then have your script
just perform the INSERT. If it fails, then you know you have a
duplicate. MySQL also support the INSERT ... ON DUPLICATE KEY
UPDATE syntax I believe.

Yup, or the shorter REPLACE INTO which does exactly the same.

REPLACE INTO does not do exactly the same as INSERT ... ON DUPLICATE
KEY UPDATE
REPLACE INTO deletes a row that is already there and replaces it with
a new row containing values only for fields you supply.
It can even delete several rows and replace them with a single new row,
if the indexes require it, yes.
INSERT ... ON DUPLICATE KEY UPDATE simply changes any values you ask
it to on an existing row, leaving any fields that you haven't
mentioned intact.

Yes, you're right, the inner workings are different. However, when feeding
2 fields to a 2 column table here, the effect will be the same.
--
Grtz

Rik Wasmus
Oct 25 '06 #39
"Rik" <lu************@hotmail.comwrote:
>Chung Leong wrote:
>>
No, that still wouldn't remove the race condition. What you want to do
is put a unique constraint on the table, then have your script just
perform the INSERT. If it fails, then you know you have a duplicate.
MySQL also support the INSERT ... ON DUPLICATE KEY UPDATE syntax I
believe.

Yup, or the shorter REPLACE INTO which does exactly the same.
.... as long as one understands that "REPLACE INTO" is a MySQL-only
extension. If one plans to upgrade to Postgres or a commercial database,
one will have to remember to change that back to INSERT/UPDATE.
--
Tim Roberts, ti**@probo.com
Providenza & Boekelheide, Inc.
Oct 26 '06 #40
