Bytes IT Community

Copying a MySQL row to another table

What's the easiest way to copy a row from one table to another
(with the same schema), without knowing in advance the number and
types of the columns?

The problem: I have several sets of tables with a primary key of
'account', which contains an email address. If someone changes
their email address, I need to change the 'account' field.
Unfortunately, someone thought it would speed up lookups if instead
of having one table 'settings', they have 27 tables: settings_a,
settings_b, settings_c, ... settings_z, settings_other, based on
the first letter of 'account'. I can't change this, at least not
right away, as there is a lot of code in a commercial distribution
we bought that uses it. The suggestion to use an account number
generated by an autoincrement column as a primary key instead of a
string is also something I can't implement right away.

So, to rename an account, I need to fetch the row (using the old
value of account), change the 'account' field, and likely (if the
first character changes) move the row to a different table and
delete it from the original one.

I don't know how many fields the table has besides 'account', or
what types they are, and my code needs to keep working if someone
adds, deletes, reorders, etc. columns. (All 27 tables in a set
will keep the same schema, and 'account' won't change.) Any ideas
how to construct an appropriate INSERT query? I'm not enthusiastic
about having to parse the output of 'SHOW CREATE TABLE settings_a',
especially trying to predict the output of SHOW CREATE TABLE for
the next couple of major versions of MySQL before they are started.

Gordon L. Burditt
Jul 23 '05 #1
4 Replies


Gordon Burditt wrote:
> What's the easiest way to copy a row from one table to another
> (with the same schema), without knowing in advance the number and
> types of the columns?
>
> The problem: I have several sets of tables with a primary key of
> 'account', which contains an email address. If someone changes
> their email address, I need to change the 'account' field.
> Unfortunately, someone thought it would speed up lookups if instead
> of having one table 'settings', they have 27 tables: settings_a,
> settings_b, settings_c, ... settings_z, settings_other, based on
> the first letter of 'account'. I can't change this, at least not
> right away, as there is a lot of code in a commercial distribution
> we bought that uses it. The suggestion to use an account number
> generated by an autoincrement column as a primary key instead of a
> string is also something I can't implement right away.
>
> So, to rename an account, I need to fetch the row (using the old
> value of account), change the 'account' field, and likely (if the
> first character changes) move the row to a different table and
> delete it from the original one.
>
> I don't know how many fields the table has besides 'account', or
> what types they are, and my code needs to keep working if someone
> adds, deletes, reorders, etc. columns. (All 27 tables in a set
> will keep the same schema, and 'account' won't change.) Any ideas
> how to construct an appropriate INSERT query? I'm not enthusiastic
> about having to parse the output of 'SHOW CREATE TABLE settings_a',
> especially trying to predict the output of SHOW CREATE TABLE for
> the next couple of major versions of MySQL before they are started.
>
> Gordon L. Burditt

Gordon,

You should use an UPDATE, not an INSERT, to do this. It will
work even if account is the primary key. Write a set of SQL
statements (one per table) like so:

UPDATE <table-name> SET account = <new-value> WHERE account = <old-value>;

Since this is something you will probably have to do periodically,
create a script with one UPDATE per table that accepts new-value
and old-value as parameters.
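That per-table loop is easy to script. A minimal sketch, using Python's sqlite3 module as a stand-in for a MySQL driver such as MySQLdb (the table names follow the scheme described above; the spam_level column is invented for the demo):

```python
import sqlite3
import string

# settings_a .. settings_z plus settings_other, as described above.
TABLES = ["settings_%s" % c for c in string.ascii_lowercase] + ["settings_other"]

def rename_account(conn, old_value, new_value):
    """Run one UPDATE per table; only the table that actually
    holds the row changes anything."""
    cur = conn.cursor()
    for table in TABLES:
        # Table names can't be bound parameters, so they are
        # interpolated; the values themselves are bound.
        cur.execute("UPDATE %s SSET_MARKER account = ? WHERE account = ?".replace("SSET_MARKER", "SET") % table,
                    (new_value, old_value))
    conn.commit()

# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
for t in TABLES:
    conn.execute("CREATE TABLE %s (account TEXT PRIMARY KEY, spam_level INT)" % t)
conn.execute("INSERT INTO settings_a VALUES ('ash', 5)")
rename_account(conn, "ash", "ashley")   # same first letter, so no move needed
print(conn.execute("SELECT account FROM settings_a").fetchone()[0])  # ashley
```

Note that an UPDATE alone leaves the row in its old table when the first letter changes; that gap is what the later replies in this thread address.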

HTH
Jerry

Jul 23 '05 #2

Gordon Burditt wrote:
> The problem: I have several sets of tables with a primary key of
> 'account', which contains an email address. If someone changes
> their email address, I need to change the 'account' field.
> Unfortunately, someone thought it would speed up lookups if instead
> of having one table 'settings', they have 27 tables: settings_a,
> settings_b, settings_c, ... settings_z, settings_other, based on
> the first letter of 'account'.

Ouch. There are so many things wrong with this schema that I won't even
start. But I'm sure you already know that. Sorry you got mixed up with
such a botched project.
> So, to rename an account, I need to fetch the row (using the old
> value of account), change the 'account' field, and likely (if the
> first character changes) move the row to a different table and
> delete it from the original one.

My suggestion is to do the following three steps when you need to change
an account name:

1. Use UPDATE to change the account name in settings_a, even though it
changes the first letter, so the account doesn't belong in that table
anymore.

UPDATE settings_a SET account = 'bishop' WHERE account = 'ash';

2. Copy the record to the correct table, according to the first letter.
Use INSERT without specifying the columns, and SELECT * without
specifying the columns. If the number, type, and order of the columns
in both tables match, it _should_ work.

INSERT INTO settings_b
SELECT * FROM settings_a WHERE account = 'bishop';

3. Remove the old record from the old table.

DELETE FROM settings_a WHERE account = 'bishop';
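Those three steps can be wrapped in one function. A sketch, again with sqlite3 standing in for MySQL (the routing of non-letter accounts to settings_other and the demo schema are assumptions):

```python
import sqlite3

def table_for(account):
    """Pick the per-letter table; non-letters go to settings_other."""
    c = account[0].lower()
    return "settings_%s" % (c if "a" <= c <= "z" else "other")

def move_account(conn, old_name, new_name):
    src, dst = table_for(old_name), table_for(new_name)
    cur = conn.cursor()
    # 1. Rename in place, even though the row may now sit in the wrong table.
    cur.execute("UPDATE %s SET account = ? WHERE account = ?" % src,
                (new_name, old_name))
    if src != dst:
        # 2. Copy the whole row; relies on both tables sharing a schema.
        cur.execute("INSERT INTO %s SELECT * FROM %s WHERE account = ?"
                    % (dst, src), (new_name,))
        # 3. Remove the stale copy from the old table.
        cur.execute("DELETE FROM %s WHERE account = ?" % src, (new_name,))
    conn.commit()

# Demo with two of the 27 tables.
conn = sqlite3.connect(":memory:")
for t in ("settings_a", "settings_b"):
    conn.execute("CREATE TABLE %s (account TEXT PRIMARY KEY, spam_level INT)" % t)
conn.execute("INSERT INTO settings_a VALUES ('ash', 5)")
move_account(conn, "ash", "bishop")
print(conn.execute("SELECT account, spam_level FROM settings_b").fetchone())
# ('bishop', 5)
```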
> I don't know how many fields the table has besides 'account', or
> what types they are, and my code needs to keep working if someone
> adds, deletes, reorders, etc. columns. (All 27 tables in a set
> will keep the same schema, and 'account' won't change.)

So other people have freedom to alter the schema, and expect the
applications magically to stay up to date, but not you? That hardly
seems fair.

For what it's worth, some database programming interfaces will allow you
to simply do a "SELECT * from table WHERE pk = value" and it'll return a
result set with enough metadata so you can know the field names and
types, and maybe the column order as well.
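With Python's DB-API, for example, the column names come back in cursor.description after any SELECT (a sketch; sqlite3 stands in for a MySQL driver, and the three-column schema is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings_a"
             " (account TEXT PRIMARY KEY, spam_level INT, signature TEXT)")
conn.execute("INSERT INTO settings_a VALUES ('ash', 5, 'regards')")

cur = conn.execute("SELECT * FROM settings_a WHERE account = ?", ("ash",))
# Each entry of cursor.description is a 7-tuple; the name is item 0.
columns = [d[0] for d in cur.description]
row = cur.fetchone()
print(columns)  # ['account', 'spam_level', 'signature']
print(row)      # ('ash', 5, 'regards')
```

One caveat: sqlite3 fills only the name slot of each description tuple; a MySQL driver also reports types in the remaining slots.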

> I'm not enthusiastic
> about having to parse the output of 'SHOW CREATE TABLE settings_a',
> especially trying to predict the output of SHOW CREATE TABLE for
> the next couple of major versions of MySQL before they are started.


I wouldn't worry about that. It's the least of the problems in this
application!

Regards,
Bill K.
Jul 23 '05 #3

>> The problem: I have several sets of tables with a primary key of
>> 'account', which contains an email address. If someone changes
>> their email address, I need to change the 'account' field.
>> Unfortunately, someone thought it would speed up lookups if instead
>> of having one table 'settings', they have 27 tables: settings_a,
>> settings_b, settings_c, ... settings_z, settings_other, based on
>> the first letter of 'account'.
> Ouch. There are so many things wrong with this schema that I won't even
> start. But I'm sure you already know that. Sorry you got mixed up with
> such a botched project.


Fortunately, all I have to do is provision the webmail accounts,
and it provides scripts to do most of it. The webmail itself tracks
all the user-settable configuration stuff. The billing system
doesn't CARE about the spam filter settings (we don't charge extra
for spam filtering) so most of it can be ignored. But if a user
changes username, we want his settings to follow him. And there's
a lot of settings.

It wouldn't surprise me if benchmarks showed that those 270
tables instead of 10 actually *slow things down* overall, especially
with all the file opens.
>> So, to rename an account, I need to fetch the row (using the old
>> value of account), change the 'account' field, and likely (if the
>> first character changes) move the row to a different table and
>> delete it from the original one.


> My suggestion is to do the following three steps when you need to change
> an account name:
>
> 1. Use UPDATE to change the account name in settings_a, even though it
> changes the first letter, so the account doesn't belong in that table
> anymore.
>
> UPDATE settings_a SET account = 'bishop' WHERE account = 'ash';
>
> 2. Copy the record to the correct table, according to the first letter.
> Use INSERT without specifying the columns, and SELECT * without
> specifying the columns. If the number, type, and order of the columns
> in both tables match, it _should_ work.
>
> INSERT INTO settings_b
> SELECT * FROM settings_a WHERE account = 'bishop';
>
> 3. Remove the old record from the old table.
>
> DELETE FROM settings_a WHERE account = 'bishop';


This will work nicely. And when done in a transaction, it won't
risk leaving dregs around.
>> I don't know how many fields the table has besides 'account', or
>> what types they are, and my code needs to keep working if someone
>> adds, deletes, reorders, etc. columns. (All 27 tables in a set
>> will keep the same schema, and 'account' won't change.)


> So other people have freedom to alter the schema, and expect the
> applications magically to stay up to date, but not you? That hardly
> seems fair.


I don't want provisioning breaking because of local changes or a
new version from the vendor. Or at least not often. We do want
to track the vendor since they are coming up with some useful stuff
at far less cost than we could. I might (or might not) be the one
making some of the other changes, but if it requires synchronized
changes in provisioning and the application, it gets a lot harder
to test.

I try to write SQL to be insensitive to added fields, or the
ordering of fields. That means NO "select * from", specifically
named fields on an insert, and such. Your suggestion above follows
the spirit while breaking the letter of those rules.

> For what it's worth, some database programming interfaces will allow you
> to simply do a "SELECT * from table WHERE pk = value" and it'll return a
> result set with enough metadata so you can know the field names and
> types, and maybe the column order as well.


I thought about building up an insert query using that information.
It's a lot of trouble.
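Such a metadata-driven INSERT might look like this (a sketch; sqlite3 stands in for a MySQL driver, and the demo schema is invented). Naming every column explicitly keeps the copy working even if one table's columns end up in a different physical order:

```python
import sqlite3

def copy_row(conn, src, dst, account):
    """Copy one row between same-schema tables, naming every column
    explicitly so that physical column order doesn't matter."""
    cur = conn.execute("SELECT * FROM %s WHERE account = ?" % src, (account,))
    cols = [d[0] for d in cur.description]   # names from the result metadata
    row = cur.fetchone()
    conn.execute("INSERT INTO %s (%s) VALUES (%s)"
                 % (dst, ", ".join(cols), ", ".join("?" for _ in cols)),
                 row)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings_a (account TEXT PRIMARY KEY, spam_level INT)")
# Same columns in a different physical order:
conn.execute("CREATE TABLE settings_b (spam_level INT, account TEXT PRIMARY KEY)")
conn.execute("INSERT INTO settings_a VALUES ('bishop', 5)")
copy_row(conn, "settings_a", "settings_b", "bishop")
print(conn.execute("SELECT account, spam_level FROM settings_b").fetchone())
# ('bishop', 5)
```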

>> I'm not enthusiastic
>> about having to parse the output of 'SHOW CREATE TABLE settings_a',
>> especially trying to predict the output of SHOW CREATE TABLE for
>> the next couple of major versions of MySQL before they are started.


> I wouldn't worry about that. It's the least of the problems in this
> application!


Well, I really don't want my parser to choke just because MySQL 5.7
adds a table attribute "DEFAULT TIMEZONE", totally irrelevant to
any features I'm actually using, but making it difficult to upgrade
the server. I don't like running really ancient versions of SQL
servers and I don't want to intentionally write in something that
will make it harder. Most of the older provisioning system doesn't
even use sub-selects because the servers it used didn't support
them.

Gordon L. Burditt
Jul 23 '05 #4

Gordon Burditt wrote:
> It wouldn't surprise me if benchmarks showed that those 270
> tables instead of 10 actually *slow things down* overall, especially
> with all the file opens.

Well, it *certainly* slows down the code development and maintenance.
But I bet you're right about runtime performance as well.

> This will work nicely. And when done in a transaction, it won't
> risk leaving dregs around.

You may already know this, but using transactions depends on the tables
being stored in InnoDB format. The MyISAM storage engine doesn't
provide transaction isolation features.
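Assuming the tables are on a transactional engine, the wrapping in Python's DB-API is just commit/rollback (a sketch; sqlite3 stands in here, and real isolation on MySQL depends on the storage engine as noted above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings_a (account TEXT PRIMARY KEY, spam_level INT)")
conn.execute("CREATE TABLE settings_b (account TEXT PRIMARY KEY, spam_level INT)")
conn.execute("INSERT INTO settings_a VALUES ('ash', 5)")
conn.commit()

try:
    cur = conn.cursor()
    cur.execute("UPDATE settings_a SET account = 'bishop' WHERE account = 'ash'")
    cur.execute("INSERT INTO settings_b"
                " SELECT * FROM settings_a WHERE account = 'bishop'")
    cur.execute("DELETE FROM settings_a WHERE account = 'bishop'")
    conn.commit()          # all three steps become visible together
except sqlite3.Error:
    conn.rollback()        # on any failure, no half-moved row is left behind
print(conn.execute("SELECT account FROM settings_b").fetchone()[0])  # bishop
```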

If you are using InnoDB, at least that would mitigate the concern above
of having many open files. The default configuration for InnoDB is that
all the tables share the same physical database file.

> Well, I really don't want my parser to choke just because MySQL 5.7
> adds a table attribute "DEFAULT TIMEZONE", totally irrelevant to
> any features I'm actually using, but making it difficult to upgrade
> the server.


That's very honorable of you. But I assume other bad design decisions
exist in the database, which are more likely to prevent a smooth upgrade
to some hypothetical future major release of MySQL.

Good luck,
Bill K.
Jul 23 '05 #5
