
duplicate entry after update

Normally when I use the Adaptor Update function it works fine, but
sometimes I get a "duplicate entry" error after the merge. I believe the
problem is that the primary key is modified and the merge function is
unable to compensate for this. I see three possible solutions: one, not
letting users update the key, which is not an option; two, detecting that
the key has changed and refilling the entire dataset (any clue how?); or
lastly, figuring out why what I am doing is not working. ;>

Any suggestions?

Should I be accepting the changes before the merge?

This is how I do it:

// open the SQL connection
DataSet tmpDS = oldDS.GetChanges();
if(tmpDS != null)
{
    Adaptor.Update(tmpDS);
    oldDS.Merge(tmpDS);
    oldDS.AcceptChanges();
}
// close the SQL connection

Regards,
John
Nov 16 '05 #1


John J. Hughes II <no@invalid.com> wrote:
> Normally when I use the Adaptor update function and it work fine but
> sometimes I get a "duplicate entry" error after the merge. I believe the
> problem is because the primary key is modified and the merge function is
> unable compensate for this event. I see three possible solutions to this
> problem. One being not to let them update the key which is not an option.
> Two being to determine if the key has changed and refilling the entire
> dataset (any clue how?). Lastly determining why what I am doing is not
> working. ;>
>
> Any suggestions?

Changing the primary key is always a bad idea. Half the point (or
more!) of a primary key is to be able to identify a row - if you change
it, it clearly can't work.

> Should I be accepting the changes before the merge?


No - you need to find *some* way of preventing the primary key from
being modified. Can you add a new column and use that as the primary
key instead?

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 16 '05 #2

Hmm, while I agree with the sentiment, it's a little simplistic.

I do try to avoid changing primary keys, but short of having identities in
all my tables I don't see a practical way of doing this all the time. My
silly customers keep insisting that there be an option to change the keys.

A key is what makes a row unique, but if I want to change the key the row
would still be unique, just unique in a different way. Since the dataset
retains the original row data and the changed data, it's perfectly capable
of updating the SQL with no problem (or at least I have never found one).
The problem is when the select statement is run by the update to get the
changed rows: the updated dataset no longer retains the old key values, so
it cannot merge back into the old dataset, which still knows the original keys.
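
To spell out what I mean, here is a tiny sketch (the column and variable
names are only examples, not my real schema). Until AcceptChanges is
called, a changed DataRow still carries both versions of its values:

// row is a DataRow whose key column has been edited but not yet accepted
string oldKey = (string)row["CustomerCode", DataRowVersion.Original]; // value as fetched
string newKey = (string)row["CustomerCode", DataRowVersion.Current];  // value after the edit
// AcceptChanges() overwrites the Original version with the Current one,
// which is why the old key value is gone once the changes have been accepted.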

Now I have a theory that if I do a GetChanges and Update (which works) and
then do an AcceptChanges on the old dataset before doing the merge, I might
solve my problem. Of course I then have to do an AcceptChanges after the
merge, for a total of two AcceptChanges. But since I am not very familiar
with how the Merge function works, I am concerned about implementing this
and hoping for the best.

Do you have any opinions on this methodology?

// open the SQL connection
DataSet tmpDS = oldDS.GetChanges();
if(tmpDS != null)
{
    Adaptor.Update(tmpDS);
    oldDS.AcceptChanges(); // should set the old keys to the new key values
    oldDS.Merge(tmpDS);
    oldDS.AcceptChanges();
}
// close the SQL connection

Regards,
John
Nov 16 '05 #3

John J. Hughes II <no@invalid.com> wrote:
> Hum, whereas I agree with the sentiment it's a little simplistic.
>
> I do try to avoid changing primary keys but short of having identities in
> all my tables I don't see a practical way of doing this all the time. My
> silly customer keep insisting that there be an option to change the keys.

I don't see what's wrong with having an identity in every table. Other
than the very slight hit of having an extra column, it seems to solve
the problem very neatly. You could then have additional uniqueness
constraints on other columns if you wanted to.
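
Roughly, on the DataSet side, the idea looks like this (table and column
names are made up for the example): the primary key is an auto-increment
int that never changes, and the key your users actually edit is just a
column with a unique constraint on it.

// requires System.Data
DataTable customers = new DataTable("Customers");

// Surrogate primary key: generated automatically, never edited by users.
DataColumn id = customers.Columns.Add("CustomerId", typeof(int));
id.AutoIncrement = true;
id.ReadOnly = true;
customers.PrimaryKey = new DataColumn[] { id };

// The editable "business" key keeps its uniqueness via a constraint instead.
DataColumn code = customers.Columns.Add("CustomerCode", typeof(string));
customers.Constraints.Add(new UniqueConstraint("UQ_CustomerCode", code));

The database table would mirror this with an int identity column as the
primary key and a unique index on the old key column.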

> A key is what makes a row unique but if I want change the key the row would
> still be unique but unique in a different way.

No, a key is more than what makes a row unique - it's what allows the row to
be identified. This is very important if you ever have a situation where one
client fetches a row and then later wants to modify it. If another client has
come in in the meantime and changed the primary key, there's no way for the
first client to identify it any more.

> Since the dataset retains
> the original row data and the changed data it's perfectly capable of
> updating the SQL with no problem (or at least I have never found one). The
> problem is when the select statement is run by the update to get the changed
> rows. Now the updated dataset no longer retains the old key values so it
> can not merge back to the old dataset which still knows the orginal keys.

It sounds like you *might* be okay just by writing the SQL commands
yourself and specifying that the parameters in the where clause should
use the original values rather than the current (ie new) ones.
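
Something along these lines is what I mean - a rough sketch only, with the
table and column names invented for the example:

// requires System.Data and System.Data.SqlClient
static SqlDataAdapter BuildAdapter(SqlConnection connection)
{
    SqlDataAdapter adapter = new SqlDataAdapter(
        "SELECT CustomerCode, Name FROM Customers", connection);

    adapter.UpdateCommand = new SqlCommand(
        "UPDATE Customers SET CustomerCode = @newCode, Name = @name " +
        "WHERE CustomerCode = @oldCode", connection);
    adapter.UpdateCommand.Parameters.Add("@newCode", SqlDbType.NVarChar, 20, "CustomerCode");
    adapter.UpdateCommand.Parameters.Add("@name", SqlDbType.NVarChar, 50, "Name");

    // The WHERE parameter is bound to the row's *original* value, so the
    // UPDATE finds the row by the key it had when it was fetched, even if
    // the user has since changed it.
    SqlParameter oldCode = adapter.UpdateCommand.Parameters.Add(
        "@oldCode", SqlDbType.NVarChar, 20, "CustomerCode");
    oldCode.SourceVersion = DataRowVersion.Original;

    return adapter;
}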

> Now I have a theory that if I do a GetChanges and Update (which works) and
> then do an AcceptChanges on the old dataset before doing the merge I might
> solve my problem. Of course I then have to do an AcceptChanges after the
> merge for a total of two AcceptChanges. But since I am not very familiar
> with how the Merge function works I am concerned with implementing this and
> hoping for the best.


I don't know about Merge myself, to be honest. Calling AcceptChanges is
rarely a good idea though, in my experience.

I still *strongly* recommend changing the schema so that you've got a
primary key which *doesn't* change.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 16 '05 #4

Thanks for the response. I will continue to give it consideration. I may
have to do the identity thing on every table. My main concern with the
identity was more the size than anything else. A uniqueidentifier is a
16-byte value, and when you get to 100k-plus records in a couple of tables
you are talking about a lot of storage space. I am already getting complaints
from customers about database size, and telling them disk space is cheap does
not seem to help much. Then of course there is the whole problem of adding
it to a couple dozen existing tables on customers' systems.

Oh well... Live and learn.

Thanks again,
John
Nov 16 '05 #5

John J. Hughes II <no@invalid.com> wrote:
> Thanks for the response. I will continue to give it consideration. I may
> have to do the identity thing on every table. My main concern with the
> identity was the more the size then anything else. A uniqueidentifier is a
> 16 byte value and when you get to the 100k plus records on a couple tables
> you are talking a lot of storage space. I already am getting complaints
> from customers about database size and telling them disk space is cheap does
> not seem to help much. Then of course there is the whole problem of adding
> it to a couple dozen existing tables on customer's systems


Well, you don't need to use uniqueidentifier - you could use an
identity int column for just 4 bytes per record. If you've already got
a primary key which includes 5 fields, I wouldn't have thought adding
an extra 4 bytes would make that much difference.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 16 '05 #6

Yeah, four bytes would not be as bad. I have been working on it this weekend
and I think a combination would be best. Normally on my larger tables there
is little real reason to change the data (mostly they are events that have
occurred), so I may leave them as they are and require the user to edit the
key in a less direct way. The tables that are smaller and more likely to
need the keys changed will use the uniqueidentifier. I have some other
tables in which the keys change a lot but there are few new records, so
I might use an int identity for those.

I prefer the uniqueidentifier in most cases because the seed/increment
thing has gotten me into trouble on more than a few occasions. System.Guid
seems to work pretty well for generating keys on new records.
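
For new records I just do something like this (the table and column names
are only examples):

// requires System.Data
DataRow row = customers.NewRow();
row["CustomerId"] = Guid.NewGuid(); // client-generated key, no identity seed/increment to manage
customers.Rows.Add(row);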

Oh well, time goes forward and some people have to be dragged along kicking
and screaming. :)

Thanks again,
John
Nov 16 '05 #7
