Bytes IT Community

Best way to override AllowNull in a schema?

I'm making a CRUD screen for an Oracle database... but the problem is that
the primary key in that table is populated by an autonumber, which in
Oracle is done with triggers and sequences. Suffice to say that ADO.Net
1.1 is unaware that this column is, despite appearances, optional - which
is a problem when creating new rows. I was wondering what the best way to
remove the constraint is? I've been bitten too many times by assuming the
direct approach works with ADO.Net - there are too many hidden gotchas. My
first instinct was just to set the DataColumn's AllowDBNull property - but
the constraint still blows up on GetChanges() - which I'm using to check
whether any real constraints are being violated.

Any pointers?

Nov 17 '06 #1
13 Replies


Hi Martin,

Commonly, one would just set AutoIncrement, ReadOnly and Unique all to true on
the sequenced column.

I'm not an Oracle user myself, but I know this approach works with
auto-numbered columns in SQL Server, and I can't imagine it being
different for Oracle: the issue here is really how ADO.NET handles the
concept, regardless of the RDBMS.
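
In code, that configuration looks something like this (a sketch with a
made-up "ID" column, not tied to your schema):

```csharp
// Requires: using System; using System.Data;
DataTable table = new DataTable("MyTable");
DataColumn id = table.Columns.Add("ID", typeof(int));
table.Columns.Add("Value", typeof(string));

// The three properties suggested above: the column generates its own
// temporary values, can't be edited, and must stay unique locally.
id.AutoIncrement = true;
id.ReadOnly = true;
id.Unique = true;

// Pass null for the key and the DataTable fills in a placeholder.
table.Rows.Add(null, "first");  // ID = 0
table.Rows.Add(null, "second"); // ID = 1
```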

--
Dave Sexton

Nov 17 '06 #2

Just gave that a shot, with a few variations on the autoincrement
properties - it doesn't seem to allow the constraint to be violated,
and it doesn't seem to be sufficiently intelligent to determine
appropriate values for the insertion.

I think I'll have to give up and use EnforceConstraints = false.
Better to get an error on database update that I can work with than
risk the application getting trapped in some sort of state where it
can't figure out a valid value for the primary key when the database
will overwrite it anyways.

Thanks for the pointers, though. Hopefully we'll be using SQL Server
at some point in the future and then I'll be able to escape from the
imperfect mapping of ADO.Net to Oracle.

Nov 17 '06 #3

Hi Martin,

> Just gave that a shot, with a few variations on the autoincrement
> properties - it doesn't seem to allow the constraint to be violated,

But that should be a good thing ;)

> and it doesn't seem to be sufficiently intelligent to determine
> appropriate values for the insertion.

No, it doesn't insert that value at all, since it represents a sequenced
column whose value is generated by Oracle. Its purpose in the DataTable is
to provide a unique key that serves as a placeholder until the data is
inserted into the database, at which point the DataRow should be updated
with the correct, Oracle-generated value.

> I think I'll have to give up and use EnforceConstraints = false.

Eeek.

> Better to get an error on database update that I can work with than
> risk the application getting trapped in some sort of state where it
> can't figure out a valid value for the primary key when the database
> will overwrite it anyways.

The database shouldn't be overwriting anything, since the value shouldn't
be supplied to the database during an insert at all - it has no meaning
there. The value should, however, be updated in the DataRow from the data
source itself after the insert.

When you insert the record you should return a result set with the new
sequence value generated by Oracle:

-- T-SQL, but hopefully you'll get the idea:

INSERT table SELECT @non_key_1, @non_key_2;

SELECT key, non_key_1, non_key_2 FROM table
WHERE key = SCOPE_IDENTITY();

The DbDataAdapter is supposed to handle the update of the underlying
DataRow automatically when the InsertCommand's UpdatedRowSource property
is set appropriately (SqlCommand defaults to "Both").

I'm not sure whether the OracleDataAdapter provides this functionality,
but it's worth testing. If Oracle doesn't provide this capability then I'm
not sure what you can do.
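
I don't have Oracle to test against, but if the OracleDataAdapter follows
the same pattern, an InsertCommand along these lines could push the
sequence value back into the DataRow via Oracle's RETURNING clause and an
output parameter (an untested sketch - the table, column, and parameter
names here are all invented):

```csharp
// Requires: using System.Data; using System.Data.OracleClient;
// (the Microsoft Oracle provider that ships with the framework)
OracleCommand insert = new OracleCommand(
    "INSERT INTO my_table (non_key_1, non_key_2) " +
    "VALUES (:p1, :p2) RETURNING id INTO :new_id");

insert.Parameters.Add("p1", OracleType.VarChar, 50, "NON_KEY_1");
insert.Parameters.Add("p2", OracleType.VarChar, 50, "NON_KEY_2");

// Output parameter bound to the key column; with UpdatedRowSource set to
// OutputParameters, the adapter copies the returned value into the DataRow.
OracleParameter newId = insert.Parameters.Add("new_id", OracleType.Number);
newId.Direction = ParameterDirection.Output;
newId.SourceColumn = "ID";

insert.UpdatedRowSource = UpdateRowSource.OutputParameters;
```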
> Thanks for the pointers, though. Hopefully we'll be using SQL Server
> at some point in the future and then I'll be able to escape from the
> imperfect mapping of ADO.Net to Oracle.

NP, GL ;)

--
Dave Sexton
Nov 17 '06 #4

Unfortunately, at least when mapping to Oracle, the AutoIncrement
properties do exactly what they sound like - rather than keep the field
empty and ignore its non-empty/unique constraints, knowing that the
database will take care of it, ADO.Net tries to autogenerate the new value
for the autoincrement column itself.... badly. So, what happens is I pop
open my form to create the new row, I edit the row... and when I try to
add the row to the dataset, I get an error about violating the unique key
constraint. ADO.Net had generated a value of "2" for the autoincrement
column - I have no idea how it came up with that number, but it sure isn't
unique.

There are numerous properties that can be used to tweak the
autoincrementing algorithm.... I suppose I could do something horrifically
hackish like giving it a negative seed and incrementing in the reverse
direction, since Oracle will be throwing out that illegal value anyways.
That way I'd avoid key conflicts when inserting multiple rows, and never
conflict with anything coming out of the DB.... but it's just such an ugly
solution.
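
In DataColumn terms, that reverse-increment idea would be (column names
invented): placeholder keys run -1, -2, -3, ... so they can never collide
with the positive keys the Oracle sequence hands out.

```csharp
// Requires: using System; using System.Data;
DataTable table = new DataTable("MyTable");
DataColumn id = table.Columns.Add("ID", typeof(int));
table.Columns.Add("Value", typeof(string));

id.AutoIncrement = true;
id.AutoIncrementSeed = -1;  // start below zero...
id.AutoIncrementStep = -1;  // ...and count downward
id.ReadOnly = true;
id.Unique = true;

table.Rows.Add(null, "a"); // ID = -1
table.Rows.Add(null, "b"); // ID = -2
```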

Nov 17 '06 #5

Oop, I missed the tail end of your message. Yes, the value should always
be null until the commit, when it should be populated.... but that
behaviour is not what's happening. If I create an empty DataRow and set
the other (non-auto-increment key) fields, it attempts to put a value in
the key field too - either way it violates constraints. I can't even get
to the update, because it complains the moment I add the row to the table.
The behaviour I want is obvious - the autoincrement field should be stuck
at DBNull until update, at which time the field should be fixed at the
value that Oracle provided. Unfortunately, for some reason DataSets don't
provide that mechanism for me - maybe it only works like that when using
SQL Server - I don't know well enough how DataSets work under the hood.

Nov 17 '06 #6

Hi Martin,
> Oop, I missed the tail end of your message. Yes, the value should
> always be null until the commit, when it should be populated.... but
> this behaviour is not happening.

No, it shouldn't be null. If the column is Unique, then DBNull for every
new row wouldn't satisfy that constraint. AutoIncrement creates a
temporary number that preserves the uniqueness of newly added rows until
you can perform the update and retrieve the actual value. The behavior
you've described is exactly what I would expect from an AutoIncrement
column.

> If I create an empty datarow and set
> the other (non-autoincrementing-key-fields) it attempts to put a value
> in the key field too - either way it violates constraints. I can't
> even get to update, because it complains the moment I add the row to
> the table.

The values chosen by the DataTable for newly added rows should always be
unique (I've never had a problem), so I'm curious to know what error
you're getting when adding a new row. Care to post the code and the
exception?

> The behaviour I want is obvious - the autoincrement field
> should be stuck at DBNull until update, at which time the field should
> be fixed at the value that Oracle provided

That would violate the Unique constraint, so it's impossible. If you want
to relax the constraints then you don't need the AutoIncrement column
either, but I wouldn't recommend that approach.

> Unfortunately, for some
> reasons datasets don't provide that mechanism for me - maybe it only
> works like that when using SqlServer - I don't know well enough how
> DataSets work under the hood.

This particular issue has nothing to do with the RDBMS. The DataSet and
the database are distinct entities: AutoIncrement and its behavior belong
solely to the DataSet, which has no knowledge of the database or the data
provider being used.

--
Dave Sexton
Nov 17 '06 #7

Ah, I misunderstood. I imagined the behaviour for an autoincrement to
be "must be null for new rows, and apply primary key constraint for
existing ones" or something. Either way, I can't post the code because
I'm a new developer working on a large old company app (ported forward
from VB), trying to reapply their old crudscreen tool to a new table
I've added. So, my tweaks to the tool are here-there-and-everywhere,
unfortunately.

The crux of the matter is that the autoincrement system creates a
number that violates the uniqueness constraint of the table. I have a
table with 3 rows, primary key values 1, 2 and 3. When I create a new
row and attempt to add it to my dataset, it has a value of 2 that the
autoincrement column has autogenerated - which, logically, the system
rejects.

The idea of having to generate a
"unique-within-my-dataset-but-not-necessarily-within-the-actual-table"
value that will be replaced anyways seems very hackish, but I suppose I
have to satisfy the constraint on the dataset.

I'll try the "start at -1 and decrement" approach.

Nov 17 '06 #8

Hi Martin,
> Unfortunately, at least when mapping to Oracle, the AutoIncrement
> properties do exactly what they sound like - rather than keep the field
> empty and ignore its nonempty/unique constraints, knowing that the
> database will take care of it, the ADO.Net tries to autogenerate the
> new value for the autoincrement column itself.... badly.

It's just auto-incrementing, as you requested :)

> So, what
> happens is I pop open my form to create the new row, I edit the row...
> and when I try to add the row to the dataset, I get an error of
> violating the unique key constraint

You might want to post that code.

> ADO.Net had generated a value of
> "2" for the autoincrement column - I have no idea how it came up with
> that number, but it sure isn't unique.

You can verify its uniqueness by viewing the DataSet in a watch window.
In VS 2005 there is a DataSet Visualizer (the little magnifying glass in
the watch row that contains the variable you entered) that makes this
easy.

I have to assume it's unique if it was generated by the DataTable.

> There are numerous fields that can be used to tweak the
> autoincrementing algorithm.... I suppose I could do something
> horrifically hackish like giving it a negative default and incrementing
> it in reverse direction, since Oracle will be throwing out that illegal
> value anyways. That way I'd avoid column conflicts when inserting
> multiple values, and never conflict with anything coming out of the
> DB.... but it's just such an ugly solution.

Oracle shouldn't have to throw anything away. Again, the auto-incremented
values in your DataTable shouldn't be sent to Oracle; they are meaningless
to Oracle, since it generates its own sequence value.

Once Oracle generates the value, you can return it in a result set like I
showed in my T-SQL example, and the OracleDataAdapter should update the
DataRow (as long as OracleDataAdapter conforms to the standard ADO.NET
DbDataAdapter logic for updating the row source, you shouldn't have to do
anything in code at all).

--
Dave Sexton
Nov 17 '06 #9

Looking it over, I think I've figured out what's happening. In Oracle,
the autonumber sequence is external to the column - it's just a
sequencer object. In Sql Server, the sequence is tied to the column.
Thus, when ADO.Net fetches the schema for the dataset, it can
automatically fetch the maximum value of the autonumber and properly
configure the AutoIncrementSeed, AutoIncrementStep values based on the
schema.

In Oracle, it can't get that information automatically. I would have
to call a second query to find out what the current value of my
sequencer is to get the current AutoIncrementSeed. I could do a decent
faking of it with something like Max(myAutonumberColumnValue) + 1 as
the seed and set that as the AutoIncrementSeed.... either way though,
I'm creating values that will be thrown away when the data enters the
table, so I'm just using negatives. It works well.
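
As a sketch, that Max()-based seeding would look something like this
(connection string, table, and column names are all hypothetical, and it's
untested against a real Oracle instance) - note it also races with other
sessions inserting rows, which is part of why the negative-seed approach
is safer:

```csharp
// Requires: using System; using System.Data.OracleClient;
// Untested sketch - all names here are hypothetical.
using (OracleConnection conn = new OracleConnection(connectionString))
{
    conn.Open();
    OracleCommand cmd = new OracleCommand(
        "SELECT NVL(MAX(id), 0) + 1 FROM my_table", conn);
    // idColumn is the AutoIncrement DataColumn configured earlier.
    idColumn.AutoIncrementSeed = Convert.ToInt64(cmd.ExecuteScalar());
}
```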

Thanks for all your help. Personally, I think the idea of applying
constraints to values that will never actually enter the database is
silly - but you've been extremely helpful in explaining how things work
within the DataSet (and its relation to the data adapters), and we've
found a workable solution to my problem.

Nov 17 '06 #10

Hi Martin,
> In Sql Server, the sequence is tied to the column.
> Thus, when ADO.Net fetches the schema for the dataset, it can
> automatically fetch the maximum value of the autonumber and properly
> configure the AutoIncrementSeed, AutoIncrementStep values based on the
> schema.

No, this has nothing to do with the RDBMS. I'm quite sure :)

I've never had such an issue with an auto-incrementing DataColumn.

As a matter of fact, if there are already records in the DataTable when a
new record is added, the greatest number that is still unique seems to be
chosen automatically.

I'm curious to see exactly what you are doing, but I understand if you
can't post code that's not yours to post.

Maybe this will help:

DataSet data = new DataSet();
DataTable table = data.Tables.Add();
DataColumn cInt32 = table.Columns.Add("ID", typeof(int));
DataColumn cString = table.Columns.Add("Value", typeof(string));

cInt32.AllowDBNull = false;
cInt32.AutoIncrement = true;
cInt32.Unique = true;
cInt32.ReadOnly = true;

DataRow row1 = table.Rows.Add(null, "1"); // id=0
DataRow row2 = table.Rows.Add(null, "2"); // id=1
DataRow row3 = table.Rows.Add(null, "3"); // id=2

Console.WriteLine(row1["ID"]); // id=0
Console.WriteLine(row2["ID"]); // id=1
Console.WriteLine(row3["ID"]); // id=2

DataRow row4 = table.NewRow(); // id=3
row4["Value"] = "4";

Console.WriteLine(row4["ID"]);

table.Rows.Add(row4); // id=3

table.AcceptChanges();

Console.WriteLine(row1["ID"]); // id=0
Console.WriteLine(row2["ID"]); // id=1
Console.WriteLine(row3["ID"]); // id=2
Console.WriteLine(row4["ID"]); // id=3

Output:

0
1
2
3
0
1
2
3

--
Dave Sexton

Nov 17 '06 #11

Gotta run, but try this, and you'll see that you must *manually* set the
AutoIncrementSeed value, which I assume ADO.NET pulls from the schema in
SQL Server:

DataSet data = new DataSet();
DataTable table = data.Tables.Add();
DataColumn cInt32 = table.Columns.Add("ID", typeof(int));
DataColumn cString = table.Columns.Add("Value", typeof(string));

cInt32.AllowDBNull = false;
cInt32.Unique = true;
cInt32.ReadOnly = true;

DataRow row1 = table.Rows.Add(new Object[] {0, "1"}); // id=0
DataRow row2 = table.Rows.Add(new Object[] {1, "1"}); // id=1
DataRow row3 = table.Rows.Add(new Object[] {2, "1"}); // id=2

cInt32.AutoIncrement = true;

Console.WriteLine(row1["ID"]); // id=0
Console.WriteLine(row2["ID"]); // id=1
Console.WriteLine(row3["ID"]); // id=2

DataRow row4 = table.NewRow(); // seed was never set - watch what ID this gets
row4["Value"] = "4";

Console.WriteLine(row4["ID"]);

table.Rows.Add(row4);

table.AcceptChanges();
Console.ReadLine();


Thanks for all your help. Personally, I'm thinking that the idea of
applying constraints to values that will never actually enter the
database is silly - but you've been extremely helpful at understanding
how things work within the dataset (and it's relation to the data
adaptors) - and found a workable solution to my problem.

Martin Z wrote:
Ah, I misunderstood. I imagined the behaviour for an autoincrement to
be "must be null for new rows, and apply primary key constraint for
existing ones" or something. Either way, I can't post the code because
I'm a new developer working on a large old company app (ported forward
from VB), trying to reapply their old crudscreen tool to a new table
I've added. So, my tweaks to the tool are here-there-and-everywhere,
unfortunately.

The crux of the matter is that the autoincrement system creates a
number that violates the uniqueness constraint of the table. I have a
table with 3 rows, primary key values 1, 2 and 3. When I create a new
row and attempt to add it to my dataset, it has a value of 2 that the
autoincrement column has autogenerated - which, logically, the system
rejects.

The idea of having to generate a
"unique-within-my-dataset-but-not-necessarily-within-the-actual-table"
value that will be replaced anyways seems very hackish, but I suppose I
have to satisfy the constraint on the dataset.

I'll try the "start at -1 and decrement" approach.

Dave Sexton wrote:
Hi Martin,

Oop, I missed the tail end of your message. Yes, the value should
always be null until the commit, when it should be populated.... but
this behaviour is not happening.

No, it shouldn't be null. If the column is Unique, then DBNull for every
new
value wouldn't satisfy that constraint. AutoIncrement creates a new
number
that is temporary, and preserves the uniqueness of newly added rows until
you
can perform the update and retrieve the actual value. The behavior you
have
described is exactly what I would expect from an AutoIncrement column.

If I create an empty datarow and set
the other (non-autoincrementing-key-fields) it attempts to put a value
in the key field too - either way it violates constraints. I can't
even get to update, because it complains the moment I add the row to
the table.

The values chosen by the DataTable for newly added rows should always be
unique (I've never had a problem), so I'm curious to know what error
you're
getting when adding a new row. Care to post the code and the exception?

The behaviour I want is obvious - the autoincrement field
should be stuck at DBNull until update, at which time the field should
be fixed at the value that Oracle provided

That would violate the Unique constraint, so it's impossible. If you
want to
relax the constraints then you don't need the AutoIncrement column
either, but
I wouldn't recommend that approach.

Unfortunately, for some
reasons datasets don't provide that mechanism for me - maybe it only
works like that when using SqlServer - I don't know well enough how
DataSets work under the hood.

This particular issue you're having has nothing to do with the RDBMS.
The
DataSet and the database are distinct entities. AutoIncrement and its
behavior belongs solely to the DataSet, which has no relationship to the
database or the data provider being used.

--
Dave Sexton
Nov 17 '06 #12

Hi Martin,

I can assure you that there is no automatic "schema-pulling" for
AutoIncrementSeed, even when FillSchema is called on an SqlDataAdapter. It
even states this in the docs:

"FillSchema Method"
http://msdn2.microsoft.com/en-us/library/152bda9x.aspx

Normally, you would just ensure that the DataColumn is completely initialized
before it's used. Using a strongly-typed DataSet will alleviate this problem
for you. Or just set the column's AutoIncrement property before the DataTable
is used.

If not, you have to manually set the seed as you suggested. You could do this
dynamically if you wanted to:
cInt32.AutoIncrement = true;
// add this line:
cInt32.AutoIncrementSeed = (int) table.Compute("MAX(ID)", null) + 1;

No negative numbers or seed hacking required (and no interaction with the
database what-so-ever :)
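
To tie it together, here's a minimal end-to-end sketch of seeding an
auto-increment column from data that was loaded before the column was
configured. The table and column names are just placeholders, not your
actual schema:

```csharp
using System;
using System.Data;

class SeedFromLoadedData
{
    static void Main()
    {
        DataTable table = new DataTable("T");
        DataColumn id = table.Columns.Add("ID", typeof(int));
        table.Columns.Add("Value", typeof(string));

        // Simulate rows that a data adapter loaded *before*
        // AutoIncrement was configured on the key column.
        table.Rows.Add(new object[] { 1, "a" });
        table.Rows.Add(new object[] { 2, "b" });
        table.Rows.Add(new object[] { 3, "c" });

        // Constraints can go on after the load.
        id.AllowDBNull = false;
        id.Unique = true;
        id.ReadOnly = true;

        // Seed past the existing maximum so new in-memory rows
        // never collide with keys already in the table.
        id.AutoIncrement = true;
        id.AutoIncrementSeed = (int) table.Compute("MAX(ID)", null) + 1;

        DataRow row = table.NewRow();   // gets ID = 4
        row["Value"] = "d";
        table.Rows.Add(row);

        Console.WriteLine(row["ID"]);   // 4
    }
}
```

The Compute call is the same trick as above; everything happens in memory,
with no round-trip to the database.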

--
Dave Sexton

"Martin Z" <ma***********@gmail.comwrote in message
news:11**********************@e3g2000cwe.googlegro ups.com...
Gotta run, but try this, and you'll see that you must *manually* set
the IncrementSeed value, which I assume it pulls from schema in SQL
Server:

DataSet data = new DataSet();
DataTable table = data.Tables.Add();
DataColumn cInt32 = table.Columns.Add("ID", typeof(int));
DataColumn cString = table.Columns.Add("Value", typeof(string));

cInt32.AllowDBNull = false;
cInt32.Unique = true;
cInt32.ReadOnly = true;

DataRow row1 = table.Rows.Add(new Object[] {0, "1"}); // id=0
DataRow row2 = table.Rows.Add(new Object[] {1, "1"}); // id=1
DataRow row3 = table.Rows.Add(new Object[] {2, "1"}); // id=2

cInt32.AutoIncrement = true;

Console.WriteLine(row1["ID"]); // id=0
Console.WriteLine(row2["ID"]); // id=1
Console.WriteLine(row3["ID"]); // id=2

DataRow row4 = table.NewRow(); // id=0, not 3 - the seed ignores the existing values
row4["Value"] = "4";

Console.WriteLine(row4["ID"]);

table.Rows.Add(row4); // throws - id 0 is already in use

table.AcceptChanges();
Console.ReadLine();


Nov 17 '06 #13

Ah. There was my problem - I was setting the AutoIncrement property
_after_ the data was loaded from the table, and assuming that it would
consider the pre-existing values within the dataset. When that failed,
I assumed that the database must provide it with additional information
to get a valid AutoIncrementSeed, the same way it uses the database to
set the AllowDbNull property.

Your explanation is obvious in retrospect. As data is added to
the dataset, the autoincrement system simply tracks the maximum and uses
the next value beyond it. Unfortunately, it doesn't do that automatically when
AutoIncrement is set to true on a DataTable that already contains data -
which is my approach. The problem, of course, being that the dataset
is coming into the CRUD screen system from an external source - the
system I'm working on wasn't involved in fetching the dataset.

Frustrating how this stuff makes perfect sense once one realises how it
was intended to be used, as opposed to the bad perversions of it that I
tend to do.

Obviously, I'm new to ADO.Net, so I have to say "OOOOOOOOOHHHH!!!" to
the Compute thing. I was expecting to code that bit manually, which is
why I said screw-it and did the negative-hack. I am really looking
forward to 3.0, when stuff like that will become first-class members of
the language instead of relying on passing expression strings into
functions.
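
In case it helps anyone else, the negative-seed hack boils down to this
(sketch only - "ID" stands in for whatever your real key column is):

```csharp
using System;
using System.Data;

class NegativeSeedDemo
{
    static void Main()
    {
        DataTable table = new DataTable();
        DataColumn id = table.Columns.Add("ID", typeof(int));
        table.Columns.Add("Value", typeof(string));

        // Rows already loaded from the database hold real, positive keys.
        table.Rows.Add(new object[] { 1, "a" });
        table.Rows.Add(new object[] { 2, "b" });

        // Temporary keys count down from -1, so they can never collide
        // with anything the Oracle sequence will ever hand out.
        id.AutoIncrement = true;
        id.AutoIncrementSeed = -1;
        id.AutoIncrementStep = -1;

        DataRow r = table.NewRow();     // ID = -1
        r["Value"] = "new";
        table.Rows.Add(r);

        Console.WriteLine(r["ID"]);     // -1 until Update swaps in the real key
    }
}
```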

Thanks again for your help, and putting up with my stubbornness.

Dave Sexton wrote:
Hi Martin,

I can assure you that there is no automatic "schema-pulling" for
AutoIncrementSeed, even when FillSchema is called on an SqlDataAdapter. It
even states this in the docs:

"FillSchema Method"
http://msdn2.microsoft.com/en-us/library/152bda9x.aspx

Normally, you would just ensure that the DataColumn is completely initialized
before it's used. Using a strongly-typed DataSet will alleviate this problem
for you. Or just set the column's AutoIncrement property before the DataTable
is used.

If not, you have to manually set the seed as you suggested. You could do this
dynamically if you wanted to:
cInt32.AutoIncrement = true;

// add this line:
cInt32.AutoIncrementSeed = (int) table.Compute("MAX(ID)", null) + 1;

No negative numbers or seed hacking required (and no interaction with the
database what-so-ever :)

--
Dave Sexton

"Martin Z" <ma***********@gmail.comwrote in message
news:11**********************@e3g2000cwe.googlegro ups.com...
Gotta run, but try this, and you'll see that you must *manually* set
the IncrementSeed value, which I assume it pulls from schema in SQL
Server:

DataSet data = new DataSet();
DataTable table = data.Tables.Add();
DataColumn cInt32 = table.Columns.Add("ID", typeof(int));
DataColumn cString = table.Columns.Add("Value", typeof(string));

cInt32.AllowDBNull = false;
cInt32.Unique = true;
cInt32.ReadOnly = true;

DataRow row1 = table.Rows.Add(new Object[] {0, "1"}); // id=0
DataRow row2 = table.Rows.Add(new Object[] {1, "1"}); // id=1
DataRow row3 = table.Rows.Add(new Object[] {2, "1"}); // id=2

cInt32.AutoIncrement = true;

Console.WriteLine(row1["ID"]); // id=0
Console.WriteLine(row2["ID"]); // id=1
Console.WriteLine(row3["ID"]); // id=2

DataRow row4 = table.NewRow(); // id=3
row4["Value"] = "4";

Console.WriteLine(row4["ID"]);

table.Rows.Add(row4); // id=3

table.AcceptChanges();
Console.ReadLine();

Dave Sexton wrote:
Hi Martin,

In Sql Server, the sequence is tied to the column.
Thus, when ADO.Net fetches the schema for the dataset, it can
automatically fetch the maximum value of the autonumber and properly
configure the AutoIncrementSeed, AutoIncrementStep values based on the
schema.

No, this has nothing to do with the RDBMS. I'm quite sure :)

I've never had such an issue with an auto-incrementing DataColumn.

As a matter of fact, if there are already records in the DataTable when a
new
record is added, the greatest number that is still unique is chosen
automatically, it seems.

I'm curious to see exactly what you are doing, but I understand if you
can't
post code that's not yours to post.

Maybe this will help:

DataSet data = new DataSet();
DataTable table = data.Tables.Add();
DataColumn cInt32 = table.Columns.Add("ID", typeof(int));
DataColumn cString = table.Columns.Add("Value", typeof(string));

cInt32.AllowDBNull = false;
cInt32.AutoIncrement = true;
cInt32.Unique = true;
cInt32.ReadOnly = true;

DataRow row1 = table.Rows.Add(null, "1"); // id=0
DataRow row2 = table.Rows.Add(null, "2"); // id=1
DataRow row3 = table.Rows.Add(null, "3"); // id=2

Console.WriteLine(row1["ID"]); // id=0
Console.WriteLine(row2["ID"]); // id=1
Console.WriteLine(row3["ID"]); // id=2

DataRow row4 = table.NewRow(); // id=3
row4["Value"] = "4";

Console.WriteLine(row4["ID"]);

table.Rows.Add(row4); // id=3

table.AcceptChanges();

Console.WriteLine(row1["ID"]); // id=0
Console.WriteLine(row2["ID"]); // id=1
Console.WriteLine(row3["ID"]); // id=2
Console.WriteLine(row4["ID"]); // id=3

Output:

0
1
2
3
0
1
2
3

--
Dave Sexton

"Martin Z" <ma***********@gmail.comwrote in message
news:11**********************@m7g2000cwm.googlegro ups.com...
Looking it over, I think I've figured out what's happening. In Oracle,
the autonumber sequence is external to the column - it's just a
sequencer object. In Sql Server, the sequence is tied to the column.
Thus, when ADO.Net fetches the schema for the dataset, it can
automatically fetch the maximum value of the autonumber and properly
configure the AutoIncrementSeed, AutoIncrementStep values based on the
schema.

In Oracle, it can't get that information automatically. I would have
to call a second query to find out what the current value of my
sequencer is to get the current AutoIncrementSeed. I could do a decent
faking of it with something like Max(myAutonumberColumnValue) + 1 as
the seed and set that as the AutoIncrementSeed.... either way though,
I'm creating values that will be thrown away when the data enters the
table, so I'm just using negatives. It works well.

Thanks for all your help. Personally, I'm thinking that the idea of
applying constraints to values that will never actually enter the
database is silly - but you've been extremely helpful at understanding
how things work within the dataset (and it's relation to the data
adaptors) - and found a workable solution to my problem.

Martin Z wrote:
Ah, I misunderstood. I imagined the behaviour for an autoincrement to
be "must be null for new rows, and apply primary key constraint for
existing ones" or something. Either way, I can't post the code because
I'm a new developer working on a large old company app (ported forward
from VB), trying to reapply their old crudscreen tool to a new table
I've added. So, my tweaks to the tool are here-there-and-everywhere,
unfortunately.

The crux of the matter is that the autoincrement system creates a
number that violates the uniqueness constraint of the table. I have a
table with 3 rows, primary key values 1, 2 and 3. When I create a new
row and attempt to add it to my dataset, it has a value of 2 that the
autoincrement column has autogenerated - which, logically, the system
rejects.

The idea of having to generate a
"unique-within-my-dataset-but-not-necessarily-within-the-actual-table"
value that will be replaced anyways seems very hackish, but I suppose I
have to satisfy the constraint on the dataset.

I'll try the "start at -1 and decrement" approach.

Dave Sexton wrote:
Hi Martin,

Oop, I missed the tail end of your message. Yes, the value should
always be null until the commit, when it should be populated.... but
this behaviour is not happening.

No, it shouldn't be null. If the column is Unique, then DBNull for
every
new
value wouldn't satisfy that constraint. AutoIncrement creates a new
number
that is temporary, and preserves the uniqueness of newly added rows
until
you
can perform the update and retrieve the actual value. The behavior
you
have
described is exactly what I would expect from an AutoIncrement column.

If I create an empty datarow and set
the other (non-autoincrementing-key-fields) it attempts to put a
value
in the key field too - either way it violates constraints. I can't
even get to update, because it complains the moment I add the row to
the table.

The values chosen by the DataTable for newly added rows should always
be
unique (I've never had a problem), so I'm curious to know what error
you're
getting when adding a new row. Care to post the code and the
exception?

The behaviour I want is obvious - the autoincrement field
should be stuck at DBNull until update, at which time the field
should
be fixed at the value that Oracle provided

That would violate the Unique constraint, so it's impossible. If you
want to
relax the constraints then you don't need the AutoIncrement column
either, but
I wouldn't recommend that approach.

Unfortunately, for some
reasons datasets don't provide that mechanism for me - maybe it only
works like that when using SqlServer - I don't know well enough how
DataSets work under the hood.

This particular issue you're having has nothing to do with the RDBMS.
The
DataSet and the database are distinct entities. AutoIncrement and its
behavior belongs solely to the DataSet, which has no relationship to
the
database or the data provider being used.

--
Dave Sexton
Nov 18 '06 #14
