Bytes | Software Development & Data Engineering Community

Best way to override AllowNull in a schema?

I'm making a CRUD screen for an Oracle database... but the problem is that
the primary key in that table is populated via an autonumber, which in
Oracle is done with triggers and sequences. Suffice it to say that
ADO.Net 1.1 is unaware that this column is, despite appearances,
optional - which is a problem when creating new rows. I was wondering
what the best way to remove the constraint is? I've been bitten too
many times by assuming the direct approach will work with ADO.Net - there
are too many hidden gotchas. My first instinct is to just set the
DataColumn's AllowDBNull property - but the constraint still blows up on
GetChanges() - which I'm using to check whether any real constraints are
being violated.

Any pointers?
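(For context, the autonumber here is the usual Oracle trigger-plus-sequence pattern - roughly the sketch below, with the real names changed; my_table, my_table_seq and the id column are stand-ins:)

```sql
CREATE SEQUENCE my_table_seq;

CREATE OR REPLACE TRIGGER my_table_bi
BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
  -- Fill the key from the sequence when the client didn't supply one
  IF :NEW.id IS NULL THEN
    SELECT my_table_seq.NEXTVAL INTO :NEW.id FROM dual;
  END IF;
END;
/
```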

Nov 17 '06 #1
13 replies, 1884 views
Hi Martin,

Commonly, one would just set AutoIncrement, ReadOnly and Unique all to true on
the sequenced column.

I'm not an Oracle user myself, but I know this approach works with
auto-numbered columns in Sql Server, and I can't imagine it being different for
Oracle, since the issue here is really about how ADO.NET handles that
concept, regardless of the RDBMS.
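In code, that setup would be something along these lines - a minimal sketch, assuming a hypothetical table with an "ID" key column and a "Name" column:

```csharp
using System;
using System.Data;

class AutoIncrementSetup
{
    static void Main()
    {
        DataTable table = new DataTable("MyTable"); // hypothetical names
        DataColumn id = table.Columns.Add("ID", typeof(int));
        table.Columns.Add("Name", typeof(string));

        id.AutoIncrement = true; // the DataTable generates placeholder keys
        id.ReadOnly = true;      // the real key comes from the database
        id.Unique = true;        // keeps new rows distinct within the DataSet
        table.PrimaryKey = new DataColumn[] { id };

        // Passing null for ID lets the placeholder be generated:
        DataRow row = table.Rows.Add(null, "first");
        Console.WriteLine(row["ID"]); // 0 (the default AutoIncrementSeed)
    }
}
```

The placeholder never needs to reach the database; it only keeps new rows unique on the client until the update.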

--
Dave Sexton


Nov 17 '06 #2
Just gave that a shot, with a few variations on the autoincrement
properties - it doesn't seem to allow the constraint to be violated,
and it doesn't seem to be sufficiently intelligent to determine
appropriate values for the insertion.

I think I'll have to give up and use EnforceConstraints = false.
Better to get an error on database update that I can work with than
risk the application getting trapped in some sort of state where it
can't figure out a valid value for the primary key when the database
will overwrite it anyways.

Thanks for the pointers, though. Hopefully we'll be using SQL Server
at some point in the future and then I'll be able to escape from the
imperfect mapping of ADO.Net to Oracle.

Dave Sexton wrote:
Hi Martin,

Commonly, one would just set AutoIncrement, ReadOnly and Unique all to true on
the sequenced column.

I'm not an Oracle user, however, but I know this approach works with
auto-numbered columns in Sql Server. I can't imagine it being different from
Oracle since the issue here is really dealing with how ADO.NET handles that
concept, regardless of the RDBMS.

--
Dave Sexton

"Martin Z" <ma***********@gmail.comwrote in message
news:11**********************@j44g2000cwa.googlegr oups.com...
I'm making a CRUD screen for an Oracle database... but problem is that
the primary key in that table is populated via an autonumber, which in
Oracle is done just with triggers and sequences. Suffice to say that
ADO.Net 1.1 is unaware that this column is, despite appearances,
optional - which is a problem when creating new rows. I was wondering
what the best way to remove the constraint is? I've been bitten too
many times by assuming the direct approach with ADO.Net - there are too
many hidden gotchas. My first instinct is to just set the datacolumn's
AllowDBNull property - but the constraint still blows up on
GetChanges() - which I'm using to check if any real constraints are
being violated.

Any pointers?
Nov 17 '06 #3
Hi Martin,
>> Commonly, one would just set AutoIncrement, ReadOnly and Unique all to true
>> on the sequenced column.
>>
>> I'm not an Oracle user, however, but I know this approach works with
>> auto-numbered columns in Sql Server. I can't imagine it being different
>> from Oracle since the issue here is really dealing with how ADO.NET
>> handles that concept, regardless of the RDBMS.

> Just gave that a shot, with a few variations on the autoincrement
> properties - it doesn't seem to allow the constraint to be violated,
But that should be a good thing ;)
> and it doesn't seem to be sufficiently intelligent to determine
> appropriate values for the insertion.
No, it doesn't insert that value at all, since it represents a sequenced column
whose value is generated by Oracle. Its purpose in the DataTable is to
provide a unique key that serves as a placeholder until the data is inserted
into the database, at which point the DataRow should be updated with the
correct, Oracle-generated, value.
> I think I'll have to give up and use EnforceConstraints = false.
Eeek.
> Better to get an error on database update that I can work with than
> risk the application getting trapped in some sort of state where it
> can't figure out a valid value for the primary key when the database
> will overwrite it anyways.
The database shouldn't be overwriting anything since the value shouldn't be
supplied to the database during an insert, because it has no meaning. The
value should, however, be updated in the DataRow from the data source itself
after the insert.

When you insert the record you should return a result set with the new
sequence value generated by Oracle:

-- T-SQL, but hopefully you'll get the idea:

INSERT INTO table (non_key_1, non_key_2)
VALUES (@non_key_1, @non_key_2);

SELECT key, non_key_1, non_key_2 FROM table
WHERE key = SCOPE_IDENTITY();

The DbDataAdapter is supposed to handle the update of the underlying DataRow
automatically when the InsertCommand's UpdatedRowSource property is set
appropriately (SqlCommand defaults to "Both").

I'm not sure if the OracleDataAdapter provides this functionality, but it's
worth testing out. If Oracle doesn't provide this capability then I'm not
sure what you can do.
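For what it's worth, I'd expect the Oracle-side equivalent to use the sequence directly plus a RETURNING clause in place of SCOPE_IDENTITY() - a sketch only, since I'm not an Oracle user, and every name here (my_table, my_table_seq, the bind variables) is made up:

```sql
-- Bind variables (:non_key_1 etc.) as used by the Oracle providers
INSERT INTO my_table (id, non_key_1, non_key_2)
VALUES (my_table_seq.NEXTVAL, :non_key_1, :non_key_2)
RETURNING id INTO :new_id;
```

If a BEFORE INSERT trigger already supplies the key, you'd omit the id column from the list and keep just the RETURNING clause.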
> Thanks for the pointers, though. Hopefully we'll be using SQL Server
> at some point in the future and then I'll be able to escape from the
> imperfect mapping of ADO.Net to Oracle.
NP, GL ;)

--
Dave Sexton
Nov 17 '06 #4
Unfortunately, at least when mapping to Oracle, the AutoIncrement
properties do exactly what they sound like - rather than keeping the field
empty and ignoring its non-empty/unique constraints, knowing that the
database will take care of it, ADO.Net tries to autogenerate the
new value for the autoincrement column itself.... badly. So, what
happens is I pop open my form to create the new row, I edit the row...
and when I try to add the row to the dataset, I get an error about
violating the unique key constraint. ADO.Net had generated a value of
"2" for the autoincrement column - I have no idea how it came up with
that number, but it sure isn't unique.

There are numerous fields that can be used to tweak the
autoincrementing algorithm.... I suppose I could do something
horrifically hackish like giving it a negative default and incrementing
it in reverse direction, since Oracle will be throwing out that illegal
value anyways. That way I'd avoid column conflicts when inserting
multiple values, and never conflict with anything coming out of the
DB.... but it's just such an ugly solution.
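Something like this is what I have in mind - a sketch with made-up names; the negative placeholders live only inside the DataSet and would be replaced by Oracle's real values after the insert:

```csharp
using System;
using System.Data;

class NegativeSeed
{
    static void Main()
    {
        DataTable table = new DataTable("MyTable"); // hypothetical names
        DataColumn id = table.Columns.Add("ID", typeof(int));
        table.Columns.Add("Name", typeof(string));

        // Placeholder keys count down from -1, so they can never collide
        // with the real (positive) sequence values loaded from Oracle.
        id.AutoIncrement = true;
        id.AutoIncrementSeed = -1;
        id.AutoIncrementStep = -1;
        id.Unique = true;

        // Rows as loaded from the database keep their real keys:
        table.Rows.Add(1, "existing");
        table.Rows.Add(2, "existing");

        // Newly created rows get -1, -2, ... locally:
        DataRow n1 = table.Rows.Add(null, "new");
        DataRow n2 = table.Rows.Add(null, "new");
        Console.WriteLine(n1["ID"]); // -1
        Console.WriteLine(n2["ID"]); // -2
    }
}
```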

Nov 17 '06 #5
Oops, I missed the tail end of your message. Yes, the value should
always be null until the commit, when it should be populated.... but
this behaviour is not happening. If I create an empty DataRow and set
the other (non-autoincrementing-key) fields, it attempts to put a value
in the key field too - either way it violates constraints. I can't
even get to the update, because it complains the moment I add the row to
the table. The behaviour I want is obvious - the autoincrement field
should be stuck at DBNull until the update, at which time the field should
be fixed at the value that Oracle provided. Unfortunately, for some
reason DataSets don't provide that mechanism for me - maybe it only
works like that when using SqlServer - I don't know well enough how
DataSets work under the hood.

Nov 17 '06 #6
Hi Martin,
> Oop, I missed the tail end of your message. Yes, the value should
> always be null until the commit, when it should be populated.... but
> this behaviour is not happening.
No, it shouldn't be null. If the column is Unique, then DBNull for every new
value wouldn't satisfy that constraint. AutoIncrement creates a new number
that is temporary, and preserves the uniqueness of newly added rows until you
can perform the update and retrieve the actual value. The behavior you have
described is exactly what I would expect from an AutoIncrement column.
> If I create an empty datarow and set
> the other (non-autoincrementing-key-fields) it attempts to put a value
> in the key field too - either way it violates constraints. I can't
> even get to update, because it complains the moment I add the row to
> the table.
The values chosen by the DataTable for newly added rows should always be
unique (I've never had a problem), so I'm curious to know what error you're
getting when adding a new row. Care to post the code and the exception?
> The behaviour I want is obvious - the autoincrement field
> should be stuck at DBNull until update, at which time the field should
> be fixed at the value that Oracle provided.
That would violate the Unique constraint, so it's impossible. If you want to
relax the constraints then you don't need the AutoIncrement column either, but
I wouldn't recommend that approach.
> Unfortunately, for some
> reasons datasets don't provide that mechanism for me - maybe it only
> works like that when using SqlServer - I don't know well enough how
> DataSets work under the hood.
This particular issue you're having has nothing to do with the RDBMS. The
DataSet and the database are distinct entities. AutoIncrement and its
behavior belongs solely to the DataSet, which has no relationship to the
database or the data provider being used.

--
Dave Sexton
Nov 17 '06 #7
Ah, I misunderstood. I imagined the behaviour for an autoincrement to
be "must be null for new rows, and apply primary key constraint for
existing ones" or something. Either way, I can't post the code because
I'm a new developer working on a large old company app (ported forward
from VB), trying to reapply their old crudscreen tool to a new table
I've added. So, my tweaks to the tool are here-there-and-everywhere,
unfortunately.

The crux of the matter is that the autoincrement system creates a
number that violates the uniqueness constraint of the table. I have a
table with 3 rows, primary key values 1, 2 and 3. When I create a new
row and attempt to add it to my dataset, it has a value of 2 that the
autoincrement column has autogenerated - which, logically, the system
rejects.

The idea of having to generate a
"unique-within-my-dataset-but-not-necessarily-within-the-actual-table"
value that will be replaced anyways seems very hackish, but I suppose I
have to satisfy the constraint on the dataset.

I'll try the "start at -1 and decrement" approach.

Nov 17 '06 #8
Hi Martin,
> Unfortunately, at least when mapping to Oracle, the AutoIncrement
> properties do exactly what they sound like - rather than keep the field
> empty and ignore its nonempty/unique constraints, knowing that the
> database will take care of it, the ADO.Net tries to autogenerate the
> new value for the autoincrement column itself.... badly.
It's just AutoIncrementing, as you requested :)
> So, what
> happens is I pop open my form to create the new row, I edit the row...
> and when I try to add the row to the dataset, I get an error of
> violating the unique key constraint.
You might want to post that code.
> ADO.Net had generated a value of
> "2" for the autoincrement column - I have no idea how it came up with
> that number, but it sure isn't unique.
You can verify its uniqueness by viewing the DataSet in a watch window. In VS
2005 there is a DataSet Visualizer (little magnifying glass in the watch row
that contains the variable you entered) that makes this easy.

I have to assume it's unique if it was generated by the DataTable.
> There are numerous fields that can be used to tweak the
> autoincrementing algorithm.... I suppose I could do something
> horrifically hackish like giving it a negative default and incrementing
> it in reverse direction, since Oracle will be throwing out that illegal
> value anyways. That way I'd avoid column conflicts when inserting
> multiple values, and never conflict with anything coming out of the
> DB.... but it's just such an ugly solution.
Oracle shouldn't have to throw anything away. Again, the auto-incremented
values in your DataTable shouldn't be sent to Oracle. They are meaningless to
Oracle since it generates its own incremented value.

Once Oracle generates the value, you can return it in a result set like
I showed you in my T-SQL example, and the OracleDataAdapter should update the
DataRow (as long as OracleDataAdapter conforms to the standard ADO.NET
DbDataAdapter logic for updating the row source, you shouldn't have to do
anything at all in code).
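In other words, the wiring would look something like this - a sketch against the abstract ADO.NET base classes, since I don't have the Oracle provider; the SQL text and the :non_key_1 / :id / column names are all hypothetical:

```csharp
using System.Data;
using System.Data.Common;

static class KeyRefresh
{
    // Configure an InsertCommand so the key the database generates
    // flows back into the DataRow after DbDataAdapter.Update().
    public static void Configure(DbDataAdapter adapter, DbConnection conn)
    {
        DbCommand insert = conn.CreateCommand();
        insert.CommandText =
            "INSERT INTO my_table (non_key_1) VALUES (:non_key_1) " +
            "RETURNING id INTO :id";

        DbParameter val = insert.CreateParameter();
        val.ParameterName = ":non_key_1";
        val.SourceColumn = "NON_KEY_1";   // read from the DataRow on insert
        insert.Parameters.Add(val);

        DbParameter key = insert.CreateParameter();
        key.ParameterName = ":id";
        key.Direction = ParameterDirection.Output;
        key.SourceColumn = "ID";          // write the generated key back
        insert.Parameters.Add(key);

        // Tell the adapter to copy output parameters back into the row:
        insert.UpdatedRowSource = UpdateRowSource.OutputParameters;
        adapter.InsertCommand = insert;
    }
}
```

Whether this works end-to-end depends on the Oracle provider honoring UpdatedRowSource, which is exactly the part worth testing.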

--
Dave Sexton
Nov 17 '06 #9
Looking it over, I think I've figured out what's happening. In Oracle,
the autonumber sequence is external to the column - it's just a
sequencer object. In Sql Server, the sequence is tied to the column.
Thus, when ADO.Net fetches the schema for the dataset, it can
automatically fetch the maximum value of the autonumber and properly
configure the AutoIncrementSeed, AutoIncrementStep values based on the
schema.

In Oracle, it can't get that information automatically. I would have
to call a second query to find out the current value of my
sequencer and use that as the AutoIncrementSeed. I could fake it
decently with something like Max(myAutonumberColumnValue) + 1 as the
seed.... either way, though, I'm creating values that will be thrown
away when the data enters the table, so I'm just using negatives. It
works well.

Thanks for all your help. Personally, I'm thinking that the idea of
applying constraints to values that will never actually enter the
database is silly - but you've been extremely helpful in explaining
how things work within the dataset (and its relation to the data
adapters) - and in finding a workable solution to my problem.

Nov 17 '06 #10
Hi Martin,
> In Sql Server, the sequence is tied to the column.
> Thus, when ADO.Net fetches the schema for the dataset, it can
> automatically fetch the maximum value of the autonumber and properly
> configure the AutoIncrementSeed, AutoIncrementStep values based on the
> schema.
No, this has nothing to do with the RDBMS. I'm quite sure :)

I've never had such an issue with an auto-incrementing DataColumn.

As a matter of fact, if there are already records in the DataTable when a new
record is added, the greatest number that is still unique is chosen
automatically, it seems.

I'm curious to see exactly what you are doing, but I understand if you can't
post code that's not yours to post.

Maybe this will help:

DataSet data = new DataSet();
DataTable table = data.Tables.Add();
DataColumn cInt32 = table.Columns.Add("ID", typeof(int));
DataColumn cString = table.Columns.Add("Value", typeof(string));

cInt32.AllowDBNull = false;
cInt32.AutoIncrement = true;
cInt32.Unique = true;
cInt32.ReadOnly = true;

DataRow row1 = table.Rows.Add(null, "1"); // id=0
DataRow row2 = table.Rows.Add(null, "2"); // id=1
DataRow row3 = table.Rows.Add(null, "3"); // id=2

Console.WriteLine(row1["ID"]); // id=0
Console.WriteLine(row2["ID"]); // id=1
Console.WriteLine(row3["ID"]); // id=2

DataRow row4 = table.NewRow(); // id=3
row4["Value"] = "4";

Console.WriteLine(row4["ID"]);

table.Rows.Add(row4); // id=3

table.AcceptChanges();

Console.WriteLine(row1["ID"]); // id=0
Console.WriteLine(row2["ID"]); // id=1
Console.WriteLine(row3["ID"]); // id=2
Console.WriteLine(row4["ID"]); // id=3

Output:

0
1
2
3
0
1
2
3

--
Dave Sexton

Nov 17 '06 #11
Gotta run, but try this, and you'll see that you must *manually* set
the AutoIncrementSeed value, which I assume ADO.Net pulls from the schema
in SQL Server:

DataSet data = new DataSet();
DataTable table = data.Tables.Add();
DataColumn cInt32 = table.Columns.Add("ID", typeof(int));
DataColumn cString = table.Columns.Add("Value", typeof(string));

cInt32.AllowDBNull = false;
cInt32.Unique = true;
cInt32.ReadOnly = true;

DataRow row1 = table.Rows.Add(new Object[] {0, "1"}); // explicit id=0
DataRow row2 = table.Rows.Add(new Object[] {1, "2"}); // explicit id=1
DataRow row3 = table.Rows.Add(new Object[] {2, "3"}); // explicit id=2

cInt32.AutoIncrement = true; // turned on only after rows already exist

Console.WriteLine(row1["ID"]); // id=0
Console.WriteLine(row2["ID"]); // id=1
Console.WriteLine(row3["ID"]); // id=2

DataRow row4 = table.NewRow(); // generated id starts from the default
row4["Value"] = "4";           // AutoIncrementSeed, not from the existing rows

Console.WriteLine(row4["ID"]);

table.Rows.Add(row4);

table.AcceptChanges();
Console.ReadLine();

Dave Sexton wrote:
Hi Martin,
In Sql Server, the sequence is tied to the column.
Thus, when ADO.Net fetches the schema for the dataset, it can
automatically fetch the maximum value of the autonumber and properly
configure the AutoIncrementSeed, AutoIncrementStep values based on the
schema.

No, this has nothing to do with the RDBMS. I'm quite sure :)

I've never had such an issue with an auto-incrementing DataColumn.

As a matter of fact, if there are already records in the DataTable when a new
record is added, the greatest number that is still unique is chosen
automatically, it seems.

I'm curious to see exactly what you are doing, but I understand if you can't
post code that's not yours to post.

Maybe this will help:

DataSet data = new DataSet();
DataTable table = data.Tables.Add();
DataColumn cInt32 = table.Columns.Add("ID", typeof(int));
DataColumn cString = table.Columns.Add("Value", typeof(string));

cInt32.AllowDBNull = false;
cInt32.AutoIncrement = true;
cInt32.Unique = true;
cInt32.ReadOnly = true;

DataRow row1 = table.Rows.Add(null, "1"); // id=0
DataRow row2 = table.Rows.Add(null, "2"); // id=1
DataRow row3 = table.Rows.Add(null, "3"); // id=2

Console.WriteLine(row1["ID"]); // id=0
Console.WriteLine(row2["ID"]); // id=1
Console.WriteLine(row3["ID"]); // id=2

DataRow row4 = table.NewRow(); // id=3
row4["Value"] = "4";

Console.WriteLine(row4["ID"]);

table.Rows.Add(row4); // id=3

table.AcceptChanges();

Console.WriteLine(row1["ID"]); // id=0
Console.WriteLine(row2["ID"]); // id=1
Console.WriteLine(row3["ID"]); // id=2
Console.WriteLine(row4["ID"]); // id=3

Output:

0
1
2
3
0
1
2
3

--
Dave Sexton

"Martin Z" <ma***********@gmail.com> wrote in message
news:11**********************@m7g2000cwm.googlegroups.com...
Looking it over, I think I've figured out what's happening. In Oracle,
the autonumber sequence is external to the column - it's just a
sequencer object. In Sql Server, the sequence is tied to the column.
Thus, when ADO.Net fetches the schema for the dataset, it can
automatically fetch the maximum value of the autonumber and properly
configure the AutoIncrementSeed, AutoIncrementStep values based on the
schema.

In Oracle, it can't get that information automatically. I would have
to call a second query to find out what the current value of my
sequencer is to get the current AutoIncrementSeed. I could do a decent
faking of it with something like Max(myAutonumberColumnValue) + 1 as
the seed and set that as the AutoIncrementSeed.... either way though,
I'm creating values that will be thrown away when the data enters the
table, so I'm just using negatives. It works well.

Thanks for all your help. Personally, I'm thinking that the idea of
applying constraints to values that will never actually enter the
database is silly - but you've been extremely helpful in explaining
how things work within the dataset (and its relation to the data
adapters) - and helped me find a workable solution to my problem.

Martin Z wrote:
Ah, I misunderstood. I imagined the behaviour for an autoincrement to
be "must be null for new rows, and apply primary key constraint for
existing ones" or something. Either way, I can't post the code because
I'm a new developer working on a large old company app (ported forward
from VB), trying to reapply their old crudscreen tool to a new table
I've added. So, my tweaks to the tool are here-there-and-everywhere,
unfortunately.

The crux of the matter is that the autoincrement system creates a
number that violates the uniqueness constraint of the table. I have a
table with 3 rows, primary key values 1, 2 and 3. When I create a new
row and attempt to add it to my dataset, it has a value of 2 that the
autoincrement column has autogenerated - which, logically, the system
rejects.

The idea of having to generate a
"unique-within-my-dataset-but-not-necessarily-within-the-actual-table"
value that will be replaced anyways seems very hackish, but I suppose I
have to satisfy the constraint on the dataset.

I'll try the "start at -1 and decrement" approach.

Dave Sexton wrote:
Hi Martin,

> Oop, I missed the tail end of your message. Yes, the value should
> always be null until the commit, when it should be populated... but
> this behaviour is not happening.

No, it shouldn't be null. If the column is Unique, then DBNull for every
new value wouldn't satisfy that constraint. AutoIncrement creates a new
number that is temporary, and preserves the uniqueness of newly added
rows until you can perform the update and retrieve the actual value. The
behavior you have described is exactly what I would expect from an
AutoIncrement column.

> If I create an empty datarow and set
> the other (non-autoincrementing) key fields, it attempts to put a value
> in the key field too - either way it violates constraints. I can't
> even get to update, because it complains the moment I add the row to
> the table.

The values chosen by the DataTable for newly added rows should always be
unique (I've never had a problem), so I'm curious to know what error
you're getting when adding a new row. Care to post the code and the
exception?

> The behaviour I want is obvious - the autoincrement field
> should be stuck at DBNull until update, at which time the field should
> be fixed at the value that Oracle provided

That would violate the Unique constraint, so it's impossible. If you want
to relax the constraints then you don't need the AutoIncrement column
either, but I wouldn't recommend that approach.

> Unfortunately, for some
> reason datasets don't provide that mechanism for me - maybe it only
> works like that when using SqlServer - I don't know well enough how
> DataSets work under the hood.

This particular issue you're having has nothing to do with the RDBMS. The
DataSet and the database are distinct entities. AutoIncrement and its
behavior belong solely to the DataSet, which has no relationship to the
database or the data provider being used.

--
Dave Sexton
Nov 17 '06 #12
Hi Martin,

I can assure you that there is no automatic "schema-pulling" for
AutoIncrementSeed, even when FillSchema is called on an SqlDataAdapter. It
even states this in the docs:

"FillSchema Method"
http://msdn2.microsoft.com/en-us/library/152bda9x.aspx

Normally, you would just ensure that the DataColumn is completely
initialized before it's used. Using a strongly-typed DataSet will
alleviate this problem for you. Or just set the column's AutoIncrement
property before the DataTable is used.

If not, you have to set the seed manually, as you suggested. You could do
this dynamically if you wanted to:

cInt32.AutoIncrement = true;
// add this line:
cInt32.AutoIncrementSeed = (int) table.Compute("MAX(ID)", null) + 1;

No negative numbers or seed hacking required (and no interaction with the
database whatsoever :)

--
Dave Sexton

Nov 17 '06 #13
Ah. There was my problem - I was setting the AutoIncrement property
_after_ the data was loaded from the table, and assuming that it would
consider the pre-existing values within the dataset. When that failed,
I assumed that the database must provide additional information
to produce a valid AutoIncrementSeed, the same way it uses the database
to set the AllowDBNull property.

Your explanation is obvious in retrospect. As data is added to the
dataset, the autoincrement system simply takes the maximum value and uses
the next one beyond it. Unfortunately, it doesn't do that automatically
when AutoIncrement is set to true on a DataTable that already contains
data - which is my situation. The problem, of course, is that the dataset
comes into the CRUD-screen system from an external source - the system
I'm working on wasn't involved in fetching it.
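For reference, the negative-placeholder workaround mentioned earlier looks something like this (a sketch with hypothetical names, assuming an int primary key; the negative values never reach Oracle, they just keep new rows unique inside the DataSet until the trigger/sequence supplies the real key on update):

```csharp
using System;
using System.Data;

class NegativePlaceholders
{
    static void Main()
    {
        DataTable table = new DataTable("T");
        DataColumn id = table.Columns.Add("ID", typeof(int));
        table.Columns.Add("Value", typeof(string));
        table.PrimaryKey = new DataColumn[] { id };

        // Rows as loaded from Oracle: real keys are positive.
        table.Rows.Add(new object[] { 1, "a" });
        table.Rows.Add(new object[] { 2, "b" });
        table.Rows.Add(new object[] { 3, "c" });

        // Placeholder keys count down from -1, so they can never collide
        // with real keys already in the table. The trigger/sequence
        // supplies the actual key when the row is inserted into Oracle.
        id.AutoIncrementSeed = -1;
        id.AutoIncrementStep = -1;
        id.AutoIncrement = true;
        id.ReadOnly = true;

        DataRow row = table.NewRow();   // gets a negative placeholder key
        row["Value"] = "d";
        table.Rows.Add(row);

        Console.WriteLine(row["ID"]);
    }
}
```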

Frustrating how this stuff makes perfect sense once one realises how it
was intended to be used, as opposed to the bad perversions of it that I
tend to do.

Obviously, I'm new to ADO.Net, so I have to say "OOOOOOOOOHHHH!!!" to
the Compute thing. I was expecting to code that bit manually, which is
why I said screw it and did the negative-number hack. I'm really looking
forward to C# 3.0, when stuff like that will become a first-class part of
the language instead of relying on passing expression strings into
functions.

Thanks again for your help, and putting up with my stubbornness.

Nov 18 '06 #14
