
oracle - mysql comparison

hey all,

I realize that this question might pop up from time to time, but I
haven't seen it in a while and things might have changed, so -

Right now (July 2004) how does mysql stand up in comparison to oracle?
We are seriously considering migrating our multi-processor oracle
system to mysql to save on licensing costs, and would need several
features that mysql may or may not have:

- replication
- archive logging
- interoperability with oracle/database links with oracle
- PL/SQL type language/stored procedures
- roles
- oracle migration scripts/conversion
- embedded java API
- partitioning/tablespace assignment
- import/export tools
- equivalent oracle datatypes
- multi-processor support
- performance (relatively equivalent or better)

In addition, have any oracle DBAs out there gone through the
conversion process between mysql and oracle? If so, what were the
gotchas/catches that you went through? What are features that are
there that you like or feel are missing? I apologize in advance for
the cross-post, but I did want to get as wide a range of opinion as
possible.

Thanks much,

jon
Jul 20 '05
VC

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090378179.508307@yasure...
VC wrote:
As I've demonstrated, only a subset of rows involved in the transaction has to be locked which naturally can be the whole table.
Patently false. You can not lock rows that have not yet been inserted
while the transaction is taking place.


As a matter of fact, you can. In the reservation example:

create table PASSENGERS(FLIGHT_NUMBER ..., INFO ...);
create index P_Idx on PASSENGERS(FLIGHT_NUMBER);
set transaction isolation level serializable

... the statement below will lock not only the existing rows satisfying the
FLIGHT_NUMBER=999 predicate, but also the relevant index (P_Idx) range, thus
preventing potential *inserts*:

select count(*) into l_cnt from PASSENGERS where FLIGHT_NUMBER=999;

This mechanism is called key-range locking. It makes it possible to avoid
full table locks and is implemented in all the major locking databases.
MySQL, by the way, has it too.
And you have no means of keeping
them out of your result set except a full table lock.


See above.

VC
Jul 20 '05 #101
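For readers coming at this from the MySQL side of the original question, here is a minimal sketch of the same key-range behaviour on InnoDB (the PASSENGERS table and P_Idx index are the ones from the example above; the table is assumed to use the InnoDB engine and the column types are elided in the example, so the literal values below are placeholders, and the exact blocking behaviour depends on engine and version, so treat this as an illustration rather than a guarantee):

===
-- Session 1: under SERIALIZABLE, InnoDB treats the plain read as a
-- locking read and takes next-key (range) locks on the scanned
-- FLIGHT_NUMBER=999 index range.
set session transaction isolation level serializable;
start transaction;
select count(*) from PASSENGERS where FLIGHT_NUMBER = 999;

-- Session 2: an insert into the locked range is expected to block ...
insert into PASSENGERS (FLIGHT_NUMBER, INFO) values (999, 'new booking');

-- Session 1: ... until this commit (or a rollback) releases the range locks.
commit;
===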

"Dan" <gu***********@verizon.com> schrieb im Newsbeitrag news:Oy*******************@nwrddc03.gnilink.net...
There is a critical difference between the notion of 'serialized' and the
term 'serializable'. Perhaps you could look it up? If not, Phil Bernstein,
the guy who literally wrote the book on the subject, is a real professor at
the University of Washington. You might be able to catch a class...

Looks like this thread was worth catching up...
Which book do you mean,
http://www.amazon.com/exec/obidos/tg...books&n=507846 ?

And what about
http://www.amazon.com/exec/obidos/tg...glance&s=books
and
http://www.amazon.com/exec/obidos/tg...books&n=507846
Any opinion on those? I have no problem with the price, but I wouldn't like to waste money either. I work as a DBA and developer.

Lots of greetings and thanks!
Volker

Jul 20 '05 #102
Dan

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090378179.508307@yasure...
VC wrote:
As I've already mentioned several times, no one disputes the fact that in
certain cases Oracle provides higher concurrency due to MVCC.
Agreed.
I also said that there are several solutions to the reporting problem in
locking databases, such as a replicated or stand-by database.


Many believe this but it is patently false. What do you do about
transactions that take place while you are replicating the database?

You either lock the table while replicating or the replication is also
not consistent to a point-in-time. You can not have it both ways.

There is
another solution, namely triple mirroring of the OLTP database. SAN vendor
hardware can "split off" the third mirrored drive set, creating almost
instantaneously a clone of the original database (e.g. EMC BCV) at a given
point in time. It's interesting to note that the same technique is
widely used for Oracle databases as well in order to off-load the main
instance. The clone is used both for reporting and backups.


Almost instantly means ALMOST consistent to a point-in-time. But now you
are talking about data consistency by hardware intervention, which is
just as valid if we were talking about 3x5 cards and a photocopier.
Serialize to your heart's content ... you aren't going to do it without
a full table lock ...

As I've demonstrated, only a subset of rows involved in the transaction has to be locked which naturally can be the whole table.


Patently false. You can not lock rows that have not yet been inserted
while the transaction is taking place. And you have no means of keeping
them out of your result set except a full table lock.


Absolutely not true.

There is a critical difference between the notion of 'serialized' and the
term 'serializable'. Perhaps you could look it up? If not, Phil Bernstein,
the guy who literally wrote the book on the subject, is a real professor at
the University of Washington. You might be able to catch a class...

Once you understand the difference in terminology, then perhaps you'll
understand why you don't need a full table lock to ensure a serializable
schedule even when a concurrent transaction inserts a row (again! in
contrast to serialized).
Daniel Morgan

- Dan
Jul 20 '05 #103
Dan

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090378283.651396@yasure...
Volker Hetzer wrote:
"VC" <bo*******@hotmail.com> schrieb im Newsbeitrag news:RfdLc.109372$WX.92600@attbi_s51...
Here's another textbook example for you:
==
There are two linked accounts (id=1 and id=2) in a bank. A transaction
might look as follows:

== withdraw from 1
select amount into x from accounts where id=1
select amount into y from accounts where id=2
if x+y >= withdrawal then update accounts set amount=amount-withdrawal
where id=1
commit
=====

Any commercial locking scheduler will handle the scenario correctly. Oracle won't.
Sorry for butting in but I'm just trying to learn here, so what would go wrong? And what about "select for update" in oracle? It's supposed to lock the rows it hits.
Lots of Greetings!
Volker


You are correct and VC, as I earlier stated to him, is demonstrating far
less than a robust understanding of transaction processing: Lots of
smoke but no fire.


VC already explained it. Serializable schedules are always preferable to
serialized schedules. The 'FOR UPDATE' clause is a hack that only results
in a *serialized* sequence of transactions.

Again, you need to understand the difference between serialized and
serializable, and you need to either take a basic database course or read a
textbook. All I see from you is a bunch of hand waving. If someone asks a
question in the middle of some private war of yours, you latch on like the
guy is backing you up. VC gave more than ample evidence and he was very
patient in explaining it to you, but you chose to ignore him. Why? Can you
address issues in a more apolitical manner? Do you always have to have the
last word, even if it makes you look like a moron?

The emperor really has no clothes.
Daniel Morgan

- Dan
Jul 20 '05 #104
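To make the question Volker raises concrete, here is a minimal sketch of the 'select for update' variant of the quoted withdrawal transaction, written in the same PL/SQL-style pseudocode as the example (the accounts table, x, y, and withdrawal all come from that example; whether the unlocked version actually misbehaves depends on the isolation level and on concurrent activity, so this only illustrates the workaround being debated):

===
-- Lock both linked rows before checking the combined balance, so that a
-- concurrent withdrawal against account 2 cannot slip in between the
-- reads and the update (the write-skew scenario).
select amount into x from accounts where id=1 for update
select amount into y from accounts where id=2 for update
if x+y >= withdrawal then update accounts set amount=amount-withdrawal
where id=1
commit
===

The cost, as discussed later in the thread, is that every session touching the same pair of rows is now serialized on those row locks.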
Dan

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090378462.109262@yasure...
VC wrote:
"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.com...
"VC" <bo*******@hotmail.com> wrote in message
news:<p_XKc.135317$XM6.125873@attbi_s53>...
Under DB2/MSSQL, a transaction will lock only a subset of rows (not the whole table) for a given flight thus ensuring consistent results
I've said in another post that it's a crappy example. A little bit more on that.

You are attacking a strawman -- the design is not what's being discussed
here, concurrency control/consistency is...
VC


The problem here VC, as I see it, is that you need to go take a decent
class on database architecture. You meet with ease the criteria to wear
the button that says "A little knowledge is a dangerous thing."


Where was VC wrong? Where is the refutation of his argument? Name calling
only makes you look bad, and my sons did it when they were five years old
and were frustrated that they couldn't get their way. VC was absolutely correct and was perfectly behaved in his discussion with
you. I wish I could stomach the B.S. as well as he did. You should *not*
be teaching anything if you claim to know general principles of good
database design or the relational model and drag it into your discussions of
low level Oracle instance configuration settings.
Daniel Morgan


- Dan
Jul 20 '05 #105
Dan wrote:
"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090378179.508307@yasure...
VC wrote:

As I've already mentioned several times, no one disputes the fact that
in
certain cases Oracle provides higher concurrency due to MVCC.


Agreed.

I also said that there are several solutions to the reporting problem in
locking databases, such as a replicated or stand-by database.


Many believe this but it is patently false. What do you do about
transactions that take place while you are replicating the database?

You either lock the table while replicating or the replication is also
not consistent to a point-in-time. You can not have it both ways.

There is
another solution, namely triple mirroring of the OLTP database. SAN
vendor
hardware can "split off" the third mirrored drive set creating almost
instantaneously a clone of the original database (e.g. EMC BCV) at a
given
point in time. It's interesting to notice, that the same technique is
widely used for Oracle databases as well in order to off-load the main
instance. The clone is used both for reporting and backups.


Almost instantly means ALMOST consistent to a point-in-time. But now you
are talking about data consistency by hardware intervention which is
just as valid if we were talking about 3x5 cards and a photocopier.

Serialize to your heart's content ... you aren't going to do it without
a full table lock ...
As I've demonstrated, only a subset of rows involved in the transaction
has
to be locked which naturally can be the whole table.


Patently false. You can not lock rows that have not yet been inserted
while the transaction is taking place. And you have no means of keeping
them out of your result set except a full table lock.

Absolutely not true.

There is a critical difference between the notion of 'serialized' and the
term 'serializable'. Perhaps you could look it up? If not, Phil Bernstein,
the guy who literally wrote the book on the subject, is a real professor at
the University of Washington. You might be able to catch a class...

Once you understand the difference in terminology, then perhaps you'll
understand why you don't need a full table lock to ensure a serializable
schedule even when a concurrent transaction inserts a row (again! in
contrast to serialized).

Daniel Morgan


- Dan


No doubt you thought you understood what I intended. I did not once
mention serialized or serializable, did I?

Daniel Morgan

Jul 20 '05 #106
Dan wrote:
Where was VC wrong? - Dan


When you respond to what I wrote rather than to what you wrote that I wrote, I
will gladly respond.

From where I am reading ... you came into a conversation in the middle
and are not tracking on the point I have been trying to make which
relates to point-in-time accuracy. I have NOT once made reference to
any type of cereal: Not Corn Flakes, Not Cheerios, Not Oat Meal.
Hope you now have a sense of how I feel reading what you wrote. Hope
you are thinking "I wrote serial not cereal."

Daniel Morgan

Jul 20 '05 #107
"VC" <bo*******@hotmail.com> wrote in message news:<7thLc.147195$Oq2.90613@attbi_s52>...
"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.com...
"VC" <bo*******@hotmail.com> wrote in message news:<p_XKc.135317$XM6.125873@attbi_s53>...
Under DB2/MSSQL, a transaction will lock only a subset of rows (not the
whole table) for a given flight thus ensuring consistent results


I've said in another post that it's a crappy example. A little bit more on
that.


You are attacking a strawman -- the design is not what's being discussed
here, concurrency control/consistency is...

Here's another simple problem:

===
Let's assume we have two tables, P ( parent) and C (child):

create table P(pid int primary key, ...);
create table C(..., pid references P, ...);

We'd like to insert some child rows but, at the same time, avoid entering
data in vain if someone happens to delete a parent row during our data
entry.

In a locking scheduler, this sequence would run OK:

select count(*) into l_cnt from P where pid=1;
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

A variation of this scenario might be that, for whatever reason, you cannot
use referential integrity and cannot delete the parent row. You'd rather
mark the parent row inactive and prevent insertions if the parent is
inactive. Our transaction would look:

select count(*) into l_cnt from P where pid=1 and status='ACTIVE';
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

===

Needless to say that in the second case Oracle won't ensure the correct
outcome in any of its isolation levels if a concurrent transaction happens
to change the parent row status to 'inactive'. In the first case, a
non-serializable history will be avoided thanks to RI, however, the
entered data will be lost. The above example can be applied to any similar
master-detail model.

An Oracle solution to the problem would be to use the "for update" clause
instead of just select. This will result in *worse* concurrency under
Oracle than under a locking scheduler since all inserts will in fact be
*serial*.


Don't see any difference between "locking scheduler" and for update here.
In both cases parent record is locked for insert/update of child records.
VC

Jul 20 '05 #108
such as ?
Jul 20 '05 #109

"VC" <bo*******@hotmail.com> wrote in message
news:SgrLc.154260$XM6.64529@attbi_s53...

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090378179.508307@yasure...
VC wrote:
As I've demonstrated, only a subset of rows involved in the
transaction
has to be locked which naturally can be the whole table.
Patently false. You can not lock rows that have not yet been inserted
while the transaction is taking place.


As a matter of fact, you can. In the reservation example:

create table PASSENGERS(FLIGHT_NUMBER ..., INFO ...);
create index P_Idx on PASSENGERS(FLIGHT_NUMBER);
set transaction isolation level serializable

... the statement below will lock not only the existing rows satisfying

the FLIGHT_NUMBER=999 predicate, but also the relevant index (P_Idx) range thus preventing potential *inserts*:

select count(*) into l_cnt from PASSENGERS where FLIGHT_NUMBER=999; --

This mechanism is called key-range locking. It allows to avoid full table
locks and is implemented in all the major locking databases. MYSQL, by the way, has it too.
And you have no means of keeping
them out of your result set except a full table lock.


See above.

VC

And if it is a page-locking database then it will lock whole pages of
indexes (as the index is traversed), which will effectively lock the entire
table (or about 95% of it).
Jim
Jul 20 '05 #110
VC

"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.com...
"VC" <bo*******@hotmail.com> wrote in message news:<7thLc.147195$Oq2.90613@attbi_s52>...
....
Here's another simple problem:

===
Let's assume we have two tables, P ( parent) and C (child):

create table P(pid int primary key, ...);
create table C(..., pid references P, ...);

We'd like to insert some child rows but, at the same time, avoid entering data in vain if someone happens to delete a parent row during our data
entry.

In a locking scheduler, this sequence would run OK:

select count(*) into l_cnt from P where pid=1;
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

A variation of this scenario might be that, for whatever reason, you cannot use referential integrity and cannot delete the parent row. You'd rather mark the parent row inactive and prevent insertions if the parent is
inactive. Our transaction would look:

select count(*) into l_cnt from P where pid=1 and status='ACTIVE';
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

===

Needless to say that in the second case Oracle won't ensure the correct
outcome in any of its isolation levels if a concurrent transaction happens to change the parent row status to 'inactive'. In the first case, a
non-serializable history will be avoided thanks to RI, however, the
entered data will be lost. The above example can be applied to any similar master-detail model.

An Oracle solution to the problem would be to use the "for update" clause instead of just select. This will result in *worse* concurrency under
Oracle than under a locking scheduler since all inserts will in fact be
*serial*.


Don't see any difference between "locking scheduler" and for update here.


Well, one would imagine the difference is quite obvious -- the locking
database will run the transaction happily *in parallel* whilst Oracle will
do the same *serially*.

In both cases parent record is locked for insert/update of child records.


That's true. The only difference is that Oracle relies on exclusive locks,
thus preventing parallel 'select for update' statements, while the locking
database uses shared locks that do not inhibit concurrent selects of the parent row.

VC
Jul 20 '05 #112
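A minimal sketch of the two variants VC is contrasting, using the P and C tables from his example (the shared-lock behaviour described for the first variant assumes a serializable read on a locking scheduler such as DB2 or SQL Server; the column list for C is elided in the original, so the inserts below stay schematic):

===
-- Variant 1: locking scheduler (e.g. DB2/MSSQL) under SERIALIZABLE.
-- The plain read takes a shared lock on the parent row, so concurrent
-- sessions can run the same check in parallel, while a delete or an
-- update of the parent row blocks until commit.
select count(*) into l_cnt from P where pid=1 and status='ACTIVE';
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

-- Variant 2: the Oracle workaround discussed above.  FOR UPDATE takes an
-- exclusive lock on the parent row, which is safe but serializes every
-- session that checks the same parent.
select pid from P where pid=1 and status='ACTIVE' for update;
insert into C values(..., 1, ...);
commit;
===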

"Dan" <gu***********@verizon.com> schrieb im Newsbeitrag news:Oy*******************@nwrddc03.gnilink.net...
There is a critical difference between the notion of 'serialized' and the
term 'serializable'. Perhaps you could look it up? If not, Phil Bernstein,
the guy that literally wrote the book on the subject is a real professor at
the University of Washington. You might be able to catch a class...

Looks like this thread was worth catching up...
Which book do you mean,
http://www.amazon.com/exec/obidos/tg...books&n=507846 ?

And what about
http://www.amazon.com/exec/obidos/tg...glance&s=books
and
http://www.amazon.com/exec/obidos/tg...books&n=507846
.. Any opinion on those? I have no problem with the price but I wouldn't like to waste money either. I work as DBA and developer.

Lots of greetings and thanks!
Volker

Jul 20 '05 #113
"VC" <bo*******@hotmail.com> wrote in message
news:7thLc.147195$Oq2.90613@attbi_s52...

"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.com...
"VC" <bo*******@hotmail.com> wrote in message news:<p_XKc.135317$XM6.125873@attbi_s53>...
Under DB2/MSSQL, a transaction will lock only a subset of rows (not the whole table) for a given flight thus ensuring consistent results


I've said in another post that it's a crappy example. A little bit more on that.


You are attacking a strawman -- the design is not what's being discussed
here, concurrency control/consistency is...


Two sides of the same coin surely? Decide how your platform behaves and code
for its strengths (and weaknesses). Deciding that all database systems
should be treated as if they had the feature set of MS Access would
surely be an error.
--
Niall Litchfield
Oracle DBA
http://www.niall.litchfield.dial.pipex.com





Here's another simple problem:

===
Let's assume we have two tables, P ( parent) and C (child):

create table P(pid int primary key, ...);
create table C(..., pid references P, ...);

We'd like to insert some child rows but, at the same time, avoid entering
data in vain if someone happens to delete a parent row during our data
entry.

In a locking scheduler, this sequence would run OK:

select count(*) into l_cnt from P where pid=1;
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

A variation of this scenario might be that, for whatever reason, you cannot use referential integrity and cannot delete the parent row. You'd rather
mark the parent row inactive and prevent insertions if the parent is
inactive. Our transaction would look:

select count(*) into l_cnt from P where pid=1 and status='ACTIVE';
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

===

Needless to say that in the second case Oracle won't ensure the correct
outcome in any of its isolation levels if a concurrent transaction happens to change the parent row status to 'inactive'. In the first case, a
non-serializable history will be avoided thanks to RI, however, the
entered data will be lost. The above example can be applied to any similar
master-detail model.

An Oracle solution to the problem would be to use the "for update" clause
instead of just select. This will result in *worse* concurrency under
Oracle than under a locking scheduler since all inserts will in fact be
*serial*.

VC

Jul 20 '05 #114
VC

"Jim Kennedy" <ke****************************@attbi.net> wrote in message
news:3szLc.157906$XM6.2871@attbi_s53...

"VC" <bo*******@hotmail.com> wrote in message
news:SgrLc.154260$XM6.64529@attbi_s53...

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090378179.508307@yasure...
VC wrote:
> As I've demonstrated, only a subset of rows involved in the transaction
has
> to be locked which naturally can be the whole table.

Patently false. You can not lock rows that have not yet been inserted
while the transaction is taking place.
As a matter of fact, you can. In the reservation example:

create table PASSENGERS(FLIGHT_NUMBER ..., INFO ...);
create index P_Idx on PASSENGERS(FLIGHT_NUMBER);
set transaction isolation level serializable

... the statement below will lock not only the existing rows satisfying

the
FLIGHT_NUMBER=999 predicate, but also the relevant index (P_Idx) range

thus
preventiing potential *inserts*:

select count(*) into l_cnt from PASSENGERS where FLIGHT_NUMBER=999; --

This mechanism is called key-range locking. It allows to avoid full

table locks and is implemented in all the major locking databases. MYSQL, by

the
way, has it too.
And you have no means of keeping
them out of your result set except a full table lock.


See above.

VC

And if it is a page locking database then it will lock whole pages of
indexes (as the index is traversed) witch will effectively lock the entire
table (or about 95% of it)
Jim


That was so ten years ago, but it is no longer true today:

=======
MSSQL:

1> create table flight(number int, seat int)
2> go
1> insert into flight values(1,1)
2> go
1> insert into flight values(1,2)
2> go
1> insert into flight values(1,3)
2> go
1> insert into flight values(2,1)
1> set implicit_transactions on
2> go
1> create index f_idx on flight(number)
2> go
1>

We have 3 rows for Flight 1 and 1 row for Flight 2.

Then:

Session 1:
----------
1> set transaction isolation level serializable
2> go
1> set transaction isolation level serializable
2> go
1> select count(*) from flight where number=1
2> go

-----------
3
1>
Session 2:
----------
1> set transaction isolation level serializable
2> go
1> set transaction isolation level serializable
2> go

We can read all the rows:

1> select * from flight
2> go
number seat
----------- -----------
1 1
1 2
1 3
2 1
We can add seats to Flight 2:

1> insert into flight values(2,2)
2> go

... we can see the old and new rows:

(1 rows affected)
1> select * from flight
2> go
number seat
----------- -----------
1 1
1 2
1 3
2 1
2 2

(5 rows affected)

... but we cannot add any new seats to Flight 1 thanks to key-range locking
caused by 'select count(*) from flight where number=1'.

1> insert into flight values(1,4)
2> go
** blocked **

Apparently, no page/table level locking happens here ...

VC

Jul 20 '05 #115
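For contrast, here is a sketch of how the same two sessions typically play out under Oracle's snapshot-based SERIALIZABLE level (the table and data are the ones from the MSSQL demo, with the column renamed to flight_number because NUMBER is a reserved word in Oracle; prompts and output are omitted, and the result shown is what snapshot behaviour would suggest rather than a captured run):

===
-- Session 1 (Oracle):
set transaction isolation level serializable;
select count(*) from flight where flight_number = 1;   -- returns 3

-- Session 2 (Oracle): the insert is not blocked by session 1's read
insert into flight values (1, 4);
commit;

-- Session 1: still reads its own snapshot, and commits without error
-- because it only performed reads; the insert was never prevented.
select count(*) from flight where flight_number = 1;   -- still returns 3
commit;
===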
VC
Hi,

"Niall Litchfield" <ni**************@dial.pipex.com> wrote in message
news:40**********************@news-text.dial.pipex.com...
....
You are attacking a strawman -- the design is not what's being discussed
here, concurrency control/consistency is...
Two sides of the same coin surely? Decide how your platform behaves and

code for its strengths (and weaknesses). Deciding that all database
systems should be treated as if they had the feature set of MS Access would surely be an error.
But of course, I do not have any argument with what you've just said. My
first response in this thread was intended to show that the claim that
Oracle's *consistency* is somehow superior to the alternative approaches is
not true at all. In some cases, even concurrency can be worse than that of
a locking database.
Regards.

VC


--
Niall Litchfield
Oracle DBA
http://www.niall.litchfield.dial.pipex.com





Here's another simple problem:

===
Let's assume we have two tables, P ( parent) and C (child):

create table P(pid int primary key, ...);
create table C(..., pid references P, ...);

We'd like to insert some child rows but, at the same time, avoid entering data in vain if someone happens to delete a parent row during our data
entry.

In a locking scheduler, this sequence would run OK:

select count(*) into l_cnt from P where pid=1;
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

A variation of this scenario might be that, for whatever reason, you

cannot
use referential integrity and cannot delete the parent row. You'd rather mark the parent row inactive and prevent insertions if the parent is
inactive. Our transaction would look:

select count(*) into l_cnt from P where pid=1 and status='ACTIVE';
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

===

Needless to say that in the second case Oracle won't ensure the correct
outcome in any of its isolation levels if a concurrent transaction

happens
to change the parent row status to 'inactive'. In the first case, a
non-serializable history will be avoided thanks to RI, however, the
entered data will be lost. The above example can be applied to any similar master-detail model.

An Oracle solution to the problem would be to use the "for update" clause instead of just select. This will result in *worse* concurrency under
Oracle than under a locking scheduler since all inserts will in fact be
*serial*.

VC


Jul 20 '05 #116
VC wrote:
That's true. The only difference is that Oracle relies on exclusive locks
thus preventing parallel 'select for updates' and the locking database uses
shared locks that do not inhibit concurrent selects of the parent row.

VC


Since when does your ignorance translate into Oracle's weakness?

You still haven't addressed how you could handle the bank transaction
question I posted many days ago.

Daniel Morgan

Jul 20 '05 #117
VC

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090474999.284806@yasure...
VC wrote:
That's true. The only difference is that Oracle relies on exclusive locks thus preventing parallel 'select for updates' and the locking database uses shared locks that do not inhibit concurrent selects of the parent row.

VC
Since when does your ignorance translate into Oracle's weakness?


I'll skip the above utterance as a non sequitur.

You still haven't addressed how you could handle the bank transaction
question I posted many days ago.
I've already addressed the reporting issue more than once.

Daniel Morgan

Jul 20 '05 #118
"VC" <bo*******@hotmail.com> wrote in message news:<7thLc.147195$Oq2.90613@attbi_s52>...
"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.c om...
"VC" <bo*******@hotmail.com> wrote in message news:<p_XKc.135317$XM6.125873@attbi_s53>...
Under DB2/MSSQL, a transaction will lock only a subset of rows (not the
whole table) for a given flight thus ensuring consistent results


I've said in another post that it's a crappy example. A little bit more on
that.


You are attacking a strawman -- the design is not what's being discussed
here, concurrency control/consistency is...

Here's another simple problem:

===
Let's assume we have two tables, P ( parent) and C (child):

create table P(pid int primary key, ...);
create table C(..., pid references P, ...);

We'd like to insert some child rows but, at the same time, avoid entering
data in vain if someone happens to delete a parent row during our data
entry.

In a locking scheduler, this sequence would run OK:

select count(*) into l_cnt from P where pid=1;
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

A variation of this scenario might be that, for whatever reason, you cannot
use referential integrity and cannot delete the parent row. You'd rather
mark the parent row inactive and prevent insertions if the parent is
inactive. Our transaction would look:

select count(*) into l_cnt from P where pid=1 and status='ACTIVE';
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

===

Needless to say that in the second case Oracle won't ensure the correct
outcome in any of its isolation levels if a concurrent transaction happens
to change the parent row status to 'inactive'. In the first case, a
non-serializable history will be avoided thanks to RI, however, the
entered data will be lost. The above example can be applied to any similar
master-detail model.

An Oracle solution to the problem would be to use the "for update" clause
instead of just select. This will result in *worse* concurrency under
Oracle than under a locking scheduler since all inserts will in fact be
*serial*.


Don't see any difference between "locking scheduler" and for update here.
In both cases parent record is locked for insert/update of child records.
VC

Jul 20 '05 #119

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090387704.737703@yasure...
Dan wrote:
Where was VC wrong?
- Dan


When you respond to what I wrote rather than what you wrote I wrote I
will gladly respond.

From where I am reading ... you came into a conversation in the middle
and are not tracking on the point I have been trying to make which
relates to point-in-time accuracy.


I think you are missing the point. There is a formal definition and
criteria for "point-in-time" accuracy in the face of concurrent manipulation
of data. Following from this there is a definition and a criteria for
"point-in-time" accuracy that allows for interleaving of transactions,
called serializability. Saying "I use multi-versioned read consistency" and
a snapshot in time is not necessarily it.

If one gets the definition right, then it becomes much easier for everyone
to talk at the same level of understanding in terms of "point-in-time"
consistency and perhaps even reach consensus on understanding the
trade-offs. Your previous posts have given me the impression that you
confuse the terms 'serialized' and 'serializable'. If this is indeed the
case, then the argument will have a tendency to become circular in related
conversations such as concurrency control mechanisms and implementations
(locking, timestamp algorithms, optimistic vs. pessimistic, 2PL, etc.),
which it has to a degree, because you are using a premise as a basis of
definition that is fundamentally flawed, or at the very least, different.
The Oracle Concepts document does have some misleading wording and has a
very narrow scope, and if that is what you use as your Bible, then it is not
surprising.
any type of cereal: Not Corn Flakes, Not Cheerios, Not Oat Meal.
Hope you now have a sense of how I feel reading what you wrote. Hope
you are thinking "I wrote serial not cereal."
Whatever this means...great.
Daniel Morgan

Jul 20 '05 #120
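As a concrete instance of the distinction being drawn here, a sketch of an interleaving that a pure snapshot scheme permits but that has no serial equivalent (this is the classic write-skew shape, phrased in terms of the earlier flight example; the four-seat limit and the session labels are made up for illustration):

===
-- Assumed business rule: a flight may hold at most 4 seats, and each
-- transaction checks the count before inserting.
-- T1: select count(*) from flight where flight_number = 1;  -- sees 3, 3 < 4
-- T2: select count(*) from flight where flight_number = 1;  -- sees 3, 3 < 4
-- T1: insert into flight values (1, 4); commit;             -- 4 seats now
-- T2: insert into flight values (1, 5); commit;             -- 5 seats now
--
-- Run serially, the second transaction would have seen 4 and declined to
-- insert, so no serial order produces 5 seats: the history above is not
-- serializable.  A key-range or predicate lock on flight_number = 1 would
-- have blocked the second insert instead of allowing it.
===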

"VC" <bo*******@hotmail.com> wrote in message
news:SgrLc.154260$XM6.64529@attbi_s53...

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090378179.508307@yasure...
VC wrote:
As I've demonstrated, only a subset of rows involved in the
transaction
has to be locked which naturally can be the whole table.
Patently false. You can not lock rows that have not yet been inserted
while the transaction is taking place.


As a matter of fact, you can. In the reservation example:

create table PASSENGERS(FLIGHT_NUMBER ..., INFO ...);
create index P_Idx on PASSENGERS(FLIGHT_NUMBER);
set transaction isolation level serializable

... the statement below will lock not only the existing rows satisfying

the FLIGHT_NUMBER=999 predicate, but also the relevant index (P_Idx) range thus preventiing potential *inserts*:

select count(*) into l_cnt from PASSENGERS where FLIGHT_NUMBER=999; --

This mechanism is called key-range locking. It allows to avoid full table
locks and is implemented in all the major locking databases. MYSQL, by the way, has it too.
And you have no means of keeping
them out of your result set except a full table lock.


See above.

VC

And if it is a page locking database then it will lock whole pages of
indexes (as the index is traversed) witch will effectively lock the entire
table (or about 95% of it)
Jim
Jul 20 '05 #121
VC

"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.c om...
"VC" <bo*******@hotmail.com> wrote in message news:<7thLc.147195$Oq2.90613@attbi_s52>...
....
Here's another simple problem:

===
Let's assume we have two tables, P ( parent) and C (child):

create table P(pid int primary key, ...);
create table C(..., pid references P, ...);

We'd like to insert some child rows but, at the same time, avoid entering data in vain if someone happens to delete a parent row during our data
entry.

In a locking scheduler, this sequence would run OK:

select count(*) into l_cnt from P where pid=1;
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

A variation of this scenario might be that, for whatever reason, you cannot use referential integrity and cannot delete the parent row. You'd rather mark the parent row inactive and prevent insertions if the parent is
inactive. Our transaction would look:

select count(*) into l_cnt from P where pid=1 and status='ACTIVE';
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

===

Needless to say that in the second case Oracle won't ensure the correct
outcome in any of its isolation levels if a concurrent transaction happens to change the parent row status to 'inactive'. In the first case, a
non-serializable history will be avoided thanks to RI, however, the
entered data will be lost. The above example can be applied to any similar master-detail model.

An Oracle solution to the problem would be to use the "for update" clause instead of just select. This will result in *worse* concurrency under
Oracle than under a locking scheduler since all inserts will in fact be
*serial*.


Don't see any difference between "locking scheduler" and for update here.


Well, one would imagine the difference is quite obvious -- the locking
database will run the transaction happily *in parallel* whilst Oracle will
do the same *serially*.

In both cases parent record is locked for insert/update of child records.


That's true. The only difference is that Oracle relies on exclusive locks
thus preventing parallel 'select for updates' and the locking database uses
shared locks that do not inhibit concurrent selects of the parent row.

VC
Jul 20 '05 #122
"VC" <bo*******@hotmail.com> wrote in message
news:7thLc.147195$Oq2.90613@attbi_s52...

"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.c om...
"VC" <bo*******@hotmail.com> wrote in message news:<p_XKc.135317$XM6.125873@attbi_s53>...
Under DB2/MSSQL, a transaction will lock only a subset of rows (not the whole table) for a given flight thus ensuring consistent results


I've said in another post that it's a crappy example. A little bit more on that.


You are attacking a strawman -- the design is not what's being discussed
here, concurrency control/consistency is...


Two sides of the same coin surely? Decide how your platform behaves and code
for its strengths (and weaknesses). Coding deciding that all database
systems should be treated as if they had the feature set of ms access would
surely be an error.
--
Niall Litchfield
Oracle DBA
http://www.niall.litchfield.dial.pipex.com





Here's another simple problem:

===
Let's assume we have two tables, P ( parent) and C (child):

create table P(pid int primary key, ...);
create table C(..., pid references P, ...);

We'd like to insert some child rows but, at the same time, avoid entering
data in vain if someone happens to delete a parent row during our data
entry.

In a locking scheduler, this sequence would run OK:

select count(*) into l_cnt from P where pid=1;
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

A variation of this scenario might be that, for whatever reason, you cannot use referential integrity and cannot delete the parent row. You'd rather
mark the parent row inactive and prevent insertions if the parent is
inactive. Our transaction would look:

select count(*) into l_cnt from P where pid=1 and status='ACTIVE';
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

===

Needless to say that in the second case Oracle won't ensure the correct
outcome in any of its isolation levels if a concurrent transaction happens to change the parent row status to 'inactive'. In the first case, a
non-serializable history will be avoided thanks to RI, however, the
entered data will be lost. The above example can be applied to any similar
master-detail model.

An Oracle solution to the problem would be to use the "for update" clause
instead of just select. This will result in *worse* concurrency under
Oracle than under a locking scheduler since all inserts will in fact be
*serial*.

VC

Jul 20 '05 #123
VC

"Jim Kennedy" <ke****************************@attbi.net> wrote in message
news:3szLc.157906$XM6.2871@attbi_s53...

"VC" <bo*******@hotmail.com> wrote in message
news:SgrLc.154260$XM6.64529@attbi_s53...

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090378179.508307@yasure...
VC wrote:
> As I've demonstrated, only a subset of rows involved in the transaction
has
> to be locked which naturally can be the whole table.

Patently false. You can not lock rows that have not yet been inserted
while the transaction is taking place.
As a matter of fact, you can. In the reservation example:

create table PASSENGERS(FLIGHT_NUMBER ..., INFO ...);
create index P_Idx on PASSENGERS(FLIGHT_NUMBER);
set transaction isolation level serializable

... the statement below will lock not only the existing rows satisfying

the
FLIGHT_NUMBER=999 predicate, but also the relevant index (P_Idx) range

thus
preventiing potential *inserts*:

select count(*) into l_cnt from PASSENGERS where FLIGHT_NUMBER=999; --

This mechanism is called key-range locking. It allows to avoid full

table locks and is implemented in all the major locking databases. MYSQL, by

the
way, has it too.
And you have no means of keeping
them out of your result set except a full table lock.


See above.

VC

And if it is a page locking database then it will lock whole pages of
indexes (as the index is traversed) witch will effectively lock the entire
table (or about 95% of it)
Jim


That was so ten years ago, but is no true longer today:

=======
MSSQL:

1> create table flight(number int, seat int)
2> go
1> insert into flight(1,1)
2> go
1> insert into flight values(1,1)
2> go
1> insert into flight values(1,2)
2> go
1> insert into flight values(1,3)
2> go
1> insert into flight values(2,1)
1> set implicit_transactions on
2> go
1> create index f_idx on flight(number)
2> go
1>

We have 3 rows for Flight 1 and 1 row for Flight 2.

Then:

Session 1:
----------
1> set transaction isolation level serializable
2> go
1> set transaction isolation level serializable
2> go
1> select count(*) from flight where number=1
2> go

-----------
3
1>
Session 2:
----------
1> set transaction isolation level serializable
2> go
1> set transaction isolation level serializable
2> go

We can read all the rows:

1> select * from flight
2> go
number seat
----------- -----------
1 1
1 2
1 3
2 1
We can add seats to Flight 2::

1> insert into flight values(2,2)
2> go

.... we can see the old and new rows::

(1 rows affected)
1> select * from flight
2> go
number seat
----------- -----------
1 1
1 2
1 3
2 1
2 2

(5 rows affected)

... but we cannot add any new seats to Flight 1 thanks to key-range locking
caused by 'select count(*) from flight where number=1'.

1> insert into flight values(1,4)
2> go
** blocked **

Apparently, no page/table level locking happens here ...

VC

Jul 20 '05 #124
VC
Hi,

"Niall Litchfield" <ni**************@dial.pipex.com> wrote in message
news:40**********************@news-text.dial.pipex.com...
....
You are attacking a strawman -- the design is not what's being discussed
here, concurrency control/consistency is...
Two sides of the same coin surely? Decide how your platform behaves and

code for its strengths (and weaknesses). Coding deciding that all database
systems should be treated as if they had the feature set of ms access would surely be an error.
But of course, I do not have any argument with what you've just said. My
first response in this thread was intended to show that the claim that
Oracle's *consistency* is somehow superior to the alternative approaches is
not true at all. In some cases, even concurrency can be worse that that of
a locking database.
Regards.

VC


--
Niall Litchfield
Oracle DBA
http://www.niall.litchfield.dial.pipex.com





Here's another simple problem:

===
Let's assume we have two tables, P ( parent) and C (child):

create table P(pid int primary key, ...);
create table C(..., pid references P, ...);

We'd like to insert some child rows but, at the same time, avoid entering data in vain if someone happens to delete a parent row during our data
entry.

In a locking scheduler, this sequence would run OK:

select count(*) into l_cnt from P where pid=1;
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

A variation of this scenario might be that, for whatever reason, you

cannot
use referential integrity and cannot delete the parent row. You'd rather mark the parent row inactive and prevent insertions if the parent is
inactive. Our transaction would look:

select count(*) into l_cnt from P where pid=1 and status='ACTIVE';
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

===

Needless to say that in the second case Oracle won't ensure the correct
outcome in any of its isolation levels if a concurrent transaction

happens
to change the parent row status to 'inactive'. In the first case, a
non-serializable history will be avoided thanks to RI, however, the
entered data will be lost. The above example can be applied to any similar master-detail model.

An Oracle solution to the problem would be to use the "for update" clause instead of just select. This will result in *worse* concurrency under
Oracle than under a locking scheduler since all inserts will in fact be
*serial*.

VC


Jul 20 '05 #125
D Guntermann wrote:
"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090387704.737703@yasure...
Dan wrote:

Where was VC wrong?

- Dan


When you respond to what I wrote rather than what you wrote I wrote I
will gladly respond.

From where I am reading ... you came into a conversation in the middle
and are not tracking on the point I have been trying to make which
relates to point-in-time accuracy.

I think you are missing the point. There is a formal definition and
criteria for "point-in-time" accuracy in the face of concurrent manipulation
of data. Following from this there is a definition and a criteria for
"point-in-time" accuracy that allows for interleaving of transactions,
called serializability. Saying "I use multi-versioned read consistency" and
a snapshot in time is not necessarily it.


I think you are agreeing with me whether you realize it or not. But
perhaps you were responding to someone else.

Daniel Morgan

Jul 20 '05 #126

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090544822.559004@yasure...
D Guntermann wrote:
"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090387704.737703@yasure...
Dan wrote:
Where was VC wrong?

- Dan

When you respond to what I wrote rather than what you wrote I wrote I
will gladly respond.

From where I am reading ... you came into a conversation in the middle
and are not tracking on the point I have been trying to make which
relates to point-in-time accuracy.

I think you are missing the point. There is a formal definition and
criteria for "point-in-time" accuracy in the face of concurrent manipulation of data. Following from this there is a definition and a criteria for
"point-in-time" accuracy that allows for interleaving of transactions,
called serializability. Saying "I use multi-versioned read consistency" and a snapshot in time is not necessarily it.


I think you are agreeing with me whether you realize it or not. But
perhaps you were responding to someone else.


:-) OK.
Daniel Morgan

Jul 20 '05 #127
["Followup-To:" header set to comp.databases.oracle.server.]
On 2004-07-21, michael newport <mi************@yahoo.com> wrote:
such as ?


Hash partitioning and active-active clustering.

Hot/Warm backups and Point-in-time recovery may also be issues.

However, I don't know enough about Ingres to comment on this,
although this is clearly an area where MySQL and Postgres
are lacking.

--
It is not true that Microsoft doesn't innovate.

They brought us the email virus.

In my Atari days, such a notion would have |||
been considered a complete absurdity. / | \


Jul 20 '05 #133
["Followup-To:" header set to comp.databases.oracle.server.]
On 2004-07-21, michael newport <mi************@yahoo.com> wrote:
such as ?


Hash partitioning and active-active clustering.

Hot/Warm backups and Point-in-time recovery may also be issues.

However, I don't know enough about Ingres to comment in this.
Although this is clearly an area where MySql and Postgres clearly
are lacking.

--
It is not true that Microsoft doesn't innovate.

They brought us the email virus.

In my Atari days, such a notion would have |||
been considered a complete absurdity. / | \


Jul 20 '05 #134

This thread has been closed and replies have been disabled. Please start a new discussion.

Similar topics

38
by: Mike | last post by:
No flame wars, please! We're planning a move from a non-relational system to a relational system. Our choices have been narrowed to Oracle and DB2. Since we're moving from non-relational to...
125
by: Rhino | last post by:
One of my friends, Scott, is a consultant who doesn't currently have newsgroup access so I am asking these questions for him. I'll be telling him how to monitor the answers via Google Newsgroup...
0
by: ryjfgjl | last post by:
ExcelToDatabase: batch import excel into database automatically...
1
isladogs
by: isladogs | last post by:
The next Access Europe meeting will be on Wednesday 6 Mar 2024 starting at 18:00 UK time (6PM UTC) and finishing at about 19:15 (7.15PM). In this month's session, we are pleased to welcome back...
0
by: Vimpel783 | last post by:
Hello! Guys, I found this code on the Internet, but I need to modify it a little. It works well, the problem is this: Data is sent from only one cell, in this case B5, but it is necessary that data...
0
by: ArrayDB | last post by:
The error message I've encountered is; ERROR:root:Error generating model response: exception: access violation writing 0x0000000000005140, which seems to be indicative of an access violation...
1
by: PapaRatzi | last post by:
Hello, I am teaching myself MS Access forms design and Visual Basic. I've created a table to capture a list of Top 30 singles and forms to capture new entries. The final step is a form (unbound)...
1
by: CloudSolutions | last post by:
Introduction: For many beginners and individual users, requiring a credit card and email registration may pose a barrier when starting to use cloud servers. However, some cloud server providers now...
1
by: Defcon1945 | last post by:
I'm trying to learn Python using Pycharm but import shutil doesn't work
0
by: Faith0G | last post by:
I am starting a new it consulting business and it's been a while since I setup a new website. Is wordpress still the best web based software for hosting a 5 page website? The webpages will be...
0
isladogs
by: isladogs | last post by:
The next Access Europe User Group meeting will be on Wednesday 3 Apr 2024 starting at 18:00 UK time (6PM UTC+1) and finishing by 19:30 (7.30PM). In this session, we are pleased to welcome former...

By using Bytes.com and it's services, you agree to our Privacy Policy and Terms of Use.

To disable or enable advertisements and analytics tracking please visit the manage ads & tracking page.