
oracle - mysql comparison

hey all,

I realize that this question might pop up from time to time, but I
haven't seen it in a while and things might have changed, so -

Right now (July 2004) how does mysql stand up in comparison to oracle?
We are seriously considering migrating our multi-processor oracle
system to mysql to save on licensing costs, and would need several
features that mysql may or may not have:

- replication
- archive logging
- interoperability with oracle/database links with oracle
- PL/SQL type language/stored procedures
- roles
- oracle migration scripts/conversion
- embedded java API
- partitioning/tablespace assignment
- import/export tools
- equivalent oracle datatypes
- multi-processor support
- performance (relatively equivalent or better)

In addition, have any oracle DBAs out there gone through the
conversion process between mysql and oracle? If so, what were the
gotchas/catches that you went through? What are features that are
there that you like or feel are missing? I apologize in advance for
the cross-post, but I did want to get as wide a range of opinion as
possible.

Thanks much,

jon
Jul 20 '05
VC wrote:
Please see below:

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090027247.885352@yasure...
VC wrote:

Hello Daniel,

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1089945278.134355@yasure...
VC wrote:

A correct concurrency control/model ensures that all the permitted
transactions are serializable. In this sense, databases like DB2 or MSSQL
implement a correct concurrency model albeit at the expense of lower
concurrency in some circumstances. Funnily enough, none of the Oracle
isolation levels can make the same promise, i.e. ensure serializable
transaction histories, in any of its isolation levels. Usually, it's quite
easy to obtain correct results by augmenting an isolation level with
something like 'select for update', though..

And if this is true, why, exactly, would anyone care at the expense of
being able to extract an accurate answer from a database without
performing table locks on all resources?

I am not sure what specific part of my message the word *this* ('if this
is true') refers to, but assuming it's the last sentence, then you surely
know that 'select for update' takes a write (TX) lock on all the rows
involved. A locking scheduler would take *read* row-level locks in similar
circumstances automatically. If my interpretation is incorrect, please
elaborate.

VC


My point is that in all of the other databases to which you have
referred it is impossible to get a result set consistent to a
point-in-time without locking the resources (not at the row level but at
the table level).

Something I would think far more important than anything you brought up.

Assuming you are talking here about the locking database's SERIALIZABLE,
no, it's not always necessary to lock the whole table(s) in order to obtain
a consistent result. Sometimes, yes, the whole table has to be locked,
but not always. I can give an example or two if you care.

On the other hand, Oracle's concurrency control makes solving some problems
harder than in a locking environment, precisely because of the Oracle locking
mechanism's simplicity (or, some may say, deficiency), e.g. the absence of
row-level read locks, key-range locking, etc.

As I mentioned in my reply to another poster, in many situations Oracle
undoubtedly provides higher concurrency than locking database engines; after
all, it was designed to do so, although certain compromises have been made.

Regards.

VC


I think your understanding of Oracle is rather slight based on what I
assume you mean.

How in mysql can you obtain a result consistent to a point-in-time without
table locking all resources?

And in Oracle ... I would like a demonstration of how one would go about
not having a point-in-time consistent result without intentionally
corrupting a query result.

Daniel Morgan

Jul 20 '05 #51
VC

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090216505.569999@yasure...

I think your understanding of Oracle is rather slight based on what I
assume you mean.
Whilst the conclusion may very well be true, your argument is unsound.
Please substitute a real premise for the vague 'based on what I assume you
mean'.

How in mysql can you obtain a result consistent to a point-in-time without
table locking all resources?
Please go and re-read my original response to Alex Filonov who claimed that
Oracle consistency is somehow more correct than that of DB2/MSSQL. We've
discussed DB2/MSSQL vs. Oracle consistency issues, not mysql concurrency
control.

As to mysql, the answer to your question is truly simple: MYSQL has (and
has had for quite a while) an implementation of Oracle style 'read
consistency' with rollback segments and all.
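
A minimal sketch of what that looks like in practice, assuming the InnoDB
table handler that provides this behaviour; the table and column names are
illustrative, not from the thread:

==
-- Sketch only; InnoDB supplies the rollback-segment-based consistent reads.
create table accounts (id int primary key, balance int) type=InnoDB;

begin;
select sum(balance) from accounts;  -- reads a consistent snapshot rebuilt from the
                                    -- rollback segment; concurrent writers are not blocked
commit;
==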

And in Oracle ... I would like a demonstration of how one would go about
not having a point-in-time consistent result without intentionally
corrupting a query result.


I am not sure what you are trying to say here. Please elaborate.

VC
Jul 20 '05 #53
VC wrote:
"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090216505.569999@yasure...
I think your understanding of Oracle is rather slight based on what I
assume you mean.

Whilst the conclusion may very well be true, your argument is unsound.
Please substitute a real premise for the vague 'based on what I assume you
mean'.


Ok bluntly ... I don't think you know what you are talking about. A lot
less vague but I'd have preferred not to say it in those words.
How in mysql can you obtain a result consistent to a point-in-time without
table locking all resources?


Please go and re-read my original response to Alex Filonov who claimed that
Oracle consistency is somehow more correct than that of DB2/MSSQL. We've
discussed DB2/MSSQL vs. Oracle consistency issues, not mysql concurrency
control.


The same statement is true for both DB2 and MS SQL. You cannot create a
point-in-time consistent read without full table locks on all resources.
As to mysql, the answer to your question is truly simple: MYSQL has (and
has had for quite a while) an implementation of Oracle style 'read
consistency' with rollback segments and all.


Since what version?
And in Oracle ... I would like a demonstration of how one would go about
not having a point-in-time consistent result without intentionally
corrupting a query result.


I am not sure what you are trying to say here. Please elaborate.

VC


I'm saying you can't, without intent, create a query result that is not
read consistent to a point-in-time.

Daniel Morgan

Jul 20 '05 #54
.....strength - meaning, Ingres does anything that Oracle can.

CA have seen the light and are taking Ingres OpenSource.

The DB market is saturated with products that can do the job.

The advantage now is that a major database will be free of charge.

Regards
Mike
Jul 20 '05 #57
"michael newport" <mi************@yahoo.com> wrote in message
news:63*************************@posting.google.com...
....strength - meaning, Ingres does anything that Oracle can.

CA have seen the light and are taking Ingres OpenSource.

The DB market is saturated with products that can do the job.

The advantage now is that a major database will be free of charge.


mySQL and PostGres *are* free of charge, and it would seem to me likely to
remain so and attract community development. If CA are really saying 'nope,
we can't take the competition, let's leave the market', then they should say
so.
--
Niall Litchfield
Oracle DBA
http://www.niall.litchfield.dial.pipex.com
Jul 20 '05 #59
VC

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090240148.855788@yasure...
VC wrote:
"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090216505.569999@yasure...
I think your understanding of Oracle is rather slight based on what I
assume you mean.

Whilst the conclusion may very well be true, your argument is unsound.
Please substitute a real premis for the vague 'based on what I assume you
mean'.


Ok bluntly ... I don't think you know what you are talking about. A lot
less vague but I'd have preferred not to say it in those words.


An example of my 'not knowing what I am talking about' would be nice.
Always eager to learn from my betters.
How in mysql can you obtain a result consistent to point-in-time without
table locking all resources.
Please go and re-read my original response to Alex Filonov who claimed that
Oracle consistency is somehow more correct than that of DB2/MSSQL. We've discussed DB2/MSSQL vs. Oracle consistency issues, not mysql concurrency
control.


Same statement is true for both DB2 and MS SQL. You can not create a
point-in-time consistent read without full table locks on all resources.


I'll kill two birds with one stone here if you don't mind. Firstly, I'll
describe a scenario where DB2/MSSQL would *not* need to lock an entire table
in order to yield a consistent result in SERIALIZABLE IL. Secondly, I'll
claim that Oracle won't be able to produce consistent data in Oracle
SERIALIZABLE under the same circumstances.

========
Imagine a simple flight reservation system represented by a single table
containing flight numbers and passenger information:

create table PASSENGERS(FLIGHT_NUMBER ..., INFO ...);
create index P_Idx on PASSENGERS(FLIGHT_NUMBER);

Let's say that a flight can accommodate a maximum of 50 people. Our transaction
in pseudo-code might look like this:

set transaction isolation level serializable
...
select count(*) into l_cnt from PASSENGERS where FLIGHT_NUMBER=999;  -- check the number of sold tickets
if l_cnt < 50 insert into PASSENGERS(999, <info>...);  -- if there is a vacancy, insert a row
commit;

Under DB2/MSSQL, a transaction will lock only a subset of rows (not the
whole table) for a given flight, thus ensuring consistent results.

P.S. I've reproduced, to the best of my recollection, the 'overbooking'
example discussed a couple of years ago.
========
And in Oracle ... I would like a demonstration of how one would go about
not having a point-in-time consistent result without intentionally
corrupting a query result.
I am not sure what you are trying to say here. Please elaborate.

VC


I'm saying you can't, without intent, create a query result that is not
read consistent to a point-in-time.


Under Oracle, the above reservation scenario will lead to overbooked
flights. Figuring out why and how to fix the problem is left as an exercise
for the interested student.
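
A minimal sketch of one common fix, under the assumption of a hypothetical
parent table FLIGHTS keyed by FLIGHT_NUMBER (not part of the example above):
bookings for one flight are serialized by locking that single parent row
before counting.

==
-- Sketch only; FLIGHTS and l_dummy are assumed, the rest follows the pseudo-code above.
select FLIGHT_NUMBER into l_dummy from FLIGHTS where FLIGHT_NUMBER = 999 for update;  -- one row lock per flight
select count(*) into l_cnt from PASSENGERS where FLIGHT_NUMBER = 999;                 -- the count can no longer go stale
if l_cnt < 50 insert into PASSENGERS(999, <info>...);                                 -- same insert as before
commit;
==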
As to mysql, the answer to your question is truly simple: MYSQL has (and has had for quite a while) an implementation of Oracle style 'read
consistency' with rollback segments and all.


Since what version?


Since Version 3.23 (March 2001).

Jul 20 '05 #60
michael newport wrote:
....strength - meaning, Ingres does anything that Oracle can.

CA have seen the light and are taking Ingres OpenSource.

The DB market is saturated with products that can do the job.

The advantage now is that a major database will be free of charge.

Regards
Mike


So Ingres has equivalent functionality to RAC? How about InterMedia?
How about Range-Hash partitioning? System triggers? DDL triggers?
Packages? User defined operators? Advanced Queueing? User defined locks?
User defined index types? I'd keep going but I think you get the point.

Sure you meant "anything"?

Daniel Morgan

Jul 20 '05 #61
VC wrote:

I'll kill two birds with one stone here if you don't mind. Firstly, I'll
describe a scenario where DB2/MSSQL would *not* need to lock an entire table
in order to yield a consistent result in SERIALIZABLE IL. Secondly, I'll
claim that Oracle won't be able to produce consistent data in Oracle
SERIALIZABLE under the same circumstances.
You already missed the point. Oracle doesn't need SERIALIZABLE or
anything else other than a standard SELECT * FROM to give a read
consistent view of the data at a point-in-time.
Imagine a simple flight reservation system represented by a single table
containing flight numbers and passenger information:

create table PASSENGERS(FLIGHT_NUMBER ..., INFO ...);
create index P_Idx on PASSENGERS(FLIGHT_NUMBER);

Let's say that a flight can accommodate a maximum of 50 people. Our transaction
in pseudo-code might look like this:

set transaction isolation level serializable
...
select count(*) into l_cnt from PASSENGERS where FLIGHT_NUMBER=999;  -- check the number of sold tickets
if l_cnt < 50 insert into PASSENGERS(999, <info>...);  -- if there is a vacancy, insert a row
commit;

Under DB2/MSSQL, a transaction will lock only a subset of rows (not the
whole table) for a given flight thus ensuring consistent results


Good attempt but completely irrelevant to the point-in-time accuracy of
a report.

Try this scenario and see what happens.

One bank with multiple accounts. Need to determine the exact amount of
money in the bank at midnight tonight while thousands of people are
using ATM machines and while deposits, checks, and credit card
transactions are being simultaneously cleared. The query of the balance
in all accounts will take 3+ hours.
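
For concreteness, a minimal sketch of the kind of statement being described,
with assumed table and column names: in Oracle, even a 3-hour run of this one
query returns the total as of the instant it started, rebuilt from undo, with
no table locks taken.

==
-- Illustrative only; ACCOUNTS and BALANCE are assumptions, not from the thread.
select sum(balance) as total_on_hand
  from accounts;
==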

Serialize to your heart's content ... you aren't going to do it without
a full table lock except in Oracle and, I believe, PostgreSQL and maybe
Firebird. And I wouldn't want to be the one trying to run a bank on
either of the latter.

Daniel Morgan

Jul 20 '05 #62
"michael newport" <mi************@yahoo.com> wrote in message
news:63*************************@posting.google.co m...
....strength - meaning, Ingres does anything that Oracle can.

CA have seen the light and are taking Ingres OpenSource.

The DB market is saturated with products that can do the job.

The advantage now is that a major database will be free of charge.


mySQL and PostGres *are* free of charge, and it would seem to me likely to
remain so and attract community development. If CA are really saying 'nope
we can't take the competition lets leave the market', then they should say
so.
--
Niall Litchfield
Oracle DBA
http://www.niall.litchfield.dial.pipex.com
Jul 20 '05 #63
I have no idea what CA are saying, corporate types move in anally retentive ways.

But we should all welcome competition.

regards
Mike
Jul 20 '05 #64
you can replace Oracle with Ingres, this is what I mean;

and very soon you can use Ingres for free.

http://news.com.com/CA+open-sources+...l?tag=nefd.top
Jul 20 '05 #67
VC

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090302131.661893@yasure...
VC wrote:

Under DB2/MSSQL, a transaction will lock only a subset of rows (not the
whole table) for a given flight thus ensuring consistent results
Good attempt but completely irrelevant to a point-in-time accuracy of
a report.


Well, the example I provided was a response to the two statements you've
made:
1. DB2/MSSQL have to lock the entire table in order to produce a consistent
result in SERIALIZABLE IL. (No, they do not.)
2. An Oracle query will produce a 'read-consistent' result. Yes, Oracle
will indeed produce a snapshot of the data (a 'read-consistent' result) as of
the point in time the transaction started. The problem is that the snapshot
itself may be inconsistent due to concurrent update/insert transactions (see
my example).
Try this scenario and see what happens.

One bank with multiple accounts. Need to determine the exact amount of
money in the bank at midnight tonight while thousands of people are
using ATM machines and while deposits, checks, and credit card
transactions are beeing simultaneously cleared. The query of the balance
in all accounts will take 3+ hours.
As I've already mentioned several times, no one disputes the fact that in
certain cases Oracle provides higher concurrency due to MVCC.
I also said that there are several solutions to the reporting problem in
locking databases, such as a replicated or stand-by database. There is
another solution, namely triple mirroring of the OLTP database. SAN vendor
hardware can "split off" the third mirrored drive set, creating almost
instantaneously a clone of the original database (e.g. EMC BCV) at a given
point in time. It's interesting to note that the same technique is
widely used for Oracle databases as well in order to off-load the main
instance. The clone is used both for reporting and backups.

Serialize to your heart's content ... you aren't going to do it without
a full table lock ...


As I've demonstrated, only the subset of rows involved in the transaction has
to be locked, which may, of course, turn out to be the whole table.
Jul 20 '05 #69
"VC" <bo*******@hotmail.com> wrote in message news:<LDUJc.105848$Oq2.2189@attbi_s52>...
"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.com...
"VC" <bo*******@hotmail.com> wrote in message news:<l9DJc.98700$XM6.25133@attbi_s53>...
"Alex Filonov" <af******@yahoo.com> wrote in message
news:33*************************@posting.google.com...

>As for correct concurrency
> model,

A correct concurrency control/model ensures that all the permitted
transactions are serializable. In this sense, databases like DB2 or MSSQL implement a correct concurrency model albeit at the expense of lower
concurrency in some circumstances. Funnily enough, none of the Oracle
isolation levels can make the same promise, i.e. ensure serializable
transaction histories, in any of its isolation levels. Usually, it's quite easy to obtain correct results by augmenting an isolation level with
something like 'select for update', though..


Hm, what about SERIALIZABLE isolation level in Oracle? I don't see that
it's any different from similar isolation level in MSSQL.


Oracle does not have SERIALIZABLE despite their assurances to the contrary.
It's trivial to imagine a transaction history where Oracle's SERIALIZABLE
won't produce correct results. For starters, try 'insert into t1 select
max(ID)+1 from t1'. The issue with Oracle's serializable being not quite
what it's labeled was discussed extensively in this newsgroup a couple of
years ago. Unfortunately, I do not remember the subject name -- something
to do with airline reservations.
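
A sketch of the anomaly that one-liner produces, assuming two concurrent
sessions and no unique constraint on ID; the session labels and values are
illustrative.

==
-- Session A:
alter session set isolation_level = serializable;
insert into t1 select max(id) + 1 from t1;   -- sees max(id) = 10, inserts 11

-- Session B, concurrently, before A commits:
alter session set isolation_level = serializable;
insert into t1 select max(id) + 1 from t1;   -- also sees max(id) = 10, also inserts 11

-- Both commits succeed, leaving two rows with id = 11, an outcome no serial
-- order of the two transactions could produce. A scheduler using read/key-range
-- locks would instead block one session (typically resolving via a deadlock
-- rollback), preserving serializability.
==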


You mean, this discussion:

http://groups.google.com/groups?hl=e...om%26rnum%3D17

Well, serializability in the ANSI92 definition is probably impossible for
a big database with a big number of concurrent sessions (and an airline
reservation system is a good example). As for the example of overbooking,
it's total crap. A good design for an airline reservation system would just
have an additional table with a row for each seat on each flight, and
reserve seats (as in real life), not pieces of an airplane. No need for
serializability at all here. No problem with overbooking. If anybody thinks
that such a table of seats is too big, they haven't worked with real
databases.
2. On the other hand, if the implementation was a mixed OLTP/DSS project, then judging by your story, the database architect was not qualified to
design a system like that. His/her being surprised that a locking
scheduler's reads block writes, and vice versa, is akin to a swimmer's being amazed that water is wet. There are several well-known approaches to the
reporting problem such as replicating the main database to a reporting
database, transaction log shipping, scheduling reports off-hours, etc.


Yeah, right. Now, tell me where to find managers who understand what
kind of DB architects they really need. And who prefer architects
proposing more expensive, although correct, solutions.


Well, one can hardly blame the hammer if one wants to use it as a
screwdriver, can one? Oracle has quite a few peculiarities of its own that
can jeopardize a project if those responsible for design/implementation have
no clue as to what they are doing.


The problem is that most of current IT/IS management don't know a hammer
from a screwdriver. Not literally, of course, but on a level a little bit
higher than 2 + 2 = 4.
And, of course,
offline reporting (and all the solutions you propose are actually variants of
it) is not always what the customer wants.


What's wrong with those solutions for long-running reports, quick queries
not being a problem? Besides, the situation is not very much different
from running a long report under Oracle, where the results won't be actual
due to the very nature of the beast -- 'read consistency'.


What do you mean, not actual? They are actual for that point in time.
As for the real situation, the reports didn't run for a very long time, just
something between 5 and 15 minutes. You wouldn't notice those if running on
Oracle. And quick queries on read-locking databases are a big problem if they
need to lock a significant number of rows (especially when locks are
escalated).
There is no argument that a multiversioning scheduler provides higher
concurrency in many cases, after all that's its raison d'etre, but one
should not forget about trade-offs/pitfalls and think that alternative,
locking, approaches won't work.
VC

Jul 20 '05 #71
VC

"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.com...
"VC" <bo*******@hotmail.com> wrote in message news:<LDUJc.105848$Oq2.2189@attbi_s52>...


You mean, this discussion:

http://groups.google.com/groups?hl=e...f3.108267%24_m4.1357169%40news2.giganews.com&rnum=17&prev=/groups%3Fq%3Dg:thl2804310228d%26dq%3D%26hl%3Den%26lr%3D%26ie%3DUTF-8%26selm%3DKAnf3.108267%2524_m4.1357169%2540news2.giganews.com%26rnum%3D17
Yes, that's the one.
Well, serializability in ANSI92 definition is probably impossible for a
big database with big number of concurrent sessions (and airline
reservation system is a good example).
As I showed in another posting, any locking database handles the
reservation example OK, without locking the entire table.

As for example of overbooking, it's total crap.
Good design for airline reservation system would just have additional
table for each seat on each flight, and reserve seats (as in real life),
not pieces of an airplane. No need for serializability at all here. No
problem with overbooking.
Your appeal to the design aspect is disingenuous but irrelevant. The point is
that a certain class of transaction histories similar to the reservation
example cannot be handled *correctly* in Oracle's 'SERIALIZABLE', contrary
to what you claimed elsewhere ("As for correct concurrency model, I
remember one project").

Here's another textbook example for you:
==
There are two linked accounts (id=1 and id=2) in a bank. A transaction
might look as follows:

== withdraw from 1
select amount into x from accounts where id=1
select amount into y from accounts where id=2
if x+y >= withdrawal then update accounts set amount=amount-withdrawal
where id=1
commit
=====

Any commercial locking scheduler will handle the scenario correctly. Oracle
won't.
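
To make the failure concrete, here is a worked interleaving under Oracle
SERIALIZABLE; the balances and withdrawal amount are illustrative, and no
'select for update' is used.

==
-- Both accounts start with amount = 50; a withdrawal is allowed only while x + y >= withdrawal.
-- Session A (withdraw 80 from account 1): reads x = 50, y = 50; 100 >= 80, so it proceeds.
-- Session B (withdraw 80 from account 2): reads x = 50, y = 50; 100 >= 80, so it proceeds too.
-- Session A: update accounts set amount = amount - 80 where id = 1; commit;
-- Session B: update accounts set amount = amount - 80 where id = 2; commit;
-- Final state: -30 and -30, a combined -60, which no serial order of A and B allows.
-- Under shared (S) read locks, each session's read of the other's row conflicts with
-- that other session's update, so one transaction waits or is rolled back in a
-- deadlock rather than committing the non-serializable result.
==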

What's wrong with those solutions for long running reports, quick queries not being a problem ? Besides, the situation is not very much different from running a long report under Oracle where the results won't be actual due to the very nature of the beast -- 'read consistency'.


What do you mean, not actual? They are actual for the point of time.


Sorry, an unfortunate choice of words. I used 'actual' in the sense of
'current'. A 15-minute-long Oracle report is 15 minutes out of date, whilst
a locking scheduler's report result is current as of the moment the query ended.
As for real situation, reports didn't run very long time, just
something
between 5 and 15 minutes. You wouldn't notice those if running on
Oracle.
Those are *long* reports. Besides, for the last time I am repeating
myself, no one is disputing Oracle concurrency being better in this
situation.
And quick queries on read-locking databases is a big problem, if they
need to lock significant number of rows (especially when locks are
escalated).
Somehow TPC-C results (
http://www-306.ibm.com/software/data...ks/021704.html ) do not
bear you out.
There is no argument that a multiversioning scheduler provides higher
concurrency in many cases, after all that's its raison d'etre, but one
should not forget about trade-offs/pitfalls and think that alternative,
locking, approaches won't work.

> VC

Jul 20 '05 #73

"VC" <bo*******@hotmail.com> schrieb im Newsbeitrag news:RfdLc.109372$WX.92600@attbi_s51...
Here's another textbook example for you:
==
There are two linked accounts (id=1 and id=2) in a bank. A transaction
might look as follows:

== withdraw from 1
select amount into x from accounts where id=1
select amount into y from accounts where id=2
if x+y >= withdrawal then update accounts set amount=amount-withdrawal
where id=1
commit
=====

Any commercial locking scheduler will handle the scenario correctly. Oracle
won't.

Sorry for butting in but I'm just trying to learn here, so what would go wrong?
And what about "select for update" in oracle? It's supposed to lock the rows it hits.

Lots of Greetings!
Volker
Jul 20 '05 #74
VC
Hi,

"Volker Hetzer" <vo***********@ieee.org> wrote in message
news:cd**********@nntp.fujitsu-siemens.com...

"VC" <bo*******@hotmail.com> schrieb im Newsbeitrag news:RfdLc.109372$WX.92600@attbi_s51...
Here's another textbook example for you:
==
There are two linked accounts (id=1 and id=2) in a bank. A transaction
might look as follows:

== withdraw from 1
select amount into x from accounts where id=1
select amount into y from accounts where id=2
if x+y >= withdrawal then update accounts set amount=amount-withdrawal
where id=1
commit
=====

Any commercial locking scheduler will handle the scenario correctly. Oracle won't.

Sorry for butting in but I'm just trying to learn here, so what would go
wrong? And what about "select for update" in oracle? It's supposed to lock
the rows it hits.

'Select for update' will indeed work correctly. However, there are at
least three problems with this solution:

1. 'Serializable' is supposed to ensure correct execution by itself without
resorting to the 'for update' clause.
2. Concurrency will be *lower*, funnily enough, than that of a locking
database due to the selects using exclusive (TX) locks in Oracle instead of
shared (S) locks in DB2/Sybase. For example, if x+y < withdrawal, clearly
neither transaction would block under a locking scheduler. Not so in Oracle
(see the sketch below).
3. 'For update' is non-standard (least important).
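
A rough sketch of the contrast in point 2, reusing the two linked accounts
from the example above; the interleaving is illustrative.

==
-- Locking scheduler (DB2/Sybase), SERIALIZABLE: plain selects take shared (S) locks.
-- Session A: select amount from accounts where id = 1;  then id = 2;   -- S locks
-- Session B: select amount from accounts where id = 1;  then id = 2;   -- S locks are compatible
-- If x + y < withdrawal, neither session updates anything and neither ever waits.

-- Oracle with the workaround: the reads themselves take exclusive row (TX) locks.
-- Session A: select amount from accounts where id in (1, 2) for update;
-- Session B: select amount from accounts where id in (1, 2) for update;   -- blocks behind A
-- Even when no withdrawal ends up happening, B waits until A commits.
==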

Regards.

VC

Lots of Greetings!
Volker

Jul 20 '05 #76
I took it that you were saying that Ingres becoming open source was good
news as it provided a free, major-database alternative to Oracle. That db
already exists in the form of mysql and postgres.

--
Niall Litchfield
Oracle DBA
http://www.niall.litchfield.dial.pipex.com
"michael newport" <mi************@yahoo.com> wrote in message
news:63*************************@posting.google.com...
I have no idea what CA are saying, corporate types move in anally retentive ways.
But we should all welcome competition.

regards
Mike

Jul 20 '05 #77
"VC" <bo*******@hotmail.com> wrote in message news:<LDUJc.105848$Oq2.2189@attbi_s52>...
"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.c om...
"VC" <bo*******@hotmail.com> wrote in message news:<l9DJc.98700$XM6.25133@attbi_s53>...
"Alex Filonov" <af******@yahoo.com> wrote in message
news:33*************************@posting.google.co m...

>As for correct concurrency
> model,

A correct concurrency control/model ensures that all the permitted
transactions are serializable. In this sense, databases like DB2 or MSSQL implement a correct concurrency model albeit at the expense of lower
concurrency in some circumstances. Funnily enough, none of the Oracle
isolation levels can make the same promise, i.e. ensure serializable
transaction histories, in any of its isolation levels. Usually, it's quite easy to obtain correct results by augmenting an isolation level with
something like 'select for update', though..


Hm, what about SERIALIZABLE isolation level in Oracle? I don't see that
it's any different from similar isolation level in MSSQL.


Oracle does not have SERIALIZABLE despite their assurances to the contrary.
It's trivial to imagine a transaction history where Oracle's SERIALIZABLE
won't produce correct results. For starters, try 'insert into t1 select
max(ID)+1 from t1'. The issue with Oracle's serializable being not quite
what it's labeled was discussed extensively in this newsgroup a couple of
years ago. Unfortunately, I do not remeber the subject name -- something
to do with airline reservations.


You mean, this discussion:

http://groups.google.com/groups?hl=e...om%26rnum%3D17

Well, serializability in ANSI92 definition is probably impossible for
a
big database with big number of concurrent sessions (and airline
reservation
system is a good example). As for example of overbooking, it's total
crap.
Good design for airline reservation system would just have additional
table for each seat on each flight, and reserve seats (as in real
life),
not pieces of an airplane. No need for serializability at all here. No
problem
with overbooking. If anybody thinks that such table of seats is too
big,
they didn't work with real databases.
2. On the other hand, if the implementation was a mixed OLTP/DSS project, then judging by your story, the database architect was not qualified to
design a system like that. His/her being surprised that a locking
scheduler's reads block writes, and vice versa, is akin to a swimmer's being amazed that water is wet. There are several well-known approaches to the
reporting problem such as replicating the main database to a reporting
database, transaction log shipping, scheduling reports off-hours, etc.


Yeah, right. Now, tell me where to find managers who understand what
kind of DB architects they really need. And who prefer architects
proposing more expensive, although correct, solutions.


Well, one can hardly blame the hammer if one wants to use it as a
screwdriver can one ? Oracle has quite a few pecularities of its own that
can jeopardize a project if those responsible for design/implementation have
no clue as to what they are doing.


The problem is that most of current IT/IS management don't know hammer
from
a screwdriver. Not literally, of course, but on a level a little bit
higher
that 2 + 2 = 4.
And, of course,
offline reporting (and all solutions you propose are actually variants of
it) is not always what customer wants.


What's wrong with those solutions for long running reports, quick queries
not being a problem ? Besides, the situation is not very much different
from running a long report under Oracle where the results won't be actual
due to the very nature of the beast -- 'read consistency'.


What do you mean, not actual? They are actual for the point of time.
As for real situation, reports didn't run very long time, just
something
between 5 and 15 minutes. You wouldn't notice those if running on
Oracle.
And quick queries on read-locking databases is a big problem, if they
need to lock significant number of rows (especially when locks are
escalated).
There is no argument that a multiversioning scheduler provides higher
concurrency in many cases, after all that's its raison d'etre, but one
should not forget about trade-offs/pitfalls and think that alternative,
locking, approaches won't work.
VC

Jul 20 '05 #78
"VC" <bo*******@hotmail.com> wrote in message news:<p_XKc.135317$XM6.125873@attbi_s53>...
"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090240148.855788@yasure...
VC wrote:
"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090216505.569999@yasure...

>I think your understanding of Oracle is rather slight based on what I
>assume you mean.
Whilst the conclusion may very well be true, your argument is unsound.
Please substitute a real premise for the vague 'based on what I assume you mean'.
Ok bluntly ... I don't think you know what you are talking about. A lot
less vague but I'd have preferred not to say it in those words.


An example of my 'not knowing what I am talking about' would be nice.
Always eager to learn from my betters.
>How in mysql can you obtain a result consistent to point-in-time without
>table locking all resources.

Please go and re-read my original response to Alex Filonov who claimed that Oracle consistency is somehow more correct than that of DB2/MSSQL. We've discussed DB2/MSSQL vs. Oracle consistency issues, not mysql concurrency
control.


Same statement is true for both DB2 and MS SQL. You can not create a
point-in-time consistent read without full table locks on all resources.


I'll kill two birds with one stone here if you don't mind. Firstly, I'll
describe a scenario where DB2/MSSQL would *not* need to lock an entire table
in order to yield a consistent result in SERIALIZABLE IL. Secondly, I'll
claim that Oracle won't be able to produce consistent data in Oracle
SERIALIZABLE under the same circumstances.

========
Imagine a simple flight reservation system represented by a single table
containing flight numbers and passenger information:

create table PASSENGERS(FLIGHT_NUMBER ..., INFO ...);
create index P_Idx on PASSENGERS(FLIGHT_NUMBER);

Let's say that a flight can accomodate maximum 50 people. Our transaction
in pseudo-code might look like this:

set transaction isolation level serializable
...
select count(*) into l_cnt from PASSENGERS where FLIGHT_NUMBER=999; --
check the number of sold tickets
if l_cnt < 50 insert into PASSENGERS(999, <info>...); -- if there is a
vacancy insert a row
commit;

Under DB2/MSSQL, a transaction will lock only a subset of rows (not the
whole table) for a given flight thus ensuring consistent results


I've said in another post that it's a crappy example. A little bit more on
that.

This model doesn't reserve seats. It reserves pieces of a flight. You said
that the flight has 50 pieces and wrote that down in application code.
A proper design would be to have a table flight_seats, which has columns
flight_number, date, passenger_id. Then you can book not a piece of a
flight, but a seat. And there is no need for serialization.

You can use real seat numbers here, or you can map abstract seats to real
seats later, at check-in. This model also makes plane replacement easy.
Just add or remove seats from a flight.

Putting logic and constraints into code instead of a database is a sign
of a bad design.
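
A minimal sketch of that schema; the column types and the booking statement
are assumptions, not from the post. Capacity is simply the number of seat
rows, and booking is a single-row update, so ordinary row locking is all the
concurrency control needed.

==
create table flight_seats (
  flight_number  int         not null,
  flight_date    date        not null,
  seat_no        varchar(4)  not null,
  passenger_id   int,                    -- null means the seat is still free
  primary key (flight_number, flight_date, seat_no)
);

-- Book one specific free seat; the row lock prevents double-booking it.
update flight_seats
   set passenger_id = :passenger_id
 where flight_number = 999
   and flight_date   = :flight_date
   and seat_no       = :chosen_seat
   and passenger_id is null;
-- If 0 rows were updated, the seat was just taken: pick another seat and retry.
==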

P.S. I've reproduced, to the best of my recollection, the 'overbooking'
example discussed a couple of years ago,
========
.
>And in Oracle ... I would like a demonstration of how would go about not
>having a point-in-time consistent result without intentionally
>corrupting a query result.

I am not sure what you are trying to say here. Please elaborate.

VC


I'm saying you can't, without intent, create a query result that is not
read consistent to a point-in-time.


Under Oracle, the above reservation scenario will lead to overbooked
flights. Figuring out why and how to fix the problem is left as an exercise
for the interested student.
As to mysql, the answer to your question is truly simple: MYSQL has (and has had for quite a while) an implementation of Oracle style 'read
consistency' with rollback segments and all.


Since what version?


Since Version 3.23 (March 2001).

Jul 20 '05 #79
VC

"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.com...
"VC" <bo*******@hotmail.com> wrote in message

news:<p_XKc.135317$XM6.125873@attbi_s53>...
Under DB2/MSSQL, a transaction will lock only a subset of rows (not the
whole table) for a given flight thus ensuring consistent results


I've said in another post that it's a crappy example. A little bit more on
that.


You are attacking a strawman -- the design is not what's being discussed
here, concurrency control/consistency is...

Here's another simple problem:

===
Let's assume we have two tables, P ( parent) and C (child):

create table P(pid int primary key, ...);
create table C(..., pid references P, ...);

We'd like to insert some child rows but, at the same time, avoid entering
data in vain if someone happens to delete a parent row during our data
entry.

In a locking scheduler, this sequence would run OK:

select count(*) into l_cnt from P where pid=1;
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

A variation of this scenario might be that, for whatever reason, you cannot
use referential integrity and cannot delete the parent row. You'd rather
mark the parent row inactive and prevent insertions if the parent is
inactive. Our transaction would look like this:

select count(*) into l_cnt from P where pid=1 and status='ACTIVE';
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

===

Needless to say, in the second case Oracle won't ensure the correct
outcome in any of its isolation levels if a concurrent transaction happens
to change the parent row status to 'inactive'. In the first case, a
non-serializable history will be avoided thanks to RI; however, the
entered data will be lost. The above example can be applied to any similar
master-detail model.

An Oracle solution to the problem would be to use the "for update" clause
instead of just select. This will result in *worse* concurrency under
Oracle than under a locking scheduler since all inserts will in fact be
*serial*.
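
A sketch of that workaround applied to the second, status-flag variant;
l_pid is an assumed variable and error handling is omitted.

==
-- Lock the parent row first; this blocks a concurrent "update P set status = 'INACTIVE'"
-- until we commit, and raises NO_DATA_FOUND if the parent is already inactive or gone.
select pid into l_pid from P where pid = 1 and status = 'ACTIVE' for update;
insert into C values(..., 1, ...);
commit;
-- The cost noted above: every insert for pid = 1 now queues behind this exclusive (TX)
-- row lock, whereas shared read locks would let concurrent inserts proceed together.
==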

VC
Jul 20 '05 #81

"VC" <bo*******@hotmail.com> schrieb im Newsbeitrag news:RfdLc.109372$WX.92600@attbi_s51...
Here's another textbook example for you:
==
There are two linked accounts (id=1 and id=2) in a bank. A transaction
might look as follows:

== withdraw from 1
select amount into x from accounts where id=1
select amount into y from accounts where id=2
if x+y >= withdrawal then update accounts set amount=amount-withdrawal
where id=1
commit
=====

Any commercial locking scheduler will handle the scenario correctly. Oracle
won't.

Sorry for butting in but I'm just trying to learn here, so what would go wrong?
And what about "select for update" in oracle? It's supposed to lock the rows it hits.

Lots of Greetings!
Volker
Jul 20 '05 #82
I took it that you were saying that Ingres becoming open source was good
news as it provided a free, major database, alternative to Oracle. That db
already exists in the form of mysql and postgres

--
Niall Litchfield
Oracle DBA
http://www.niall.litchfield.dial.pipex.com
"michael newport" <mi************@yahoo.com> wrote in message
news:63*************************@posting.google.com...
I have no idea what CA are saying; corporate types move in anally retentive ways.
But we should all welcome competition.

regards
Mike

Jul 20 '05 #84
michael newport wrote:
you can replace Oracle with Ingres, this is what I mean;

and very soon you can use Ingres for free.

http://news.com.com/CA+open-sources+...l?tag=nefd.top


And lose a lot of critical functionality.

Daniel Morgan

Jul 20 '05 #85
VC wrote:
As I've already mentioned several times, no one disputes the fact that in
certain cases Oracle provides higher concurrency due to MVCC.
Agreed.
I also said that there are several solutions to the reporting problem in
locking databases, such as a replicated or stand-by database.
Many believe this but it is patently false. What do you do about
transactions that take place while you are replicating the database?

You either lock the table while replicating or the replication is also
not consistent to a point-in-time. You can not have it both ways.

There is another solution, namely triple mirroring of the OLTP database. SAN vendor
hardware can "split off" the third mirrored drive set, creating almost
instantaneously a clone of the original database (e.g. EMC BCV) at a given
point in time. It is interesting to note that the same technique is
widely used for Oracle databases as well in order to off-load the main
instance. The clone is used both for reporting and backups.


Almost instantly means ALMOST consistent to a point-in-time. But now you
are talking about data consistency by hardware intervention which is
just as valid if we were talking about 3x5 cards and a photocopier.
Serialize to your heart's content ... you aren't going to do it without
a full table lock ...


As I've demonstrated, only a subset of rows involved in the transaction has
to be locked which naturally can be the whole table.


Patently false. You can not lock rows that have not yet been inserted
while the transaction is taking place. And you have no means of keeping
them out of your result set except a full table lock.

Daniel Morgan

Jul 20 '05 #86
Volker Hetzer wrote:
"VC" <bo*******@hotmail.com> schrieb im Newsbeitrag news:RfdLc.109372$WX.92600@attbi_s51...
Here's another textbook example for you:
==
There are two linked accounts (id=1 and id=2) in a bank. A transaction
might look as follows:

== withdraw from 1
select amount into x from accounts where id=1
select amount into y from accounts where id=2
if x+y >= withdrawal then update accounts set amount=amount-withdrawal
where id=1
commit
=====

Any commercial locking scheduler will handle the scenario correctly. Oracle
won't.


Sorry for butting in but I'm just trying to learn here, so what would go wrong?
And what about "select for update" in oracle? It's supposed to lock the rows it hits.

Lots of Greetings!
Volker


You are correct and VC, as I earlier stated to him, is demonstrating far
less than a robust understanding of transaction processing: Lots of
smoke but no fire.

Daniel Morgan

Jul 20 '05 #87
"VC" <bo*******@hotmail.com> wrote in message news:<p_XKc.135317$XM6.125873@attbi_s53>...
"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090240148.855788@yasure...
VC wrote:
"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090216505.569999@yasure...

>I think your understanding of Oracle is rather slight based on what I
>assume you mean.
Whilst the conclusion may very well be true, your argument is unsound.
Please substitute a real premise for the vague 'based on what I assume you mean'.
Ok bluntly ... I don't think you know what you are talking about. A lot
less vague but I'd have preferred not to say it in those words.


An example of my 'not knowing what I am talking about' would be nice.
Always eager to learn from my betters.
>How in mysql can you obtain a result consistent to point-in-time without
>table locking all resources.

Please go and re-read my original response to Alex Filonov who claimed that Oracle consistency is somehow more correct than that of DB2/MSSQL. We've discussed DB2/MSSQL vs. Oracle consistency issues, not mysql concurrency
control.


Same statement is true for both DB2 and MS SQL. You can not create a
point-in-time consistent read without full table locks on all resources.


I'll kill two birds with one stone here if you don't mind. Firstly, I'll
describe a scenario where DB2/MSSQL would *not* need to lock an entire table
in order to yield a consistent result in SERIALIZABLE IL. Secondly, I'll
claim that Oracle won't be able to produce consistent data in Oracle
SERIALIZABLE under the same circumstances.

========
Imagine a simple flight reservation system represented by a single table
containing flight numbers and passenger information:

create table PASSENGERS(FLIGHT_NUMBER ..., INFO ...);
create index P_Idx on PASSENGERS(FLIGHT_NUMBER);

Let's say that a flight can accommodate a maximum of 50 people. Our transaction
in pseudo-code might look like this:

set transaction isolation level serializable
...
select count(*) into l_cnt from PASSENGERS where FLIGHT_NUMBER=999; -- check the number of sold tickets
if l_cnt < 50 insert into PASSENGERS values(999, <info>...); -- if there is a vacancy, insert a row
commit;

Under DB2/MSSQL, a transaction will lock only a subset of rows (not the
whole table) for a given flight thus ensuring consistent results


I've said in another post that it's a crappy example. A little bit more on
that.

This model doesn't reserve seats; it reserves pieces of a flight. You said
that the flight has 50 pieces and wrote that down in application code.
Proper design would be to have a table flight_seats, which has columns
flight_number, date, passenger_id. Then you can book not a piece of a
flight, but a seat. And there is no need for serialization.

You can use real seat numbers here, or you can map abstract seats to real
seats later, at check-in. This model also makes plane replacement easy.
Just add or remove seats from a flight.

Putting logic and constraints into code instead of a database is a sign
of a bad design.
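A sketch of that seat-per-row design; the table and column names below are illustrative, and the ROWNUM trick for grabbing one free seat is Oracle syntax of my own choosing, not something prescribed in the thread:

==
create table FLIGHT_SEATS (
  flight_number  int  not null,
  flight_date    date not null,
  seat_no        int  not null,
  passenger_id   int,                -- null means the seat is still free
  primary key (flight_number, flight_date, seat_no)
);

-- Booking becomes a single-row update: ordinary row locking is enough,
-- and two passengers can never end up holding the same seat.
update FLIGHT_SEATS
   set passenger_id = :passenger_id
 where flight_number = 999
   and flight_date   = :flight_date
   and passenger_id is null
   and rownum = 1;
-- 0 rows updated means the flight is full (or retry); 1 row means the booking succeeded.
==

Changing the plane is then just inserting or deleting seat rows for the flight, as described above.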

P.S. I've reproduced, to the best of my recollection, the 'overbooking'
example discussed a couple of years ago.
========
>And in Oracle ... I would like a demonstration of how you would go about not
>having a point-in-time consistent result without intentionally
>corrupting a query result.

I am not sure what you are trying to say here. Please elaborate.

VC


I'm saying you can't, without intent, create a query result that is not
read consistent to a point-in-time.


Under Oracle, the above reservation scenario will lead to overbooked
flights. Figuring out why and how to fix the problem is left as an exercise
for the interested student.
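For readers skipping the exercise: under Oracle SERIALIZABLE two concurrent booking transactions read the same snapshot, both see, say, 49 tickets sold, and their inserts do not conflict with each other, so both commit and the flight ends up overbooked. One common fix (my own sketch, not something spelled out in the thread) is to serialize bookings per flight on a parent row, here a hypothetical FLIGHTS table, running at Oracle's default read committed level:

==
declare
  l_dummy FLIGHTS.flight_number%type;
  l_cnt   pls_integer;
begin
  -- All bookings for flight 999 queue on this parent row lock.
  select flight_number into l_dummy
    from FLIGHTS
   where flight_number = 999
     for update;

  -- The count is now stable: any competing booking is waiting on the lock above.
  select count(*) into l_cnt from PASSENGERS where flight_number = 999;

  if l_cnt < 50 then
    insert into PASSENGERS (flight_number, info) values (999, 'new passenger');  -- placeholder values
  end if;

  commit;
end;
==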
As to mysql, the answer to your question is truly simple: MYSQL has (and has had for quite a while) an implementation of Oracle style 'read
consistency' with rollback segments and all.


Since what version?


Since Version 3.23 (March 2001).
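A minimal sketch of that InnoDB-style consistent read, assuming the PASSENGERS table from the earlier example lives in an InnoDB database; exact syntax and defaults vary by version, so treat this as illustrative rather than a claim about 3.23 specifically:

==
-- session A
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
SELECT COUNT(*) FROM PASSENGERS WHERE FLIGHT_NUMBER = 999;  -- the first read fixes the snapshot

-- session B
INSERT INTO PASSENGERS (FLIGHT_NUMBER, INFO) VALUES (999, 'late booking');
COMMIT;

-- session A, still inside its transaction
SELECT COUNT(*) FROM PASSENGERS WHERE FLIGHT_NUMBER = 999;  -- same count as before: the read is consistent
COMMIT;
==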

Jul 20 '05 #88
VC wrote:
"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.com...
"VC" <bo*******@hotmail.com> wrote in message


news:<p_XKc.135317$XM6.125873@attbi_s53>...
Under DB2/MSSQL, a transaction will lock only a subset of rows (not the
whole table) for a given flight thus ensuring consistent results


I've said in another post that it's a crappy example. A little bit more on
that.

You are attacking a strawman -- the design is not what's being discussed
here, concurrency control/consistency is...
VC


The problem here VC, as I see it, is that you need to go take a decent
class on database architecture. You meet with ease the criteria to wear
the button that says "A little knowledge is a dangerous thing."

Daniel Morgan

Jul 20 '05 #89
Dan

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090378179.508307@yasure...
VC wrote:
As I've already mentioned several times, no one disputes the fact that in
certain cases Oracle provides higher concurrency due to MVCC.
Agreed.
I also said that there are several solutions to the reporting problem in
locking databases, such as a replicated or stand-by database.


Many believe this but it is patently false. What do you do about
transactions that take place while you are replicating the database?

You either lock the table while replicating or the replication is also
not consistent to a point-in-time. You can not have it both ways.

There is another solution, namely triple mirroring of the OLTP database. SAN vendor
hardware can "split off" the third mirrored drive set, creating almost instantaneously
a clone of the original database (e.g. EMC BCV) at a given point in time. It is
interesting to note that the same technique is widely used for Oracle databases as
well in order to off-load the main instance. The clone is used both for reporting and backups.


Almost instantly means ALMOST consistent to a point-in-time. But now you
are talking about data consistency by hardware intervention which is
just as valid if we were talking about 3x5 cards and a photocopier.
Serialize to your heart's content ... you aren't going to do it without
a full table lock ...

As I've demonstrated, only a subset of rows involved in the transaction has to be locked which naturally can be the whole table.


Patently false. You can not lock rows that have not yet been inserted
while the transaction is taking place. And you have no means of keeping
them out of your result set except a full table lock.


Absolutely not true.

There is a critical difference between the notion of 'serialized' and the
term 'serializable'. Perhaps you could look it up? If not, Phil Bernstein,
the guy who literally wrote the book on the subject, is a real professor at
the University of Washington. You might be able to catch a class...

Once you understand the difference in terminology, then perhaps you'll
understand why you don't need a full table lock to ensure a serializable
schedule even when a concurrent transaction inserts a row (again! in
contrast to serialized).
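To make the distinction concrete, here is an interleaved (that is, not serial) schedule that is nevertheless serializable; the two sessions touch disjoint rows of the accounts table used earlier in the thread, so the outcome is the same as running them one after the other:

==
-- Session A:
update accounts set amount = amount - 10 where id = 1;

-- Session B, before A commits:
update accounts set amount = amount + 10 where id = 2;

-- Session A:
commit;

-- Session B:
commit;

-- The interleaving is equivalent to A-then-B (or B-then-A),
-- so the schedule is serializable even though execution was not serial.
==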
Daniel Morgan

- Dan
Jul 20 '05 #90
VC

"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.com...
"VC" <bo*******@hotmail.com> wrote in message

news:<p_XKc.135317$XM6.125873@attbi_s53>...
Under DB2/MSSQL, a transaction will lock only a subset of rows (not the
whole table) for a given flight thus ensuring consistent results


I've said in another post that it's a crappy example. A little bit more on
that.


You are attacking a strawman -- the design is not what's being discussed
here, concurrency control/consistency is...

Here's another simple problem:

===
Let's assume we have two tables, P (parent) and C (child):

create table P(pid int primary key, ...);
create table C(..., pid references P, ...);

We'd like to insert some child rows but, at the same time, avoid entering
data in vain if someone happens to delete a parent row during our data
entry.

In a locking scheduler, this sequence would run OK:

select count(*) into l_cnt from P where pid=1;
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

A variation of this scenario might be that, for whatever reason, you cannot
use referential integrity and cannot delete the parent row. You'd rather
mark the parent row inactive and prevent insertions if the parent is
inactive. Our transaction would look like this:

select count(*) into l_cnt from P where pid=1 and status='ACTIVE';
if l_cnt > 0 insert into C values(..., 1, ...);
commit;

===

Needless to say, in the second case Oracle won't ensure the correct
outcome in any of its isolation levels if a concurrent transaction happens
to change the parent row status to 'inactive'. In the first case, a
non-serializable history will be avoided thanks to RI; however, the
entered data will be lost. The above example can be applied to any similar
master-detail model.

An Oracle solution to the problem would be to use the "for update" clause
instead of just select. This will result in *worse* concurrency under
Oracle than under a locking scheduler since all inserts will in fact be
*serial*.

VC
Jul 20 '05 #91
Dan

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090378283.651396@yasure...
Volker Hetzer wrote:
"VC" <bo*******@hotmail.com> schrieb im Newsbeitrag news:RfdLc.109372$WX.92600@attbi_s51...
Here's another textbook example for you:
==
There are two linked accounts (id=1 and id=2) in a bank. A transaction
might look as follows:

== withdraw from 1
select amount into x from accounts where id=1
select amount into y from accounts where id=2
if x+y >= withdrawal then update accounts set amount=amount-withdrawal
where id=1
commit
=====

Any commercial locking scheduler will handle the scenario correctly. Oracle won't.
Sorry for butting in but I'm just trying to learn here, so what would go wrong? And what about "select for update" in oracle? It's supposed to lock the rows it hits.
Lots of Greetings!
Volker


You are correct and VC, as I earlier stated to him, is demonstrating far
less than a robust understanding of transaction processing: Lots of
smoke but no fire.


VC already explained it. Serializable schedules are always preferable to
serialized schedules. The 'FOR UPDATE' clause is a hack that only results
in a *serialized* sequence of transactions.

Again, you need to understand the difference between serialized and
serializable, and you need to either take a basic database course or read a
textbook. All I see from you is a bunch of hand waving. If someone asks a
question in the middle of some private war of yours, you latch on like the
guy is backing you up. VC gave more than ample evidence and he was very
patient in explaining it to you, but you chose to ignore him. Why? Can you
address issues in a more apolitical manner? Do you always have to have the
last word, even if it makes you look like a moron?

The emperor really has no clothes.
Daniel Morgan

- Dan
Jul 20 '05 #92
Dan

"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090378462.109262@yasure...
VC wrote:
"Alex Filonov" <af******@yahoo.com> wrote in message
news:33**************************@posting.google.com...
"VC" <bo*******@hotmail.com> wrote in message
news:<p_XKc.135317$XM6.125873@attbi_s53>...
Under DB2/MSSQL, a transaction will lock only a subset of rows (not the whole table) for a given flight thus ensuring consistent results
I've said in another post that it's a crappy example. A little bit more on that.

You are attacking a strawman -- the design is not what's being discussed
here, concurrency control/consistency is...
VC


The problem here VC, as I see it, is that you need to go take a decent
class on database architecture. You meet with ease the criteria to wear
the button that says "A little knowledge is a dangerous thing."


Where was VC wrong? Where is the refutation of his argument? Name calling
only makes you look bad, and my sons did it when they were five years old
and were frustrated that they couldn't get their way. VC was absolutely correct and was perfectly behaved in his discussion with
you. I wish I could stomach the B.S. as well as he did. You should *not*
be teaching anything if you claim to know general principles of good
database design or the relational model and drag it into your discussions of
low level Oracle instance configuration settings.
Daniel Morgan


- Dan
Jul 20 '05 #93
Dan wrote:
"Daniel Morgan" <da******@x.washington.edu> wrote in message
news:1090378179.508307@yasure...
VC wrote:

As I've already mentioned several times, no one disputes the fact that in
certain cases Oracle provides higher concurrency due to MVCC.


Agreed.

I also said that there are several solutions to the reporting problem in
locking databases, such as a replicated or stand-by database.


Many believe this but it is patently false. What do you do about
transactions that take place while you are replicating the database?

You either lock the table while replicating or the replication is also
not consistent to a point-in-time. You can not have it both ways.

There is another solution, namely triple mirroring of the OLTP database. SAN vendor
hardware can "split off" the third mirrored drive set, creating almost instantaneously
a clone of the original database (e.g. EMC BCV) at a given point in time. It is
interesting to note that the same technique is widely used for Oracle databases as
well in order to off-load the main instance. The clone is used both for reporting and backups.


Almost instantly means ALMOST consistent to a point-in-time. But now you
are talking about data consistency by hardware intervention which is
just as valid if we were talking about 3x5 cards and a photocopier.

Serialize to your heart's content ... you aren't going to do it without
a full table lock ...
As I've demonstrated, only a subset of rows involved in the transaction has
to be locked which naturally can be the whole table.


Patently false. You can not lock rows that have not yet been inserted
while the transaction is taking place. And you have no means of keeping
them out of your result set except a full table lock.

Absolutely not true.

There is a critical difference between the notion of 'serialized' and the
term 'serializable'. Perhaps you could look it up? If not, Phil Bernstein,
the guy who literally wrote the book on the subject, is a real professor at
the University of Washington. You might be able to catch a class...

Once you understand the difference in terminology, then perhaps you'll
understand why you don't need a full table lock to ensure a serializable
schedule even when a concurrent transaction inserts a row (again! in
contrast to serialized).

Daniel Morgan


- Dan


No doubt you thought you understood what I intended. I did not once
mention serialized or serializable, did I?

Daniel Morgan

Jul 20 '05 #94
Dan wrote:
Where was VC wrong? - Dan


When you respond to what I wrote, rather than what you think I wrote, I
will gladly respond.

From where I am reading ... you came into a conversation in the middle
and are not tracking on the point I have been trying to make which
relates to point-in-time accuracy. I have NOT once made reference to
any type of cereal: Not Corn Flakes, Not Cheerios, Not Oat Meal.
Hope you now have a sense of how I feel reading what you wrote. Hope
you are thinking "I wrote serial not cereal."

Daniel Morgan

Jul 20 '05 #95
michael newport wrote:
you can replace Oracle with Ingres, this is what I mean;

and very soon you can use Ingres for free.

http://news.com.com/CA+open-sources+...l?tag=nefd.top


And lose a lot of critical functionality.

Daniel Morgan

Jul 20 '05 #96
Such as?
Jul 20 '05 #97
