Large object insert/update and oid use

Hey folks. I'm new to the list, and not quite what you'd call a DB
guru, so please be patient with me. I'm afraid the lead-up here is a
bit verbose . . .

I am working on an application that does very high-volume DB
transactions - on the order of tens of millions per day . . .

Anyway, the current database - which will remain nameless, but begins
with O and rhymes with debacle (sorta) - has a problem with high-volume
work when it comes to binary large objects and rowid use (or OID use,
which I understand is the Postgres equivalent).

Here's the problem:

When a record containing a number of basic types (int, varchar(2048),
char(32), etc.) as well as a large object is updated with a LOB, we
used to use the rowid returned by the update of the other fields. That
rowid was then the condition used when selecting the locator for the
large object.

Unfortunately, the rowid returned is always the record's current
physical location, and if the data in question won't fit in that
location, the record is shifted to another rowid - but the application
gets no real notification of this. Later, when the object is inserted
using the stale rowid, it gets put in the wrong place. The result is
that sometimes a request for one object will yield the wrong object.

Of course, there are ways to simply update the whole record, object
and all, but the database version we are using has bugs that cause
the temp tablespace to fill up and not be reused when this method is
used. So we've had to rework some of this to use the key field as the
condition, and to select the locator after the initial creation of an
empty object. The point is that direct updates of large objects have
problems, as does the use of rowids under high-volume conditions.
Regardless, we've had to implement verification checks to ensure the
objects don't get swapped - this is done by prefixing the object with
a 32-character MD5 hash that is also stored in a separate field of
the same record.
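
(For what it's worth, the check amounts to roughly the following - a
quick sketch in Python with made-up helper names, not our actual code:)

    import hashlib

    def wrap_object(blob):
        # 128-bit MD5 rendered as 32 hex characters; the same digest also
        # goes into a separate char(32) column on the record.
        digest = hashlib.md5(blob).hexdigest()
        return digest, digest.encode("ascii") + blob

    def unwrap_object(stored_digest, payload):
        # Strip the 32-character prefix and compare both copies of the digest.
        prefix, blob = payload[:32].decode("ascii"), payload[32:]
        if prefix != stored_digest or hashlib.md5(blob).hexdigest() != stored_digest:
            raise ValueError("object/record mismatch - wrong LOB returned")
        return blob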

I've been looking at the Postgres docs, and I've found the lo_*()
routines, which appear to use OIDs to write to the object. Is there
any 'shifting' of records during insert because of space availability?
And, probably more important, is there a way to do a direct update of
a large object along with the other fields in a record - without
having to call the lo_creat() and lo_write() functions? I've done
some searching in the archives, but can't find anything that appears
to answer the question.

If I've been unclear on any of this, I'm sorry, feel free to prod for
more info.

Thanks in advance

Lou
--
Louis LeBlanc le*****@keyslapper.org
Fully Funded Hobbyist, KeySlapper Extrordinaire :)
http://www.keyslapper.org Ô¿Ô¬

Scientists are people who build the Brooklyn Bridge and then buy it.
-- William Buckley


Nov 22 '05 #1
Louis LeBlanc <db@keyslapper.org> writes:
> [ large objects don't work real well in That Other Database ]
Fascinating. I'd have thought they'd use a less flaky design.
> I've been looking at the Postgres docs, and I've found the lo_*()
> routines, which appear to use OIDs to write to the object. Is there
> any 'shifting' of records during insert because of space availability?
No, OIDs are stable identifiers. It sounds like their rowid equates
to what we call a TID or CTID --- that is, a tuple's current physical
location. CTID isn't fixed across updates in Postgres, but OID is.
> And, probably more important, is there a way to do a direct update of
> a large object along with the other fields in a record - without
> having to call the lo_creat() and lo_write() functions?


Well, I guess the question is how large are your BLOBs and what are you
doing with them exactly?

Postgres offers two different ways of dealing with large chunks of data.
One is just to store them as big text or bytea values (which data type
you use depends on whether the data is textual or not). This works well
up to maybe a few megabytes, but it gets unwieldy for larger values,
mainly because there's no easy way to store sections of a value. In
some cases you can use substr() to fetch sections of a large value,
but there's no equivalent means of writing just part of it. Also, if
the data isn't textual then you have to deal with messy quoting rules
when constructing SQL commands. (But as of 7.4 it's possible to
alleviate the quoting problem by using out-of-line parameters instead.)
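
As a rough sketch of that first approach - a bytea column written through
a parameterized statement, so the application never builds the quoted
value by hand (psycopg2 is used here purely for illustration; the table
and column names are invented):

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    with open("object.bin", "rb") as f:
        data = f.read()

    # The driver handles the escaping/parameter passing, so arbitrary bytes
    # never have to be quoted into the SQL text by the application.
    cur.execute("UPDATE documents SET body = %s WHERE doc_key = %s",
                (psycopg2.Binary(data), "some-key"))

    # Partial fetch of a large value, per the substr() note above.
    cur.execute("SELECT substr(body, 1, 8192) FROM documents WHERE doc_key = %s",
                ("some-key",))
    chunk = cur.fetchone()[0]
    conn.commit()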

The other way is to use lo_creat() and related functions. In this case
each large object has its own identity (an OID assigned by lo_creat) and
what you put in your table row is just a reference to the object. The
plus side is that you can read and write a large object in sections
(using lo_read/lo_write/lo_seek), the downside is that you have a more
complicated model of what the database structure is, and lots more
management headaches. For instance you need to explicitly delete a
large object when you don't need it any more --- deleting a table row
doesn't in itself make referenced large objects go away.
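
A rough sketch of that second model, using psycopg2's wrapper around the
lo_* calls (the table and column names are again invented):

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    # Create a new large object and stream it in sections.
    lob = conn.lobject(0, "wb")          # lo_creat() + lo_open() under the hood
    with open("big.iso", "rb") as f:
        while True:
            chunk = f.read(1 << 20)
            if not chunk:
                break
            lob.write(chunk)             # lo_write()
    lob.close()

    # The table row carries only the reference (an OID).
    cur.execute("UPDATE documents SET body_oid = %s WHERE doc_key = %s",
                (lob.oid, "some-key"))
    conn.commit()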

Personally I'd avoid the large object facility if there were any chance
of dealing with the data as ordinary wide values instead. It's just
easier.

regards, tom lane


Nov 22 '05 #2
Apologies if I missed something, but why are you using the rowid in Oracle
instead of your own primary key value (even if you are inserting tens of
millions of records a day)? A number(38) provides a range of magnitude
1E-130 .. 10E125.

Using a non-rowid PK value would mean that you were not at the mercy of
the database moving your data record elsewhere if it cannot be accommodated
in the original block following an update.

If you use a number(38) PK value instead of a row ID, you are *sure* that
your record is accessible regardless of updates.

Of course if you wanted to avoid building/maintaining the number(38) PK
index then that is a different matter...

John Sidney-Woollett

Nov 22 '05 #3
On 01/31/04 06:57 PM, John Sidney-Woollett sat at the `puter and typed:
> Apologies if I missed something, but why are you using the rowid in
> Oracle instead of your own primary key value (even if you are
> inserting tens of millions of records a day)? A number(38) provides
> a range of magnitude 1E-130 .. 10E125.
Ah, good question. Mainly because I didn't write the initial
implementation. As I understand it, my predecessor was under the
impression that rowid use was much faster than using the record's
existing key, which is a varchar(2048). Now that I've changed the
code to use the key rather than the rowid, it's not any slower
(faster, in fact, though that may be attributable to any number of
other things . . .). On top of that, the key is pretty much
guaranteed to be unique.
> Using a non-rowid PK value would mean that you were not at the mercy of
> the database moving your data record elsewhere if it cannot be accommodated
> in the original block following an update.
Exactly. Which is why we don't use rowids at all anymore.
> If you use a number(38) PK value instead of a row ID, you are *sure* that
> your record is accessible regardless of updates.
We thought of that; O is able to generate them through triggers, and
they should be unique enough, but it's probably not worth the effort
since the table's PK isn't any slower.
> Of course if you wanted to avoid building/maintaining the number(38) PK
> index then that is a different matter...
That isn't so much the problem as trying to stop the swapped LOBs.
Ideally, the LOB would be included in the update when all the data is
collected. Here's how it goes:

Get a key (varchar(2048)) and create a placeholder record to prevent
other threads wasting effort on it (this is autocommitted).
Collect all the data for that key, including the object for the LOB.
Open an atomic transaction
Update the record with an empty LOB and the other data
Select the LOB locator using the pk
Write the object to the LOB locator
Close the atomic transaction, which commits.

The first step of this three-step transaction used to return the
rowid, but from time to time it would also result in a shift of the
record to another rowid without notifying the app. The second step
would then use that rowid rather than the PK to write the object.

Ideally, the three-step atomic transaction would be reduced to a
single-step transaction. This is possible in O, but 8.1.7 has a bug
that causes temp tablespace to be eaten up and not freed.

That's what I'm trying to do in Postgres.
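
For what it's worth, here's a rough sketch of how I picture that sequence
collapsing in Postgres, keyed on the PK the whole way (psycopg2 purely for
illustration; the table and column names are placeholders, not our schema):

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    def store(key, status, blob):
        # psycopg2 opens the transaction implicitly; everything below commits
        # (or rolls back) as one unit - row update and object write together.
        lob = conn.lobject(0, "wb")      # lo_creat()
        lob.write(blob)                  # lo_write(); could loop in chunks
        lob.close()
        cur.execute("UPDATE documents SET status = %s, body_oid = %s"
                    " WHERE doc_key = %s",
                    (status, lob.oid, key))
        conn.commit()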

So, can it be done?

Thanks

Lou


--
Louis LeBlanc le*****@keyslapper.org
Fully Funded Hobbyist, KeySlapper Extrordinaire :)
http://www.keyslapper.org Ô¿Ô¬

Federal grants are offered for... research into the recreation
potential of interplanetary space travel for the culturally disadvantaged.


Nov 22 '05 #4
On 01/31/04 02:00 PM, Tom Lane sat at the `puter and typed:
> Louis LeBlanc <db@keyslapper.org> writes:
>> [ large objects don't work real well in That Other Database ]
> Fascinating. I'd have thought they'd use a less flaky design.


No doubt. I haven't had the chance to look at 9i or 10g, but 8.1.7 is
pretty bogus in that respect.
>> I've been looking at the Postgres docs, and I've found the lo_*()
>> routines, which appear to use OIDs to write to the object. Is there
>> any 'shifting' of records during insert because of space availability?


> No, OIDs are stable identifiers. It sounds like their rowid equates
> to what we call a TID or CTID --- that is, a tuple's current physical
> location. CTID isn't fixed across updates in Postgres, but OID is.


Nice. Good to know.
>> And, probably more important, is there a way to do a direct update of
>> a large object along with the other fields in a record - without
>> having to call the lo_creat() and lo_write() functions?


> Well, I guess the question is how large are your BLOBs and what are you
> doing with them exactly?


Well, we have a couple that are in the 650M range (CD ISOs), but the
average size is around 4K. They are retrieved from the database via a
server process, and inserted via a client process. Can't go into too
much more, but basically, they get stored in the core and kept while
they are useful (rollbacks for the table in question are unnecessary,
and O won't let you turn them off either - can you turn them off in
Postgres?).
> Postgres offers two different ways of dealing with large chunks of data.
> One is just to store them as big text or bytea values (which data type
> you use depends on whether the data is textual or not). This works well
> up to maybe a few megabytes, but it gets unwieldy for larger values,
> mainly because there's no easy way to store sections of a value. In
> some cases you can use substr() to fetch sections of a large value,
> but there's no equivalent means of writing just part of it. Also, if
> the data isn't textual then you have to deal with messy quoting rules
> when constructing SQL commands. (But as of 7.4 it's possible to
> alleviate the quoting problem by using out-of-line parameters instead.)
Sometimes the data is textual, but sometimes it isn't. It can be an
ISO image, a graphic image file, an executable - literally anything.
Any datatype that requires escaping certain characters would be
impractical because of the potential size variation. Fetching part or
all of the object is important so that large objects can be retrieved
in parts, the way a resumed FTP transfer would. Also, the objects are
usually written piecemeal when they exceed the 10M mark. I suppose
this could be done with an import though . . .
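
Something like the following is what I have in mind for the piecemeal
access - a sketch using the lo_* routines through psycopg2's wrapper,
with purely illustrative names:

    def read_section(conn, oid, offset, length):
        # Like restarting an FTP transfer partway through the file.
        lob = conn.lobject(oid, "rb")    # lo_open() for reading
        lob.seek(offset)                 # lo_lseek()
        data = lob.read(length)          # lo_read()
        lob.close()
        return data

    def append_section(conn, oid, chunk):
        # Add another piece to an object that is being written in stages.
        lob = conn.lobject(oid, "wb")
        lob.seek(0, 2)                   # whence=2: seek to the end of the object
        lob.write(chunk)
        lob.close()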
> The other way is to use lo_creat() and related functions. In this case
> each large object has its own identity (an OID assigned by lo_creat) and
> what you put in your table row is just a reference to the object. The
> plus side is that you can read and write a large object in sections
> (using lo_read/lo_write/lo_seek), the downside is that you have a more
> complicated model of what the database structure is, and lots more
> management headaches. For instance you need to explicitly delete a
> large object when you don't need it any more --- deleting a table row
> doesn't in itself make referenced large objects go away.
This still sounds like our current model except for the fact that the
OID has to be explicitly deleted. I assume a trigger could be set up
to perform this on the fly though?
> Personally I'd avoid the large object facility if there were any chance
> of dealing with the data as ordinary wide values instead. It's just
> easier.


Agreed, but probably not feasible when you consider that some of these
objects could eventually reach the gigabyte threshold.

Thanks for the feedback.
Lou
--
Louis LeBlanc le*****@keyslapper.org
Fully Funded Hobbyist, KeySlapper Extrordinaire :)
http://www.keyslapper.org Ô¿Ô¬

Water, taken in moderation cannot hurt anybody.
-- Mark Twain


Nov 22 '05 #5
Louis LeBlanc <db@keyslapper.org> writes:
>> For instance you need to explicitly delete a
>> large object when you don't need it any more --- deleting a table row
>> doesn't in itself make referenced large objects go away.
> This still sounds like our current model except for the fact that the
> OID has to be explicitly deleted. I assume a trigger could be set up
> to perform this on the fly though?


Yeah, as long as you keep things simple (no multiple references to
BLOBs) you can just add an ON DELETE trigger to handle that.
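
Roughly along these lines - a sketch only, using the same made-up table
and column names as the snippets above, with the DDL issued through
psycopg2 to keep the examples in one language:

    # One-time setup: deleting a row unlinks the referenced large object.
    cur.execute("""
        CREATE FUNCTION unlink_body() RETURNS trigger AS '
        BEGIN
            PERFORM lo_unlink(OLD.body_oid);
            RETURN OLD;
        END;
        ' LANGUAGE plpgsql;

        CREATE TRIGGER documents_unlink_body
            BEFORE DELETE ON documents
            FOR EACH ROW EXECUTE PROCEDURE unlink_body();
    """)
    conn.commit()

An UPDATE that replaces the OID would need similar treatment, but for
plain deletes this covers it.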

Given that this model is close to what you're already doing, large
objects probably are the way to go. Good luck with it.

regards, tom lane


Nov 22 '05 #6
