Bytes IT Community

Storing/retrieving BLOBs with JSPs

I am in extremely urgent need (by tomorrow) of a way to store files in and retrieve files from an Oracle database using TopLink as an intermediary. I have the JSPs for it, and it works for small files, but larger ones like Word documents and Excel spreadsheets give an error saying that the data is too large for the field. Can anyone help with this? Our file object has a fileData field, an array of bytes, which is mapped in TopLink to the BLOB field of the database. As I said, it works for very small files like a small GIF image, but with larger ones I believe the error number is 17002, a database error. I'm sorry I don't have any more details, but I don't have access to the project at the moment. Any solutions/help/nudges in the right direction are greatly appreciated.

--
Ryan Stewart, A1C USAF
805 CSPTS/SCBE
Jul 19 '05 #1
12 Replies


Hello,

It seems like an error caused by the file size. With JDBC (the original Oracle JDBC driver) there is a maximum size you can insert; if you insert a file larger than that, it fails and generates an error. I don't know the exact figure at the moment, but I think the limit is something like 4M or 4K.

See Metalink for more information!
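If the failure is a bind-size restriction in the driver, the usual way around it is to stream the bytes instead of binding them as a single array. A minimal sketch, assuming a hypothetical files table with a file_data BLOB column (the names and the JDBC usage in the comment are illustrative, not from this thread):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BlobStreamSketch {
    // Copy a stream in 4K buffers -- the same chunked pattern the driver
    // uses when a value is bound with setBinaryStream instead of setBytes.
    static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }

    /*
     * Hypothetical JDBC usage (table and column names are made up):
     *
     *   PreparedStatement ps = conn.prepareStatement(
     *       "UPDATE files SET file_data = ? WHERE id = ?");
     *   ps.setBinaryStream(1, new FileInputStream(file), (int) file.length());
     *   ps.setInt(2, fileId);
     *   ps.executeUpdate();
     */
}
```

Whether TopLink exposes this directly depends on the mapping; dropping down to raw JDBC for the one insert is the blunt-instrument version.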

best regards
thorsten häs
On Thu, 11 Dec 2003 11:54:29 -0600, Ryan Stewart
<za****@no.texas.spam.net> wrote:
I am in extremely urgent need (by tomorrow) of a way to store files in and retrieve files from an Oracle database using TopLink as an intermediary. I have the JSPs for it, and it works for small files, but larger ones like Word documents and Excel spreadsheets give an error saying that the data is too large for the field. Can anyone help with this? Our file object has a fileData field which is an array of bytes which is mapped in TopLink to the BLOB field of the database. As I said, it works for very small files like a small GIF image, but with larger ones, I believe the error number is 17002, a database error. I'm sorry I don't have any more details, but I don't have access to the project at the moment. Any solutions/help/nudges in the right direction are greatly appreciated.


--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
Jul 19 '05 #2

"Ryan Stewart" <za****@no.texas.spam.net> wrote in message
news:TI********************@texas.net...
I am in extremely urgent need (by tomorrow)
As it happens, I don't care - you are not
paying me enough.

And in future, please do not cross-post
so widely. You are not that important,
trust me.
..of a way to store files in and
retrieve files from an Oracle database using TopLink as an intermediary. I
have the JSPs for it,
Good for you. Just how do you expect
others to debug them for you? Telepathy?
..and it works for small files, but larger ones like
Word documents and Excel Spreadsheets give an error saying that the data is too large for the field.
What field?
..Can anyone help with this?
The only thing this absolute server-side noob
can suggest to somebody that does not supply
code, for free, is..

Maybe you are 'getting' the data rather than
posting it. ..but..
..Our file object has a
fileData field which is an array of bytes which is mapped in TopLink to the BLOB field of the database. As I said, it works for very small files like a small GIF image, but with larger ones, I believe the error number is 17002, a database error.
That number sounds too large, I thought 'get's were
limited to much smaller sizes than that.
..I'm sorry I don't have any more details, but I don't have
access to the project at the moment.
See first comment.
..Any solutions/help/nudges in the right
direction are greatly appreciated


The ones I have to give, you've got..

--
Andrew Thompson
* http://www.PhySci.org/ PhySci software suite
* http://www.1point1C.org/ 1.1C - Superluminal!
* http://www.AThompson.info/andrew/ personal site
Jul 19 '05 #3

"Andrew Thompson" <an******@bigNOSPAMpond.com> wrote in message news:<wN******************@news-server.bigpond.net.au>...

[snip]
As it happens, I don't care - you are not
paying me enough.

And in future, please do not cross-post
so widely. You are not that important,
trust me.

[snip]

Good for you. Just how do you expect
others to debug them for you? Telepathy?


Geez, could you be any more of a jackass?
[snip]
..Any solutions/help/nudges in the right
direction are greatly appreciated


The ones I have to give, you've got..


After all that you can't even help.
And in future, please do not post if
you can't help. You are not that important,
trust me.
Jul 19 '05 #4

"Ryan Stewart" <za****@no.texas.spam.net> wrote in
news:TI********************@texas.net:
and it works for small files,
but larger ones ... give an
error saying that the data is too large for the field.
Any solutions/help/nudges in the right
direction are greatly appreciated


As a workaround, store big files as several slices of 4K (or whatever the limit is), perhaps in a new table with a slice number as part of the key, and reassemble them at retrieval time.
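The slicing workaround is plain byte-array bookkeeping. A sketch of the split and reassembly; the 4K slice size and the class and method names are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class BlobSlicer {
    static final int SLICE = 4096; // whatever the driver's limit turns out to be

    // Split a file's bytes into slices no larger than SLICE; the slice
    // index would become part of the key in the slice table.
    static List<byte[]> slice(byte[] data) {
        List<byte[]> slices = new ArrayList<byte[]>();
        for (int off = 0; off < data.length; off += SLICE) {
            int len = Math.min(SLICE, data.length - off);
            byte[] s = new byte[len];
            System.arraycopy(data, off, s, 0, len);
            slices.add(s);
        }
        return slices;
    }

    // Concatenate the slices, in key order, back into the original bytes.
    static byte[] reassemble(List<byte[]> slices) {
        int total = 0;
        for (byte[] s : slices) {
            total += s.length;
        }
        byte[] out = new byte[total];
        int off = 0;
        for (byte[] s : slices) {
            System.arraycopy(s, 0, out, off, s.length);
            off += s.length;
        }
        return out;
    }
}
```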
Jul 19 '05 #5


"moose" <sk********@excite.com> wrote in message
news:57*************************@posting.google.co m...
"Andrew Thompson" <an******@bigNOSPAMpond.com> wrote in message news:<wN******************@news-server.bigpond.net.au>...
[snip]
As it happens, I don't care - you are not
paying me enough.

And in future, please do not cross-post
so widely. You are not that important,
trust me.

[snip]

Good for you. Just how do you expect
others to debug them for you? Telepathy?


Geez, could you be any more of a jackass?
[snip]
..Any solutions/help/nudges in the right
direction are greatly appreciated


The ones I have to give, you've got..


After all that you can't even help.


Did you miss..
"Maybe you are 'getting' the data rather than
posting it. ..but.."
[ As it turns out, I was completely wrong. ]
And in future, please do not post if
you can't help. You are not that important,
trust me.


:)

So let's sum this up.

I offered the OP one (albeit wrong) suggestion
23 hrs prior to a deadline that lay (presumably)
within 24 hrs.

You offered the OP the above (which I did not
trim a character of), that is, nothing at all - some
120+ hours past the deadline.

That would seem to make you as useful as
an udder on a male moose, no?
Jul 19 '05 #6

Funny, your math doesn't add up.
I never criticized the OP or offered any help to the OP.
That was YOU, in your astounding arrogance.

I stand by my first statement regarding your behavior.
What a jackass!

"Andrew Thompson" <an******@bigNOSPAMpond.com> wrote in message news:<Zc******************@news-server.bigpond.net.au>...
So let's sum this up.

I offered the OP one (albeit wrong) suggestion
23 hrs prior to a deadline that lay (presumably)
within 24 hrs.

You offered the OP the above (which I did not
trim a character of), that is, nothing at all - some
120+ hours past the deadline.

That would seem to make you as useful as
an udder on a male moose, no?

Jul 19 '05 #7

This is cool stuff. Is there any way to implement a sort of network
file system on Oracle using BLOBs? The question is, will this kill
the Oracle server?
Would performance be better using file I/O, for example from
servlets?

I would like to try it but I hate to mess up our server for doing so.
Are there any benchmarks?
Thomas Schodt <news0310@xenoc.$DEMON.co.uk> wrote in message news:<Xn*******************@158.152.254.254>...
"Ryan Stewart" <za****@no.texas.spam.net> wrote in
news:TI********************@texas.net:
and it works for small files,
but larger ones ... give an
error saying that the data is too large for the field.
Any solutions/help/nudges in the right
direction are greatly appreciated


As a workaround store big files as several slices of 4k (or whatever the
limit is) (maybe in a new table - with a slice-# as part of the key) and
reassemble them at retrieval time.

Jul 19 '05 #8

bigbinc wrote:
This is cool stuff,
"Really idiotic" is the expression I'd use.
Is there any way to implement a sort of network
file system on Oracle using BLOBs? The question is, will this kill
the Oracle server?
If used even halfway intensively, yes.
Would performance be better using file I/O, for example from
servlets?


Yes. A LOT better. Easily 100 times better. It's exactly the thing that
modern file systems try so hard to prevent and that CS teaches you to avoid
like the plague: fragmenting big files into small chunks and scattering them
all over the place, so that the HD's latency completely dominates its transfer speed.

Jul 19 '05 #9

"bigbinc" <bi*****@hotmail.com> wrote in message
news:d1**************************@posting.google.c om...
*made bottom post*
Thomas Schodt <news0310@xenoc.$DEMON.co.uk> wrote in message

news:<Xn*******************@158.152.254.254>...
"Ryan Stewart" <za****@no.texas.spam.net> wrote in
news:TI********************@texas.net:
and it works for small files,
but larger ones ... give an
error saying that the data is too large for the field.
Any solutions/help/nudges in the right
direction are greatly appreciated


As a workaround store big files as several slices of 4k (or whatever the
limit is) (maybe in a new table - with a slice-# as part of the key) and
reassemble them at retrieval time.

This is cool stuff. Is there any way to implement a sort of network
file system on Oracle using BLOBs? The question is, will this kill
the Oracle server?
Would performance be better using file I/O, for example from
servlets?

I would like to try it but I hate to mess up our server for doing so.
Are there any benchmarks?


That's pretty much what I was/am doing: making a file-sharing system. We
managed to get around the 4k limit by bypassing certain things and manually
inserting the file data into the database.
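On the retrieval side, "manually" reading the data back amounts to draining the column's binary stream into a byte array. A sketch, again with purely illustrative table, column, and class names:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BlobReadSketch {
    // Drain an InputStream into a byte array, 4K at a time -- the
    // retrieval-side counterpart of streaming the data in.
    static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }

    /*
     * Hypothetical JDBC usage (names are made up):
     *
     *   ResultSet rs = stmt.executeQuery(
     *       "SELECT file_data FROM files WHERE id = 1");
     *   if (rs.next()) {
     *       byte[] fileData = readFully(rs.getBinaryStream("file_data"));
     *   }
     */
}
```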
Jul 19 '05 #10

Michael Borgwardt <br****@brazils-animeland.de> wrote in message news:<bt************@ID-161931.news.uni-berlin.de>...
bigbinc wrote:
This is cool stuff,


"Really idiotic" is the expression I'd use.


You are kidding me, right? A filesystem is basically a database full
of file nodes.

If you are dealing with one OS and one machine, fine, standard file access
is great. If you are dealing with heterogeneous networks where NFS
is not available, then a database-backed filesystem may in fact be the only
way to go.

See Oracle Internet File System:

http://otn.oracle.com/documentation/ifs_arch.html

Besides, where are your data and performance stats? As you very
well know, calling an approach in the computer industry completely
idiotic without them is itself idiotic.
Jul 19 '05 #11

"bigbinc" <bi*****@hotmail.com> wrote in message
news:d1**************************@posting.google.c om...
Michael Borgwardt <br****@brazils-animeland.de> wrote in message

news:<bt************@ID-161931.news.uni-berlin.de>...
bigbinc wrote:
This is cool stuff,


"Really idiotic" is the expression I'd use.


You are kidding me, right? A filesystem is basically a database full
of file nodes.

If you are dealing with one OS and one machine, fine, standard file access
is great. If you are dealing with heterogeneous networks where NFS
is not available, then a database-backed filesystem may in fact be the only
way to go.

See Oracle Internet File System:

http://otn.oracle.com/documentation/ifs_arch.html

Besides, where are your data and performance stats? As you very
well know, calling an approach in the computer industry completely
idiotic without them is itself idiotic.


I think what he was referring to was the idea of breaking the files into
tiny chunks. Essentially, that would be purposely fragmenting the database.
And with a maximum fragment size of 4k, that would be really bad.
Jul 19 '05 #12

"Ryan Stewart" <za****@no.texas.spam.net> wrote in message news:<EK********************@texas.net>...
"bigbinc" <bi*****@hotmail.com> wrote in message
news:d1**************************@posting.google.c om...
Michael Borgwardt <br****@brazils-animeland.de> wrote in message

news:<bt************@ID-161931.news.uni-berlin.de>...
bigbinc wrote:

> This is cool stuff,

"Really idiotic" is the expression I'd use.


You are kidding me, right? A filesystem is basically a database full
of file nodes.

If you are dealing with one OS and one machine, fine, standard file access
is great. If you are dealing with heterogeneous networks where NFS
is not available, then a database-backed filesystem may in fact be the only
way to go.

See Oracle Internet File System:

http://otn.oracle.com/documentation/ifs_arch.html

Besides, where are your data and performance stats? As you very
well know, calling an approach in the computer industry completely
idiotic without them is itself idiotic.


I think what he was referring to was the idea of breaking the files into
tiny chunks. Essentially, that would be purposely fragmenting the database.
And with a maximum fragment size of 4k, that would be really bad.

of course, sorry.
Jul 19 '05 #13

This discussion thread is closed

Replies have been disabled for this discussion.