I am in extremely urgent need (by tomorrow) of a way to store files in and
retrieve files from an Oracle database using TopLink as an intermediary. I
have the JSPs for it, and it works for small files, but larger ones like
Word documents and Excel spreadsheets give an error saying that the data is
too large for the field. Can anyone help with this? Our file object has a
fileData field which is an array of bytes which is mapped in TopLink to the
BLOB field of the database. As I said, it works for very small files like a
small GIF image, but with larger ones, I believe the error number is 17002,
a database error. I'm sorry I don't have any more details, but I don't have
access to the project at the moment. Any solutions/help/nudges in the right
direction are greatly appreciated.
--
Ryan Stewart, A1C USAF
805 CSPTS/SCBE 12 5269
Hello,
It seems like an error caused by the file size. With JDBC (the original
Oracle JDBC driver) there is a maximum size for data inserted through JDBC;
if the file you insert is bigger than that, the insert fails and generates
an error. I don't know the exact limit at the moment, but I think it is
something like 4K or 4M.
See Metalink for more information!
Best regards,
Thorsten Häs
On Thu, 11 Dec 2003 11:54:29 -0600, Ryan Stewart
<za****@no.texas.spam.net> wrote:
[original post quoted in full; snipped]
--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
"Ryan Stewart" <za****@no.texas.spam.net> wrote in message
news:TI********************@texas.net...
..I am in extremely urgent need (by tomorrow)
As it happens, I don't care - you are not
paying me enough.
And in future, please do not cross-post
so widely. You are not that important,
trust me.
..of a way to store files in and retrieve files from an Oracle database using TopLink as an intermediary. I have the JSPs for it,
Good for you. Just how do you expect
others to debug them for you? Telepathy?
..and it works for small files, but larger ones like Word documents and Excel Spreadsheets give an error saying that the data
is too large for the field.
What field?
..Can anyone help with this?
The only thing this absolute server-side noob
can suggest to somebody that does not supply
code, for free, is..
Maybe you are 'getting' the data rather than
posting it. ..but..
..Our file object has a fileData field which is an array of bytes which is mapped in TopLink to
the BLOB field of the database. As I said, it works for very small files like
a small GIF image, but with larger ones, I believe the error number is
17002, a database error.
That number sounds too large, I thought 'get's were
limited to much smaller sizes than that.
..I'm sorry I don't have any more details, but I don't have access to the project at the moment.
See first comment.
..Any solutions/help/nudges in the right direction are greatly appreciated
The ones I have to give, you've got..
--
Andrew Thompson
* http://www.PhySci.org/ PhySci software suite
* http://www.1point1C.org/ 1.1C - Superluminal!
* http://www.AThompson.info/andrew/ personal site
"Andrew Thompson" <an******@bigNOSPAMpond.com> wrote in message news:<wN******************@news-server.bigpond.net.au>...
[snip] As it happens, I don't care - you are not paying me enough.
And in future, please do not cross-post so widely. You are not that important, trust me.
[snip] Good for you. Just how do you expect others to debug them for you? Telepathy?
Geez, could you be any more of a jackass?
[snip] ..Any solutions/help/nudges in the right direction are greatly appreciated
The ones I have to give, you've got..
After all that you can't even help.
And in future, please do not post if
you can't help. You are not that important,
trust me.
"Ryan Stewart" <za****@no.texas.spam.net> wrote in
news:TI********************@texas.net:
and it works for small files, but larger ones ... give an error saying that the data is too large for the field. Any solutions/help/nudges in the right direction are greatly appreciated
As a workaround, store big files as several slices of 4K (or whatever the
limit is), maybe in a new table with a slice number as part of the key, and
reassemble them at retrieval time.
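That slicing scheme can be sketched in plain Java. This is a minimal illustration only: the 4096-byte slice size is an assumption (the thread never pins down the actual driver limit), and the storage side (one row per slice keyed by file ID and slice number) is left to the mapping layer.

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

public class BlobSlicer {
    // Assumed slice size; verify the real limit against your driver.
    static final int SLICE_SIZE = 4096;

    // Split the file data into ordered slices; each slice would be stored
    // as one row keyed by (fileId, sliceNo).
    static List<byte[]> slice(byte[] data) {
        List<byte[]> slices = new ArrayList<>();
        for (int off = 0; off < data.length; off += SLICE_SIZE) {
            int len = Math.min(SLICE_SIZE, data.length - off);
            byte[] s = new byte[len];
            System.arraycopy(data, off, s, 0, len);
            slices.add(s);
        }
        return slices;
    }

    // Reassemble the slices, in slice-number order, at retrieval time.
    static byte[] reassemble(List<byte[]> slices) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] s : slices) {
            out.write(s, 0, s.length);
        }
        return out.toByteArray();
    }
}
```

Retrieval would read the rows ordered by slice number and feed them to `reassemble`.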
"moose" <sk********@excite.com> wrote in message
news:57*************************@posting.google.com...
"Andrew Thompson" <an******@bigNOSPAMpond.com> wrote in message
news:<wN******************@news-server.bigpond.net.au>... [snip]
As it happens, I don't care - you are not paying me enough.
And in future, please do not cross-post so widely. You are not that important, trust me.
[snip]
Good for you. Just how do you expect others to debug them for you? Telepathy?
Geez, could you be any more of a jackass?
[snip]
..Any solutions/help/nudges in the right direction are greatly appreciated
The ones I have to give, you've got..
After all that you can't even help.
Did you miss..
"Maybe you are 'getting' the data rather than
posting it. ..but.."
[ As it turns out, I was completely wrong. ]
And in future, please do not post if you can't help. You are not that important, trust me.
:)
So let's sum this up.
I offered the OP 1 (albeit wrong) suggestion
23 hrs prior to a deadline that lay (presumably)
within 24 hrs.
You offered the OP the above (which I did not
trim a character of), that is, nothing at all - some
120+ hours past the deadline.
That would seem to make you as useful as
an udder on a male moose, no?
Funny, your math doesn't add up.
I never criticized the OP or offered any help to the OP.
That was YOU, in your astounding arrogance.
I stand by my first statement regarding your behavior.
What a jackass!
"Andrew Thompson" <an******@bigNOSPAMpond.com> wrote in message news:<Zc******************@news-server.bigpond.net.au>...
So let's sum this up.
I offered the OP 1 (albeit wrong) suggestion 23 hrs prior to a deadline that lay (presumably) within 24 hrs.
You offered the OP the above (which I did not trim a character of), that is, nothing at all - some 120+ hours past the deadline.
That would seem to make you as useful as an udder on a male moose, no?
This is cool stuff. Is there any way to implement a sort of network
file system using Oracle BLOBs? The question is, will this kill
the Oracle server?
Will the performance be better using file I/O, for example from
servlets?
I would like to try it, but I hate to mess up our server by doing so.
Are there any benchmarks?
Thomas Schodt <news0310@xenoc.$DEMON.co.uk> wrote in message news:<Xn*******************@158.152.254.254>...
"Ryan Stewart" <za****@no.texas.spam.net> wrote in news:TI********************@texas.net:
and it works for small files, but larger ones ... give an error saying that the data is too large for the field. Any solutions/help/nudges in the right direction are greatly appreciated
As a workaround store big files as several slices of 4k (or whatever the limit is) (maybe in a new table - with a slice-# as part of the key) and reassemble them at retrieval time.
bigbinc wrote:
This is cool stuff,
"Really idiotic" is the expression I'd use.
Is there any way to implement a sort of network file system using Oracle BLOBs? The question is, will this kill the Oracle server?
If used even halfway intensively, yes.
Will the performance be better using file I/O, for example from servlets?
Yes. A LOT better. Easily 100 times better. It's exactly the thing that
modern file systems try so hard to prevent and CS teaches you to avoid like
the plague: fragment big files into small chunks and scatter them all over
the place so that the HD's latency completely dominates its transfer speed.
"bigbinc" <bi*****@hotmail.com> wrote in message
news:d1**************************@posting.google.com...
*made bottom post*
Thomas Schodt <news0310@xenoc.$DEMON.co.uk> wrote in message news:<Xn*******************@158.152.254.254>...
[earlier quoted context snipped]
Is there anyway to implement a sort of Network File System using Oracle using blobs. The question is, will this kill the oracle server? Will the performance being in better using File I/O for example using servlets?
I would like to try it but I hate to mess up our server for doing so. Are there any benchmarks?
That's pretty much what I was/am doing: making a file-sharing system. We
managed to get around the 4K limit by bypassing certain things and manually
inserting the file data into the database.
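The thread doesn't say exactly what "bypassing certain things" involved, but a common way around the old Oracle thin driver's small `setBytes()` limit was to bind the data as a stream instead. A hedged sketch, assuming a hypothetical `FILE_STORE` table with an `ID` column and a `FILE_DATA` BLOB column (the actual TopLink-mapped schema wasn't given):

```java
import java.io.ByteArrayInputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BlobStreamer {
    // Insert file data by streaming it with setBinaryStream() rather than
    // binding it with setBytes(), which old Oracle thin drivers capped at
    // a few kilobytes.
    static void store(Connection con, long id, byte[] fileData) throws SQLException {
        // FILE_STORE and its columns are assumed names for illustration.
        String sql = "INSERT INTO FILE_STORE (ID, FILE_DATA) VALUES (?, ?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, id);
            ps.setBinaryStream(2, new ByteArrayInputStream(fileData), fileData.length);
            ps.executeUpdate();
        }
    }
}
```

Whether this helps with TopLink specifically depends on whether you can intercept the insert; it may mean writing the BLOB outside the mapped `fileData` field.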
Michael Borgwardt <br****@brazils-animeland.de> wrote in message news:<bt************@ID-161931.news.uni-berlin.de>...
bigbinc wrote:
This is cool stuff,
"Really idiotic" is the expression I'd use.
You are kidding me, right? A filesystem is basically a database full
of file nodes.
If you are dealing with one OS on one machine, fine, standard file access
is great. If you are dealing with heterogeneous networks where NFS
is not available, then a database-backed filesystem may in fact be the only
way to go.
See Oracle Internet File System. http://otn.oracle.com/documentation/ifs_arch.html
Besides, where are your data and performance stats? As you very
well know, calling an approach in the computer industry completely
idiotic without them is itself idiotic.
"bigbinc" <bi*****@hotmail.com> wrote in message
news:d1**************************@posting.google.com...
Michael Borgwardt <br****@brazils-animeland.de> wrote in message
news:<bt************@ID-161931.news.uni-berlin.de>...
bigbinc wrote:
This is cool stuff,
"Really idiotic" is the expression I'd use.
You are kidding me right, A filesystem is basically a database full of file nodes.
If you are dealing in a one OS, one machine fine, standard file access is great. If you are dealing with heterogenous networks, where NFS is not available, then a Databased filesystem may in fact be the only way to go.
See oracle internet filesystem.
http://otn.oracle.com/documentation/ifs_arch.html
Plus the fact, where is your data and performance stats, as you very well know to say an approach in the computer industry is completely idiotic, is idiotic.
I think what he was referring to was the idea of breaking the files into
tiny chunks. Essentially, that would be purposely fragmenting the database.
And with a maximum fragment size of 4k, that would be really bad.
"Ryan Stewart" <za****@no.texas.spam.net> wrote in message news:<EK********************@texas.net>...
[earlier quoted context snipped]
I think what he was referring to was the idea of breaking the files into tiny chunks. Essentially, that would be purposely fragmenting the database. And with a maximum fragment size of 4k, that would be really bad.
Of course, sorry.