I need to insert large BLOBs into a database. With the 1MB packet
limit, sending larger amounts of data would be difficult, so I had a neat
idea: I would do an initial insert of an empty record, get the
auto_increment ID from the response, and then loop through, appending data
to the record.
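The flow above can be sketched as a generator that produces the statements
the loop would send for one blob. This is only an illustration of what I am
doing: the table and column names match my schema below, but the chunk size
is a parameter, and the data passed in is assumed to be already escaped
(escaping is described further down), so it is left out here.

```python
def chunked_append_statements(data_id, data, chunk_size=400 * 1024):
    """Yield the UPDATE statements sent for one blob.

    `data` is assumed to be already escaped (null characters,
    backslashes, quotes); the initial empty-record INSERT and the
    read-back of the auto_increment ID are done before this loop.
    """
    for offset in range(0, len(data), chunk_size):
        # Decode latin-1 so every byte maps 1:1 to a character.
        chunk = data[offset:offset + chunk_size].decode("latin-1")
        yield ("UPDATE BinaryTable SET BinaryData="
               f"CONCAT(BinaryData, '{chunk}') WHERE DataID = {data_id}")
```

For example, an 800k blob with the default 400k chunk size yields two
UPDATE statements against the same DataID.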
My table is simple. One unsigned int auto_increment field (DataID), and
one long blob field (BinaryData).
When I loop through the data to send, I run:
UPDATE BinaryTable SET BinaryData=CONCAT(BinaryData, 'My binary data
here') WHERE DataID = 35
In the binary data I insert, I escape null characters, backslashes, and
single and double quotes. The data seems to insert fine.
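The escaping I apply amounts to something like the following. This is a
hypothetical helper, not my actual client code; the real client may cover
more characters (e.g. whatever mysql_real_escape_string handles), but it
shows the four cases I mentioned.

```python
def escape_binary(data: bytes) -> str:
    """Backslash-escape NUL, backslash, single quote, double quote.

    A hypothetical stand-in for the client-side escaping described
    above, not a full mysql_real_escape_string replacement.
    """
    text = data.decode("latin-1")      # keep every byte 1:1
    text = text.replace("\\", "\\\\")  # backslashes first, so we
                                       # don't re-escape our output
    text = text.replace("\x00", "\\0")
    text = text.replace("'", "\\'")
    text = text.replace('"', '\\"')
    return text
```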
The problem is that as I increase the amount of data in the field, CONCAT
seems to drop all but the last 416k of the data. Thus if I loop through
adding 400k blocks at a time (which I do), I am left with at most 800k of
data in the blob field.
Using 4k blocks I end up with 419k of data in the field when all is said
and done.
Please let me know if/when this will be fixed, whether there is a
workaround that might be used, or whether a better way to insert BLOB
data is known.
- Garrett Kajmowicz
gk******@tbaytel.net