Bytes IT Community

mySQL import text file too big?

Our production database in an exported text file runs about 60 MB.
Compressed that's about 9 MB.

I'm trying to import the export into another machine running FC3 and
mySQL 11.18, and it appears as though the file may be too big!
When I try to do it via command line:

mysql -u root --host=localhost printing < ./printing.txt

It eventually errors out with a "syntax error on line X" and only about
half the tables get imported, and the phpMyAdmin GUI says it has
a 2MB file limit on importing.

This concerns me. If something were to happen to our production
database, how am I going to restore it?

I tried to do a search on mysql.com, but that's a less than helpful
site for support. A newsgroup search on the topic didn't come up with
much, so maybe I'm just bad at using keywords.

Any pointers or suggestions on this issue?
Thanks!
Liam

Jul 23 '05 #1
4 Replies


<ne**@celticbear.com> wrote in message
news:11**********************@z14g2000cwz.googlegroups.com...
Our production database in an exported text file runs about 60 MB.
Compressed that's about 9 MB.

I'm trying to import the export into another machine running FC3 and
mySQL 11.18, and it appears as though the file may be too big!
<snip> It eventually errors out with a "syntax error on line X" ... <snip> Any pointers or suggestions on this issue?


Have you tried manually isolating "line X", putting it into a *small* text
file all by itself, and then trying to import it?
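For what it's worth, sed makes that isolation a one-liner (the line number and file name below are stand-ins for the real ones):

```shell
# Stand-ins: use the line number from the actual error message and
# your real dump file.
X=3
DUMP=printing.txt

# Demo file so the command below is runnable as-is.
printf 'line1\nline2\nline3\nline4\n' > "$DUMP"

# Print only line $X and quit, so sed doesn't scan the rest of a 60MB file.
sed -n "${X}{p;q;}" "$DUMP" > line_x.txt
cat line_x.txt    # -> line3
```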

Perhaps what you are dealing with is a "syntax error at line X"!

I know it sounds crazy but what the heck!
Thomas Bartkus
Jul 23 '05 #2

This problem appears to be more complex.

The export I do of the database is in a cron job I wrote, which includes
this line:

mysqldump --opt --databases printing > /var/backup/dbtemp/printing.txt

This creates a file about 60MB in size.

However, I just did an export in phpMyAdmin of that database, and it
created a 250MB file!

I scanned through the 60MB file, and while I have no way of knowing
about the individual items, I do find commands to drop and create all
the tables the database contains, followed by the commands to insert
each record for each table.

Yet when I try to import that file, I only get a few of the tables
created.

When I tried importing the 250MB file phpMyAdmin created, it
successfully created all the tables, and as far as I can tell, all the
records. So, it's evidently not a size issue at all.

I looked back at the automated export which I have set up to create a
database export every 30 minutes, and looking at the file sizes,
they're all 60MB, growing gradually smaller as you go back in time. So
it always appeared to me that it was working as they grew in size
gradually as more records entered the database. But seeing how it's
nearly a quarter of the size of the phpMyAdmin export, and failing on the
import, there's something very wrong with it.

But this is the export script. I can't see anything wrong with it:

#! /bin/bash
mysqldump --opt --databases offers > /var/backup/dbtemp/offers.txt
mysqldump --opt --databases priceretail > /var/backup/dbtemp/priceretail.txt
mysqldump --opt --databases pricewholesale > /var/backup/dbtemp/pricewholesale.txt
mysqldump --opt --databases printing > /var/backup/dbtemp/printing.txt
mysqldump --opt --databases db_users > /var/backup/dbtemp/db_users.txt
mysqldump --opt --databases test > /var/backup/dbtemp/test.txt
DOY=`date +%Y%m%d` # DATE YYYYMMDD
TIME=`date +%H%M` # TIME HHMM
/bin/tar -czf /var/backup/db/db-$DOY-$TIME.tgz /var/backup/dbtemp/*
rm -f /var/backup/dbtemp/*
scp -B -S ssh -i /home/backup/.ssh/id_dsa /var/backup/db/db-$DOY-$TIME.tgz backup@(IP ADDRESS OF REMOTE PC):/home/backup/db/

Any suggestions?
Thanks!
Liam

Jul 23 '05 #3

<ne**@celticbear.com> wrote in message
news:11**********************@f14g2000cwb.googlegroups.com...
This problem appears to be more complex.
<snip> The export I do of the database is in a cron job I wrote ... However, I just did an export in phpMyAdmin of that database, and it created a 250MB file! ... there's something very wrong with it. <snip> Any suggestions?

I have no experience with phpMyAdmin so I can't comment here.
And your "mySQL 11.18" version info is indecipherable. What version(s) are you
running? Are you porting data between 2 identical versions? If not,
beware of your --opt switch. A nickel says you're creating something in one
version that is not recognized by the other - IF you are porting between
different MySQLs.

Personally, I always separate backing up database structures from backing
up the data. It makes it easier to keep things straight.

I use mysqldump with -d to get data structures only. If I want to restore
or recreate a database, I start empty (no tables) and execute the script
created by mysqldump.

And I don't use mysqldump at all for the data. I use SELECT * INTO OUTFILE
'sometable.dat' to back up data and a corresponding LOAD DATA INFILE
'sometable.dat' to restore it. These files are substantially
smaller anyway, since they don't need to carry all the SQL syntax (INSERT,
VALUES and such). Just a clean tab-delimited (or comma, or what have you)
text data table that is portable.
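From memory, the actual statements look something like this (table and file names here are made up for illustration):

```sql
-- back up one table: the *server* writes a tab-delimited file
SELECT * INTO OUTFILE '/var/backup/dbtemp/sometable.dat'
FROM sometable;

-- restore: the table must already exist (recreate it from the -d
-- structure dump first), then load the rows back in
LOAD DATA INFILE '/var/backup/dbtemp/sometable.dat'
INTO TABLE sometable;
```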

I don't know of an import file size limit and I am having difficulty
believing this can be the problem. You mention that the file size of your
phpMyAdmin dump is substantially larger than what you get from mysqldump.
*Clearly*, each is throwing something different at MySQL when you try to
import.

You *still* need to isolate "line X" (where MySQL is choking!) if you really
want to know what's shaking. MySQL is telling you where it's having a
problem. You owe it to yourself to look at that spot!
Thomas Bartkus
Jul 23 '05 #4

Thomas Bartkus wrote:
<ne**@celticbear.com> wrote in message
news:11**********************@f14g2000cwb.googlegroups.com...
This problem appears to be more complex.

I have no experience with phpMyAdmin so I can't comment here. And your "mySQL 11.18" version info is indecipherable. What version(s) are you running? Are you porting data between 2 identical versions? If not, beware of your --opt switch. A nickel says you're creating something in one version that is not recognized by the other - IF you are porting between different MySQLs.
Sorry, I did a "mysql -V" and wrote down the wrong info.
Source mySQL = 3.23.58
Destination mySQL = 3.23.58

OK, I guess that's not the issue.

Personally, I always separate backing up database structures from backing up the data. It makes it easier to keep things straight.

I use mysqldump with -d to get data structures only. If I want to restore or recreate a database, I start empty (no tables) and execute the script created by mysqldump.

And I don't use mysqldump at all for the data. I use SELECT * INTO OUTFILE 'sometable.dat' to back up data and a corresponding LOAD DATA
INFILE 'sometable.dat' to restore it. These files are substantially smaller anyway, since they don't need to carry all the SQL syntax (INSERT, VALUES and such). Just a clean tab-delimited (or comma, or what have you) text data table that is portable.

That's certainly an idea to consider. But how viable is that? For
example, in the one database (and we have 5 we need to back up on a
regular basis) we have over 30 tables. Would be pretty onerous to do a
query export and import for each table. Is that even something that can
be automated? I have a cron doing the mysqldump every 30 minutes and
sending them to a folder that stores the last 7 days of backups (yeah,
that's a lot, but I do that in case there's some terrible event that
happens that makes the data invalid for up to 6 days before we realize
it.) Now that I think about it, I suppose I could make a shell
executable PHP script that can be croned. Anyway....
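Actually, a plain shell loop could be cron'd without any PHP at all (a sketch; the table names are placeholders, and a real script would fill TABLES from something like `mysql -N -e 'SHOW TABLES' printing`):

```shell
#!/bin/sh
# Placeholder table list; a real script would query SHOW TABLES.
TABLES="offers jobs customers"

# Emit one INTO OUTFILE statement per table; feed the result to the
# mysql client (e.g. `mysql printing < export_tables.sql`).
for t in $TABLES; do
    echo "SELECT * INTO OUTFILE '/var/backup/dbtemp/$t.dat' FROM $t;"
done > export_tables.sql

cat export_tables.sql
```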
I don't know of an import file size limit and I am having difficulty
believing this can be the problem. You mention that the file size of your phpMyAdmin dump is substantially larger than what you get from mysqldump. *Clearly*, each is throwing something different at MySQL when you try to import.
Yeah, because I was able to successfully import a backed-up file that was
4 times larger than the automated exports, I have ruled out size
being an issue.
Comparing the two files, the 60MB file I get from mysqldump and the
250MB file I get from phpMyAdmin, I can't see anything blatantly
different. Both have the DROP/CREATE commands for all the tables, and
from a superficial look, it appears that the format of the inserts is
the same. The only thing I can think of is missing data...but even if
that's the case, since the 60MB mysqldump file has commands to create
all the tables, it SHOULD do at least THAT much.
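One quick sanity check would be to count and compare the CREATE TABLE statements in the two files (the file names and contents below are stand-ins for the real dumps):

```shell
# Demo stand-ins for the two dump files (real ones: the mysqldump
# output and the phpMyAdmin export).
printf 'CREATE TABLE a (...);\nCREATE TABLE b (...);\n' > dump_small.txt
printf 'CREATE TABLE a (...);\nCREATE TABLE b (...);\nCREATE TABLE c (...);\n' > dump_big.txt

# Count CREATE TABLE statements in each; if the counts differ, the
# smaller dump really is missing whole tables, not just INSERT rows.
grep -c 'CREATE TABLE' dump_small.txt    # -> 2
grep -c 'CREATE TABLE' dump_big.txt      # -> 3

# Which table names differ?
grep -o 'CREATE TABLE [a-z_]*' dump_small.txt | sort > small_tables.txt
grep -o 'CREATE TABLE [a-z_]*' dump_big.txt | sort > big_tables.txt
diff small_tables.txt big_tables.txt || true   # shows the missing table(s)
```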

Well, also, if there IS missing data, it doesn't make for a viable backup.

You *still* need to isolate "line X" (where MySQL is choking!) if you really want to know what's shaking. MySQL is telling you where it's having a problem. You owe it to yourself to look at that spot!
Thomas Bartkus


Yeah, new twist in that topic. It appears that the one 60MB mysqldump
file I grabbed yesterday to test with was actually incomplete.
Only by about 2 MB, though. I copied it out from the backup dir
before it was actually finished dumping. How unlucky is that. =/
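A small tweak to the cron script would prevent that in the future (a sketch; the function name is mine): write each dump to a temporary name and rename it into place only after the dump finishes. The rename is atomic within one filesystem, so nothing can ever grab a half-written file:

```shell
#!/bin/sh
# Write output to "$dest.tmp", and rename to "$dest" only on success;
# the rename is atomic, so readers never see a partial file.
safe_dump() {
    dest=$1; shift
    "$@" > "$dest.tmp" && mv "$dest.tmp" "$dest"
}

# Real use would be:
#   safe_dump /var/backup/dbtemp/printing.txt mysqldump --opt --databases printing
# Stand-in demo so this runs as-is:
safe_dump ./printing.txt printf 'demo dump contents\n'
```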

Anyway, I tried it again today with a completed mysqldump file and I
don't get the error. Although, it's STILL only creating some of the
tables and is a quarter the size of the phpMyAdmin export.

Well, thanks for replying. I appreciate the tips!
Liam

Jul 23 '05 #5
