<ne**@celticbear.com> wrote in message
news:11**********************@f14g2000cwb.googlegroups.com...
This problem appears to be more complex.
The export I do of the database is in a cron I wrote, which includes
this line:
mysqldump --opt --databases printing > /var/backup/dbtemp/printing.txt
This creates a file about 60MB in size.
However, I just did an export in phpMyAdmin of that database, and it
created a 250MB file!
I scanned through the 60MB file, and while I have no way of knowing
about the individual items, I do find commands to drop and create all
the tables the database contains, followed by the commands to insert
each record for the table.
Yet when I try to import that file, I only get a few of the tables
created.
When I tried importing the 250MB file phpMyAdmin created, it
successfully created all the tables, and as far as I can tell, all the
records. So, it's evidently not a size issue at all.
I looked back at the automated export, which I have set up to run every
30 minutes. Looking at the file sizes, they're all around 60MB, growing
gradually smaller as you go back in time. So it always appeared to me
that it was working, since the files grew gradually as more records
entered the database. But seeing that it's nearly a quarter of the size
of the phpMyAdmin export, and that it fails on import, there's something
very wrong with it.
But this is the export script. I can't see anything wrong with it:
#! /bin/bash
mysqldump --opt --databases offers > /var/backup/dbtemp/offers.txt
mysqldump --opt --databases priceretail > /var/backup/dbtemp/priceretail.txt
mysqldump --opt --databases pricewholesale > /var/backup/dbtemp/pricewholesale.txt
mysqldump --opt --databases printing > /var/backup/dbtemp/printing.txt
mysqldump --opt --databases db_users > /var/backup/dbtemp/db_users.txt
mysqldump --opt --databases test > /var/backup/dbtemp/test.txt
DOY=`date +%Y%m%d` # DATE YYYYMMDD
TIME=`date +%H%M` # TIME HHMM
/bin/tar -czf /var/backup/db/db-$DOY-$TIME.tgz /var/backup/dbtemp/*
rm -f /var/backup/dbtemp/*
scp -B -S ssh -i /home/backup/.ssh/id_dsa /var/backup/db/db-$DOY-$TIME.tgz backup@(IP ADDRESS OF REMOTE PC):/home/backup/db/
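One thing I'd check (a sketch, not a diagnosis): in that script the `>` redirect only captures stdout, and mysqldump's exit status is never examined, so a dump that dies partway through still leaves a plausible-looking file behind. A variant like the following - same databases and paths as your script, with a hypothetical log file location - would at least record failures instead of silently truncating:

```shell
#!/bin/bash
# Sketch: same dumps as the original cron script, but log stderr and
# abort if any mysqldump exits nonzero, so a partial file is never tarred up.
# The log path is a placeholder; adjust to taste.
LOG=/var/backup/dbtemp/dump.log
for DB in offers priceretail pricewholesale printing db_users test; do
    if ! mysqldump --opt --databases "$DB" > "/var/backup/dbtemp/$DB.txt" 2>> "$LOG"; then
        echo "mysqldump failed for $DB; aborting so older backups are kept" >> "$LOG"
        exit 1
    fi
done
```

If the 60MB dumps turn out to be truncated, the log should say why (a common culprit with large rows and --opt's extended inserts is max_allowed_packet).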
I have no experience with phpMyAdmin so I can't comment here.
And your "mySQL 11.18" version info is indecipherable. What version(s) are you
running? Are you porting data between 2 identical versions? If not -
beware of your --opt switch. A nickel says you're creating something in one
version that is not recognized by the other - IF you are porting between
different MySQLs.
Personally, I always separate backing up database structures from backing
up the data. It makes it easier to keep things straight.
I use mysqldump with -d to get database structures only. If I want to restore
or recreate a database, I start empty (no tables) and execute the script
created by mysqldump.
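Applied to the poster's setup, that two-step approach would look roughly like this (using the "printing" database from the script above; file names are my own):

```shell
#!/bin/bash
# Structure only: -d (a.k.a. --no-data) emits the CREATE TABLE statements
# and skips every row, so the file stays tiny and easy to inspect.
mysqldump -d --databases printing > /var/backup/dbtemp/printing-schema.sql

# To recreate later: start with no tables and feed the script back in.
mysql < /var/backup/dbtemp/printing-schema.sql
```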
And I don't use mysqldump at all for the data. I use SELECT * INTO OUTFILE
'sometable.dat' to back up data and a corresponding LOAD DATA INFILE
'sometable.dat' to restore it. These files are substantially
smaller anyway, since they don't need to carry all the SQL syntax (INSERT,
VALUES and such). Just a clean tab-delimited (or comma, or what have you)
text data table that is portable.
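For concreteness, here is that data round-trip run from the shell - "sometable" is a placeholder, and note that INTO OUTFILE writes the file on the *server* host and needs the FILE privilege:

```shell
#!/bin/bash
# Export one table as a tab-delimited data file (written by the server process).
mysql printing -e "SELECT * INTO OUTFILE '/var/backup/dbtemp/sometable.dat' FROM sometable"

# Re-import into an existing, empty table with the same structure.
mysql printing -e "LOAD DATA INFILE '/var/backup/dbtemp/sometable.dat' INTO TABLE sometable"
```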
I don't know of an import file size limit and I am having difficulty
believing this can be the problem. You mention that the file size of your
phpMyAdmin dump is substantially larger than what you get from mysqldump.
*Clearly*, each is throwing something different at MySQL when you try to
import.
You *still* need to isolate "line X" (where MySQL is choking!) if you really
want to know what's shaking. MySQL is telling you where it's having a
problem. You owe it to yourself to look at that spot!
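To actually see that spot, one way (line numbers here are made up for illustration) is to run the import from the command line, where mysql reports the offending line, and then pull that line out of the dump:

```shell
#!/bin/bash
# Import from the shell; on failure mysql prints something like
# "ERROR ... at line 1234" identifying where it choked.
mysql printing < /var/backup/dbtemp/printing.txt

# Then inspect that line and its neighbors in the dump file
# (1230-1240 is an example range, not a real diagnosis).
sed -n '1230,1240p' /var/backup/dbtemp/printing.txt
```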
Thomas Bartkus