"Raquel" <ra****************@yahoo.com> wrote in message
news:9a**************************@posting.google.c om...
> Could someone explain to me what the reason is for having a character
> delimiter (which is double quotes by default) for performing
> Loads/Imports on UDB? I would think that the column delimiter alone
> should have sufficed (of course, it is the responsibility of the person
> performing the Load to ensure that the character he has chosen as the
> column delimiter does not appear anywhere in the data).
I did not write the DB2 data movement utilities so I can't say this with
certainty. However, I *think* the reason for giving the user separate,
overridable character and column delimiters was simply to give them maximum
flexibility in getting their data loaded without having to write additional
programs.
You are right that these utilities - LOAD, IMPORT, EXPORT, etc. - could have
been written with a single fixed column delimiter and no character
delimiter, but I expect that this would have caused unhappiness in the user
community. Whatever tools people used to unload their data from wherever it
originally lived might not have had the flexibility to format the data to
match the expectations of the DB2 utilities. They would then have had to
write their own programs simply to convert the files into a format that DB2
could use.
I believe that IBM was wise enough to know that this wouldn't have pleased
DB2 users, especially if DB2's competitors had flexible utilities that
didn't require users to write conversion programs. Which would *you*
prefer: an IMPORT utility that lets you specify whatever delimiters are
present in your data file, regardless of what they are, or an IMPORT
utility that is inflexible and forces you to write custom conversion
programs just so that your data meets its expectations?
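To make the point concrete, here is a small Python sketch (using the
standard csv module, not DB2 itself) of why a character delimiter matters
whenever the data can contain the column delimiter:

```python
import csv
import io

# A value that contains the column delimiter itself.
row = ["Smith, John", "42"]

buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerow(row)
line = buf.getvalue().strip()
# The writer wraps the problem field in the character delimiter:
#   "Smith, John",42
# Without those quotes, the embedded comma would split the name in two.

parsed = next(csv.reader(io.StringIO(line)))
print(parsed)  # ['Smith, John', '42'] - the field survives intact
```

If the data can never contain the column delimiter, the character delimiter
is indeed redundant; it exists for all the files where that guarantee can't
be made.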
> The problem I am facing is that I have a file generated by the Sybase
> BCP command that contains data I need to load into UDB. BCP does not
> produce a character delimiter, just the field delimiter, so I am unable
> to Load the file. That also got me thinking: why was a character
> delimiter required at all?
I had a very similar situation to yours just last week. I'm not sure if our
input files and tables are comparable but my imports worked fine. Here is
some information on what I did that may help you.
I have a variety of small databases on my PDA. The database program I use
there is called HandBase. HandBase can export data in a few different
formats; the one that is closest to DB2's expectations is CSV (Comma
Separated Values). I exported each HandBase table to a separate CSV file. I
had no choice of column delimiter, and the only option with respect to
character delimiters was whether or not to put quotes around each field; I
chose not to. I also chose to make the first line of the output file a
comma-separated list of the column names so that I could be sure exactly
what the sequence of columns was. Here are the first few lines of one of my
CSV files:
Member,Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,BBQ,Sep,Oct,Nov,Dec,Xmas
Alexander Thompson,0,1,1,0,0,0,1,1,1,1,1,1,1,1
Ana Thompson,1,0,0,0,0,0,0,0,1,1,1,1,1,1
Andrew Vitale,1,0,0,0,0,0,0,0,0,0,0,0,0,0
Anthony Thompson,0,1,1,0,0,0,1,1,1,1,1,0,1,1
Arthur Martin,1,1,0,0,0,0,0,1,0,0,0,0,0,0
Brian Pincombe,1,1,1,0,0,0,1,0,0,0,0,0,0,1
As you can see, the column delimiter is a comma and there are no quotes
around any of the columns. Here is the definition of the table:
drop table db2admin.attendance_2003;
create table db2admin.attendance_2003
(member char(40) not null,
jan_2003 smallint not null default 0,
feb_2003 smallint not null default 0,
mar_2003 smallint not null default 0,
apr_2003 smallint not null default 0,
may_2003 smallint not null default 0,
jun_2003 smallint not null default 0,
jul_2003 smallint not null default 0,
aug_2003 smallint not null default 0,
bbq_2003 smallint not null default 0,
sep_2003 smallint not null default 0,
oct_2003 smallint not null default 0,
nov_2003 smallint not null default 0,
dec_2003 smallint not null default 0,
xmas_2003 smallint not null default 0,
primary key(member));
Here is the import command used to load this file:
import from "C:\Program Files\Sony Handheld\Me\HandBase\Attendance_2003.CSV"
of del
modified by usedefaults
method p(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
commitcount 0 restartcount 1
replace into db2admin.attendance_2003;
The import worked fine and the DB2 table contained exactly the same data as
the HandBase table after the import. Note that I didn't override the chardel
or coldel parameters so they are using their default values.
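As it happens, Python's csv module uses the same defaults as DB2's DEL
format - comma for the column delimiter and double quote for the character
delimiter - so you can sanity-check how a line of the file will be split
before running the import. A quick illustration against one of the lines
above:

```python
import csv
import io

# One data line from the CSV file shown earlier.
sample = "Alexander Thompson,0,1,1,0,0,0,1,1,1,1,1,1,1,1"

# csv defaults: delimiter="," and quotechar='"', matching DB2's
# default coldel and chardel for DEL files.
fields = next(csv.reader(io.StringIO(sample)))

print(len(fields))   # 15: the member name plus 14 attendance columns
print(fields[0])     # Alexander Thompson
```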
I don't know if your situation mirrors mine exactly but if it is not too
different, you can probably adapt my solution to solve your problem.
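If your BCP file turns out to need conversion after all, a few lines of
script are usually enough. This Python sketch assumes the common bcp -c
layout - tab as the field terminator, no character delimiter - and the
sample data is made up for illustration; it rewrites such a file into the
comma/quote format that DB2's IMPORT defaults expect:

```python
import csv
import io  # in-memory files for the example; use open(...) on real files

# Hypothetical BCP output: tab-separated fields, no character delimiter.
bcp_data = "Smith, John\t42\nJones, Ana\t17\n"

out = io.StringIO()
writer = csv.writer(out)  # defaults match DB2: coldel "," and chardel '"'
for line in bcp_data.splitlines():
    writer.writerow(line.split("\t"))

print(out.getvalue())
# "Smith, John",42
# "Jones, Ana",17
```

Note that the writer only adds quotes where a field actually contains a
comma, which is exactly when the character delimiter earns its keep.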
One small footnote: the reason I used "restartcount 1" in my import command
was to tell DB2 not to load the first line of the file, the line that
contained the column names. This results in the following in the Import
statistics:
Number of rows read = 27
Number of rows skipped = 1
Number of rows inserted = 26
Number of rows updated = 0
Number of rows rejected = 0
Number of rows committed = 27
At first glance, you might think there was an error of some kind since
"Number of rows read" isn't equal to "Number of rows inserted", but the
difference of 1 is due to the skipped row, the line of column names.
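In other words, the counts always satisfy read = skipped + inserted. A
toy Python rendering of what "restartcount 1" does (the sample lines are
invented for illustration):

```python
# Every line is read, but the first one - the column-name header -
# is skipped rather than inserted, mimicking "restartcount 1".
lines = [
    "Member,Jan,Feb",       # header: read but skipped
    "Ana Thompson,1,0",
    "Andrew Vitale,1,0",
]
restartcount = 1

rows_read = len(lines)
rows_inserted = len(lines[restartcount:])
rows_skipped = rows_read - rows_inserted

print(rows_read, rows_skipped, rows_inserted)  # 3 1 2
assert rows_read == rows_skipped + rows_inserted
```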
I hope this helps you....
Rhino