On 29 Jul 2005 07:03:54 -0700,
te**********@hotmail.com wrote:
I routinely have to query several databases and combine the records for a
mailing list. In all of the tables the field names are different. For
example: FName, First Name, and FirstName.
Right now I have append queries, one for each list I bring in, to append
to a main table. I would like to have one query or process that will
handle the assorted lists and append the correct fields to the
main table.
I also have a query for each list that cleans it up before appending to
the main table. For example, one list won't have the name split into first
and last name, but another one does. One list routinely has the customer
name in two fields (if it is a long name), but in others it is all in one field.
Can anyone head me in the right direction for doing this?
What you're describing is better thought of as a data transfer component, not
a database component.
I find that it's usually better to use code loops in these cases rather than
SQL. The problem is that any SQL-based model you try to use ends up getting
more and more complex over time to handle all the different input cases,
eventually requiring complex and error-prone automated SQL generation to keep
from duplicating logic across all the queries and having to maintain it in
multiple places. When you're done, you have two or three layers of logic to
debug separately, and you have something that can't always tell you anything
about where or why it failed when something goes wrong. It might be 20 to 50%
faster than the code loop, but who cares.
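To make the idea concrete, here is a minimal sketch (in Python, not the Access/VBA environment the post is about) of a single transfer loop driven by a per-source field map, replacing a separate append query per list. All the list, table, and field names here are invented for illustration:

```python
# Hypothetical per-source field maps: source field -> main-table field.
FIELD_MAPS = {
    "vendor_a": {"FName": "FirstName", "LName": "LastName"},
    "vendor_b": {"First name": "FirstName", "Last name": "LastName"},
}

def split_name(full_name):
    """Normalize a single combined name field into (first, last)."""
    parts = full_name.strip().split(" ", 1)
    return (parts[0], parts[1] if len(parts) > 1 else "")

def transfer(rows, source):
    """Map one source's rows onto the main table's column names."""
    mapping = FIELD_MAPS[source]
    out = []
    for row in rows:
        mapped = {target: row.get(src, "") for src, target in mapping.items()}
        # Lists that carry one combined name field get split here,
        # instead of in a separate clean-up query per list.
        if "Name" in row and not mapped.get("FirstName"):
            mapped["FirstName"], mapped["LastName"] = split_name(row["Name"])
        out.append(mapped)
    return out

print(transfer([{"FName": "Ann", "LName": "Lee"}], "vendor_a"))
# -> [{'FirstName': 'Ann', 'LastName': 'Lee'}]
```

Adding a new list then means adding one entry to the map rather than writing and maintaining another append query.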
Using code loops, it's easier to adapt the code, and you can implement error
handlers that can actually tell you something about what row and column caused
a problem when, say, the source field data is too wide for its target.
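The kind of row-and-column error reporting described above might be sketched like this (again in Python as an illustration; the target widths and field names are made up):

```python
# Hypothetical width limits for the main table's text fields.
TARGET_WIDTHS = {"FirstName": 25, "LastName": 30}

def check_widths(rows):
    """Return a message for every value too wide for its target field."""
    problems = []
    for rownum, row in enumerate(rows, start=1):
        for col, max_width in TARGET_WIDTHS.items():
            value = str(row.get(col, ""))
            if len(value) > max_width:
                problems.append(
                    f"row {rownum}, column {col}: "
                    f"{len(value)} chars exceeds limit of {max_width}"
                )
    return problems

bad = [{"FirstName": "A" * 30, "LastName": "Lee"}]
print(check_widths(bad))
# -> ['row 1, column FirstName: 30 chars exceeds limit of 25']
```

A failed append query, by contrast, typically reports only that some number of rows were rejected, with no pointer to which value in which row was at fault.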