I want to convert/import an ancient, custom-coded (in C) database into
MySQL. I understand the mechanics of the process on both sides, but I
need advice on the performance ramifications.
Essentially, each entry in the database is a core record with some
identifying info and a very large spreadsheet of line-by-line data, plus
some relational hooks into other, more typical tables. The problem is
that I don't know whether to maintain the current, basic structure,
which would give me a bunch of typical tables plus one table holding
each record's line data as a single "field" that I cook in the client
app, or whether to go with the more typical relational method and let
each line of the "spreadsheet" be its own row in a table keyed on the
record id of the core table (rough sketches of both layouts below).
Switching to the relational method would have the benefit that a record
with only 20 line items would take up much less space, and a record with
1,500 line items wouldn't have to be split between two records (yes, icky).
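Roughly, the two layouts I'm weighing look like this; the table and
column names are just placeholders I made up for illustration, and the
real identifying and per-line columns are omitted:

    -- Option A: keep the current shape; one table holds each record's
    -- whole sheet as a single field that the client app cooks
    CREATE TABLE core_sheet (
        record_id   INT UNSIGNED NOT NULL PRIMARY KEY,  -- same id as the core record
        sheet_data  MEDIUMBLOB NOT NULL                 -- the entire "spreadsheet"
    );

    -- Option B: the relational way; one row per line of the sheet,
    -- keyed on the id of the core record
    CREATE TABLE sheet_line (
        record_id   INT UNSIGNED NOT NULL,
        line_no     SMALLINT UNSIGNED NOT NULL,
        -- ...whatever per-line columns the sheet actually carries...
        PRIMARY KEY (record_id, line_no)
    );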
The scale I'm working on is about 20,000 core records, each with an
associated sheet of data about 800 lines long. The other relational
tables run between 30k and 50k records each. FWIW, it was originally
built this way, in C, because speed was considered absolutely paramount
and it was first run on an 80286 with 1 meg of RAM.
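If my arithmetic is right, the relational layout works out to roughly
20,000 x 800 = 16 million rows in that line-item table, versus about
20,000 rows of big fields if I keep the current structure.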
I guess the basic question is: what kind of speed penalty am I going to
pay by laying that core record out in a more rational, relational way
in MySQL?
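Concretely, the access pattern I care about is reading one core
record's whole sheet back, which in the relational layout would be
something along the lines of (using the placeholder names from the
sketch above, with a made-up id):

    SELECT *
      FROM sheet_line
     WHERE record_id = 12345     -- one core record's sheet, ~800 rows
     ORDER BY line_no;

versus a one-row fetch of the single big field the other way.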
[If it matters, I'll be using the MySQL that ships with Slackware 10.0;
clients will be custom-coded in C++ for Windows and Linux.]