Access Scalability - records vs. filesize

Do I get more scalability if I split my database? The way I calculate
things now, I'll be lucky to get 100,000 records in my Access 2003 mdb.

Here's some math:

Max mdb/mde size = 2000 x 1024 = 2,048,000k

Let's say on average each record in the database consumes 15k

2,048,000/15 = 136,533 records

Tables, forms, queries, etc. take up about 5,125k

2,048,000 - 5,125 = 2,042,875k

2,042,875/15 = 136,191 records

Realistically, I don't want to get that close to the edge, so let's lop off
500 MB (512,000k). That leaves 1,530,875k

1,530,875/15 = 102,058 records

Does this sound right?
Nov 13 '05 #1
"deko" <www.clearpointsystems.com@use_contact_form.com> wrote in message
news:Rm******************@newssvr21.news.prodigy.com...
Do I get more scalability if I split my database? The way I calculate
things now, I'll be lucky to get 100,000 records in my Access 2003 mdb.

Here's some math:

Max mdb/mde size = 2000 x 1024 = 2,048,000k

Let's say on average each record in the database consumes 15k

2,048,000/15 = 136,533 records

Tables, forms, queries, etc. take up about 5,125k

2,048,000 - 5,125 = 2,042,875k

2,042,875/15 = 136,191 records

Realistically, I don't want to get that close to the edge, so let's lop off 500 MB. That leaves 1,530,875k

1,530,875/15 = 102,058 records

Does this sound right?


15K is some very large records.

--
Randy Harris
(tech at promail dot com)
Nov 13 '05 #2
How easy would it be to use Access as a front end and a SQL back end? Is it
easy or even possible to port Access to SQL Server or MySQL?

Just curious - I've got a database that keeps growing, and slowing...

"deko" <www.clearpointsystems.com@use_contact_form.com> wrote in message
news:Rm******************@newssvr21.news.prodigy.com...
Do I get more scalability if I split my database? The way I calculate
things now, I'll be lucky to get 100,000 records in my Access 2003 mdb.

Here's some math:

Max mdb/mde size = 2000 x 1024 = 2,048,000k

Let's say on average each record in the database consumes 15k

2,048,000/15 = 136,533 records

Tables, forms, queries, etc. take up about 5,125k

2,048,000 - 5,125 = 2,042,875k

2,042,875/15 = 136,191 records

Realistically, I don't want to get that close to the edge, so let's lop off 500 MB. That leaves 1,530,875k

1,530,875/15 = 102,058 records

Does this sound right?

Nov 13 '05 #3
The only way you'd be able to get a record of 15k is through the use of Memo
fields. A "normal" record cannot exceed 4000 bytes.

--
Doug Steele, Microsoft Access MVP
http://I.Am/DougSteele
(no e-mails, please!)

"deko" <www.clearpointsystems.com@use_contact_form.com> wrote in message
news:Rm******************@newssvr21.news.prodigy.com...
Do I get more scalability if I split my database? The way I calculate
things now, I'll be lucky to get 100,000 records in my Access 2003 mdb.

Here's some math:

Max mdb/mde size = 2000 x 1024 = 2,048,000k

Let's say on average each record in the database consumes 15k

2,048,000/15 = 136,533 records

Tables, forms, queries, etc. take up about 5,125k

2,048,000 - 5,125 = 2,042,875k

2,042,875/15 = 136,191 records

Realistically, I don't want to get that close to the edge, so let's lop
off 500 MB. That leaves 1,530,875k

1,530,875/15 = 102,058 records

Does this sound right?

Nov 13 '05 #4
how do you know how big your records are anyway?

Nov 13 '05 #5
Scott Loupin wrote:
How easy would it be to use Access as a front end and a SQL back end? Is it
easy or even possible to port Access to SQL Server or MySQL?


Easy to do, bit harder to get right :-)

--
This sig left intentionally blank
Nov 13 '05 #6
Jamey wrote:
how do you know how big your records are anyway?


This can be calculated:

Numerical and dates (which are numerical anyway) data types are fixed
1,2,4 or 8 bytes in length depending on type. Using Len() on each
text/memo field will give a rough indication of space taken by a record.
Not sure what overheads there are for text types. IIRC about 2 or 3
bytes for strings in RAM for a descriptor, not sure what overhead the
storage would take.

Record size is the first part, though; then you need to calculate the
size of your indices, and don't forget the tabledefs themselves and the system
tables that exist in there too :-)
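
For a rough first pass, something along these lines will total up the field
sizes for one record (air code, untested; the table name is a placeholder,
and index/page overheads are ignored):

Public Function EstimateRecordBytes(strTable As String) As Long
    ' Sums fixed field widths plus Len() of text/memo values for the
    ' first record in strTable -- a ballpark figure only.
    Dim rs As DAO.Recordset
    Dim fld As DAO.Field
    Dim lngBytes As Long
    Set rs = CurrentDb.OpenRecordset(strTable, dbOpenSnapshot)
    If Not rs.EOF Then
        For Each fld In rs.Fields
            Select Case fld.Type
                Case dbText, dbMemo
                    lngBytes = lngBytes + Len(Nz(fld.Value, ""))
                Case Else
                    lngBytes = lngBytes + fld.Size   ' fixed-width types
            End Select
        Next fld
    End If
    rs.Close
    EstimateRecordBytes = lngBytes
End Function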

--
This sig left intentionally blank
Nov 13 '05 #7
> > how do you know how big your records are anyway?


This can be calculated:

Numerical and dates (which are numerical anyway) data types are fixed
1,2,4 or 8 bytes in length depending on type. Using Len() on each
text/memo field will give a rough indication of space taken by a record.
Not sure what overheads there are for text types. IIRC about 2 or 3
bytes for strings in RAM for a descriptor, not sure what overhead the
storage would take.

Record size is the first part, though; then you need to calculate the
size of your indices, and don't forget the tabledefs themselves and the system
tables that exist in there too :-)


I just did some division. First I deleted everything from all the tables
(38 total) to get the size of the empty database. Then subtracted that from
the size of the database fully populated and divided the result by the
number of records currently in the database. I came up with about 15k per
record, and about 100,000 records as the estimated ceiling (for a 1.5Gb
database).
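
In code, the back-of-the-envelope version looks something like this (air
code; the file paths and table name are placeholders, and both copies
should be compacted before measuring):

Public Sub AvgBytesPerEntry()
    Dim lngFull As Long, lngEmpty As Long, lngEntries As Long
    Dim dblPerEntry As Double
    lngFull = FileLen("C:\data\MyApp_full.mdb")     ' populated, compacted copy
    lngEmpty = FileLen("C:\data\MyApp_empty.mdb")   ' emptied, compacted copy
    lngEntries = DCount("*", "tblEntries")          ' or sum counts over all 38 tables
    dblPerEntry = (lngFull - lngEmpty) / lngEntries
    Debug.Print "Approx. bytes per entry:"; dblPerEntry
    Debug.Print "Approx. ceiling in a 1.5 GB file:"; _
        Int((1.5 * 1024 ^ 3 - lngEmpty) / dblPerEntry)
End Sub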

Perhaps the term "record" is something I'm not clear about. "Entry" might
better describe the variable in my calculations. As Doug Steele pointed out
in his post, a "record" is limited to 4000 bytes. I assume this means the
size of a distinct record in a single table row. For example, if I had a
table 1000 columns wide and each field was a Long Integer (4 bytes), each
record would be at the 4,000-byte limit (is this correct?). If I use that to
calculate the maximum "records" (in a 1.5Gb Access mdb) I get about 400,000.

But each "entry" in my mdb has multiple records (related transactions,
dates, addresses, etc.) in multiple tables. And there's a memo field in the
Notes table. Clearly, notes can bloat each "entry" (especially when an
"entry" has multiple long notes). So "My Database Application" will scale
differently than an "Access Database".

I have yet to see how a 1.5Gb Access database performs. My guess is not
very well. But in terms of a desktop database, what's the alternative?
FileMaker Pro? Paradox? AFAIK, MSDE has the same size limitations as
Access. If I need more than 100,000 "entries" I should probably move to an
SQL Server or Oracle back end. But then I'll need to rewrite the front end,
upgrade the network infrastructure, hire a dba, etc, etc.

The beauty of Access is that it's a true Rapid Application Development
environment. The problem is - What if I need to scale my app to more than
100,000 records? At this point, I don't (and probably never will).
Nov 13 '05 #8
Access doesn't allow more than 255 fields in a table (and that's at least 5
times larger than it needs to be). However, you can define a table that
cannot be fully populated. For instance, you can define a table that has 20
fields, each defined as text 255. Access won't complain until you begin to
populate the fields and exceed the 4000 byte limit.
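
For example (air code; the table name is made up), Jet will accept this
definition without complaint, even though a row with every field filled to
255 characters would blow past the record-size limit at runtime:

Public Sub MakeOverwideTable()
    Dim strSQL As String
    Dim i As Integer
    strSQL = "CREATE TABLE tblTooWide (ID COUNTER CONSTRAINT pkTooWide PRIMARY KEY"
    For i = 1 To 20
        strSQL = strSQL & ", Field" & i & " TEXT(255)"
    Next i
    strSQL = strSQL & ")"
    CurrentDb.Execute strSQL, dbFailOnError
End Sub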

--
Doug Steele, Microsoft Access MVP
http://I.Am/DougSteele
(no e-mails, please!)

"deko" <www.clearpointsystems.com@use_contact_form.com> wrote in message
news:VX******************@newssvr21.news.prodigy.com...
Perhaps the term "record" is something I'm not clear about. "Entry" might
better describe the variable in my calculations. As Doug Steele pointed
out
in his post, a "record" is limited to 4000 bytes. I assume this means the
size of a distinct record in a single table row. For example, if I had a
table 1000 columns wide and each field was a Long Integer (4 bytes), each
record would be at the 4,000-byte limit (is this correct?). If I use that to
calculate the maximum "records" (in a 1.5Gb Access mdb) I get about
400,000.

Nov 13 '05 #9
"deko" <www.clearpointsystems.com@use_contact_form.com> wrote in
news:VX******************@newssvr21.news.prodigy.com:
The beauty of Access is that it's a true Rapid Application
Development environment. The problem is - What if I need to scale
my app to more than 100,000 records? At this point, I don't (and
probably never will).


You seem not to be separating the application development
capabilities from the database engine that ships with it as the
default (Jet). You can use any back end you want to store the data
and connect to it with your Access application.

In other words, there is no limitation within Access for record size
or data store size -- that limitation exists only in database
engines, of which Jet is only one of many possibilities.
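
For instance, linking a server-side table into the front end takes only a
few lines of DAO (air code; the DSN, database and table names below are
placeholders, not anything from your setup):

Public Sub LinkServerTable()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    Set tdf = db.CreateTableDef("dbo_Customers")
    tdf.Connect = "ODBC;DSN=MyServerDSN;DATABASE=MyAppData;Trusted_Connection=Yes"
    tdf.SourceTableName = "dbo.Customers"
    db.TableDefs.Append tdf
End Sub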

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #10
deko wrote:
SQL Server or Oracle back end. But then I'll need to rewrite the front end,
Why? What did you write it in before? <g>
Unless you're heavily involved in ISAM and/or call lots of functions
from within queries, there's not a lot that's different. Some vanilla queries
will run slowly, but just defining them as views, with no other tweaks, can
reap big rewards.
upgrade the network infrastructure,
How do you mean? A beefier server might be in order but your
workstations, wiring, hubs, switches, topology, etc wouldn't need changing.
hire a dba, etc, etc.


A dba for SQL Server? You do know that thing will practically look after
itself, don't you? A dba would be twiddling his thumbs all day long and
making up stuff to justify his job. You wouldn't hire a dba for MSDE,
and that's SQL Server with about three limitations (size, query threads,
replication).

--
This sig left intentionally blank
Nov 13 '05 #11
"deko" <www.clearpointsystems.com@use_contact_form.com> wrote in message
news:6l*************@newssvr24.news.prodigy.net...
Access doesn't allow more than 255 fields in a table (and that's at least 5
times larger than it needs to be).


hmmm... Are you suggesting the width of an Access table should be no more
than 50 columns? I've heard it's better to have more narrow tables and
fewer wide tables, but have not heard what the best practice limit is for
number of columns.


There's no magic number. However, if you properly normalize your tables, I
think you'll be hard pressed to find a legitimate reason for more than, say,
20 or 30 fields in a single table. If you find you have more, then I'd
suggest double-checking your normalization.

--
Doug Steele, Microsoft Access MVP
http://I.Am/DougSteele
(no e-mails, please!)


Nov 13 '05 #12
Scott Loupin wrote:
How easy would it be to use Access as a front end and a SQL back end? Is it
easy or even possible to port Access to SQL Server or MySQL?


I've got SQL Server 2000 Developer on the home machine I'm writing this
on, with several databases in it. I use Access 2k front-ends with ODBC
links. Piece of cake.

The application I am developing at work uses Access XP as a front-end to
a SQL 2k box. Works fine, though on an enterprise level you obviously
need dedicated servers, sa's and dba's.

One caveat: importing your old Access tables to SQL Server is a bit of a
headache the first time you do it. SQL has a wonderful import/export
wizard (part of its massively useful DTS package) that makes moving data
from one source to another a breeze, *after* you get over the steep but
short learning curve. So your first couple of days with a SQL back-end
will be a hassle, but after that you'll never go back, it's that good.

The import/export utility in SQL can easily be used for transferring
data to and from data sources that have nothing to do with SQL Server,
btw. You can use it to move data in a spreadsheet into Access, or you
can move a SAS dataset to Excel, or whatever you need to do.

But the short answer is yes, Access works very well indeed as a
front-end to a SQL Server data source. It's much more scalable and
reliable in a multiuser setting than having an Access file-server
configuration.

--
Terrell Miller
mi******@bellsouth.net

"Every gardener knows nature's random cruelty"
-Paul Simon RE: George Harrison
Nov 13 '05 #13
Douglas J. Steele wrote:
hmmm... Are you suggesting the width of an Access table should be no more
than 50 columns? I've heard it's better to have more narrow tables and
fewer wide tables, but have not heard what the best practice limit is for
number of columns.

There's no magic number. However, if you properly normalize your tables, I
think you'll be hard pressed to find a legitimate reason for more than, say,
20 or 30 fields in a single table. If you find you have more, then I'd
suggest double-checking your normalization.


that depends heavily on what the table does.

In health-care environments, you will have a lot of tables that are data
stores for a patient questionnaire/case history, for example. They can
run on forever, where the medical staff is tracking lots of different
data points like date of treatment, symptom onset date, bilirubin count,
antibiotic usage, dozens of "in the past two weeks have you..."
questions, etc.

Technically a questionnaire like that could be normalized so you'd have
a PatientInfo table with PatientID, FormID, QuestionID, QuestionType and
Response fields, and you'd dump all your Yes/No, text and Numeric
responses into the same Response text field of this table, then convert
the text field to numeric when you need to. So one patient questionnaire
could generate a couple hundred records in this "many" table.
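
A minimal sketch of that normalized response table (air code; names and
field sizes are only illustrative):

Public Sub MakeResponseTable()
    ' One row per patient per question on a given form.
    CurrentDb.Execute _
        "CREATE TABLE tblPatientResponse (" & _
        "PatientID LONG NOT NULL, " & _
        "FormID LONG NOT NULL, " & _
        "QuestionID LONG NOT NULL, " & _
        "QuestionType TEXT(20), " & _
        "Response TEXT(255), " & _
        "CONSTRAINT pkResponse PRIMARY KEY (PatientID, FormID, QuestionID))", _
        dbFailOnError
End Sub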

That's proper 3NF database design, but in practical use you don't often
do that. The staff who need to run regression analyses on the data don't
want to have to do a bunch of complex queries to get the data into a
flat-file format for SAS or whatever, and they are scientists, not
programmers. A lot of what they need to do will have to be done with ad
hoc querying instead of canned reports that an Access developer can
design for them. So you build the app around their limited ability to
construct SQL clauses, either manually or through the Access query grid.

So you have to denormalize the tables to a large extent, with separate
fields for all those yes/no and "how much?" questions on the form. In
that case you can easily surpass the 255-field limit in Access. So what
you do in that case is you have a tabbed control on your form, with
subforms for each "page" of the questionnaire on each page of the tab
control. Each subform has its own linked table. These often wind up
being one-to-one, which again isn't pure database design procedure, but
that's life.

I'd imagine there are a lot of cases in other professions where very
similar circumstances arise. Professional database design is a constant
tradeoff between theoretical purity (where all database tables are in
Third Normal Form or better) and practical reality. So you spend a lot
of your analysis and design phases deciding how *de*normalized you want
your tables to be.

As a side note, remember that the entire concept of database
normalization is really an artifact of the time when storage space was
at more of a premium than raw processing speed. In those days it was
crucial to keep the amount of data you captured to a bare minimum. But
these days storage space is cheap and plentiful, and it often makes more
sense to accommodate other considerations (the slight time-performance
gain from not having the database engine have to join tables, or the
programming ability of the end-users) at the expense of storage efficiency.

--
Terrell Miller
mi******@bellsouth.net

"Every gardener knows nature's random cruelty"
-Paul Simon RE: George Harrison
Nov 13 '05 #14
deko wrote:
Well, the point was to contrast the simplicity of developing the app in
Access with a full blown n-tier project. Even though Access (excuse me,
"Jet") doesn't scale well, there's a lot to be said for it's RAD
capabilities (not to mention cost).


bear in mind that the Jet engine is the actual data-storage product. It
does not and was never designed to scale well at all, as has been mentioned.

Microsoft Access is just the user interface, it's the software that
controls the form design screens, query grid, database window, etc. etc.
Access *does not* actually store table data itself, it just creates a
slick graphical environment for users to interact with that data. The
rule of thumb is that if you have a dozen people in the same physical
location using a Jet data source on the same server you'll be okay, but
anything beyond that and it's time to go client-server.

Microsoft doesn't really point all that out very well unless you drill
down to the knowledge base or white-paper level, so most people don't
realize the difference between Access and Jet.

There are very few practical limitations on the scalability of *Access*
if it's a front-end to a linked data source (typically ODBC). You just
need to make sure that each user has their own copy of the front-end
(*not* a single copy up on the server that everyone opens; they all need
to physically copy the front-end Access file to their own machine and
use their personal copy) and that the table links do not use any mapped
drive letters but instead use the full filepath (i.e. you don't link to
M:\OurApp\OurAppData.mdb, since on some users' machines they may have a
different drive letter than M: mapped for that server; instead you link
to it as \\OurOrg\OurServer\Share\OurDept\OurApp\OurAppData.mdb or
whatever).
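
Re-pointing the links at the UNC path can be scripted, too -- something
along these lines (air code; only Jet-to-Jet links are touched, and the
path in the example is obviously a placeholder):

Public Sub RelinkBackEnd(strBackEnd As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Left$(tdf.Connect, 10) = ";DATABASE=" Then   ' linked Jet tables only
            tdf.Connect = ";DATABASE=" & strBackEnd
            tdf.RefreshLink
        End If
    Next tdf
End Sub

' e.g. RelinkBackEnd "\\OurOrg\OurServer\Share\OurDept\OurApp\OurAppData.mdb"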

The real limiting factor for using Access as a multiuser front-end is
that it has very rudimentary "security" implementation. One of my old
clients (a major financial institution) has a policy that strictly
forbids any Access files on their internet/intranet sites, because it's
too easy for someone to hack in and have lots of malicious fun with VBA.

If you need to share the data externally or have your own people use the
app through the company's firewall, you really need something like a
web app that has much more robust security implementation.

So I wouldn't be afraid of setting up your database as an in-house
client-server application, but don't try to web-enable it or anything.

HTH,

--
Terrell Miller
mi******@bellsouth.net

"Every gardener knows nature's random cruelty"
-Paul Simon RE: George Harrison
Nov 13 '05 #15
rkc
Terrell Miller wrote:
<snip a-lot-o stuff>
As a side note, remember that the entire concept of database
normalization is really an artifact of the time when storage space was
at more of a premium than raw processing speed. In those days it was
crucial to keep the amount of data you captured to a bare minimum. But
these days storage space is cheap and plentiful, and it often makes more
sense to accommodate other considerations (the slight time-performance
gain from not having the database engine have to join tables, or the
programming ability of the end-users) at the expense of storage efficiency.


So data integrity and the elimination of update, insertion and deletion
anomalies are just inconsequential side effects of the normalization
process?
Nov 13 '05 #16
"Terrell Miller" <mi******@bellsouth.net> wrote in message
news:BN*******************@bignews4.bellsouth.net...
Douglas J. Steele wrote:
hmmm... Are you suggesting the width of an Access table should be no more than 50 columns? I've heard it's better to have more narrow tables and
fewer wide tables, but have not heard what the best practice limit is for number of columns.

There's no magic number. However, if you properly normalize your tables, I think you'll be hard pressed to find a legitimate reason for more than, say, 20 or 30 fields in a single table. If you find you have more, then I'd
suggest double-checking your normalization.


that depends heavily on what the table does.

In health-care environments, you will have a lot of tables that are data
stores for a patient questionnaire/case history, for example. They can
run on forever, where the medical staff is tracking lots of different
data points like date of treatment, symptom onset date, bilirubin count,
antibiotic usage, dozens of "in the past two weeks have you..."
questions, etc.

Technically a questionnaire like that could be normalized so you'd have
a PatientInfo table with PatientID, FormID, QuestionID, QuestionType and
Response fields, and you'd dump all your Yes/No, text and Numeric
responses into the same Response text field of this table, then convert
the text field to numeric when you need to. So one patient questionnaire
could generate a couple hundred records in this "many" table.

That's proper 3NF database design, but in practical use you don't often
do that. The staff who need to run regression analyses on the data don't
want to have to do a bunch of complex queries to get the data into a
flat-file format for SAS or whatever, and they are scientists, not
programmers. A lot of what they need to do will have to be done with ad
hoc querying instead of canned reports that an Access developer can
design for them. So you build the app around their limited ability to
construct SQL clauses, either manually or through the Access query grid.

So you have to denormalize the tables to a large extent, with separate
fields for all those yes/no and "how much?" questions on the form. In
that case you can easily surpass the 255-field limit in Access. So what
you do in that case is you have a tabbed control on your form, with
subforms for each "page" of the questionnaire on each page of the tab
control. Each subform has its own linked table. These often wind up
being one-to-one, which again isn't pure database design procedure, but
that's life.

I'd imagine there are a lot of cases in other professions where very
similar circumstances arise. Professional database design is a constant
tradeoff between theoretical purity (where all database tables are in
Third Normal Form or better) and practical reality. So you spend a lot
of your analysis and design phases deciding how *de*normalized you want
your tables to be.

As a side note, remember that the entire concept of database
normalization is really an artifact of the time when storage space was
at more of a premium than raw processing speed. In those days it was
crucial to keep the amount of data you captured to a bare minimum. But
these days storage space is cheap and plentiful, and it often makes more
sense to accommodate other considerations (the slight time-performance
gain from not having the database engine have to join tables, or the
programming ability of the end-users) at the expense of storage efficiency.
Wow, this is very wrong. Benefits from normalization are much more than
theoretical. There are a great many benefits to database normalization and
yes, there are times when some denormalization is appropriate. In neither
case, however, is disk space usage a significant factor. As suggested by
rkc, data integrity and manageability are far more significant.

--
Randy Harris
(tech at promail dot com)

Nov 13 '05 #17
Tom
Terrell:

Interesting. Leaving aside your final paragraph (an argument I
don't want to get in the middle of), I understand your issue regarding
designing medical questionnaires. However, I have found that as the
size of the questionnaire grows, you will reach limits on the number of
objects you can place on a form and the number of fields in a table.
At some point you have to tell the scientists that easy for them is no
longer better because the User Interface will not be able to support
what they want to do.

I spent some time designing a normalized database that could accommodate
multiple response types and was never fully satisfied. It was a
freebie, so eventually I moved on, but if you have done something along
those lines that you are willing to share, I'd be interested in seeing
it.

Tom

Nov 13 '05 #18
> > Well, the point was to contrast the simplicity of developing the app in
Access with a full blown n-tier project. Even though Access (excuse me,
"Jet") doesn't scale well, there's a lot to be said for it's RAD
capabilities (not to mention cost).
bear in mind that the Jet engine is the actual data-storage product. It
does not and was never designed to scale well at all, as has been mentioned.

Microsoft Access is just the user interface, it's the software that
controls the form design screens, query grid, database window, etc. etc.
Access *does not* actually store table data itself, it just creates a
slick graphical environment for users to interact with that data. The
rule of thumb is that if you have a dozen people in the same physical
location using a Jet data source on the same server you'll be okay, but
anything beyond that and it's time to go client-server.

Microsoft doesn't really point all that out very well unless you drill
down to the knowledge base or white-paper level, so most people don't
realize the difference between Access and Jet.

There are very few practical limitations on the scalability of *Access*
if it's a front-end to a linked data source (typically ODBC). You just
need to make sure that each user has their own copy of the front-end
(*not* a single copy up on the server that everyone opens; they all need
to physically copy the front-end Access file to their own machine and
use their personal copy) and that the table links do not use any mapped
drive letters but instead use the full filepath (i.e. you don't link to
M:\OurApp\OurAppData.mdb, since on some users' machines they may have a
different drive letter than M: mapped for that server; instead you link
to it as \\OurOrg\OurServer\Share\OurDept\OurApp\OurAppData.mdb or
whatever).

The real limiting factor for using Access as a multiuser front-end is
that it has very rudimentary "security" implementation. One of my old
clients (a major financial institution) has a policy that strictly
forbids any Access files on their internet/intranet sites, because it's
too easy for someone to hack in and have lots of malicious fun with VBA.

If you need to share the data externally or have your own people use the
app through the company's firewall, you really need something like a
web app that has much more robust security implementation.

So I wouldn't be afraid of setting up your database as an in-house
client-server application, but don't try to web-enable it or anything.


Helpful stuff - thanks. So it sounds like the best way to implement a
multi-user Access/Jet database is to first split the mdb, put the back end on
a file share, then distribute the front end in MDE format. When the number of
users or the amount of data grows too big for Jet, a dedicated SQL Server can
be installed and the back end upsized. For users that already have Access on
their desktops and don't need a full-blown SQL Server back end (or the
enhanced security), this saves the cost of SQL Server hardware/software.
Nov 13 '05 #19
> In health-care environments, you will have a lot of tables that are data
stores for a patient questionnaire/case history, for example. They can
run on forever, where the medical staff is tracking lots of different
data points like date of treatment, symptom onset date, bilirubin count,
antibiotic usage, dozens of "in the past two weeks have you..."
questions, etc.

Technically a questionnaire like that could be normalized so you'd have
a PatientInfo table with PatientID, FormID, QuestionID, QuestionType and
Response fields, and you'd dump all your Yes/No, text and Numeric
responses into the same Response text field of this table, then convert
the text field to numeric when you need to. So one patient questionnaire
could generate a couple hundred records in this "many" table.

That's proper 3NF database design, but in practical use you don't often
do that. The staff who need to run regression analyses on the data don't
want to have to do a bunch of complex queries to get the data into a
flat-file format for SAS or whatever
I've run into this kind of thing when trying to accommodate different export
formats so an "entry" can be imported into some other app from .xls or .txt
format. The data associated with each "entry" comprises records in
multiple tables, so I have a query to get it all and then use DoCmd.OutputTo
to export it. I'm currently working on a comma-delimited export and will
probably use a DDL query to create a very wide temp table, then insert the
data with another query, then export with DoCmd.TransferText.
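
Something like this is the shape of it (air code; the object names and
output path are placeholders, and a make-table query stands in for the
separate create-and-insert steps):

Public Sub ExportEntries()
    On Error Resume Next
    CurrentDb.Execute "DROP TABLE tblExportTemp"    ' ignore error if it isn't there
    On Error GoTo 0
    ' Flatten the multi-table query into a temp table, then push it out as CSV.
    CurrentDb.Execute "SELECT * INTO tblExportTemp FROM qryEntryExport", dbFailOnError
    DoCmd.TransferText acExportDelim, , "tblExportTemp", "C:\export\Entries.csv", True
End Sub
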
, and they are scientists, not
programmers. A lot of what they need to do will have to be done with ad
hoc querying instead of canned reports that an Access developer can
design for them. So you build the app around their limited ability to
construct SQL clauses, either manually or through the Access query grid.

So you have to denormalize the tables to a large extent, with separate
fields for all those yes/no and "how much?" questions on the form. In
that case you can easily surpass the 255-field limit in Access. So what
you do in that case is you have a tabbed control on your form, with
subforms for each "page" of the questionnaire on each page of the tab
control. Each subform has its own linked table. These often wind up
being one-to-one, which again isn't pure database design procedure, but
that's life.

I'd imagine there are a lot of cases in other professions where very
similar circumstances arise. Professional database design is a constant
tradeoff between theoretical purity (where all database tables are in
Third Normal Form or better) and practical reality. So you spend a lot
of your analysis and design phases deciding how *de*normalized you want
your tables to be.

As a side note, remember that the entire concept of database
normalization is really an artifact of the time when storage space was
at more of a premium than raw processing speed. In those days it was
crucial to keep the amount of data you captured to a bare minimum. But
these days storage space is cheap and plentiful, and it often makes more
sense to accommodate other considerations (the slight time-performance
gain from not having the database engine have to join tables, or the
programming ability of the end-users) at the expense of storage efficiency.

Nov 13 '05 #20
Terrell Miller <mi******@bellsouth.net> wrote in
news:BN*******************@bignews4.bellsouth.net:
Douglas J. Steele wrote:
hmmm... Are you suggesting the width of an Access table should be
no more than 50 columns? I've heard it's better to have more
narrow tables and fewer wide tables, but have not heard what the
best practice limit is for number of columns.

There's no magic number. However, if you properly normalize your
tables, I think you'll be hard pressed to find a legitimate
reason for more than, say, 20 or 30 fields in a single table. If
you find you have more, then I'd suggest double-checking your
normalization.


that depends heavily on what the table does.

In health-care environments, you will have a lot of tables that
are data stores for a patient questionnaire/case history, for
example. They can run on forever, where the medical staff is
tracking lots of different data points like date of treatment,
symptom onset date, bilirubin count, antibiotic usage, dozens of
"in the past two weeks have you..." questions, etc.

Technically a questionnaire like that could be normalized so you'd
have a PatientInfo table with PatientID, FormID, QuestionID,
QuestionType and Response fields, and you'd dump all your Yes/No,
text and Numeric responses into the same Response text field of
this table, then convert the text field to numeric when you need
to. So one patient questionnaire could generate a couple hundred
records in this "many" table.

That's proper 3NF database design, but in practical use you don't
often do that.


Been there, done that.

I should have fully normalized.

Putting a questionaire response in a single record means that EVERY
QUESTIONAIRE MUST HAVE ITS OWN TABLE (since the questions are
different).

In the application I was doing, I had the equivalent of only one
questionnaire, but it was dependent on an outside data source, which,
during the development process (which stretched on for years -- you
doctors are trained to make quick diagnoses, but if you give them
the same evidence on separate occasions, they often come to
different conclusions; this is bad enough for application
development, but I shudder to think what the implications are for
actual patients!) changed its structure. Had I normalized the
structure from the beginning, it would have been very easy to change
the structure of the "questionnaire." As it was, I had to incorporate
the new fields into the existing tables while retaining the old
fields, then I had to deal with aggregating non-comparable records.

In any event, I learned my lesson: tempting as it is to denormalize,
it's simply the worst possible thing you could do. Maintenance will
be a nightmare.

And it absolutely will *not* work once you get to your second
questionnaire.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #21
Terrell Miller <mi******@bellsouth.net> wrote in
news:BN*******************@bignews4.bellsouth.net:
That's proper 3NF database design, but in practical use you don't
often do that. The staff who need to run regression analyses on
the data don't want to have to do a bunch of complex queries to
get the data into a flat-file format for SAS or whatever, and they
are scientists, not programmers.


Well, then you build a denormalized view for them.

It's not complicated.
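
With a response table laid out along the PatientID/QuestionID/Response
lines described earlier in the thread, the flat view can be as simple as a
saved crosstab the analysts export themselves (air code; the table, field
and query names here are placeholders):

Public Sub MakeFlatView()
    ' One row per patient, one column per question.
    CurrentDb.CreateQueryDef "qryFlatResponses", _
        "TRANSFORM First(R.Response) " & _
        "SELECT R.PatientID " & _
        "FROM tblPatientResponse AS R " & _
        "GROUP BY R.PatientID " & _
        "PIVOT R.QuestionID;"
End Sub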

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #22
Terrell Miller <mi******@bellsouth.net> wrote in
news:BN*******************@bignews4.bellsouth.net:
A lot of what they need to do will have to be done with ad
hoc querying instead of canned reports that an Access developer
can design for them. So you build the app around their limited
ability to construct SQL clauses, either manually or through the
Access query grid.


It's not that difficult to build an interface to allow completely ad
hoc querying of properly normalized data, output in a flat file. It
was one of the easiest parts of the medical database application I
built.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #23
Terrell Miller <mi******@bellsouth.net> wrote in
news:BN*******************@bignews4.bellsouth.net:
As a side note, remember that the entire concept of database
normalization is really an artifact of the time when storage space
was at more of a premium than raw processing speed. In those days
it was crucial to keep the amount of data you captured to a bare
minimum. But these days storage space is cheap and plentiful, and
it often makes more sense to accommodate other considerations (the
slight time-performance gain from not having the database engine
have to join tables, or the programming ability of the end-users)
at the expense of storage efficiency.


Bulls*t*.

Normalization is *all* about creating a data structure that will
consistently logically model the real-world data it describes.

A flat structure doesn't do that, though it's very useful for end
users to have a flat view available to them.

A questionnaire really is a hierarchical structure, even if you've
got only one of them (it's quite common for studies to want to vary
the order of questions or swap in and out different versions of some
questions in order to test certain hypotheses, so even a
single-questionnaire application may very well need the normalized
structure). And that's the *only* reason to normalize -- it makes
maintaining and retrieving the data easier.

I couldn't disagree more strenuously with every word of your post on
this subject.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #24
"Tom" <rt*****@swbell.net> wrote in
news:11**********************@c13g2000cwb.googlegroups.com:
Interesting. Leaving aside your final paragraph (an argument
I don't want to get in the middle of), I understand your issue
regarding designing medical questionnaires. However, I have found
that as the size of the questionnaire grows, you will reach limits
on the number of objects you can place on a form and the number of
fields in a table. At some point you have to tell the scientists
that easy for them is no longer better because the User Interface
will not be able to support what they want to do.


Why is it that Access programmers tend to design the data storage
side of their applications and then seem to ignore the fact that
they need to design a user-friendly data retrieval interface?

It's no big deal to design a flat output format for normalized data.
And it's also no big deal to design an ad hoc querying UI that
allows the users to pick and choose whichever variables they want in
their flat output file.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #25
David W. Fenton wrote:
Why is it that Access programmers tend to design the data storage
side of their applications and then seem to ignore the fact that
they need to design a user-friendly data retrieval interface?
because the client usually has been "helpful" and gives you an Excel
data dictionary with all the columns they want. And tells you that this
database should be no big deal, "it's just a couple of screens".

Then when you explain the benefits of normalization to them (And the
application design lifecycle) they get glassy-eyed.
It's no big deal to design a flat output format for normalized data.
unless your client is a non-IT type who has been using SAS for years and
panics at the thought of having to learn how to do Access queries. And
of course they want you to do a turnkey solution, they don't want to
have to keep funding you to maintain the normalized db even though they
will keep coming up with new functionality they want added in.

There are a lot more people like that out there than you'd think.
And it's also no big deal to design an ad hoc querying UI that
allows the users to pick and choose whichever variables they want in
their flat output file.


very true, but then the client sees a lot of semi-object-oriented VBA
code that is completely inscrutable to someone weaned on procedural
software, and they get pissed because their neighbor lost their
retirement savings in the dotcom crash, and they think that all IT
professionals are money-grubbing leeches, and they *know* they can
handle the maintenance themselves.

The IT environment (or more specifically, the client environment) is a
lot different than it was back in the go-go days half a decade ago.

--
Terrell Miller
mi******@bellsouth.net

"Every gardener knows nature's random cruelty"
-Paul Simon RE: George Harrison
Nov 13 '05 #26
David W. Fenton wrote:
In any event, I learned my lesson: tempting as it is to denormalize,
it's simply the worst possible thing you could do. Maintenance will
be a nightmare.

And it absolutely will *not* work once you get to your second
questionaire.


I've seen both sides of this, actually. I worked on an international
case control study where the sponsor/principal investigator was playing
funding games with my agency. In short, he wanted our funding but not
our help and he did not get along with our co-PI. So every three weeks
the forms would get redesigned, and often questions that would get
deleted in odd-numbered revs would magically appear in the even-numbered
ones. Luckily I left that project after four months. Eighteen months
later and we *still* haven't gotten any usable data from the partner.

But I have also done cohort questionnaires that had a strict protocol
ironed out before the study ever began. No changes, just design it,
build it, debug it, move on to the next fire.

Moral of the story: if the project has a review board or steering
committee or data safety monitoring board or the like, wait until they
finalize the protocol before you do any real design work. The protocol
becomes your set-in-stone requirements doc.

--
Terrell Miller
mi******@bellsouth.net

"Every gardener knows nature's random cruelty"
-Paul Simon RE: George Harrison
Nov 13 '05 #27
David W. Fenton wrote:
That's proper 3NF database design, but in practical use you don't
often do that. The staff who need to run regression analyses on
the data don't want to have to do a bunch of complex queries to
get the data into a flat-file format for SAS or whatever, and they
are scientists, not programmers.

Well, then you build a denormalized view for them.

It's not complicated.


complicated, no. Political, yes.

There will be the one office troublemaker who will use the fact that you
are giving them a denormalized version of normed tables to call into
question the integrity of the normalized data. That person will have no
clue about database design...which is the point entirely. It's in their
own best interest to *not* have proper database design, because that way
they can exert far more influence over the proceedings than if they have
a properly-built app.

--
Terrell Miller
mi******@bellsouth.net

"Every gardener knows nature's random cruelty"
-Paul Simon RE: George Harrison
Nov 13 '05 #28
David W. Fenton wrote:
Normalization is *all* about creating a data structure that will
consistently logically model the real-world data it describes.


Except that the real-world data is neither consistent nor logical in the
first place. Neither are the users of that data...
--
Terrell Miller
mi******@bellsouth.net

"Every gardener knows nature's random cruelty"
-Paul Simon RE: George Harrison
Nov 13 '05 #29
Terrell Miller <mi******@bellsouth.net> wrote in
news:a7***********@bignews1.bellsouth.net:
David W. Fenton wrote:
Why is it that Access programmers tend to design the data storage
side of their applications and then seem to ignore the fact that
they need to design a user-friendly data retrieval interface?
because the client usually has been "helpful" and gives you an
Excel data dictionary with all the columns they want. And tells
you that this database should be no big deal, "it's just a couple
of screens".

Then when you explain the benefits of normalization to them (And
the application design lifecycle) they get glassy-eyed.


So, you've gotten through the first 20 minutes of the initial client
meeting.

The rest of it should be spent selling them on the idea of doing it
right, rather than acquiescing to their ignorance. They are hiring
you because you are an *expert* in the field -- if they then won't
let you use your expertise, then perhaps you should not take them on
as a client.
It's no big deal to design a flat output format for normalized
data.


unless your client is a non-IT type who has been using SAS for
years and panics at the thought of having to learn how to do
Access queries. . . .


What are you talking about?

I mean you *program* the output for them.
. . . And of course they want you to do a turnkey
solution, they don't want to have to keep funding you to maintain
the normalized db even though they will keep coming up with new
functionality they want added in.

There are a lot more people like that out there than you'd think.
I never require my clients to know *anything* about Access.
And it's also no big deal to design an ad hoc querying UI that
allows the users to pick and choose whichever variables they want
in their flat output file.


very true, but then the client sees a lot of semi-object-oriented
VBA code that is completely inscrutable to someone weaned on
procedural software, and they get pissed because their neighbor
lost their retirement savings in the dotcom crash, and they think
that all IT professionals are money-grubbing leeches, and they
*know* they can handle the maintenance themselves.


Why would a client ever see any VBA code?
The IT environment (or more specifically, the client environment)
is a lot different than it was back in the go-go days half a
decade ago.


I have absolutely no idea what you are talking about.

I'm currently involved in 4 different development projects, 1 new (a
revision of an amateur app), 1 semi-new (ongoing revisions to a
well-designed professionally developed app), and 2 old projects
(ongoing revisions to projects that I developed from scratch). I
have at least one other project on the drawing board, which may be a
revision of an amateur app or (preferably) a from-scratch rebuild of
the same app.

Nothing about these apps is different now from when I started doing
Access development in 1996.

Clients want it to be easy to use.

But they want it to do all sorts of really special things in terms
of UI and output formats.

It's the job of the developer to walk them through the cost/benefit
structure of getting them what they *want* versus getting them what
they *need*.

An application that allows them to store data easily but doesn't
also include tools for allowing them to easily retrieve the data
they've put in is a half-finished application.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #30
Terrell Miller <mi******@bellsouth.net> wrote in
news:gf***********@bignews1.bellsouth.net:
David W. Fenton wrote:
In any event, I learned my lesson: tempting as it is to
denormalize, it's simply the worst possible thing you could do.
Maintenance will be a nightmare.

And it absolutely will *not* work once you get to your second
questionnaire.
I've seen both sides of this, actually. I worked on an
international case control study where the sponsor/principal
investigator was playing funding games with my agency. In short,
he wanted our funding but not our help and he did not get along
with our co-PI. So every three weeks the forms would get
redesigned, and often questions that would get deleted in
odd-numbered revs would magically appear in the even-numbered
ones. Luckily I left that project after four months. Eighteen
months later and we *still* haven't gotten any usable data from
the partner.

But I have also done cohort questionnaires that had a strict
protocol ironed out before the study ever began. No changes, just
design it, build it, debug it, move on to the next fire.


Or, you could design it fireproof to begin with and not have to
rebuild it once it burns down.
Moral of the story: if the project has a review board or steering
committee or data safety monitoring board or the like, wait until
they finalize the protocol until you do any real design work. The
protocol becomes your set-in-stone requirements doc.


And if I'd done that, I never would have collected a dime from the
client.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #31
Terrell Miller <mi******@bellsouth.net> wrote in
news:8i************@bignews1.bellsouth.net:
David W. Fenton wrote:
That's proper 3NF database design, but in practical use you don't
often do that. The staff who need to run regression analyses on
the data don't want to have to do a bunch of complex queries to
get the data into a flat-file format for SAS or whatever, and
they are scientists, not programmers.


Well, then you build a denormalized view for them.

It's not complicated.


complicated, no. Political, yes.

There will be the one office troublemaker who will use the fact
that you are giving them a denormalized version of normed tables to
call into question the integrity of the normalized data. That person
will have no clue about database design...which is the point entirely.
It's in their own best interest to *not* have proper database
design, because that way they can exert far more influence over
the proceedings than if they have a properly-built app.


That's a political problem.

The client hired you for your database application expertise. If
they don't want you to use it, then they shouldn't have hired you.

It's really that simple.

I've been through micro-management many times. It's pretty easy to
explain clearly why the underling doesn't know what they are talking
about (unless, of course, they actually *do* have a point!).

If you don't have that skill, then you really shouldn't be doing
freelance db development.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #32
Terrell Miller <mi******@bellsouth.net> wrote in
news:_j*************@bignews1.bellsouth.net:
David W. Fenton wrote:
Normalization is *all* about creating a data structure that will
consistently logically model the real-world data it describes.


Except that the real-world data is neither consistent nor logical
in the first place. Neither are the users of that data...


If that's the case, then it's not a database in the first place.

There is some level at which there is consistency and logic to the
relationships between entities. If there were not, it would be
impossible to build any kind of database to store the information.

The database need not model every aspect of the real-world entities
-- it need only model those aspects that are essential to the
application that uses that data.

But normalization is an essential part of modelling even loose data
relationships. In fact, I'd argue that it's more important to ensure
consistency and non-duplication in a scheme where the entities are
loosely related than it is in one where they are strictly defined.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #33
David W. Fenton wrote:
I've been through micro-management many times. It's pretty easy to
explain clearly why the underling doesn't know what they are talking
about (unless, of course, they actually *do* have a point!).

If you don't have that skill, then you really shouldn't be doing
freelance db development.


again, that's fine as long as the "underling" (interesting choice of
phrase, that; I hope you never use it in front of the client) perceives
that being rational is in their own best interest. And most of them will,
but it's that one office politician who will go to enormous lengths to
*not* be rational. Efforts to convince that person they have no clue
what they are talking about are...counterproductive, shall we say?
They will just redouble their efforts to sabotage the project;
you will never convert them to the Light Of Reason.

--
Terrell Miller
mi******@bellsouth.net

"Every gardener knows nature's random cruelty"
-Paul Simon RE: George Harrison
Nov 13 '05 #34
David W. Fenton wrote:
Then when you explain the benefits of normalization to them (And
the application design lifecycle) they get glassy-eyed.

So, you've gotten through the first 20 minutes of the initial client
meeting.

The rest of it should be sent selling them on the idea of doing it
right, rather than acquiescing to their ignorance. They are hiring
you because you are an *expert* in the field -- if they then won't
let you use your expertise, then perhaps you should not take them on
as a client.


in theory, yes. But what happens in RL is that the client staff you
explain all that to are convinced and all is well. But then time passes
and those people drift off onto different projects/jobs. The new people
that replace them tend to have a "not invented here" outlook, and they
specifically don't want to just pick up the other person's project
design. So they want changes. Just "I'm in control now" political
posturing. In a large organization their best chance to gain "ownership"
of the project and demonstrate themselves to be decisive,
results-oriented types is to discredit whatever design methodology
their predecessors agreed on.

If the IT professional waves a signed design doc at them that doesn't
really help, because then they'll just change their approach to "it's
designed properly in a technical sense, but that application just isn't
what we need to be spending time and money doing".

Again, David: application design does not exist in a vacuum, it bumps up
against human beings who are petty, illogical, selfish, and ignorant.
Waving a theoretical level of perfection at them does not work.

It's no big deal to design a flat output format for normalized
data.


unless your client is a non-IT type who has been using SAS for
years and panics at the thought of having to learn how to do
Access queries. . . .

What are you talking about?

I mean you *program* the output for them.


okay, here's how it works in RL: some people (esp. in large
organizations) like to "retire on the payroll". They specifically do
*not* want efficiency, they want to be able to spend their entire time
going through this tedious manual process that is so convoluted that
they are the only ones qualified to do it. Instant job security.

If you program the output for them, that frees up lots of their time
that they can use for more productive activities. Except that if that
happens, then their bosses and peers start to notice that they don't
actually produce anything with all that new productive time, and *that
is the absolute last thing that person wants to happen*.

Why not? Because then it becomes clear to everybody that this person
really doesn't know what they are doing and doesn't add any value to the
organization, they are just taking up space and chewing payroll.
. . . And of course they want you to do a turnkey
solution, they don't want to have to keep funding you to maintain
the normalized db even though they will keep coming up with new
functionality they want added in.

There are a lot more people like that out there than you'd think.

I never require my clients to know *anything* about Access.


and you give them packaged solutions that do very specific things very
well. That's great, but we're talking about different things here. I've
never met a client that wanted an Access app they couldn't use for
ad-hoc querying. Most of the time, in fact, they want you to design an
app that they will be able to maintain themselves (so they get to be the
in-house Access power user, which is good for their careers).
And it's also no big deal to design an ad hoc querying UI that
allows the users to pick and choose whichever variables they want
in their flat output file.


very true, but then the client sees a lot of semi-object-oriented
VBA code that is completely inscrutable to someone weaned on
procedural software, and they get pissed because their neighbor
lost their retirement savings in the dotcom crash, and they think
that all IT professionals are money-grubbing leeches, and they
*know* they can handle the maintenance themselves.

Why would a client ever see any VBA code?


umm...because they're scientists and they are used to doing their own
programs in stuff like SAS and SPSS? Or because they read an article in
a magazine they picked up at the newsstand that was raving about all the
cool things you can do with VBA, and they want to get in on the action?
Or because they read somewhere on Monster that the IT field is (was) one
of the fastest-growing professions, and they want some job-hopping
flexibility?

Or because they know that their department is getting flat-funded for
the next three years, so if they can do IT stuff that gives them a
little extra advantage over their peers when it comes time for layoffs?

This is the kind of behind-the-scenes stuff that freelancers usually
don't notice because they're not around while it's happening.
The IT environment (or more specifically, the client environment)
is a lot different than it was back in the go-go days half a
decade ago.

I have absolutely no idea what you are talking about.

I'm currently involved in 4 different development projects, 1 new (a
revision of an amateur app), 1 semi-new (ongoing revisions to a
well-designed professionally developed app), and 2 old projects
(ongoing revisions to projects that I developed from scratch). I
have at least one other project on the drawing board, which may be a
revision of an amateur app or (preferably) a from-scratch rebuild of
the same app.


yep, sounds about right. But you notice that almost all of what you are
talking about is "maintenance" stuff, revising and enhancing existing
apps. I don't see a lot of genuinely "new" projects to develop something
that doesn't exist at all.

That's the massive difference between today and 2000. Back then it was
all "I need you to build me a program that will...". These days it's "I
need you to fix this thing that the guy I replaced came up with three
years ago, and it doesn't work anymore".
Nothing about any of these apps is different now from when I started
doing Access development in 1996.
Doesn't sound like much growth there, to put it bluntly...
Clients want it to be easy to use.

But they want it to do all sorts of really special things in terms
of UI and output formats.

It's the job of the developer to walk them through the cost/benefit
structure of getting them what they *want* versus getting them what
they *need*.
well, in a large IT shop and/or large organization that is *not* the
developer's job, that's the PM's job.
An application that allows them to store data easily but doesn't
also include tools for allowing them to easily retrieve the data
they've put in is a half-finished application.


too true

--
Terrell Miller
mi******@bellsouth.net

"Every gardener knows nature's random cruelty"
-Paul Simon RE: George Harrison
Nov 13 '05 #35
Terrell Miller <mi******@bellsouth.net> wrote in
news:g0*****************@bignews6.bellsouth.net:
David W. Fenton wrote:
Then when you explain the benefits of normalization to them (And
the application design lifecycle) they get glassy-eyed.

So, you've gotten through the first 20 minutes of the initial
client meeting.

The rest of it should be spent selling them on the idea of doing
it right, rather than acquiescing to their ignorance. They are
hiring you because you are an *expert* in the field -- if they
then won't let you use your expertise, then perhaps you should
not take them on as a client.


in theory, yes. But what happens in RL . . .


I live in real life, too.
. . . is that the client staff
you explain all that to are convinced and all is well. But then
time passes and those people drift off onto different
projects/jobs. The new people that replace them tend to have a
"not invented here" outlook, and they specifically don't want to
just pick up the other person's project design. . . .
So, you charge them extra money for re-designing the application,
just as you would if the *original* client asked for a re-design
after work had already commenced.

It's not complicated.

[]
If the IT professional waves a signed design doc at them that
doesn't really help, because then they'll just change their
approach to "it's designed properly in a technical sense, but that
application just isn't what we need to be spending time and money
doing".
Then you tell them, "well, regardless of whether it's the *right* design
or not, it's the one you signed a contract for."
Again, David: application design does not exist in a vacuum, it
bumps up against human beings who are petty, illogical, selfish,
and ignorant. Waving a theoretical level of perfection at them
does not work.
The contract should specify the design in sufficient detail that
you're never in a position where you end up creating two
applications for the price of one.

I just went through this last year. The person who was the original
leader for the project became ill and decided to retire. This was
after the contract had been signed, 1/3 of the project cost had been
paid, and the project had been begun. It happened just at the time
that I was ready to present a prototype for feedback. Months later,
someone else took over and we tried to work through the design, but
nobody else was on board with the whole concept. There was no longer
anyone in the organization who wanted the application in the first
place. As I'd already delivered a finished application by that point
(and received a second third of the cost in payment), I gave them a
choice: abandon the project having paid two-thirds of the cost, or
continue and finish it up and pay the full amount. They opted to
abandon the project rather than sink more time into "perfecting" it.

So, I know perfectly well that these things happen. And whether
you're a slave to them or not depends on your original agreement and
how well you can communicate with them.
It's no big deal to design a flat output format for normalized
data.

unless your client is a non-IT type who has been using SAS for
years and panics at the thought of having to learn how to do
Access queries. . . .


What are you talking about?

I mean you *program* the output for them.


okay, here's how it works in RL: some people (esp. in large
organizations) like to "retire on the payroll". They specifically
do *not* want efficiency, they want to be able to spend their
entire time going through this tedious manual process that is so
convoluted that they are the only ones qualified to do it. Instant
job security.

If you program the output for them, that frees up lots of their
time that they can use for more productive activities. Except that
if that happens, then their bosses and peers start to notice that
they don't actually produce anything with all that new productive
time, and *that is the absolute last thing that person wants to
happen*.


This is *not* your problem.
Why not? Because then it becomes clear to everybody that this
person really doesn't know what they are doing and doesn't add any
value to the organization, they are just taking up space and
chewing payroll.
This is *not* your problem.

Well, it *becomes* your problem if you let bad management at the
client influence your schema design!

How idiotic.
. . . And of course they want you to do a turnkey
solution, they don't want to have to keep funding you to maintain
the normalized db even though they will keep coming up with new
functionality they want added in.

There are a lot more people like that out there than you'd think.


I never require my clients to know *anything* about Access.


and you give them packaged solutions that do very specific things
very well. That's great, but we're talking about different things
here. I've never met a client that wanted an Access app they
couldn't use for ad-hoc querying. Most of the time, in fact, they
want you to design an app that they will be able to maintain
themselves (so they get to be the in-house Access power user,
which is good for their careers).


I've had plenty of clients where I created denormalized queries for
them to use so that they would not have to know anything about the
normalized structure. With a properly normalized questionnaire
database, this would require a crosstab (thus, not editable), but
it's still perfectly doable.

Again, you pre-build what's necessary to allow them to do what they
want to do, rather than wasting time on trying to train them in the
behind-the-scenes complexities of the application's schema.
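
As a minimal sketch of that kind of pre-built, denormalized view (not
code from any of the apps discussed here), a saved crosstab created
through DAO is often all it takes; every name below -- tblResponses,
RespondentID, QuestionID, Answer, qryFlatResponses -- is hypothetical:

' Sketch only: save a read-only flat (crosstab) view of a normalized
' questionnaire table so the user never has to touch the schema.
Sub CreateFlatResponseQuery()
    Dim db As DAO.Database
    Dim strSQL As String

    Set db = CurrentDb()

    ' One row per respondent, one column per question
    strSQL = "TRANSFORM First(Answer) " & _
             "SELECT RespondentID " & _
             "FROM tblResponses " & _
             "GROUP BY RespondentID " & _
             "PIVOT QuestionID;"

    db.CreateQueryDef "qryFlatResponses", strSQL
    Set db = Nothing
End Sub

The client just opens qryFlatResponses and gets a spreadsheet-style
view without ever seeing the normalized tables behind it.
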
And it's also no big deal to design an ad hoc querying UI that
allows the users to pick and choose whichever variables they
want in their flat output file.
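
And the code behind that sort of ad hoc querying UI can be a sketch
as small as this -- a multi-select listbox of field names feeding a
generated SELECT. The form, control, query, and file names here
(lstFields, qryFlatResponses, qryAdHocOutput, the export path) are
all made up for illustration:

' Sketch only: export whatever fields the user picked in a
' multi-select listbox as a flat delimited file. Assumes a saved
' query named qryAdHocOutput already exists to hold the generated SQL.
Private Sub cmdExport_Click()
    Dim varItem As Variant
    Dim strFields As String
    Dim strSQL As String

    ' Collect the selected field names from the listbox
    For Each varItem In Me.lstFields.ItemsSelected
        strFields = strFields & ", [" & Me.lstFields.ItemData(varItem) & "]"
    Next varItem

    If Len(strFields) = 0 Then
        MsgBox "Pick at least one field.", vbExclamation
        Exit Sub
    End If

    ' Build the flat SELECT against the denormalized query
    strSQL = "SELECT " & Mid(strFields, 3) & " FROM qryFlatResponses;"
    CurrentDb.QueryDefs("qryAdHocOutput").SQL = strSQL

    ' Hand the user a plain CSV -- no schema knowledge required
    DoCmd.TransferText acExportDelim, , "qryAdHocOutput", _
        "C:\Temp\output.csv", True
End Sub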

very true, but then the client sees a lot of semi-object-oriented
VBA code that is completely inscrutable to someone weaned on
procedural software, and they get pissed because their neighbor
lost their retirement savings in the dotcom crash, and they think
that all IT professionals are money-grubbing leeches, and they
*know* they can handle the maintenance themselves.


Why would a client ever see any VBA code?


umm...because they're scientists and they are used to doing their
own programs in stuff like SAS and SPSS? Or because they read an
article in a magazine they picked up at the newsstand that was
raving about all the cool things you can do with VBA, and they
want to get in on the action? Or because they read somewhere on
Monster that the IT field is (was) one of the fastest-growing
professions, and they want some job-hopping flexibility?


I would *never* allow a client to touch the code in any of my
applications. If they *did*, then I would work only on an hourly
basis, and would provide no support whatsoever, except at an hourly
rate. If the application has been changed in ways that I do not know
about, then it's basically an entirely new application every time I
come to it.
Or because they know that their department is getting flat-funded
for the next three years, so doing IT stuff gives them a little
extra advantage over their peers when it comes time for layoffs?

This is the kind of behind-the-scenes stuff that freelancers
usually don't notice because they're not around while it's
happening.
Sounds like you're working for a very poorly managed organization. I
would seek clients elsewhere -- I have never had a project for a
poorly managed client that was a success. That includes one that the
client walked away from (well, they wouldn't pay me for work
completed that they had agreed to pay for before the work was
commenced, so I stopped work until they paid, and they never paid),
and another that I walked away from (the boss was micro-managing the
project, so I finally just gave up and said I couldn't work for
them; the basic application was finished, so this didn't leave them
in the lurch -- it just meant the next stage of the app never got
built). Sometimes the client makes it impossible for you to do your
job. In that case it can be better to walk away and potentially lose
money -- with the deadbeat clients, I lost over $2,000 that I'd
already paid to a subcontractor to complete a sub-module, in
addition to over $1,000 in the disputed bill and an additional
$4,000 or so for the final payment at completion of the project.

But I sure am glad I don't have to work with that client any more.
Fortunately, at the time it happened, I had other clients starting
up big projects, so I wasn't hurting financially.
The IT environment (or more specifically, the client environment)
is a lot different than it was back in the go-go days half a
decade ago.


I have absolutely no idea what you are talking about.

I'm currently involved in 4 different development projects, 1 new
(a revision of an amateur app), 1 semi-new (ongoing revisions to
a well-designed professionally developed app), and 2 old projects
(ongoing revisions to projects that I developed from scratch). I
have at least one other project on the drawing board, which may
be a revision of an amateur app or (preferably) a from-scratch
rebuild of the same app.


yep, sounds about right. But you notice that almost all of what
you are talking about is "maintenance" stuff, revising and
enhancing existing apps. I don't see a lot of genuinely "new"
projects to develop something that doesn't exist at all.


Practically all of my projects have been ports of one kind or
another, whether from Access or from some other database. My first
four projects, back in 1996-97, were a conversion from an Excel
spreadsheet, a conversion from Q&A, a conversion from dBase and a
conversion from a Clipper app. From an Access point of view, these
were all from-scratch, but with existing data, there were already a
lot of restrictions on what was possible.

My next big project (in late 1997) was one of only 3 that I've ever
had where I got to design the schema and the application itself (and
even two of those had dependencies on outside data sources that
determined a large part of the schema).

Now, since about 1999, most of the "conversions" I've been doing
have been from earlier versions of Access. This is natural, given
that no one after the release of Paradox for Windows and Access was
using dBase and Clipper and Q&A to create database applications. So,
starting in 1999 or so, I started getting requests to update
existing Access applications.

This is *exactly the same scenario* as I was seeing when I started
in 1996 -- companies with existing apps wanted them
revised/converted to do things they didn't do in their original
state. The only change is that the databases that I'm revising are
in Access, and, on the whole, much better for the clients' purposes
than the old non-Access apps that I was being asked to replace when
I first started.

Now, much of my time is spent maintaining my own clients' apps,
rather than seeking out new clients, but that is not because of the
current business climate, but because of the maturing of my own
database development business. In 1996 I was starting from scratch,
with no Access clients. By 2000, I had a half dozen or so clients
using my applications who needed ongoing maintenance. It is simply
the natural evolution of my own business that I shifted from
creating entirely new apps to maintaining existing apps.

In 1996-97, I took on 6 projects.

In 2004, I took on 3 new projects, but was also doing significant
maintenance work on 4 existing client applications. It was not a
good year financially, but that wasn't because I was not taking on
new projects -- I didn't *want* to take on new projects. The key
problem was that my existing clients had cut back their budgets for
maintenance. That has been changing, starting in fall 2004.
That's the massive difference between today and 2000. Back then it
was all "I need you to build me a program that will...". These
days it's "I need you to fix this thing that the guy I replaced
came up with three years ago, and it doesn't work anymore".


That's principally the kind of work I was doing in 1996-97, as well
-- the apps just weren't in Access.
Nothing about any of these apps is different now from when I started
doing Access development in 1996.


Doesn't sound like much growth there, to put it bluntly...


You were alleging a *decline*.

And database development is far from the only thing I do (I provide
basic system administration/support/training for small businesses and
individuals).

The decline in 2003-04 was because of the economy, and was mostly
caused by cutbacks on the part of my existing clients in their
budgets for maintenance/enhancement of their applications. One such
client went from $20K in 2001 to $35K in 2002 to $5K in 2003 and to
$0 in 2004. But that was partly the natural curve of the enhancement
project that we'd embarked upon. The big reduction in 2003 was an
intended cutback (though not for economic reasons -- that client was
wholly insulated from economics; their budgets are tied to the
political season, and 2003 was not an election year, so they had no
need for many enhancements).

My income has contracted, but it's not because clients don't *want*
to do Access development (new or revisions). It's because their
budgets have been cut and the money just isn't there. That's
changing. I'm just starting one project this month, have another
project ready to start and am working up a third project, while also
continuing to do major enhancements on two other existing clients'
apps.

The money is there again, and that's the issue.

Keep in mind that people doing web development have experienced a
much larger decline in demand for their services, and they aren't
using Access.
Clients want it to be easy to use.

But they want it to do all sorts of really special things in
terms of UI and output formats.

It's the job of the developer to walk them through the
cost/benefit structure of getting them what they *want* versus
getting them what they *need*.


well, in a large IT shop and/or large organization that is *not*
the developer's job, that's the PM's job.


Well, if that's not happening, then the PM is not doing his job.
And, as the developer, you should make that clear to the PM and the
PM's supervisor. You shouldn't be taking the heat for someone else's
failures.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #36
