Bytes IT Community

many columns in a table

Is it reasonable to have a table with about 300 columns? There will be
about 2500 rows.

The data does not normalize. The alternative would be about 30 tables
of about 10 columns each (all 30 tables with about 2500 rows). If I do
this, about half of the selects will be against one table, but the
other half will require joins of all 30.

Assume they're all varchar (mostly less than 32 characters, with about
half of the columns in any given row null).

Selects outnumber inserts/updates by about a thousand to one.

I will probably need an index on every column.
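To make the trade-off concrete, here is a minimal sketch of the wide-table design using Python's sqlite3 (in-memory): roughly 300 text columns, 2500 rows, about half the values NULL, and one single-column index to show the pattern. The column names c0..c299 are placeholders, not the real schema.

```python
import sqlite3
import random

NCOLS, NROWS = 300, 2500
cols = [f"c{i}" for i in range(NCOLS)]

con = sqlite3.connect(":memory:")
con.execute(
    f"CREATE TABLE wide (id INTEGER PRIMARY KEY, "
    f"{', '.join(c + ' TEXT' for c in cols)})"
)

# Fill ~2500 rows, leaving roughly half the cells NULL.
rng = random.Random(0)
rows = [
    tuple(f"v{r}_{i}" if rng.random() < 0.5 else None for i in range(NCOLS))
    for r in range(NROWS)
]
placeholders = ", ".join("?" * NCOLS)
con.executemany(
    f"INSERT INTO wide ({', '.join(cols)}) VALUES ({placeholders})", rows
)

# One index per searched column; only one shown here.
con.execute("CREATE INDEX idx_c0 ON wide (c0)")

(n,) = con.execute("SELECT COUNT(*) FROM wide WHERE c0 IS NOT NULL").fetchone()
print(n)  # roughly half of 2500
```

Most engines accept 300 columns (SQLite's default cap is 2000, PostgreSQL's is about 1600), so the design is at least possible; whether it is wise is the question the replies take up.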
Jul 19 '05 #1
2 Replies

It doesn't sound reasonable to me. Denormalizing to avoid a 30-table join
sounds fine, but with half the columns null, I think you've gone too far.

The alternative: my instinct says the table that gets half the selects
should stand on its own. The other 29 tables can likely be denormalized
into some handful of tables that are logically related.

e
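The middle ground suggested above can be sketched with Python's sqlite3: keep the frequently queried table on its own and fold the remaining 29 into a few logically related groups. The table and column names here are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hot    (id INTEGER PRIMARY KEY, a TEXT, b TEXT);  -- gets ~half the selects
CREATE TABLE group1 (id INTEGER PRIMARY KEY, c TEXT, d TEXT);  -- several original tables folded in
CREATE TABLE group2 (id INTEGER PRIMARY KEY, e TEXT, f TEXT);  -- the rest

INSERT INTO hot    VALUES (1, 'a1', 'b1');
INSERT INTO group1 VALUES (1, 'c1', 'd1');
INSERT INTO group2 VALUES (1, 'e1', 'f1');
""")

# Half the workload touches only `hot`; the other half joins a
# handful of tables instead of all 30.
row = con.execute("""
    SELECT h.a, g1.c, g2.e
    FROM hot h
    JOIN group1 g1 USING (id)
    JOIN group2 g2 USING (id)
""").fetchone()
print(row)  # ('a1', 'c1', 'e1')
```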

Jul 19 '05 #2

I missed your comment about needing an index on every column. Yikes! I don't
know enough (or care :-) what you're doing, but alarms should be going off.

e
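One quick sanity check behind those alarm bells, sketched with Python's sqlite3: at only ~2500 rows, a full table scan is already trivial, so 300 single-column indexes may buy very little while adding storage and write overhead. The numbers and names here are illustrative, not a benchmark.

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, c TEXT)")
con.executemany("INSERT INTO t (c) VALUES (?)",
                [(f"v{i}",) for i in range(2500)])

t0 = time.perf_counter()
hit = con.execute("SELECT id FROM t WHERE c = 'v1234'").fetchone()  # full scan, no index
elapsed = time.perf_counter() - t0
print(hit, f"{elapsed * 1000:.3f} ms")  # scanning 2500 small rows is near-instant
```

Given the 1000:1 read/write ratio, indexes on the handful of columns that are actually searched would be cheap; indexing all 300 "just in case" is the part that should set off alarms.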

Jul 19 '05 #3

This discussion thread is closed; replies have been disabled.