many columns in a table

Is it reasonable to have a table with about 300 columns? There will be
about 2500 rows.

The data does not normalize. The alternative would be about 30 tables
of about 10 columns each (all 30 tables with about 2500 rows). If I do
this, about half of the selects will be against one table, but the
other half will require joins of all 30.

Assume they're all varchar (mostly less than 32 characters, with about
half of the columns in any given row null).

Selects outnumber inserts/updates by about a thousand to one.

I will probably need an index on every column.
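
For concreteness, here is a rough SQL sketch of the two layouts being weighed. The table and column names (item_wide, item_group_01, attr_001, item_id) are hypothetical stand-ins, and dialect details vary by database:

-- Rough sketch only: names are hypothetical stand-ins for the real schema.

-- Option A: one wide table, ~300 mostly-short varchar columns.
CREATE TABLE item_wide (
    item_id  INTEGER PRIMARY KEY,
    attr_001 VARCHAR(32),
    attr_002 VARCHAR(32),
    -- ... attr_003 through attr_299, about half NULL in any given row
    attr_300 VARCHAR(32)
);

-- Option B: ~30 narrow tables of ~10 columns each, sharing one key.
CREATE TABLE item_group_01 (
    item_id  INTEGER PRIMARY KEY,
    attr_001 VARCHAR(32),
    -- ... attr_002 through attr_009
    attr_010 VARCHAR(32)
);
-- item_group_02 ... item_group_30 would follow the same pattern.

-- The "other half" of the selects would then join all 30 groups:
SELECT g01.item_id, g01.attr_001, g30.attr_300
FROM item_group_01 g01
JOIN item_group_02 g02 ON g02.item_id = g01.item_id
-- ... JOIN item_group_03 through item_group_29 the same way
JOIN item_group_30 g30 ON g30.item_id = g01.item_id
WHERE g01.attr_001 = 'some value';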
Jul 19 '05 #1
It doesn't sound reasonable to me. Denormalizing to avoid a 30-table join
sounds fine, but with half the columns null, I think you've gone too far.

The alternative: my instinct says the table that takes half of the selects
should stand on its own. The other 29 tables can likely be denormalized
into a handful of logically related tables.
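
A rough sketch of what that might look like; the groupings and names below are purely illustrative, not a claim about how the data actually splits:

-- Illustrative only; names and groupings are made up.

-- The table that takes half of the selects stays on its own:
CREATE TABLE item_core (
    item_id  INTEGER PRIMARY KEY,
    attr_001 VARCHAR(32)
    -- ... the ~10 columns those frequent selects actually need
);

-- The other 29 groups collapse into a few logically related tables,
-- say three or four instead of twenty-nine:
CREATE TABLE item_details_a (
    item_id  INTEGER PRIMARY KEY REFERENCES item_core (item_id),
    attr_011 VARCHAR(32)
    -- ... the related columns from several of the old groups
);

CREATE TABLE item_details_b (
    item_id  INTEGER PRIMARY KEY REFERENCES item_core (item_id),
    attr_101 VARCHAR(32)
    -- ...
);

-- Half the queries touch item_core alone; the rest join three or
-- four tables instead of thirty.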

e

Jul 19 '05 #2
I missed your comment about needing an index on every column. Yikes! I don't
know enough about (or care :-) what you're doing, but alarms should be going off.
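
For what it's worth, at roughly 2,500 short rows the whole table is tiny, so before creating 300 indexes it may be worth looking at the plan for a typical query. A hypothetical check, reusing the made-up names from above (EXPLAIN syntax and output vary by database):

-- Hypothetical check; EXPLAIN syntax and output vary by database.
EXPLAIN
SELECT item_id, attr_042
FROM item_wide
WHERE attr_042 = 'some value';

-- If a full scan over ~2,500 rows is already fast enough, per-column
-- indexes mostly add write and maintenance cost without helping reads.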

e

Jul 19 '05 #3

