Bytes IT Community

Optimal architecture for a postgres DB?

Hi all

I'm putting together a web app and starting to think about the database architecture. I'm wondering what kind of setup will give the fastest retrieval times. Here's a quick run-down:

The app is effectively a checklist, with 100-200 items that will be common to all users. Users will not be able to add new items to this list. So that's one table, with 100-200 rows.

Now, each new user will be added with values for each of these rows, in the form of:
[primary key] [foreign key to user table] [foreign key to above table] [boolean] [varchar]
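As a sketch, the two tables described above might be defined like this (all table and column names here are made up for illustration, including the assumed `users` table):

```sql
-- Shared checklist of 100-200 items, common to all users.
CREATE TABLE checklist_item (
    item_id     serial PRIMARY KEY,
    description varchar NOT NULL
);

-- One row per (user, item) pair holding that user's state:
-- [primary key] [FK to users] [FK to checklist_item] [boolean] [varchar]
CREATE TABLE user_item (
    user_item_id serial PRIMARY KEY,
    user_id      integer NOT NULL REFERENCES users (user_id),
    item_id      integer NOT NULL REFERENCES checklist_item (item_id),
    done         boolean NOT NULL DEFAULT false,
    note         varchar
);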

I'm wondering what best practices say about how to structure the second table.

Should I have one big table and use SQL statements to pull out each user's values (100-200 rows in this table per user; a typical query would retrieve 20-30 rows at a time)?
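With this single-table option, one user's rows would come back from an ordinary filtered query, something like the following sketch (table, column, and index names are hypothetical; the index on the user foreign key is what keeps the lookup fast):

```sql
-- Index the user foreign key so per-user lookups don't scan the table.
CREATE INDEX user_item_user_id_idx ON user_item (user_id);

-- Pull a batch of one user's rows (e.g. user 42, 30 at a time).
SELECT item_id, done, note
FROM user_item
WHERE user_id = 42
LIMIT 30;
```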

OR, should I keep the same one big table as above, but add a new view for each new user (so that their view exposes only their own data), then query the view?
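The per-user-view option would look roughly like this (again, names are made up for illustration; one such view would be created per user):

```sql
-- A view exposing only one user's rows from the shared table.
CREATE VIEW user_42_items AS
    SELECT item_id, done, note
    FROM user_item
    WHERE user_id = 42;

-- The app then queries the view instead of the base table.
SELECT * FROM user_42_items;
```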

OR, should I create a new table for each user to isolate their data, and query that user's table?

What this basically boils down to is: will Postgres perform better when selecting/updating one table with 1,000,000 - 2,000,000 rows (10,000 users x 100-200 rows), or when selecting/updating a table with 100-200 rows in a database containing 10,000 other tables of the same size?

Thanks in advance
Sep 3 '07 #1