Bytes | Software Development & Data Engineering Community

Best practice? Web application: single PostgreSQL user vs. multiple users

I'm trying to get a feel for what most people are doing or consider best
practice.

Given a mod_perl application talking to a PostgreSQL database on the
same host, where different users are logging onto the web server using
LDAP for authentication, do most people

1) have the web server connecting to the database using its own user
account (possibly through ident), and controlling access to different
database entities strictly through the application itself

2) have the web server connecting to the database actually using the
user's account (possibly using LDAP authentication against PostgreSQL),
and controlling access to different database entities through GRANT, etc.

Obviously, (2) leads to more database connections, and you still have to
have the application do some work in terms of which forms are available
to which users, etc. But I'm a little worried about whether it's best
security practice.
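For concreteness, the two options map onto pg_hba.conf entries roughly like the following (a sketch only: database, user, and map names are invented, the exact line format varies slightly across PostgreSQL versions, and "LDAP authentication against PostgreSQL" typically goes through PAM here):

```
# Option 1: the web server connects as one application account,
# verified by ident against the OS account Apache runs as
local   appdb   webapp                 ident   webmap

# Option 2: each browser user has a matching PostgreSQL account,
# checked via PAM (which can in turn consult LDAP)
host    appdb   all     127.0.0.1/32   pam
```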

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster

Nov 22 '05 #1
Keith G. Murphy said:
2) have the web server connecting to the database actually using the
user's account (possibly using LDAP authentication against PostgreSQL),
and controlling access to different database entities through GRANT, etc.


My experience with java web/app servers indicates that for most setups
using a pool of connections is preferable to using a single connection per
connected user - it scales much better.

What you could consider is one or more pools which map to the "roles" that
your (web) app supports. For example, if a user needs "minimal rights"
access to db resources, then your cgi (request handler) accesses the data
using a connection from the "minimal rights" connection pool. A user
needing "greater rights" would have the cgi access the database from the
"greater rights" pool.
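The shape of this arrangement can be sketched in a few lines (Python purely for illustration; `connect_fn` stands in for whatever DB-API connect call your stack uses, bound to a role-specific database account):

```python
import queue

class RolePool:
    """A fixed-size pool of connections sharing one database role."""
    def __init__(self, connect_fn, size):
        self._conns = queue.Queue()
        for _ in range(size):
            self._conns.put(connect_fn())

    def acquire(self):
        return self._conns.get()     # blocks if all connections are busy

    def release(self, conn):
        self._conns.put(conn)

# One pool per application "role"; each connect_fn would log in as a
# different, appropriately GRANTed database account. object() is a
# stand-in connection for this sketch.
pools = {
    "minimal_rights": RolePool(lambda: object(), size=2),
    "greater_rights": RolePool(lambda: object(), size=2),
}

def handle_request(role, work):
    """Borrow a connection with the caller's role and run some work on it."""
    pool = pools[role]
    conn = pool.acquire()
    try:
        return work(conn)
    finally:
        pool.release(conn)
```

The request handler never chooses permissions itself; it only chooses which pool to borrow from, and the database account behind that pool bounds what the query can do.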

Normally, I place the database functions/tables/objects into different
logical schemas, then I create one or more specific users (for the web/app
server only) which equate to logical roles, and I grant specific rights
on the different schema objects to those users.
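A minimal sketch of that layout in SQL (all names here are illustrative, not from the thread):

```sql
-- One logical schema for the application's objects
CREATE SCHEMA app;

-- One database user per application role, used only by the web/app server
CREATE USER web_minimal WITH PASSWORD 'secret1';
CREATE USER web_greater WITH PASSWORD 'secret2';

GRANT USAGE ON SCHEMA app TO web_minimal, web_greater;

-- Grant each role-user only the rights its role needs
GRANT SELECT                 ON app.orders TO web_minimal;
GRANT SELECT, INSERT, UPDATE ON app.orders TO web_greater;
```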

Your mileage may vary.

John Sidney-Woollett


Nov 22 '05 #2
John Sidney-Woollett wrote:
Keith G. Murphy said:
2) have the web server connecting to the database actually using the
user's account (possibly using LDAP authentication against PostgreSQL),
and controlling access to different database entities through GRANT, etc.

My experience with java web/app servers indicates that for most setups
using a pool of connections is preferable to using a single connection per
connected user - it scales much better.

What you could consider is one or more pools which map to the "roles" that
your (web) app supports. For example, if a user needs "minimal rights"
access to db resources, then your cgi (request handler) accesses the data
using a connection from the "minimal rights" connection pool. A user
needing "greater rights" would have the cgi access the database from the
"greater rights" pool.

That sounds like an excellent compromise. How do you typically handle
the mechanics of authentication from web server to PostgreSQL on the
connect, using this scheme?
--
Why waste time learning when ignorance is instantaneous?
-- Hobbes

Nov 22 '05 #3
Keith G. Murphy said:
That sounds like an excellent compromise. How do you typically handle
the mechanics of authentication from web server to PostgreSQL on the
connect, using this scheme?


Sorry but I can't help you out here, I'm too much of a newbie with
Postgres - I was hoping that someone else would answer your part 1! :)

John


Nov 22 '05 #4
John Sidney-Woollett wrote:
Keith G. Murphy said:
That sounds like an excellent compromise. How do you typically handle
the mechanics of authentication from web server to PostgreSQL on the
connect, using this scheme?

Sorry but I can't help you out here, I'm too much of a newbie with
Postgres - I was hoping that someone else would answer your part 1! :)

John

Perhaps I can answer my own question. I could use ident and a map that
lists the web server username as able to map to the different "role"
usernames. Unfortunately, that still would allow the web server account
to "fake" role names.
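That ident-map approach would look roughly like this (map and account names invented for illustration), and the problem is visible in the config itself: nothing stops a process running as the web server's OS account from picking whichever mapped role it likes:

```
# pg_ident.conf: the OS user "www-data" may connect as either role account
# MAPNAME   IDENT-USERNAME   PG-USERNAME
webmap      www-data         web_minimal
webmap      www-data         web_greater

# pg_hba.conf entry referencing the map:
# local   appdb   web_minimal,web_greater   ident   webmap
```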

If the "real" PostgreSQL accounts do not coincide with the
browser-authenticated usernames, I don't see a good way to use PAM/LDAP
or another mechanism to require that PostgreSQL itself makes sure that
the given username and password are valid. Not saying that's a big
problem, but...

Hmmm, mightn't it be kind of nice if there were PAM or krb5 maps in
addition to ident maps?

Nov 22 '05 #5
Keith G. Murphy said:
Perhaps I can answer my own question. I could use ident and a map that
lists the web server username as able to map to the different "role"
usernames.
Someone else also mentioned, and I personally agree, that it's better to
authenticate in the application layer (using whatever technology takes
your fancy), and then use the webserver's generic/pooled connection to
communicate with the database.

Your user and role mapping info could be stored within the database, or
accessed from an LDAP server, or some such.
Unfortunately, that still would allow the web server account
to "fake" role names.


Make the application layer robust and secure and it may not be so much of
a problem.

John


Nov 22 '05 #6
On Tue, 13 Jan 2004, Keith G. Murphy wrote:
I'm trying to get a feel for what most people are doing or consider best
practice.

Given a mod_perl application talking to a PostgreSQL database on the
same host, where different users are logging onto the web server using
LDAP for authentication, do most people

1) have the web server connecting to the database using its own user
account (possibly through ident), and controlling access to different
database entities strictly through the application itself

2) have the web server connecting to the database actually using the
user's account (possibly using LDAP authentication against PostgreSQL),
and controlling access to different database entities through GRANT, etc.

Obviously, (2) leads to more database connections, and you still have to
have the application do some work in terms of which forms are available
to which users, etc. But I'm a little worried about whether it's best
security practice.


I do (1). Different language (PHP), but the same basic thing. All security
is handled by ACLs I build myself in PostgreSQL and interrogate via my own
application.

Nov 22 '05 #7
"Keith G. Murphy" <ke******@mindspring.com> writes:
Hmmm, mightn't it be kind of nice if there were PAM or krb5 maps in
addition to ident maps?


ISTM the whole point of PAM is that you plug in your desired security
policy outside of the application. You shouldn't be asking for more
security frammishes from Postgres, you should be off coding a PAM module
that does things exactly the way you want.

regards, tom lane


Nov 22 '05 #8
Tom Lane wrote:
"Keith G. Murphy" <ke******@mindspring.com> writes:
Hmmm, mightn't it be kind of nice if there were PAM or krb5 maps in
addition to ident maps?

ISTM the whole point of PAM is that you plug in your desired security
policy outside of the application. You shouldn't be asking for more
security frammishes from Postgres, you should be off coding a PAM module
that does things exactly the way you want.


I believe I see what you mean. Given the original premise, I imagine
you could have the PAM module do something like:

(1) Authenticate via LDAP using the user's username and password

(2) Look up the "role" name (real PostgreSQL username) via LDAP, using
the username

(3) Tell PostgreSQL that the user is authenticated under the role name.

I really hadn't thought much about how the PAM module might work.
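Those three steps can be mocked up in a few lines (Python for illustration only; the dict stands in for the LDAP directory a real PAM module would query, and all names are invented):

```python
# Stand-in for the LDAP directory; a real module would bind/search LDAP.
directory = {
    "keith": {"password": "s3cret", "role": "web_greater"},
    "guest": {"password": "guest", "role": "web_minimal"},
}

def authenticate_and_map(username, password):
    """Steps 1 and 2: verify the user's credentials, then look up the
    PostgreSQL role name they map to. Returns None on failure."""
    entry = directory.get(username)
    if entry is None or entry["password"] != password:
        return None
    return entry["role"]

# Step 3 would be the PAM module reporting success to PostgreSQL so that
# the session runs under the mapped role name.
```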

Nov 22 '05 #9
On Tue, Jan 13, 2004 at 11:15:30 -0600,
"Keith G. Murphy" <ke******@mindspring.com> wrote:
Perhaps I can answer my own question. I could use ident and a map that
lists the web server username as able to map to the different "role"
usernames. Unfortunately, that still would allow the web server account
to "fake" role names.
If you can't trust the web server account then you probably want to use
a system where cgi-bin programs are run as different users.

If you have untrusted users who can supply their own cgi-bin programs
then using a common uid which all cgi-bin programs run under isn't
secure.
If the "real" PostgreSQL accounts do not coincide to the
browser-authenticated usernames, I don't see a good way to use PAM/LDAP
or another mechanism to require that PostgreSQL itself makes sure that
the given username and password are valid. Not saying that's a big
problem, but...


I don't think using information received from the browser to authenticate
against the postgres server works when you can't be assured that the
cgi-bin program doing the checking is trustworthy.


Nov 22 '05 #10
On Tue, 13 Jan 2004, Keith G. Murphy wrote:
John Sidney-Woollett wrote:
Keith G. Murphy said:
2) have the web server connecting to the database actually using the
user's account (possibly using LDAP authentication against PostgreSQL),
and controlling access to different database entities through GRANT, etc.

My experience with java web/app servers indicates that for most setups
using a pool of connections is preferable to using a single connection per
connected user - it scales much better.

What you could consider is one or more pools which map to the "roles" that
your (web) app supports. For example, if a user needs "minimal rights"
access to db resources, then your cgi (request handler) accesses the data
using a connection from the "minimal rights" connection pool. A user
needing "greater rights" would have the cgi access the database from the
"greater rights" pool.

That sounds like an excellent compromise. How do you typically handle
the mechanics of authentication from web server to PostgreSQL on the
connect, using this scheme?


I create individual databases for unrelated projects (like, say, phonebook
and sales_projections) and then connect to each database as a different
artificial user, often named for the database. Then I usually wrap that in
an include file I just add at the top of each page, which connects with the
password (on systems using password authentication) or without a password
if I'm on a system using trust.

Then, any access by users is handled by ACLs I just build in a table in
that database.
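An application-level ACL table of the kind described here might look like this (schema invented for illustration):

```sql
CREATE TABLE acl (
    username  text NOT NULL,   -- the browser-authenticated user (or group)
    object    text NOT NULL,   -- the application resource being protected
    perm      text NOT NULL CHECK (perm IN ('read', 'write')),
    PRIMARY KEY (username, object, perm)
);

-- The application then gates each action with a lookup like:
-- SELECT 1 FROM acl WHERE username = ? AND object = ? AND perm = 'write';
```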

We authenticate with auth_ldap, so we always know the user's name / groups
etc...

Nov 22 '05 #11
On Tue, 13 Jan 2004, Keith G. Murphy wrote:
John Sidney-Woollett wrote:
Keith G. Murphy said:
2) have the web server connecting to the database actually using the
user's account (possibly using LDAP authentication against PostgreSQL),
and controlling access to different database entities through GRANT, etc.

My experience with java web/app servers indicates that for most setups
using a pool of connections is preferable to using a single connection per
connected user - it scales much better.

What you could consider is one or more pools which map to the "roles" that
your (web) app supports. For example, if a user needs "minimal rights"
access to db resources, then your cgi (request handler) accesses the data
using a connection from the "minimal rights" connection pool. A user
needing "greater rights" would have the cgi access the database from the
"greater rights" pool.

That sounds like an excellent compromise. How do you typically handle
the mechanics of authentication from web server to PostgreSQL on the
connect, using this scheme?


Just an addition: we do all our groups in LDAP too. Generally ACLs point
back to groups, not users. That way, if Billy Bob moves from finance to HR,
we just change his group memberships, not all the ACLs in all the
databases.

Nov 22 '05 #12
On Tue, 13 Jan 2004, Keith G. Murphy wrote:
I'm trying to get a feel for what most people are doing or consider best
practice.

Given a mod_perl application talking to a PostgreSQL database on the
same host, where different users are logging onto the web server using
LDAP for authentication, do most people

1) have the web server connecting to the database using its own user
account (possibly through ident), and controlling access to different
database entities strictly through the application itself

2) have the web server connecting to the database actually using the
user's account (possibly using LDAP authentication against PostgreSQL),
and controlling access to different database entities through GRANT, etc.

Obviously, (2) leads to more database connections, and you still have to
have the application do some work in terms of which forms are available
to which users, etc. But I'm a little worried about whether it's best
security practice.


I do #1. Most connections are not persistent, though I have done those
before as well. Security-wise, I've been reviewing the pros and cons
of this and so far I really can't make much of an argument for #2.
Just the opposite, in fact. The one thing I will be doing, though,
for more security, is to create a special webuser for that client.

---
Keith C. Perry
Director of Networks & Applications
Visions Communications Support Network, Inc.
ne******@vcsn.com
http://vcsn.com
---


Nov 22 '05 #13
scott.marlowe wrote:
On Tue, 13 Jan 2004, Keith G. Murphy wrote:

I'm trying to get a feel for what most people are doing or consider best
practice.

Given a mod_perl application talking to a PostgreSQL database on the
same host, where different users are logging onto the web server using
LDAP for authentication, do most people

1) have the web server connecting to the database using its own user
account (possibly through ident), and controlling access to different
database entities strictly through the application itself

2) have the web server connecting to the database actually using the
user's account (possibly using LDAP authentication against PostgreSQL),
and controlling access to different database entities through GRANT, etc.

Obviously, (2) leads to more database connections, and you still have to
have the application do some work in terms of which forms are available
to which users, etc. But I'm a little worried about whether it's best
security practice.

I do 1. different language (PHP) same basic thing though. All security
is handled by ACLS I build myself in Postgresql and interrogate via my own
application.

Thanks to all for the discussion so far. I have been doing option 1 so
far as well, but was a bit uncomfortable allowing the web server account
to have complete control over the database, and wondered what the
alternatives might be.

At some point, I may try rolling my own PAM module (as Tom Lane
suggested) that uses the user's browser-authenticated username and
password to map to a PostgreSQL username that constitutes a "role"
(assuming that's possible). The benefit I can see to such a scheme is
that it means anyone who might manage to run a process under the web
server's account would still have to have the appropriate username and
password to do anything to the database. And the limited number of
"role" PostgreSQL usernames would mean fewer connections than using the
original usernames, as John Sidney-Woollett pointed out.


Nov 22 '05 #14
Keith Murphy wrote:
At some point, I may try rolling my own PAM module (as Tom Lane
suggested) that uses the user's browser-authenticated username and
password to map to a PostgreSQL username that constitutes a "role"
(assuming that's possible).


One option is to add an extra layer of indirection: the web server
interacts with a "transaction server" through, e.g., XML-RPC or CORBA.

The list of transactions ("interactions") you can perform is controlled,
each transaction can be logged, and each transaction handler can have
its own access rights to the postgresql database.

The transaction server can be hardened by only allowing access from the
web server. This does mean that anyone breaking into your web server can
potentially alter data by interacting with the transaction server - but
only to the extent allowed by the existing transactions. They can't make
wholesale changes to your database such as "select * from
credit_card_details; truncate invoices; drop table accounts_receivable;"
etc.

Regards
Alex Satrapa

Nov 22 '05 #15
On Tuesday 13 January 2004 22:13, Keith G. Murphy wrote:
John Sidney-Woollett wrote:
What you could consider is one or more pools which map to the "roles"
that your (web) app supports. For example, if a user needs "minimal
rights" access to db resources, then your cgi (request handler) accesses
the data using a connection from the "minimal rights" connection pool. A
user needing "greater rights" would have the cgi access the database from
the "greater rights" pool.


That sounds like an excellent compromise. How do you typically handle
the mechanics of authentication from web server to PostgreSQL on the
connect, using this scheme?


Umm... I doubt role-specific pooling is required. You can use SET SESSION
AUTHORIZATION to switch the authorization of any connection.

Of course, you need to connect to the database as a superuser, though. That
is the real downside: if your web app breaks, the database cannot provide
any security.
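A sketch of that pattern (role names invented); the superuser requirement is exactly the risk noted above:

```sql
-- The pooled connection must initially be a superuser to do this
SET SESSION AUTHORIZATION web_minimal;
SELECT current_user;            -- now web_minimal; its GRANTs apply
-- ... run the request's queries under the restricted role ...
RESET SESSION AUTHORIZATION;    -- back to the pool's superuser before reuse
```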

Shridhar

Nov 22 '05 #16
Hi Keith and others,

Personally I am rather humble regarding my ability to write unassailable
programs from a security perspective, so I tend to use individual database
accounts for individual users. I know that under certain environments this
may not scale well and connection pooling may be required, but generally I
am not comfortable with that approach.

Basically my logic is that the most important security is where the
information is stored. For this reason I try to "push security back"
towards the database manager as far as possible. Then everything else can
be given as few permissions as possible (for example, the web application
itself has no access to the database apart from the user's credentials).
Usernames and passwords can be stored separately in order to reduce the
cost of compromise (e.g. HERMES stores the username in a cookie but the
password as a session variable), etc. These strategies are simply not
possible under the connection pooling scenario.

For me it comes down to the following question: What is the cost of
enforcing security yourself? If there is a security flaw in PostgreSQL,
there is NOTHING that will keep you safe, but why multiply single points of
security failure?

However, you have another problem in the scenario you describe: the actual
authentication occurs via LDAP. This changes the assumptions and security
environment a bit. If I had complete control over such an environment, I
would do one of the following things:

1: Use Kerberos to authenticate and LDAP to store profiles. LDAP and
PostgreSQL authentication are now handled by Kerberos and you can pass the
authentication token via the web app in mod_perl (not sure how to do it in
PHP though). For many browsers, this would allow for single signon and
transparent logins.

2: Have multiple accounts for different roles and store these in the LDAP
user profiles. This creates a large number of headaches (are the role
fields properly secured, for example).

3: Finally you could have a user profiles table which contained the allowed
accounts to use.

Best Wishes,
Chris Travers

Nov 22 '05 #17
> I do #1- most connections are not persistent though I have done those
> before as well. Security-wise I've been reviewing the pros and cons
> of this and so far I really can't make much of an argument for #2.
> Just the opposite in fact. The one thing I will be doing though,
> for more security, is to create a special webuser for that client.


My concern with connection pooling is that the application itself must run
with permission to do anything that any user may do in the application. If
you have a way of using postgresql native accounts, you can actually force
the application to have no rights to the database at all unless the
credentials are supplied by a user. The application then has NO rights to
anything that the user doesn't, and this may allow for better security.

The real drawback for multiple accounts is that each account needs to be
maintained. In environments where this is an issue, I usually create
functions to do this so that my information does not get out of sync.
However, I can imagine this being a concern, security wise.

Best Wishes,
Chris Travers

Nov 22 '05 #18
Quoting Chris Travers <ch***@travelamericas.com>:
I do #1- most connections are not persistent though I have done those
before as well. Security-wise I'm been reviewing the pros and cons
of this and so far I really can't make much of an argument for #2.
Just the opposite in fact. The one thing I will be doing though,
for more security, is create a special webuser for that client.

My concern with connection pooling is that the application itself must run
with permission to do anything that any user may do in the application. If
you have a way of using postgresql native accounts, you can actually force
the application to have no rights to the database at all unless the
credentials are supplied by a user. The application then has NO rights to
anything that the user doesn't, and this may allow for better security.


I think we might be talking about two different things, unless you are
equating persistent connections with connection pooling. Perhaps that is
correct, but let me explain a little more what I was talking about. It
might be a little off-topic, but I want to be clear in the archives.

My particular environment is Linux, Apache, mod_perl and PostgreSQL. The
user and group that the web server runs as has no permissions to anything.
When a client's app needs a DB, we create an account in PG for them and
grant permissions to the web user for the objects it needs access to.

In the mod_perl script, if I do persistent connections, the first thing
that is done is a check for a previous connection to **that** client's
database (via a global variable). If not (or if that connection is "busy"),
a new one is created. The maximum number of connections would be equal to
the maximum number of Apache listeners. That's assuming you don't use other
application logic to control it. For those who aren't familiar with
mod_perl, the benefit is that not only does your script run as an object in
the Apache server, but the database connection objects will persist. When
you don't do persistent connections, the difference is that your
application object in Apache has to open the database connection every
time. For local connections to PostgreSQL, I really haven't seen that much
of a difference, but in my "test" case, I'm pretty sure I wasn't
implementing the persistent-ness of the script correctly and the
application was not being hit with any significant load.
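The per-database persistent-connection check described above reduces to a small cache keyed by client database (Python for illustration; in mod_perl this would be a global hash holding DBI handles):

```python
# One cached connection per client database, per worker process.
_conns = {}

def get_conn(dbname, connect_fn):
    """Reuse the previous connection to this database if one exists,
    otherwise open (and remember) a new one."""
    conn = _conns.get(dbname)
    if conn is None:
        conn = _conns[dbname] = connect_fn()
    return conn
```

The "busy" check from the description is omitted for brevity; a real version would also test that the cached handle is idle and still alive before reusing it.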

I thought connection pooling was more generic- any connection from the web
server/application business logic could be reused for any db connection. Please
correct me if I'm wrong here.

Depending on the application, the authorization/authentication would be
wrapped in an SSL transport (HTTPS instead of HTTP). After that, each
connection is "tracked" (via Apache cookie-tracking and the pgAuth module I
wrote) by always checking the validity of the client's browser cookie. The
authorization pairs for the client's application are stored in a user table
in their database, and the cookie gets linked to that and is valid only for
that browser session (i.e. as long as the browser instance is open and you
did not "log out"). No direct information about the database is revealed to
the client interface (web browser) at all.
The real drawback for multiple accounts is that each account needs to be
maintained. In environments where this is an issue, I usually create
functions to do this so that my information does not get out of sync.
However, I can imagine this being a concern, security wise.
It's the pros and cons of PG having its own accounts. Definitely a security
benefit, which generally means a headache somewhere else; in this case,
double accounts unless, like you said, you have a way to keep things in
sync. For something secure, I don't mind the headache... Advil is my
friend :)
Best Wishes,
Chris Travers

--
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com



Nov 22 '05 #19
Mensaje citado por "scott.marlowe" <sc***********@ihs.com>:
1) have the web server connecting to the database using its own user
account (possibly through ident), and controlling access to different
database entities strictly through the application itself
[snip]
I do 1. different language (PHP) same basic thing though. All security
is handled by ACLS I build myself in Postgresql and interrogate via my own
application.


In the case of PHP there is very nice ACL stuff in PEAR, like PEAR::Auth.

I personally do all the auth stuff by myself (a table with users, a table
with functions, a table with permissions, etc.), just like Scott. :-)

--
select 'mmarques' || '@' || 'unl.edu.ar' AS email;
---------------------------------------------------------
Martín Marqués | Programador, DBA
Centro de Telemática | Administrador
Universidad Nacional
del Litoral
---------------------------------------------------------


Nov 22 '05 #20
Hi Keith, et al.

> I think we might be talking about two different things- unless you are
> equating persistent connections with connection pooling.

Ok, but connection pooling necessitates "account pooling", which is the
first approach.

> My particular environment is Linux, Apache, mod_perl and PostgreSQL. The
> user and group that the web server runs as has no permissions to anything.
> When a client's app needs a DB, we create an account in PG for them and
> grant permissions to the web user for the objects it needs access to.

This sounds very much like the "one account per role" compromise suggested
earlier.

> In the mod_perl script, if I do persistent, the first thing that is done
> is a check for a previous connection to **that** client's database (via a
> global variable). If not (or if that connection is "busy") a new one is
> created. The maximum number of connections would be equal to the maximum
> number of Apache listeners.

Ok, I see what you are talking about here.

Does this mean, then, that the mod_perl scripts must have access to be able
to log into the database as any user even without the user supplying
credentials? i.e. are the credentials for the database itself provided by
the user or by configuring the app? It sounds to me like you then have to
check the username and password against a table in your database, requiring
a pre-existing connection.

Of course if all users at a client have the same rights, then your approach
is similar to mine. If not, then the db provides little security if the app
breaks.
> I thought connection pooling was more generic- any connection from the web
> server/application business logic could be reused for any db connection.
> Please correct me if I'm wrong here.

You are right, but I generally think that the account pooling approach is
mostly important if you are also pooling connections. At least with one
session per user, you can do better enforcement. For example, here is how
my largest application handles it.

HERMES (http://hermesweb.sourceforge.net) calls a PHP script (using its own
PAM model) to authenticate the user based on username and password, though I
should change this to make it more general. Currently two modes are
supported on PostgreSQL: DB Native and Shared Account.

DB Native is the preferred access method. In this method, the username and password supplied to the application correspond to a database account, and authentication is handled by logging into the database with them. Permissions are then enforced by the database-level permissions. The only drawback to this approach is that HERMES uses its own permission catalogs, which allow administrators to assign consistent permissions to related groups of tables. In this mode, those permissions need to be compiled, or activated, into database permissions before they take effect, and users retain whatever database-level permissions they happen to have (including create table and temp permissions).
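In DB Native mode the enforcement is just ordinary PostgreSQL GRANT/REVOKE, so the "compiled" catalog permissions would presumably end up as statements along these lines (table and role names made up for illustration):

```sql
-- Permissions compiled from the application's own catalogs become
-- plain database-level grants on the real tables:
REVOKE ALL ON invoices FROM PUBLIC;
GRANT SELECT, INSERT ON invoices TO clerks;
GRANT SELECT, INSERT, UPDATE, DELETE ON invoices TO managers;
```

Once activated this way, the database enforces them for every connection, regardless of which application (or psql session) the user comes in through.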

In shared account mode, things are handled differently (requiring only two accounts). The user provides a username and password, which are passed to the login function; it logs in with a shared account and runs the authenticate(username, password) stored procedure (which runs as security definer). This function records the authenticated user for the current connection in a temporary table (which would also allow a su-like function, though that is not currently supported). The shared user does NOT have permission to write to this table.

Permissions are then enforced via triggers, which first check current_user against a list of database users exempted from trigger-based permissions enforcement. For everyone else, the authenticated username is used to check insert, update, or delete permissions directly against the HERMES permission catalogs.
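A trigger of the kind described might be sketched like this in PL/pgSQL (the `perm_exempt_users` table and `app_user_may` function are invented for illustration; HERMES's actual code certainly differs):

```sql
CREATE FUNCTION enforce_write_perms() RETURNS trigger AS $$
BEGIN
    -- Database users on the exemption list (e.g. backup accounts)
    -- bypass the application-level check entirely.
    IF EXISTS (SELECT 1 FROM perm_exempt_users
               WHERE usename = current_user) THEN
        RETURN NEW;
    END IF;
    -- Otherwise, check the application-authenticated user recorded for
    -- this connection against the permission catalogs.
    IF NOT app_user_may(TG_OP, TG_TABLE_NAME) THEN
        RAISE EXCEPTION 'permission denied for %', TG_TABLE_NAME;
    END IF;
    RETURN NEW;  -- a DELETE trigger would RETURN OLD instead
END;
$$ LANGUAGE plpgsql;
```

The exemption list is what makes the scheme compatible with pg_dump and similar tools, as noted later in the post.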

Select permissions are enforced by moving the tables (when installing the shared-hosting layer) into a shadow schema and revoking the shared user's permission to select from them. Views are then created in place of the tables; they expose oids as well as the normal columns, and perform a check similar to the one the insert/update/delete triggers do. Update, insert, and delete functions pass changes back to the underlying table either by oid or by primary key (unsupported at the moment, but being worked on).
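The shadow-schema arrangement could look roughly like this (schema, role, and function names are hypothetical, and `app_user_may` is the same invented permission-check stand-in as above):

```sql
-- The real table lives in a schema the shared account cannot read:
ALTER TABLE invoices SET SCHEMA shadow;
REVOKE ALL ON shadow.invoices FROM shared_user;

-- A view of the same name exposes the oid plus the normal columns,
-- filtered through the same check the write triggers apply:
CREATE VIEW invoices AS
    SELECT i.oid, i.*
    FROM shadow.invoices i
    WHERE app_user_may('SELECT', 'invoices');
```

Because the shared account can only reach the view, every select is forced through the permission check, while exempt accounts can still query `shadow.invoices` directly.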

The major restrictions here are a performance hit, and the fact that the shared user must not have create table or temp permissions in the current database. However, because a list of database users that bypass the trigger-based permissions is maintained, automated backup tools can still be used. The other account MUST have temp permissions (and preferably create table permissions too).

Of course, this choice of approaches also requires that user creation, permissions activation, etc. are all handled by stored procedures, though most of that logic will probably be moved back into triggers.

Best Wishes,
Chris Travers

Nov 22 '05 #21
