Bytes IT Community

PHP or PostgreSQL problem?

P: 15
I'm running Sambar 7.0, PHP 5.2 and PostgreSQL 8.2 on WinXP. I built a database that's fed with form data and created summaries with views and PHP scripts. Pulling a web summary with a year's worth of data takes less than 20 seconds. I cloned the database two different ways: one used pg_dump out of the existing database and pg_restore into the new one; the other was to manually execute the SQL to create all the objects. The same summary script, edited only to point at the other database, takes more than 20 minutes to display, no matter how the clone was created. Running the queries in pgAdmin gives results in milliseconds, so I'm assuming the problem is a quirk in PHP. I've searched all the logs and can find no errors, warnings or notices. The Postgres and PHP documentation indicate there shouldn't be a problem. Ideas?
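One way to see whether the slow path is PHP or the query planner is to run the same query through the PHP driver with EXPLAIN ANALYZE and time it. A minimal sketch, assuming the pg_* extension and a hypothetical view name `summary_view` and local connection credentials:

```php
<?php
// Sketch: get the planner's view of the same query from inside PHP,
// so its plan and timing can be compared against pgAdmin's.
// The connection string and view name "summary_view" are assumptions.
$conn = pg_connect("host=localhost dbname=summary_db user=postgres")
    or die("connect failed: " . pg_last_error());

$start = microtime(true);
$result = pg_query($conn, "EXPLAIN ANALYZE SELECT * FROM summary_view");
while ($row = pg_fetch_row($result)) {
    echo $row[0] . "\n";   // one line of the query plan per row
}
printf("elapsed: %.3f s\n", microtime(true) - $start);
pg_close($conn);
```

If the plan printed here shows sequential scans while pgAdmin shows index scans, the difference is in how the two clients' sessions are set up, not in PHP itself.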
Mar 7 '08 #1
9 Replies


ronverdonk
Expert 2.5K+
P: 4,258
Welcome to the Scripts!

Show us some of that PHP code you use. And please within appropriate code tags!

Ronald
Mar 10 '08 #2

P: 15
I really appreciate the quick response! After finding the statistics portion of PostgreSQL, I've figured out that the second database isn't scanning the indexes that were set up. That would explain why the PHP script works but is extremely slow. Why would Postgres use the indexes in one database but not the other?
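One possible explanation, offered as a guess rather than a confirmed diagnosis: a freshly cloned database has no planner statistics until ANALYZE has run on it (pg_dump/pg_restore copy the schema and data but not the gathered statistics), so the planner may fall back to sequential scans until the tables are analyzed. A sketch, assuming the clone is named `summary_copy`:

```php
<?php
// Sketch: refresh planner statistics on the cloned database so the
// planner can consider the indexes again. The database name
// "summary_copy" and the connection details are assumptions.
$conn = pg_connect("host=localhost dbname=summary_copy user=postgres")
    or die("connect failed: " . pg_last_error());
pg_query($conn, "VACUUM ANALYZE");   // collect statistics for every table
pg_close($conn);
```

The same statement can of course be run once from pgAdmin or psql instead of PHP.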
Mar 11 '08 #3

RoninOni
P: 9
Configuration, most likely. I would set up one database with all the configuration, users, etc. in place; then, when you need a new one, export it, rename it (with PHP, even) and import it, guaranteeing that all your settings are there each and every time.

I'm currently setting up a system with Postgres where I have a template schema that PHP exports; it then parses the export file, replacing every "schemaTemplate" with the user name, giving each user an identically built schema.
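The export/rename/import workflow described above can be sketched roughly as follows. Everything here is an assumption except the idea itself: the database name `mydb`, the user `alice`, the file paths, and the credentials are all hypothetical.

```php
<?php
// Sketch of the template-schema workflow: dump one schema, rewrite its
// name to the new user's name, and load the result back in.
// Database name, paths, credentials and user name are assumptions.
$user = "alice";                        // hypothetical new user
$dump = "/tmp/schemaTemplate.sql";
$load = "/tmp/schema_{$user}.sql";

// 1. Export only the template schema as plain SQL.
exec("pg_dump -U postgres -n schemaTemplate -f " . escapeshellarg($dump) . " mydb");

// 2. Replace every occurrence of the template name with the user name.
$sql = file_get_contents($dump);
file_put_contents($load, str_replace("schemaTemplate", $user, $sql));

// 3. Load the rewritten schema back into the same database.
exec("psql -U postgres -d mydb -f " . escapeshellarg($load));
```

One caveat with the plain text replacement: it will also rewrite the string if it happens to appear inside data, so an unambiguous template name is worth choosing.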

Schemas are pretty awesome, and depending on your needs they may very well be the answer rather than using multiple DBs.
Mar 11 '08 #4

P: 15
So from PHP, how do you address each schema if the tables and queries have the same names? The deeper I get into this, the harder it gets. I noticed that someone in another forum created two identical databases in a cluster and can connect to one but not the other. That doesn't bode well for me.

Executing the views in both databases with pgAdmin III, they run comparably, in milliseconds. Pulling the data through a web server with PHP gives a full page from the first database in 20 seconds or so; doing the same with the second, identical database takes 20 - 30 minutes. The PHP code is identical with the exception of the database name. The indexes are being hit in the first database but not the second. Configuration settings appear to be global, and so far small changes have yielded some improvement but have not closed the gap in performance.

There's not much documentation that addresses multiple databases in a cluster, and what there is simply indicates they should perform comparably. I'm finding no documentation at all on how to configure an individual database within a cluster. I created the second database twice: once by using pg_dump out of the first and pg_restore into the second, so the settings should be the same, and once by running the SQL from the first database manually to build each object. The performance of both copies is the same.... dismal. Why be able to build two databases if only the first one works correctly? No errors, no log entries.... just horrible performance.
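On the question of addressing identically named tables in different schemas from PHP, there are two standard approaches: schema-qualify the table name, or set the session's search_path once after connecting. A sketch, where the schema `alice` and table `monthly_summary` are hypothetical names:

```php
<?php
// Sketch: two ways to point identical table names at different schemas.
// The schema "alice" and table "monthly_summary" are assumptions.
$conn = pg_connect("host=localhost dbname=mydb user=postgres")
    or die("connect failed: " . pg_last_error());

// Option 1: qualify the table with its schema explicitly.
$r1 = pg_query($conn, "SELECT * FROM alice.monthly_summary");

// Option 2: set the search path once per session, then use
// unqualified names everywhere else in the script.
pg_query($conn, "SET search_path TO alice, public");
$r2 = pg_query($conn, "SELECT * FROM monthly_summary");

pg_close($conn);
```

With option 2 the rest of the PHP code can stay identical across users; only the one SET statement needs the schema name substituted in.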
Mar 11 '08 #5

P: 15
Maybe I should post this to the PostgreSQL forum?
Mar 11 '08 #6

RoninOni
P: 9
This question does belong in the postgres forum, but maybe this will help you get your answer.

Postgres users have a default schema that they query and, by default (I believe), that schema has the same name as the user. Therefore, if you copy the template schema, replacing the default schema name with the user name, that user should always query their own schema. If a table is not in their schema, Postgres then checks the public schema (if you have one).
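This behavior comes from PostgreSQL's stock search_path of `"$user", public`: a schema named after the login role is consulted first, then public. A sketch of setting that up for a hypothetical role `bob` (role name and password are assumptions):

```php
<?php
// Sketch: create a role and a schema with the same name, so that the
// default search_path ("$user", public) resolves bob's unqualified
// table names to schema "bob" first. Role name/password are assumptions.
$conn = pg_connect("host=localhost dbname=mydb user=postgres")
    or die("connect failed: " . pg_last_error());

pg_query($conn, "CREATE ROLE bob LOGIN PASSWORD 'secret'");
pg_query($conn, "CREATE SCHEMA AUTHORIZATION bob");  // schema named "bob"
// When bob connects, unqualified names resolve to schema bob first,
// falling back to public for anything not found there.

pg_close($conn);
```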

Ask in the Postgres forum how to set this sort of thing up.
Mar 12 '08 #7

ronverdonk
Expert 2.5K+
P: 4,258
I will move this to the PostgreSQL forum.

moderator
Mar 12 '08 #8

P: 15
The plot thickens. I created a schema in the original database and duplicated all the tables and views, then imported all the data from the Access database. Performance of the newly created schema is the same as the duplicated databases'. This time, however, the performance of the original schema suffered as well, doubling the pull time. Dropping the new schema restored the original schema's performance. I give up. There's no documentation that helps, and it appears no one here knows what the problem is. I'll be uninstalling Postgres and going back to Access. At least it works and doesn't require a master's degree to make it work. I've wasted days on this.
Mar 14 '08 #9

P: 15
I figured it out. It's all about how PHP and Postgres use memory.
Mar 21 '08 #10
