Bytes | Software Development & Data Engineering Community

postgresql + apache under heavy load

Hello,
I am testing a web application (using the DBX PHP functions to call a
PostgreSQL backend).
I have 375 MB of RAM on my test home box.
I ran ab (apache benchmark) to test the behaviour of the application
under heavy load.
When increasing the number of requests, all my memory is filled, and the
Linux server begins to cache and remains frozen.

ab -n 100 -c 10 http://localsite/testscript
behaves OK.

If I increase to
ab -n 1000 -c 100 http://localsite/testscript
I get this memory problem.

If I eliminate the connection to the (UNIX) socket of Postgresql, the
script behaves well even under very high load (and of course with much
less time spent per request).

I tried to change some parameters in postgresql.conf
max_connections = 32
to max_connections = 8

and

shared_buffers = 64
to shared_buffers = 16

without success.

I tried to use pmap on httpd and postmaster Process ID but don't get
much help.

Does anybody have some idea to help to debug/understand/solve this
issue? Any feedback is appreciated.
To me, it would not be a problem if the box is very slow under heavy
load (DoS like), but I really dislike having my box out of service after
such a DoS attack.
I am looking for a way to limit the memory used by postgres.

Thanks
Alex
---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match

Nov 22 '05 #1
On Wed, 21 Jan 2004, Alex Madon wrote:
Hello,
I am testing a web application (using the DBX PHP function to call a
Postgresql backend).
I'm not familiar with DBX. Is that connection pooling or what?
I have 375Mb RAM on my test home box.
I ran ab (apache benchmark) to test the behaviour of the application
under heavy load.
When increasing the number of requests, all my memory is filled, and the
Linux server begins to cache and remains frozen.
Are you SURE all your memory is in use? What exactly does top say about
things like cached and buff memory (I'm assuming you're on linux, any
differences in top on another OS would be minor.) If the kernel still
shows a fair bit of cached and buff memory, your memory is not getting all
used up.
ab -n 100 -c 10 http://localsite/testscript
behaves OK.
Keep in mind, this is 10 simultaneous users beating on the machine
continuously. That's functionally equivalent to about 100 to 200 people
clicking through pages as fast as people can.
If I increase to
ab -n 1000 -c 100 http://localsite/testscript
I get this memory problem.
Where's the break point? Just wondering. Does it show up at 20, 40, 60,
80, or only at 100? If so, that's really not bad.
If I eliminate the connection to the (UNIX) socket of Postgresql, the
script behaves well even under very high load (and of course with much
less time spent per request).
Of course: the database is the most expensive part, CPU- and memory-wise,
of an application written on Apache/PHP.
I tried to change some parameters in postgresql.conf
max_connections = 32
to max_connections = 8
Wrong direction. The number of connections PostgreSQL CAN create costs
very little, and even the connections it actually does create cost very
little while idle. Have you checked to see if ab is getting valid pages,
and not "connection failed, too many connections already open" pages?
shared_buffers = 64
to shared_buffers = 16
Way the wrong way. Shared buffers are the maximum memory all the backends
together share. The old setting was 512 kB of RAM; now you're down to
128 kB. While 128 kB would be a lot of memory for a Commodore 128, for a
machine with 384 MB of RAM it's nothing. Since this is a TOTAL shared
memory setting, not a per-process thing, you can hand it a good chunk of
RAM and not usually worry about it. Set it to 512 and just leave it.
That's only 4 MB of shared memory; if your machine is running that low,
other things have gone wrong.
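Scott's figures follow from the fact that shared_buffers in these 7.x-era configs counts 8 kB disk pages (the default block size), not bytes. A minimal sketch of the arithmetic:

```python
# shared_buffers in 7.x-era postgresql.conf is a count of 8 kB pages,
# so the total shared memory cost is simply pages * 8 kB.
PAGE_KB = 8  # default PostgreSQL block size

def shared_buffers_kb(pages: int) -> int:
    """Shared memory (kB) consumed by a given shared_buffers setting."""
    return pages * PAGE_KB

print(shared_buffers_kb(64))   # 512 kB: the original setting
print(shared_buffers_kb(16))   # 128 kB: the reduced setting
print(shared_buffers_kb(512))  # 4096 kB = 4 MB: the suggested value
```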
without success.

I tried to use pmap on httpd and postmaster Process ID but don't get
much help.

Does anybody have some idea to help to debug/understand/solve this
issue? Any feedback is appreciated.
To me, it would not be a problem if the box is very slow under heavy
load (DoS like), but I really dislike having my box out of service after
such a DoS attack.
Does it not come back? That's bad.

I am looking for a way to limit the memory used by postgres.


Don't. It's likely not using too much.

What does top say is the highest memory user?

Nov 22 '05 #2
Alex Madon wrote:
Hello,
I am testing a web application (using the DBX PHP function to call a
Postgresql backend).
I have 375Mb RAM on my test home box.
I ran ab (apache benchmark) to test the behaviour of the application
under heavy load.
When increasing the number of requests, all my memory is filled, and
the Linux server begins to cache and remains frozen.

ab -n 100 -c 10 http://localsite/testscript
behaves OK.

If I increase to
ab -n 1000 -c 100 http://localsite/testscript
I get this memory problem.

We would need a lot more information. What version of Linux? What
version of the kernel? What is your shmmax setting?
What is your sort_mem setting? Did you use top to see where the hang-up is?
Are there any messages in /var/log/messages?

Sincerely,

Joshua D. Drake

If I eliminate the connection to the (UNIX) socket of Postgresql, the
script behaves well even under very high load (and of course with much
less time spent per request).

I tried to change some parameters in postgresql.conf
max_connections = 32
to max_connections = 8

and

shared_buffers = 64
to shared_buffers = 16

without success.

I tried to use pmap on httpd and postmaster Process ID but don't get
much help.

Does anybody have some idea to help to debug/understand/solve this
issue? Any feedback is appreciated.
To me, it would not be a problem if the box is very slow under heavy
load (DoS like), but I really dislike having my box out of service
after such a DoS attack.
I am looking for a way to limit the memory used by postgres.

Thanks
Alex


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - jd@commandprompt.com - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL

Nov 22 '05 #3
On Wednesday 21 January 2004 14:11, Alex Madon wrote:
Hello,
I am testing a web application (using the DBX PHP function to call a
Postgresql backend).
I have 375Mb RAM on my test home box. [10 connections is fine, 100 is not]
I tried to change some parameters in postgresql.conf
max_connections = 32
to max_connections = 8


Are you saying you had more than 8 connections open simultaneously? Are you
sure you restarted PG so that it noticed the new values? You can check config
settings with "show all;" from psql, or "show <setting>".
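Richard's check can be sketched as a psql session (a sketch; the point is only to confirm the server sees your edits):

```sql
-- Confirm the server actually picked up the edited postgresql.conf values.
SHOW max_connections;
SHOW shared_buffers;
-- Or dump every setting at once:
SHOW ALL;
```

If the values shown don't match the file, the postmaster was not restarted, or it is reading a different config file than the one you edited.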

You'll want to use the "top" command to show the amount of memory each process
is using and then check the configuration/tuning articles at the following
URL:

http://www.varlena.com/varlena/Gener...bits/index.php

First step is to make sure your changes are being detected. Then, I'd guess
you want to set:
max_connections
shared_buffers
sort_mem
vacuum_mem (less important)
and then adjust effective_cache_size so it matches your normal load.
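Richard's tuning list can be written out as an illustrative postgresql.conf fragment. These numbers are assumptions for a ~384 MB PostgreSQL 7.x-era box, not values given in this thread:

```
# postgresql.conf (7.x era: shared_buffers in 8 kB pages, *_mem in kB)
max_connections = 32          # must cover Apache's worst-case concurrency
shared_buffers = 512          # 4 MB total, shared across all backends
sort_mem = 8192               # 8 MB per sort, per backend -- multiply out!
vacuum_mem = 16384            # less important
effective_cache_size = 20000  # ~160 MB; roughly what the OS keeps cached
```

Remember to restart the postmaster afterwards and verify the values were detected.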

--
Richard Huxton
Archonet Ltd


Nov 22 '05 #4
Could the problem be that PHP is not using connections efficiently?
Apache KeepAlive with PHP is a double-edged sword with you holding the
blade :-)

If I am not mistaken, what happens is that a connection is kept alive
because Apache believes that other requests will come in from the client
who made the initial connection. So 10 concurrent connections are fine,
but they are not released timely enough with 100 concurrent connections.
The system ends up waiting around for other KeepAlive connections to
timeout before Apache allows others to come in. We had this exact
problem in an environment with millions of impressions per day going to
the database. Because of the nature of our business, we were able to
disable KeepAlive and the load immediately dropped (concurrent
connections on the PostgreSQL database also dropped sharply). We also
turned off PHP persistent connections to the database.
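Ericson's change can be sketched as an httpd.conf fragment. The directive names are standard Apache 1.3/2.0 prefork; the MaxClients value is an assumption chosen to stay at or below PostgreSQL's max_connections, not a figure from this thread:

```
# httpd.conf -- bound Apache's concurrency to what the database can serve
KeepAlive Off        # or keep it on with a short KeepAliveTimeout
MaxClients 30        # never accept more simultaneous requests than
                     # PostgreSQL has max_connections slots for
```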

The drawback is that connections are built up and torn down all the
time, and with PostgreSQL that is sort of expensive. But that's a fraction
of the expense of having KeepAlive on.

Warmest regards,
Ericson Smith
Tracking Specialist/DBA
+-----------------------+--------------------------------------+
| http://www.did-it.com | "Crush my enemies, see them driven |
| er**@did-it.com | before me, and hear the lamentations |
| 516-255-0500 | of their women." - Conan |
+-----------------------+--------------------------------------+

Alex Madon wrote:
Hello,
I am testing a web application (using the DBX PHP function to call a
Postgresql backend).
I have 375Mb RAM on my test home box.
I ran ab (apache benchmark) to test the behaviour of the application
under heavy load.
When increasing the number of requests, all my memory is filled, and
the Linux server begins to cache and remains frozen.

ab -n 100 -c 10 http://localsite/testscript
behaves OK.

If I increase to
ab -n 1000 -c 100 http://localsite/testscript
I get this memory problem.

If I eliminate the connection to the (UNIX) socket of Postgresql, the
script behaves well even under very high load (and of course with much
less time spent per request).

I tried to change some parameters in postgresql.conf
max_connections = 32
to max_connections = 8

and

shared_buffers = 64
to shared_buffers = 16

without success.

I tried to use pmap on httpd and postmaster Process ID but don't get
much help.

Does anybody have some idea to help to debug/understand/solve this
issue? Any feedback is appreciated.
To me, it would not be a problem if the box is very slow under heavy
load (DoS like), but I really dislike having my box out of service
after such a DoS attack.
I am looking for a way to limit the memory used by postgres.

Thanks
Alex


Nov 22 '05 #5
Hello Scott,
Thank you for your answer.
I'm not familiar with DBX. Is that connection pooling or what?

I could not find this information, sorry.
Are you SURE all your memory is in use? What exactly does top say about
things like cached and buff memory (I'm assuming you're on linux, any
differences in top on another OS would be minor.) If the kernel still
shows a fair bit of cached and buff memory, your memory is not getting all
used up.

Well, my xosview shows that caching begins at a concurrency of 40.
At 80 the cache begins to be filled completely, so the machine is having
big problems.
If I increase to
ab -n 1000 -c 100 http://localsite/testscript
I get this memory problem.


Where's the break point? Just wondering. Does it show up at 20, 40, 60,
80, or only at 100? If so, that's really not bad.

Here are some results (I kept -n 100 and just varied the -c option):
-c 1
Failed requests: 0
Time per request: 322.096 [ms] (mean, across all concurrent requests)

-c 2
Failed requests: 0
Time per request: 374.220 [ms] (mean, across all concurrent requests)

-c 10
Failed requests: 68
(Connect: 0, Length: 68, Exceptions: 0)
Time per request: 314.779 [ms] (mean, across all concurrent requests)

-c 20
Failed requests: 68
Time per request: 369.290 [ms] (mean, across all concurrent requests)

-c 30
Failed requests: 43
Time per request: 441.947 [ms] (mean, across all concurrent requests)

=====Here begins caching to disk====

-c 40
Failed requests: 65
Time per request: 528.829 [ms] (mean, across all concurrent requests)

-c 50
Failed requests: 66
Time per request: 993.674 [ms] (mean, across all concurrent requests)

For higher concurrency, the cache is completely filled, and I have to
reboot the machine.
(I didn't leave the system caching forever; I just pressed the reboot
button)... it could be interesting to wait and see if the system recovers
after a while.
To me, it would not be a problem if the box is very slow under heavy
load (DoS like), but I really dislike having my box out of service after
such a DoS attack.


Does it not come back? That's bad.

see above

thanks
Alex

Nov 22 '05 #6
On Wed, 21 Jan 2004, Alex Madon wrote:
Hello Scott,
Thank you for your answer.
I'm not familiar with DBX. Is that connection pooling or what?

I could not find this information, sorry.
Are you SURE all your memory is in use? What exactly does top say about
things like cached and buff memory (I'm assuming you're on linux, any
differences in top on another OS would be minor.) If the kernel still
shows a fair bit of cached and buff memory, your memory is not getting all
used up.

Well, my xosview shows that caching begins at a concurrency of 40.
At 80 the cache begins to be filled completely, so the machine is having
big problems.


I think you're confusing what I meant. Caching is good. Swapping is bad.
Having a large amount of cache is a good thing. It means the OS is
caching all your data in memory for faster access.
If I increase to
ab -n 1000 -c 100 http://localsite/testscript
I get this memory problem.


Where's the break point? Just wondering. Does it show up at 20, 40, 60,
80, or only at 100? If so, that's really not bad.

Here are some results (I kept -n 100 and just varied the -c option):
-c 1
Failed requests: 0
Time per request: 322.096 [ms] (mean, across all concurrent requests)

-c 2
Failed requests: 0
Time per request: 374.220 [ms] (mean, across all concurrent requests)

-c 10
Failed requests: 68
(Connect: 0, Length: 68, Exceptions: 0)
Time per request: 314.779 [ms] (mean, across all concurrent requests)


OK, there's a problem: you're getting failed requests at -c 10, which
means you likely have PostgreSQL configured in the wrong direction.
Configure PostgreSQL to use more memory (sort_mem can be set to about
8 MB without a lot of issues on most boxes; going higher may use up all
your memory in certain situations, i.e. high concurrency).
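Scott's caution about high concurrency can be made concrete with a rough worst-case estimate. This is a sketch and an underestimate in principle, since sort_mem applies per sort operation and a single backend can run more than one:

```python
# Rough worst case: every allowed backend running one sort at the same time.
# sort_mem is specified in kB in 7.x-era postgresql.conf.
def worst_case_sort_mb(max_connections: int, sort_mem_kb: int) -> float:
    return max_connections * sort_mem_kb / 1024

print(worst_case_sort_mb(32, 8192))  # 256.0 MB -- risky on a 384 MB box
print(worst_case_sort_mb(8, 8192))   # 64.0 MB -- comfortable
```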
For higher concurrency, the cache is completely filled, and I have to
reboot the machine.
No, you should NEVER have to reboot a Unix box, period. A filled cache,
again, is a GOOD THING, not bad.
(I didn't leave the system caching forever; I just pressed the reboot
button)... it could be interesting to wait and see if the system recovers
after a while


Yes, please do. Also, show us a snapshot of top while under load.

I'm betting your machine has plenty of memory, and is not using it
effectively, due to postgresql being too conservatively configured.



Nov 22 '05 #7
Hello Joshua,
Thank you for your reply.
Joshua D. Drake wrote:


We would need a lot more information. What version of Linux?
uname -a
Linux daube 2.4.20-8 #1 Thu Mar 13 17:18:24 EST 2003 i686 athlon i386
GNU/Linux

What version of the kernel? What is your shmmax setting?
cat /proc/sys/kernel/shmmax
33554432

What is your sort_mem setting?
I didn't change the postgresql.conf settings:
#sort_mem = 1024 # min 64, size in KB
Did you use top to see where the hang-up is? Are there any messages in
/var/log/messages?


Well, as I said before, the box is almost out of control: the disk is
caching intensively; I run X Windows and the mouse cannot reach a shell...
very bad. The only thing I see with xosview is that the cache is filling
quickly... and then X becomes frozen (or, better said, extremely slow).

About the logs:
I sent the PHP error messages to a file, and yes there are errors:
pg_connect(): Unable to connect to PostgreSQL server: FATAL:
Non-superuser connection limit exceeded
or
pg_connect(): Unable to connect to PostgreSQL server: FATAL: Sorry, too
many clients already
Thanks
Alex

Nov 22 '05 #8
Hello Richard,
Thank you for your answer.
Richard Huxton wrote:
Are you saying you had more than 8 connections open simultaneously?
Well, I don't know how to find that out.
What I did was issue ps aux several times, and I never saw more than
6-7 postmaster SELECT processes.
Are you
sure you restarted PG so that it noticed the new values? You can check config
settings with "show all;" from psql, or "show <setting>".
Yes, I restarted it. The show command outputs the correct value (8).

You'll want to use the "top" command to show the amount of memory each process


A typical output (at a concurrency of 20, no caching) is:
ps aux | grep postgres
postgres  2332  0.0  0.0  8804  328  ?      R  18:54  0:01 /usr/bin/postmaster -p 5432 -d2
postgres  2334  0.0  0.0  9792   68  ?      S  18:54  0:00 postgres: stats buffer process
postgres  2335  0.0  0.0  8828  200  ?      S  18:54  0:00 postgres: stats collector process
postgres  4386  0.0  0.2  4312  956  pts/3  S  19:22  0:00 -bash
postgres  4871  0.0  0.5  9480 2304  ?      S  20:36  0:00 postgres: user db [local] SELECT
postgres  4873  0.0  0.2  8816 1032  ?      R  20:36  0:00 postgres: user db [local] startup
myuser    4877  0.0  0.1  3572  624  pts/4  S  20:36  0:00 grep postgres
postgres  4878  0.0  0.5  9220 2228  ?      R  20:36  0:00 postgres: user db [local] SELECT
postgres  4879  0.0  0.5  9204 2016  ?      R  20:36  0:00 postgres: user db [local] SELECT
---------------------------top-----------------------------
114 processes: 99 sleeping, 12 running, 3 zombie, 0 stopped
CPU states: 91.8% user  8.1% system  0.0% nice  0.0% iowait  0.0% idle
Mem:  384580k av, 316328k used,  68252k free,      0k shrd, 25424k buff
      253976k actv,  36916k in_d,   4704k in_c
Swap: 265064k av,  64788k used, 200276k free,                71132k cached

PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND
4914 apache 16 0 9016 8416 2552 S 6.7 2.1 0:00 0 httpd
4832 apache 16 0 9016 8416 2552 S 6.3 2.1 0:01 0 httpd
4915 apache 16 0 9016 8416 2552 S 5.9 2.1 0:00 0 httpd
4917 apache 16 0 9016 8416 2552 S 5.9 2.1 0:00 0 httpd
4919 apache 16 0 9020 8420 2536 S 5.9 2.1 0:00 0 httpd
4774 apache 16 0 9016 8416 2552 S 5.7 2.1 0:02 0 httpd
4896 apache 16 0 9060 8460 2568 S 5.7 2.1 0:00 0 httpd
4908 apache 15 0 9016 8416 2552 S 5.7 2.1 0:00 0 httpd
4909 apache 16 0 9016 8416 2552 S 5.7 2.1 0:00 0 httpd
4658 apache 16 0 9136 8536 2568 S 5.5 2.2 0:04 0 httpd
4921 apache 16 0 9016 8416 2552 S 5.5 2.1 0:00 0 httpd
2581 root 16 0 14492 4544 1252 R 5.3 1.1 2:26 0 X
4795 apache 16 0 9104 8504 2568 S 5.3 2.2 0:02 0 httpd
4796 apache 16 0 9080 8480 2568 S 5.3 2.2 0:01 0 httpd
4782 apache 16 0 8924 8324 2568 R 3.5 2.1 0:02 0 httpd
2612 madona 15 0 4524 4136 2380 S 1.5 1.0 0:18 0 metacity
4656 apache 15 0 9084 8484 2568 S 1.3 2.2 0:03 0 httpd
4950 postgres 25 0 0 0 0 Z 1.1 0.0 0:00 0 postmaster <defunct>
3812 madona 15 0 44728 42M 17460 S 0.7 11.2 3:21 0 mozilla-bin
4947 postgres 25 0 2540 2392 1688 S 0.7 0.6 0:00 0 postmaster
4952 postgres 25 0 2812 2664 1872 R 0.7 0.6 0:00 0 postmaster
4610 madona 15 0 7460 7460 2152 R 0.5 1.9 0:00 0 xterm
4904 madona 15 0 1108 1108 856 R 0.3 0.2 0:00 0 top
4954 postgres 24 0 1916 1768 1244 R 0.1 0.4 0:00 0 postmaster
4959 postgres 25 0 1596 1448 940 S 0.1 0.3 0:00 0 postmaster
4961 postgres 25 0 984 824 640 R 0.1 0.2 0:00 0 postmaster
1 root 15 0 88 60 40 S 0.0 0.0 0:04 0 init
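Reading that top header the way Scott suggests, buff and cached memory are reclaimable and should be subtracted before concluding the RAM is "filled". A small sketch with the numbers from this capture:

```python
# Figures (in kB) taken from the top header in this post.
mem_used  = 316328
buffers   = 25424
cached    = 71132
swap_used = 64788

# Memory the kernel can reclaim instantly is not really "used".
truly_used = mem_used - buffers - cached
print(truly_used)   # 219772 kB of ~384 MB: plenty of headroom
print(swap_used)    # 64788 kB in swap, though: the box is under real pressure
```

Note the nonzero swap figure: what Alex describes as "caching" looks like swapping, which is the bad case Scott distinguishes.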

Thanks
Alex


Nov 22 '05 #9
Hello Ericson,
Thank you for your reply.
Ericson Smith wrote:
Could the problem be that PHP is not using connections efficiently?
Apache KeepAlive with PHP is a double-edged sword with you holding the
blade :-)
I turned off the KeepAlive option in httpd.conf

[I think KeepAlive is not used by default by "ab", and that Apache uses
it only on static content; see the last paragraph of:
http://httpd.apache.org/docs/keepalive.html]

and set
pgsql.allow_persistent = Off
in php.ini;
it didn't work for me.
thanks
Alex

If I am not mistaken, what happens is that a connection is kept alive
because Apache believes that other requests will come in from the
client who made the initial connection. So 10 concurrent connections
are fine, but they are not released timely enough with 100 concurrent
connections. The system ends up waiting around for other KeepAlive
connections to timeout before Apache allows others to come in. We had
this exact problem in an environment with millions of impressions per
day going to the database. Because of the nature of our business, we
were able to disable KeepAlive and the load immediately dropped
(concurrent connections on the PostgreSQL database also dropped
sharply). We also turned off PHP persistent connections to the database.

The drawback is that connections are built up and torn down all the
time, and with PostgreSQL it is sort of expensive. But that's a
fraction of the expense of having KeepAlive on.

Warmest regards, Ericson Smith
Tracking Specialist/DBA
+-----------------------+--------------------------------------+
| http://www.did-it.com | "Crush my enemies, see them driven |
| er**@did-it.com | before me, and hear the lamentations |
| 516-255-0500 | of their women." - Conan |
+-----------------------+--------------------------------------+
Alex Madon wrote:
Hello,
I am testing a web application (using the DBX PHP function to call a
Postgresql backend).
I have 375Mb RAM on my test home box.
I ran ab (apache benchmark) to test the behaviour of the application
under heavy load.
When increasing the number of requests, all my memory is filled, and
the Linux server begins to cache and remains frozen.

ab -n 100 -c 10 http://localsite/testscript
behaves OK.

If I increase to
ab -n 1000 -c 100 http://localsite/testscript
I get this memory problem.

If I eliminate the connection to the (UNIX) socket of Postgresql, the
script behaves well even under very high load (and of course with
much less time spent per request).

I tried to change some parameters in postgresql.conf
max_connections = 32
to max_connections = 8

and

shared_buffers = 64
to shared_buffers = 16

without success.

I tried to use pmap on httpd and postmaster Process ID but don't get
much help.

Does anybody have some idea to help to debug/understand/solve this
issue? Any feedback is appreciated.
To me, it would not be a problem if the box is very slow under heavy
load (DoS like), but I really dislike having my box out of service
after such a DoS attack.
I am looking for a way to limit the memory used by postgres.

Thanks
Alex



Nov 22 '05 #10
