Bytes IT Community

Postgres vs. Progress performance

Guys,

Bruce mentioned I should repost this with Progress in the title. My
friend's company is desperately trying to move from Progress to an open
platform and is seriously considering Postgres as a replacement. If you
have any experience with this and could provide a performance comparison,
I'd really appreciate it. Thanks!

Here's the original post:

A manager friend of mine sent me the following concern. He's preparing to
shift to Postgresql from a proprietary DB and 4gl system:

-----------
To that end, I've also started studying up on Postgresql. It seems to
have all the necessary features for a transaction heavy DB. The recent
release is 7.3. Of course, "the proof will be in the pudding." We
average 2.5 million transactions per day or 800 per second.
Unfortunately, we would have no way of testing that until we committed to
getting the business logic moved over and had something to test it with.
This is a bit of a "catch 22" situation. Just wished I knew of someone
locally who was running Postgresql in such a heavy environment. I'd love
to find out how it performs for them.
-----------

While I have a lot of experience with PG, it's not really been in a heavy
processing environment. Could I get some input to send him from anyone
out in the field using Postgres in a similar environment?

If PG isn't the best option here, what is?

Thanks very much for your input!

John


---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings
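(For reference, the free space map mentioned in the tip is sized by two
postgresql.conf parameters in the 7.3-era server. A minimal illustrative
fragment; the values below are placeholders, not tuned recommendations:)

```
# postgresql.conf -- free space map sizing (illustrative values only)
max_fsm_relations = 1000   # max number of tables + indexes tracked
max_fsm_pages = 20000      # max number of disk pages with free space tracked
```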

Nov 12 '05 #1
3 Replies


On Mon, 2003-09-29 at 12:43, John Wells wrote:
> We average 2.5 million transactions per day or 800 per second.


800*60*60*24 = 69 million per day... are you doing 2.5 million with
burst of up to 800 per second?
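(Robert's arithmetic above can be checked directly; a quick sketch using
the figures quoted in the thread:)

```python
# Back-of-the-envelope check of the transaction figures from the post.
SECONDS_PER_DAY = 60 * 60 * 24  # 86,400

# If the site really sustained 800 tps around the clock:
sustained_per_day = 800 * SECONDS_PER_DAY
print(sustained_per_day)  # -> 69120000, i.e. ~69 million, far more than 2.5 million

# 2.5 million per day works out to a much lower *average* rate:
average_tps = 2_500_000 / SECONDS_PER_DAY
print(round(average_tps, 1))  # -> 28.9 tps average, with bursts presumably higher
```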

we average around 190 tps, though the high burst i see in the last few
seconds is only 270... about 1/3 of those are inserts and/or updates.

the box it's running on is a dual pentium 1.3ghz with 1GB of RAM. it's
not optimal hardware either (only 2 disks for starters), but it runs
pretty solidly and the server it's on doesn't seem too taxed.

i feel pretty confident that postgresql can handle your workload without
much trouble, you just need to give it enough hardware.

Robert Treat
--
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org

Nov 12 '05 #2

Robert Treat <xz****@users.sourceforge.net> writes:
> i feel pretty confident that postgresql can handle your workload without
> much trouble, you just need to give it enough hardware.


I guess the interesting question is how much iron are they using to
handle the workload now on Progress? Really there's no doubt that PG
can handle the load, the question is what size box would you have to
run it on, and whether that's cost-effective compared to Progress'
requirements.

I vaguely recall some past statements by Progress-to-PG migrators to
the effect that they found PG's performance just fine by comparison.
Try digging in the mail list archives (although "progress" is likely
to be a horrible search term :-()

regards, tom lane


Nov 12 '05 #3

jb@sourceillustrated.com ("John Wells") writes:
> To that end, I've also started studying up on Postgresql. It seems to
> have all the necessary features for a transaction heavy DB. The recent
> release is 7.3. Of course, "the proof will be in the pudding." We
> average 2.5 million transactions per day or 800 per second.
> Unfortunately, we would have no way of testing that until we committed to
> getting the business logic moved over and had something to test it with.
> This is a bit of a "catch 22" situation. Just wished I knew of someone
> locally who was running Postgresql in such a heavy environment. I'd love
> to find out how it performs for them.


The killer question is what, exactly, is being done 800 times per
second.

I have seen PostgreSQL handling tens of millions of "things" per day,
when those things are relatively small and non-interacting. If most
of the 800 are read-only, then that seems not at all frightening.

If the activity is update-heavy, with complex interactions, then the
"level of challenge" goes up, irrespective of what database system you
plan on using.

It would be surprising for a well-run PostgreSQL site not to be just as
capable as Progress on similar hardware, but verifying that under
something resembling your kind of transaction load is not a trivial
task.

What you, in effect, need to do is to construct a prototype and see
how it holds up under load. That's a nontrivial amount of work,
irrespective of the database in use.

I think you'll need to construct that prototype, perhaps as a set of
scripted "clients" that you can spawn to hammer at your "server." A
wise approach is to write this in a somewhat generic fashion so that
you can try it out on several different databases. Or so that you can
at least express, to management, the possibility of doing so :-).
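(The scripted-clients idea above can be sketched as a small,
database-agnostic harness. This is only a sketch, assuming Python;
`dummy_transaction` is a hypothetical stand-in you would replace with a
function that runs one real transaction through your database driver:)

```python
import threading
import time

def run_load_test(transaction_fn, num_clients=10, duration_secs=5):
    """Spawn num_clients threads that call transaction_fn in a loop,
    then report total transactions and aggregate transactions per second."""
    counts = [0] * num_clients
    stop = threading.Event()

    def client(idx):
        # Each client hammers the "server" until told to stop.
        while not stop.is_set():
            transaction_fn()
            counts[idx] += 1

    threads = [threading.Thread(target=client, args=(i,))
               for i in range(num_clients)]
    start = time.time()
    for t in threads:
        t.start()
    time.sleep(duration_secs)
    stop.set()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    total = sum(counts)
    return total, total / elapsed

# Hypothetical stand-in: swap for a function that does one real
# transaction (e.g. an INSERT plus a SELECT inside BEGIN/COMMIT).
def dummy_transaction():
    time.sleep(0.001)  # stand-in for client/server round-trip latency

if __name__ == "__main__":
    total, tps = run_load_test(dummy_transaction, num_clients=8, duration_secs=2)
    print(f"{total} transactions, {tps:.0f} tps")
```

Because the transaction function is a parameter, the same harness can be
pointed at Postgres, Progress, or anything else with a Python driver.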

Question: What kind of hardware are you using for the present system?
--
output = reverse("ofni.smrytrebil" "@" "enworbbc")
<http://dev6.int.libertyrms.com/>
Christopher Browne
(416) 646 3304 x124 (land)
Nov 12 '05 #4
