
State of Beta 2


Anyone out there using beta 2 in production situations? Comments on
stability? I am rolling out a project in the next 4 weeks, and really
don't want to go through an upgrade soon after it's released on an
unsuspecting client, so I would LIKE to start working with 7.4.

--------------------

Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com

Nov 11 '05
Andrew Rawnsley wrote:

eRserver should be able to migrate the data. If you make heavy use of
sequences, schemas and other such things it won't help you for those.

<snip> Using eRserver may help you work around the problem, given certain
conditions. It doesn't solve it. I think if we can get Mr. Drake's
initiative off the ground we may at least figure out if there is a
solution.

So a replication application IS a method to migrate, OR CAN BE MADE to
do it somewhat, AND is a RELATED project to the migration tool.

Again, I wonder what on the TODO list or any other roadmap is related
and should be part of a comprehensive plan to drain the swamp rather
than just club alligators over the head?

Nov 11 '05 #151
If bugfixes were consistently backported, and support was provided for
older versions running on newer OS's, then this wouldn't be as much of
a problem. But we orphan our code after one version cycle; 7.0.x is
completely unsupported, for instance, while even 7.2.x is virtually
unsupported. My hat's off to Red Hat for backporting the buffer
overflow fixes to all their supported versions; we certainly wouldn't
have done it. And 7.3.x will be unsupported once we get past the 7.4
release, right? So in order to get critical bug fixes, users must
upgrade to a later codebase, and go through the pain of upgrading
their data.

Command Prompt is supporting the 7.3 series until 2005 and that includes
backporting certain features and bug fixes. The reality is that most
(with the exception of the Linux kernel and maybe Apache) open source
projects don't support back releases. That is the point of commercial
releases such as RedHat DB and Mammoth. We will support the older
releases for some time.

If you want to have continued support for an older rev, purchase a
commercial version. I am not trying to push my product here, but frankly
I think your argument is weak. There is zero reason for the community to
support previous versions of code. Maybe until 7.4 reaches 7.4.1 or
something but longer? Why? The community should be focusing on
generating new, better, faster, cleaner code.

That is just my .02.

Joshua Drake

K, looking back through that it almost sounds like a ramble ...
hopefully
you understand what I'm asking ...

*I* should complain about a ramble? :-)

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.


Nov 11 '05 #152
"Joshua D. Drake" <jd@commandprompt.com> writes:
If you want to have continued support for an older rev, purchase a
commercial version. I am not trying to push my product here, but frankly
I think your argument is weak. There is zero reason for the community to
support previous version of code. Maybe until 7.4 reaches 7.4.1 or
something but longer? Why? The community should be focusing on
generating new, better, faster, cleaner code.


I tend to agree on this point. Red Hat is also in the business of
supporting back-releases of PG, and I believe PG Inc, SRA, and others
will happily do it too. I don't think it's the development community's
job to do that.

[ This does not, however, really bear on the primary issue, which is how
can we make upgrading less unpleasant for people with large databases.
We do need to address that somehow. ]

regards, tom lane


Nov 11 '05 #153
On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
So instead of 1TB of 15K fiber channel disks (and the requisite
controllers, shelves, RAID overhead, etc), we'd need *two* TB of
15K fiber channel disks (and the requisite controllers, shelves,
RAID overhead, etc) just for the 1 time per year when we'd upgrade
PostgreSQL?


Nope. You also need it for the time when your vendor sells
controllers or chips or whatever with known flaws, and you end up
having hardware that falls over 8 or 9 times in a row.

A

--
----
Andrew Sullivan 204-4141 Yonge Street
Liberty RMS Toronto, Ontario Canada
<an****@libertyrms.info> M2P 2A8
+1 416 646 3304 x110

Nov 11 '05 #154
On Sat, Sep 13, 2003 at 07:16:28PM -0400, Lamar Owen wrote:

Can eRserver replicate a 7.3.x to a 7.2.x? Or 7.4.x to 7.3.x?


Yes. Well, 7.3 to 7.2, anyway: we just tested it (my colleague,
Tariq Muhammad did it).

A

----
Andrew Sullivan 204-4141 Yonge Street
Liberty RMS Toronto, Ontario Canada
<an****@libertyrms.info> M2P 2A8
+1 416 646 3304 x110

Nov 11 '05 #155
On Sat, Sep 13, 2003 at 10:27:59PM -0300, Marc G. Fournier wrote:

I thought we were talking about upgrades here?


You do upgrades without being able to roll back?

A

--
----
Andrew Sullivan 204-4141 Yonge Street
Liberty RMS Toronto, Ontario Canada
<an****@libertyrms.info> M2P 2A8
+1 416 646 3304 x110

Nov 11 '05 #156
On Thu, Sep 18, 2003 at 12:11:18PM -0400, Lamar Owen wrote:
RTA. It's been hashed, rehashed, and hashed again. I've asked twice if
eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 onto a
7.3); that question has yet to be answered. If it can do this, then I


Sorry, I've been swamped, and not reading mail as much as I'd like.
But I just answered this for 7.2/7.3.

A

--
----
Andrew Sullivan 204-4141 Yonge Street
Liberty RMS Toronto, Ontario Canada
<an****@libertyrms.info> M2P 2A8
+1 416 646 3304 x110

Nov 11 '05 #157


On Thu, 18 Sep 2003, Andrew Sullivan wrote:
On Sat, Sep 13, 2003 at 10:27:59PM -0300, Marc G. Fournier wrote:

I thought we were talking about upgrades here?


You do upgrades without being able to roll back?


Hadn't thought of it that way ... but, what would prompt someone to
upgrade, then use something like erserver to roll back? All I can think
of is that the upgrade caused a lot of problems with the application
itself, but in a case like that, would you have the time to be able to
're-replicate' back to the old version?


Nov 11 '05 #158
sc*****@postgresql.org ("Marc G. Fournier") writes:
On Thu, 18 Sep 2003, Andrew Sullivan wrote:
On Sat, Sep 13, 2003 at 10:27:59PM -0300, Marc G. Fournier wrote:
>
> I thought we were talking about upgrades here?


You do upgrades without being able to roll back?


Hadn't thought of it that way ... but, what would prompt someone to
upgrade, then use something like erserver to roll back? All I can
think of is that the upgrade caused alot of problems with the
application itself, but in a case like that, would you have the time
to be able to 're-replicate' back to the old version?


Suppose we have two dbs:

db_a - Old version
db_b - New version

Start by replicating db_a to db_b.

The approach would presumably be that at the time of the upgrade, you
shut off the applications hitting db_a (injecting changes into the
source), and let the final set of changes flow thru to db_b.

That brings db_a and db_b to having the same set of data.

Then reverse the flow, so that db_b becomes master, flowing changes to
db_a. Restart the applications, configuring them to hit db_b.

db_a should then be just a little bit behind db_b, and be a "recovery
plan" in case the new version played out badly.

That's surely not what you'd _expect_; the point of the exercise was
for the upgrade to be an improvement. But if something Truly Evil
happened, you might have to. And when people are talking about "risk
management," and ask what you do if Evil Occurs, this is the way the
answer works.

It ought to be pretty cheap, performance-wise, to do things this way,
certainly not _more_ expensive than the replication was to keep db_b
up to date.
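
As a rough sketch, that cutover might look like the following (db_a on
port 5432, db_b on 5433; the replication-reversal step is a hypothetical
placeholder rather than actual eRserver syntax, and some_big_table is an
invented name -- only psql itself is real here):

  # 1. Stop the applications so no new writes reach db_a, let the last
  #    queued changes replicate, then spot-check that the two match:
  psql -p 5432 -d db_a -c "SELECT count(*) FROM some_big_table"
  psql -p 5433 -d db_b -c "SELECT count(*) FROM some_big_table"
  # 2. Reverse the replication direction so db_b feeds db_a
  #    (hypothetical command; consult the eRserver docs):
  #      erserver_reverse --master db_b --slave db_a
  # 3. Repoint the applications at port 5433.  db_a now trails db_b
  #    slightly and stays available as the fallback.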
--
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://www.ntlug.org/~cbbrowne/oses.html
Rules of the Evil Overlord #149. "Ropes supporting various fixtures
will not be tied next to open windows or staircases, and chandeliers
will be hung way at the top of the ceiling."
<http://www.eviloverlord.com/>
Nov 11 '05 #159
On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
So instead of 1TB of 15K fiber channel disks (and the requisite
controllers, shelves, RAID overhead, etc), we'd need *two* TB of
15K fiber channel disks (and the requisite controllers, shelves,
RAID overhead, etc) just for the 1 time per year when we'd upgrade
PostgreSQL?


Nope. You also need it for the time when your vendor sells
controllers or chips or whatever with known flaws, and you end up
having hardware that falls over 8 or 9 times in a row.


????

--
-----------------------------------------------------------------
Ron Johnson, Jr. ro***********@cox.net
Jefferson, LA USA

"A C program is like a fast dance on a newly waxed dance floor
by people carrying razors."
Waldi Ravens

Nov 11 '05 #160
Centuries ago, Nostradamus foresaw when ro***********@cox.net (Ron Johnson) would write:
On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
> So instead of 1TB of 15K fiber channel disks (and the requisite
> controllers, shelves, RAID overhead, etc), we'd need *two* TB of
> 15K fiber channel disks (and the requisite controllers, shelves,
> RAID overhead, etc) just for the 1 time per year when we'd upgrade
> PostgreSQL?


Nope. You also need it for the time when your vendor sells
controllers or chips or whatever with known flaws, and you end up
having hardware that falls over 8 or 9 times in a row.


????


This of course never happens in real life; expensive hardware is
_always_ UTTERLY reliable.

And the hardware vendors all have the same high standards as, well,
certain database vendors we might think of.

After all, Oracle and MySQL AB would surely never mislead their
customers about the merits of their database products any more than
HP, Sun, or IBM would about the possibility of their hardware having
tiny flaws.

And I would never mislead anyone, either. I'm sure I got a full 8
hours sleep last night. I'm sure of it...
--
"cbbrowne","@","cbbrowne.com"
http://www3.sympatico.ca/cbbrowne/finances.html
"XML combines all the inefficiency of text-based formats with most of the
unreadability of binary formats :-) " -- Oren Tirosh
Nov 11 '05 #161
ro***********@cox.net (Ron Johnson) wrote:
On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:
> So instead of 1TB of 15K fiber channel disks (and the requisite
> controllers, shelves, RAID overhead, etc), we'd need *two* TB of
> 15K fiber channel disks (and the requisite controllers, shelves,
> RAID overhead, etc) just for the 1 time per year when we'd upgrade
> PostgreSQL?


Nope. You also need it for the time when your vendor sells
controllers or chips or whatever with known flaws, and you end up
having hardware that falls over 8 or 9 times in a row.


????


This of course never happens in real life; expensive hardware is
_always_ UTTERLY reliable.

And the hardware vendors all have the same high standards as, well,
certain database vendors we might think of.

After all, Oracle and MySQL AB would surely never mislead their
customers about the merits of their database products any more than
HP, Sun, or IBM would about the possibility of their hardware having
tiny flaws.

And I would /never/ claim to have lost sleep as a result of flakey
hardware. Particularly not when it's a HA fibrechannel array. I'm
/sure/ that has never happened to anyone. [The irony here should be
causing people to say "ow!"]
--
"cbbrowne","@","cbbrowne.com"
http://www3.sympatico.ca/cbbrowne/finances.html
"XML combines all the inefficiency of text-based formats with most of the
unreadability of binary formats :-) " -- Oren Tirosh
Nov 11 '05 #163


On Thu, 18 Sep 2003, Lamar Owen wrote:
Marc G. Fournier wrote:
'K, I had already answered it as part of this thread when I suggested
doing exactly that ... in response to which several ppl questioned the
feasibility of setting up a duplicate system with >1TB of disk space to do
the replication over to ...


The quote mentioned is a question, not an answer. You said:
'k, but is it out of the question to pick up a duplicate server, and use
something like eRServer to replicate the databases between the two
systems, with the new system having the upgraded database version running
on it, and then cutting over once its all in sync?


'Something like eRserver' doesn't give me enough detail; so I asked if
eRserver could do this, mentioning specific version numbers. A straight
answer -- yes it can, or no it can't -- would be nice. So you're saying
that eRserver can do this, right? Now if there just wasn't that java
dependency.... Although the contrib rserv might suffice for data
migration capabilities.


Sorry, but I hadn't actually seen your question about it ... but, yes,
erserver can do this ... as far as I know, going from, say, v7.2 -> v7.4
shouldn't be an issue either, but I only know of a few doing v7.2->v7.3
migrations with it so far ...

Nov 11 '05 #165
That sounds good save two things. We need to state what the project run
dates are and what happens at or around the due date. That is to say: we
have the deliverable for testing (beta ready), more time is needed to
complete core features (alpha ready) and therefore more funds are needed,
the project is on hold due to features needed outside the scope of the
project, etc., etc., etc...

You get the idea.

Quoting "Joshua D. Drake" <jd@commandprompt.com>:
Hello,

O.k. here are my thoughts on how this could work:

Command Prompt will set up an escrow account online at www.escrow.com.
When the escrow account totals 2000.00 and is released, Command Prompt
will dedicate a programmer for one month to debugging, documenting,
reviewing, digging, crying, screaming, begging and bleeding with the
code. At the end of the month (and probably during, depending on how
everything goes) Command Prompt will release its findings. The findings
will include a project plan on moving forward over the next 5 months (if
that is what it takes) to produce the first functional pg_upgrade.

If the project is deemed as moving in the right direction by the
community members and specifically
the core members we will setup milestone payments for the project.

What does everyone think?

Sincerely,

Joshua D. Drake
Dennis Gearon wrote:
I had already committed $50/mo.

Robert Creager wrote:
Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700)
Dennis Gearon <ge*****@fireserve.net> uttered something amazingly
similar to:

Robert Creager wrote:

> Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
> "Joshua D. Drake" <jd@commandprompt.com> uttered something amazingly
> similar to:
>
>> If someone is willing to pony up 2000.00 per month for a period of
>> at
>
> Well, if you're willing to set up some sort of escrow, I'll put in
> $100. I

Is that $100 times once, or $100 X 6 mos anticipated development time?

That's $100 once. And last I looked, there are well over 1800
subscribers on
this list alone. On the astronomically small chance every one of
them did
what I'm doing, it would cover more than 6 months of development time
;-) This
strikes me as like supporting public radio. The individuals do some,
and the
corporations do a bunch.

I'm just putting my money toward a great product, rather than
complaining that
it's not done. Just like Joshua is doing. You cannot hire a competent
programmer for $24k a year, so he is putting up some money on this also.

There have been a couple of other bites from small businesses, so who
knows!

You game?

Cheers,
Rob


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.


--
Keith C. Perry
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com

____________________________________
This email account is being hosted by:
VCSN, Inc : http://vcsn.com


Nov 11 '05 #166
On Thu, 18 Sep 2003 12:11:18 -0400, Lamar Owen <lo***@pari.edu> wrote:
Marc G. Fournier wrote:
[...] upgrading is a key feature [...] a migration tool
that could read the old format _without_a_running_old_backend_ [...]
the new backend is powerless to recover the old data.
OS upgrades [...], FreeBSD ports upgrades, and RPM
upgrades are absolutely horrid at this point. [...]
[censored] has a better system than we
[...] the pain of upgrading [...]
*I* should complain about a ramble? :-)


Lamar, I *STRONGLY* agree with almost everything you say here and in
other posts, except perhaps ...

You et al. seem to think that system catalog changes wouldn't be a
problem if only we could avoid page format changes. This is not
necessarily so. Page format changes can be handled without much
effort, if

.. the changes are local to each page (the introduction of a level
indicator in btree pages is a counter-example),

.. we can tell page type and version for every page,

.. the new format does not need more space than the old one.

You wrote earlier:
| the developers who changed the on-disk format ...

Oh, that's me, I think. I am to blame for the heap tuple header
changes between 7.2 and 7.3; Tom did some cleanup work behind me but
cannot be held responsible for the on-disk-format incompatibilities.
I'm not aware of any other changes falling into this category for 7.3.
So you might as well have used the singular form ;-)

| ... felt it wasn't important to make it continue working.

This is simply not true. Seamless upgrade is *very* important, IMHO.
See http://archives.postgresql.org/pgsql...6/msg00136.php
for example, and please keep in mind that I was still very "fresh" at
that time. Nobody demanded that I keep my promise and I got the
impression that a page format conversion tool was not needed because
there wouldn't be a pg_upgrade anyway.

Later, in your "Upgrading rant" thread, I even posted some code
(http://archives.postgresql.org/pgsql.../msg00294.php).
Unfortunately this went absolutely unnoticed, probably because it
looked so long because I fat-fingered the mail and included the code
twice. :-(
It's all
in the archives that nobody seems willing to read over again. Why do we
even have archives if they're not going to be used?


Sic!

While I'm at it, here are some comments not directly addressed to
Lamar:

Elsewhere in this current thread it has been suggested that the
on-disk format will stabilize at some time in the future and should
then be frozen to ensure painless upgrades. IMHO, at the moment when
data structures are declared stable and immutable the project is dead.

And I don't believe the myth that commercial database vendors have
reached a stable on-disk representation. Whoever said this, is kindly
asked to reveal his source of insight.

A working pg_upgrade is *not* the first thing we need. What we need
first is willingness to not break backwards compatibility. When
Postgres adopts a strategy of not letting in any change unless it is
fully compatible with the previous format or accompanied by an upgrade
script/program/whatever, that would be a huge step forward. First
breaking things for six months or more and then, when the release date
comes nearer, trying to build an upgrade tool is not the right
approach.

A - hopefully not too unrealistic - vision: _At_any_time_ during a
development cycle for release n+1 it is possible to take a cvs
snapshot, build it, take any release n database cluster, run a
conversion script over it (or not), and start the new postmaster with
-D myOldDataDir ...
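
From the admin's point of view, that vision boils down to something like
this (a sketch only; "convert_cluster" is a made-up name for whatever
conversion script release n+1 might ship, while pg_ctl and -D are real):

  pg_ctl -D /var/lib/pgsql/data stop    # stop the release-n postmaster
  convert_cluster /var/lib/pgsql/data   # hypothetical in-place conversion, if needed at all
  /usr/local/pgsql-new/bin/pg_ctl -D /var/lib/pgsql/data start   # new binaries, old data dir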

Granted, this slows down development, primarily while developers are
not yet used to it. But once the infrastructure is in place, things
should get easier. While a developer is working on a new feature he
knows the old data structures as well as the new ones; this is the
best moment to design and implement an upgrade path, which is almost
hopeless if tried several months later by someone else.

And who says that keeping compatibility in mind while developing new
features cannot be fun? I assure you, it is!

Servus
Manfred


Nov 11 '05 #167
"Marc G. Fournier" <sc*****@postgresql.org> writes:
hmmm ... k, is it feasible to go a release or two at a time without on
disk changes? if so, pg_upgrade might not be as difficult to maintain,
since, unless someone can figure out a way of doing it, 'on disk change
releases' could still require dump/reloads, with a period of stability in
between?
Yeah, for the purposes of this discussion I'm just taking "pg_upgrade"
to mean something that does what Bruce's old script does, namely
transfer the schema into the new installation using "pg_dump -s" and
then push the user tables and indexes physically into place. We could
imagine that pg_upgrade would later get some warts added to it to handle
some transformations of the user data, but that might or might not ever
need to happen.
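
In outline, that approach is roughly the following (a sketch, not the
actual pg_upgrade script; the port numbers are arbitrary and the OID /
relfilenode values are invented for illustration):

  pg_dump -s -p 5432 mydb > schema.sql   # schema only, from the old cluster
  createdb -p 5433 mydb
  psql -p 5433 -d mydb -f schema.sql     # recreate the schema in the new cluster
  # then, with both postmasters stopped, push each user table's and
  # index's data file into place under the new cluster, e.g.:
  ln "$OLD_PGDATA"/base/16384/16385 "$NEW_PGDATA"/base/17230/17231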

I think we could definitely adopt a policy of "on-disk changes not
oftener than every X releases" if we had a working pg_upgrade, even
without doing any extra work to allow updates. People who didn't
want to wait for the next incompatible release could have their change
sooner if they were willing to do the work to provide an update path.
*Or* ... as we've seen more with this dev cycle than previous ones, how
much could be easily back-patched to the previous version(s) relatively
easily, without requiring on-disk changes?


It's very difficult to back-port anything beyond localized bug fixes.
We change the code too much --- for instance, almost no 7.4 patch will
apply exactly to 7.3 or before because of the elog-to-ereport changes.

But the real problem IMHO is we don't have the manpower to do adequate
testing of back-branch changes that would need to be substantially
different from what gets applied to HEAD. I think it's best to leave
that activity to commercial support outfits, rather than put community
resources into it.

(Some might say I have a conflict of interest here, since I work for Red
Hat which is one of said commercial support outfits. But I really do
think it's more reasonable to let those companies do this kind of
gruntwork than to expect the community hackers to do it.)

regards, tom lane


Nov 11 '05 #168
On Fri, 19 Sep 2003 17:38:13 -0400, Tom Lane <tg*@sss.pgh.pa.us>
wrote:
A working pg_upgrade is *not* the first thing we need.
Yes it is.


At the risk of being called a stubborn hairsplitter, I continue to say
that pg_upgrade is not the *first* thing we need. Maybe the second
....
As you say later,
... But once the infrastructure is in place, things
should get easier.

Yes, at some point in time we need an infrastructure/upgrade
process/tool/pg_upgrade, whatever we call it. What I tried to say is
that *first* developers must change their point of view and give
backwards compatibility a higher priority. As long as I don't write
page conversion functions because you changed the system catalogs and
you see no need for pg_upgrade because I broke the page format,
seamless upgrade cannot become a reality.
Until we have a working pg_upgrade, every little catalog change will
break backwards compatibility. And I do not feel that the appropriate
way to handle catalog changes is to insist on one-off solutions for each
one.
I tend to believe that every code change or new feature that gets
implemented is unique by its nature, and if it involves catalog
changes it requires a unique upgrade script/tool. How should a
generic tool guess the contents of a new catalog relation?

Rod's adddepend is a good example. It is a one-off upgrade solution,
which is perfectly adequate because Rod's dependency patch was a
singular work, too. Somebody had to sit down and code some logic into
a script.
Any quick look at the CVS logs will show that minor and major
catalog revisions occur *far* more frequently than changes that would
affect on-disk representation of user data.


Some catalog changes can be done by scripts executed by a standalone
backend, others might require more invasive surgery. Do you have any
feeling which kind is the majority?

I've tried to produce a prototype for seamless upgrade with the patch
announced in
http://archives.postgresql.org/pgsql...8/msg00937.php. It
implements new backend functionality (index scan cost estimation using
index correlation) and needs a new system table (pg_indexstat) to
work. I wouldn't call it perfect (for example, I still don't know how
to insert the new table into template0), but at least it shows that
there is a class of problems that require catalog changes and *can* be
solved without initdb.

Servus
Manfred


Nov 11 '05 #169
On Fri, 19 Sep 2003 18:51:00 -0400, Tom Lane <tg*@sss.pgh.pa.us>
wrote:
transfer the schema into the new installation using "pg_dump -s" and
then push the user tables and indexes physically into place.


I'm more in favour of in-place upgrade. This might seem risky, but I
think we can expect users to backup their PGDATA directory before they
start the upgrade.

I don't trust pg_dump because

.. it doesn't help when the old postmaster binaries are no longer
available

.. it does not always produce scripts that can be loaded without manual
intervention. Sometimes you create a dump and cannot restore it with
the same Postmaster version. RTA.
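
To make that concrete, the round trip in question is nothing more than
(standard pg_dump/psql usage; the database names are examples):

  pg_dump mydb > mydb.sql             # dumped while the old postmaster still runs
  createdb mydb_restored
  psql -d mydb_restored -f mydb.sql   # the reload that sometimes needs hand-editing first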

Servus
Manfred


Nov 11 '05 #170
On Fri, 19 Sep 2003 20:06:39 -0400, Tom Lane <tg*@sss.pgh.pa.us>
wrote:
Perhaps you should go back and study what
pg_upgrade actually did.
Thanks for the friendly invitation. I did that.
It needed only minimal assumptions about the
format of either old or new catalogs. The reason is that it mostly
relied on portability work done elsewhere (in pg_dump, for example).


I was hoping that you had a more abstract concept in mind when you
said pg_upgrade; not that particular implementation. I should have
been more explicit that I'm not a friend of that pg_dump approach, cf.
my other mail.
Rod's adddepend is a good example.

I don't think it's representative.

... I wouldn't call it perfect

... in other words, it doesn't work and can't be made to work.


Hmm, "not perfect" == "can't be made to work". Ok. If you want to
see it this way ...

Servus
Manfred


Nov 11 '05 #171


On Fri, 19 Sep 2003, Tom Lane wrote:
I think we could definitely adopt a policy of "on-disk changes not
oftener than every X releases" if we had a working pg_upgrade, even
without doing any extra work to allow updates. People who didn't want
to wait for the next incompatible release could have their change sooner
if they were willing to do the work to provide an update path.
'K, but let's put the horse in front of the cart ... adopt the policy so
that the work on a working pg_upgrade has a chance of succeeding ... if we
said no on disk changes for, let's say, the next release, then that would
provide an incentive (I think!) for someone(s) to pick up the ball and
make sure that pg_upgrade would provide a non-dump/reload upgrade for it
....
But the real problem IMHO is we don't have the manpower to do adequate
testing of back-branch changes that would need to be substantially
different from what gets applied to HEAD. I think it's best to leave
that activity to commercial support outfits, rather than put community
resources into it.


What would be nice is if we could create a small QA group ...
representative of the various supported platforms, who could be called
upon for testing purposes ... any bugs reported get fixed; it's finding the
bugs ...


Nov 11 '05 #172
"Marc G. Fournier" <sc*****@postgresql.org> writes:
On Fri, 19 Sep 2003, Tom Lane wrote:
I think we could definitely adopt a policy of "on-disk changes not
oftener than every X releases" if we had a working pg_upgrade,
'K, but let's put the horse in front of the cart ... adopt the policy so
that the work on a working pg_upgrade has a chance of succeeding ... if we
said no on disk changes for, let's say, the next release, then that would
provide an incentive (I think!) for someone(s) to pick up the ball and


No can do, unless your intent is to force people to work on pg_upgrade
and nothing else (a position I for one would ignore ;-)). With such a
policy and no pg_upgrade we'd be unable to apply any catalog changes at
all, which would pretty much mean that 7.5 would look exactly like 7.4.

If someone wants to work on pg_upgrade, great. But I'm not in favor of
putting all other development on hold until it happens.

regards, tom lane


Nov 11 '05 #173
Manfred Koizar <mk*****@aon.at> writes:
I'm more in favour of in-place upgrade. This might seem risky, but I
think we can expect users to backup their PGDATA directory before they
start the upgrade. I don't trust pg_dump because
You don't trust pg_dump, but you do trust in-place upgrade? I think
that's backwards.

The good thing about the pg_upgrade process is that if it's gonna fail,
it will fail before any damage has been done to the old installation.
(If we multiply-link user data files instead of moving them, we could
even promise that the old installation is still fully valid at the
completion of the process.) The failure scenarios for in-place upgrade
are way nastier.

As for "expect users to back up in case of trouble", I thought the whole
point here was to make life simpler for people who couldn't afford the
downtime needed for a complete backup. To have a useful backup for an
in-place-upgrade failure, you'd have to run that full backup after
stopping the old postmaster, so you are still looking at long downtime
for an update.
it doesn't help when the old postmaster binaries are no longer
available


[shrug] This is a matter of design engineering for pg_upgrade. The fact
that we've packaged it in the past as a script that depends on having
the old postmaster executable available is not an indication of how it
ought to be built when we redesign it. Perhaps it should include
back-version executables in it. Or not; but clearly it has to be built
with an understanding of what the total upgrade process would look like.

regards, tom lane


Nov 11 '05 #174


On Sat, 20 Sep 2003, Tom Lane wrote:
"Marc G. Fournier" <sc*****@postgresql.org> writes:
On Fri, 19 Sep 2003, Tom Lane wrote:
I think we could definitely adopt a policy of "on-disk changes not
oftener than every X releases" if we had a working pg_upgrade,

'K, but let's put the horse in front of the cart ... adopt the policy so
that the work on a working pg_upgrade has a chance of succeeding ... if we
said no on disk changes for, let's say, the next release, then that would
provide an incentive (I think!) for someone(s) to pick up the ball and


No can do, unless your intent is to force people to work on pg_upgrade
and nothing else (a position I for one would ignore ;-)). With such a
policy and no pg_upgrade we'd be unable to apply any catalog changes at
all, which would pretty much mean that 7.5 would look exactly like 7.4.


No, I'm not suggesting no catalog changes ... wait, I might be wording
this wrong ... there are two changes that right now require a
dump/reload, changes to the catalogs and changes to the data structures,
no? Or are these effectively inter-related?

If they aren't inter-related, what I'm proposing is to hold off on any
data structure changes, but still make catalog changes ... *if*, between
v7.4 and v7.5, nobody can bring pg_upgrade up to speed to be able to
handle the catalog changes without a dump/reload, then v7.5 will require
one ... but, at least it would give a single 'moving target' for the
pg_upgrade development to work on, instead of two ...

Make better sense?

Nov 11 '05 #175
I don't trust pg_dump because
You don't trust pg_dump, but you do trust in-place upgrade? I think
that's backwards.

Well, to be honest, I have personally had nightmares of problems with
pg_dump. In fact I have a large production database right now that I
can't restore with it because of the way pg_dump handles large objects.
So I can kind of see his point here. I had to move to an rsync-based
backup/restore system.
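
For what it's worth, the bare-bones shape of such an rsync-based scheme
is something like this (paths are examples; the key constraint is that
the postmaster must be stopped so the copy is consistent):

  pg_ctl -D /var/lib/pgsql/data stop
  rsync -a --delete /var/lib/pgsql/data/ backuphost:/backups/pgdata/
  pg_ctl -D /var/lib/pgsql/data start
  # restore: rsync the directory back and start the *same* postmaster
  # version against it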

The reality of pg_dump is not a good one. It is buggy and not very
reliable. I am hoping this changes in 7.4, as we moved to a pure "C"
implementation.

But I do not argue any of the other points you make below.

Sincerely,

Joshua Drake
The good thing about the pg_upgrade process is that if it's gonna fail,
it will fail before any damage has been done to the old installation.
(If we multiply-link user data files instead of moving them, we could
even promise that the old installation is still fully valid at the
completion of the process.) The failure scenarios for in-place upgrade
are way nastier.

As for "expect users to back up in case of trouble", I thought the whole
point here was to make life simpler for people who couldn't afford the
downtime needed for a complete backup. To have a useful backup for an
in-place-upgrade failure, you'd have to run that full backup after
stopping the old postmaster, so you are still looking at long downtime
for an update.
it doesn't help when the old postmaster binaries are no longer
available


[shrug] This is a matter of design engineering for pg_upgrade. The fact
that we've packaged it in the past as a script that depends on having
the old postmaster executable available is not an indication of how it
ought to be built when we redesign it. Perhaps it should include
back-version executables in it. Or not; but clearly it has to be built
with an understanding of what the total upgrade process would look like.

regards, tom lane



Nov 11 '05 #176
"Marc G. Fournier" <sc*****@postgresql.org> writes:
No, I'm not suggesting no catalog changes ... wait, I might be wording
this wrong ... there are two changes that right now requires a
dump/reload, changes to the catalogs and changes to the data structures,
no? Or are these effectively inter-related?


Oh, what you're saying is no changes in user table format. Yeah, we
could probably commit to that now. Offhand the only thing I think it
would hold up is the one idea about converting "interval" into a
three-component value, and I'm not sure if anyone had really committed
to work on that anyway ...

regards, tom lane


Nov 11 '05 #177
You know, I can't help thinking that there are a NUMBER of major
items on the TO DO list, this one, and several others that are related.
The point made that future clients and backends can't talk to old tables
is a good one. I used to rant and rave about Microslop doing that every
third or fourth version, and Postgres does it every minor revision. Hmmmm.

Is there a ROADMAP of integrated todo's somewhere?

Marc G. Fournier wrote:
On Thu, 18 Sep 2003, Lamar Owen wrote:
Huh? I have no disagreement that upgrading is a key feature that we are
lacking ... but, if there are any *on disk* changes between releases, how
do you propose 'in place upgrades'?

RTA. It's been hashed, rehashed, and hashed again. I've asked twice if
eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 onto a
7.3); that question has yet to be answered.


'K, I had already answered it as part of this thread when I suggested
doing exactly that ... in response to which several ppl questioned the
feasibility of setting up a duplicate system with >1TB of disk space to do
the replication over to ...

See: http://archives.postgresql.org/pgsql...9/msg00886.php


Nov 11 '05 #178
Marc G. Fournier wrote:
'K, I had already answered it as part of this thread when I suggested
doing exactly that ... in response to which several ppl questioned the
feasibility of setting up a duplicate system with >1TB of disk space to do
the replication over to ...
The quote mentioned is a question, not an answer. You said:
'k, but is it out of the question to pick up a duplicate server, and use
something like eRServer to replicate the databases between the two
systems, with the new system having the upgraded database version running
on it, and then cutting over once its all in sync?


'Something like eRserver' doesn't give me enough detail; so I asked if
eRserver could do this, mentioning specific version numbers. A straight
answer -- yes it can, or no it can't -- would be nice. So you're saying
that eRserver can do this, right? Now if there just wasn't that java
dependency.... Although the contrib rserv might suffice for data
migration capabilities.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute


Nov 11 '05 #179
Hello,

Sure, this is all reasonable, but it would come after the initial 30
days. The first 30 days are specifically a proof of concept, validity
study, review-of-findings type of thing, which is why I stated that we
would produce the findings and our project plan to proceed after the
initial 30 days.

Sincerely,

Joshua Drake

Network Administrator wrote:
That sounds good save two things. We need to state what the project run
dates are and what happens at or around the due date. That is to say: we
have the deliverable for testing (beta ready), more time is needed to
complete core features (alpha ready) and therefore more funds are needed,
the project is on hold due to features needed outside the scope of the
project, etc., etc., etc...

You get the idea.

Quoting "Joshua D. Drake" <jd@commandprompt.com>:
Hello,

O.k. here are my thoughts on how this could work:

Command Prompt will set up an escrow account online at www.escrow.com.
When the escrow account totals 2000.00 and is released, Command Prompt
will dedicate a programmer for one month to debugging, documenting,
reviewing, digging, crying, screaming, begging and bleeding with the
code. At the end of the month (and probably during, depending on how
everything goes) Command Prompt will release its findings. The findings
will include a project plan on moving forward over the next 5 months (if
that is what it takes) to produce the first functional pg_upgrade.

If the project is deemed as moving in the right direction by the
community members and specifically
the core members we will setup milestone payments for the project.

What does everyone think?

Sincerely,

Joshua D. Drake
Dennis Gearon wrote:
I had already committed $50/mo.

Robert Creager wrote:

Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700)
Dennis Gearon <ge*****@fireserve.net> uttered something amazingly
similar to:

>Robert Creager wrote:
>
>>Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700)
>>"Joshua D. Drake" <jd@commandprompt.com> uttered something amazingly
>>similar to:
>>
>>>If someone is willing to pony up 2000.00 per month for a period of
>>>at
>>
>>Well, if you're willing to set up some sort of escrow, I'll put in
>>$100. I
>>
>Is that $100 times once, or $100 X 6 mos anticipated development time?
>
That's $100 once. And last I looked, there are well over 1800
subscribers on
this list alone. On the astronomically small chance every one of
them did
what I'm doing, it would cover more than 6 months of development time
;-) This
strikes me as like supporting public radio. The individuals do some,
and the
corporations do a bunch.

I'm just putting my money toward a great product, rather than
complaining that
it's not done. Just like Joshua is doing. You cannot hire a competent
programmer for $24k a year, so he is putting up some money on this also.

There have been a couple of other bytes from small businesses, so who
knows!

You game?

Cheers,
Rob


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.




--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.


Nov 11 '05 #180
Manfred Koizar <mk*****@aon.at> writes:
I tend to believe that every code change or new feature that gets
implemented is unique by its nature, and if it involves catalog
changes it requires a unique upgrade script/tool. How should a
generic tool guess the contents of a new catalog relation?
*It does not have to*. Perhaps you should go back and study what
pg_upgrade actually did. It needed only minimal assumptions about the
format of either old or new catalogs. The reason is that it mostly
relied on portability work done elsewhere (in pg_dump, for example).
Rod's adddepend is a good example.
adddepend was needed because it was inserting knowledge not formerly
present. I don't think it's representative. Things we do more commonly
involve refactoring information --- for example, changing the division
of labor between pg_aggregate and pg_proc, or adding pg_cast to replace
what had been some hard-wired parser behavior.
... I wouldn't call it perfect (for example, I still don't know how
to insert the new table into template0),


.... in other words, it doesn't work and can't be made to work.

pg_upgrade would be a one-time solution for a fairly wide range of
upgrade problems. I don't want to get into developing custom solutions
for each kind of catalog change we might want to make. That's not a
productive use of time.

regards, tom lane


Nov 11 '05 #181
Manfred Koizar <mk*****@aon.at> writes:
Elsewhere in this current thread it has been suggested that the
on-disk format will stabilize at some time in the future and should
then be frozen to ensure painless upgrades. IMHO, at the moment when
data structures are declared stable and immutable the project is dead.
This is something that concerns me also.
A working pg_upgrade is *not* the first thing we need.
Yes it is. As you say later,
... But once the infrastructure is in place, things
should get easier.


Until we have a working pg_upgrade, every little catalog change will
break backwards compatibility. And I do not feel that the appropriate
way to handle catalog changes is to insist on one-off solutions for each
one. Any quick look at the CVS logs will show that minor and major
catalog revisions occur *far* more frequently than changes that would
affect on-disk representation of user data. If we had a working
pg_upgrade then I'd be willing to think about committing to "no user
data changes without an upgrade path" as project policy. Without it,
any such policy would simply stop development in its tracks.

regards, tom lane


Nov 11 '05 #182


On Fri, 19 Sep 2003, Tom Lane wrote:
Manfred Koizar <mk*****@aon.at> writes:
Elsewhere in this current thread it has been suggested that the
on-disk format will stabilize at some time in the future and should
then be frozen to ensure painless upgrades. IMHO, at the moment when
data structures are declared stable and immutable the project is dead.


This is something that concerns me also.


But, is there anything wrong with striving for something you mentioned
earlier ... "spooling" data structure changes so that they don't happen
every release, but every other one, maybe?
... But once the infrastructure is in place, things
should get easier.


Until we have a working pg_upgrade, every little catalog change will
break backwards compatibility. And I do not feel that the appropriate
way to handle catalog changes is to insist on one-off solutions for each
one. Any quick look at the CVS logs will show that minor and major
catalog revisions occur *far* more frequently than changes that would
affect on-disk representation of user data. If we had a working
pg_upgrade then I'd be willing to think about committing to "no user
data changes without an upgrade path" as project policy. Without it,
any such policy would simply stop development in its tracks.


hmmm ... k, is it feasible to go a release or two at a time without on
disk changes? if so, pg_upgrade might not be as difficult to maintain,
since, unless someone can figure out a way of doing it, 'on disk change
releases' could still require dump/reloads, with a period of stability in
between?

*Or* ... as we've seen more with this dev cycle than previous ones, how
much could be easily back-patched to the previous version(s) relatively
easily, without requiring on-disk changes?


Nov 11 '05 #183
"Joshua D. Drake" <jd@commandprompt.com> writes:
The reality of pg_dump is not a good one. It is buggy and not very
reliable.
I think everyone acknowledges that we have more work to do on pg_dump.
But we have to do that work anyway. Spreading ourselves thinner by
creating a whole new batch of code for in-place upgrade isn't going to
improve the situation. The thing I like about the pg_upgrade approach
is that it leverages a lot of code we already have and will need to
continue to maintain in any case.

Also, to be blunt: if pg_dump still has problems after all the years
we've put into it, what makes you think that in-place upgrade will
magically work reliably?
This I am hoping
changes in 7.4 as we moved to a pure "c" implementation.


Eh? AFAIR, pg_dump has always been in C.

regards, tom lane


Nov 11 '05 #184
On Sat, 2003-09-20 at 11:17, Tom Lane wrote:
"Marc G. Fournier" <sc*****@postgresql.org> writes:
No, I'm not suggesting no catalog changes ... wait, I might be wording
this wrong ... there are two changes that right now requires a
dump/reload, changes to the catalogs and changes to the data structures,
no? Or are these effectively inter-related?


Oh, what you're saying is no changes in user table format. Yeah, we


Whew, we're finally on the same page!

So, some definitions we can agree on?
"catalog change": CREATE or ALTER a pg_* table.
"on-disk structure", a.k.a. "user table format": the way that the
tables/fields are actually stored on disk.

So, a catalog change should *not* require a dump/restore, but an
ODS/UTF change should.

Agreed?

--
-----------------------------------------------------------------
Ron Johnson, Jr. ro***********@cox.net
Jefferson, LA USA

"they love our milk and honey, but preach about another way of living"
Merle Haggard, "The Fighting Side Of Me"

Nov 11 '05 #185


On Sat, 20 Sep 2003, Ron Johnson wrote:
On Sat, 2003-09-20 at 11:17, Tom Lane wrote:
"Marc G. Fournier" <sc*****@postgresql.org> writes:
No, I'm not suggesting no catalog changes ... wait, I might be wording
this wrong ... there are two changes that right now requires a
dump/reload, changes to the catalogs and changes to the data structures,
no? Or are these effectively inter-related?


Oh, what you're saying is no changes in user table format. Yeah, we


Whew, we're finally on the same page!

So, some definitions we can agree on?
"catalog change": CREATE or ALTER a pg_* table.
"on-disk structure", a.k.a. "user table format": the way that the
tables/fields are actually stored on disk.

So, a catalog change should *not* require a dump/restore, but an
ODS/UTF change should.


As long as pg_upgrade is updated/tested for this, yes, that is what the
thought is ... but, that still requires someone(s) to step up and work
on/maintain pg_upgrade for this to happen ... all we are agreeing to right
now is implement a policy whereby maintaining pg_upgrade is *possible*,
not one where maintaining pg_upgrade is *done* ...

Nov 11 '05 #186
On Fri, 2003-09-19 at 06:37, Christopher Browne wrote:
ro***********@cox.net (Ron Johnson) wrote:
On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote:
On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote:

> So instead of 1TB of 15K fiber channel disks (and the requisite
> controllers, shelves, RAID overhead, etc), we'd need *two* TB of
> 15K fiber channel disks (and the requisite controllers, shelves,
> RAID overhead, etc) just for the 1 time per year when we'd upgrade
> PostgreSQL?

Nope. You also need it for the time when your vendor sells
controllers or chips or whatever with known flaws, and you end up
having hardware that falls over 8 or 9 times in a row.
????


This of course never happens in real life; expensive hardware is
_always_ UTTERLY reliable.

And the hardware vendors all have the same high standards as, well,
certain database vendors we might think of.

After all, Oracle and MySQL AB would surely never mislead their
customers about the merits of their database products any more than
HP, Sun, or IBM would about the possibility of their hardware having
tiny flaws.


Well, I use Rdb, so I wouldn't know about that!

(But then, it's an Oracle product, and runs on HPaq h/w...)
And I would /never/ claim to have lost sleep as a result of flakey
hardware. Particularly not when it's a HA fibrechannel array. I'm
/sure/ that has never happened to anyone. [The irony here should be
causing people to say "ow!"]


Sure, I've seen expensive h/w flake out. It was the "8 or 9 times
in a row" that confused me.

--
-----------------------------------------------------------------
Ron Johnson, Jr. ro***********@cox.net
Jefferson, LA USA

The difference between drunken sailors and Congressmen is that
drunken sailors spend their own money.
---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings

Nov 11 '05 #187
> No can do, unless your intent is to force people to work on pg_upgrade
and nothing else (a position I for one would ignore ;-)). With such a
policy and no pg_upgrade we'd be unable to apply any catalog changes at
all, which would pretty much mean that 7.5 would look exactly like 7.4.


Not sure about your position here. You claimed that it would be a good idea to
freeze the on disk format for at least a couple of versions. Do you argue
here that this cycle shouldn't start with the next version, or did you
reverse your thought?

If the former, I think you're right. There are some changes close to being
made that are too big for that - if I have read this list correctly.
Tablespaces and PITR would certainly change it.

But if the freeze could start after 7.5 and last two-three years, it might
help things.

--
Kaare Rasmussen --Linux, spil,-- Tlf: 3816 2582
Kaki Data tshirts, merchandize Fax: 3816 2501
Howitzvej 75 Åben 12.00-18.00 Email: ka*@kakidata.dk
2000 Frederiksberg Lørdag 12.00-16.00 Web: www.suse.dk

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to ma*******@postgresql.org

Nov 11 '05 #188
Kaare Rasmussen <ka*@kakidata.dk> writes:
Not sure about your position here. You claimed that it would be a good idea to
freeze the on disk format for at least a couple of versions.
I said it would be a good idea to freeze the format of user tables (and
indexes) across multiple releases. That's distinct from the layout and
contents of system catalogs, which are things that we revise constantly.
We could not freeze the system catalogs without blocking development
work, and we should not make every individual catalog change responsible
for devising its own in-place-upgrade scheme either. We need some
comprehensive tool for handling catalog upgrades automatically. I think
pg_upgrade points the way to one fairly good solution, though I'd not
rule out other approaches if someone has a bright idea.

Clear now?
Do you argue here that this cycle shouldn't start with the next
version,


I have not said anything about that in this thread. Now that you
mention it, I do think it'd be easier to start with the freeze cycle
after tablespaces are in place. On the other hand, tablespaces might
not appear in 7.5 (they already missed the boat for 7.4). And
tablespaces are something that we could expect pg_upgrade to handle
without a huge amount more work. pg_upgrade would already need to
contain logic to determine the mapping from old-installation user table
file names to new-installation ones, because the table OIDs would
normally be different. Migrating to tablespaces simply complicates that
mapping somewhat. (I am assuming that tablespaces won't affect the
contents of user table files, only their placement in the Unix directory
tree.)

I think a reasonable development plan is to work on pg_upgrade assuming
the current physical database layout (no tablespaces), and concurrently
work on tablespaces. The eventual merge would require teaching
pg_upgrade about mapping old to new filenames in a tablespace world.
It should only be a small additional amount of work to teach it how to
map no-tablespaces to tablespaces.
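
For illustration, the kind of old-to-new mapping involved might be sketched
roughly like this (this is not pg_upgrade's actual code; it assumes both
clusters are running and reachable through something like psycopg2, and that
a user table's data file is named after its pg_class OID under
base/<database oid>/ -- all names below are hypothetical):

    import psycopg2

    USER_TABLES = """
        SELECT n.nspname, c.relname, c.oid
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relkind = 'r'
          AND n.nspname NOT IN ('pg_catalog', 'information_schema')
    """

    def table_oids(dsn):
        # map (schema, table) -> OID of its heap file in this cluster
        conn = psycopg2.connect(dsn)
        try:
            cur = conn.cursor()
            cur.execute(USER_TABLES)
            return dict(((nsp, rel), oid) for nsp, rel, oid in cur.fetchall())
        finally:
            conn.close()

    def file_mapping(old_dsn, new_dsn):
        # pair each old data file name with the name the new cluster expects
        old, new = table_oids(old_dsn), table_oids(new_dsn)
        return dict((old[k], new[k]) for k in old if k in new)

Matching by (schema, table) name is what makes the pairing independent of the
OIDs themselves; tablespaces would only change the directory part of each
path, not the pairing.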

In short, if people actually are ready to work on pg_upgrade now,
I don't see any big reason not to let them ...

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend

Nov 11 '05 #189
Manfred Koizar wrote:
On Thu, 18 Sep 2003 12:11:18 -0400, Lamar Owen <lo***@pari.edu> wrote:
Marc G. Fournier wrote:
[...] upgrading is a key feature [...]
a migration tool
that could read the old format _without_a_running_old_backend_ [...]
the new backend is powerless to recover the old data.
OS upgrades [...], FreeBSD ports upgrades, and RPM
upgrades are absolutely horrid at this point. [...]
[censored] has a better system than we
[...] the pain of upgrading [...]
*I* should complain about a ramble? :-)
Lamar, I *STRONGLY* agree with almost everything you say here and in
other posts, except perhaps ... You et al. seem to think that system catalog changes wouldn't be a
problem if only we could avoid page format changes. This is not
necessarily so. Page format changes can be handled without much
effort, if
No, I'm aware of the difference, and I understand the issues with
catalog changes. Tom and I, among others, have discussed this. We
talked about reorganizing the system catalog to separate the data that
typically changes with a release from the data that describes the user's
tables. It is a hard thing to do, separating this data.
Oh, that's me, I think. I am to blame for the heap tuple header
changes between 7.2 and 7.3;
It has happened at more than one version change, not just 7.2->7.3. I
actually was thinking about a previous flag day. So the plural still
stands.
Later, in your "Upgrading rant" thread, I even posted some code
(http://archives.postgresql.org/pgsql.../msg00294.php).
Unfortunately this went absolutely unnoticed, probably because it
looked so long because I fat-fingered the mail and included the code
twice. :-(
I don't recall that, but I believe you. My antivirus software may have
flagged it if it had more than one . in the file name. But I may go
back and look at it. Again, I wasn't fingering 7.2->7.3 -- it has
happened more than once prior to that.
A working pg_upgrade is *not* the first thing we need. What we need
first is willingness to not break backwards compatibility.


To this I agree. But it must be done in stages, as Tom, Marc, and
others have already said (I read the rest of the thread before replying
to this message). We can't simply declare a catalog freeze (which you
didn't do, I know), nor can we declare an on-disk format change freeze.
We need to think about what is required to make upgrades easy, not
what is required to write a one-off upgrade tool (which each version of
pg_upgrade ends up being). Can the system catalog be made more
friendly? Is upgrading by necessity a one-step process (that is, can we
stepwise migrate tables as they are used/upgraded individually)? Can we
decouple the portions of the system catalogs that change from the
portions that give basic access to the user's data? That is, what would
be required to allow a backend to read old data tables? An upgrade tool
is redundant if the backend is version agnostic and version aware.

Look, my requirements are simple. I should be able to upgrade the
binaries and not lose access to my data. That's the bottom line.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 11 '05 #190
On Thu, Sep 18, 2003 at 06:49:56PM -0300, Marc G. Fournier wrote:

Hadn't thought of it that way ... but, what would prompt someone to
upgrade, then use something like erserver to roll back? All I can think
of is that the upgrade caused a lot of problems with the application
itself, but in a case like that, would you have the time to be able to
're-replicate' back to the old version?


The trick is to have your former master set up as slave before you
turn your application back on.

The lack of a rollback strategy in PostgreSQL upgrades is a major
barrier for corporate use. One can only do so much testing, and it's
always possible you've missed something. You need to be able to go
back to some known-working state.

A

--
----
Andrew Sullivan 204-4141 Yonge Street
Liberty RMS Toronto, Ontario Canada
<an****@libertyrms.info> M2P 2A8
+1 416 646 3304 x110
---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to ma*******@postgresql.org so that your
message can get through to the mailing list cleanly

Nov 11 '05 #191
On Sat, Sep 20, 2003 at 04:54:30PM -0500, Ron Johnson wrote:
Sure, I've seen expensive h/w flake out. It was the "8 or 9 times
in a row" that confused me.


You need to talk to people who've had Sun Ex500s with the UltraSPARC
II built with the IBM e-cache modules. Ask 'em about the reliability
of replacement parts.

A

--
----
Andrew Sullivan 204-4141 Yonge Street
Liberty RMS Toronto, Ontario Canada
<an****@libertyrms.info> M2P 2A8
+1 416 646 3304 x110
---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org

Nov 11 '05 #192
Marc G. Fournier wrote:


On Mon, 15 Sep 2003, Joshua D. Drake wrote:
I'm not going to rehash the arguments I have made before; they are all
archived. Suffice to say you are simply wrong. The number of
complaints over the years shows that there IS a need.

I at no point suggested that there was not a need. I only suggest that
the need may not be as great as some suspect or feel. To be honest -- if
your arguments were the "need" that everyone had... it would have been
implemented somehow. It hasn't happened yet, which would suggest that the number
of people that have the "need" at your level is not as great as the
number of people who have different "needs" from PostgreSQL.


Just to add to this ... Bruce *did* start pg_upgrade, but I don't recall
anyone else looking at extending it ... if the *need* was so great,
someone would have stepped up and looked into adding to what was already
there ...


I was thinking of working on pg_upgrade for 7.4, but other things seemed
more important.

--
Bruce Momjian | http://candle.pha.pa.us
pg***@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073

---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to ma*******@postgresql.org so that your
message can get through to the mailing list cleanly

Nov 11 '05 #193

Also, to be blunt: if pg_dump still has problems after all the years
we've put into it, what makes you think that in-place upgrade will
magically work reliably?


Fair enough. On another front then... would all this energy we are
talking about with pg_upgrade
be better spent on pg_dump/pg_dumpall/pg_restore?

I am hoping this changes in 7.4, as we moved to a pure "C" implementation.


You're right, that was a mistype. I was very tired and reading three
different threads at the same time.

Sincerely,

Joshua Drake

Eh? AFAIR, pg_dump has always been in C.

regards, tom lane


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 11 '05 #194
Tom Lane wrote:
Kaare Rasmussen <ka*@kakidata.dk> writes:
Not sure about your position here. You claimed that it would be a good idea to
freeze the on disk format for at least a couple of versions.

I said it would be a good idea to freeze the format of user tables (and
indexes) across multiple releases.


Indexes aren't as big a deal. Reindexing is less painful than dump/restore. It could
still lead to significant downtime for very large databases (at least for the tables
that are being reindexed), but not nearly as much.
---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster

Nov 11 '05 #195
"Joshua D. Drake" <jd@commandprompt.com> writes:
Fair enough. On another front then... would all this energy we are
talking about with pg_upgrade
be better spent on pg_dump/pg_dumpall/pg_restore?


Well, we need to work on pg_dump too. But I don't foresee it ever
getting fast enough to satisfy the folks who want zero-downtime
upgrades. So pg_upgrade is also an important project.

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings

Nov 11 '05 #196
On Mon, 2003-09-22 at 18:30, Tom Lane wrote:
"Joshua D. Drake" <jd@commandprompt.com> writes:
Fair enough. On another front then... would all this energy we are
talking about with pg_upgrade
be better spent on pg_dump/pg_dumpall/pg_restore?


Well, we need to work on pg_dump too. But I don't foresee it ever
getting fast enough to satisfy the folks who want zero-downtime


Multi-threaded pg_dump.

"It'll choke the IO system!!!" you say? Well, heck, get a better
IO system!!!!

Or... use fewer threads.

No, it won't eliminate down-time, but it is necessary for big databases.
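
A rough sketch of that idea -- not a real multi-threaded pg_dump; the table
and database names are hypothetical, and since each pg_dump invocation takes
its own snapshot, the per-table dumps are only mutually consistent if nothing
is writing to the database while they run:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    DBNAME = "mydb"                                  # hypothetical
    TABLES = ["orders", "order_lines", "customers"]  # hypothetical

    def dump_table(table):
        outfile = "%s.%s.dump" % (DBNAME, table)
        # pg_dump -t restricts the dump to a single table
        subprocess.run(["pg_dump", "-t", table, "-f", outfile, DBNAME],
                       check=True)
        return outfile

    # "use fewer threads" == lower max_workers
    with ThreadPoolExecutor(max_workers=4) as pool:
        for written in pool.map(dump_table, TABLES):
            print("wrote", written)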

--
-----------------------------------------------------------------
Ron Johnson, Jr. ro***********@cox.net
Jefferson, LA USA

"You ask us the same question every day, and we give you the
same answer every day. Someday, we hope that you will believe us..."
U.S. Secretary of Defense Donald Rumsfeld, to a reporter
---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings

Nov 11 '05 #197
>>>>> "JS" == Joseph Shraibman <jk*@selectacast.net> writes:

JS> Indexes aren't as big a deal. Reindexing is less painful than
JS> dump/restore. It could still lead to significant downtime for very
JS> large databases (at least the for the tables that are being
JS> reindexed), but not nearly as much.

Well, for me the create index part of the restore is what takes about
3x the time for the data load. Total about 4 hours. The dump takes 1
hour.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D. Khera Communications, Inc.
Internet: kh***@kciLink.com Rockville, MD +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend

Nov 11 '05 #198
Vivek Khera <kh***@kcilink.com> writes:
Well, for me the create index part of the restore is what takes about
3x the time for the data load. Total about 4 hours. The dump takes 1
hour.


What sort_mem do you use for the restore? Have you tried increasing it?

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to ma*******@postgresql.org)

Nov 11 '05 #199


On Tue, 23 Sep 2003, Tom Lane wrote:
Vivek Khera <kh***@kcilink.com> writes:
Well, for me the create index part of the restore is what takes about
3x the time for the data load. Total about 4 hours. The dump takes 1
hour.


What sort_mem do you use for the restore? Have you tried increasing it?


I've tried restoring a >5gig database with sort_mem up to 100Meg in size,
and didn't find that it sped up the index creation enough to make a
difference ... shaved off a couple of minutes over the whole reload, so
seconds off of each index ... and that was with the WAL logs also disabled
:(
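
One way to raise sort_mem for just the restore session, without editing
postgresql.conf, is to pass it through PGOPTIONS -- a minimal sketch, where
the database and dump file names are hypothetical and the value is in
kilobytes:

    import os
    import subprocess

    # bump sort_mem for everything this psql session runs,
    # including the CREATE INDEX statements in the dump
    env = dict(os.environ, PGOPTIONS="-c sort_mem=65536")
    subprocess.run(["psql", "-d", "mydb", "-f", "mydb.dump.sql"],
                   check=True, env=env)
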
---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 11 '05 #200
