
State of Beta 2


Anyone out there using beta 2 in production situations? Comments on
stability? I am rolling out a project in the next 4 weeks, and really
don't want to go through an upgrade soon after it's released on an
Unsuspecting Client, so I would LIKE to start working with 7.4.

--------------------

Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com

Nov 11 '05
Lamar Owen wrote:

As most everyone here knows, I am a big proponent of in-place
upgrades, and have been so for a very long time. Read the archives;
I've said my piece, and am not going to rehash at this time.


I look forward to when or if a sponsor can add in-place upgrades to
Postgres. Big projects like that, vs. upgrades, take focused, mid- or
long-term efforts with people who are committed to only that project.
Translation: money and skills.

Nov 11 '05 #81


On Sat, 13 Sep 2003, Lamar Owen wrote:
Marc G. Fournier wrote:
'k, but is it out of the question to pick up a duplicate server, and use
something like eRServer to replicate the databases between the two
systems, with the new system having the upgraded database version running
on it, and then cutting over once it's all in sync?


Can eRserver replicate a 7.3.x to a 7.2.x? Or 7.4.x to 7.3.x?


I thought we were talking about upgrades here?
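
In outline, that replicate-and-cutover path would look something like the
sketch below. Host names, paths, and commands are illustrative only;
eRServer's actual setup uses its own tooling, not these commands.

# Sketch only: upgrade-by-replication. newhost runs the new major
# version; oldhost keeps serving production until cutover.
/usr/local/pgsql-7.4/bin/initdb -D /usr/local/pgsql-7.4/data   # new cluster
createdb -h newhost mydb                                       # target db
pg_dump -h oldhost mydb | psql -h newhost mydb                 # one-time seed
# Start trigger-based replication (eRServer or similar) so ongoing writes
# on oldhost stream to newhost. At cutover: block writes on oldhost, let
# replication drain, then repoint the application at newhost.

The appeal of this route is that the old server stays in service the whole
time, so the only downtime is the final cutover window.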

Nov 11 '05 #83


Not that I know anything about the internal workings of PG, but it seems like a
big part of the issue is the on-disk representation of the database. I've never had
a problem with the whole dump/restore process, and in fact anyone who has been
doing this long enough will remember when that process was the gospel for
db upgrades. However, with 24x7 operations, or in general for anyone who
simply can NOT tolerate the downtime to do an upgrade, I'm wondering if there is
perhaps a way to abstract the on-disk representation of PG data so that 1)
future upgrades do not have to maintain the same structure if another
representation is deemed better, and 2) upgrades could be done in place.
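
For reference, the dump/reload cycle in question runs roughly like this
(a sketch only; version numbers, paths, and ports are illustrative):

# Traditional major-version upgrade: dump with the old binaries, reload
# into a freshly initdb'd cluster running the new version.
pg_dumpall -p 5432 > /backup/everything.sql            # from old cluster
/usr/local/pgsql-7.4/bin/initdb -D /usr/local/pgsql-7.4/data
/usr/local/pgsql-7.4/bin/pg_ctl -D /usr/local/pgsql-7.4/data -o '-p 5433' start
psql -p 5433 -f /backup/everything.sql template1       # into new cluster
# The database is unavailable (or frozen for writes) for the duration.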

The abstraction I am talking about would be a logical layer that would handle
disk I/O, including the format of that data (let's call this the ADH). By
abstracting that information, the upgrade question *could* become: "If I
upgrade from, say, 7.2.x to 7.3.x or 7.4.x, do I *want* to take advantage of the new
disk representation?" If yes, then you would go through the necessary
process of upgrading the database, which would always default to the most current
representation. If not, then because the ADH is abstract to the application, it
could run in a 7.2.x or 7.3.x "compatibility mode" so that you would not *need*
to do the dump and restore.

Again, I am completely ignorant of how this really works (and I don't have time
to read through the code), but what I think I'm getting at is a DBI/DBD type
scenario. As a result, there would be another layer of complexity, and I would
think some performance loss as well, but how much complexity and performance loss
is, to me, the question. When you juxtapose that against the ability to do
upgrades without the dump/restore, I would think many organizations would say,
"OK, I'll take the x% performance hit and wait until I have the resources to
upgrade the disk representation."

One of the things I'm involved with in Philadelphia is
providing IT services to social service programs for outsourced agencies of the
local government. In particular, there have been and are active moves in PA to
have these social service data warehouses go up. Even though it will probably
take years to actually realize this, by that time, once you aggregate all the
local agency databases together, we're going to be talking about very large
datasets. That means that (at least for social service programs) IT is going
to have to approach this whole upgrade question from what I think will
be a standpoint of availability. In short, I don't think it is too far off to
consider that the "little guys" will need to do reliable "in place" upgrades
with 100% confidence.

Hopefully, I was clear on my macro-concept even if I got the micro-concepts wrong.

Quoting Tom Lane <tg*@sss.pgh.pa.us>:
Kaare Rasmussen <ka*@kakidata.dk> writes:
"interesting" category. It is in the category of things that will only
happen if people pony up money to pay someone to do uninteresting work.
And for all the ranting, I've not seen any ponying.

Just for the record, now that there's an argument that big companies need
24x7 - could you or someone else with knowledge of what's involved give a
guesstimate of how many ponies we're talking. Is it one man-month, one
man-year, more, or what?


Well, the first thing that needs to happen is to redesign and
reimplement pg_upgrade so that it works with current releases and is
trustworthy for enterprise installations (the original script version
depended far too much on being run by someone who knew what they were
doing, I thought). I guess that might take, say, six months for one
well-qualified hacker. But it would be an open-ended commitment,
because pg_upgrade only really solves the problem of installing new
system catalogs. Any time we do something that affects the contents or
placement of user table and index files, someone would have to figure
out and implement a migration strategy.

Some examples of things we have done recently that could not be handled
without much more work: modifying heap tuple headers to conserve
storage, changing the on-disk representation of array values, fixing
hash indexes. Examples of probable future changes that will take work:
adding tablespaces, adding point-in-time recovery, fixing the interval
datatype, generalizing locale support so you can have more than one
locale per installation.

It could be that once pg_upgrade exists in a production-ready form,
PG developers will voluntarily do that extra work themselves. But
I doubt it (and if it did happen that way, it would mean a significant
slowdown in the rate of development). I think someone will have to
commit to doing the extra work, rather than just telling other people
what they ought to do. It could be a permanent full-time task ...
at least until we stop finding reasons we need to change the on-disk
data representation, which may or may not ever happen.

regards, tom lane


--
Keith C. Perry
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com

_____________________________________
This email account is being hosted by:
VCSN, Inc : http://vcsn.com


Nov 11 '05 #85
At 07:16 PM 9/13/2003 -0400, Lamar Owen wrote:
'migration' server. And I really don't want to think about dump/restore
of 100TB (if PostgreSQL actually stores the image files, which it might).


Hmm. Just curious: do people generally back up 100TB of data, or once most
reach this point do they have to hope that it's just hardware failures they'll
deal with and not software/other issues?

100TB sounds like a lot of backup media and time... Not to mention ensuring
that the backups will work with available and functioning backup hardware.

Head hurts just to think about it,

Link.


Nov 11 '05 #87
Network Administrator <ne******@vcsn.com> writes:
The abstraction I am talking about would be a logical layer that would handle
disk I/O, including the format of that data (let's call this the ADH).


This sounds good in the abstract, but I don't see how you would define
such a layer in a way that was both thin and able to cope with large
changes in representation. In a very real sense, "handle disk I/O
including the format of the data" describes the entire backend. To
create an abstraction layer that will actually give any traction for
maintenance, you'd have to find a way to slice it much more narrowly
than that.

Even if the approach can be made to work, defining such a layer and then
revising all the existing code to go through it would be a huge amount
of work.

Ultimately there's no substitute for hard work :-(

regards, tom lane


Nov 11 '05 #88
On Sun 14 Sep 2003 12:20, Lincoln Yeoh wrote:
At 07:16 PM 9/13/2003 -0400, Lamar Owen wrote:
'migration' server. And I really don't want to think about dump/restore
of 100TB (if PostgreSQL actually stores the image files, which it might).
Hmm. Just curious, do people generally backup 100TB of data, or once most
reach this point they have to hope that it's just hardware failures they'll
deal with and not software/other issues?


Normally you would have RAID with mirroring and CRC, so that if one of the
disks in the array fails, the system keeps working. You can even
have hot-pluggable disks, so you can swap out the broken disk without
rebooting.

You can also have a hot backup using eRServ (Replicate your DB server on a
backup server, just in case).
100TB sounds like a lot of backup media and time... Not to mention ensuring
that the backups will work with available and functioning backup hardware.


I don't know, but there may be backup systems for that amount of space. We
have just got some 200GB tape devices, and they are about 2 years old. With a
5-tape robot, you have 1TB of backup.

--
Why use just any relational database,
when you can use PostgreSQL?
-----------------------------------------------------------------
Martín Marqués | mm******@unl.edu.ar
Programmer, Administrator, DBA | Centro de Telematica
Universidad Nacional del Litoral
-----------------------------------------------------------------

Nov 11 '05 #89
After a long battle with technology, ma****@bugs.unl.edu.ar (Martin Marques), an earthling, wrote:
On Sun 14 Sep 2003 12:20, Lincoln Yeoh wrote:
>At 07:16 PM 9/13/2003 -0400, Lamar Owen wrote:
>'migration' server. And I really don't want to think about dump/restore
>of 100TB (if PostgreSQL actually stores the image files, which it might).


Hmm. Just curious, do people generally backup 100TB of data, or once most
reach this point they have to hope that it's just hardware failures they'll
deal with and not software/other issues?


Normally you would have RAID with mirroring and CRC, so that if one of the
disks in the array fails, the system keeps working. You can even
have hot-pluggable disks, so you can swap out the broken disk without
rebooting.

You can also have a hot backup using eRServ (Replicate your DB server on a
backup server, just in case).


In a High Availability situation, there is little choice but to create
some form of "hot backup." And if you can't afford that, then reality
is that you can't afford to pretend to have "High Availability."
100TB sounds like a lot of backup media and time... Not to mention
ensuring that the backups will work with available and functioning
backup hardware.


I don't know, but there may be backup systems for that amount of
space. We have just got some 200GB tape devices, and they are about
2 years old. With a 5-tape robot, you have 1TB of backup.


Certainly there are backup systems designed to cope with those sorts
of quantities of data. With 8 tape drives, and a rack system that
holds 200 cartridges, you not only can store a HUGE pile of data, but
you can push it onto tape about as quickly as you can generate it.

<http://spectralogic.com> discusses how to use their hardware and
software products to do terabytes of backups in an hour. They sell a
software product called "Alexandria" that knows how to (at least
somewhat) intelligently back up SAP R/3, Oracle, Informix, and Sybase
systems. (When I was at American Airlines, that was the software in
use.)

Generally, this involves having a bunch of tape drives that are
simultaneously streaming different parts of the backup.

When it's Oracle that's in use, a common strategy involves
periodically doing a "hot" backup (so you can quickly get back to a
known database state), and then having a robot tape drive assigned to
regularly push archive logs to tape as they are produced.

That would more or less resemble taking a "consistent filesystem
backup" of a PG database, and then saving the sequence of WAL files.
(The disanalogies are considerable; that should improve at least a
_little_ once PITR comes along for PostgreSQL...)
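
For what it's worth, once PITR does land, that cycle for PG might look
loosely like the following. This is a sketch against the planned
WAL-archiving hook; nothing below works on 7.x, and the parameter shown
is an assumption about the future feature, not a current one.

# Hypothetical PITR-style backup cycle (assumes a future archive_command-
# style hook; not available in any released version as of this thread):
#   postgresql.conf:  archive_command = 'cp %p /archive/%f'
pg_ctl -D "$PGDATA" stop                                # or snapshot for a hot copy
tar czf /backup/base-$(date +%Y%m%d).tar.gz "$PGDATA"   # consistent base backup
pg_ctl -D "$PGDATA" start
# Archived WAL segments accumulate in /archive between base backups;
# recovery = restore the newest base backup, then replay WAL forward
# to the desired point in time.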

None of this is particularly cheap or easy; need I remind gentle
readers that if you can't afford that, then you essentially can't
afford to claim "High Availability?"
--
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://www.ntlug.org/~cbbrowne/nonrdbms.html
Who's afraid of ARPA?
Nov 11 '05 #90
