
State of Beta 2


Anyone out there using beta 2 in production situations? Comments on
stability? I am rolling out a project in the next 4 weeks, and really
don't want to go through an upgrade soon after it's released on an
unsuspecting client, so I would LIKE to start working with 7.4.

--------------------

Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com
---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddres sHere" to ma*******@postg resql.org)

Nov 11 '05
On Sun, 2003-09-14 at 14:17, Christopher Browne wrote:
After a long battle with technology, ma****@bugs.unl.edu.ar (Martin Marques), an earthling, wrote:
On Sun 14 Sep 2003 12:20, Lincoln Yeoh wrote:
>At 07:16 PM 9/13/2003 -0400, Lamar Owen wrote:
[snip] Certainly there are backup systems designed to cope with those sorts
of quantities of data. With 8 tape drives, and a rack system that
holds 200 cartridges, you not only can store a HUGE pile of data, but
you can push it onto tape about as quickly as you can generate it.

<http://spectralogic.co m> discusses how to use their hardware and
software products to do terabytes of backups in an hour. They sell a
software product called "Alexandria " that knows how to (at least
somewhat) intelligently backup SAP R/3, Oracle, Informix, and Sybase
systems. (When I was at American Airlines, that was the software in
use._
HP, Hitachi, and a number of other vendors make similar hardware.

You mean the database vendors don't build that parallelism into
their backup procedures?
Generally, this involves having a bunch of tape drives that are
simultaneously streaming different parts of the backup.

When it's Oracle that's in use, a common strategy involves
periodically doing a "hot" backup (so you can quickly get back to a
known database state), and then having a robot tape drive assigned to
regularly push archive logs to tape as they are produced.
Rdb does the same thing. You mean DB/2 can't/doesn't do that?

[snip] None of this is particularly cheap or easy; need I remind gentle
readers that if you can't afford that, then you essentially can't
afford to claim "High Availability?"


--
-----------------------------------------------------------------
Ron Johnson, Jr. ro***********@cox.net
Jefferson, LA USA

"(Women are) like compilers. They take simple statements and
make them into big productions."
Pitr Dubovitch
---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match

Nov 11 '05 #91
In the last exciting episode, ro***********@cox.net (Ron Johnson) wrote:
On Sun, 2003-09-14 at 14:17, Christopher Browne wrote:
<http://spectralogic.com> discusses how to use their hardware and
software products to do terabytes of backups in an hour. They sell a
software product called "Alexandria" that knows how to (at least
somewhat) intelligently back up SAP R/3, Oracle, Informix, and Sybase
systems. (When I was at American Airlines, that was the software in
use.)


HP, Hitachi, and a number of other vendors make similar hardware.

You mean the database vendors don't build that parallelism into
their backup procedures?


They don't necessarily build every conceivable bit of possible
functionality into the backup procedures they provide, if that's what
you mean.

Of the systems mentioned, I'm most familiar with SAP's backup
regimen; if you're using it with Oracle, you'll use tools called
"brbackup" and "brarchive", which provide a _moderately_ sophisticated
scheme for dealing with backing things up.

But if you need to do something wild, involving two nearby servers,
each with 8 tape drives, that are used to manage backups for a whole
cluster of systems, including a combination of OS backups, DB backups,
and application backups, it's _not_ reasonable to expect one DB
vendor's backup tools to be totally adequate to that.

Alexandria (and similar software) certainly needs tool support from DB
makers to allow them to intelligently handle streaming the data out of
the databases.

At present, this unfortunately _isn't_ something PostgreSQL does, from
two perspectives:

1. You can't simply keep the WALs and reapply them in order to bring
a second database up to date;

2. A pg_dump doesn't provide a way of streaming parts of the
database in parallel, at least not if all the data is in
one database. (There's some nifty stuff in eRServ that
might eventually be relevant, but probably not yet...)

There are partial answers:

- If there are multiple databases, starting multiple pg_dump
sessions provides some useful parallelism;

- A suitable logical volume manager may allow splitting off
a copy atomically, and then you can grab the resulting data
in "strips" to pull it in parallel.

Life isn't always perfect.
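
For illustration, the first of those partial answers might look
something like the little driver below; the database names and output
directory are invented for the sketch, which just runs one pg_dump
session per database and waits for them all:

# Sketch: parallel per-database dumps, as in the "multiple pg_dump
# sessions" partial answer above. Names and paths are hypothetical.
import subprocess

DATABASES = ["sales", "inventory", "archive"]   # hypothetical databases
OUTPUT_DIR = "/backup/strips"                   # hypothetical staging area

def start_dump(dbname):
    """Launch one pg_dump writing a custom-format dump for this database."""
    outfile = "%s/%s.dump" % (OUTPUT_DIR, dbname)
    return subprocess.Popen(["pg_dump", "--format=custom",
                             "--file=%s" % outfile, dbname])

# The parallelism comes entirely from the databases being independent:
# start every session first, then wait for each to finish.
procs = [(db, start_dump(db)) for db in DATABASES]
for db, p in procs:
    if p.wait() != 0:
        raise RuntimeError("pg_dump of %s exited with %d" % (db, p.returncode))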
Generally, this involves having a bunch of tape drives that are
simultaneously streaming different parts of the backup.

When it's Oracle that's in use, a common strategy involves
periodically doing a "hot" backup (so you can quickly get back to a
known database state), and then having a robot tape drive assigned
to regularly push archive logs to tape as they are produced.


Rdb does the same thing. You mean DB/2 can't/doesn't do that?


I haven't the foggiest idea, although I would be somewhat surprised if
it doesn't have something of the sort.
--
(reverse (concatenate 'string "moc.enworbbc" "@" "enworbbc"))
http://www.ntlug.org/~cbbrowne/wp.html
Rules of the Evil Overlord #139. "If I'm sitting in my camp, hear a
twig snap, start to investigate, then encounter a small woodland
creature, I will send out some scouts anyway just to be on the safe
side. (If they disappear into the foliage, I will not send out another
patrol; I will break out napalm and Agent Orange.)"
<http://www.eviloverlor d.com/>
Nov 11 '05 #92
Quoting Tom Lane <tg*@sss.pgh.pa.us>:
Network Administrator <ne******@vcsn.com> writes:
The abstraction I am talking about would be a logical layer that would handle
disk I/O, including the format of that data (let's call this the ADH).


This sounds good in the abstract, but I don't see how you would define
such a layer in a way that was both thin and able to cope with large
changes in representation. In a very real sense, "handle disk I/O
including the format of the data" describes the entire backend. To
create an abstraction layer that will actually give any traction for
maintenance, you'd have to find a way to slice it much more narrowly
than that.


*nod* I thought that would probably be the case. The "thickness" of that layer
would be directly related to how the backend was sliced. However, it seems to me
that right now this might not be possible while the backend is changing
between major releases. Perhaps once that doesn't fluctuate as much, it might be
feasible to create this layer so that it is not too fat.

Maybe the goal is too aggressive. To ask a (hopefully) simpler question: would
it be possible to choose the on-disk representation at compile time? I'm not
sure, but I think that might reduce the complexity, since the abstraction would
only exist before the application is built. Once compiled, there would be no
ambiguity in which representation is chosen.
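
To sketch the shape I have in mind (all names are made up, and the real
backend is C, so this only illustrates the structure, not actual code):

# Hypothetical shape of the "ADH" layer: the backend codes against one
# on-disk-format interface, and a single concrete format is fixed at
# build/configure time, so a compiled binary has no ambiguity about
# which representation it reads and writes.
from abc import ABC, abstractmethod

class DiskFormat(ABC):
    @abstractmethod
    def read_tuple(self, page, offset):
        """Return the tuple bytes stored at this page offset."""
    @abstractmethod
    def write_tuple(self, page, offset, data):
        """Store tuple bytes at this page offset."""

class FormatV73(DiskFormat):
    """One frozen representation: 4-byte little-endian length prefix."""
    def read_tuple(self, page, offset):
        length = int.from_bytes(page[offset:offset + 4], "little")
        return bytes(page[offset + 4:offset + 4 + length])
    def write_tuple(self, page, offset, data):
        page[offset:offset + 4] = len(data).to_bytes(4, "little")
        page[offset + 4:offset + 4 + len(data)] = data

# "Compile-time" choice: selected once by the build, never dispatched
# on at run time.
SELECTED_FORMAT = FormatV73()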
Even if the approach can be made to work, defining such a layer and then
revising all the existing code to go through it would be a huge amount
of work.

Ultimately there's no substitute for hard work :-(

regards, tom lane


True, which is why I've never been bothered about going through a process to
maintain my database's integrity and performance. However, over time and
across my entire client base, I will eventually reach a point where I will need
to do an "in place" upgrade, or at least limit database downtime to a 60-minute
window or less.

--
Keith C. Perry
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com

______________________________________
This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster

Nov 11 '05 #93
Network Administrator <ne******@vcsn.com> writes:
... However, it seems to me that right now this might not be
possible while the backend is changing between major releases.
Perhaps once that doesn't fluctuate as much, it might be feasible to
create this layer so that it is not too fat.


Yeah, that's been in the back of my mind also. Once we have tablespaces
and a couple of the other basic features we're still missing, it might
be a more reasonable proposition to freeze the on-disk representation.

At the very least we could quantize it a little more --- say, group
changes that affect user table representation into every third or fourth
release.

But until we have a production-quality "pg_upgrade" this is all moot.

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster

Nov 11 '05 #94
On Sun, 2003-09-14 at 23:08, Tom Lane wrote:
Network Administrator <ne******@vcsn.com> writes:
... However, it seems to me that right now this might not be
possible while the backend is changing between major releases.
Perhaps once that doesn't fluctuate as much, it might be feasible to
create this layer so that it is not too fat.


Yeah, that's been in the back of my mind also. Once we have tablespaces
and a couple of the other basic features we're still missing, it might
be a more reasonable proposition to freeze the on-disk representation.


I think that every effort should be made so that the on-disk structure
(ODS) doesn't have to change when tablespaces are implemented.
I.e., oid-based files live side-by-side with tablespaces.

At a minimum, it should be "ok, you don't *have* to do a dump/restore
to migrate to v7.7, but if you want the tablespaces that are brand
new in v7.7, you must dump the data and recreate the schema with
tablespaces before restoring".

--
-----------------------------------------------------------------
Ron Johnson, Jr. ro***********@cox.net
Jefferson, LA USA

"(Women are) like compilers. They take simple statements and
make them into big productions."
Pitr Dubovitch
---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org

Nov 11 '05 #95
Ron Johnson <ro***********@ cox.net> writes:
On Sun, 2003-09-14 at 23:08, Tom Lane wrote:
Yeah, that's been in the back of my mind also. Once we have tablespaces
and a couple of the other basic features we're still missing, it might
be a more reasonable proposition to freeze the on-disk representation.
I think that every effort should be made so that the on-disk structure
(ODS) doesn't have to change when tablespaces are implemented.


That's not going to happen --- tablespaces will be complex enough
without trying to support a backwards-compatible special case.

If we have a workable pg_upgrade by the time tablespaces happen, it
would be reasonable to expect it to be able to rearrange the user data
files of an existing installation into the new directory layout. If
we don't, the issue is moot anyway.

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 11 '05 #96
Martin Marques wrote:
On Sun 14 Sep 2003 12:20, Lincoln Yeoh wrote:
At 07:16 PM 9/13/2003 -0400, Lamar Owen wrote:
'migration' server. And I really don't want to think about dump/restore
of 100TB (if PostgreSQL actually stores the image files, which it might).

Hmm. Just curious: do people generally back up 100TB of data, or once most
reach this point do they have to hope that it's just hardware failures they'll
deal with, and not software/other issues?

Normally you would have a RAID with mirroring and CRC, so that if one of the
disks in the array fails, the system keeps working. You can even
have hot-pluggable disks, so you can replace the broken disk without
rebooting.


I did mention a SAN running Fibre Channel. I would have a portion of
the array in one building, and a portion of the array in another
building 1500 feet away. I have lots of fiber between buildings, a
portion of which I am currently using. So I can and will be doing RAID
over FC in a SAN, with spatial separation between portions of the array.
Now whether it is geographically separate _enough_, well that's a
different question. But I have thought through those issues already.

Using FC as a SAN in this way will complement my HA solution, which may
just be a hot failover server connected to the same SAN. I am still
investigating the failover mechanism; having two separate database data
stores has its advantages (software errors can render a RAID worse than
useless, since the RAID will distribute file corruption very
effectively). But I am not sure how it will work at present.

The buildings in question are somewhat unique, in that the portions
of the buildings I would be using were constructed by the US Army Corps
of Engineers. See www.pari.edu for more information.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute

---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings

Nov 11 '05 #97
Marc G. Fournier wrote:
On Sat, 13 Sep 2003, Lamar Owen wrote:
Can eRserver replicate a 7.3.x to a 7.2.x? Or 7.4.x to 7.3.x?

I thought we were talking about upgrades here?


If eRserver can be used as a funnel for upgrading, then it by definition
must be able to replicate an older version to a newer. I was just
asking to see if indeed eRserver has that capability. If so, then it
may be useful for those who can deal with a fully replicated datastore,
which might be an issue for various reasons.
--
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute

---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddres sHere" to ma*******@postg resql.org)

Nov 11 '05 #98

100TB sounds like a lot of backup media and time... Not to mention
ensuring that the backups will work with available and functioning
backup hardware.
It is a lot, but it is not a lot for something like an insurance company
or a bank. Also, 100TB is probably uncompressed, although 30TB is still
large.


Head hurts just to think about it,

Link.

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to ma*******@postgresql.org

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 11 '05 #99
Strawmen. If we provide a good upgrade capability, we would simply
have to think about upgrades before changing features like
that. The upgrade code could be cognizant of these sorts of things;
and should be, in fact.


Sure, but IMHO it would be more important to fix bugs like the parser not
correctly using indexes on bigint unless the value is quoted...
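
(For anyone who hasn't hit that one: an unquoted integer literal is
typed INT4, so the planner won't match it against an INT8 index, while
the quoted form is coerced to INT8 and the index can be used. A quick
way to see the difference, assuming psycopg2 is available and a
hypothetical big_table with a bigint id column:

# Sketch: print the plans for the unquoted vs. quoted bigint literal.
# Connection string and table are placeholders; psycopg2 assumed.
import psycopg2

conn = psycopg2.connect("dbname=test")      # placeholder DSN
cur = conn.cursor()

for query in (
    # Unquoted: the literal is INT4, so old planners see int8 = int4
    # and fall back to a sequential scan.
    "EXPLAIN SELECT * FROM big_table WHERE id = 123456",
    # Quoted: the unknown-typed literal is coerced to INT8 and the
    # index on id can be used.
    "EXPLAIN SELECT * FROM big_table WHERE id = '123456'",
):
    cur.execute(query)
    print(query)
    for (line,) in cur.fetchall():
        print("    " + line)
conn.close()
)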

I think everyone would agree that not having to use initdb would be nice,
but I think there are much more important things to focus on.

Besides, if you are upgrading PostgreSQL in a production environment, I
would assume there would be an extremely valid reason. If the reason is
big enough to do a major version upgrade, then an initdb shouldn't be all
that bad of a requirement.

J

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - jd@commandprompt.com - http://www.commandprompt.com
The most reliable support for the most reliable Open Source database.

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org

Nov 11 '05 #100

This thread has been closed and replies have been disabled. Please start a new discussion.
