Bytes | Software Development & Data Engineering Community

More psql problems... >.<

Hi all,

I have got to say that my first foray into PostgreSQL is becoming a
very maddening experience... I am sure it is my own fault for not knowing
very much, but it seems that everything I have tried so far to improve
performance has in fact made it a lot worse. Now my program dies after
roughly 300 seconds of processing directories, and updates take literally
10 times longer than inserts (which are themselves very slow).

I am sorry for whining... I've been trying to figure this out non-stop
for nearly two weeks...

Anyway, I moved my backup program to another dedicated machine (an
AMD Athlon 1.2GHz (1700+) with 512MB RAM and a Seagate Barracuda 7200.7,
2MB buffer ATA/100 IDE drive). As it stands now I have increased shmmax
to 128MB and in the 'postgresql.conf' I dropped max_connections to 10
and upped shared_buffers to 4096.
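
For reference, shared_buffers in PostgreSQL 7.4 is counted in 8 kB pages, so the settings above work out as below (a sketch of the relevant postgresql.conf lines; the shmmax path assumes a Linux 2.4/2.6 kernel):

```
# postgresql.conf (PostgreSQL 7.4)
max_connections = 10
shared_buffers = 4096     # 8 kB pages: 4096 * 8 kB = 32 MB, well under shmmax

# Kernel side, 128 MB expressed in bytes (as root):
#   echo 134217728 > /proc/sys/kernel/shmmax
```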

What is happening now is that the program does an 'ls' (system call)
to get a list of the files and directories starting at the root of a
mounted partition. These are read into an array which perl then
processes one at a time. The 'ls' value is searched for in the database
and if it doesn't exist, the values are inserted. If they do exist, they
are updated (at 1/10th the speed). If the file is in fact a directory
perl jumps into it and again reads its contents into another array
and processes them one at a time. It will do this until all files or
directories on the partition have been processed.

My previous question was performance based, now I just need to get
the darn thing working again. Like I said, after ~300 seconds perl dies.
If I disable auto-commit then it dies the first time it runs an insert.
(this is all done on the same table; 'file_dir'). If I add a 'commit'
before each select then a bunch of selects will work (a few dozen) and
then it dies anyway.

Does this sound at all like a common problem? Thanks for reading my
gripe.

Madison

PS - PostgreSQL 7.4 on Fedora Core 2; indexes on the three columns I
search, and my SELECT, UPDATE and INSERT calls are prepared.

---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to ma*******@postgresql.org)

Nov 23 '05 #1
On Tue, Jun 22, 2004 at 03:52:03PM -0400, Madison Kelly wrote:
What is happening now is that the program does an 'ls' (system call)
to get a list of the files and directories starting at the root of a
mounted partition. These are read into an array which perl then
processes one at a time. The 'ls' value is searched for in the database
and if it doesn't exist, the values are inserted. If they do exist, they
are updated (at 1/10th the speed). If the file is in fact a directory
perl jumps into it and again reads its contents into another array
and processes them one at a time. It will do this until all files or
directories on the partition have been processed.
So you read the entire filesystem again and again? Sounds like a
horrible idea to me. Have you tried using the mtimes, etc?

My previous question was performance based, now I just need to get
the darn thing working again. Like I said, after ~300 seconds perl dies.


Out of memory? If you save your whole filesystem in a Perl array you
are going to consume a lot of memory. This is, of course, not Postgres
related, so I'm not sure why you are asking here.

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
Hahaha! I was only speaking seriously!

Nov 23 '05 #2
Alvaro Herrera wrote:
On Tue, Jun 22, 2004 at 03:52:03PM -0400, Madison Kelly wrote:

What is happening now is that the program does an 'ls' (system call)
to get a list of the files and directories starting at the root of a
mounted partition. These are read into an array which perl then
processes one at a time. The 'ls' value is searched for in the database
and if it doesn't exist, the values are inserted. If they do exist, they
are updated (at 1/10th the speed). If the file is in fact a directory
perl jumps into it and again reads its contents into another array
and processes them one at a time. It will do this until all files or
directories on the partition have been processed.

So you read the entire filesystem again and again? Sounds like a
horrible idea to me. Have you tried using the mtimes, etc?


I haven't heard of 'mtimes' before, I'll google for it now.
My previous question was performance based, now I just need to get
the darn thing working again. Like I said, after ~300 seconds perl dies.

Out of memory? If you save your whole filesystem in a Perl array you
are going to consume a lot of memory. This is, of course, not Postgres
related, so I'm not sure why you are asking here.


Running just the perl portion which reads and parses the file system
works fine and fast. It isn't until I make the DB calls that everything
breaks. I know that the DB will slow things down, but the amount of
performance loss I am seeing and the flat-out breaking of the program
can't be reasonable.

Besides, PostgreSQL should be able to handle 250,000 SELECTs
followed by an UPDATE or INSERT for each on an AMD Athlon 1700+ with
512MB RAM, shouldn't it? Also, the program is dying after 5 minutes
when the calls are being committed automatically, so the work being done
shouldn't be filling any memory, should it?
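
One likely contributor to the slowness (not a diagnosis of the crash): with auto-commit, every INSERT pays the cost of its own transaction. The usual fix is to batch many statements into one transaction and commit every N rows; a sketch, with sqlite3 standing in for PostgreSQL:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.isolation_level = None            # take over transaction control
db.execute("CREATE TABLE file_dir (name TEXT)")

BATCH = 1000                         # commit every 1000 rows, not every row
db.execute("BEGIN")
for i in range(2500):
    db.execute("INSERT INTO file_dir VALUES (?)", ("file%d" % i,))
    if (i + 1) % BATCH == 0:         # close this batch, start the next
        db.execute("COMMIT")
        db.execute("BEGIN")
db.execute("COMMIT")                 # flush the final partial batch
```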

Madison


Nov 23 '05 #3
On Tue, Jun 22, 2004 at 03:52:03PM -0400, Madison Kelly wrote:
Hi all,

I have got to say that my first foray into PostgreSQL is becoming a
very maddening experience... I am sure it is my own fault for not knowing
very much, but it seems that everything I have tried so far to improve
performance has in fact made it a lot worse. Now my program dies after
roughly 300 seconds of processing directories, and updates take literally
10 times longer than inserts (which are themselves very slow).
<snip>
My previous question was performance based, now I just need to get
the darn thing working again. Like I said, after ~300 seconds perl dies.
If I disable auto-commit then it dies the first time it runs an insert.
(this is all done on the same table; 'file_dir'). If I add a 'commit'
before each select then a bunch of selects will work (a few dozen) and
then it dies anyway.
What is "dying"? Do you get an error message? From your message there
is nowhere near enough information to give you any good answers. Have
you run VACUUM [FULL|ANALYZE] recently? Are your indexes being used? If
you really want help, post your Perl code, queries, database schema,
example output, anything that might indicate what your actual problem is.
We are, unfortunately, not psychic.
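
The "are your indexes being used" check can be scripted. In PostgreSQL you would run EXPLAIN (or EXPLAIN ANALYZE) on the query and look for an Index Scan; the self-contained sketch below uses sqlite3's analogous EXPLAIN QUERY PLAN, with a hypothetical index name:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE file_dir (parent TEXT, name TEXT)")
db.execute("CREATE INDEX idx_name ON file_dir (name)")

# The last column of each plan row says whether the query walks the
# index ("USING INDEX idx_name") or falls back to a full table scan.
plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM file_dir WHERE name = ?", ("x",)
).fetchall()
uses_index = any("idx_name" in row[-1] for row in plan)
```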

We run PostgreSQL with several simultaneous users on several million
rows of data doing a few hundred thousand queries a day and the
database is not the bottleneck.

--
Martijn van Oosterhout <kl*****@svana.org> http://svana.org/kleptog/
Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
tool for doing 5% of the work and then sitting around waiting for someone
else to do the other 95% so you can sue them.



Nov 23 '05 #4
On Tue, 2004-06-22 at 13:52, Madison Kelly wrote:
Hi all,

I have got to say that my first foray into PostgreSQL is becoming a
very maddening experience... I am sure it is my own fault for not knowing
very much, but it seems that everything I have tried so far to improve
performance has in fact made it a lot worse. Now my program dies after
roughly 300 seconds of processing directories, and updates take literally
10 times longer than inserts (which are themselves very slow).
/SNIP
My previous question was performance based, now I just need to get
the darn thing working again. Like I said, after ~300 seconds perl dies.
If I disable auto-commit then it dies the first time it runs an insert.
(this is all done on the same table; 'file_dir'). If I add a 'commit'
before each select then a bunch of selects will work (a few dozen) and
then it dies anyway.


Is it exactly 300 seconds? Sounds like a timeout somewhere to me. Does
perl have one of those? Or maybe your OS / Shell combo do, or something
like it? Just guessing here.

Nov 23 '05 #5
Scott Marlowe wrote:
My previous question was performance based, now I just need to get
the darn thing working again. Like I said, after ~300 seconds perl dies.
If I disable auto-commit then it dies the first time it runs an insert.
(this is all done on the same table; 'file_dir'). If I add a 'commit'
before each select then a bunch of selects will work (a few dozen) and
then it dies anyway.

Is it exactly 300 seconds? Sounds like a timeout somewhere to me. Does
perl have one of those? Or maybe your OS / Shell combo do, or something
like it? Just guessing here.


Hi Scott,

Thanks for replying. In my frustration I missed the obvious; the
web-browser was timing out (web front end)... Darn that makes me feel
foolish! Anyway, I went for a walk to relax and calm down and now I can
get back to working on performance.

Madison


Nov 23 '05 #6
