Bytes | Software Development & Data Engineering Community

New to the list; would this be an okay question?

Hi all,

I am new to the list and I didn't want to seem rude at all so I
wanted to ask if this was okay first.

I have a program I have written in Perl which uses a PostgreSQL
database as the backend. The program works but the performance is really
bad. I have been reading as much as I can on optimizing performance but
still it isn't very reasonable. At one point I had my program able to
process 175,000 records in 16min 10sec on a Pentium3 650MHz, 448MB RAM
test machine. Since then I got a Pentium3 1GHz, 512MB system and I have
tried a lot of things to get the performance up but now it is
substantially slower and I can't seem to figure out what I am doing wrong.

Would it be appropriate to ask for help on my program on this list?
Full disclosure: The program won't be initially GPL'ed because it is for
my company but it will be released for free to home users and the source
code will be made available (similar to other split-license programs)
though once my company makes its money back I think they will fully GPL
it (I am on my boss's case about it :p ).

Thanks all!

Madison Kelly

---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match

Nov 23 '05 #1
Standard questions:
- Have you VACUUMed?
- Have you VACUUM ANALYZEd?
- Have you done EXPLAIN ANALYZE on the complex queries?
- Have you put INDEXes on the appropriate columns?

You need to give more details if you want more detailed answers.
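
In SQL terms, and assuming table and column names like the ones that appear later in this thread ('file_dir', 'file_src_uuid', and so on), the checklist above looks roughly like this:

```sql
-- Reclaim dead rows and refresh the planner's statistics:
VACUUM ANALYZE file_dir;

-- Run a suspect query and get the real plan plus timings:
EXPLAIN ANALYZE SELECT null FROM file_dir
 WHERE file_src_uuid = 'some-uuid' AND file_parent_dir = '/' AND file_name = 'foo';

-- Index the columns the WHERE clause filters on:
CREATE INDEX file_dir_idx ON file_dir (file_src_uuid, file_parent_dir, file_name);
```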

On Mon, Jun 21, 2004 at 09:38:14AM -0400, Madison Kelly wrote:
--
Martijn van Oosterhout <kl*****@svana.org> http://svana.org/kleptog/
Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
tool for doing 5% of the work and then sitting around waiting for someone
else to do the other 95% so you can sue them.



Nov 23 '05 #2
Madison Kelly wrote:
Hi all,

I am new to the list and I didn't want to seem rude at all so I wanted
to ask if this was okay first.
No problem. Reading your message below, you might want to try the
performance list, but general is a good place to start.
I have a program I have written in perl which uses a postgresSQL
database as the backend. The program works but the performance is really
bad. I have been reading as much as I can on optimizing performance but
still it isn't very reasonable. At one point I had my program able to
process 175,000 records in 16min 10sec on a Pentium3 650MHz, 448MB RAM
test machine. Since then I got a Pentium3 1GHz, 512MB system and I have
tried a lot of things to get the performance up but now it is
substantially slower and I can't seem to figure out what I am doing wrong.
A few places to start:
1. VACUUM FULL
This will make sure any unused space is reclaimed
2. ANALYZE
This will recalculate stats for the tables
3. Basic performance tuning:
http://www.varlena.com/varlena/Gener...bits/index.php
There's also a good guide to the postgresql.conf file on varlena.com
Would it be appropriate to ask for help on my program on this list?
Full disclosure: The program won't be initially GPL'ed because it is for
my company but it will be released for free to home users and the source
code will be made available (similar to other split-license programs)
though once my company makes it's money back I think they will fully GPL
it (I am on my boss's case about it :p ).


No problem - what you licence your software under is your concern. Once
you've taken the basic steps described above, try to pick out a specific
query that you think is too slow and provide:

1. PostgreSQL version
2. Basic hardware info (as you have)
3. Sizes of tables.
4. Output of EXPLAIN ANALYZE <query here>

The EXPLAIN ANALYZE runs the query and shows how much work PG thought it
would be and how much it actually turned out to be.
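
For illustration only (this is not output from Madison's machine), a typical EXPLAIN ANALYZE result for an indexed lookup looks something like:

```
Index Scan using file_dir_idx on file_dir  (cost=0.00..6.01 rows=1 width=0)
                                           (actual time=0.080..0.090 rows=1 loops=1)
Total runtime: 0.350 ms
```

The cost/rows figures are the planner's estimates; the "actual" figures are measured. A big gap between estimated and actual row counts usually means stale statistics (run ANALYZE), and a Seq Scan where you expected an Index Scan means the index is missing, or unusable because the datatypes don't match.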

HTH
--
Richard Huxton
Archonet Ltd


Nov 23 '05 #4
Sorry; I didn't include details at first because I wanted to make sure
that was an appropriate request for this list.

I have the program run 'VACUUM ANALYZE' after every major update/insert
job and I have in fact indexed the three columns that I search through
when I need to decide to update (if the record exists) or insert (if it
does not). I have read about the EXPLAIN option/tool but I haven't been
able to get my head around how to properly use it yet.

Here is what I am trying to do:

The program is a Linux backup program that uses a web front-end as its
interface (so that a client can access it from any system, including MS
workstations). The program is also designed to allow the user to search
for a given file, or for files by spec (file size, date modified, etc.),
on media that is offline. The program is built around externally
connected USB2 and FireWire drives, so it is all partition-based.

In order to make the web front-end stateful and to allow for the ability
to search I needed to keep in the database detailed information on every
file and directory on a given partition. Some of the information also
needs to be maintained so I can't just clear and rescan a partition. For
example, I need to keep track of what directories and files a user has
selected or not selected to be backed up in a given partition. This
means that whenever I need to update the contents of a partition I need
to run 'ls' starting at the mount point for the partition and scanning
down through all subdirectories.

As each file is scanned I check the database to see if the file name I
am looking at already exists in the database. I do this by searching for
the file_name, file_parent_dir (parent directory) and file_src_uuid (the
UUID [serial number] of the partition the file is on). If there is a
match I run an "UPDATE" where the backup state is not touched. If the
file is new then the file is added to the database along with all of
its particular information such as owning user, group, permissions,
file size and so on.
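
The check-then-write logic described above comes down to two statements keyed on the same column triple. A simplified sketch (the full statements appear later in the thread):

```sql
-- Is this exact file already known?
SELECT null FROM file_dir
 WHERE file_src_uuid = ? AND file_parent_dir = ? AND file_name = ?;

-- Row found: UPDATE everything except the user's backup-selection flags.
-- No row:    INSERT a complete new record, flags included.
```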

Given that some file systems have 250,000 files and directories I need
to make sure that the database calls are as optimized as I can make
them. I have verified that the lag is in the database by commenting out
the actual database calls and letting the program traverse the file
system. In that case, a job that takes nearly 200 seconds with the
database calls in place finishes in roughly 2 seconds without them.

If this is an okay request I would be happy to post the schema I am
using and the Perl code I am using to make the DB calls.

Thanks!!

Madison Kelly

Martijn van Oosterhout wrote:
Standard questions:
- Have you VACUUMed?
- Have you VACUUM ANALYZEd?
- Have you done EXPLAIN ANALYZE on the complex queries?
- Have you put INDEXes on the appropriate columns.

You need to give more details if you want more detailed answers.


Nov 23 '05 #5
Richard Huxton wrote:
No problem - what you licence your software under is your concern. Once
you've taken the basic steps described above, try to pick out a specific
query that you think is too slow and provide:

1. PostgreSQL version
2. Basic hardware info (as you have)
3. Sizes of tables.
4. Output of EXPLAIN ANALYZE <query here>

The EXPLAIN ANALYZE runs the query and shows how much work PG thought
it would be and how much it actually turned out to be.

Thank you very much!! I am using PostgreSQL 7.4 on a stock install of
Fedora Core 2 on my IBM ThinkPad A22m (P3 1GHz, 512MB RAM, not the
fastest HDD). The drive carrier I am using is connected via USB2 and
holds a few different hard drives, the fastest being a couple of
Barracuda 7200.7 drives (2MB cache, 7,200rpm). I described the program
in my reply to Martijn, so here is some of the code (code not related to
the database snipped; let me know if posting it would help):

=-[ Calling the database ]-=
# Open the connection to the database
my $DB = DBI->connect("DBI:Pg:dbname=$db_name","$user") || die("Connect error (Is PostgreSQL running?): $DBI::errstr");

# Prepare the statements before using them for speed:
$select_sth = $DB->prepare("SELECT null FROM file_dir WHERE file_src_uuid=? AND file_parent_dir=? AND file_name=?") || die "$DBI::errstr";
$select_up = $DB->prepare("UPDATE file_dir SET file_perm=?, file_own_user=?, file_own_grp=?, file_size=?, file_mod_date=?, file_mod_time=?, file_mod_time_zone=?, file_exist=? WHERE file_src_uuid=? AND file_parent_dir=? AND file_name=?") || die "$DBI::errstr";
$select_in = $DB->prepare("INSERT INTO file_dir ( file_src_uuid, file_name, file_dir, file_parent_dir, file_perm, file_own_user, file_own_grp, file_size, file_mod_date, file_mod_time, file_mod_time_zone, file_backup, file_restore, file_display, file_exist ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )") || die "$DBI::errstr";

# Set the 'file_exist' flag to 'false'; existing files are reset to 'true' as they are found.
$DB->do("UPDATE file_dir SET file_exist='f' WHERE file_src_uuid='$file_src_uuid'") || die "$DBI::errstr";

# Start scanning the drive
$num = $select_sth->execute($file_src_uuid,$relative_dir,$file_name) || die "$DBI::errstr";
if ( $num > 0 )
{
	$select_up->execute($file_perm,$file_own_user,$file_own_grp,$file_size,$file_mod_date,$file_mod_time,$file_mod_time_zone,$file_exist,$file_src_uuid,$file_parent_dir,$file_name) || die "$DBI::errstr";
}
else
{
	$select_in->execute($file_src_uuid,$file_name,$file_dir,$file_parent_dir,$file_perm,$file_own_user,$file_own_grp,$file_size,$file_mod_date,$file_mod_time,$file_mod_time_zone,$file_backup,$file_restore,$file_display,$file_exist) || die "$DBI::errstr";
}

# We need to grab the existing file settings for the special file '/.'
$DBreq = $DB->prepare("SELECT file_backup, file_restore, file_display FROM file_dir WHERE file_parent_dir='/' AND file_name='.' AND file_src_uuid='$file_src_uuid'") || die $DBI::errstr;
$DBreq->execute();
@file_backup  = $DBreq->fetchrow_array();
$file_backup  = $file_backup[0];
$file_restore = $file_backup[1];
$file_display = $file_backup[2];

# Jump into the re-entrant subroutine to scan directories and sub-dirs
&list_files($real_dir, $exclude_list_num, $relative_dir, $file_backup, $file_restore, $file_display);

# Inside the subroutine

# Does the directory/file/symlink already exist? (there are three of these for each file type)
$num = $select_sth->execute($file_src_uuid,$relative_dir,$file_name) || die "$DBI::errstr";
if ( $num > 0 )
{
	$select_up->execute($file_perm,$file_own_user,$file_own_grp,$file_size,$file_mod_date,$file_mod_time,$file_mod_time_zone,$file_exist,$file_src_uuid,$file_parent_dir,$file_name) || die "$DBI::errstr";
}
else
{
	# The file did not exist so we use the passed parent settings for the
	# 'file_backup' flag and leave the 'file_display' flag set to 'f'
	$select_in->execute($file_src_uuid,$file_name,$file_dir,$file_parent_dir,$file_perm,$file_own_user,$file_own_grp,$file_size,$file_mod_date,$file_mod_time,$file_mod_time_zone,$file_backup,$file_restore,$file_display,$file_exist) || die "$DBI::errstr";
}

# If this was a file I loop and process the next file in the directory;
# if it was a directory I re-enter the subroutine to process its
# contents, picking up where I left off when I fall back out.

# Returning from the final subroutine and finishing up

$DB->do("VACUUM ANALYZE");

=-[ finished DB related source code ]-=

Here is the schema for the 'file_dir' table which I hit repeatedly here:

=-[ file_dir table and index schemas ]-=

CREATE TABLE file_dir (                         -- Used to store info on every file on source partitions
	file_id            serial unique,        -- make this 'bigserial' if there may be more than 2 billion files in the database
	file_src_uuid      varchar(40) not null, -- the UUID of the source partition hosting the original file
	file_org_uuid      varchar(40),          -- the UUID that the file came from (when the file was moved by TLE-BU)
	file_name          varchar(255) not null, -- Name of the file or directory
	file_dir           bool not null,        -- t = is directory, f = file
	file_parent_dir    varchar(255) not null, -- if directory '/foo/bar', parent is '/foo'; if file '/foo/bar/file', parent is '/foo/bar'. The mount directory is treated as '/' so any directories below it will be ignored for this record.
	file_perm          varchar(10) not null, -- file or directory permissions
	file_own_user      varchar(255) not null, -- The file's owning user (by name, not UID!!)
	file_own_grp       varchar(255) not null, -- The file's owning group (by name, not GID!!)
	file_size          bigint not null,      -- File size in bytes
	file_mod_date      varchar(12) not null, -- File's last edited date
	file_mod_time      varchar(20) not null, -- File's last edited time
	file_mod_time_zone varchar(6) not null,  -- File's last edited time zone
	file_backup        boolean not null default 'f', -- 't' = Include in backup jobs, 'f' = Do not include in backup jobs
	file_restore       boolean not null default 'f', -- 't' = Include in restore jobs, 'f' = Do not include in restore jobs
	file_display       boolean not null default 'f', -- 't' = display, 'f' = hide
	file_exist         boolean default 't'   -- Used to catch files that have been deleted since the last scan. Before a rescan, all files in a given src_uuid are set to 'f' (deleted); as each file is found or updated it is reset to 't' (exists), and anything still 'f' at the end of the scan has its record removed.
);

-- CREATE INDEX file_dir_idx ON file_dir (file_src_uuid, file_name, file_parent_dir);

=-[ Finish file_dir table and index schemas ]-=
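
One detail worth checking in the schema above: the CREATE INDEX line is commented out. If that matches the live database, every SELECT and UPDATE on the (file_src_uuid, file_parent_dir, file_name) triple is a sequential scan over the whole table. A sketch of how to verify (the literal values below are placeholders):

```sql
-- Create the index if it does not exist yet:
CREATE INDEX file_dir_idx ON file_dir (file_src_uuid, file_name, file_parent_dir);

-- Confirm the planner actually uses it for the existence check:
EXPLAIN ANALYZE SELECT null FROM file_dir
 WHERE file_src_uuid = 'some-uuid' AND file_parent_dir = '/' AND file_name = '.';
```

All three predicates are equality tests, so column order inside the index matters less here than the datatypes matching exactly (see TIP 9 above).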

Thanks so much!!

Madison

Nov 23 '05 #6
Madison Kelly wrote:
Thank you very much!! I am using Psql 7.4 on a stock install of
Fedora Core 2 on my IBM thinkpad a22m (P3 1GHz, 512MB RAM, not the
fastest HDD). The drive carrier I am using is connected via USB2 and
uses a few different hard drives with the fastest being a couple of
Barracuda 7200.7 drives (2MB cache, 7,200rpm). I described the program
in my reply to Martijn so here is some of the code (code not related to
psql snipped, let me know if posting it would help - sorry for the
wrapping...):


I'm not clear if the database is on the local disk or attached to the
USB2. Not sure it's important, since neither will be that fast.

If I understand, you scan thousands or millions of files for backup
purposes and then issue a select + update/insert for each.

Once a partition is scanned, a flag is cleared on all rows.

Once all selected files have been dealt with, a VACUUM ANALYZE is issued.

Some things to look at:
1. How many files are you handling per second? Are the disks involved in
the backup as well as the database?
2. What does the output of "vmstat 10" show when the system is running?
Is your I/O saturated? CPU?
3. Is your main index (file_src_uuid, file_name, file_parent_dir) being
used? Your best bet is to select from "pg_stat_user_indexes" before and
after.
4. If you are updating several hundred thousand rows then you probably
don't have enough vacuum memory set aside - try a VACUUM FULL after
each set of updates.
5. You might want to batch together queries into transactions of a few
hundred or even a few thousand updates.
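
Point 5 in DBI terms, as a sketch only: the database name, the @files list, and the upsert_file() helper are stand-ins for the real code posted earlier, and this needs a live database to actually run:

```perl
#!/usr/bin/perl
# Sketch: commit in batches so PostgreSQL syncs the WAL once per
# thousand rows instead of once per row.
use strict;
use warnings;
use DBI;

# AutoCommit => 0 makes DBI open a transaction implicitly.
my $DB = DBI->connect("DBI:Pg:dbname=mydb", "madison", "",
                      { AutoCommit => 0, RaiseError => 1 });

my $batch_size = 1000;   # a few hundred to a few thousand, per the advice above
my $count      = 0;
my @files      = ();     # hypothetical: filled by the filesystem scanner

foreach my $file (@files) {
    upsert_file($DB, $file);    # the existing SELECT-then-UPDATE/INSERT logic
    $DB->commit() if ++$count % $batch_size == 0;
}
$DB->commit();                  # flush the final partial batch
$DB->disconnect();
```

With AutoCommit off, nothing hits the fsync path until commit() is called, which is where the batching win comes from.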
--
Richard Huxton
Archonet Ltd


Nov 23 '05 #7
After a long battle with technology, de*@archonet.com (Richard Huxton), an earthling, wrote:
5. You might want to batch together queries into transactions of a few
hundred or even few thousand updates.


When this particular application got discussed on local LUG mailing
list, this emerged as being one of the factors most likely to be a Big
Deal.
--
If this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me
http://www3.sympatico.ca/cbbrowne/lsf.html
"A hack is a terrible thing to waste, please give to the
implementation of your choice..." -- GJC
Nov 23 '05 #8
Christopher Browne wrote:
After a long battle with technology, de*@archonet.co m (Richard Huxton), an earthling, wrote:
5. You might want to batch together queries into transactions of a few
hundred or even few thousand updates.

When this particular application got discussed on local LUG mailing
list, this emerged as being one of the factors most likely to be a Big
Deal.


Yep, except... Madison said a laptop was involved, so I'm guessing it's
an IDE drive lying about sync-ing. If fsync is effectively off that
shouldn't have such a huge effect should it?

--
Richard Huxton
Archonet Ltd


Nov 23 '05 #9
On Mon, Jun 21, 2004 at 08:29:54PM +0100, Richard Huxton wrote:
Christopher Browne wrote:
When this particular application got discussed on local LUG mailing
list, this emerged as being one of the factors most likely to be a Big
Deal.
Yep, except... Madison said a laptop was involved, so I'm guessing it's
an IDE drive lying about sync-ing. If fsync is effectively off that
shouldn't have such a huge effect should it?


The IDE drive lying about syncing is different from fsync being turned
off. What the drive thinks doesn't matter until after Postgres has
written the WAL, closed the transaction and written the pages out. The
fsync will still cause Linux to wait for all the data to be written to
the disk, which is still a finite amount of time, the disk buffer is
only a few MB. Turning fsync off means Linux will never wait, just
buffer in system memory. Similarly, putting it all in one transaction
means that within the transaction there is no waiting, only at
transaction commit.

With fsync on, being inside or outside a transaction can make a really big difference.
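
The difference is visible at the SQL level. Each statement outside an explicit transaction is its own transaction and pays its own WAL flush; inside BEGIN/COMMIT the whole batch pays one (illustrative, using the thread's table):

```sql
-- Autocommit: one WAL flush per statement.
UPDATE file_dir SET file_exist = 't' WHERE file_id = 1;
UPDATE file_dir SET file_exist = 't' WHERE file_id = 2;

-- Explicit transaction: one WAL flush, at COMMIT, for the whole batch.
BEGIN;
UPDATE file_dir SET file_exist = 't' WHERE file_id = 1;
UPDATE file_dir SET file_exist = 't' WHERE file_id = 2;
COMMIT;
```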


Nov 23 '05 #10
