
Is this a good idea?

Hi!

I've been programming ASP for 5 years and am now learning PHP.
In ASP, you can use the GetRows function, which returns the Recordset
as a two-dimensional array.
It's actually the recommended way to access a DB in ASP, as it lets
you disconnect from the DB earlier.
It's also handy, as you can directly access any data in the array
without looping.

As far as I know, there's no such function in PHP, but I can make one.
My question is whether it's a good idea in PHP.

pseudo-code:

$data = get_data("select * from table1");
$var = $data[3][2]; //value at 4th row, 3rd column

This way, I can wrap db connection, data retrieval, and error handling
with one function (or maybe a class).
Is the idea workable?

TIA.
Sam

Jan 17 '06 #1
On 17 Jan 2006 09:56:09 -0800, sa********@gmail.com wrote:
I've been programming ASP for 5 years and am now learning PHP.
In ASP, you can use the GetRows function, which returns the Recordset
as a two-dimensional array.
It's actually the recommended way to access a DB in ASP, as it lets
you disconnect from the DB earlier.
It's also handy, as you can directly access any data in the array
without looping.

As far as I know, there's no such function in PHP, but I can make one.
My question is whether it's a good idea in PHP.

pseudo-code:

$data = get_data("select * from table1");
$var = $data[3][2]; //value at 4th row, 3rd column

This way, I can wrap db connection, data retrieval, and error handling
with one function (or maybe a class).
Is the idea workable?


Rather than re-invent the wheel, look at:

http://adodb.sourceforge.net/
http://phplens.com/adodb/reference.f....getarray.html
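
For example, with ADOdb the pseudo-code above becomes roughly this (a
sketch - the include path and connection details are placeholders):

include('adodb/adodb.inc.php'); // path depends on where you unpack ADOdb
$db = NewADOConnection('mysql');
$db->Connect('localhost', 'user', 'pass', 'db') or die($db->ErrorMsg());
$data = $db->GetArray("select * from table1"); // two-dimensional array
$var = $data[3][2]; // value at 4th row, 3rd column
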
--
Andy Hassall :: an**@andyh.co.uk :: http://www.andyh.co.uk
http://www.andyhsoftware.co.uk/space :: disk and FTP usage analysis tool
Jan 17 '06 #2
sa********@gmail.com wrote:
I've been programming ASP for 5 years and am now learning PHP.
In ASP, you can use the GetRows function, which returns the Recordset
as a two-dimensional array.
It's actually the recommended way to access a DB in ASP, as it lets
you disconnect from the DB earlier.
It's also handy, as you can directly access any data in the array
without looping.

As far as I know, there's no such function in PHP, but I can make one.
My question is whether it's a good idea in PHP.

pseudo-code:

$data = get_data("select * from table1");
$var = $data[3][2]; //value at 4th row, 3rd column


You can easily access the returned resource as a kind of array with

$result = mysql_query("select * from table1");
$var = mysql_result($result, 3, 2);

If you choose to put all returned data into a "proper" array, don't
forget to

mysql_free_result($result);
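
And if you do want the whole result in a "proper" array first, a minimal
sketch (untested, building on the $result above):

$data = array();
while ($row = mysql_fetch_array($result)) { // numeric + assoc indices
    $data[] = $row;
}
mysql_free_result($result); // the MySQL resource is released here

$var = $data[3][2]; // same lookup as before, after the resource is freed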

--
If you're posting through Google read <http://cfaj.freeshell.org/google>
Jan 17 '06 #3

Pedro Graca wrote:
sa********@gmail.com wrote:
I've been programming ASP for 5 years and am now learning PHP.
In ASP, you can use the GetRows function, which returns the Recordset
as a two-dimensional array.
It's actually the recommended way to access a DB in ASP, as it lets
you disconnect from the DB earlier.
It's also handy, as you can directly access any data in the array
without looping.

As far as I know, there's no such function in PHP, but I can make one.
My question is whether it's a good idea in PHP.

pseudo-code:

$data = get_data("select * from table1");
$var = $data[3][2]; //value at 4th row, 3rd column


You can easily access the returned resource as a kind of array with

$result = mysql_query("select * from table1");
$var = mysql_result($result, 3, 2);

If you choose to put all returned data into a "proper" array, don't
forget to

mysql_free_result($result);


Thanks.
This is the answer I needed.
The reason I want to dump the recordset to an array is to avoid
cleaning up (disconnection) so that I can forget about db-related jobs.

function get_data($sql){
    // connect to dbms and select db (credentials are placeholders)
    $conn = mysql_connect('localhost', 'user', 'pass') or die(mysql_error());
    mysql_select_db('mydb', $conn);
    // select data and dump it to an array
    $r = mysql_query($sql, $conn) or die(mysql_error());
    $data = array();
    while ($row = mysql_fetch_array($r)) $data[] = $row;
    // close db (free db resource)
    mysql_free_result($r);
    return $data;
}

$data = get_data("select * from table1");

I'll make the function in a library file, and just call the function
and forget about connecting/disconnecting/freeing db resources.
One benefit is that I won't make the mistake of forgetting to clean up.
The array will be garbage-collected automatically, right?

To summarize, my goals are:
1. Put db-related routines in one function (or class) to avoid
repeating the same code.
2. Avoid mistakes like forgetting mysql_free_result.
3. The function can be improved later, and the change takes effect
everywhere.
4. Keep db-accessing code simple. (Just throw an SQL statement at it
and get the result.)

What do you think?

Sam

Jan 17 '06 #4

Andy Hassall wrote:
On 17 Jan 2006 09:56:09 -0800, sa********@gmail.com wrote:
I've been programming ASP for 5 years and am now learning PHP.
In ASP, you can use the GetRows function, which returns the Recordset
as a two-dimensional array.
It's actually the recommended way to access a DB in ASP, as it lets
you disconnect from the DB earlier.
It's also handy, as you can directly access any data in the array
without looping.

As far as I know, there's no such function in PHP, but I can make one.
My question is whether it's a good idea in PHP.

pseudo-code:

$data = get_data("select * from table1");
$var = $data[3][2]; //value at 4th row, 3rd column

This way, I can wrap db connection, data retrieval, and error handling
with one function (or maybe a class).
Is the idea workable?


Rather than re-invent the wheel, look at:

http://adodb.sourceforge.net/
http://phplens.com/adodb/reference.f....getarray.html


Thanks for the answer.
My intention is not to use ADOdb but to make DB-accessing code simple and
avoid repeated code (connecting/freeing/disconnecting the db).
Is there a best practice for db access in PHP?

Regards,
Sam

Jan 17 '06 #5

sa********@gmail.com wrote:
Hi!

I've been programming ASP for 5 years and am now learning PHP.
In ASP, you can use the GetRows function, which returns the Recordset
as a two-dimensional array.
It's actually the recommended way to access a DB in ASP, as it lets
you disconnect from the DB earlier.
It's also handy, as you can directly access any data in the array
without looping.

As far as I know, there's no such function in PHP, but I can make one.
My question is whether it's a good idea in PHP.

pseudo-code:

$data = get_data("select * from table1");
$var = $data[3][2]; //value at 4th row, 3rd column

This way, I can wrap db connection, data retrieval, and error handling
with one function (or maybe a class).
Is the idea workable?


I found some code that suits my intention.

<?php

class mysql_array
{
    private $r_conn;

    public function __construct($s_host, $s_user, $s_pass, $s_db)
    {
        $this->r_conn = mysql_connect($s_host, $s_user, $s_pass)
            or die(mysql_error());
        mysql_select_db($s_db);
    }

    private function array_make($s_sql, $i_type)
    {
        $r_rs = mysql_query($s_sql, $this->r_conn) or die(mysql_error());
        $a_rs = array(); // start empty so an empty result set still
                         // returns an array, not an undefined variable
        while ($a_col = mysql_fetch_array($r_rs, $i_type)) {
            $a_rs[] = $a_col;
        }
        mysql_free_result($r_rs);
        return $a_rs;
    }

    public function array_logic($s_sql)
    {
        return $this->array_make($s_sql, MYSQL_NUM);
    }

    public function array_assoc($s_sql)
    {
        return $this->array_make($s_sql, MYSQL_ASSOC);
    }

    public function array_both($s_sql)
    {
        return $this->array_make($s_sql, MYSQL_BOTH);
    }
}

$o_mysql = new mysql_array('localhost', 'user', 'pass', 'db');
$s_sql = "SHOW TABLES";
$a_rs = $o_mysql->array_assoc($s_sql);

echo '<pre>';
print_r($a_rs);

?>

It's from http://us2.php.net/manual/en/ref.mysql.php .
Sam

Jan 17 '06 #6
> I found some code that suits my intention.

<snip: mysql_array class>

It's from http://us2.php.net/manual/en/ref.mysql.php .


I forgot to ask "Do you think the above code is ok in terms of
performance and memory usage?"

Sam

Jan 17 '06 #7
>To summarize, my goals are:
1. Put db-related routines in one function (or class) to avoid
repeating the same code.
2. Avoid mistakes like forgetting mysql_free_result.
3. The function can be improved later, and the change takes effect
everywhere.
4. Keep db-accessing code simple. (Just throw an SQL statement at it
and get the result.)


Use my code if you like. Amongst other things it includes a database
object and helper classes.

<http://www.eminent.demon.co.uk/phplibrary.htm>

--
PETER FOX Not the same since the poster business went to the wall
pe******@eminent.demon.co.uk.not.this.bit.no.html
2 Tees Close, Witham, Essex.
Gravity beer in Essex <http://www.eminent.demon.co.uk>
Jan 17 '06 #8

Peter Fox wrote:
To summarize, my goals are:
1. Put db-related routines in one function (or class) to avoid
repeating the same code.
2. Avoid mistakes like forgetting mysql_free_result.
3. The function can be improved later, and the change takes effect
everywhere.
4. Keep db-accessing code simple. (Just throw an SQL statement at it
and get the result.)


Use my code if you like. Amongst other things a database object and
helper classes.

<http://www.eminent.demon.co.uk/phplibrary.htm>


Thank you so much.
I'll try that.

Sam

Jan 17 '06 #9
sa********@gmail.com wrote:
I found some code that suits my intention.

<?php

class mysql_array
{
    private $r_conn;

    public function __construct($s_host, $s_user, $s_pass, $s_db)
    {
        $this->r_conn = mysql_connect($s_host, $s_user, $s_pass)
            or die(mysql_error());
        mysql_select_db($s_db);
    }

    private function array_make($s_sql, $i_type)
    {
        $r_rs = mysql_query($s_sql, $this->r_conn) or die(mysql_error());

<snip>

You might want to replace the mysql_query call with
mysql_unbuffered_query ... never tested it though, not sure if there is
any performance gain ...

--
If you're posting through Google read <http://cfaj.freeshell.org/google>
Jan 17 '06 #10
sa********@gmail.com wrote:
As far as I know, there's no such function in PHP, but I can make one.
My question is whether it's a good idea in PHP.

pseudo-code:

$data = get_data("select * from table1");
$var = $data[3][2]; //value at 4th row, 3rd column

This way, I can wrap db connection, data retrieval, and error handling
with one function (or maybe a class).
Is the idea workable?


It's a pretty normal practice. I do it all the time. I typically have a
number of functions for retrieving a single row, multiple rows, a
single column, and a single value.

I believe that's also something that the MySQL manual recommends, to
alleviate lock issues.
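
For illustration, such helpers might look like this (just a sketch using
the mysql_* functions; the names and the error handling are made up):

// single value: first column of the first row
function db_value($sql) {
    $r = mysql_query($sql) or die(mysql_error());
    $v = mysql_num_rows($r) ? mysql_result($r, 0, 0) : null;
    mysql_free_result($r);
    return $v;
}

// single row as an associative array
function db_row($sql) {
    $r = mysql_query($sql) or die(mysql_error());
    $row = mysql_fetch_assoc($r);
    mysql_free_result($r);
    return $row;
}

// single column: first field of every row
function db_column($sql) {
    $r = mysql_query($sql) or die(mysql_error());
    $col = array();
    while ($row = mysql_fetch_row($r)) $col[] = $row[0];
    mysql_free_result($r);
    return $col;
}

// multiple rows
function db_rows($sql) {
    $r = mysql_query($sql) or die(mysql_error());
    $rows = array();
    while ($row = mysql_fetch_assoc($r)) $rows[] = $row;
    mysql_free_result($r);
    return $rows;
}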

Jan 17 '06 #11
On 17 Jan 2006 13:56:02 -0800, "Chung Leong" <ch***********@hotmail.com> wrote:
sa********@gmail.com wrote:
As far as I know, there's no such function in PHP, but I can make one.
My question is whether it's a good idea in PHP.

pseudo-code:

$data = get_data("select * from table1");
$var = $data[3][2]; //value at 4th row, 3rd column

This way, I can wrap db connection, data retrieval, and error handling
with one function (or maybe a class).
Is the idea workable?


It's a pretty normal practice. I do it all the time. I typically have a
number of functions, for retrieving a single row, multiple rows, a
single column, and a single value.

I believe that's also something that the MySQL manual recommends, to
alleviate lock issues.


Isn't the default mode of operation for MySQL to send the entire result set
over to the client, then release locks, and only then actually make the results
available to the client application - so you can be as slow as you like reading
them from the client buffer, it doesn't matter?

If you're using unbuffered queries, _then_ it would matter how quickly you
consume the data, but you don't tend to see as many examples of that.
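
To illustrate the difference (a sketch; the table name is made up):

// Buffered (the default): mysql_query() transfers the whole result set
// to the client first, so the server can release its resources before
// you even start fetching.
$r = mysql_query("SELECT * FROM big_table");

// Unbuffered: rows stream from the server as you fetch them, so consume
// them quickly - server resources are held until the last row is read,
// and mysql_num_rows() is not available.
$r = mysql_unbuffered_query("SELECT * FROM big_table");
while ($row = mysql_fetch_assoc($r)) {
    // process $row promptly
}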

--
Andy Hassall :: an**@andyh.co.uk :: http://www.andyh.co.uk
http://www.andyhsoftware.co.uk/space :: disk and FTP usage analysis tool
Jan 17 '06 #12
On 17 Jan 2006 11:34:37 -0800, sa********@gmail.com wrote:
Rather than re-invent the wheel, look at:

http://adodb.sourceforge.net/
http://phplens.com/adodb/reference.f....getarray.html


Thanks for the answer.
My intention is not to use ADO db but make DB-accessing code simple and
avoid repeated codes (connecting/freeing/disconnecting db).
Is there a best practice of db-accessing in PHP?


You do realise that other than the name and a deliberate similarity in
function names, ADOdb has nothing to do with Microsoft ADO - it's just a thin
PHP library providing the sort of access methods you're talking about on top of
various PHP native database access functions?

IMHO, ADOdb _is_ the best practice of accessing databases in PHP.

--
Andy Hassall :: an**@andyh.co.uk :: http://www.andyh.co.uk
http://www.andyhsoftware.co.uk/space :: disk and FTP usage analysis tool
Jan 17 '06 #13

sa********@gmail.com wrote:
In ASP, you can use the GetRows function, which returns the Recordset
as a two-dimensional array.
It's actually the recommended way to access a DB in ASP, as it lets
you disconnect from the DB earlier. [...] Is the idea workable?


Yes, it is possible.

No, it is not a good idea.

If you return all the data from a query into a 2-dimensional array, you
will be putting excessive overhead into your code: more variables, more
used memory, more wasted CPU cycles. If you have very large tables, you
will run out of memory, everything will start failing, and you won't know
the reason.

Carefully plan your SQL query so you don't get more results than expected.
Return the DB results row by row. Parse them one by one. Use persistent DB
connections. Don't ever work on the entire results of a query: it's a
complete waste of time, and your code will become more complex and less
maintainable. Know what you're doing, experiment yourself, don't be blinded
by the "recommended ways in ASP".

Also, it's the job of the DB engine to keep track of the query results, not
PHP's. And modern, stable DB engines do support multiple queries at once.
You just have to be a bit careful about concurrent programming.

--
----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net

http://acm.asoc.fi.upm.es/~mr/ ; http://acm.asoc.fi.upm.es/~ivan/
MSN:i_*************************@hotmail.com
Jabber:iv*********@jabber.org ; iv*********@kdetalk.net
Jan 18 '06 #14
sa********@gmail.com wrote:
Hi!

I've been programming ASP for 5 years and am now learning PHP.
In ASP, you can use the GetRows function, which returns the Recordset
as a two-dimensional array.
It's actually the recommended way to access a DB in ASP, as it lets
you disconnect from the DB earlier.
It's also handy, as you can directly access any data in the array
without looping.

<snip>

As many people have pointed out, *never* dump the whole table into an
array. Fetch each record and process it immediately. If you have a
*valid* reason to buffer, buffer the data into a very small array of
known size. If you're using MySQL, use LIMIT where possible.
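
For example (a sketch; the columns and the window size are arbitrary):

// fetch a bounded window of rows instead of the whole table
$r = mysql_query("SELECT id, name FROM table1 ORDER BY id LIMIT 0, 100");
while ($row = mysql_fetch_assoc($r)) {
    // process immediately; nothing accumulates in PHP
}
mysql_free_result($r);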

--
<?php echo 'Just another PHP saint'; ?>
Email: rrjanbiah-at-Y!com Blog: http://rajeshanbiah.blogspot.com/

Jan 18 '06 #15
Following on from Iván Sánchez Ortega's message. . .
sa********@gmail.com wrote:
In ASP, you can use the GetRows function, which returns the Recordset
as a two-dimensional array.
It's actually the recommended way to access a DB in ASP, as it lets
you disconnect from the DB earlier. [...]
Is the idea workable?


Yes, it is possible.

No, it is not a good idea.

Err... Sometimes it is a cracking good idea:

- Small result sets only.
- Repetitive reference.
- Read-only... or lots of writes (e.g. increments) during a process
(followed by a DB write, of course).
- A standard data structure to pass to other functions.
- An encapsulated result can sometimes be a lot easier to work with
than stepping through a database.
- Also where you are extracting variable information from the DB and want
to trap errors and anomalies in the data/query at the database level, and
so be sure of passing a clean dataset to the caller.

But as you say, if it is logical to plod through the result set record
by record then that's the low overhead method of choice - Probably 95%
of the time.
If you return all the data from a query into a 2-dimensional array, you
will be putting excessive overhead into your code: more variables, more
used memory, more wasted CPU cycles. If you have very large tables, you
will run out of memory, everything will start failing, and you won't know
the reason.

Carefully plan your SQL query so you don't get more results than expected.
Return the DB results row by row. Parse them one by one. Use persistent DB
connections. Don't ever work on the entire results of a query: it's a
complete waste of time, and your code will become more complex and less
maintainable. Know what you're doing, experiment yourself, don't be blinded
by the "recommended ways in ASP".

Also, it's the job of the DB engine to keep track of the query results, not
PHP's. And modern, stable DB engines do support multiple queries at once.
You just have to be a bit careful about concurrent programming.


--
PETER FOX Not the same since the submarine business went under
pe******@eminent.demon.co.uk.not.this.bit.no.html
2 Tees Close, Witham, Essex.
Gravity beer in Essex <http://www.eminent.demon.co.uk>
Jan 18 '06 #16
R. Rajesh Jeba Anbiah wrote:
sa********@gmail.com wrote:
Hi!

I've been programming ASP for 5 years and am now learning PHP.
In ASP, you can use the GetRows function, which returns the Recordset
as a two-dimensional array.
It's actually the recommended way to access a DB in ASP, as it lets
you disconnect from the DB earlier.
It's also handy, as you can directly access any data in the array
without looping.


<snip>

As many people have pointed out, *never* dump the whole table into an
array. Fetch each record and process it immediately. If you have a
*valid* reason to buffer, buffer the data into a very small array of
known size. If you're using MySQL, use LIMIT where possible.

--
<?php echo 'Just another PHP saint'; ?>
Email: rrjanbiah-at-Y!com Blog: http://rajeshanbiah.blogspot.com/


And it's *never* a good idea to give absolutes :-).

There are times when it's better to dump a table into an array - like
when you have a lot of processing to do on multiple items and want to
release mysql resources.

Also collection classes for abstracting the data. And that's just the
beginning.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Jan 18 '06 #17
R. Rajesh Jeba Anbiah wrote:

As many people have pointed out, *never* dump the whole table into an
array. Fetch each record and process it immediately. If you have a
*valid* reason to buffer, buffer the data into a very small array of
known size. If you're using MySQL, use LIMIT where possible.


The reasoning being?

In my opinion conserving memory for the sake of conserving memory is
just silly. Hardware resources are there to be used. There's nothing
wrong with a script using a few megs of extra memory, as it'll release
them a short time later.

Jan 18 '06 #18

sa********@gmail.com wrote:
Hi!

I've been programming ASP for 5 years and am now learning PHP.
In ASP, you can use the GetRows function, which returns the Recordset
as a two-dimensional array.
It's actually the recommended way to access a DB in ASP, as it lets
you disconnect from the DB earlier.
It's also handy, as you can directly access any data in the array
without looping.

As far as I know, there's no such function in PHP, but I can make one.
My question is whether it's a good idea in PHP.

pseudo-code:

$data = get_data("select * from table1");
$var = $data[3][2]; //value at 4th row, 3rd column

This way, I can wrap db connection, data retrieval, and error handling
with one function (or maybe a class).
Is the idea workable?


So far, it doesn't sound good.
Most people here are against it.

In ASP, releasing resources early is very important, especially with
databases.
If you're interested, you may read
http://www.learnasp.com/advice/whygetrows.asp .

In ASP, I abstracted db retrieval in a class, closed the DB connection
early in the class, and left the data in memory as a 2d array.
ASP (VBScript) uses reference counting to dispose of memory, and the
array is guaranteed to be removed as soon as it goes out of scope.
Of course, the class is implemented in a library file, and each page
that includes it doesn't have to worry about resources.
If I just let each page loop over each record, some pages might forget
to release db resources.
And I feel uncomfortable with the idea that the db is opened at the top
of a page and closed somewhere else, far from the top.

The problem gets worse if a page includes several pages, and the
included pages access the database but can't release db resources
because their data are used in the including page.

It's ok if I lose some performance and use a little more memory
when I abstract data access.
But if that causes severe trouble, I must think about it again.

Can somebody direct me to a best practice from a very reliable
source or the official PHP site?

Thanks.
Sam

Jan 18 '06 #19

Chung Leong wrote:
R. Rajesh Jeba Anbiah wrote:

As many people have pointed out, *never* dump the whole table into an
array. Fetch each record and process it immediately. If you have a
*valid* reason to buffer, buffer the data into a very small array of
known size. If you're using MySQL, use LIMIT where possible.
The reasoning being?


Code refactoring, for example.

As you may already know, refactoring is almost always a good idea, as it
reduces the complexity of the algorithm and the processing time, and
increases the cache hit ratio, to name a few consequences.

Let's suppose the following example:

<?php

mysql_pconnect(blahblahblah);
$r = mysql_query(blahblahblah);
$db_results = array();
while ($row = mysql_fetch_array($r)) {
    $db_results[] = $row;
}

foreach ($db_results as $row) {
    foobar;
}

?>

Well, let's refactor that code:

<?php

mysql_pconnect(blahblahblah);
$r = mysql_query(blahblahblah);
while ($row = mysql_fetch_array($r)) {
    foobar;
}

?>

Less complexity, less CPU time, less memory, less code. Any developer who
has been taught anything about algorithms knows that. You'd better have a
good reason not to refactor your code in this way.
In my opinion conserving memory for the sake of conserving memory is
just silly. Hardware resources are there to be used. There's nothing
wrong with a script using a few megs of extra memory, as it'll release
them a short time later.


That's not a bad idea for batch jobs, but it is a terrible one when you have
tens, even hundreds, of hits per second. A few MB of memory per script may seem
a small issue, but think about a few MB per script, 100 scripts per second.
A "short time" is not a big thing, but a "short time" hundreds of times per
second is.
--
----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net

http://acm.asoc.fi.upm.es/~mr/ ; http://acm.asoc.fi.upm.es/~ivan/
MSN:i_*************************@hotmail.com
Jabber:iv*********@jabber.org ; iv*********@kdetalk.net
Jan 18 '06 #20

sa********@gmail.com wrote:

[...]
If I just let each page loop each record, some pages might forget
releasing db resources.
And? In PHP, if you leave a file descriptor open, the garbage collector will
gracefully close it when the script ends or gets killed for whatever
reason. If you leave any other resource open (say, a database connection), it
will be gracefully closed when the program ends, whether or not the developer
explicitly closed it.
And I feel uncomfortable with the idea that db is opened at the top of
a page and closed somewhere else far from the top.


In most cases, the DB connection is opened the first time a user hits a
page, and is not closed until some time after the user has hit a page.
Persistent connections save a lot of overhead, and are far preferable
to opening a DB conn, querying it, and closing it several times. A lot.
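
For example (a sketch with placeholder credentials):

// Persistent: the underlying connection survives the end of this script
// and is reused by a later request served by the same PHP process.
$conn = mysql_pconnect('localhost', 'user', 'pass');
mysql_select_db('db', $conn);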

--
----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net

http://acm.asoc.fi.upm.es/~mr/
Proudly running Debian Linux with 2.6.12-1-686 kernel, KDE3.5.0, and PHP
5.1.1-1 generating this signature.
Uptime: 19:51:59 up 8 min, 1 user, load average: 0.59, 1.04, 0.64
Jan 18 '06 #21

Iván Sánchez Ortega wrote:

Chung Leong wrote:
R. Rajesh Jeba Anbiah wrote:

As many people have pointed out, *never* dump the whole table into an
array. Fetch each record and process it immediately. If you have a
*valid* reason to buffer, buffer the data into a very small array of
known size. If you're using MySQL, use LIMIT where possible.
The reasoning being?


Code refactoring, for example.

As you may already know, refactoring is almost always a good idea, as it
reduces complexity of the algorithm, procesing time, and increases the
cache hit ratio, to name a few consecuences.

Let's suppose the following example:

<?php

mysql_pconnect(blahblahblah);
$r = mysql_query(blahblahblah);
$db_results = array();
while ($row = mysql_fetch_array($r)) {
    $db_results[] = $row;
}

foreach ($db_results as $row) {
    foobar;
}

?>

Well, let's refactor that code:

<?php

mysql_pconnect(blahblahblah);
$r = mysql_query(blahblahblah);
while ($row = mysql_fetch_array($r)) {
    foobar;
}

?>

Less complexity, less CPU time, less memory, less code. Any developer who
has been taught anything about algorithms knows that. You'd better have a
good reason not to refactor your code in this way.


Well, the example is not good for the issue here.
The following code will be encapsulated:

mysql_pconnect(blahblahblah);
$r = mysql_query(blahblahblah);
$db_results = array();
while ($row = mysql_fetch_array($r)) {
    $db_results[] = $row;
}


So the practical code will be:

$result = get_result("select * from table1");  // optional second arg: "myDB"
foreach ($result as $row) {
    foobar();
}

Of course, it has overhead, but that's more modular, and most modularity
comes with costs.

Now I understand your point, and I think dumping db data to an array
should be used carefully.

Thanks.
Sam

Jan 18 '06 #22

Persistent Connections:
The connection to the SQL server will not be closed when the execution
of the script ends. Instead, the link will remain open for future use
(mysql_close() will not close links established by mysql_pconnect()).

If you have a lot of concurrent users on the site, mysql_pconnect can
cause major resource issues.

The database library under discussion does not close the
connection after each query (i.e. fetch_array).

The only time the mysql connection is closed is when either the object
is removed or the execution of the page terminates.

Most of the data types returned from a mysql recordset field are strings.
Building large arrays of string data doesn't use a lot of
memory nor really impact performance.
Also, if you design your SQL statements correctly and efficiently, you
shouldn't have this issue anyway.

I've since written a more advanced database library that uses a
singleton pattern. The singleton pattern is useful when you require
only one instance of an object across multiple objects.
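
A minimal sketch of that pattern (illustrative only - names and
credentials are placeholders, and the real library does more):

class DB
{
    private static $instance = null;
    private $conn;

    private function __construct()
    {
        // private constructor: callers cannot create extra instances
        $this->conn = mysql_connect('localhost', 'user', 'pass')
            or die(mysql_error());
        mysql_select_db('db', $this->conn);
    }

    public static function getInstance()
    {
        // every caller shares the single connection
        if (self::$instance === null) {
            self::$instance = new DB();
        }
        return self::$instance;
    }

    public function query($sql)
    {
        $r = mysql_query($sql, $this->conn) or die(mysql_error());
        return $r;
    }
}

$rs = DB::getInstance()->query("SHOW TABLES");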

I've found these libraries extremely efficient in programming and
performance. You can view examples at:

http://sliterous.no-ip.org/php-libs/

Jan 18 '06 #24

Jerry Stuckle wrote:
And it's *never* a good idea to give absolutes :-).
And I've told you ten million times not to exaggerate :-P
There are times when it's better to dump a table into an array - like
when you have a lot of processing to do on multiple items and want to
release mysql resources.
Batch jobs are not the most usual thing to do in PHP...
Also collection classes for abstracting the data. And that's just the
beginning.


Such a class should not rely on a complete table dump to work: it should
load data dynamically (and/or cache some of it) in order to improve
performance. And it should use the SPL functionality to provide a clean
way to dynamically iterate over the data set.

--
----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net

A computer is not a television set or a microwave oven; it is a complex
tool.
Jan 18 '06 #26

sa********@gmail.com wrote:
Of course, it has overhead, but that's more modular and most modularity
comes with costs.


Yep, and it's up to you to decide if you want modularity or performance in
this case.

Or both, using a carefully planned SPL module :-)

--
----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net

http://acm.asoc.fi.upm.es/~mr/
Proudly running Debian Linux with 2.6.12-1-686 kernel, KDE3.5.0, and PHP
5.1.1-1 generating this signature.
Uptime: 23:00:55 up 3:17, 1 user, load average: 0.34, 0.36, 0.39
Jan 18 '06 #27
Iván Sánchez Ortega wrote:

Jerry Stuckle wrote:

And it's *never* a good idea to give absolutes :-).

And I've told you ten million times not to exaggerate :-P

There are times when it's better to dump a table into an array - like
when you have a lot of processing to do on multiple items and want to
release mysql resources.

Batch jobs are not the most usual thing to do in PHP...


Who said anything about batch jobs? I didn't.
Also collection classes for abstracting the data. And that's just the
beginning.

Such a class should not rely on a complete table dump to work: it should
load data dynamically (and/or cache some of it) in order to improve
performance. And it should use the SPL functionality to provide a clean
way to dynamically iterate over the data set.


Who said anything about a complete table dump? I didn't.
--
----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net

A computer is not a television set or a microwave oven; it is a complex
tool.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Jan 19 '06 #28
Iván Sánchez Ortega wrote:
As you may already know, refactoring is almost always a good idea, as it
reduces the complexity of the algorithm and the processing time, and
increases the cache hit ratio, to name a few consequences.
I don't know what refactoring has to do with this, but since you
mention it, I'll give you my 2 cents. In the real world, refactoring is
always a bad idea. By definition, you are not adding new
functionality--hence value to the product. You are thus wasting
programming and QA resources. Moreover, you risk introducing bugs into
what was working before. It's a lose-lose proposition.

As every good engineer knows, if it ain't broken, don't fix it.
Less complexity, less CPU time, less memory, less code. Any developer who
has been taught anything about algorithms knows that. You'd better have a
good reason not to refactor your code in this way.
There are plenty of good reasons. Interleaving data retrieval and
processing in the manner you describes makes it hard to properly
modularize the code. You are also fixing the direction by which the
rows can be processed--the sorting order of the query--without the
possibility of looking ahead.
That's not a bad idea for batch jobs, but it is a terrible one when you have
tens, even hundreds, of hits per second. A few MB of memory per script may seem
a small issue, but think about a few MB per script, 100 scripts per second.
A "short time" is not a big thing, but a "short time" hundreds of times per
second is.


That's just unrealistic. When you retrieve data from the database, it
usually goes somewhere--i.e. to the client. In your scenario you'd have
a server that outputs multiple gigs per second. I don't know about you
but I certainly don't have a petabyte bandwidth quota.

Jan 19 '06 #29
Chung Leong wrote:
R. Rajesh Jeba Anbiah wrote:

As many people have pointed out, *never* dump the whole table into an
array. Fetch each record and process it immediately. If you have a
*valid* reason to buffer, buffer the data into a very small array of
known size. If you're using MySQL, use LIMIT where possible.


The reasoning being?

In my opinion conserving memory for the sake of conserving memory is
just silly. Hardware resources are there to be used. There's nothing
wrong with a script using a few megs of extra memory, as it'll release
them a short time later.


Since I know that Chung Leong is heading an anti-performance campaign,
I'm not going to fight with him;-)

As you already know, by dumping huge buffered record sets into a PHP
array, you're just duplicating the buffer. Also, as I mentioned
earlier, one can buffer the records in PHP provided the result set is
very small (and there is a valid reason).

FWIW, I have come across a PHP application, manufactured in
another corner of the world, in which the programmer had buffered the
whole user table into an array keyed by id and then selected the
record like $record[$_GET['id']]. Apparently the programmer wasn't
aware of the WHERE clause.

Also, try benchmarking the outcome of dumping the whole table (or at
least huge record sets) into a PHP array. If you use Windows/Apache
like me, you'll immediately see the result.

<OT>It's really nice to see newcomers like Iván Sánchez Ortega,
and the c.l.php discussions are now getting hotter:-)</OT>

--
<?php echo 'Just another PHP saint'; ?>
Email: rrjanbiah-at-Y!com Blog: http://rajeshanbiah.blogspot.com/

Jan 19 '06 #30

Chung Leong wrote:
[refactoring] [...] By definition, you are not adding new
functionality--hence value to the product. You are thus wasting
programming and QA resource. Moreover, you risk introducting bugs into
what was working before. It's a lose-lose proposition.

As every good engineer knows, if it ain't broken, don't fix it.
But, if it's broken, I'd better fix it for good, instead of patching legacy
code so that it "just works".

The issue we're discussing - dumping the entire query results into an array
in memory - induces laziness in the programmers. Laziness leads to more
complex code, more complex code leads to more bugs, and more bugs lead to
more wasted QA time when something breaks down.

A few MB of memory per script may seem a small issue, but think about a
few MB per script, 100 scripts per second.


That's just unrealistic. When you retrieve data from the database, it
usually goes somewhere--i.e. to the client.


Sometimes it does not - sometimes, due to lazy programming or bad DB design,
data goes into memory, goes out of memory. And the whole thing risks
becoming a GIGO.

(but, hey, it's a "it just works" scenario! it's good for the CEO! I'm
getting paid!)
In your scenario you'd have a server that outputs multiple gigs per second.
I don't know about you but I certainly don't have a petabyte bandwidth
quota.


No; in my scenario I'll just have a couple of Gigabit-ethernet-enabled
servers. And their bottleneck must be the bandwidth, not resource-hungry
scripts. The code must be so blazing fast, so fucking clean, that I'm able
to affirm the server will be able to handle the load and not choke.

--
----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net

Where interest is lost, memory is lost as well.
-- Johann Wolfgang von Goethe (1749-1832), German writer.

Jan 20 '06 #31

R. Rajesh Jeba Anbiah wrote:
<OT>It's really nice to see newcomers like Iván Sánchez Ortega,
and the c.l.php discussions are now getting hotter:-)</OT>


Newcomer? I've been programming in PHP long before you were born, bro!

:-P

--
----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net

People act out of self-interest. Even when we serve others, we usually
do so because it benefits us, or because not doing so would work to our
own detriment. Although people may sacrifice their lives for others in
wartime and other extreme circumstances, that can hardly be called the
norm. Usually, if not above all, altruism satisfies a need of one's own.
-- Thomas Hobbes
Jan 20 '06 #32
Iván Sánchez Ortega wrote:
R. Rajesh Jeba Anbiah wrote:
<OT>It's really nice to see newcomers like Iván Sánchez Ortega,
and the c.l.php discussions are now getting hotter:-)</OT>


Newcomer? I've been programming in PHP long before you were born, bro!


Hey:-)

--
<?php echo 'Just another PHP saint'; ?>
Email: rrjanbiah-at-Y!com Blog: http://rajeshanbiah.blogspot.com/

Jan 20 '06 #33
Iván Sánchez Ortega wrote:

Chung Leong wrote:


But, if it's broken, I'd better fix it for good, instead of patching legacy
code so that it "just works".

The issue we're discussing - dumping the entire query results into an array
in memory - induces laziness in the programmers. Laziness leads to more
complex code, more complex code leads to more bugs, and more bugs lead to
more wasted QA time when something breaks down.

I disagree completely.

There are many valid reasons for putting the information in an array.

For one thing - once you have it in an array, you can release your
result set - freeing MySQL resources. And if you don't need the
connection for anything else, you can close it. One of the big
concurrency issues with RDB's is holding their resources longer than
necessary.

Also, if you're doing lots of complex processing on the returned data,
it's generally better to buffer the data in your own program rather than
hold it in MySQL.

Your way works for low volume web pages. But once you get into heavy
database use, it's a whole different story.

Sometimes it does not - sometimes, due to lazy programming or bad DB design,
data goes into memory, goes out of memory. And the whole thing risks
becoming a GIGO.

(but, hey, it's a "it just works" scenario! it's good for the CEO! I'm
getting paid!)

You don't need local buffering to make a GIGO. Any poor programming
practice will do it, while no good programming practice will.

No; in my scenario I'll just have a couple of Gigabit-ethernet-enabled
servers. And their bottleneck must be the bandwidth, not resource-hungry
scripts. The code must be so blazing fast, so fucking clean, that I'm able
to affirm the server will be able to handle the load and not choke.

Try it on some of the systems I've worked on in the past. Multiple web
servers feeding off one database server (DB/2 on AIX in this case).
Averages of < 100K > 1M hits/hr.

I've also worked on database systems which can handle peaks of < 1M
database requests per hour. You can't do either if you hold the db
resources unnecessarily long.
--
----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net

Where interest is lost, memory is lost as well.
-- Johann Wolfgang von Goethe (1749-1832), German writer.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Jan 20 '06 #34
Iván Sánchez Ortega wrote:

R. Rajesh Jeba Anbiah wrote:

<OT>It's really nice to see new comers like Iván Sánchez Ortega
and the c.l.php discussions are now getting hotter:-)</OT>

Newcomer? I've been programming in PHP long before you were born, bro!

:-P

--
----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net

People act out of self-interest. Even when we serve others, we usually
do so because it benefits us, or because not doing so would work to our
own detriment. Although people may sacrifice their lives for others in
wartime and other extreme circumstances, that can hardly be called the
norm. Usually, if not above all, altruism satisfies a need of one's own.
-- Thomas Hobbes

And I was probably programming before you were born!

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Jan 20 '06 #35
Iván Sánchez Ortega wrote:
But, if it's broken, I'd better fix it for good, instead of patching legacy
code so that it "just works".
Then it's not refactoring, my friend.
The issue we're discussing - dumping the entire query results into an array
in memory - induces laziness in the programmers. Laziness leads to more
complex code, more complex code leads to more bugs, and more bugs lead to
more wasted QA time when something breaks down.
There we go. That's something that always happens in this group--when
unable to mount a defense on technical merits, resort to a value
judgement of sorts. There's nothing wrong with choosing an easy solution.
It's called being smart.
Sometimes it does not - sometimes, due to lazy programming or bad DB design,
data goes into memory, goes out of memory. And the whole thing risks
becoming a GIGO.
Using bad programming to justify your so-called "good programming"--now
you're making a whole lot of sense.
No; in my scenario I'll just have a couple of Gigabit-ethernet-enabled
servers. And their bottleneck must be the bandwidth, not resource-hungry
scripts. The code must be so blazing fast, so fucking clean, that I'm able
to affirm the server will be able to handle the load and not choke.


You don't even know what you're debating. The exact same amount of work
is done. There is no performance impact unless the memory usage exceeds
the physical amount available. A typical database query will return 10K
or less. To say that reading the data into an array would somehow have
some catastrophic effect on performance is...intellectually challenged.

Jan 20 '06 #36
Chung Leong wrote:
<snip>
There is no performance impact unless the memory usage exceeds
the physical amount available. A typical database query will return 10K
or less. To say that reading the data into an array would somehow have
some catastrophic effect on performance is...intellectually challenged.


Some crazy predictions of the future of this thread:
Chung gonna prove his ideas with his own (benchmarking?) script and I
gonna ask him to use APD and other real tools, some curmudgeon gonna
claim more than 100 years of experience without reading the thread, and
.....

(Discussions in c.l.php are not that hard to imagine;-) Pun intended,
not meant to be serious.)

--
<?php echo 'Just another PHP saint'; ?>
Email: rrjanbiah-at-Y!com Blog: http://rajeshanbiah.blogspot.com/

Jan 20 '06 #37
R. Rajesh Jeba Anbiah wrote:

Some crazy predictions of the future of this thread:
Chung gonna prove his ideas with his own (benchmarking?) script and I
gonna ask him to use APD and other real tools, some curmudgeon gonna
claim more than 100 years of experience without reading the thread, and
....


What I am going to say is this: *never* listen to the advice of R.
Rajesh Jeba Anbiah because it will shrink your dick. Unless someone can
prove otherwise (benchmarking?), we should assume my assertion is true.

Jan 20 '06 #38

Jerry Stuckle wrote:
For one thing - once you have it in an array, you can release your
result set - freeing MySQL resources. And if you don't need the
connection for anything else, you can close it. One of the big
concurrency issues with RDB's is holding their resources longer than
necessary.

Also, if you're doing lots of complex processing on the returned data,
it's generally better to buffer the data in your own program rather than
hold it in MySQL.


It doesn't have to be complex, just time-consuming. An operation like
echo can potentially block for a very long time.

Jan 20 '06 #39
Chung Leong wrote:
R. Rajesh Jeba Anbiah wrote:
<snip>

What I am going to say is this: *never* listen to the advice of R.
Rajesh Jeba Anbiah because it will shrink your dick. Unless someone can
prove otherwise (benchmarking?), we should assume my assertion is true.


LOL:-)

--
<?php echo 'Just another PHP saint'; ?>
Email: rrjanbiah-at-Y!com Blog: http://rajeshanbiah.blogspot.com/

Jan 21 '06 #40
I also hear it's slower (from another post) than using the direct mysql
functions.
If you want to build an array from the database, you could do this:

$a = array();
$q = mysql_query("SELECT * FROM tablename", $link);
while ($row = mysql_fetch_array($q)) {
    $a[] = $row;
}
foreach ($a as $row) {
    // do whatever with things like $row['item']
}
Or you could use a SELECT to create a temporary table and grab your data
from that, and then destroy the temporary table. Either way works; it
depends on whether you need a snapshot or whether you are willing to work
with live rows. MySQL has row locking.
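
The temporary-table variant might look like this (a sketch; the table
and the WHERE clause are made up):

// take a snapshot the rest of the script can query without touching
// the live rows again
mysql_query("CREATE TEMPORARY TABLE snapshot
             SELECT * FROM tablename WHERE status = 'open'", $link)
    or die(mysql_error());
$q = mysql_query("SELECT * FROM snapshot", $link);
// ... work on the frozen copy ...
mysql_query("DROP TEMPORARY TABLE snapshot", $link); // or let it vanish
                                                     // with the connection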

"Andy Hassall" <an**@andyh.co.uk> wrote in message
news:h3********************************@4ax.com...
On 17 Jan 2006 11:34:37 -0800, sa********@gmail.com wrote:
Rather than re-invent the wheel, look at:

http://adodb.sourceforge.net/
http://phplens.com/adodb/reference.f....getarray.html


Thanks for the answer.
My intention is not to use ADO db but make DB-accessing code simple and
avoid repeated codes (connecting/freeing/disconnecting db).
Is there a best practice of db-accessing in PHP?


You do realise that other than the name and a deliberate similarity in
function names, ADOdb has nothing to do with Microsoft ADO - it's just a
thin
PHP library providing the sort of access methods you're talking about on
top of
various PHP native database access functions?

IMHO, ADOdb _is_ the best practice of accessing databases in PHP.

--
Andy Hassall :: an**@andyh.co.uk :: http://www.andyh.co.uk
http://www.andyhsoftware.co.uk/space :: disk and FTP usage analysis tool

Jan 23 '06 #41

"Jerry Stuckle" <js*******@attglobal.net> wrote in message
news:Ib********************@comcast.com...
Iván Sánchez Ortega wrote:

Jerry Stuckle wrote:

And it's *never* a good idea to give absolutes :-).

And I've told you ten million times not to exaggerate :-P

There are times when it's better to dump a table into an array - like
when you have a lot of processing to do on multiple items and want to
release mysql resources.

About those times when it's not - like when there's some other UPDATEr out
there on your rows that you just SELECTed (and you also want to do an
update, or an external UPDATE would really mess things up) - how can you
tell MySQL to lock those rows?


Batch jobs are not the most usual thing to do in PHP...


Who said anything about batch jobs? I didn't.
Also collection classes for abstracting the data. And that's just the
beginning.

Such a class should not relay on a complete table dump to work: it should
load data dinamically (and/or cache some of it) in order to improve
performance. And it should use the SPL functionality, to provide a clean
way to dinamically iterate over the data set.


Who said anything about a complete table dump? I didn't.
--
----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net

A computer is not a television set or a microwave oven; it is a complex
tool.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================

Jan 23 '06 #42
Jim Michaels wrote:
"Jerry Stuckle" <js*******@attglobal.net> wrote in message
news:Ib********************@comcast.com...

About those times when it's not - like when there's some other UPDATEr out
there on your rows that you just SELECTed (and you also want to do an
update, or an external UPDATE would really mess things up) - how can you
tell MySQL to lock those rows?


Jim,

I never said it was ALWAYS a good idea to do this. Rather, I argued
with the statement it is NEVER a good idea. Two entirely different things.

However, since you asked, this happens very often on heavy transaction
systems. It's quite easy.

You buffer the list. If you need to update an item, you fetch THAT ITEM
only, a second time. Compare the contents of the just-retrieved item to
the saved item. If there's no change, you can update it with impunity.

If there is a change, look at what's changed. If it's a field unrelated
to your current change, go ahead and update it. If it's a field which
will also be changed, you need to make an intelligent decision as to
whether you can update it or need to notify the user of an error.

Take a bank account for example. You get online and wish to move $100
from checking to savings. Meanwhile, your wife is going shopping and
tries to take $50 out of the account at an ATM.

You make the request. The system fetches your current balance and
ensures it is > $100. If it is, it displays a screen asking for you to
confirm this request (if you only have $10 in there, of course it denies
the request).

Now - your internet connection may go down, you may step away a minute
to pour yourself a cup of coffee, whatever. The point is, the system is
waiting for user response. Your account cannot be locked for that
period of time; other actions would be held up (like your wife getting
money or a check being cleared).

Let's say right now your wife makes that $50 withdrawal. Now you tell
the computer to complete your transaction.

But wait - there's less money in your account than when you started. If
the system just subtracted $100 based on the earlier-fetched balance, the
$50 withdrawal would be lost (you'd be happy but your bank wouldn't!).

So the program again fetches your balance and compares it to the
original value. But wait - it's changed.

So the system recomputes the balance. If there are sufficient funds in
the account, it processes the withdrawal. But if there aren't, it sends
you an error message.

This is generally how almost all transactional systems work on big
systems. Yes, it's quite a bit more work. But it allows updating
without having to hold locks while someone goes for coffee.
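
In SQL terms, the final step can even be a single statement, so the lock
exists only for that statement's duration (a sketch; the table and column
names are made up):

// re-check the balance inside the UPDATE itself
mysql_query("UPDATE accounts
             SET balance = balance - 100
             WHERE id = 42 AND balance >= 100") or die(mysql_error());
if (mysql_affected_rows() == 0) {
    // the balance changed underneath us (or was insufficient):
    // re-fetch, recompute, and tell the user if the funds aren't there
}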

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Jan 23 '06 #43
[top-post fixed. Please don't top-post]
Jim Michaels wrote:
"Andy Hassall" <an**@andyh.co.uk> wrote in message
news:h3********************************@4ax.com...
On 17 Jan 2006 11:34:37 -0800, sa********@gmail.com wrote:
Rather than re-invent the wheel, look at:

http://adodb.sourceforge.net/
http://phplens.com/adodb/reference.f....getarray.html
<snip>
IMHO, ADOdb _is_ the best practice of accessing databases in PHP.
I also hear it's slower (from another post) than using the direct mysql
functions.


It's slower because of the hard-coded stuff at the PHP end. But it's no
surprise that an abstraction would be slower than the native functions.
IMHO, it's always better to move to abstractions, because switching to
another DB will be very easy if you use an abstraction.

--
<?php echo 'Just another PHP saint'; ?>
Email: rrjanbiah-at-Y!com Blog: http://rajeshanbiah.blogspot.com/

Jan 23 '06 #44
Message-ID: <cu********************@comcast.com> from Jerry Stuckle
contained the following:
So the system recomputes the balance. If there are sufficient funds in
the account, it processes the withdrawal.


Isn't a lock required for the very short period of time in between
recomputing the balance and processing the withdrawal?

--
Geoff Berrow (put thecat out to email)
It's only Usenet, no one dies.
My opinions, not the committee's, mine.
Simple RFDs http://www.ckdog.co.uk/rfdmaker/
Jan 23 '06 #45
Geoff Berrow wrote:
Message-ID: <cu********************@comcast.com> from Jerry Stuckle
contained the following:

So the system recomputes the balance. If there are sufficient funds in
the account, it processes the withdrawal.

Isn't a lock required for the very short period of time in between
recomputing the balance and processing the withdrawal?


Yes, but that's a very short time - on a mainframe you're talking
microseconds; milliseconds at the worst.

You can never eliminate all locks. The secret is to minimize the number
and length of the locks.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Jan 23 '06 #46

Jerry Stuckle wrote:
You can never eliminate all locks. The secret is to minimize the number
and length of the locks.


That is a concurrent programming technique; I know it as "make the critical
section as small and fast as possible".

(A critical section is the code that runs with the locks "on", and never
runs in parallel to another critical section.)

Make sure the critical section has a very small complexity (usually O(1)),
make sure there are no deadlocks, rely on listeners and events instead of
active waiting, and you should be fine.
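
A tiny PHP illustration of the idea, using flock() on a made-up lock
file - everything expensive is prepared before the lock is taken, and
the locked region is a couple of O(1) operations:

$fp = fopen('/tmp/counter.lock', 'w');        // hypothetical lock file
flock($fp, LOCK_EX);                          // enter the critical section
$n = (int)@file_get_contents('/tmp/counter'); // O(1) work only
file_put_contents('/tmp/counter', $n + 1);
flock($fp, LOCK_UN);                          // leave as fast as possible
fclose($fp);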

--
----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net

A computer is not a television set or a microwave oven; it is a complex
tool.
Jan 23 '06 #47
Iván Sánchez Ortega wrote:

Jerry Stuckle wrote:

You can never eliminate all locks. The secret is to minimize the number
and length of the locks.

That is a concurrent programming technique; I know it as "make the critical
section as small and fast as possible".

(A critical section is the code that runs with the locks "on", and never
runs in parallel to another critical section.)

Make sure the critical section has a very small complexity (usually O(1)),
make sure there are no deadlocks, rely on listeners and events instead of
active waiting, and you should be fine.


No, don't rely on listeners or events, either. Don't wait on anything
you don't absolutely have to. For instance, you may need to wait for
another database operation to finish (e.g. updating two tables).

But never turn control over to someone else (i.e. wait for an event).
What happens if an event gets lost?

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Jan 23 '06 #48
Jerry Stuckle wrote:
No, don't rely on listeners or events, either. Don't wait on anything
you don't absolutely have to. For instance, you may need to wait for
another database operation to finish (e.g. updating two tables).

But never turn control over to someone else (i.e. wait for an event).
What happens if an event gets lost?


Well, I'm supposing here that the locking mechanism is good enough to not
let that happen...

However, what I really meant was "never ever rely on active waiting to do
important concurrent stuff".

--
----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net

To offer much is a kind of denial.

Jan 23 '06 #49
Iván Sánchez Ortega wrote:
Jerry Stuckle wrote:

No, don't rely on listeners or events, either. Don't wait on anything
you don't absolutely have to. For instance, you may need to wait for
another database operation to finish (e.g. updating two tables).

But never turn control over to someone else (i.e. wait for an event).
What happens if an event gets lost?

Well, I'm supposing here that the locking mechanism is good enough to not
let that happen...

However, what I really meant was "never ever rely on active waiting to do
important concurrent stuff".


Locking in a database is controlled by the application. When the
application issues either a COMMIT or ROLLBACK, all locks are released.

But in the meantime, locks are held. Some databases have a lock-timeout
setting, but if the timeout occurs, the only action the database can
take is to internally issue a ROLLBACK.

In the meantime, any apps needing access to the locked resource may have
to wait (depending on a lot of things, such as lock type, request type,
etc.). This could be a relatively long time (e.g. 30 seconds).

Rather, the secret to high-transaction systems is to hold locks only
for the shortest possible time. Gather all the information you
require ahead of time. Get the locks, update the data and COMMIT.

And never, never, never wait for users while holding locks!

That's how the airlines, banks and other high-transaction systems keep
the throughput up.
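
One common way to keep that locked window tiny is a conditional UPDATE,
sketched here in PHP against the same invented "accounts" table as
earlier in the thread - all the user interaction happened beforehand,
with no locks held:

mysql_query("BEGIN");
// the WHERE clause re-checks the balance atomically; no row is touched
// if someone spent the money in the meantime
mysql_query("UPDATE accounts SET balance = balance - 100
             WHERE id = 42 AND balance >= 100");
if (mysql_affected_rows() == 1) {
    mysql_query("COMMIT");      // locks held for milliseconds at most
} else {
    mysql_query("ROLLBACK");    // the balance changed - report an error
}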
--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Jan 23 '06 #50

This thread has been closed and replies have been disabled.
