Hi!
I've been programming in ASP for 5 years and am now learning PHP.
In ASP you can use the GetRows function, which returns the contents of a
Recordset as a two-dimensional array.
It's actually the recommended approach in ASP when you access a DB, as it
lets you disconnect from the DB earlier.
It's also handy because you can directly access any element of the array
without looping.
As far as I know there's no such function in PHP, but I could write one.
My question is whether that's a good idea in PHP.
pseudo-code:
$data = get_data("select * from table1");
$var = $data[3][2]; // value at 4th row, 3rd column
This way, I can wrap db connection, data retrieval, and error handling
with one function (or maybe a class).
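A rough sketch of what I have in mind (get_data() is just my hypothetical name, and PDO with an in-memory SQLite table here is purely a stand-in for the real database and error handling):

```php
<?php
// Sketch of a GetRows-style helper: run a query and return the whole
// result set as a two-dimensional, row-major array, so the statement
// handle can be released as soon as the data has been copied out.
function get_data(PDO $db, $sql)
{
    $stmt = $db->query($sql);
    $rows = $stmt->fetchAll(PDO::FETCH_NUM); // numeric indexes, like GetRows
    $stmt = null;                            // free the cursor early
    return $rows;
}

// Demo with an in-memory SQLite table (a stand-in for the real DB).
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE table1 (a INTEGER, b TEXT)');
$db->exec("INSERT INTO table1 VALUES (1,'x'), (2,'y'), (3,'z')");

$data = get_data($db, 'SELECT * FROM table1');
echo $data[2][1], "\n"; // value at 3rd row, 2nd column: z
```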
Is the idea workable?
TIA.
Sam
Jan 17 '06
sa********@gmail.com wrote:
[...] If I just let each page loop each record, some pages might forget releasing db resources.
And? In PHP, if you leave a file descriptor open, the garbage collector will
gracefully close it when the script ends or gets killed for whatever
reason. If you leave any other resource open (say, a database connection),
it will be gracefully closed when the program ends, whether or not the
developer explicitly closes it.
And I feel uncomfortable with the idea that db is opened at the top of a page and closed somewhere else far from the top.
In most cases, the DB connection is opened the first time a user hits a
page, and is not closed until some time after the user has hit a page.
Persistent connections save a lot of overhead, and are far preferable
to opening a DB connection, querying it, and closing it several times over. A lot.
- --
- ----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net http://acm.asoc.fi.upm.es/~mr/
Proudly running Debian Linux with 2.6.12-1-686 kernel, KDE3.5.0, and PHP
5.1.1-1 generating this signature.
Uptime: 19:51:59 up 8 min, 1 user, load average: 0.59, 1.04, 0.64
Iván Sánchez Ortega wrote:
Chung Leong wrote:
R. Rajesh Jeba Anbiah wrote: As many people have pointed out, *never* dump the table data into an array. Fetch each record and get it processed immediately. If you have any *valid* reason, buffer the data into a very, very small (known) sized array. If using MySQL, use LIMIT if possible.
The reasoning being?
Code refactoring, for example.
As you may already know, refactoring is almost always a good idea, as it reduces the complexity of the algorithm and the processing time, and increases the cache hit ratio, to name a few consequences.
Take the following example:
<?php
mysql_pconnect(blahblahblah);
$r = mysql_query(blahblahblah);
$db_results = array();
while ($row = mysql_fetch_array($r)) {
    $db_results[] = $row;
}
foreach ($db_results as $row) {
    foobar;
}
?>
Well, let's refactor that code:
<?php
mysql_pconnect(blahblahblah);
$r = mysql_query(blahblahblah);
while ($row = mysql_fetch_array($r)) {
    foobar;
}
?>
Less complexity, less CPU time, less memory, less code. Any developer who has been taught anything about algorithms knows that. You'd better have a good reason not to refactor your code in this way.
Well, the example is not a good fit for the issue here.
The following code would be encapsulated:
mysql_pconnect(blahblahblah);
$r = mysql_query(blahblahblah);
$db_results = array();
while ($row = mysql_fetch_array($r)) {
    $db_results[] = $row;
}
So the practical code would be:
$result = get_result("select * from table1" [, "myDB"]);
foreach ($result as $row) {
    foobar();
}
Of course, it has overhead, but that's more modular and most modularity
comes with costs.
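As a sketch of how such a wrapper can stay modular without buffering everything (an aside for later readers: this uses a generator, which needs PHP 5.5+; get_result() is the hypothetical name from above, and PDO/SQLite stands in for the real connection code):

```php
<?php
// Sketch: get_result() as a generator. The caller gets the same
// foreach-able interface as a full array dump, but rows are fetched
// one at a time, so the result set is never duplicated in memory.
function get_result(PDO $db, $sql)
{
    $stmt = $db->query($sql);
    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        yield $row;
    }
}

// Demo (in-memory SQLite as a stand-in for the real DB).
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE table1 (id INTEGER, name TEXT)');
$db->exec("INSERT INTO table1 VALUES (1,'a'), (2,'b')");

$names = array();
foreach (get_result($db, 'SELECT * FROM table1') as $row) {
    $names[] = $row['name'];
}
echo implode(',', $names), "\n"; // a,b
```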
Now I understand your point, and I think dumping db data to an array
should be used carefully.
Thanks.
Sam
Iván Sánchez Ortega wrote: [...]
Persistent connections:
The connection to the SQL server will not be closed when the execution
of the script ends. Instead, the link will remain open for future use
(mysql_close() will not close links established by mysql_pconnect()).
If you have a lot of concurrent users on the site, mysql_pconnect can
cause major resource issues.
The database library under discussion does not close the connection
after each query (i.e. fetch_array).
The only time the mysql connection is closed is when either the object
is removed or the execution of the page terminates.
Most of the data returned from a mysql recordset field is of string
type, and building large arrays of strings doesn't use a lot of
memory or really impact performance.
Also, if you design your SQL statements correctly and efficiently you
shouldn't have this issue anyway.
I've since written a more advanced database library that uses a
singleton pattern; the singleton pattern is useful when you require
only one instance of an object across multiple objects.
I've found these libraries extremely efficient in programming and
performance.
You can view examples at http://sliterous.no-ip.org/php-libs/
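For illustration only (this is my own minimal sketch of the idea, not the actual library code, and PDO/SQLite stands in for MySQL), a bare-bones singleton connection wrapper looks like:

```php
<?php
// Sketch of a singleton DB wrapper: every caller shares a single
// connection object instead of opening its own.
class Database
{
    private static $instance = null;
    private $pdo;

    private function __construct()
    {
        // In-memory SQLite as a stand-in for the real MySQL DSN.
        $this->pdo = new PDO('sqlite::memory:');
    }

    public static function instance()
    {
        if (self::$instance === null) {
            self::$instance = new Database();
        }
        return self::$instance;
    }

    public function query($sql)
    {
        return $this->pdo->query($sql);
    }
}

// Two lookups return the very same object (and thus one connection).
$a = Database::instance();
$b = Database::instance();
var_dump($a === $b); // bool(true)
```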
Jerry Stuckle wrote: And it's *never* a good idea to give absolutes :-).
And I've told you ten million times not to exaggerate :-P
There are times when it's better to dump a table into an array - like when you have a lot of processing to do on multiple items and want to release mysql resources.
Batch jobs are not the most usual thing to do in PHP...
Also collection classes for abstracting the data. And that's just the beginning.
Such a class should not rely on a complete table dump to work: it should
load data dynamically (and/or cache some of it) in order to improve
performance. And it should use the SPL functionality to provide a clean
way to dynamically iterate over the data set.
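To make the SPL suggestion concrete, here is a bare-bones sketch (my own illustration, not any particular library's code, typed to satisfy PHP 8's Iterator signatures): a class implementing the Iterator interface that fetches rows on demand instead of dumping the table, with PDO/SQLite standing in for the real driver.

```php
<?php
// Sketch: an SPL Iterator that pulls rows from the database one at a
// time, so foreach() works without a full table dump in memory.
class ResultIterator implements Iterator
{
    private $db;
    private $sql;
    private $stmt;
    private $row;
    private $key;

    public function __construct(PDO $db, $sql)
    {
        $this->db  = $db;
        $this->sql = $sql;
    }

    public function rewind(): void
    {
        // Re-run the query: forward-only cursors cannot seek backwards.
        $this->stmt = $this->db->query($this->sql);
        $this->key  = 0;
        $this->row  = $this->stmt->fetch(PDO::FETCH_ASSOC);
    }

    public function valid(): bool { return $this->row !== false; }
    public function current(): mixed { return $this->row; }
    public function key(): mixed { return $this->key; }

    public function next(): void
    {
        $this->row = $this->stmt->fetch(PDO::FETCH_ASSOC);
        $this->key++;
    }
}

// Demo (in-memory SQLite as a stand-in for the real driver).
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE t (n INTEGER)');
$db->exec('INSERT INTO t VALUES (1), (2), (3)');

$total = 0;
foreach (new ResultIterator($db, 'SELECT n FROM t') as $row) {
    $total += $row['n'];
}
echo $total, "\n"; // 6
```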
- --
- ----------------------------------
Iván Sánchez Ortega -i-punto-sanchez--arroba-mirame-punto-net
A computer is not a television or a microwave; it is a complex tool.
sa********@gmail.com wrote: Of course, it has overhead, but that's more modular and most modularity comes with costs.
Yep, and it's up to you to decide if you want modularity or performance in
this case.
Or both, using a carefully planned SPL module :-)
Iván Sánchez Ortega wrote:
Jerry Stuckle wrote:
And it's *never* a good idea to give absolutes :-).
And I've told you ten million times not to exaggerate :-P
There are times when it's better to dump a table into an array - like when you have a lot of processing to do on multiple items and want to release mysql resources.
Batch jobs are not the most usual thing to do in PHP...
Who said anything about batch jobs? I didn't.
Also collection classes for abstracting the data. And that's just the beginning.
Such a class should not rely on a complete table dump to work: it should load data dynamically (and/or cache some of it) in order to improve performance. And it should use the SPL functionality to provide a clean way to dynamically iterate over the data set.
Who said anything about a complete table dump? I didn't.
--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp. js*******@attglobal.net
==================
Iván Sánchez Ortega wrote: As you may already know, refactoring is almost always a good idea, as it reduces the complexity of the algorithm and the processing time, and increases the cache hit ratio, to name a few consequences.
I don't know what refactoring has to do with this, but since you
mention it, I'll give you my 2 cents. In the real world, refactoring is
always a bad idea. By definition, you are not adding new
functionality--hence no value to the product. You are thus wasting
programming and QA resources. Moreover, you risk introducing bugs into
what was working before. It's a lose-lose proposition.
As every good engineer knows: if it ain't broke, don't fix it.
Less complexity, less CPU time, less memory, less code. Any developer who has been taught anything about algorithms knows that. You'd better have a good reason not to refactor your code in this way.
There are plenty of good reasons. Interleaving data retrieval and
processing in the manner you describe makes it hard to properly
modularize the code. You are also fixing the direction in which the
rows can be processed--the sort order of the query--without the
possibility of looking ahead.
That's not a bad idea for batch jobs, but it is a terrible one when you have tens or hundreds of hits per second. A few MB of memory per script may seem a small issue, but think about a few MB per script at 100 scripts per second. A "short time" is not a big thing, but a "short time" hundreds of times per second is.
That's just unrealistic. When you retrieve data from the database, it
usually goes somewhere--i.e. to the client. In your scenario you'd have
a server that outputs multiple gigs per second. I don't know about you,
but I certainly don't have a petabyte bandwidth quota.
Chung Leong wrote: R. Rajesh Jeba Anbiah wrote: As many people have pointed out, *never* dump the table data into an array. Fetch each record and get it processed immediately. If you have any *valid* reason, buffer the data into a very, very small (known) sized array. If using MySQL, use LIMIT if possible.
The reasoning being?
In my opinion conserving memory for the sake of conserving memory is just silly. Hardware resources are there to be used. There's nothing wrong with a script using a few megs of extra memory, as it'll release them a short time later.
Since I know that Chung Leong is heading an anti-performance campaign,
I'm not going to fight with him ;-)
As you already know, by dumping huge buffered records into a PHP
array, you're just duplicating the buffer. Also, as I mentioned
earlier, one can buffer the records in PHP provided the records are
very, very few (for a valid reason).
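To illustrate the LIMIT idea: a sketch of walking a large table in small fixed-size chunks, so at most a known number of rows is ever buffered in PHP (PDO/SQLite as a stand-in; the chunk size of 3 is arbitrary).

```php
<?php
// Sketch: process a big table in bounded chunks via LIMIT/OFFSET, so
// at most $chunk rows are ever held in a PHP array at once.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE big (n INTEGER)');
for ($i = 1; $i <= 10; $i++) {
    $db->exec("INSERT INTO big VALUES ($i)");
}

$chunk  = 3;   // small, known buffer size
$offset = 0;
$sum    = 0;
do {
    $rows = $db->query("SELECT n FROM big ORDER BY n LIMIT $chunk OFFSET $offset")
               ->fetchAll(PDO::FETCH_ASSOC);
    foreach ($rows as $row) {
        $sum += $row['n'];   // stand-in for real per-row processing
    }
    $offset += $chunk;
} while (count($rows) === $chunk);

echo $sum, "\n"; // 55
```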
FWIW, I came across a PHP application that was built in
another corner of the world, in which the programmer had buffered the
whole user table into an array keyed by id and then selected the
record like $record[$_GET['id']]. I'm not sure the programmer was aware
of the WHERE clause.
Also, try benchmarking the outcome of dumping the whole table (or at
least huge record sets) into a PHP array. If you use Windows/Apache
like me, you'll immediately see the result.
<OT>It's really nice to see newcomers like Iván Sánchez Ortega,
and the c.l.php discussions are now getting hotter :-)</OT>
--
<?php echo 'Just another PHP saint'; ?>
Email: rrjanbiah-at-Y!com Blog: http://rajeshanbiah.blogspot.com/