
PHP Design tools? IDE?

Hi,

I'm an experienced database and software designer and developer,
but, unfortunately, anything to do with web programming and
web-systems design is still a pretty new area to me...
(I've been working mostly with "legacy" environments for the last 10 years)

So I am writing this hoping to get some useful advice and feedback...

I have done some pretty trivial, small websites with html/PHP,
and now starting a bit more advanced one, where I will need SQL
database-support, and will probably go for MySQL.

Of course, I have read some elementary tutorials and articles on the web,
but I still wonder how PHP is actually used in practice when
creating bigger websites...

Is PHP usually handcoded?
Like... you just make up a database, then a number of tables you think will do,
and finally (what I am most concerned about) a multiplicity of different
PHP script files in an ad-hoc fashion?
(I hope not, since it seems that this would quite soon produce a pretty
difficult-to-maintain-and-grasp mess of the whole system.)

So what I would like to learn is: what nice (preferably free/open-source)
tools for design and project management are available?

My idea is that you would want some IDE application, something
like VB or Delphi or VC++, where you design your web pages and
forms in a graphical manner, and you can code "event" pieces of code
that react to clicks, buttons and user actions, like
in the traditional IDEs.
And then there is some "compiler", or rather a generator, that does all the
boring routine work of converting your design to HTML/PHP code.
The page designs would also be based on "templates", of course,
so it's flexible to modify if you want to change the look-and-feel
of your whole site.

Ideally, I would also prefer that the whole website-design
would be stored in a separate, fixed set of tables in the database,
rather than in a zillion different script-files, CSS-sheets, etc.

I can also envision a design where, even if the website contains
a huge number of pages and different functions,
there would actually only be a single PHP-script "index.php"
and all sub-pages are generated on-the-fly, from arguments
in the URL, i.e. all URLs simply looking like:

index.php?pageid=1
index.php?pageid=2
...
index.php?pageid=productlist
index.php?pageid=registration
index.php?pageid=loginform
index.php?pageid=contactform

etc...

The index.php startfile might only look something like:

<html>
<?php
include("lib.php");
main();
?>
</html>

Then all pages would be generated on-the-fly by the main() function
(defined in "lib.php", along with a host of useful subroutines),
and driven by the URL-parameter 'pageid' and the overall site-design
somehow stored in the database.
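If it helps to make the idea concrete, here is a minimal sketch of what such a lib.php "engine" could look like. Everything in it (the function names, the dispatch table) is invented for illustration; in the scheme described above, the mapping and the page content would come out of the database instead of a hard-coded array.

```php
<?php
// lib.php - hypothetical front-controller sketch.
// Each pageid maps to a function that returns the page body.

function page_productlist() { return "<h1>Products</h1>"; }
function page_loginform()   { return "<h1>Login</h1>"; }

function render_page($pageid) {
    // Dispatch table; a database-driven site would load this mapping
    // (and the page content itself) from tables instead.
    $pages = array(
        'productlist' => 'page_productlist',
        'loginform'   => 'page_loginform',
    );
    if (isset($pages[$pageid])) {
        return call_user_func($pages[$pageid]);
    }
    return "<h1>Page not found</h1>";
}

function main() {
    $pageid = isset($_GET['pageid']) ? $_GET['pageid'] : 'home';
    echo render_page($pageid);
}
```

A request for index.php?pageid=productlist would then echo the product-list markup.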

All design-work you did in the IDE (graphical, code-snippets, etc)
would then actually be stored in the DB, and the PHP script-files
could then be a quite static PHP-implementation of an "engine"
reading the DB-content and outputting the HTML-code and pages.

So... could you please give me some brief hints/links on what
IDEs and/or other free tools along those lines already exist,
or are commonly used for PHP/MySQL in real-world development?

Thanks,
Jul 17 '05 #1
43 Replies


grz02 wrote:
So... could you please give me a short brief/hints/links to what
IDEs and/or other free tools, along those lines, that already exist
or what are commonly used for PHP/MySQL in real-world development.


Hi!

I'm not an expert on the matter, since I've done most of my routines,
classes and template engines myself, but some sites you might want
to check out come to mind.

See the popular Smarty template engine at:
http://smarty.php.net/

And for a real PHP IDE, at least check out Zend's PHP Studio at
http://www.zend.com/store/products/zend-studio.php?home

For MySQL database management there is of course at least phpMyAdmin:
http://www.phpmyadmin.net

I know there are tons of others but without real personal experience on
those, I will let others inform you.

HTH

--
Suni

Jul 17 '05 #2


"grz02" <gr***@spray.se> wrote in message
news:16**************************@posting.google.com...
Hi,

I'm an experienced database+software designer and developer,
but, unfortunately, anything to do with web-programming and
web-systems designs is still a pretty new area to me...
(been working mostly with "legacy" environments the last 10 years)

So I am writing this, hoping to get some useful advice and feedback...

I have done some pretty trivial, small websites with html/PHP,
and now starting a bit more advanced one, where I will need SQL
database-support, and will probably go for MySQL.

Of course, I have read some elementary tutorials and articles on the web,
but still wonder how PHP is actually being used in practice when
creating bigger web-sites...

Is PHP usually handcoded?
Like... you just make up a database, then a number of tables you think
will do,
and finally (what I am most concerned about) a multiplicity of different
PHP script-files in an ad-hoc fashion?
(I hope not, since it seems that this will quite soon produce a pretty
difficult-to-maintain-and-grasp mess of the whole system)
The way to Rapid Application Development (RAD) in any language, not just
PHP, is to have a solid architectural framework with lots of reusable
modules which you can access in order to build individual components or
transactions. You may start by using something pre-written by somebody else,
but once you have sussed out how it works you may wish to customise it,
either because you wish to add more options, or because your personal style
is different.

When it comes to architectural frameworks I would suggest the 3-tier
architecture or Model-View-Controller (personally I use both combined). The
3-tier architecture includes a Data Access Object (DAO), which contains the
code that carries out all communication with the database.
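As a rough illustration of the DAO idea (the class, table and column names here are invented, and the mysql_* functions are the PHP 4/5 API of the era, since replaced by mysqli/PDO):

```php
<?php
// Hypothetical DAO sketch: the only place in the application
// that talks SQL to the database.
class ProductDAO {
    function getById($id) {
        $id = (int) $id; // crude input sanitisation for the example
        $result = mysql_query("SELECT * FROM products WHERE id = $id");
        return $result ? mysql_fetch_assoc($result) : false;
    }

    function insert($name, $price) {
        $name  = mysql_real_escape_string($name);
        $price = (float) $price;
        return mysql_query(
            "INSERT INTO products (name, price) VALUES ('$name', $price)");
    }
}
```

Presentation code then calls $dao->getById(42) and never embeds SQL itself, which is what keeps the tiers separable.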

When it comes to building individual components with their own screens you
can either produce all HTML tags from within your PHP code, or you can use a
templating system. Personally I use XSL transformations from XML data as it
is controlled by W3C standards and is used widely across the entire
industry, not just within the PHP world.

If you want further ideas you can browse the articles on my website at
http://www.tonymarston.co.uk/php-mysql/index.html
So what I would like to learn, is what nice (preferably free/open-source)
tools for design and project-management are available?

My idea is that you would like to have some IDE application, something
like VB or Delphi or VC++, where you design your webpages and
forms in a graphical manner, and you can code "event" pieces of code
that will react to clicks, buttons and user-actions like
in the traditional IDEs.
Web pages do not have "events" like traditional desktop applications. The
client issues an HTTP request, the web server receives it and generates a
response. HTTP is also stateless, which involves another big change in
thinking.
And then there is some "compiler", or rather a generator, that does all the
boring routine-work of converting your design to html/PHP code.
PHP code is not compiled, it is interpreted. However, there are third-party
utilities that are able to cache the intermediate bytecode.
The page-designs would also be based on "templates", of course,
so it's flexible to modify if you want to change the look-and-feel
of your whole site.
Templating systems do not come built into PHP; you have to choose one and
plug it into your framework.
Ideally, I would also prefer that the whole website-design
would be stored in a separate, fixed set of tables in the database,
rather than in a zillion different script-files, CSS-sheets, etc.
Dream on. Unless you can configure your web server to access a database you
MUST have scripts as files in a directory.
I can also envision a design where, even if the website contains
a huge number of pages and different functions,
there would actually only be a single PHP-script "index.php"
and all sub-pages are generated on-the-fly, from arguments
in the URL, i.e. all URLs simply looking like:
Yuck! You are talking about a front controller. Personally I would avoid
such things like the plague and go for smaller page controllers.
index.php?pageid=1
index.php?pageid=2
...
index.php?pageid=productlist
index.php?pageid=registration
index.php?pageid=loginform
index.php?pageid=contactform

etc...

The index.php startfile might only look something like:

<html>
<?php
include("lib.php");
main();
?>
</html>

Then all pages would be generated on-the-fly by the main() function
(defined in "lib.php", along with a host of useful subroutines),
and driven by the URL-parameter 'pageid' and the overall site-design
somehow stored in the database.

All design-work you did in the IDE (graphical, code-snippets, etc)
would then actually be stored in the DB, and the PHP script-files
could then be a quite static PHP-implementation of an "engine"
reading the DB-content and outputting the HTML-code and pages.
WYSIWYG editors for web pages are notorious for producing inefficient HTML
code. Quite frankly if you are unable to build web pages without such an
editor then you are a pretty poor developer. PHP was designed to produce
HTML tags, so if you do not know how to generate HTML tags manually you will
find PHP too difficult for your limited abilities.
So... could you please give me some brief hints/links on what
IDEs and/or other free tools along those lines already exist,
or are commonly used for PHP/MySQL in real-world development?


Different developers use different combinations of tools to achieve their
purpose. I do it one way, but a thousand other developers will do it a
thousand other ways. Who is right? We all are. You pays your money and you
takes your choice.

Happy hunting.

--
Tony Marston

http://www.tonymarston.net

Jul 17 '05 #3

grz02 <gr***@spray.se> wrote or quoted:
I can also envision a design where, even if the website contains
a huge number of pages and different functions,
there would actually only be a single PHP-script "index.php"
and all sub-pages are generated on-the-fly, from arguments
in the URL, i.e. all URLs simply looking like:

index.php?pageid=1
index.php?pageid=2
...
index.php?pageid=productlist
index.php?pageid=registration
index.php?pageid=loginform
index.php?pageid=contactform

etc...

The index.php startfile might only look something like:

<html>
<?php
include("lib.php");
main();
?>
</html>


That's called a "front controller".

http://www.phppatterns.com/index.php...leview/81/1/1/

...is an article about front controllers in PHP.

It is good - but AFAICS, the arguments given against front controllers
in it are hogwash.

The main objections are:

``In the simple example above, that may not seem like a problem, but what
about when you have hundreds of Page Controllers? You end up with a
massive switch statment or perhaps something disguised in an array, an
XML document or whatever. For every page request, PHP will have to
reload a bunch of data which is irrelevant to the current request the
user is trying to perform.''

This is fundamentally the same task a webserver performs when looking up a
file in a filing system. If PHP is doing the task, the webserver
doesn't have to. It might be a bit slower in PHP than in C - but
that's hardly a reason for not doing it in PHP.

...and...

``The other problem with running everything through a single PHP script,
such as index.php, is it imposes a significant restriction on the
flexibility PHP offers, by default, of allowing developers to "drop a
script" anywhere under the web server's web root directory. Adopting
this approach will likely cause you problems later, both from the
practical perspective of having to updating the command hierarchy each
time you create a new file to issues like integration with other PHP
applications.''

...which seems like nonsense to me. If you /must/ access code that
is incompatible with a front controller (for some reason) nothing
stops you from doing so. You lose the front controller's facilities
in the process - but the alternative is not having them at all in
the first place - so in practice little is lost.

Storing your code in a filing system is a bit backwards - since
you have to do independent version control, history tracking,
backups, permission handling and updates.

Everything should go into a database.

Microsoft realised this - with WinFS:

http://msdn.microsoft.com/Longhorn/u...S/default.aspx

...but have yet to pull it off.
--
__________
|im |yler http://timtyler.org/ ti*@tt1lock.org Remove lock to reply.
Jul 17 '05 #4

In article <I5********@bath.ac.uk>, Tim Tyler wrote:
It is good - but AFAICS, the arguments given against front controllers
in it are hogwash.

The main objections are:

``In the simple example above, that may not seem like a problem, but what
about when you have hundreds of Page Controllers? You end up with a
massive switch statement or perhaps something disguised in an array, an
XML document or whatever. For every page request, PHP will have to
reload a bunch of data which is irrelevant to the current request the
user is trying to perform.''

This is fundamentally the same task a webserver performs when looking up a
file in a filing system. If PHP is doing the task, the webserver
doesn't have to. It might be a bit slower in PHP than in C - but
that's hardly a reason for not doing it in PHP.
The web server has to do a lookup anyway, to map the request to the
controller script. Thus you end up with two lookups, where the first is
pure overhead. In the case where you want some code executed
in every script, you can also consider using the auto_prepend and
auto_append functionality PHP has.
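For reference, auto_prepend/auto_append are plain php.ini directives (the file paths here are placeholders):

```ini
; php.ini - run a script before and after every request
auto_prepend_file = "/var/www/lib/prepend.php"
auto_append_file  = "/var/www/lib/append.php"
```

With mod_php under Apache they can also be set per directory via php_value in .htaccess, which limits their scope instead of affecting every script on the server.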

My conclusion: the added value of a front controller is zero. Not to
mention the chance of introducing bugs in your request->page
mapper, whereas the code in your web server has been tested many times
before.

Storing your code in a filing system is a bit backwards - since
you have to do independent version control, history tracking,
backups, permission handling and updates.

Everything should go into a database.


Imho, a file system is a database.

--
Met vriendelijke groeten,
Tim Van Wassenhove <http://www.timvw.info>
Jul 17 '05 #5

grz02 wrote:
Im an experienced database+software designer and developer,
but, unfortunately, anything to do with web-programming and
web-systems designs is still a pretty new area to me...
(been working mostly with "legacy" environments the last 10 years)
First thing to remember: this is HTML + HTTP. Your possibilities are very
limited (and so is the work you have to do ;). But this also makes
UI design the most important thing when writing PHP code, for the code
itself is not really a problem. UI design is very difficult, though, for you
can barely interact with the user.

IMHO this makes php very different from any other programming language.

Is PHP usually handcoded?
Yeah, I would say so - but that doesn't mean that you will have to do it.

Like... you just make up a database, then a number of tables you think will do,
and finally (what I am most concerned about) a multiplicity of different
PHP script-files in an ad-hoc fashion?
(I hope not, since it seems that this will quite soon produce a pretty
difficult-to-maintain-and-grasp mess of the whole system)
This is done quite often with PHP (the WBB is written this way), but
that definitely doesn't mean it is a good way.

So what I would like to learn, is what nice (preferably free/open-source)
tools for design and project-management are available?
For php: Not really any.

My idea is that you would like to have some IDE application, something
like VB or Delphi or VC++, where you design your webpages and
forms in a graphical manner, and you can code "event" pieces of code
that will react to clicks, buttons and user-actions like
in the traditional IDEs.
And then there is some "compiler", or rather a generator, that does all the boring routine-work
of converting your design to html/PHP code.
The page-designs would also be based on "templates", of course,
so it's flexible to modify if you want to change the look-and-feel
of your whole site.
Of course you should use templates for any of your projects (I would say
that everything with more than 30 lines of code should use a
template system). Which template engine to use is your first decision
(Smarty, etc., or a self-written one?).
But the thing with VB-like design and events is neither available nor
needed - HTML does this already. There are no events; all fields are
handed to you in the $_REQUEST variable - nothing more to want.
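To illustrate that last point (the file and field names are made up for the example), a page can both render a form and handle its own submission with no event wiring at all, just a look at the request data:

```php
<?php
// contact.php - hypothetical example: one script renders the form
// and processes the submitted fields. There is no "click event";
// the browser simply re-requests the script with the form data.
if (isset($_SERVER['REQUEST_METHOD']) && $_SERVER['REQUEST_METHOD'] === 'POST') {
    $name = isset($_POST['name']) ? $_POST['name'] : '';
    // htmlspecialchars() prevents HTML injection when echoing input back
    echo "Thanks, " . htmlspecialchars($name, ENT_QUOTES) . "!";
} else {
    echo '<form method="post" action="contact.php">'
       . '<input type="text" name="name">'
       . '<input type="submit" value="Send">'
       . '</form>';
}
```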

What you seem to be looking for is a WYSIWYG editor for HTML. Of course
you can use one (I'd say Dreamweaver is definitely the best one for
Windows, but it's not free), but that has nothing to do with PHP - it is
rather just a matter of your HTML design. You could also write your HTML
in a text editor - it's not such a big difference, once you've learned it.

Ideally, I would also prefer that the whole website-design
would be stored in a separate, fixed set of tables in the database,
rather than in a zillion different script-files, CSS-sheets, etc.
CSS should stay in files, for there is no use in storing it in tables.
One big CSS file for your whole site will do. If you don't like
files, you can of course store your templates in the database, but this
does not make a real difference. You will just change a zillion different
files into a zillion different table rows...

I can also envision a design where, even if the website contains
a huge number of pages and different functions,
there would actually only be a single PHP-script "index.php"
and all sub-pages are generated on-the-fly, from arguments
in the URL, i.e. all URLs simply looking like:

index.php?pageid=productlist

etc... [snip] Then all pages would be generated on-the-fly by the main() function
(defined in "lib.php", along with a host of useful subroutines),
and driven by the URL-parameter 'pageid' and the overall site-design
somehow stored in the database.
This isn't such a good thing to do. You should combine some functions in
one file, but not all - this will get big and bloated, and your URLs will
look ugly and be hard to understand. Just look at some forums (not
the WBB - its coding style is complete shit) - they have 5-15 files,
each dedicated to a set of functions (one for thread handling, one for
post handling, one for the user profile and settings, and the admin
control panel completely separated into its own dir). I'd say that
this is the right way to do it.

All design-work you did in the IDE (graphical, code-snippets, etc)
would then actually be stored in the DB, and the PHP script-files
could then be a quite static PHP-implementation of an "engine"
reading the DB-content and outputting the HTML-code and pages.
That's what templates do.

So... could you please give me some brief hints/links on what
IDEs and/or other free tools along those lines already exist,
or are commonly used for PHP/MySQL in real-world development?


The best editor for Windows is NuSphere's PhpED. But it costs $500. Good
- completely free - tools are rare for PHP (on Windows).
I'd say you have a strange view of PHP and should think about it a bit, for
most of what you wanted an IDE to do is done with a template system and
the already existing separation of processing code -> PHP and design and
interface code -> HTML.

greetings, Christian
Jul 17 '05 #6


"Tim Tyler" <ti*@tt1lock.org> wrote in message news:I5********@bath.ac.uk...
grz02 <gr***@spray.se> wrote or quoted:
I can also envision a design where, even if the website contains
a huge number of pages and different functions,
there would actually only be a single PHP-script "index.php"
and all sub-pages are generated on-the-fly, from arguments
in the URL, i.e. all URLs simply looking like:

index.php?pageid=1
index.php?pageid=2
...
index.php?pageid=productlist
index.php?pageid=registration
index.php?pageid=loginform
index.php?pageid=contactform

etc...

The index.php startfile might only look something like:

<html>
<?php
include("lib.php");
main();
?>
</html>
That's called a "front controller".

http://www.phppatterns.com/index.php...leview/81/1/1/

...is an article about front controllers in PHP.

It is good - but AFAICS, the arguments given against front controllers
in it are hogwash.


No they're not. IMHO they are perfectly sound. Front controllers were
designed for languages such as Java, whose web servers behave
differently to Apache. With Apache you can have a URL that says
www.site.com/page256.php?arg1=foo&arg2=bar and it will go straight to the
requested page. Using http://www.site.com/index.php?page=2...1=foo&arg2=bar
means that everything has to go through index.php first, and index.php must
then contain code to redirect to page256.php. Not only is this an unnecessary
overhead, it also means that every time you change your application you must
remember to update index.php. This is a disaster waiting to happen, IMHO. I
much prefer having a separate controller for each page.

--
Tony Marston

http://www.tonymarston.net
The main objections are:

``In the simple example above, that may not seem like a problem, but what
about when you have hundreds of Page Controllers? You end up with a
massive switch statment or perhaps something disguised in an array, an
XML document or whatever. For every page request, PHP will have to
reload a bunch of data which is irrelevant to the current request the
user is trying to perform.''

This is fundamentally the same task a webserver performs when looking up a
file in a filing system. If PHP is doing the task, the webserver
doesn't have to. It might be a bit slower in PHP than in C - but
that's hardly a reason for not doing it in PHP.
The web server does not have a massive switch statement. It simply has the
name of one of thousands of possible files that may exist on your file
system.
...and...

``The other problem with running everything through a single PHP script,
such as index.php, is it imposes a significant restriction on the
flexibility PHP offers, by default, of allowing developers to "drop a
script" anywhere under the web server's web root directory. Adopting
this approach will likely cause you problems later, both from the
practical perspective of having to update the command hierarchy each
time you create a new file to issues like integration with other PHP
applications.''

...which seems like nonsense to me. If you /must/ access code that
is incompatible with a front controller (for some reason) nothing
stops you from doing so. You lose the front controller's facilities
in the process - but the alternative is not having them at all in
the first place - so in practice little is lost.
You do not lose any functionality of a front controller by using separate
page controllers. There is nothing that can be done in a front controller
that cannot be done in separate page controllers.
Storing your code in a filing system is a bit backwards - since
you have to do independent version control, history tracking,
backups, permission handling and updates.

Everything should go into a database.
Who says?
Microsoft realised this - with WinFS:

http://msdn.microsoft.com/Longhorn/u...S/default.aspx

...but have yet to pull it off.


Just because Microsoft say something does not mean that it is a good idea.

--
Tony Marston

http://www.tonymarston.net

Jul 17 '05 #7

Hi grz,

This is a lot of questions/ideas/requirements. We have developed an open
source framework for development of web based applications. It is called
phpPeanuts. It seems to be close to some of your ideas but quite far
from others. This has to do with design choices we made. I will begin
with the similarities, then go into the differences.
I can also envision a design where, even if the website contains
a huge number of pages and different functions,
there would actually only be a single PHP-script "index.php"
and all sub-pages are generated on-the-fly, from arguments
in the URL, i.e. all URLs simply looking like:

index.php?pageid=1
index.php?pageid=2
...
index.php?pageid=productlist
index.php?pageid=registration
Something like that, yes:
index.php?pntType=Product&pntHandler=EditDetailsPage&id=1

The index.php script will give you a form for editing the Product with
id = 1. Or more precise, it will probably include the file that contains
the class ObjectEditDetailsPage, instantiate it and forward it the
request*. ObectEditDetailsPage will use its inherited function
getRequestedObject() to obtain an instance of the class Product with id
= 1, and generate and output a form for displaying and editing its
properties. The form may be generated entirely from metadata. No need to
have a database with forms. No boring designing forms for Product,
Customer, Order, Shipment and all those other types you need. Just one
single generic site design. Then if you specify the metadata in the
Product class, the rest done is automaticly.
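The principle of generating forms from metadata can be boiled down to something like the following. This sketch is generic PHP invented for illustration, not phpPeanuts' actual API; its real classes and method names are in the linked documentation.

```php
<?php
// Hypothetical sketch: one generic function plus per-type metadata
// replaces one hand-built form per type.
function form_from_metadata($type, $fields, $values) {
    $html = "<form method='post'><h1>Edit $type</h1>";
    foreach ($fields as $name => $label) {
        // Escape stored values before echoing them into the markup.
        $value = isset($values[$name])
            ? htmlspecialchars($values[$name], ENT_QUOTES) : '';
        $html .= "<label>$label <input name='$name' value='$value'></label>";
    }
    return $html . "<input type='submit'></form>";
}

// Metadata for one type; adding a Customer form is just another array.
$productFields = array('name' => 'Name', 'price' => 'Price');
echo form_from_metadata('Product', $productFields, array('name' => 'Widget'));
```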

As you see, our approach is object oriented. This means that we do not
have a big pile of functions in which an engine is implemented that
processes passive data from a database. We rather have an assembly of
objects that cooperate to do the work. Objects combine functions and
data. This makes them much more flexible than just trying to represent
the entire website design in data (like your design seems to do) or in
functions (like standard PHP scripting tends to do).

With object orientation it is possible to offer the developer a default
user interface for his application, and at the same time to allow the
developer to specialize almost any aspect of both the engine and the
design by creating subclasses and overriding some methods. Because the
methods are written in PHP, he can put in any code he likes, so he is
not limited to what the existing engine can do.

Like with most traditional IDE's, the user interface is composed from
objects, like listboxes, tables, buttons, dialogs, etc. Only the
phpPeanuts objects do not exist in the client PC and draw on the screen,
but rather exist on the server, process requests and output HTML. For
lists of user interfacing classes by category see
http://www.phppeanuts.org/site/index...t.web/web.html

Of course there is still a need for a place to put pieces of HTML that
hold parts of the design. We could have stored them in the database, or
used a template engine, but we chose 'the simplest thing that could
possibly work': PHP include files. These have the advantage that you can
include pieces of PHP to call methods. See
http://www.phppeanuts.org/site/index...principle.html
for how all this fits together.

But if you like it better to have a database, be my guest: make some
subclasses, override the methods that currently do the inclusion, and
fetch and interpret your design data from there... (Actually you may hit
a design limitation here, or maybe it has already been solved. Anyhow,
if you explain how our design limits you in specializing the framework
the way you need to, we will be happy to see if we can make the
necessary adaptations.)
and you can code "event" pieces of code
that will react to clicks, buttons and user-actions like
in the traditional IDE:s.


We do support event handler methods in specific situations. However,
HTTP works with URLs, requests and pages that are returned. So our user
interfacing framework focuses on handling requests and composing pages
and forms. Only where the composition needs to be specialized in
repeated details will you need to implement event handler methods.
Otherwise it is a matter of creating specialized classes that, because
they exist, override the defaults, and of overriding methods.

We do not support IDE-style WYSIWYG graphical editing of a user
interface. In an earlier version of the framework (in Java) we had a
template technique for this, but even with Dreamweaver it proved to be a
lot of work to manually design the user interface of a substantial
application. The problem was a lack of abstraction. The result was lots
of replication, little reuse. We tried to make our objects editable with
Dreamweaver, but that was too cumbersome. Existing WYSIWYG editors
simply could not cope with the dynamics of objects. (For some reason
none of them supported the JavaBeans standard the way IDEs do.)

It would still be nice to have a way to WYSIWYG-edit the bits and pieces
of HTML the design is composed from. Preferably I would have that in the
browser, as part of the working application. But building that is a huge
effort, and the more we can reuse pieces of design, the less we can
gain from it. But if anyone has substantial leftover money I would be
happy to look into it ;-)

I guess this is the basic difference between traditional IDEs and
phpPeanuts: IDEs try to facilitate graphical designing and coding. We
build components and try to facilitate their reuse. The more you can
reuse, the less design and code you need for the same end-user function.
We believe that in the long run this gives a higher productivity than
traditional IDEs and results in more flexible applications, in the
sense of being easier to adapt to new or changing requirements.

Greetings,

Henk Verhoeven,
www.phpPeanuts.org.
* The mapping of URLs to objects is called Request Dispatch and it is
actually a bit more complicated, see
http://www.phppeanuts.org/site/index...+dispatch.html

Jul 17 '05 #8

Tim Van Wassenhove <eu**@pi.be> wrote or quoted:
In article <I5********@bath.ac.uk>, Tim Tyler wrote:
It is good - but AFAICS, the arguments given against front controllers
in it are hogwash.

The main objections are:

``In the simple example above, that may not seem like a problem, but what
about when you have hundreds of Page Controllers? You end up with a
massive switch statement or perhaps something disguised in an array, an
XML document or whatever. For every page request, PHP will have to
reload a bunch of data which is irrelevant to the current request the
user is trying to perform.''

This is fundamentally the same task a webserver performs when looking up a
file in a filing system. If PHP is doing the task, the webserver
doesn't have to. It might be a bit slower in PHP than in C - but
that's hardly a reason for not doing it in PHP.


The web server has to do a lookup anyway, to map the request to the
controller script.


Examine the work it has to do - depending on the structure of the
filing system. The fewer files, the less work.
In the case where you want some code executed
in every script, you can also consider using the auto_prepend and
auto_append functionality PHP has.
Doesn't using auto_prepend require php.ini/httpd.conf access - and affect
*everything*?

It doesn't make much difference - the main issue here - IMO - is whether
the site is in the filing system or the database.
My conclusion: the added value of a front controller is zero. Not to
mention the chance of introducing bugs in your request->page
mapper, whereas the code in your web server has been tested many times
before.


Why can't a front controller be "tested many times before"?

Code can be shared. Not everyone has to reinvent the wheel.
Everything should go into a database.


Imho, a file system is a database.


It is.

It is a crappy one, with poor versioning, searching, filtering and sorting
facilities.
--
__________
|im |yler http://timtyler.org/ ti*@tt1lock.org Remove lock to reply.
Jul 17 '05 #9

Tony Marston <to**@nospam.demon.co.uk> wrote or quoted:
"Tim Tyler" <ti*@tt1lock.org> wrote in message news:I5********@bath.ac.uk...
grz02 <gr***@spray.se> wrote or quoted:
I can also envision a design where, even if the website contains
a huge number of pages and different functions,
there would actually only be a single PHP-script "index.php"
and all sub-pages are generated on-the-fly, from arguments
in the URL, i.e. all URLs simply looking like:

index.php?pageid=1
index.php?pageid=2
...
index.php?pageid=productlist
index.php?pageid=registration
index.php?pageid=loginform
index.php?pageid=contactform

etc...

The index.php startfile might only look something like:

<html>
<?php
include("lib.php");
main();
?>
</html>
That's called a "front controller".

http://www.phppatterns.com/index.php...leview/81/1/1/

...is an article about front controllers in PHP.

It is good - but AFAICS, the arguments given against front controllers
in it are hogwash.


No they're not. IMHO they are perfectly sound. Front controllers were
designed for languages such as Java which have web servers which behave
differently to Apache. With Apache you can have a URL that says
www.site.com/page256.php?arg1=foo&arg2=bar and it will go straight to the
requested page. Using http://www.site.com/index.php?page=2...1=foo&arg2=bar
means that everything has to go through index.php first, and index.php must
then contain code to redirect to page256.php.


You are complaining about the performance cost of reading a file like:

<?php
include("lib.php");
main();
?>

...from the server?

For those anal about performance, that seems to be more than compensated
for by the lack of a need to include:

include("header.php");
...
include("footer.php");

...in every single page - and besides, that file should get cached and
ought to take practically no time to read.
Not only is this an unnecessary overhead, it also means that every time
you change your application you must remember to update index.php.


When index.php looks like:

<?php
include("lib.php");
main();
?>

...?

How does that need updating?
--
__________
|im |yler http://timtyler.org/ ti*@tt1lock.org Remove lock to reply.
Jul 17 '05 #10

Tim Tyler wrote:
Doesn't using auto_prepend require php.ini/httpd.conf access
No, you can use .htaccess files.
and affect *everything*?


Yes.
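Concretely, a hedged example of what such an .htaccess might contain (this assumes Apache running PHP as mod_php, and an AllowOverride setting that permits php_value in .htaccess; the file names are illustrative):

```apache
# Hypothetical .htaccess: prepend/append a file to every PHP script
# served from this directory and the directories below it.
php_value auto_prepend_file "header.php"
php_value auto_append_file  "footer.php"
```

Under CGI/FastCGI setups these directives cannot be set from .htaccess, which is part of the portability concern raised above.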

--
Chris Hope - The Electric Toolbox - http://www.electrictoolbox.com/
Jul 17 '05 #11

P: n/a
In article <I5********@bath.ac.uk>, Tim Tyler wrote:
Tim Van Wassenhove <eu**@pi.be> wrote or quoted:
In article <I5********@bath.ac.uk>, Tim Tyler wrote:
> It is good - but AFAICS, the arguments given against front controllers
> in it are hogwash.
>
> The main objections are:
>
> ``In the simple example above, that may not seem like a problem, but what
> about when you have hundreds of Page Controllers? You end up with a
> massive switch statement or perhaps something disguised in an array, an
> XML document or whatever. For every page request, PHP will have to
> reload a bunch of data which is irrelevant to the current request the
> user is trying to perform.''
>
> This is fundamentally the same task a webserver performs when looking up a
> file in a filing system. If PHP is doing the task, the webserver
> doesn't have to. It might be a bit slower in PHP than in C - but
> that's hardly a reason for not doing it in PHP.


The fileserver has to do a lookup anyways, to map the request to the
controller script.


Examine the work it has to do - depending on the structure of the
filing system. The fewer files, the less work.


The fewer lookups, the less work.
request -> lookup controller script by webserver -> lookup other script
by controller ->

vs

request -> lookup other script by webserver

In the case you want to have some code that is executed
in every script you can also consider using the auto_prepend and
auto_append functionality php has.
Doesn't using auto_prepend require php.ini/httpd.conf access - and affect
*everything*?


It only works for the directory where you request it (and for the
directories under it).
It doesn't make much difference - the main issue here - IMO - is whether
the site is in the filing system or the database.
My conclusion: the added value of a controller is 0. Not even
mentioning the chances of introducing bugs in your request->page
mapper, whereas the code in your webserver has been tested many times
before.


Why can't a front controller be "tested many times before"?
Code can be shared. Not everyone has to reinvent the wheel.


But the code in Apache has already been tested many times before. And I
have a good feeling it has been tested many more times than whatever
controller script. Yes, I agree on not reinventing the wheel.

> Everything should go into a database.


Imho, a file system is a database.


It is.

It is a crappy one, with poor versioning, searching, filtering and sorting
facilities.


Depends on the filesystem you are using, and what your requirements are.
Personally I like the article: http://www.namesys.com/whitepaper.html

--
Met vriendelijke groeten,
Tim Van Wassenhove <http://www.timvw.info>
Jul 17 '05 #12

Great thanks to all who responded!

Was very interesting reading for me with
discussion, links and thought-food....

Just some clarification...
With Apache you can have a URL that says
www.site.com/page256.php?arg1=foo&arg2=bar and it will go straight to the
requested page. Using http://www.site.com/index.php?page=2...1=foo&arg2=bar
means that everything has to go through index.php first, and index.php must
then contain code to redirect to page256.php. Not only is this an unnecessary
overhead, it also means that every time you change your application you must
remember to update index.php.
No, that wasn't what I meant. There would be no script file, page256.php, at all.
My idea was that the whole HTML-page would be generated on-the-fly
by the "engine" from the URL-arguments and the db-contents,
where you can store both the site-contents, and style information.

Some performance penalty, of course, to generate everything on the fly,
but if you don't expect the traffic load to be much of a concern...

I guess my basic attraction to this approach, is that once the engine is
debugged and working, you shouldn't have to change the code very often
-- but maybe I am dreaming again, perhaps :)

The database, consisting of something like one set of tables for the contents,
and another set of tables for design parameters, styles, etc.,
and mappings between them
would then be more flexible and easier to maintain,
than a growing set of scriptfiles with a mix of HTML,
PHP, CSS and db-calls in them.

Of course, the main challenge of this approach, however, is how to design the
database in a good way, to allow you to FULLY DESCRIBE your whole site.

But I was actually having the idea (or dreaming) that somebody most likely
already had done all that work, and released it as a useful product.

... Imho, a file system is a database.


Well... kind of... but a very simple special-case,
something like a db with only one table in it:

create table filesys
(
directory char(1024),
filename char(256),
filecontent blob
);

I figure, however, there must be something more to databases than this,
both from my personal experience, plus I heard they have become pretty
popular gadgets in today's business world, too ;)

Regards,
Jul 17 '05 #13

Very interesting, especially when considering maintainability. I would like
to have the information on one hand and layout on the other without
cluttering up my fs. Nice ideas!
Jul 17 '05 #14

In article <16**************************@posting.google.com>, grz02 wrote:
Great thanks to all who responded!

Was very interesting reading for me with
discussion, links and thought-food....

Just some clarification...
With Apache you can have a URL that says
www.site.com/page256.php?arg1=foo&arg2=bar and it will go straight to the
requested page. Using http://www.site.com/index.php?page=2...1=foo&arg2=bar
means that everything has to go through index.php first, and index.php must
then contain code to redirect to page256.php. Not only is this an unnecessary
overhead, it also means that every time you change your application you must
remember to update index.php.
No, that wasn't what I meant. There would be no script file, page256.php, at all.
My idea was that the whole HTML-page would be generated on-the-fly
by the "engine" from the URL-arguments and the db-contents,
where you can store both the site-contents, and style information.

Some performance penalty, of course, to generate everything on the fly,
but if you don't expect the traffic load to be much of a concern...


But what are the gains from it when it's stored in an RDBMS instead of the
filesystem?
I guess my basic attraction to this approach, is that once the engine is
debugged and working, you shouldn't have to change the code very often
-- but maybe I am dreaming again, perhaps :)
Well, most filesystems already work quite well. So if you use that,
you don't have to write the code (and/or debug).
The database, consisting of something like one set of tables for the contents,
and another set of tables for design parameters, styles, etc.,
and mappings between them
would then be more flexible and easier to maintain,
than a growing set of scriptfiles with a mix of HTML,
PHP, CSS and db-calls in them.
Why would they grow faster or more slowly in a DBMS than in a filesystem?
Of course, the main challenge of this approach, however, is how to design the
database in a good way, to allow you to FULLY DESCRIBE your whole site.


It is about designing your code/website in a good way. No matter where
the code for it will be stored.
... Imho, a file system is a database.


Well... kind of... but a very simple special-case,
something like a db with only one table in it:

create table filesys
(
directory char(1024),
filename char(256),
filecontent blob
);


Actually, I think most filesystems (in use nowadays) map better to the hierarchical
or the network model than to the relational model.

--
Met vriendelijke groeten,
Tim Van Wassenhove <http://www.timvw.info>
Jul 17 '05 #15

grz02 wrote:
I guess my basic attraction to this approach, is that once the engine is
debugged and working, you shouldn't have to change the code very often
-- but maybe I am dreaming again, perhaps :)

The database, consisting of something like one set of tables for the contents,
and another set of tables for design parameters, styles, etc.,
and mappings between them
would then be more flexible and easier to maintain,
than a growing set of scriptfiles with a mix of HTML,
PHP, CSS and db-calls in them.

Of course, the main challenge of this approach, however, is how to design the
database in a good way, to allow you to FULLY DESCRIBE your whole site.

But I was actually having the idea (or dreaming) that somebody most likely
already had done all that work, and released it as a useful product.
That is useless. Of course it could be done, but it is a lot more
effective to customize the script to the site than to carry that big
overhead (not only in speed, but also in code lines and debugging time)
of being able to create a _whole site_ dynamically.

And that exists - it's called a cms and there are thousands of that kind
out there.
I have one running myself:
http://www.cyberpunkuniverse.de/sitemap.htm

This whole tree is stored in one database table. I use some Apache
tricks to get those nice URLs - the number directory is the only interesting
part of the URL: it is the id in my db-table.
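The "Apache tricks" mentioned are presumably mod_rewrite rules. A hypothetical sketch of the kind of rule involved (the URL pattern and the `id` parameter name are invented; assumes mod_rewrite is enabled):

```apache
# Hypothetical .htaccess: rewrite a readable URL such as
# /123/some-title.htm internally to the dispatch script, passing the
# leading number directory as the database id.
RewriteEngine On
RewriteRule ^([0-9]+)/ index.php?id=$1 [L,QSA]
```

The QSA flag preserves any original query string, and L stops further rule processing for the request.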

I'm working on modularity, so that the news and image galleries can be
put into that tree. Is this what you meant, or did I completely miss your
point?

I figure, however, there must be something more to databases than this,
both from my personal experience, plus I heard they have become pretty
popular gadgets in today's business world, too ;)


Well, you can do everything with PCs. It is possible to write a script
that will allow your whole website to be managed via a web interface
(even the design could be changed via the web interface), and a lot more,
but this just isn't worth the time writing it, for it can already be
done. Maybe not as fast, but the time needed to write such a script
won't be compensated by that.

Greetings, Christian.
Jul 17 '05 #16


"Tim Tyler" <ti*@tt1lock.org> wrote in message news:I5********@bath.ac.uk...
Tony Marston <to**@nospam.demon.co.uk> wrote or quoted:
"Tim Tyler" <ti*@tt1lock.org> wrote in message
news:I5********@bath.ac.uk...
> grz02 <gr***@spray.se> wrote or quoted:
>> I can also envision a design where, even if the website contains
>> a huge number of pages and different functions,
>> there would actually only be a single PHP-script "index.php"
>> and all sub-pages are generated on-the-fly, from arguments
>> in the URL, i.e. all URLs simply looking like:
>>
>> index.php?pageid=1
>> index.php?pageid=2
>> ...
>> index.php?pageid=productlist
>> index.php?pageid=registration
>> index.php?pageid=loginform
>> index.php?pageid=contactform
>>
>> etc...
>>
>> The index.php startfile might only look something like:
>>
>> <html>
>> <?php
>> include("lib.php");
>> main();
>> ?>
>> </html>
>
> That's called a "front controller".
>
> http://www.phppatterns.com/index.php...leview/81/1/1/
>
> ...is an article about front controllers in PHP.
>
> It is good - but AFAICS, the arguments given against front controllers
> in it are hogwash.


No they're not. IMHO they are perfectly sound. Front controllers were
designed for languages such as Java which have web servers which behave
differently to Apache. With Apache you can have a URL that says
www.site.com/page256.php?arg1=foo&arg2=bar and it will go straight to the
requested page. Using http://www.site.com/index.php?page=2...1=foo&arg2=bar
means that everything has to go through index.php first, and index.php
must
then contain code to redirect to page256.php.


You are complaining about the performance cost of reading a file like:

<?php
include("lib.php");
main();
?>

...from the server?

For those anal about performance, that seems to be more than compensated
for by the lack of a need to include:

include("header.php");
...
include("footer.php");

...in every single page - and besides, that file should get cached and
ought to take practically no time to read.


I can achieve the functionality I require without header.php and footer.php.
Not only is this an unnecessary overhead, it also means that every time
you change your application you must remember to update index.php.


When index.php looks like:

<?php
include("lib.php");
main();
?>

...?

How does that need updating?


If every request for a page starts off by going through index.php then it
has to be redirected to the correct page, so index.php has to have knowledge
of every page in the system so it can redirect to that page. This means that
every time you add or remove a page you have to update index.php. In your
example the cross reference between request and script may exist in lib.php,
but that is the same difference - there is still a single place that needs
updating each time you change a page.

That's why I think front controllers are inferior to page controllers.

--
Tony Marston

http://www.tonymarston.net

Jul 17 '05 #17


"Tim Tyler" <ti*@tt1lock.org> wrote in message news:I5********@bath.ac.uk...
Tim Van Wassenhove <eu**@pi.be> wrote or quoted:
In article <I5********@bath.ac.uk>, Tim Tyler wrote:

<snip>
> Everything should go into a database.


Imho, a file system is a database.


It is.

It is a crappy one, with poor versioning, searching, filtering and sorting
facilities.


Storing code in a database does not automatically provide any of those
features. You still have to write them, just as you can for files in a file
system.

--
Tony Marston

http://www.tonymarston.net

Jul 17 '05 #18

Chris Hope <bl*******@electrictoolbox.com> wrote or quoted:
Tim Tyler wrote:

Doesn't using auto_prepend require php.ini/httpd.conf access


No, you can use .htaccess files.


Many service providers disable that for security reasons.

Write code that depends on this sort of thing and many customers will
be unable to run it.
--
__________
|im |yler http://timtyler.org/ ti*@tt1lock.org Remove lock to reply.
Jul 17 '05 #19

Tony Marston <to**@nospam.demon.co.uk> wrote or quoted:
"Tim Tyler" <ti*@tt1lock.org> wrote in message news:I5********@bath.ac.uk...
Tim Van Wassenhove <eu**@pi.be> wrote or quoted:
In article <I5********@bath.ac.uk>, Tim Tyler wrote:
> Everything should go into a database.

Imho, a file system is a database.


It is.

It is a crappy one, with poor versioning, searching, filtering and sorting
facilities.


Storing code in a database does not automatically provide any of those
features. You still have to write them, just as you can for files in a file
system.


There are quite a number of searching, filtering and sorting facilities
built into databases and made available via the SQL language.

Databases typically have better facilities for grouping actions, undoing
them and making backups, as well.

Filing systems are primitive databases - too primitive for many
applications these days - and a lot of data is migrating into
databases, to provide better searching, indexing, filtering
and sorting facilities.
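To make the point concrete, here is a small self-contained sketch of the declarative filtering and sorting SQL provides over a hypothetical "pages" table (the schema and data are invented; uses SQLite through PDO so it runs without a server):

```php
<?php
// Sketch: the kind of searching/sorting SQL gives you in one
// declarative statement, versus scanning files by hand.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE pages (id INTEGER PRIMARY KEY,
                               title TEXT, body TEXT, updated TEXT)');
$ins = $db->prepare('INSERT INTO pages (title, body, updated) VALUES (?,?,?)');
$ins->execute(['Contact',  'How to reach us',      '2004-10-01']);
$ins->execute(['Products', 'Widgets and gadgets',  '2004-10-05']);

// Filter and sort in one query - no directory walking, no manual sort.
$rows = $db->query("SELECT title FROM pages
                    WHERE body LIKE '%gadget%'
                    ORDER BY updated DESC")->fetchAll(PDO::FETCH_COLUMN);
print_r($rows);
```

The equivalent over flat files means opening and scanning each one yourself, which is the "poor searching, filtering and sorting" being complained about.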
--
__________
|im |yler http://timtyler.org/ ti*@tt1lock.org Remove lock to reply.
Jul 17 '05 #20

Thanks Christian.
CMS here stands for what?
/grz

Christian Fersch <Ch******@web.de> wrote in message news:<ck*************@news.t-online.com>...
Of course it could be done, but it is a lot more
effective to customize the script to the site than to carry that big
overhead (not only in speed, but also in code lines and debugging time)
of being able to create a _whole site_ dynamically. And that exists - it's called a CMS and there are thousands of that kind
out there.
I have one running myself:
http://www.cyberpunkuniverse.de/sitemap.htm This whole tree is stored in one database-table.

Jul 17 '05 #21

Tony Marston <to**@nospam.demon.co.uk> wrote or quoted:
"Tim Tyler" <ti*@tt1lock.org> wrote in message news:I5********@bath.ac.uk...
You are complaining about the performance cost of reading a file like:

<?php
include("lib.php");
main();
?>

...from the server?

For those anal about performance, that seems to be more than compensated
for by the lack of a need to include:

include("header.php");
...
include("footer.php");

...in every single page - and besides, that file should get cached and
ought to take practically no time to read.


I can achieve the functionality I require without header.php and footer.php.


This still seems like an anal performance niggle. Such issues are
way below my threshold of what's important when designing sites.
What I care about are things like time-to-market and rapid development -
not whether it takes 433 or 440 ms to render a page.

You should not make a mess of your architecture for the sake of a
few milliseconds - that's called premature optimisation.
Not only is this an unnecessary overhead, it also means that every time
you change your application you must remember to update index.php.


When index.php looks like:

<?php
include("lib.php");
main();
?>

...?

How does that need updating?


If every request for a page starts off by going through index.php then it
has to be redirected to the correct page, so index.php has to have knowledge
of every page in the system so it can redirect to that page.


...or have access to that knowledge.
This means that every time you add or remove a page you have to update
index.php.
No - not if it has access to the information. The information is
presumably also stored in a database - and updates itself automatically
when new pages are created - so there need not be any maintenance
issue involved.
In your example the cross reference between request and script may
exist in lib.php, but that is the same difference - there is still a
single place that needs updating each time you change a page.


Creating a page involves modifying the database. That would automatically
update the list of available pages.
--
__________
|im |yler http://timtyler.org/ ti*@tt1lock.org Remove lock to reply.
Jul 17 '05 #22

Tim Van Wassenhove <eu**@pi.be> wrote or quoted:
In article <I5********@bath.ac.uk>, Tim Tyler wrote:
Tim Van Wassenhove <eu**@pi.be> wrote or quoted:
In article <I5********@bath.ac.uk>, Tim Tyler wrote:
> It is good - but AFAICS, the arguments given against front controllers
> in it are hogwash.
>
> The main objections are:
>
> ``In the simple example above, that may not seem like a problem, but what
> about when you have hundreds of Page Controllers? You end up with a
> massive switch statement or perhaps something disguised in an array, an
> XML document or whatever. For every page request, PHP will have to
> reload a bunch of data which is irrelevant to the current request the
> user is trying to perform.''
>
> This is fundamentally the same task a webserver performs when looking up a
> file in a filing system. If PHP is doing the task, the webserver
> doesn't have to. It might be a bit slower in PHP than in C - but
> that's hardly a reason for not doing it in PHP.

The fileserver has to do a lookup anyways, to map the request to the
controller script.


Examine the work it has to do - depending on the structure of the
filing system. The fewer files, the less work.


The fewer lookups, the less work.


As a generalisation, this is false. The lookups can take different
quantities of work, and it is not always true that more lookups take
more work.

Another determiner of performance is how many alternatives have to
be searched through. Basically:

index.php?foo/bar/zub

...and...

foo/bar/zub/index.php

...take the same work to parse and resolve - since the same number of
alternatives need to be considered and searched through.

One is being searched in compiled C and the other one is being done
in PHP - and there /will/ be a speed difference there.

If performance is your overriding concern, then indexing files is
likely to be faster than accessing a database. That's a pretty
crummy argument for using flat files in this day and age, though.

If you want things to go faster, spend more on hardware - don't
bend your application framework out of shape for the sake of a
few milliseconds.
My conclusion: the added value of a controller is 0. Not even
mentioning the chances of introducing bugs in your request->page
mapper, whereas the code in your webserver has been tested many times
before.


Why can't a front controller be "tested many times before"?
Code can be shared. Not everyone has to reinvent the wheel.


But the code in Apache has already been tested many times before. And I
have a good feeling it has been tested many more times than whatever
controller script. Yes, I agree on not reinventing the wheel.


An argument against writing a front controller yourself, then.

It's not exactly rocket science, though. You are only looking
up a script from a query string. Is that code going to need
weeks of testing before you get it right? Also, front controllers
are not exactly new either. There are plenty of them out there -
many already pretty well tested.
> Everything should go into a database.

Imho, a file system is a database.


It is.

It is a crappy one, with poor versioning, searching, filtering and sorting
facilities.


Depends on the filesystem you are using, and what your requirements are.
Personally I like the article: http://www.namesys.com/whitepaper.html


It seems to agree that most filing systems are impoverished and need
replacing.
--
__________
|im |yler http://timtyler.org/ ti*@tt1lock.org Remove lock to reply.
Jul 17 '05 #23

grz02 wrote:
Thanks Christian.
CMS here stands for what?


Content Management System
http://en.wikipedia.org/wiki/Content_management_system

Greetings, Christian
Jul 17 '05 #24

Hi grz,

This is a lot of questions/ideas/requirements. We have developed an open
source framework for development of web based applications. It is called
phpPeanuts. It seems to be close to some of your ideas but quite far
from others. This has to do with design choices we made. I will begin
with the similarities, then go into the differences.
I can also envision a design where, even if the website contains
a huge number of pages and different functions,
there would actually only be a single PHP-script "index.php"
and all sub-pages are generated on-the-fly, from arguments
in the URL, i.e. all URLs simply looking like:

index.php?pageid=1
index.php?pageid=2
...
index.php?pageid=productlist
index.php?pageid=registration
Something like that yes:
index.php?pntType=Product&pntHandler=EditDetailsPage&id=1

The index.php script will give you a form for editing the Product with
id = 1. Or, more precisely, it will probably include the file that contains
the class ObjectEditDetailsPage, instantiate it and forward it the
request*. ObjectEditDetailsPage will use its inherited function
getRequestedObject() to obtain an instance of the class Product with id
= 1, and generate and output a form for displaying and editing its
properties. The form may be generated entirely from metadata. No need to
have a database with forms. No boring designing of forms for Product,
Customer, Order, Shipment and all those other types you need. Just one
single generic site design. Then if you specify the metadata in the
Product class, the rest is done automatically.
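The metadata-driven idea described above can be sketched generically. This is an illustration of the principle only, not the actual phpPeanuts API; the function, field names, and Product metadata below are all invented:

```php
<?php
// Sketch: build an edit form purely from per-class field metadata,
// so no form needs to be hand-written for each type.
function renderEditForm(string $type, array $fields, array $values): string
{
    $html = "<form method=\"post\" action=\"index.php?pntType=$type\">\n";
    foreach ($fields as $name => $label) {
        // Escape the current value so it is safe inside the attribute.
        $value = htmlspecialchars((string)($values[$name] ?? ''), ENT_QUOTES);
        $html .= "  <label>$label <input name=\"$name\" value=\"$value\"></label>\n";
    }
    return $html . "  <input type=\"submit\">\n</form>\n";
}

// Invented Product metadata: field name => label, plus current values.
echo renderEditForm('Product',
    ['name' => 'Name', 'price' => 'Price'],
    ['name' => 'Widget', 'price' => '9.95']);
```

Adding a field to the metadata array immediately changes every generated form, which is the maintenance win being claimed.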

As you see, our approach is object oriented. This means that we do not
have a big pile of functions in which an engine is implemented, that
processes passive data from a database. We rather have an assembly of
objects that cooperate to do the work. Objects combine functions and
data. This makes them much more flexible than just trying to represent
the entire website design in data (like your design seems to do) or in
functions (like standard PHP scripting tends to do).

With object orientation it is possible to offer the developer a default
user interface for his application, and at the same time to allow the
developer to specialize almost any aspect of both the engine and the
design by creating subclasses and overriding some methods. Because the
methods are written in PHP, he can put in any code he likes, so he is
not limited to what the existing engine can do.

Like with most traditional IDE's, the user interface is composed from
objects, like listboxes, tables, buttons, dialogs, etc. Only the
phpPeanuts objects do not exist in the client PC and draw on the screen,
but rather exist on the server, process requests and output HTML. For
lists of user interfacing classes by category see
http://www.phppeanuts.org/site/index...t.web/web.html

Of course there is still a need for a place to put pieces of HTML that
hold parts of the design. We could have stored them in the database, or
used a template engine, but we chose 'the simplest thing that could
possibly work': PHP include files. These have the advantage that you can
include pieces of PHP to call methods. See
http://www.phppeanuts.org/site/index...principle.html
for how all this fits together.

But if you like it better to have a database, be my guest: make some
subclasses, override the methods that currently do the inclusion, and
fetch and interpret your design data from there... (Actually you may hit
a design limitation here, or maybe it has already been solved. Anyhow,
if you explain how our design limits you in specializing the framework
the way you need to, we will be happy to see if we can make the
necessary adaptations.)
and you can code "event" pieces of code
that will react to clicks, buttons and user-actions like
in the traditional IDE:s.


We do in specific situations support event handler methods. However,
HTTP works with URLs, requests and pages that are returned. So our user
interfacing framework focuses on handling requests and composing pages
and forms. Only where the composition needs to be specialized in
repeated details will you need to implement event handler methods.
Otherwise it is a matter of specializing classes that, because they
exist, override the defaults, and of overriding methods.

We do not support IDE-style WYSIWYG graphical editing of a user
interface. In an earlier version of the framework (in Java) we had a
template technique for this, but even with Dreamweaver it proved to be a
lot of work to manually design the user interface of a substantial
application. The problem was a lack of abstraction. The result was lots
of replication, little reuse. We tried to make our objects editable with
Dreamweaver, but that was too cumbersome. Existing WYSIWYG editors
simply could not cope with the dynamics of objects. (For some reason
none of them supported the JavaBeans standard the way IDEs do.)

It would still be nice to have a way to WYSIWYG-edit the bits and pieces
of HTML the design is composed from. Preferably I would have that in the
browser, as part of the working application. But building that is a huge
effort, and the more we can reuse pieces of design, the less we can gain
from it. But if anyone has substantial leftover money I would be happy
to look into it.

I guess this is the basic difference between traditional IDEs and
phpPeanuts: IDEs try to facilitate graphical designing and coding. We
build components and try to facilitate their reuse. The more you can
reuse, the less design and code you need for the same end-user function.
We believe that in the long run this gives a higher productivity than
traditional IDEs and results in more flexible applications, in the
sense of being easier to adapt to new or changing requirements.

Greetings,

Henk Verhoeven,
www.phpPeanuts.org.
* The mapping of urls to objects is called Request Dispatch and it is
actually a bit more complicated, see
http://www.phppeanuts.org/site/index...+dispatch.html

Jul 17 '05 #25


"Tim Tyler" <ti*@tt1lock.org> wrote in message news:I5********@bath.ac.uk...
Tony Marston <to**@nospam.demon.co.uk> wrote or quoted:
"Tim Tyler" <ti*@tt1lock.org> wrote in message
news:I5********@bath.ac.uk...
> Tim Van Wassenhove <eu**@pi.be> wrote or quoted:
>> In article <I5********@bath.ac.uk>, Tim Tyler wrote:
>> > Everything should go into a database.
>>
>> Imho, a file system is a database.
>
> It is.
>
> It is a crappy one, with poor versioning, searching, filtering and
> sorting
> facilities.
Storing code in a database does not automatically provide any of those
features. You still have to write them, just as you can for files in a
file
system.


There are quite a number of searching filtering and sorting facilities
built into databases and made available via the SQL language.


But you still have to write commands to invoke those operations, just as you
have to write commands to invoke filesystem operations.
Databases typically have better facilities for grouping actions, undoing
them and making backups, as well.

Filing systems are primitive databases - too primitive for many
applications these days - and a lot of data is migrating into
databases, to provide better searching, indexing, filtering
and sorting facilities.


I agree that data belongs in a database, but code does not.

--
Tony Marston

http://www.tonymarston.net

Jul 17 '05 #26


"Tim Tyler" <ti*@tt1lock.org> wrote in message news:I5********@bath.ac.uk...
Tony Marston <to**@nospam.demon.co.uk> wrote or quoted:
"Tim Tyler" <ti*@tt1lock.org> wrote in message
news:I5********@bath.ac.uk...

> You are complaining about the performance cost of reading a file like:
>
> <?php
> include("lib.php");
> main();
> ?>
>
> ...from the server?
>
> For those anal about performance, that seems to be more than
> compensated
> for by the lack of a need to include:
>
> include("header.php");
> ...
> include("footer.php");
>
> ...in every single page - and besides, that file should get cached and
> ought to take practically no time to read.


I can achieve the functionality I require without header.php and
footer.php.


This still seems like an anal performance niggle. Such issues are
way below my threshold of what's important when designing sites.
What I care about are things like time-to-market and rapid development -
not whether it takes 433 or 440 ms to render a page.

You should not make a mess of your architecture for the sake of a
few milliseconds - that's called premature optimisation.


If you say that there is very little difference in performance between a
single front controller and multiple page controllers, then why should I
switch from one to the other? My design works, it employs a lot of reusable
modules, it is flexible, so why should I change?

You like front controllers, I don't. I like page controllers, you don't. We
agree to disagree.
>> Not only is this an unecessary overhead, it also means that every time
>> you change your application you must remember to update index.php.
>
> When index.php looks like:
>
> <?php
> include("lib.php");
> main();
> ?>
>
> ...?
>
> How does that need updating?


If every request for a page starts off by going through index.php then it
has to be redirected to the correct page, so index.php has to have
knowledge of every page in the system so it can redirect to that page.


...or have access to that knowledge.
This means that every time you add or remove a page you have to update
index.php.


No - not if it has access to the information. The information is
presumably also stored in a database - and updates itself automatically
when new pages are created - so there need not be any maintenance
issue involved.
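A minimal sketch of what is being described here (all names are invented): the controller never hard-codes the page list, it resolves the request against data, so creating a page needs no edit to index.php.

```php
<?php
// A sketch (all names invented) of a front controller that never
// hard-codes the page list: the request is resolved against data.
function resolvePage(string $request, array $pageMap): string
{
    // In a real system $pageMap would come from a query such as
    // SELECT request, script FROM pages - so creating a page row
    // makes it routable with no change to index.php itself.
    return $pageMap[$request] ?? 'error404.php';
}
```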
In your example the cross reference between request and script may
exist in lib.php, but that is the same difference - there is still a
single place that needs updating each time you change a page.


Creating a page involves modifying the database. That would automatically
update the list of available pages.


I have all my transaction details stored in a database, but I still don't
need a front controller.

--
Tony Marston

http://www.tonymarston.net

Jul 17 '05 #27


"Tim Tyler" <ti*@tt1lock.org> wrote in message news:I5********@bath.ac.uk...
Tim Van Wassenhove <eu**@pi.be> wrote or quoted:
In article <I5********@bath.ac.uk>, Tim Tyler wrote:
> Tim Van Wassenhove <eu**@pi.be> wrote or quoted:
>> In article <I5********@bath.ac.uk>, Tim Tyler wrote:
>> > It is good - but AFAICS, the arguments given against front
>> > controllers in it are hogwash.
>> >
>> > The main objections are:
>> >
>> > ``In the simple example above, that may not seem like a problem, but
>> > what
>> > about when you have hundreds of Page Controllers? You end up with
>> > a
>> > massive switch statment or perhaps something disguised in an
>> > array, an
>> > XML document or whatever. For every page request, PHP will have to
>> > reload a bunch of data which is irrelevant to the current request
>> > the
>> > user is trying to perform.''
>> >
>> > This is fundamentally the same task a webserver performs when
>> > looking up a
>> > file in a filing system. If PHP is doing the task, the webserver
>> > doesn't have to. It might be a bit slower in PHP than in C - but
>> > that's hardly a reason for not doing it in PHP.
>>
>> The fileserver has to do a lookup anyways, to map the request to the
>> controller script.
>
> Examine the work it has to do - depending on the structure of the
> filing system. The fewer files, the less work.


The fewer lookups, the less work.


As a generalisation, this is false. The lookups can take different
quantities of work, and it is not always true that more lookups take
more work.

Another determiner of performance is how many alternatives have to
be searched through. Basically:

index.php?foo/bar/zub

...and...

foo/bar/zub/index.php

...takes the same work to parse and resolve - since the same number of
alternatives need to be considered and searched through.

One is being searched in compiled C and the other one is being done
in PHP - and there /will/ be a speed difference there.

If performance is your overriding concern, then indexing files is
likely to be faster than accessing a database. That's a pretty
crummy argument for using flat files in this day and age, though.

If you want things to go faster, spend more on hardware - don't
bend your application framework out of shape for the sake of a
few milliseconds.
>> My conclusion: The added value of a controller is 0. Not even
>> mentionning that the chances for introducing bugs in your
>> request->page
>> mapper. Where the code in your webserver has been tested many times
>> before.
>
> Why can't a front controller be "tested many times before"?
> Code can be shared. Not everyone has to reinvent the wheel.


But the code in apache has already been tested many times before. And I
have a good feeling it has been tested many more times than whatever
controller script. Yes, I agree on not reinventing the wheel.


An argument for not writing a front controller yourself, then.

It's not exactly rocket science, though. You are only looking
up a script from a query string. Is that code going to need
weeks of testing before you get it right? Also, front controllers
are not exactly new either. There are plenty of them out there -
many already pretty well tested.


Just because some developers like front controllers is no justification for
saying that everyone should use a front controller. They are not the only
solution, and IMHO they are not the best solution.

--
Tony Marston

http://www.tonymarston.net

Jul 17 '05 #28

Tony Marston <to**@nospam.demon.co.uk> wrote or quoted:
"Tim Tyler" <ti*@tt1lock.org> wrote in message news:I5********@bath.ac.uk...
Tony Marston <to**@nospam.demon.co.uk> wrote or quoted:
"Tim Tyler" <ti*@tt1lock.org> wrote in message
> You are complaining about the performance cost of reading a file like:
>
> <?php
> include("lib.php");
> main();
> ?>
>
> ...from the server?
>
> For those anal about performance, that seems to be more than
> compensated for by the lack of a need to include:
>
> include("header.php");
> ...
> include("footer.php");
>
> ...in every single page - and besides, that file should get cached and
> ought to take practically no time to read.

I can achieve the functionality I require without header.php and
footer.php.


This still seems like an anal performance niggle. Such issues are
way below my threshold of what's important when designing sites.
What I care about are things like time-to-market and rapid development -
not whether it takes 433 or 440 ms to render a page.

You should not make a mess of your architecture for the sake of a
few milliseconds - that's called premature optimisation.


If you say that there is very little difference in performance between a
single front controller and multiple page controllers, then why should I
switch from one to the other? My design works, it employs a lot of reusable
modules, it is flexible, so why should I change?


My original point was that the arguments given against using
front controllers in an online article were not very good.

Front controllers have their moments - and are often used by
CMS systems - which are deployed across multiple web sites -
and which thus treat whole web sites as being user data.
In such contexts, a front controller makes quite a bit of sense.

I'm not trying to convert you to liking front controllers -
if you say you don't like them.
--
__________
|im |yler http://timtyler.org/ ti*@tt1lock.org Remove lock to reply.
Jul 17 '05 #29


"Tim Tyler" <ti*@tt1lock.org> wrote in message news:I5********@bath.ac.uk...
Tony Marston <to**@nospam.demon.co.uk> wrote or quoted:
"Tim Tyler" <ti*@tt1lock.org> wrote in message
news:I5********@bath.ac.uk...
> Tony Marston <to**@nospam.demon.co.uk> wrote or quoted:
>> "Tim Tyler" <ti*@tt1lock.org> wrote in message
>> > You are complaining about the performance cost of reading a file
>> > like:
>> >
>> > <?php
>> > include("lib.php");
>> > main();
>> > ?>
>> >
>> > ...from the server?
>> >
>> > For those anal about performance, that seems to be more than
>> > compensated for by the lack of a need to include:
>> >
>> > include("header.php");
>> > ...
>> > include("footer.php");
>> >
>> > ...in every single page - and besides, that file should get cached
>> > and
>> > ought to take practically no time to read.
>>
>> I can achieve the functionality I require without header.php and
>> footer.php.
>
> This still seems like an anal performance niggle. Such issues are
> way below my threshold of what's important when designing sites.
> What I care about are things like time-to-market and rapid
> development -
> not whether it takes 433 or 440 ms to render a page.
>
> You should not make a mess of your architecture for the sake of a
> few milliseconds - that's called premature optimisation.


If you say that there is very little difference in performance between a
single front controller and multiple page controllers, then why should I
switch from one to the other? My design works, it employs a lot of
reusable
modules, it is flexible, so why should I change?


My original point was that the arguments given against using
front controllers in an online article were not very good.

Front controllers have their moments - and are often used by
CMS systems - which are deployed across multiple web sites -
and which thus treat whole web sites as being user data.
In such contexts, a front controller makes quite a bit of sense.

I'm not trying to convert you to liking front controllers -
if you say you don't like them.


Understood. It's just that too many people say "you should use a front
controller because...", and when I look at their reasons I cannot see any
real justification, just opinion. I have seen documentation on why java
applications need front controllers, for example, but as I don't have the
same set of problems I certainly don't need the same set of solutions. A
front controller is just a means to an end, but it is not the only means.

--
Tony Marston

http://www.tonymarston.net

Jul 17 '05 #30

"Tim Van Wassenhove" <eu**@pi.be> wrote in message
news:2s*************@uni-berlin.de...
My conclusion: The added value of a controller is 0. Not even
mentionning that the chances for introducing bugs in your request->page
mapper. Where the code in your webserver has been tested many times
before.


I can't agree more. The so-called front controller architecture introduces
many security issues, because those who use it often fail to lock down
other possible entry points into the system. I mean, more than anything, it
was responsible for the demise of register_globals. In a "normally"
structured site, where each page has its own entry point, having
register_globals on is not that big of an issue, as people rarely leave
critical variables uninitialized. In a front controller site, on the other
hand, leaving register_globals on is often fatal, because the code that
initializes these variables (ie the controller) could be bypassed.

Here are two examples, the first coded in a conventional way and the
second using a front controller:

<?php
/* example 1 */
require("globals.php");
require("$LIB/email.php");

RestrictAccess();
PrinterHeader("examples");
// call functions in email.php
PrintFooter();
?>

<?php
/* example 2 */
require("$LIB/email.php");

// call functions in email.php
?>

In example 1, $LIB is set in globals.php. Register_globals doesn't cause any
problem here because you can't bypass globals.php in the execution path; on
the other hand, it would completely compromise your site in example 2, if
the script sits in a web accessible folder.

The notion that you somehow need to use a front controller in a large site
is completely bogus. As I always say, complexity does not equal
sophistication. The larger a site gets, the simpler, the more
straightforward the architecture should be, lest the codebase grow to be
completely incomprehensible. One of the great virtues of web programming is
that it naturally breaks a large application into smaller, semi-independent
applets. To purposely tie everything back into a knot is misguided effort,
to say the least.
Jul 17 '05 #31

Hi Chung,

Nice to see you are still here. I don't know why, but for some reason I
find your messages more interesting than those of others. Maybe it's that
combination of practical and philosophical arguments?
"Tim Van Wassenhove" <eu**@pi.be> wrote:
The added value of a controller is 0. (..)
Chung Leong wrote:I can't agree more.
It appears to me that front controller and object oriented are logical
companions: the handling of http requests really begs for the
application of a composition of Command objects (you know,
http://c2.com/cgi/wiki?CommandObject). This naturally begins with
including a class and instantiating it. Now this first step is very much
the same pattern every time, so I refactor to get rid of the repetition.
I can't help but end up with a relatively small number of front
controllers. There I put generic code I use for many http requests,
like starting a session, setting up an error handler with logging,
checking if this user is authorized for that particular page or action,
site statistics.
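A rough sketch of that Command-object arrangement (class, method and key names are all invented, and the generic per-request work is indicated by comments only):

```php
<?php
// Sketch of the Command-object arrangement described above (names are
// invented; the generic per-request work is indicated by comments only).
interface Command
{
    public function execute(array $request): string;
}

class ShowPageCommand implements Command
{
    public function execute(array $request): string
    {
        return 'page: ' . ($request['page'] ?? 'home');
    }
}

function frontController(array $request, array $commands): string
{
    // Generic work shared by every request would go here:
    // session_start(), set_error_handler(...), authorisation, statistics.
    $name = $request['cmd'] ?? 'show';
    if (!isset($commands[$name])) {
        return 'unknown command';
    }
    return $commands[$name]->execute($request);
}
```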
In a front controller site, on the other
hand, leaving register_globals on is often fatal, because the code that
initialize these variables (ie the controller) could be bypassed.
Naturally, all code from the moment the command object's service gets
invoked is running inside method calls. I do not see how request
parameters with register_globals on can have an impact on the values of all
those temporary variables in those methods, or the object fields in all the
objects created from there. Unless of course globals are used
explicitly. But why on earth would you do that if you have a front
controller object to keep track of everything that needs to have the
same lifetime as an http request? The only reason I can think of is
performance, for caching objects. But those are typically put into an
array. AFAIK the only thing that can happen is the entire array being
replaced. That will obviously cause an E_ERROR, but is that a security risk?

(..) The notion that you somehow need to use a front controller in a large
site is completely bogus. As I always say, complexity does not equal
sophistication. The larger a site get, the simpler, the more straight
forward the architecture should be, lest the codebase grows to be
completely incomprehensible.
I can't agree more about complexity and sophistication, but I fail to see
the relationship with the size of a site. Doesn't a larger site just
mean more content in its database?
One of the great virtues of web programming is that it
naturally breaks a large application into smaller, semi-independent
applets. To purposely tie everything back into a knot is effort
misguided, to say the least.


I suppose you find phpPeanuts completely incomprehensible too. I must
have been completely misguided to think that less code is simpler (but
requires reuse == tying things together), more reliable and more
productive, and (but only if done well) more flexible.

Greetings,

Henk Verhoeven,
www.phpPeanuts.org.


Jul 17 '05 #32

Chung Leong <ch***********@hotmail.com> wrote or quoted:
"Tim Van Wassenhove" <eu**@pi.be> wrote in message
My conclusion: The added value of a controller is 0. Not even
mentionning that the chances for introducing bugs in your request->page
mapper. Where the code in your webserver has been tested many times
before.


I can't agree more. The so-called front controller architecture introduces
many security issues, because those who use it often fail to lock down
other possible entry points into the system. [...]


Huh? What are you talking about? Doesn't a front controller
do just the opposite - make only one entry point into the
system?
I mean, more than anything, it was responsible for the demise of
register_globals. In a "normally" structured site, where each page has
its own entry point, having register_globals on is not that big of an
issue, as people rarely leave critical variables uninitialized. In a
front controller site, on the other hand, leaving register_globals on
is often fatal, because the code that initializes these variables (ie
the controller) could be bypassed.
I think if you leave "register_globals" turned on, your security problems
are mostly the fault of the designers of PHP - and partly your own fault
for leaving "register_globals" turned on!
Here are two examples, the first coded in a conventional way and the
second using a front controller:

<?php
/* example 1 */
require("globals.php");
require("$LIB/email.php");

RestrictAccess();
PrinterHeader("examples");
// call functions in email.php
PrintFooter();
?>

<?php
/* example 2 */
require("$LIB/email.php");

// call functions in email.php
?>

In example 1, $LIB is set in globals.php. Register_globals doesn't cause any
problem here because you can't bypass globals.php in the execution path; On
the other hand, it would completely compromise your site in example 2, if
the script sits in a web accessible folder.


That problem is to do with setting up variables in different
independently-executable files from where they are used.

Use an OO approach - and put all your code into functions - and
none of your library files can ever be executed as independent
scripts - even if you leave them in the site's path.
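A sketch of that point (the function name is invented): a library file containing only definitions is inert when requested directly, because nothing runs at file scope until something calls the function.

```php
<?php
// Sketch of the point above (the function name is invented): a library
// file that contains only definitions does nothing when requested
// directly, even with register_globals on, because no code runs at
// file scope until something calls the function.
function sendEmail(string $to, string $subject): bool
{
    // ... build and send the message here ...
    return $to !== '' && $subject !== '';
}
// Deliberately no statements at file scope: requesting this file over
// HTTP yields an empty page instead of executing attacker-influenced code.
```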
--
__________
|im |yler http://timtyler.org/ ti*@tt1lock.org Remove lock to reply.
Jul 17 '05 #33


"Tim Tyler" <ti*@tt1lock.org> wrote in message news:I5********@bath.ac.uk...
Chung Leong <ch***********@hotmail.com> wrote or quoted:
"Tim Van Wassenhove" <eu**@pi.be> wrote in message

> My conclusion: The added value of a controller is 0. Not even
> mentionning that the chances for introducing bugs in your request->page
> mapper. Where the code in your webserver has been tested many times
> before.


I can't agree more. The so-called front controller architecture
introduces many security issues, because those who use it often fail to
lock down other possible entry points into the system. [...]


Huh? What are you talking about? Doesn't a front controller
do just the opposite - make only one entry point into the
system?


A front controller gives you one *official* entry point into the system, but
what happens if a hacker can execute a script without going through the
front controller?
I mean, more than anything, it was responsible for the demise of
register_globals. In a "normally" structured site, where each page has
its own entry point, having register_globals on is not that big of an
issue, as people rarely leave critical variables uninitialized. In a
front controller site, on the other hand, leaving register_globals on
is often fatal, because the code that initializes these variables (ie
the controller) could be bypassed.


I think if you leave "register_globals" turned on, your security problems
are mostly the fault of the designers of PHP - and partly your own fault
for leaving "register_globals" turned on!
Here are two examples, the first coded in a conventional way and the
second using a front controller:

<?php
/* example 1 */
require("globals.php");
require("$LIB/email.php");

RestrictAccess();
PrinterHeader("examples");
// call functions in email.php
PrintFooter();
?>

<?php
/* example 2 */
require("$LIB/email.php");

// call functions in email.php
?>

In example 1, $LIB is set in globals.php. Register_globals doesn't cause
any
problem here because you can't bypass globals.php in the execution path;
On
the other hand, it would completely compromise your site in example 2, if
the script sits in a web accessible folder.


That problem is to do with setting up variables in different
independently-executable files from where they are used.

Use an OO approach - and put all your code into functions - and
none of your library files can ever be executed as independent
scripts - even if you leave them in the site's path.


Using an OO approach does not prevent screw-ups - it just gives you a
different class of screw-up (IMHO).

--
Tony Marston

http://www.tonymarston.net

Jul 17 '05 #34

"Henk Verhoeven" <ne***@phppeanutsREMOVE-THIS.org> wrote in message
news:ck**********@news6.zwoll1.ov.home.nl...
Nice to see you are still here. I don't know why, but for some reason I
find your messages more interesting than those of others. Maybe it's that
combination of practical and philosophical arguments?
Nice to see you here too :-)
It appears to me that front controller and object oriented are logical
companions: the handling of http requests really begs for the
application of a composition of Command objects (you know,
http://c2.com/cgi/wiki?CommandObject). This naturally begins with
including a class and instantiating it. Now this first step is very much
the same pattern every time, so I refactor to get rid of the repetition.
I can't help but end up with a relatively small number of front
controllers. There I put generic code I use for many http requests,
like starting a session, setting up an error handler with logging,
checking if this user is authorized for that particular page or action,
site statistics.
True. But then, is object oriented programming really a good approach for
web programming? All web scripts basically do two things: process input
data and produce output data (usually HTML). Something in, something out.
Very simple. Everything happens more or less linearly. IMHO, that hardly
warrants the use of OOP.
Naturally, all code from the moment the command object's service gets
invoked is running inside method calls. I do not see how request
parameters with register_globals on can have an impact on the values of all
those temporary variables in those methods, or the object fields in all the
objects created from there.
I was referring to how front controllers are usually used, where you would
see code that looks like:

switch($action) {
case 'welcome': include("welcome.php"); break;
...
}
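For what it's worth, a dispatch switch like the one quoted above can at least be made safe against include injection by treating the request value purely as a key into a fixed whitelist - a hedged sketch, with invented action names:

```php
<?php
// A hedged variant of the dispatch switch quoted above (action names
// invented): the request value is only ever a key into a fixed
// whitelist, never interpolated into an include path.
function scriptFor(string $action): ?string
{
    $whitelist = [
        'welcome' => 'welcome.php',
        'logout'  => 'logout.php',
    ];
    return $whitelist[$action] ?? null; // null: refuse unknown actions
}
```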
I suppose you find phpPeanuts completely incomprehensible too. I must
have been completely misguided to think that less code is simpler (but
requires reuse == tying things together), more reliable and more
productive, and (but only if done well) more flexible.


I don't see your logic there. Reuse hardly requires you to tie everything
into a single application. When you call a built-in PHP function, you're
reusing that piece of code. Likewise you can write your own function library
and have your scripts call on its service.
Jul 17 '05 #35

Chung Leong wrote:
True. But then, is object oriented programming really a good approach for
web programming? All web scripts basically do two things: process input
data and produce output data (usually HTML). Something in, something out.
Very simple. Everything happens more or less linearly. IMHO, that hardly
warrants the use of OOP.


IMHO web programming is definitely a good case for OOP. I suspect
that you unwittingly use OOP yourself, just without the formal class
declarations in your code.

There are sections of every web page within a site that remain the same
for all pages - header, menu/navigation, footer. Do you cut and paste
these to every page or do you 'include' common sections? Think of those
common sections as methods of your base class. If you go to a more
formal OOP system you would write a base class that contains all the
common parts, then create a new class for each page inheriting from your
base class. If some pages use MySQL and others don't, you could write
another class that adds database connectivity. Pages that use SQL would
then inherit from that class. Any time you want to change a common
aspect, just change the one base class.
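A minimal sketch of that inheritance scheme (class names invented): the base class owns the shared sections, and each page overrides only its body.

```php
<?php
// Sketch of the scheme described above (class names invented): the base
// class owns the shared header/menu/footer; each page overrides only its
// body; a database-aware subclass could add connectivity the same way.
class Page
{
    public function render(): string
    {
        return $this->header() . $this->body() . $this->footer();
    }
    protected function header(): string { return '<header/>'; }
    protected function footer(): string { return '<footer/>'; }
    protected function body(): string   { return ''; }
}

class AboutPage extends Page
{
    protected function body(): string { return '<p>About us</p>'; }
}
```

Changing the base class's header() then changes every page at once.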

The one problem for me is that OOP does add an overhead in amount of
processing required. That's acceptable if you're using a compiled
language but not with an interpreted language like PHP. (For my
complicated sites I use freepascal.)

Mike
Jul 17 '05 #36

2metre <2m****@xxxhersham.net> wrote or quoted:
IMHO web programming is definitely a good case for OOP. I suspect
that you unwittingly use OOP yourself, just without the formal class
declarations in your code.

There are sections of every web page within a site that remain the same
for all pages - header, menu/navigation, footer. Do you cut and paste
these to every page or do you 'include' common sections? Think of those
common sections as methods of your base class. If you go to a more
formal OOP system you would write a base class that contains all the
common parts then create a new class for each page inheriting from your
base class. If some pages use MySQL and others don't, you could write
another class that adds database connectivity. Pages that use SQL would
then inherit from that class. Any time you want to change a common
aspect, just change the one base class.

The one problem for me is that OOP does add an overhead in amount of
processing required. That's acceptable if you're using a compiled
language but not with an interpreted language like PHP. [...]


IMO, using an interpreted language should only very rarely stop
you using OOP. It should not be *much* slower than the same site
without OOP - and any performance problems are best addressed by:

* using more efficient algorithms - or by:
* buying more expensive hardware;

Allowing performance to dictate a procedural programming style is
often a case of premature optimisation.
--
__________
|im |yler http://timtyler.org/ ti*@tt1lock.org Remove lock to reply.
Jul 17 '05 #37

Chung Leong wrote:
True. But then is is object oriented programming really a good approach for
web programming? All web scripts basically do two things: processing input
data and produce output data (usally HTML).
Something in, something out.
Very simple.
Hmm, all that a fancy GUI program really does is process a series of
key presses, mouse moves and mouse clicks, and change the color
of some pixels on a screen. Obviously programming that is a much
simpler task than writing the winning chess program ;-)

But OK, for simple websites I do not use a lot of components, even if I
have them available. But some sites are more complicated than others.
For example, if a site should sell PCs that will be assembled from
components the customer can select himself, where some components fit
together and others do not, OOP can be helpful. Another example: the
phpPeanuts site has sort of an IDE-style 'Browser' to browse and search
source code. This works with a combination of objects, indexing in a
database, but also shows content from the directories and files with
actual source code (actually the code the examples run on, but that is
irrelevant here). Objects and classes proved great for reassembling data
that comes partially from the database and partially from the files.
Everything happens more or less linearly. IMHO, hardly warrants
the use of OOP.


Even if processing happens linearly, programming it may not. More
complicated sites tend to require functionality that can be reused. As you
said, this can be done with a 'flat' function library, but objects and
classes offer a nice way to structure the reuse. For example, a search
engine component may be used in several sites. If you need it to work
partially differently for specific sites, you may make a subclass of it
for each site. Now if at some point you improve the search algorithm in
the superclass, it is relatively easy to integrate the new version in
the existing sites (especially if you have unit tests). Unless of course
you are afraid that instantiating one search engine object per request
may JUST add that little bit of overhead that causes your server's
processor to overheat ;-)
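A toy sketch of that reuse-by-subclassing idea (all names invented): the algorithm is improved in one place, in the superclass, while each site keeps only its local variation.

```php
<?php
// Toy sketch of reuse by subclassing (all names invented): the generic
// engine is improved in one place; a site keeps only its local variation.
class SearchEngine
{
    public function search(array $items, string $term): array
    {
        return array_values(array_filter(
            $items,
            function ($item) use ($term) {
                return stripos($item, $term) !== false;
            }
        ));
    }
}

class NewsSiteSearch extends SearchEngine
{
    // Site-specific tweak: this site wants its results sorted.
    public function search(array $items, string $term): array
    {
        $hits = parent::search($items, $term);
        sort($hits);
        return $hits;
    }
}
```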

Greetings,

Henk Verhoeven,
www.phpPeanuts.org

Jul 17 '05 #38

Tim Tyler wrote:
2metre <2m****@xxxhersham.net> wrote or quoted:
IMHO web programming is definitely a good case for OOP. I suspect
that you unwittingly use OOP yourself, just without the formal class
declarations in your code.

There are sections of every web page within a site that remain the same
for all pages - header, menu/navigation, footer. Do you cut and paste
these to every page or do you 'include' common sections? Think of those
common sections as methods of your base class. If you go to a more
formal OOP system you would write a base class that contains all the
common parts then create a new class for each page inheriting from your
base class. If some pages use MySQL and others don't, you could write
another class that adds database connectivity. Pages that use SQL would
then inherit from that class. Any time you want to change a common
aspect, just change the one base class.

The one problem for me is that OOP does add an overhead in amount of
processing required. That's acceptable if you're using a compiled
language but not with an interpreted language like PHP. [...]


IMO, using an interpreted language should only very rarely stop
you using OOP. It should not be *much* slower than the same site
without OOP - and any performance problems are best addressed by:

* using more efficient algorithms - or by:
* buying more expensive hardware;

Allowing performance to dictate a procedural programming style is
often a case of premature optimisation.


While this is true, you are basing your logic entirely on the premise that
OOP programming is *always* better/nicer/cooler/whatever than straight
procedural programming when it isn't. For some things OOP is better, for
other things it just adds too much programming overhead and straight
procedural is better.

--
Chris Hope - The Electric Toolbox - http://www.electrictoolbox.com/
Jul 17 '05 #39

Tim Tyler wrote:
IMO, using an interpreted language should only very rarely stop
you using OOP. It should not be *much* slower than the same site
without OOP - and any performance problems are best addressed by:

* using more efficient algorithms - or by:
* buying more expensive hardware;

Allowing performance to dictate a procedural programming style is
often a case of premature optimisation.


The most efficient (ie best performance v cost) way of addressing
performance problems is to use a compiled language in place of an
interpreted one. Just imagine what percentage of your processor time is
used to parse (including all the error checking) your code before it does
anything useful.

I have one app I wrote for a client - a web based contact management
database - that I originally wrote in PHP. Some pages were dreadfully
slow (there were a lot of MySQL queries within loops). I rewrote it using
OOP in freepascal, and now all the pages come up as fast as static html
pages. You really cannot tell that they are not static!

As for "buying more expensive hardware..." that has to be *the* most
extreme case of premature optimisation.
Jul 17 '05 #40

Christian Fersch <Ch******@web.de> wrote in message news:<ck*************@news.t-online.com>...
The best editor for windows is NuSphere's phpEd. But it costs 500$.


Christian,

PhpED is $299, and the additional license is $50 only. You can get
more information about PhpED features and pricing options on our
website http://www.nusphere.com/products/index.htm

Best regards,
NuSphere Corp.
Jul 17 '05 #41

In article <af**************************@posting.google.com>,
ns***@nusphere.com (NuSphere Customer Service) wrote:

:Christian,
:
:PhpED is $299, and the additional license is $50 only. You can get
:more information about PhpED features and pricing options on our
:website http://www.nusphere.com/products/index.htm
:
:Best regards,
:NuSphere Corp.

NuSphere Corp:

It's still too expensive. Emacs is free. So is Nedit (although, to be
fair and all, I don't know if the latter has a php mode, and the
former is Emacs). HTML Tidy is free. So are the W3C validators. So are
Gubed and Xdebug.

And why are you using Webex files for your online seminars? Is
NuSphere owned by Microsoft? Hello in there: We DON'T LIKE INTERNET
EXPLORER!

Sheesh.
--
Looks like more of Texas to me ...
-- from The Wild Bunch
Jul 17 '05 #42

2metre <2m****@xxxhersham.net> wrote or quoted:
Tim Tyler wrote:
IMO, using an interpreted language should only very rarely stop
you using OOP. It should not be *much* slower than the same site
without OOP - and any performance problems are best addressed by:

* using more efficient algorithms - or by:
* buying more expensive hardware;

Allowing performance to dictate a procedural programming style is
often a case of premature optimisation.


[...]
As for "buying more expensive hardware..." that has to be *the* most
extreme case of premature optimisation.


Premature optimisation usually refers to making a mess of your code
for the sake of performance.

Running on a faster machine doesn't seem to qualify there.
--
__________
|im |yler http://timtyler.org/ ti*@tt1lock.org Remove lock to reply.
Jul 17 '05 #43

Chris Hope <bl*******@electrictoolbox.com> wrote or quoted:
Tim Tyler wrote:
2metre <2m****@xxxhersham.net> wrote or quoted:
IMO, using an interpreted language should only very rarely stop
you using OOP. It should not be *much* slower than the same site
without OOP - and any performance problems are best addressed by:

* using more efficient algorithms - or by:
* buying more expensive hardware;

Allowing performance to dictate a procedural programming style is
often a case of premature optimisation.


While this is true, you are basing your logic entirely on the premise that
OOP programming is *always* better/nicer/cooler/whatever than straight
procedural programming [...]


I read my post again. I can find no sign of such a premise.

I was mostly arguing that choosing a procedural programming style
on performance grounds was usually a bad idea - not that grounds
for writing procedural code do not exist.
--
__________
|im |yler http://timtyler.org/ ti*@tt1lock.org Remove lock to reply.
Jul 17 '05 #44
