Bytes IT Community

getting absolute directory path?

Hello all -

I have a two part question.

First of all, I have a website under /home/user/www/. The index.php
and all the other website pages are under /home/user/www/. For
functions that are used in multiple files, I have php files under
/home/user/www/functions/. These files simply contain function definitions.

So, in index.php and other files, I have
<?php
include("./functions/function.create_html_page.inc");
include("./functions/function.create_pdf.inc");
.....
?>

And then function files, such as function.create_html_page.inc
<?php
function create_html_page( $arg1, $arg2 ) {
...
}
?>

So, my php pages have relative references to the function files, i.e.
"./functions/".

This works okay, but when I want to include a function in a function
file, I have to change the relative path. So, instead of
include("./functions/function.create_pdf.inc");
I should have
include("./function.create_pdf.inc");

Ideally I would like to figure out the absolute path name, so that I
don't have to change the relative path names depending on whether the
file is in the root directory or the functions directory, or any other
directory for that matter.

In other words, I could do
<?php
$path = "/home/user/www/";
include( $path . "functions/function.create_html_page.inc");
include( $path . "functions/function.create_pdf.inc");
....
?>

And then the include statements are identical, no matter what
directory they're in.

But, I would like the solution to be more portable. In case I change
hosts, or install the website on another server, I would like the path
name not to be hard-coded, so I wouldn't have to change anything.
Something like

<?php
$path = get_base_directory();
include( $path . "functions/function.create_html_page.inc");
include( $path . "functions/function.create_pdf.inc");
....
?>

My website is designed so that pages in subdirectories are never
served; the user is always navigating in the root directory. So, even
if a function in the functions subdirectory is called, the original
executing script was in the root.

I looked at functions like basename, but they expect the directory as
an argument.

I looked at $_SERVER and $_ENV, and they have 'OLDPWD'
[OLDPWD] => /home/user/www
[PWD] => /home/user/www/functions

OLDPWD looks like it would work, but I couldn't find it documented
anywhere, so I don't know if it would be useable as a portable
solution. If it's not documented, I probably can't rely on it being on
most systems, right?

Anywho, my thought now is to go through the backtrace array to figure
out the originating script, and thus the base directory. Is there an
easier way to do this?

Jun 2 '08 #1
13 Replies


la*****@gmail.com wrote:
Anywho, my thought now is to go through the backtrace array to figure
out the originating script, and thus the base directory. Is there an
easier way to do this?
Either use $_SERVER['DOCUMENT_ROOT'], or use the include_path setting,
and don't provide the directory at all on an include.
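
A sketch of both suggestions (the paths reuse this thread's /home/user/www example; an illustration, not a drop-in implementation):

```php
<?php
// Option 1: anchor every include at the document root the web server reports,
// so the same include line works from any directory.
include($_SERVER['DOCUMENT_ROOT'] . "/functions/function.create_pdf.inc");

// Option 2: register the functions directory on include_path once
// (e.g. in a bootstrap file), then include by bare filename anywhere.
set_include_path(get_include_path() . PATH_SEPARATOR . "/home/user/www/functions");
include("function.create_pdf.inc");
?>
```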

--
Rik Wasmus
Jun 2 '08 #2

la*****@gmail.com wrote:
This works okay, but when I want to include a function in a function
file, I have to change the relative path. So, instead of
include("./functions/function.create_pdf.inc");
I should have
include("./function.create_pdf.inc");
In a function file:
include(dirname(__FILE__) . '/function.create_pdf.inc');

I would also recommend using .inc.php so your php code cannot be
displayed in a browser, only executed.
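
The same dirname(__FILE__) idiom can supply the portable get_base_directory() behaviour the question asks for; a minimal sketch (the BASE_PATH constant name is illustrative, not from the thread):

```php
<?php
// In a file that lives at the web root (e.g. a shared config include).
// __FILE__ is always this file's absolute path, so BASE_PATH is correct
// no matter which host or directory the site is installed under.
define('BASE_PATH', dirname(__FILE__) . '/');

// Any script, at any depth, can then include with an absolute path:
include(BASE_PATH . 'functions/function.create_html_page.inc');
?>
```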

Regards,
--
Guillaume
Jun 2 '08 #3

On 24 Apr, 16:32, lawp...@gmail.com wrote:

Ideally I would like to figure out the absolute path name, so that I
don't have to change the relative path names depending on whether the
file is in the root directory or the functions directory, or any other
directory for that matter.
No.

Put all your include files in a directory referenced by the ini
include_path (or a specific sub-directory thereof). Then it's a no-brainer
to always reference the right file regardless of where it is
included from.
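
The include_path approach can also be configured outside the code entirely; a sketch reusing this thread's example path:

```ini
; php.ini (or per-directory: php_value include_path in .htaccess)
include_path = ".:/home/user/www/functions"
```

With that in place, include("function.create_pdf.inc"); resolves from any directory without a path prefix.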
My website is designed so that pages in subdirectories are never
served; the user is always navigating in the root directory. So, even
if a function in the functions subdirectory are called, the original
executing script was in the root.
Setting yourself such a constraint is a very bad idea - it's
ridiculously restrictive and encourages bad habits (dumping too many
files into one folder).

C.
Jun 2 '08 #4

On Apr 25, 8:18 am, "C. (http://symcbean.blogspot.com/)"
<colin.mckin...@gmail.com> wrote:
Put all your include files in a directory referenced by the ini
include_path (or a specific sub-directory thereof). Then it's a no-brainer
to always reference the right file regardless of where it is
included from.
Good idea.
My website is designed so that pages in subdirectories are never
served; the user is always navigating in the root directory. So, even
if a function in the functions subdirectory are called, the original
executing script was in the root.

Setting yourself such a constraint is a very bad idea - it's
ridiculously restrictive and encourages bad habits (dumping too many
files into one folder).
I rather like this idea. I know where everything is right away, and I
don't have to traverse directories looking for a file. It's not a
'restriction' any more than keeping your papers on your desk well-
organized is a 'restriction' -- technically it is, but you're doing it
because it gains you some benefit.

I've done it the other way before -- crafting a well-thought out tree
and placing files in their 'proper' directory -- but what if a page
'belongs' in two directories? client_sales_review.php might belong
both under /clients and /sales. Where do you put it then? Wherever you
put it, you're going to drive yourself nuts trying to find it in the
other directory, in its rightful 'place'. Okay, should I make a
symlink? What about making a new directory, or reorganizing the
clients and sales folder? Hey, instead of jimmying with the directory
tree, how about getting back to work on the pages?

I now use a naming convention that gets me all the usability of the
directory tree, without the hassles of getting the right tree
structure.

sales_view.php
sales_list.php
sales_new.php
sales_edit.php
client_sales_review.php
client_list.php
client_new.php
client_view.php
client_edit.php

If I simply type 'ls *sales*', I get all the sales person files,
including client_sales_review.php. If I type 'ls *client*', I get all
the client files, along with client_sales_review.php. Now here's the kicker:
if I want to see all editing pages, I simply type "ls *edit*". I
would get both sales_edit.php and client_edit.php. If I had each
*_edit.php page in a different subdirectory (such as /clients and
/sales), I would have to descend into each subdirectory to do anything
with them. A major PITA, IMHO. In essence, the substrings in the file
name act as virtual subdirectories. I have to be as disciplined with
filenames as I would a subdirectory hierarchy, but I feel that I get
more benefit maintaining filenames in a single folder than maintaining
a proper directory tree.

What should be the upper limit for number of files in a folder,
anyway? The max number that can be displayed on a screen? Wouldn't you
run into that problem in any directory, regardless of if there were
subdirectories? I think that a directory tree should be organized by
the abstract concept that relates the files residing in the same
directory. Any other way, and it's just an arbitrary number that
drives what location a file resides in. Suppose my screen is bigger
than another developer's -- I would put more files in a directory than
he/she might. Is that the only "bad habit" that this practice
encourages?
Jun 2 '08 #5

On Mon, 28 Apr 2008 09:15:05 -0700 (PDT), la*****@gmail.com wrote:

What should be the upper limit for number of files in a folder,
anyway? The max number that can be displayed on a screen? Wouldn't you
run into that problem in any directory, regardless of if there were
subdirectories? I think that a directory tree should be organized by
the abstract concept that relates the files residing in the same
directory. Any other way, and it's just an arbitrary number that
drives what location a file resides in. Suppose my screen is bigger
than another developer's -- I would put more files in a directory than
he/she might. Is that the only "bad habit" that this practice
encourages?
Generally, you'll run out of patience trying to maintain large numbers
of files before you'll run out of room in a single directory. Some
utilities may have trouble dealing with large numbers of files at once,
so operations on every file in a directory may fail. Even so, you're
talking about thousands of files as a maximum for even the crummiest
utility, not dozens.

--
97. My dungeon cells will not be furnished with objects that contain reflective
surfaces or anything that can be unravelled.
--Peter Anspach's list of things to do as an Evil Overlord
Jun 2 '08 #6

On Apr 28, 1:25 pm, "Peter H. Coffin" <hell...@ninehells.com> wrote:
Generally, you'll run out of patience trying to maintain large numbers
of files before you'll run out of room in a single directory.
What would be an example scenario that would try my patience? If it's
editing anything more than 3 files, I'm using command line tools such
as grep, find, bash scripts, or command line perl, rather than doing
it by hand. What would I be able to do to 10 files that would give me
a problem at 10,000? I'll wait 10 seconds instead of less than 1 for
my command to finish when I have 10,000 files.

I run out of patience trying to delve through a crazy directory tree
that's more than three directories wide or deep. There's no easy way
to tell a command line script to ignore certain subdirectories and
apply a function on others. At least, none that I know of.

If it ever gets to the point where some site is more than 10,000
individual php files, well, I'd like to see someone develop that.
Chances are, there is tremendous redundancy in that site, and the
number of files could be greatly reduced. However, if you really do
need 10,000 php pages for this application, LAMP or WAMP is probably
not the proper tool for the job.
Some utilities may have trouble dealing with large numbers of files at once,
so operations of every file in a directory may fail.
On a unix system? I doubt it.

No utility works on a large number of files "at once". It goes through
each file individually, one at a time.
Jun 2 '08 #7

On Mon, 28 Apr 2008 18:42:52 -0700 (PDT), la*****@gmail.com wrote:
On Apr 28, 1:25 pm, "Peter H. Coffin" <hell...@ninehells.com> wrote:
>Generally, you'll run out of patience trying to maintain large numbers
of files before you'll run out of room in a single directory.

What would be an example scenario that would try my patience? If it's
editing anything more than 3 files, I'm using command line tools such
as grep, find, bash scripts, or command line perl, rather than doing
it by hand. What would I be able to do to 10 files that would give me
a problem at 10,000? I'll wait 10 seconds instead of less than 1 for
my command to finish when I have 10,000 files.
Shell filename globbing frequently expands into a list of filenames that
are stored in the command buffer, which generally has SOME kind of a
cap. Mostly I've seen 32k, but that'll vary. If your list of filenames
expands out past the cap, you'll get an "Argument list too long" error
and nothing will happen. find(1) works around that, but it allows
operations on files individually, not as aggregate.
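
Peter's point can be illustrated concretely (the scratch directory and filenames here are hypothetical, not from the thread):

```shell
# Scratch directory standing in for an overfull web root.
mkdir -p /tmp/globdemo && cd /tmp/globdemo
touch sales_edit.php client_edit.php client_list.php

# A glob like *_edit.php is expanded by the shell into the command
# buffer; with enough files it dies with "Argument list too long".
# find streams the names instead, and xargs re-batches them under the cap.
find . -maxdepth 1 -name '*_edit.php' -print0 | xargs -0 ls
```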
I run out of patience trying to delve through a crazy directory tree
that's more than three directories wide or deep. There's no easy way
to tell a command line script to ignore certain subdirectories and
apply a function on others. At least, none that I know of.

If it ever gets to the point where some site is more than 10,000
individual php files, well, I'd like to see someone develop that.
Chances are, there is tremendous redundancy in that site, and the
number of files could be greatly reduced. However, if you really do
need 10,000 php pages for this application, LAMP or WAMP is probably
not the proper tool for the job.
>Some utilities may have trouble dealing with large numbers of files at once,
so operations of every file in a directory may fail.

On a unix system? I doubt it.
See above about "Argument list too long".
No utility works on a large number of files "at once". It goes through
each file individually, one at a time.
What's the term for a statement that's technically true but irrelevant?

--
We're the technical experts. We were hired so that management could
ignore our recommendations and tell us how to do our jobs.
-- Mike Andrews
Jun 2 '08 #8

On Apr 29, 9:32 am, "Peter H. Coffin" <hell...@ninehells.com> wrote:
Shell filename globbing frequently expands into a list of filenames that
are stored in the command buffer, which generally has SOME kind of a
cap. Mostly I've seen 32k, but that'll vary. If your list of filenames
expands out past the cap, you'll get an "Argument list too long" error
and nothing will happen. find(1) works around that, but it allows
operations on files individually, not as aggregate.
OK, this is starting to make sense as a difficulty in management.

What, then, would be an appropriate directory structure? Start putting
files that begin with 'a' under the a/ directory? How do you handle
linking in the website then?

If too many files for the command buffer is what's preventing me from
using a wildcard, then I could achieve the same results by doing something
like '$command a*; $command b*;' etc.

Or, do I create directories and organize them thematically, according
to the functions of the website, such as 'new_customer_signup'? What
happens when that directory gets too many files in it?

It seems to me that if the website has grown so much that there
are too many files in the root directory, so much so that you can't
properly run commands on it, that a website is not the right solution
for the problem. There's a flaw in the design somewhere, and the
number of files in the root directory is a symptom of it.

I'm willing to see the other side of this, but so far I can't think of
an instance where a file management task I'm doing on the command line
is going to be more cumbersome within a single directory than across
and into multiple subdirectories. Is wildcarding the only problem?
Jun 2 '08 #9

On Apr 29, 10:14 am, Guillaume <ggra...@NOSPAM.gmail.com.INVALID>
wrote:
My website would work through it... If I have to parse file, or backup
them all, I could be in trouble.

It usually is a good idea to create sub directories based on the first
letters, the date or any other relevant data.
I'm far from an expert, but I've seen this in systems like
mailservers, where user's mailboxes are stored under something like /u/
s/username, seemingly expanding out each letter when more space
becomes necessary.

But in the case of a website, how would you properly handle links
within pages in that case?
Jun 2 '08 #10

On Tue, 29 Apr 2008 19:02:19 -0700 (PDT), la*****@gmail.com wrote:
On Apr 29, 9:32 am, "Peter H. Coffin" <hell...@ninehells.com> wrote:
>Shell filename globbing frequently expands into a list of filenames that
are stored in the command buffer, which generally has SOME kind of a
cap. Mostly I've seen 32k, but that'll vary. If your list of filenames
expands out past the cap, you'll get an "Argument list too long" error
and nothing will happen. find(1) works around that, but it allows
operations on files individually, not as aggregate.

OK, this is starting to make sense as a difficulty in management.

What, then, would be an appropriate directory structure? Start putting
files that begin with 'a' under the a/ directory? How do you handle
linking in the website then?
That's a common way to arrange Very Large Collections of files. And it
extends nicely horizontally (aa/ ab/ ac/ etc) as well as vertically
(a/a/ a/b/ a/c/). As far as site linking goes, you're generally dealing
with automated (by php or whatever) management of files by the point you
get to more than a few hundred anyway, so you can just automate the
references as well...

// First letter of the item id picks the shard directory (substr is 0-based).
$catalog_image_thumb_path = substr($catalog_item_id, 0, 1) .
"/" .
$catalog_item_id;
print "<td><a href='http://example.com/show_item?id=$item_id'><img src='$catalog_image_thumb_path'></a></td>";
If too many files for the command buffer is what's preventing me from
using a wildcard , then I could do the same results by doing something
like '$command a*; command b*;' etc.
Sometimes. It depends on what you're trying to do. If you're trying to
do an operation to each file, you're okay. If you're trying to (for
example) get a count of the total number of lines that refer to a
particular included module in all the .c files in a massive directory
but not in the .php files, that may be a bit more of a challenge.
Or, do I create directories and organize them thematically, according
to the functions of the website, such as 'new_customer_signup'? What
happens when that directory gets too many files in it?

It seems to me that if the website has grown so much that there
are too many files in the root directory, so much so that you can't
properly run commands on it, that a website is not the right solution
for the problem. There's a flaw in the design somewhere, and the
number of files in the root directory is a symptom of it.
Often it's a problem. Sometimes it's a matter of growth outstripping the
time available to rework a functioning site.
I'm willing to see the other side of this, but so far I can't think of
an instance where a file management task I'm doing on the command line
is going to be more cumbersome within a single directory than across
and into multiple subdirectories. Is wildcarding the only problem?
*grin* It's probably not, but it's the one I've run into more than once,
and it's the one that was primitive enough to be a recurring problem
rather than a "solve it once by applying more technology" situation.

--
It's not hard, it's just asking for a visit by the fuckup fairy.
-- Peter da Silva
Jun 2 '08 #11

On 30 Apr, 14:34, "Peter H. Coffin" <hell...@ninehells.com> wrote:
That's a common way to arrange Very Large Collections of files. And it
extends nicely horizontally (aa/ ab/ ac/ etc) as well as vertically
(a/a/ a/b/ a/c/).
I've often seen shared web servers where the individual sites are
placed in structures based on their first 2 letters:

a/
  aa/
    aardvark_site
  ab/
    abstract_site
b/
  ba/
    barker_site

and so on.
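
A helper that computes such a shard path might look like this (an illustrative sketch; the function name is made up):

```php
<?php
// Map a site or file name to its first-letter / first-two-letter shard,
// matching the a/aa/aardvark_site layout described above.
function shard_path($name) {
    return substr($name, 0, 1) . '/' . substr($name, 0, 2) . '/' . $name;
}

echo shard_path('aardvark_site'); // a/aa/aardvark_site
?>
```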
Jun 2 '08 #12

On Apr 30, 9:34 am, "Peter H. Coffin" <hell...@ninehells.com> wrote:

>
If too many files for the command buffer is what's preventing me from
using a wildcard , then I could do the same results by doing something
like '$command a*; command b*;' etc.

Sometimes. It depends on what you're trying to do. If you're trying to
do an operation to each file, you're okay. If you're trying to (for
example) get a count of the total number of lines that refer to a
particular included module in all the .c files in a massive directory
but not in the .php files, that may be a bit more of a challenge.
grep -c "search_string" *.c

What is more of a challenge to me, however, is doing a function
recursively into certain subdirectories while ignoring others.
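
As it happens, find's -prune handles the skip-these-subdirectories case; a sketch using hypothetical scratch paths:

```shell
# Scratch tree: search everything except the vendor/ subtree.
mkdir -p /tmp/prunedemo/src /tmp/prunedemo/vendor
printf '#include "mod.h"\n' > /tmp/prunedemo/src/a.c
printf '#include "mod.h"\n' > /tmp/prunedemo/vendor/b.c
cd /tmp/prunedemo

# -prune stops descent into ./vendor; every other *.c is streamed
# to grep for a total count of matching lines.
find . -path ./vendor -prune -o -name '*.c' -print0 | xargs -0 cat | grep -c 'mod.h'
```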
>
Or, do I create directories and organize them thematically, according
to the functions of the website, such as 'new_customer_signup'? What
happens when that directory gets too many files in it?
It seems to me that if the website has grown so much that there
are too many files in the root directory, so much so that you can't
properly run commands on it, that a website is not the right solution
for the problem. There's a flaw in the design somewhere, and the
number of files in the root directory is a symptom of it.

Often it's a problem. Sometimes it's a matter of growth outstripping the
time available to rework a functioning site.
It makes sense in the case of images, when you might have thousands or
tens of thousands. But for php files?

Can a website really grow so much that you outstrip the amount of
files that a directory can hold? Presumably all of these php files are
written by hand; if not, there's some incredible redundancy -- you
just don't need that many php files. ( I can see that, however, for
something like images or thumbnails, etc. ) . If you really do need
that amount of code, I would bet that a website/PHP application is the
wrong solution for the task.

I mean, if the problem is more than 10,000 files in a directory, who's
going to write all those files?
>
I'm willing to see the other side of this, but so far I can't think of
an instance where a file management task I'm doing on the command line
is going to be more cumbersome within a single directory than across
and into multiple subdirectories. Is wildcarding the only problem?

*grin* It's probably not, but it's the one I've run into more than once,
and it's the one that was primitive enough to be a recurring problem
rather than a "solve it once by applying more technology" situation.
What was the wildcarding problem you had? The size of the command
buffer to expand all of the filenames in * ? Is it something that
cannot be solved by in turn doing 'command a*; command b*'?

I think I have a good solution; others are cautioning me against it,
but so far, the problems they say I might encounter don't seem all
that problematic.
Jun 2 '08 #13

On Wed, 30 Apr 2008 13:43:35 -0700 (PDT), la*****@gmail.com wrote:
On Apr 30, 9:34 am, "Peter H. Coffin" <hell...@ninehells.com> wrote:

>>
If too many files for the command buffer is what's preventing me from
using a wildcard , then I could do the same results by doing something
like '$command a*; command b*;' etc.

Sometimes. It depends on what you're trying to do. If you're trying to
do an operation to each file, you're okay. If you're trying to (for
example) get a count of the total number of lines that refer to a
particular included module in all the .c files in a massive directory
but not in the .php files, that may be a bit more of a challenge.

grep -c "search_string" *.c

What is more of a challenge to me, however, is doing a function
recursively into certain subdirectories while ignoring others.
"Argument list too long" Ptui. Remember, the challenge is that we've
got too many files to fit within the shell's filename expansion. That's
why it's a *problem*.
Or, do I create directories and organize them thematically, according
to the functions of the website, such as 'new_customer_signup'? What
happens when that directory gets too many files in it?
It seems to me that if the website has grown so much that there
are too many files in the root directory, so much so that you can't
properly run commands on it, that a website is not the right solution
for the problem. There's a flaw in the design somewhere, and the
number of files in the root directory is a symptom of it.

Often it's a problem. Sometimes it's a matter of growth outstripping the
time available to rework a functioning site.

It makes sense in the case of images, when you might have thousands or
tens of thousands. But for php files?

Can a website really grow so much that you outstrip the amount of
files that a directory can hold? Presumably all of these php files are
written by hand; if not, there's some incredible redundancy -- you
just don't need that many php files. ( I can see that, however, for
something like images or thumbnails, etc. ) . If you really do need
that amount of code, I would bet that a website/PHP application is the
wrong solution for the task.

I mean, if the problem is more than 10,000 files in a directory, who's
going to write all those files?
I've seen a lot of dumb websites.... And don't forget that php can build
php programs that it can then call...
I'm willing to see the other side of this, but so far I can't think of
an instance where a file management task I'm doing on the command line
is going to be more cumbersome within a single directory than across
and into multiple subdirectories. Is wildcarding the only problem?

*grin* It's probably not, but it's the one I've run into more than once,
and it's the one that was primative enough to be a recurring problem
rather than a "solve it once by applying more technology" situtation.

What was the wildcarding problem you had?The size of the command
buffer to expand all of the filenames in * ? Is it something that
cannot be solved by in turn doing command a*, command b* ?
Eventually, yes, you can subdivide the namespace far enough down to
something that will fit in a command buffer. But you're then effectively
mirroring exactly the same problem you're trying to avoid by avoiding
building a tree of directories. It becomes something that can't be done
with a *simple* command, and you end up needing at least on-the-fly
shell programming with some for-loops and ranges, dogs and cats living
together, anarchy!
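
A sketch of the kind of on-the-fly batching loop being described (scratch directory and filenames are hypothetical):

```shell
# Scratch directory standing in for the single huge folder.
mkdir -p /tmp/batchdemo && cd /tmp/batchdemo
touch apple.php banana.php cherry.php

# Walk the namespace one leading letter at a time, so each glob
# expansion stays comfortably under the argument-buffer limit.
for prefix in a b c; do
  ls "${prefix}"* 2>/dev/null
done
```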
I think I have a good solution; others are cautioning me against it,
but so far, the problems they say I might encounter don't seem all
that problematic.
--
"Doesn't everybody?" is a question that never expects an answer of "No."
Jun 2 '08 #14
