Bytes IT Community

How to pull the existing Title tag into the body of the document as text only

I need to pull the current <title> tag from the head of the HTML page into
the body. We started coding this with PHP and got it working, but the
entire page is being parsed, which really bogs down the system and makes
the pages load very slowly. (The system is a dual P4 2.4 GHz, 1 GB RAM,
Red Hat 9.0.)

To fix this we switched to JavaScript, which worked great: the text loaded
immediately. However, search engines will not execute the JavaScript, so
this text will not be available in the document to be indexed.

Does anyone have a way to pull the title from a document and display its
content as plain text within the document, without loading down the
system? When we tried it, the pages took at least five times longer to
load than without the code.

Any feedback at all would be greatly appreciated. Thanks very much.
Jul 17 '05 #1
11 Replies


*** Mike wrote/escribió (Thu, 05 Aug 2004 05:08:49 GMT):
Does anyone have a way to pull a title tag from a document and display
its content as text only within a document, Without loading down the
system? When we tried it, it took the pages at least 5 times longer to
load than without the code.


I'm not sure of your exact requirements, but the easiest solution seems to
be to use a variable:

<?
$title='Foo';
?>
<html>
<head>
<title><?=$title?></title>
</head>
<body>
<p>This page is "<?=$title?>".</p>
</body>
</html>

--
Álvaro G. Vicario - Burgos, Spain
Jul 17 '05 #2

"Mike" <pl****@spam-me-not.com> wrote in message
news:Bh*******************@twister.socal.rr.com...
[snip original question]


Can you describe the situation in greater detail? Are you saying you have
many static HTML files that need adjustment, or are the pages generated
dynamically?

And what exactly do you mean by parsing the page? Finding a text string
within a file is an absolute cakewalk; it shouldn't bog down the system to
such a degree.
Jul 17 '05 #3

We have 1000s of static pages. We need to read the title from each
page and display it on the page.

We are using the following PHP function, which causes extremely slow
page load times:

function get_title () {
    $tempHost  = $_SERVER['HTTP_HOST'];
    $tempPath  = $_SERVER['PHP_SELF'];
    $tempQuery = $_SERVER['QUERY_STRING'];
    $loc = "http://" . $tempHost . $tempPath . $tempQuery;

    $open = fopen("$loc", "r");
    while (!feof($open)) {
        $line = fgets($open, 255);
        $string = $line;
        while (ereg('<title>([^<]*)</title>(.*)', $string, $regs)) {
            $string = $regs[2];
        }
    }

    return $regs[1];
}

"Chung Leong" <ch***********@hotmail.com> wrote in message news:<oJ********************@comcast.com>...
[snip]
Jul 17 '05 #4


beau <be**@customautotrim.com> wrote:
WE have 1000s of static pages. We need to read the title from each
page and display that on the page.
Why?
We are using the following php function which causes extremely slow
page load times..
If the pages are static you should create a static list with titles.
function get_title () {

$tempHost = $_SERVER['HTTP_HOST'];
$tempPath = $_SERVER['PHP_SELF'];
$tempQuery = $_SERVER['QUERY_STRING'];
$loc = "http://" . $tempHost . $tempPath . $tempQuery;

$open=fopen("$loc","r");
while(!feof($open))
{ .... }

return $regs[1];

}


Definition of recursive: see recursive.
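
A hedged sketch of the "static list with titles" idea: a generated include
file mapping each page's path to its title, looked up by every page. The
file layout, paths, and titles below are invented for illustration.

```php
<?php
// titles.php would be regenerated offline whenever the pages change;
// serving a page then costs one include and one array lookup -- no
// parsing, no HTTP request. Entries here are made-up examples.
$titles = array(
    '/products/widgets.html' => 'Widgets',
    '/products/gadgets.html' => 'Gadgets',
);

// Look the current page up in the list; empty string if unknown.
function page_title($titles, $path)
{
    return isset($titles[$path]) ? $titles[$path] : '';
}
```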

--

Daniel Tryba

Jul 17 '05 #6

We need to pull that info and display it as text. We want to do it with
PHP if possible. I don't want to have to build a database of the same
info just to pull it that way; there has to be an efficient way to do it
right on the page as it exists.
Thanks for your help.
mike


Daniel Tryba wrote:
[snip]

Jul 17 '05 #7


"beau" <be**@customautotrim.com> wrote in message
news:<3d**************************@posting.google.com>...
WE have 1000s of static HTML pages to update. We need to read the
page title and display it within the page.

I am using the following function which causes page load times of
1min+

[snip get_title() function]


The main problem here is that you're pulling the text through HTTP. You will
see a significant improvement in your script by fopen()ing the file through
the file system.

Perhaps even more damaging to performance is the fact that you're calling
ereg() on each line of the file. ereg() is known to be less efficient than
PCRE, and compiling a regular expression takes time.
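
Both suggestions together might look like the following sketch. This is not
the poster's code; extract_title() and the regex delimiters are assumptions:

```php
<?php
// One PCRE match over the whole buffer instead of ereg() on every line.
function extract_title($html)
{
    // The 'i' modifier makes the match case-insensitive, so <Title> works.
    if (preg_match('~<title>([^<]*)</title>~i', $html, $m)) {
        return $m[1];
    }
    return '';
}

// $_SERVER['SCRIPT_FILENAME'] is the local path of the executing script,
// so this read never goes back through the web server the way the
// http:// fopen() did:
// $title = extract_title(file_get_contents($_SERVER['SCRIPT_FILENAME']));
```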
Jul 17 '05 #9

Mike <pl****@spam-me-not.com> wrote:
We need to pull that info and display it as text. We want to do it with
php if it is possible. I dont want to have to build a database for the
same info to pull it that way, there has to be an efficient way to do it
right on the page that exists as it is.


If you need this, there is something definitely flawed in your setup...
but you can get the already-generated output from a script with an ob
handler: http://nl3.php.net/manual/en/ref.outcontrol.php
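
A minimal sketch of that ob handler approach: buffer all output, then copy
whatever is inside <title> over a placeholder in the body. The %TITLE%
marker is invented for this sketch; real pages would need some such marker.

```php
<?php
// Output-buffer callback: receives the whole generated page as a string
// and returns what actually gets sent to the browser.
function inject_title($buffer)
{
    $title = '';
    if (preg_match('~<title>([^<]*)</title>~i', $buffer, $m)) {
        $title = $m[1];
    }
    return str_replace('%TITLE%', $title, $buffer);
}

// A real page would call ob_start('inject_title') before any output and
// put %TITLE% wherever the title text should appear in the body.
```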

--

Daniel Tryba

Jul 17 '05 #10

be**@customautotrim.com (beau) wrote in message news:<3d**************************@posting.google.com>...
WE have 1000s of static pages. We need to read the title from each
page and display that on the page.

We are using the following php function which causes extremely slow
page load times..

[snip get_title() function]


If I understand what you mean, you are trying to build a fairly static
TOC HTML page. Why don't you just implement some very basic caching?
(No rocket science implied, really.)

Basically, you keep the same script, but crontab it (or use the Windows
scheduler if you are running Windows) every hour, or every day, or every
10 minutes, depending on how often the <title>s are changed.

With some basic modifications the script, instead of generating the page
on the fly, will generate a static .html page whose speed can't be beaten.

Or am I misunderstanding something?
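
A hedged sketch of that cron-driven pre-generation, with the directory,
placeholder, and crontab line all assumed for illustration:

```php
<?php
// Run offline (from crontab), not per request: bake each page's own
// <title> into a %TITLE% placeholder so the served files stay plain
// static HTML.
function bake_title($html)
{
    if (preg_match('~<title>([^<]*)</title>~i', $html, $m)) {
        return str_replace('%TITLE%', $m[1], $html);
    }
    return $html;   // leave pages without a title untouched
}

// Assumed crontab entry:  0 * * * * php /usr/local/bin/bake.php
// foreach (glob('/var/www/html/*.html') as $file) {
//     file_put_contents($file, bake_title(file_get_contents($file)));
// }
```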

HTH,
JFLac
Jul 17 '05 #11


"Chung Leong" <ch***********@hotmail.com> wrote in message
news:5M********************@comcast.com...
[snip]
The main problem here is that you're pulling the text through HTTP. You will
see a significant improvement in your script by fopen()ing the file through
the file system.

Perhaps even more damaging to performance is the fact that you're calling
ereg() on each line of the file. ereg() is known to be less efficient than
PCRE, and compiling a regular expression takes time.


It seems that my original reply got lost somewhere... the main problem is
not ereg, but the fact that he is not terminating the while loops after the
<title> tag is found. Inserting a break 2; into the inner while loop after
$string = $regs[2]; will fix the big problem. That, and an fclose($open); at
the end, wouldn't hurt either.
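
Norm's fix, sketched against a stream. preg_match() stands in for ereg()
here only so the sketch runs on current PHP; the early break and the
fclose() are the point.

```php
<?php
// Read line by line, but stop as soon as the title is found instead of
// scanning the whole file the way the original loop did.
function title_from_stream($fh)
{
    $title = '';
    while (!feof($fh)) {
        $line = fgets($fh, 255);
        if (preg_match('~<title>([^<]*)</title>~i', $line, $regs)) {
            $title = $regs[1];
            break;          // the rest of the file never gets read
        }
    }
    fclose($fh);            // the original function never closed the handle
    return $title;
}
```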

Norm
Jul 17 '05 #12

This discussion thread is closed; replies have been disabled.