
A challenge? Help isolating links in a WebPage

Hello, I am writing a script that calls a URL and reads the resulting
HTML into a function that strips out everything and returns ONLY the
links, so that I can build a link index of various pages.
I have been programming in PHP for over 2 years now and have never
encountered a problem like the one I am having now. To me this seems
like it should be just about the simplest thing in the world, but I
must admit I'm stumped BIG TIME!
For the sake of speed I chose to use preg_match_all to isolate the
links and return them in an array.
I have tried various regular expressions and modifications of the
regular expressions I find in PHP.net and scripts I've found laying
around as well, and have read through everything I can find on them,
including the stuff on PHP.net.
While researching I found an open source Class called snoopy that has
nearly the functionality I want, so like any good programmer, I used
it as a starting point.
The default regular expression used in snoopy for this
functionality is:

preg_match_all("'<\s*a\s.*?href\s*=\s*([\"\'])?(?(1)(.*?)\\1|([^\s\>]+))'isx", $document, $links);

For the benefit of all those new to regular expressions, here it is
broken down with the author's comments:

'<\s*a\s.*?href\s*=\s*          # find <a href=
([\"\'])?                       # find single or double quote
(?(1) (.*?)\\1 | ([^\s\>]+))'   # if quote found, match up to next matching
                                # quote, otherwise match up to next space

Of course $document is the complete HTML result of the webpage I am
indexing.

This expression only returns where the link is pointing to.
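For example, against some made-up markup the output holds just the href
values, nothing of the link text or the closing tag:

<?php
$document = '<a href="page1.html">Page One</a> <A HREF=page2.html>Page Two</A>';
preg_match_all("'<\s*a\s.*?href\s*=\s*([\"\'])?(?(1)(.*?)\\1|([^\s\>]+))'isx",
               $document, $links);
print_r($links[2]);   // quoted hrefs:   [0] => page1.html
print_r($links[3]);   // unquoted hrefs: [1] => page2.html
?>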

I need to obtain the complete link, from <a
href=mysite.com/mypage.html>My Page</a>.

Anyways, I needed the complete link, so I replaced that with this:

preg_match_all('/\<a href.*?\>(.*)(<\/a\\1>)/', $document, $links);

Again, for those new to regular expressions, here goes:
'/\<a href.*?\>   # Look for <a href
(.*)              # Grab everything starting at the first match
(<\/a\\1>)/'      # And continue to the </a> end of the link; \\1
                  # tells it to return ONLY that which matches the whole expression.
This appears to work fine, except when I run it I seem to only get the
first 17-20 links on the webpage, where the first expression may
return over 100. This told me something might be wrong, so I looked
a LOT closer at both expressions and the pages I'm dealing with, and
realized that some of the links may use various case and spacing
combos. The second expression doesn't appear to match anything but
exact spacing & case. So I went back to the drawing board and came up
with this:
preg_match_all("'<\s*a\s.*?href.*?\>(.*)(<\/a\\1>)'", $document, $links);
Again, here it is broken down for those new to regular expressions:
'<\s*a\s.*?href.*?\>   # Find all <a href regardless of case or spacing
(.*)                   # Grab everything just matched
(<\/a\\1>)             # Find the closing </a> and stop

Using the same webpage as the first two, this expression only returns
12 results! It is actually returning fewer than the first two.
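
Here is a stripped-down example (made-up URLs) of the kind of behavior
I am fighting, with the \\1 business taken out:

<?php
$doc = '<a href="1.html">one</a> <a href="2.html">two</a>';

// greedy (.*): the match runs to the LAST </a>, so neighbouring
// links collapse into a single result
preg_match_all('/<a href.*?\>(.*)<\/a>/', $doc, $m);
print_r($m[1]);   // [0] => one</a> <a href="2.html">two

// lazy (.*?): each link is matched separately
preg_match_all('/<a href.*?\>(.*?)<\/a>/', $doc, $m);
print_r($m[1]);   // [0] => one, [1] => two
?>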

Right now I am really mad at regular expressions. Could someone
please not just give me the solution to the problem, but detail the
thought process used to come up with that solution, and show what I'm
doing wrong here, so that next time I use the PCRE functions I can
apply the correct thinking.

Look closely at my comments; they are by no means exact. This is how I
BELIEVE the regular expression is being evaluated, and I am open to
criticism on that point.

Thanx in advance, and I certainly hope this gets an informative &
instructional thread going for the benefit of everyone new to Regular
Expressions.
Jul 17 '05 #1
Steve wrote:
Hello, I am writing a script that calls a URL and reads the resulting
HTML into a function that strips out everything and returns ONLY the
links, this is so that I can build a link index of various pages.
I have been programming in PHP for over 2 years now and have never
encountered a problem like the one I am having now. To me this seems
like it should be just about the simplest thing in the world, but I
must admit I'm stumped BIG TIME!


Why don't you do yourself a favour and use HTMLSax from Pear:
http://pear.php.net/package-info.php...ge=XML_HTMLSax
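
A rough, untested sketch of the kind of handler you would write --
going from the package docs, so do check the exact signatures there:

<?php
require_once 'XML/HTMLSax.php';

// collects the href of every <a> tag the parser reports
class LinkHandler {
    var $links = array();
    function openHandler(&$parser, $name, $attrs) {
        if (strtolower($name) == 'a' && isset($attrs['href'])) {
            $this->links[] = $attrs['href'];
        }
    }
    function closeHandler(&$parser, $name) { }
}

$handler = new LinkHandler();
$parser = new XML_HTMLSax();
$parser->set_object($handler);
$parser->set_element_handler('openHandler', 'closeHandler');
$parser->parse($document);   // $document = your fetched HTML
print_r($handler->links);
?>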

Regards

Hartmut


Jul 17 '05 #2
Well two reasons really.
First off I didn't know this thing existed, and I'll now probably have
to learn a new API :)
And the second was to get a good discussion going on PCRE's
preg_match_all regular expressions.
But thank you, and I will take a closer look, since I'm running out of
dev time waiting on this one part.

Hartmut König wrote:
Why don't you do yourself a favour and use HTMLSax from Pear:
http://pear.php.net/package-info.php...ge=XML_HTMLSax

Jul 17 '05 #3
I just downloaded it and took a peek.
It's total overkill for what I need, and I would have to recode
from the beginning to utilize it. I will, however, be using it on my
next project.
Also, as stated earlier, I really want to do this with a regular
expression.


Jul 17 '05 #4
Steve wrote:
I just downloaded it, and took a peek.
It's total overkill for what I need, and I would have to recode
from the beginning to utilize it. I will however be using it on my
next project.
Also as stated earlier, I really want to do this with a regular
expression.


I used a mix of preg_match_all() and substr().

source at http://www.geocities.com/alterpedro/phps.html
result at http://www.geocities.com/alterpedro/php.html

Jul 17 '05 #5
Pedro wrote:
I used a mix of preg_match_all() and substr().
source at http://www.geocities.com/alterpedro/phps.html
result at http://www.geocities.com/alterpedro/php.html

and strpos(), and preg_replace()


I have pasted the code to geocities, because it was much bigger
than I felt "safe" to post here. Much of its size was the
yahoo HTML chunk that I had to remove before the file
got accepted ... but I had thought of that and didn't want
to go back to some other way. Now I'm home, thinking clearer,
and the code is better :)

New Version!
[ I'll remove the geocities pages in a few days ]

<?php
function extract_URLs($s) {
    $res = array();
    preg_match_all('@(<a .*</a>)@Uis', $s, $a);
    foreach ($a[1] as $x) {
        $gtpos = strpos($x, '>');
        $y = substr($x, 0, $gtpos);
        if ($hrefpos = strpos($x, 'href=')) {
            $z = substr($y, $hrefpos+5);
            $z = preg_replace('/^(\S+)\s.*$/U', '$1', $z);
            if ($z[0] == '"' && substr($z, -1) == '"') $z = substr($z, 1, -1);
            if ($z[0] == "'" && substr($z, -1) == "'") $z = substr($z, 1, -1);
            $res[] = array(substr($x, $gtpos+1, -4), $z);
        }
    }
    unset($a);
    return $res;
}
###
### example usage:
###

$data = <<<EOT
<a href=z>zz</a> <a href="z" bold="yes">ZZ</a>
<a link="y">yy</a> <a title="x" href='aa'>aa</a>
text before, <a href="href.here"><b>bold text inside</b></a> and text after
<a href="image.png"><img src="image.png"/></a>
EOT;

$LINKS = extract_URLs($data);
foreach ($LINKS as $v) {
    echo $v[0], ' --> [', $v[1], "]\n";
}
?>
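
Traced by hand, that example should print:

zz --> [z]
ZZ --> [z]
aa --> [aa]
<b>bold text inside</b> --> [href.here]
<img src="image.png"/> --> [image.png]

(the <a link="y">yy</a> one is dropped because it has no href)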

Jul 17 '05 #6
Pedro wrote:
[snip]
Looks great, and it does basically what I want...
Care to explain to the class how it works? Especially the regular expression part?

And thanx by the way.
Jul 17 '05 #7
Steve wrote:
Looks great, and it does basically what I want...
Care to explain to the class how it works? Especially the regular
expression part?

And thanx by the way.


Let's see how I go about that ... hope it makes sense :)
# extract URLs from a string; return an array of arrays --
# each inner array has the link text and the URL
function extract_URLs($s) {
    # initialize return array
    $res = array();
    # grab all "<a ...</a>" bits
    preg_match_all('@(<a\s.*</a>)@Uis', $s, $a);
    #               |`----v-----'|||`- s: dot metacharacter matches all (\n included)
    #               |     |      ||`-- i: case insensitive matches
    #               |     |      |`--- U: ungreedy, so that '<a href="1">1</a><a href="2">2</a>' does *NOT* match all_of_this
    #               |     |      `---- end pattern delimiter
    #               |     `----------- grab into $a[1]
    #               `----------------- pattern delimiter
    #
    # for the pattern inside the parentheses:
    #   <a\s   literal "<a" followed by whitespace, which stops the regex
    #          from matching "abbr", "acronym", "address", "applet", and "area"
    #   .*     any number of anything
    #          (except "</a>", because we're in ungreedy matching)
    #   </a>   literal "</a>"

    # for all "<a ...</a>" matches
    foreach ($a[1] as $x) {
        # find the first ">" -- certainly it is the one that ends the opening "<a "
        $gtpos = strpos($x, '>');
        # and isolate that part
        $y = substr($x, 0, $gtpos);
        # if there's a "href=" there, we have a good match!
        # this gets rid of "title" in <a title="index" href="index.html">
        if ($hrefpos = strpos($y, 'href=')) {
            # put the URL, and trailing stuff (up to, but not including, the closing ">"), in $z
            $z = substr($y, $hrefpos+5);
            # remove everything after, and including, the first whitespace
            # (whitespace is not allowed in URLs);
            # this gets rid of "title" in <a href="index.html" title="index">
            # if there's no match, there also is no change
            $z = preg_replace('/^(\S+)\s.*$/U', '$1', $z);
            # /    start of expression
            # ^    start of string
            # (    grab
            # \S+  one or more non-whitespace characters
            # )    into $1
            # \s   discard the first whitespace
            # .*   and everything following it
            # $    up to the end of the string
            # /U   end of expression, do ungreedy match (why? I can't remember :)

            # if the URL is delimited by '"' or "'", remove those
            if ($z[0] == '"' && substr($z, -1) == '"') $z = substr($z, 1, -1);
            if ($z[0] == "'" && substr($z, -1) == "'") $z = substr($z, 1, -1);
            # save result in array
            # $x still is the whole string "<a href='index.html' title='index'>link text</a>"
            # $gtpos is the position of the first ">" (the one just before "link text")
            # and the last 4 characters of $x are "</a>"
            #
            # $z is the URL from the href that has been dealt with previously
            $res[] = array(substr($x, $gtpos+1, -4), $z);
        }
    }
    # I don't like leaving "large" things abandoned
    unset($a);

    return $res;
}
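
A quick made-up example of why the "<a\s" matters, and of the
whitespace being any whitespace:

<?php
$s = '<abbr title="x">ab</abbr> <a
href="multi.html">spans lines</a>';
preg_match_all('@(<a\s.*</a>)@Uis', $s, $a);
print_r($a[1]);   // only the real link is returned, and the
                  // newline inside the tag is fine: \s matches it
?>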

compact new version function:

<?php
function extract_URLs($s) {
    ### version 3
    ### changes from version 2:
    ###   the character separating "<a" from "href" (or whatever) may be any whitespace
    ###   only need to test for "href=" in the <a ...> part
    $res = array();
    preg_match_all('@(<a\s.*</a>)@Uis', $s, $a);
    foreach ($a[1] as $x) {
        $gtpos = strpos($x, '>');
        $y = substr($x, 0, $gtpos);
        if ($hrefpos = strpos($y, 'href=')) {
            $z = substr($y, $hrefpos+5);
            $z = preg_replace('/^(\S+)\s.*$/U', '$1', $z);
            if ($z[0] == '"' && substr($z, -1) == '"') $z = substr($z, 1, -1);
            if ($z[0] == "'" && substr($z, -1) == "'") $z = substr($z, 1, -1);
            $res[] = array(substr($x, $gtpos+1, -4), $z);
        }
    }
    unset($a);
    return $res;
}
?>
Jul 17 '05 #8
Pedro wrote:
[snip]

<-- Gives Pedro a Gold Star and says, "Thank you for that detailed
report, you get a Gold Star!"
Jul 17 '05 #9
Steve wrote:
Thank You for that detailed report.


You're very welcome.

Jul 17 '05 #10
