Hi,
I have a site that I do not want the search engines to pick up on; it attracts people and problems I do not want.
Is there a tag (or some other means) of preventing this?
Thanks,
Steve
May 24 '06
Alan Silver wrote, in reply to Andy Dingley: Google Toolbar (for just one) is a backchannel that feeds the URLs of "hidden" web sites back to Google, where they then get spidered.
Also, Google has taken to looking at newly registered domain names to see if there is a web site there. This means that even if your site doesn't have any links to it and you don't use the Google Toolbar, Google could still find it!
Google will always be able to "find" your page; you can only tell Google
not to list it.
You can use either robots.txt or a <meta> tag to keep a site out of the
indexes of most search engines. Although there are spiders that do not
follow the rules in robots.txt, most search bots do.
If you just want a single page not to be listed by search engines, insert
the following tag into your HTML <head>:
<meta name="robots" content="noindex">
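For instance, a minimal page using this tag might look like the sketch below (the title and body are just placeholders):

```html
<html>
<head>
  <title>My private page</title>
  <!-- tells compliant search bots not to index this page -->
  <meta name="robots" content="noindex">
</head>
<body>
  ...
</body>
</html>
```

The tag only affects the page it appears on, so you would need to add it to every page you want kept out of the index.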
If you want a whole directory not to be listed, you have to create a text
file called "robots.txt" in the root directory of your domain. In this
file you write:
User-agent: *
Disallow: /DIRECTORY/
(Replace DIRECTORY with the name of the directory you want to disallow.)
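A slightly fuller robots.txt can combine several rules; the directory and bot names here are only example placeholders:

```
# Applies to all bots
User-agent: *
Disallow: /private/
Disallow: /drafts/

# Block one particular bot from the whole site
User-agent: BadBot
Disallow: /
```

Note that rules are grouped under each User-agent line, and an empty Disallow value (or no matching rule) means the bot may crawl everything.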
Hope I could help you.
You can request that your URL be removed from Google's index at http://services.google.com:8882/urlc...&lastcmd=login
and read more about Google's webmaster guidelines at http://www.google.com/support/webmas...y?answer=35769
Google generally plays by the rules, so a Disallow instruction in robots.txt
should work for Google's bot. But don't expect all bots to heed your
instructions (many will ignore robots.txt entirely). It's like standing on a
crowded public street telling people not to look at you: as long as you're in
sight, there's nothing preventing people (good, bad, and indifferent) from
looking.