What is a Robots.txt File in a Blog or Website, and How to Use It

Robots.txt - Do I really need it for my Website/Blog?

I am neither an SEO expert nor an expert in HTML or any other computer coding. I know only the outlines of these things, and since a little knowledge is a dangerous thing, I hold lengthy discussions with my colleagues before doing any task involving such code. But a one-eyed man is king among the blind, so last week I received a phone call from one of my colleagues saying that he had received an email from Google about a "Crawl Error". The most probable cause of this error is a wrong robots.txt file or wrong meta tags, so my first question was about his robots.txt, and after an affirmative reply I told him to remove it immediately.



Robots.txt - Image courtesy of wikipedia.org

What is a Robots.txt File?

This is a small text file containing instructions or commands that tell a search engine robot (crawler) how it should crawl your Blog or Website. It must sit at the root of your domain, for example yourblog.com/robots.txt.
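Curious what a robots.txt actually looks like? You can open one in your browser by adding /robots.txt to a site's address, or fetch it with a few lines of code. Below is a minimal sketch using only the Python standard library; yourblog.example.com is a placeholder, so substitute your own Blog/Website domain -

      # Fetch and print a site's robots.txt (Python standard library only).
      # "yourblog.example.com" is a placeholder domain, not a real site.
      from urllib.request import urlopen

      with urlopen("https://yourblog.example.com/robots.txt") as resp:
          print(resp.read().decode("utf-8"))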

Need or Use of a Robots.txt File in a Blog or Website

Search engine robots are designed to index as much high-quality information as they can, and they assume they can and will crawl everything unless you tell them otherwise. Always be very clear that the purpose of a robots.txt file is to stop a search engine robot from crawling a file, a folder, or a portion of your Blog/Website. Always bear in mind that it exists to hide something, not to reveal it. If you want your entire Blog/Website to be indexed, you do not need a robots.txt at all.
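You can check this default-allow behaviour yourself with the small robots.txt parser that ships in Python's standard library. This is only an illustrative sketch, and example.com is a placeholder domain -

      # With no rules parsed at all, the parser falls back to "allow
      # everything" - the same way crawlers treat a missing robots.txt.
      from urllib.robotparser import RobotFileParser

      rp = RobotFileParser()
      rp.parse([])  # behaves like an empty robots.txt
      print(rp.can_fetch("*", "https://example.com/any-page"))  # True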



Still think that there must be a robots.txt?

I have given below two sample files which will have no effect on your search results -

     (1)  User-agent: *
          Disallow:

          This file tells every search engine that the entire site or blog may be crawled. Here the '*' means no specific search engine is named, so the rule applies to every search engine. Disallow: is left blank, which means no crawl restriction is imposed, and thus the entire Blog/Website is to be crawled.

     (2)  User-agent: *
          Allow: /

          This file has the same effect as the one at serial No-(1). Here the Allow: command is used instead of Disallow:, and instead of leaving the value blank, a '/' is placed after Allow:, which denotes that everything is allowed. Hence the entire Blog/Website is to be crawled. (Note that Allow: was not part of the original robots.txt convention, but major crawlers such as Googlebot honour it.)
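If you want to confirm that both samples really behave the same, the sketch below feeds each rule set to Python's built-in parser and checks a placeholder URL; both checks come back as allowed -

      # Both "Disallow:" (blank) and "Allow: /" permit every URL.
      from urllib.robotparser import RobotFileParser

      for rules in (["User-agent: *", "Disallow:"],
                    ["User-agent: *", "Allow: /"]):
          rp = RobotFileParser()
          rp.parse(rules)
          print(rp.can_fetch("*", "https://example.com/any-page"))  # True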

Some sample files which impose crawl restrictions on a Robot -

     (1)  User-agent: *
          Disallow: /


          Here the first line is the same as in serial No-(1) and (2) above, but in the second line a '/' is placed after Disallow:, which means everything is disallowed. No robot will index anything from this Blog/Website; in other words, this Blog/Website is null and void for the search engines.
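The same quick check shows the effect of the '/' after Disallow: - every URL on the (placeholder) site now comes back as blocked, for a named robot and for robots in general -

      # "Disallow: /" blocks the whole site for every robot.
      from urllib.robotparser import RobotFileParser

      rp = RobotFileParser()
      rp.parse(["User-agent: *", "Disallow: /"])
      print(rp.can_fetch("Googlebot", "https://example.com/"))        # False
      print(rp.can_fetch("*", "https://example.com/any/other/page"))  # False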

     (2)  User-agent: *
          Disallow: /folder/

          This file is used to impose a crawl restriction on a particular page or folder. The word "folder" in the second line is to be replaced by the path of the page or folder on which the restriction is to be imposed. You can impose similar restrictions on many pages or folders in the same way (a quick way to test such rules is sketched after the sample), like -

          User-agent: *
          Disallow: /folder1/
          Disallow: /folder2/ and so on....
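Here is the promised sketch for testing folder-level rules. The folder names /private/ and /drafts/ and the domain are hypothetical stand-ins; the listed folders come back blocked while the rest of the site stays crawlable -

      # Listed folders are blocked; everything else remains allowed.
      from urllib.robotparser import RobotFileParser

      rp = RobotFileParser()
      rp.parse(["User-agent: *",
                "Disallow: /private/",
                "Disallow: /drafts/"])
      print(rp.can_fetch("*", "https://example.com/private/page.html"))  # False
      print(rp.can_fetch("*", "https://example.com/public-post.html"))   # True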
         
Once again I want to emphasize that if you do not want to impose a crawl restriction on your Blog or Website, you do not need a robots.txt file at all.
