How to Create a Robots.txt File and Use It to Block Indexing in Search

The robots.txt file lets you tell Google what to index and what not to index. If you don't understand it properly, it can hurt your website's presence in Google, so make sure you know how it works before uploading it to the root of your site.

People use it to block special parts of a website, such as the admin section. If you want certain pages removed from Google, you can do that with a robots.txt file, but if you don't know what you are doing, you can accidentally block your whole website. For instance, if your robots.txt file contains the lines "User-agent: *" and "Disallow: /", it's basically telling every crawler on the web to take a hike and not index any of your site's content. Some webmasters have also reported that a misconfigured robots.txt file caused search engines to stop crawling their sites entirely.
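
If you only want specific sections or pages (like an admin area) out of Google, the safer approach is to disallow just those paths rather than the whole site. As a quick sketch, the /admin/ folder and /private-page.html below are placeholder paths for illustration only; replace them with the paths your site actually uses:

User-agent: *
Disallow: /admin/
Disallow: /private-page.html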

Block all search engines from crawling your website

User-agent: *
Disallow: /

Allow all search engines to crawl the complete website

User-agent: *
Disallow:

(or create an empty “robots.txt” file)

Block search engines from crawling specific folders or parts of the server

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/

Exclude a single robot

User-agent: BadBot
Disallow: /

Only allow Google to crawl/index your website

User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /

You can find more details about the robots.txt file in Google's support documentation.

Can We Block Images Using a Robots.txt File?

Yes, you can prevent an image from appearing in Google Search using the robots.txt file. All you need to do is add the lines below, replacing the path with the path of your own image:

User-agent: Googlebot-Image
Disallow: /images/dogs.jpg

Remove all photos/images from the Google index

User-agent: Googlebot-Image
Disallow: /

To block all files of a specific file type, for example blocking .gif images while leaving .jpg images indexable, you'd use the following robots.txt entry:

User-agent: Googlebot-Image
Disallow: /*.gif$

In the Disallow patterns above, "*" matches any sequence of characters and "$" marks the end of the URL, so /*.gif$ matches any path that ends in .gif. Googlebot supports these wildcards, but not every crawler does. A couple more wildcard patterns are sketched in the example after this paragraph.
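
As an illustrative sketch, assume you wanted to keep Googlebot away from every URL containing a query string and from all PDF files (both hypothetical requirements, not part of the original article). Combining the two wildcards, that could look like this:

# Block any URL that contains a question mark (i.e., a query string)
# Block any URL that ends exactly in .pdf
User-agent: Googlebot
Disallow: /*?
Disallow: /*.pdf$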

Googlebot web crawlers follow the instructions in a robots.txt file, but other crawlers may not, because robots.txt is only a voluntary convention and each crawler follows its own rules. If you need to keep files truly private, password-protect the files and folders on your server; a rough sketch of this is shown below. We have also covered How to Stop Facebook from Crawling or Tracking Your Website Pages.
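
As a rough sketch only, assuming an Apache server (not something the steps above depend on), a folder can be password-protected with an .htaccess file that points at an .htpasswd file. The path below is a placeholder; point it at a real .htpasswd created with the htpasswd tool:

# .htaccess placed in the folder you want to protect (Apache assumed)
AuthType Basic
AuthName "Restricted Area"
# Placeholder path to the password file
AuthUserFile /path/to/.htpasswd
Require valid-user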
