BLEXBot Crawler

General information about the BLEXBot site crawler

What is it

The BLEXBot crawler is an automated robot that visits pages to examine and analyse their content. In this sense it is similar to the robots used by the major search engine companies.

The BLEXBot crawler is identified by having a user-agent of the following form:
Mozilla/5.0 (compatible; BLEXBot/1.0; +

The BLEXBot crawler can be identified by the user-agent above. If you suspect that requests are being spoofed, first check the IP address of the request and perform a reverse DNS lookup on it with appropriate tools - the resulting hostname should be one of the sub-domains of *
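The check described above can be sketched as forward-confirmed reverse DNS: reverse-resolve the IP, confirm the hostname belongs to the crawler's domain, then forward-resolve the hostname and confirm it maps back to the same IP. The expected domain below is a placeholder, since the real one is not reproduced in this article:

```python
import socket

def hostname_in_domain(hostname, domain):
    """True if hostname is the domain itself or a sub-domain of it."""
    return hostname == domain or hostname.endswith("." + domain)

def verify_crawler_ip(ip, expected_domain):
    """Forward-confirmed reverse DNS check for a claimed crawler IP.
    `expected_domain` is a placeholder for the crawler's real domain."""
    try:
        # Reverse DNS: look up the PTR record for the IP.
        hostname, _, _ = socket.gethostbyaddr(ip)
    except socket.herror:
        return False
    if not hostname_in_domain(hostname, expected_domain):
        return False
    try:
        # Forward DNS: the hostname must resolve back to the same IP,
        # which guards against spoofed PTR records.
        forward_ips = socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False
    return ip in forward_ips
```

The two-step lookup matters: anyone who controls reverse DNS for their own IP range can make a PTR record claim any hostname, but they cannot make the crawler's real domain forward-resolve to their IP.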

We care about your site's performance and will never hurt it!

BLEXBot is a very site-friendly crawler. We made it as "gentle" as possible when crawling sites: it makes at most 1 request per 3 seconds, or even less frequently if a larger crawl delay is specified in your robots.txt file. BLEXBot respects the rules you specify in your robots.txt file.

If any problems arise, they may be due to peculiarities of your particular site or to a bug on another site linking to you. Therefore, if you notice any problem with BLEXBot, please report it to . We will quickly apply custom settings for your particular site so that crawling never affects your site's performance.

Why is it crawling my site

BLEXBot helps internet marketers obtain information about the link structure of sites and their interlinking on the web, in order to avoid technical and potential legal issues and to improve the overall online experience. Doing so requires examining, or crawling, each page to collect and check all the links in its content.

If the BLEXBot crawler has visited a page on your site, it means that the links on that page have either never been collected and tested before or needed to be refreshed. For this reason you will not see recurring requests from the BLEXBot crawler to the same page.

The crawler systems are engineered to be as friendly as possible: request rates to any specific site are limited (BLEXBot makes no more than one hit per 3 seconds), and the crawler automatically backs off if a site is down or slow.

Blocking with robots.txt

First, note that BLEXBot:

  1. Collects only publicly available information that any visitor could access. If you think the crawler is collecting sensitive information, please remove that information from public access.

  2. Cannot overload your site or do it any harm - BLEXBot is designed to be very polite and makes at most 1 hit per 3 seconds. Besides, you can easily slow down BLEXBot (and any other robot or crawler that takes directions from the robots.txt file on your site).

  3. Does not read, parse, collect, or store any information from your site other than the links on your pages. This applies to any text, graphical, or video material, or anything else on your pages.

With a robots.txt file you can block the BLEXBot crawler from parts or all of your site, or slow it down, as shown in the following examples:

Block specific parts of your site:
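A typical rule of this kind, using standard robots.txt syntax (the paths here are placeholders - substitute the directories you want to protect):

```
User-agent: BLEXBot
Disallow: /private/
Disallow: /tmp/
```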

Block entire site:
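Using standard robots.txt syntax, a single `Disallow: /` rule excludes the crawler from every page:

```
User-agent: BLEXBot
Disallow: /
```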

Slow the Crawler:
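The `Crawl-delay` directive, which the article says BLEXBot honours, takes a value in seconds; 10 here is only an illustrative value:

```
User-agent: BLEXBot
Crawl-delay: 10
```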

Attention: after you change your robots.txt, please allow the crawler up to 10 minutes to fully stop crawling your website. Some pages may already be in the processing queue, so we cannot guarantee the crawler stops immediately; however, it should fully stop within 10 minutes at most.

For a general introduction to the robots.txt protocol, please see . See also the Wikipedia article for more details and examples of robots.txt rules.

Contact us

All that said, we of course take seriously any request to stop crawling a site or parts of a site, as well as any other feedback on the crawler's operations, and we will act on it promptly and appropriately.

If this is the case for you, please don't hesitate to contact us at and we will be happy to exclude your site, or otherwise investigate immediately.