General information about the BLEXBot site crawler
The BLEXBot crawler is an automated robot that visits pages to examine and analyse their content; in this respect it is similar to the robots used by the major search engine companies.
The BLEXBot crawler is identified by having a user-agent of the following form:
Mozilla/5.0 (compatible; BLEXBot/1.0; +http://webmeup-crawler.com/)
The BLEXBot crawler can be identified by the user-agent above. If you suspect that requests are being spoofed, check the IP address of the request and perform a reverse DNS lookup on it using appropriate tools: the resulting hostname should point to one of the sub-domains of *.webmeup.com.
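The check above can be sketched in Python using the standard library. This is a minimal illustration, not an official tool: the `is_blexbot` helper name is our own, and the only fact taken from this page is that genuine crawler IPs should reverse-resolve under *.webmeup.com. The forward-confirmation step guards against spoofed PTR records.

```python
import socket

def is_blexbot(ip: str) -> bool:
    """Return True if `ip` reverse-resolves under *.webmeup.com
    and that hostname resolves back to the same IP (forward-confirmed)."""
    try:
        host = socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return False  # no reverse DNS record for this IP
    if not host.endswith(".webmeup.com"):
        return False  # hostname is not a webmeup.com sub-domain
    try:
        # Forward-confirm: the claimed hostname must resolve back to
        # the original IP, otherwise the PTR record could be spoofed.
        return ip in socket.gethostbyname_ex(host)[2]
    except socket.gaierror:
        return False
```

For example, `is_blexbot("127.0.0.1")` returns False, since the loopback address does not resolve under webmeup.com.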
BLEXBot is a very site-friendly crawler. We made it as "gentle" as possible when crawling sites: it makes at most one request every 3 seconds, or even less frequently if a larger crawl delay is specified in your robots.txt file. BLEXBot respects the rules you specify in your robots.txt file.
If any problems arise, they may be due to peculiarities of your particular site, or to a bug on another site linking to you. If you notice any problem with BLEXBot, please report it to firstname.lastname@example.org. We will quickly apply custom settings for your particular site, so that crawling never affects your site's performance.
BLEXBot helps internet marketers get information on the link structure of sites and their interlinking on the web, so they can avoid technical and possible legal issues and improve the overall online experience. To do this, it is necessary to examine, or crawl, a page to collect and check all the links in its content.
If the BLEXBot crawler has visited a page on your site, this means that the links on that page had either never been collected and tested before or needed to be refreshed. For this reason you will not see recurring requests from the BLEXBot crawler to the same page.
The crawler systems are engineered to be as friendly as possible: they limit request rates to any specific site (BLEXBot makes no more than one request every 3 seconds) and automatically back off if a site is down or slow.
Firstly, note that BLEXBot fully respects the directives in your robots.txt file.
With a robots.txt file you may block the BLEXBot crawler from parts or all of your site, or slow it down, as shown in the following examples:
Block specific parts of your site:
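The example itself was not preserved here; a standard robots.txt rule of this kind (the /private/ path is a placeholder for your own directory) would be:

```
User-agent: BLEXBot
Disallow: /private/
```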
Block entire site:
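The example itself was not preserved here; the standard robots.txt rule for blocking a whole site is:

```
User-agent: BLEXBot
Disallow: /
```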
Slow the Crawler:
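The example itself was not preserved here; a standard robots.txt crawl-delay rule (the value 10, in seconds, is illustrative; choose your own) would be:

```
User-agent: BLEXBot
Crawl-delay: 10
```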
Attention: after you make changes to your robots.txt, please allow the crawler up to 10 minutes to fully stop crawling your website. Some pages might already be in the processing queue, so we cannot guarantee that the crawler will stop immediately; however, it should fully stop within 10 minutes at most.
All that said, we of course take seriously any request to stop crawling a site, or parts of a site, as well as any other feedback on the crawler's operations, and will act on it promptly and appropriately.
If this is the case for you, please don't hesitate to contact us at email@example.com and we will be happy to exclude your site, or otherwise investigate immediately.