⏳ AI & Bot Detection

Crawl-Delay

Crawl-delay is a robots.txt directive that tells crawlers to wait a specified number of seconds between consecutive requests, preventing aggressive crawling from overloading a server.

What Is Crawl-Delay?

Crawl-delay is a robots.txt directive that specifies the minimum number of seconds a crawler should wait between requests. For example, Crawl-delay: 10 tells the bot to wait at least 10 seconds between requests. It is supported by Bingbot, Yandex, and some other crawlers, but not by Googlebot.
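A minimal robots.txt illustrating the directive might look like this (the user agents and values here are illustrative, not recommendations):

```
# Ask Bingbot to wait at least 10 seconds between requests
User-agent: Bingbot
Crawl-delay: 10

# Other crawlers get no delay directive; Googlebot ignores Crawl-delay anyway
User-agent: *
Disallow:
```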

Why Crawl-Delay Matters

Crawl-delay helps protect servers from being overwhelmed by aggressive crawlers. If a bot is making hundreds of requests per second, it can degrade performance for real users. Crawl-delay provides a simple way to throttle well-behaved bots without implementing complex rate limiting.
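From the crawler's side, honoring a crawl-delay is just a matter of spacing out requests. A minimal sketch in Python (the function name and the yield-instead-of-fetch placeholder are ours, for illustration):

```python
import time

def fetch_politely(urls, delay):
    """Yield URLs one at a time, ensuring at least `delay` seconds
    elapse between consecutive requests."""
    last = 0.0  # monotonic timestamp of the previous request
    for url in urls:
        wait = delay - (time.monotonic() - last)
        if wait > 0:
            time.sleep(wait)
        last = time.monotonic()
        # A real crawler would issue the HTTP request here;
        # we yield the URL as a stand-in.
        yield url
```

A well-behaved bot applies this throttle per host, since crawl-delay protects an individual server rather than the crawler as a whole.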

How to Use Crawl-Delay

Add Crawl-delay: N to the relevant user-agent section in robots.txt. Googlebot ignores the directive, so manage its crawl rate through Google Search Console instead. Be careful not to set the delay too high: a delay of N seconds caps a compliant crawler at 86,400/N requests per day, which can slow how quickly your pages are discovered and refreshed. Monitor compliance in your server logs with LogBeast.
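You can check how a parser reads your directive, and what it implies for daily crawl budget, with Python's standard-library robots.txt parser (the sample file and the budget arithmetic are ours):

```python
from urllib.robotparser import RobotFileParser

# Sample robots.txt applying a 10-second delay to all compliant crawlers
ROBOTS_TXT = """\
User-agent: *
Crawl-delay: 10
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

delay = rp.crawl_delay("*")       # the parsed delay in seconds
daily_budget = 86400 // delay     # upper bound on requests per day
```

With a 10-second delay, a compliant bot can make at most 8,640 requests per day, so a 100,000-page site would take roughly 12 days to crawl in full.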

📖 Related Article: The Ultimate robots.txt Guide — Read our in-depth guide for practical examples and advanced techniques.

Analyze This in Your Own Logs

LogBeast parses, visualizes, and alerts on server log data — see crawl patterns, bot activity, and errors in seconds.

Try LogBeast Free