FIX: blacklisted crawlers could get through by omitting the Accept header

This commit is contained in:
Neil Lalonde
2018-04-17 12:39:21 -04:00
parent 059f1d8df4
commit b87fa6d749
2 changed files with 2 additions and 3 deletions

@@ -289,7 +289,6 @@ class Middleware::RequestTracker
   def block_crawler(request)
     request.get? &&
     !request.xhr? &&
-    request.env['HTTP_ACCEPT'] =~ /text\/html/ &&
     !request.path.ends_with?('robots.txt') &&
     CrawlerDetection.is_blocked_crawler?(request.env['HTTP_USER_AGENT'])
   end
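The bug comes from Ruby's nil handling: when a crawler sends no Accept header, `request.env['HTTP_ACCEPT']` is `nil`, and `nil =~ /text\/html/` evaluates to `nil`, so the whole `&&` chain short-circuits and the crawler is never blocked. A minimal sketch of the before/after logic (hypothetical simplified predicates; the real method is `block_crawler` in `Middleware::RequestTracker`, and `blocked_crawler?` here stands in for `CrawlerDetection.is_blocked_crawler?`):

```ruby
# Hypothetical stand-in for CrawlerDetection.is_blocked_crawler?
def blocked_crawler?(user_agent)
  user_agent.to_s.include?("BadBot")
end

# Pre-fix predicate: requires the Accept header to mention text/html.
def old_block_crawler?(accept:, user_agent:)
  # With no Accept header, accept is nil; `nil =~ /text\/html/` is nil
  # (falsy), so the conjunction fails and the request is NOT blocked.
  accept =~ %r{text/html} && blocked_crawler?(user_agent)
end

# Post-fix predicate: the Accept header check is simply dropped.
def new_block_crawler?(accept:, user_agent:)
  blocked_crawler?(user_agent)
end

old_block_crawler?(accept: nil, user_agent: "BadBot/1.0")  # => nil  (bypassed)
new_block_crawler?(accept: nil, user_agent: "BadBot/1.0")  # => true (blocked)
```

Dropping the check is safer than inverting it: any condition on a client-supplied header is trivially bypassed by a crawler that simply omits or changes the header.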