mirror of
https://github.com/discourse/discourse.git
synced 2025-05-22 06:41:25 +08:00
FIX: blacklisted crawlers could get through by omitting the accept header
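The bypass works because Ruby's `=~` on a missing header is falsy. A minimal sketch (variable names here are illustrative, not from the commit): when a crawler omits the Accept header, `request.env['HTTP_ACCEPT']` is `nil`, `nil =~ /text\/html/` evaluates to `nil`, and the `&&` chain in `block_crawler` short-circuits, so the crawler was never blocked.

```ruby
# A crawler that sends no Accept header yields a nil env value.
accept_header = nil

# `nil =~ /.../` returns nil (falsy), so the old condition
#   request.env['HTTP_ACCEPT'] =~ /text\/html/ &&
# made block_crawler return a falsy value for such requests.
matched = accept_header =~ /text\/html/

puts matched.nil?   # prints "true": the check failed open
```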
@@ -289,7 +289,6 @@ class Middleware::RequestTracker

   def block_crawler(request)
     request.get? &&
     !request.xhr? &&
-    request.env['HTTP_ACCEPT'] =~ /text\/html/ &&
     !request.path.ends_with?('robots.txt') &&
     CrawlerDetection.is_blocked_crawler?(request.env['HTTP_USER_AGENT'])
   end