Recently I was viewing Planet Debian and noticed an entry which was horribly mangled, although the original post seemed to be fine.
It seemed obvious to me that some of the filtering the planet software had applied to the original entry had caused it to become broken, malformed, or otherwise corrupted. That made me wonder what attacks could be performed against the planet aggregator software used on Planet Debian.
Originally Planet Debian was produced using the planet software.
This was later replaced with the actively developed planet-venus software instead.
(The planet package has now been removed from Debian unstable.)
In fairness it seems most of the browsers I tested didn't actually alert when viewing that code - but as a notable exception Opera does.
I placed a demo online to test different browsers:
If your browser executes the code there, and it isn't Opera, then please do let me know!
The XSS testing of planets
Rather than produce a lot of malicious input feeds, I constructed and verified my attack entirely offline.
How? Well the planet distribution includes a small test suite, which saved me a great deal of time, and later allowed me to verify my fix. Test suites are good things.
The testing framework allows you to run tiny snippets of code such as this:
# ensure onblur is removed:
HTML( "<img src=\"foo.png\" onblur=\"alert(1);\" />",
      "<img src=\"foo.png\" />" );
Here we give two parameters to the HTML function: the input string and the expected output string. If the sanitization doesn't produce the string given as the expected result, an error is raised. (The test above is clearly designed to ensure that the onblur attribute and its value are removed.)
This was how I verified initially that the SRC attribute wasn't checked for malicious content and removed as I expected it to be.
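To illustrate the class of bug, here is a deliberately minimal sketch - not planet's actual sanitizer - of a filter that strips on* event-handler attributes but never inspects the value of src:

```python
import re

# Strip on* event-handler attributes (onblur, onclick, ...) but leave
# every other attribute, including src, completely untouched.
EVENT_ATTR = re.compile(r'\s+on\w+\s*=\s*"[^"]*"', re.IGNORECASE)

def sanitize(html):
    """Remove on* event-handler attributes from an HTML fragment."""
    return EVENT_ATTR.sub("", html)

# The onblur handler is removed, as the test suite expects:
print(sanitize('<img src="foo.png" onblur="alert(1);" />'))
# -> <img src="foo.png" />

# ...but a javascript: URI inside src survives untouched, which is
# exactly the kind of gap described here:
print(sanitize('<img src="javascript:alert(1);" />'))
# -> <img src="javascript:alert(1);" />
```

A sanitizer built this way passes the onblur test yet still lets script through via attribute values.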
Later I verified this by editing my blog's RSS feed to include a malicious, but harmless, extra section. This was then shown upon the Planet Debian output site for about 12 hours.
During the twelve-hour window in which the exploit was "live" I received numerous hits. Here are a couple of log entries (IP + referer + user-agent):
xx.xx.106.146 "http://planet.debian.org/" "Opera/9.80
xx.xx.74.192 "http://planet.debian.org/" "Opera/9.80
xx.xx.82.143 "http://planet.debian.org/" "Opera/9.80
xx.xx.64.150 "http://planet.debian.org/" "Opera/9.80
xx.xx.20.18 "http://planet.debian.net/" "Opera/9.63
xx.xx.42.61 "-" "gnome-vfs/2.16.3
The Opera hits were to be expected from my previous browser testing, but I'm still not sure why there were hits from User-Agents identifying themselves as gnome-vfs/n.n.n. Enlightenment would be rewarding.
In conclusion, the incomplete escaping of input by Planet/Venus was allocated the identifier CVE-2009-2937 and will be fixed by a point release.
There are a lot of planets out there - even I have one: Pluto - so we'll hope Opera is a rare exception.
(Pluto isn't a planet? I guess that's why I call my planet a special planet ;)
Tags: blogs, meta, planet-debian, security
27 September 2009 21:50
It is interesting that François Marier recently posted a brief "howto" document on debugging problems caused by overly-aggressive filtering with privoxy, as I've recently been having problems with that tool.
My home network frequently changes configuration depending on what I'm concentrating upon, but every few months I'll start/cease using the following tools:
- squid - The caching proxy server.
- tor - The onion router.
- privoxy - The filtering cache.
Recently I was experimenting with XSS attacks against various browsers, which meant using them for real. As not all browsers have the same anti-advert setups I was running privoxy to filter out web-annoyances, and I spotted a major flaw with it.
Unfortunately I can only describe the problem, not reproduce it or track it down. I'm 80% certain the bug is in privoxy, but the stack is deep enough that determining that for sure is problematic.
In short the issue is that HTTP requests would end up being sent to the wrong host:
- I load my start page in one tab: http://www.steve.org.uk/start/
- I click to open the following URL in another tab: http://www.perlmonks.org/?node=Newest Nodes.
- The request gets sent to http://steve.org.uk/?node=...
After that clicking around consistently sends requests to the first HTTP host which was accessed successfully. So, for example, attempting to visit http://foo.com/bar/ will send the request to http://steve.org.uk/bar - which then gives a 404.
In terms of setup I use a dnsmasq DNS cache, privoxy and iceweasel from Debian unstable. From the symptoms I'm not sure if iceweasel's "KeepAlive" system is to blame, or if privoxy has a bad cache of hosts. Perhaps it is dnsmasq returning bogus DNS data, or my cable connection itself having DNS issues.
Anyway once the symptoms present themselves closing the browser and restarting the cache fixes it. Until the next time which might be hours or days later.
I'd report it as a bug - but I don't know where it should be. Privoxy caching things it shouldn't? iceweasel having keepalive issues? dnsmasq returning wrong DNS entries?
I'd ask "Have you seen this before, internet world?" but I guess if you have tracked it down it'd be fixed by now, and it clearly isn't!
Anyway for the moment I've uninstalled privoxy.
ObFilm: Pulp Fiction
Tags: bugs, privoxy, proxies, random
30 September 2009 21:50
There was a recent post by Martin Meredith asking about dotfile management.
This inspired me to put together a simple hack which allows several operations to be carried out:
- dotfile-manager update [directory]
Update the contents of the named directory to the most recent version, via "hg pull" or HTTP fetch.
This could be trivially updated to allow git/subversion/CVS to be used instead.
(directory defaults to ~/.dotfiles/ if not specified.)
- dotfile-manager link [directory]
For each file in the named directory link _foo to ~/.foo.
(directory defaults to ~/.dotfiles/ if not specified.)
e.g. directory/_screenrc will be linked to from ~/.screenrc. But hostnames count too! So you can create directory/_screenrc.gold and that will be the target of ~/.screenrc on the host gold.my.flat
- dotfile-manager tidy
This removes any dangling ~/.* symlinks.
- dotfile-manager report
Report on any file ~/.* which isn't a symlink - those files might be added in the future.
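The link and tidy operations above can be sketched as follows. This is written in Python rather than the original shell script; the _foo naming and the hostname-suffix behaviour are as described in the post, while the function signatures and everything else are assumptions about how such a tool might look:

```python
import os
import socket

def link(dotdir, home=None, host=None):
    """Symlink dotdir/_foo to home/.foo, preferring host-specific files."""
    home = home or os.path.expanduser("~")
    host = host or socket.gethostname().split(".")[0]
    targets = {}
    for name in sorted(os.listdir(dotdir)):
        if not name.startswith("_"):
            continue
        base, dot, suffix = name.partition(".")
        if dot and suffix != host:
            continue  # host-specific file for some other machine
        # "_screenrc" sorts before "_screenrc.gold", so a matching
        # host-specific file overrides the generic one here:
        targets["." + base[1:]] = os.path.join(dotdir, name)
    for dest, src in targets.items():
        path = os.path.join(home, dest)
        if os.path.islink(path):
            os.unlink(path)
        if not os.path.exists(path):
            os.symlink(src, path)

def tidy(home=None):
    """Remove any dangling .* symlinks under home."""
    home = home or os.path.expanduser("~")
    for name in os.listdir(home):
        path = os.path.join(home, name)
        if name.startswith(".") and os.path.islink(path) \
                and not os.path.exists(path):
            os.unlink(path)
```

On the host gold, link() would point ~/.screenrc at directory/_screenrc.gold when it exists and at directory/_screenrc otherwise, and tidy() clears out any symlinks whose targets have since disappeared.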
Right now that lets me update my own dotfiles via:
dotfile-manager update ~/.dotfiles
dotfile-manager update ~/.dotfiles-private
dotfile-manager link ~/.dotfiles
dotfile-manager link ~/.dotfiles-private
It could be updated a little more, but it already supports profiles - if you assume "profile" means "input directory".
To be honest it probably needs to be perlified, rather than being hokey shell script. But otherwise I can see it being useful - much more so than my existing solution, which is ~/.dotfiles/fixup.sh inside my dotfiles repository.
ObFilm: Forever Knight
Tags: dotfile-manager, dotfiles, tools, utilities