Entries posted in September 2009

I've got a sick friend. I need her help.

Wednesday, 30 September 2009

There was a recent post by Martin Meredith asking about dotfile management.

This inspired me to put together a simple hack which allows several operations to be carried out:

dotfile-manager update [directory]

Update the contents of the named directory to the most recent version, via "hg pull" or HTTP fetch.

This could be trivially updated to allow git/subversion/CVS to be used instead.

(directory defaults to ~/.dotfiles/ if not specified.)
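The update logic can be sketched like this - a Python illustration only, since the real dotfile-manager is a shell script, and the fallback URL here is made up:

```python
import os

def update_command(directory=None):
    """Return the command that would refresh a dotfile directory.

    If the directory is a Mercurial clone we pull and update;
    otherwise we fall back to re-fetching over HTTP.  Illustrative
    sketch only - the real tool is a shell script, and the fallback
    URL is a placeholder.
    """
    directory = directory or os.path.expanduser("~/.dotfiles")
    if os.path.isdir(os.path.join(directory, ".hg")):
        # An existing Mercurial clone: pull and update in place.
        return ["hg", "pull", "--update", "-R", directory]
    # Otherwise assume the directory was fetched over HTTP.
    return ["wget", "-O", os.path.join(directory, "dotfiles.tar.gz"),
            "http://example.com/dotfiles.tar.gz"]
```

Swapping in git or subversion would just mean testing for .git or .svn and returning the matching pull/update command.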

dotfile-manager link [directory]

For each file _foo in the named directory, create a symlink ~/.foo pointing at it.

(directory defaults to ~/.dotfiles/ if not specified.)

e.g. directory/_screenrc will be linked to from ~/.screenrc. Hostnames count too: you can create directory/_screenrc.gold, and that will be the target of ~/.screenrc on the host gold.my.flat.
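The linking step, including the host-specific override, can be sketched like so (again a Python illustration of a shell script's logic):

```python
import os
import socket

def link_dotfiles(directory=None, home=None):
    """Link directory/_foo to ~/.foo, preferring host-specific files.

    directory/_screenrc.gold wins over directory/_screenrc on the
    host "gold".  Sketch only; the real tool is a shell script.
    """
    directory = directory or os.path.expanduser("~/.dotfiles")
    home = home or os.path.expanduser("~")
    host = socket.gethostname().split(".")[0]
    # sorted() means _foo is seen before _foo.hostname, so the
    # host-specific link is made last and wins.
    for name in sorted(os.listdir(directory)):
        if not name.startswith("_"):
            continue
        base, dot, suffix = name.partition(".")
        if dot and suffix != host:
            continue                   # host-specific file for another machine
        target = os.path.join(home, "." + base[1:])
        source = os.path.join(directory, name)
        if os.path.islink(target):
            os.remove(target)          # replace an earlier/stale link
        elif os.path.exists(target):
            continue                   # a real file lives there; leave it alone
        os.symlink(source, target)
```

One limitation of this naive scheme: a dotfile whose name itself contains a dot would be mistaken for a host-specific variant.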

dotfile-manager tidy

This removes any dangling ~/.* symlinks.
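The tidy step is a one-liner in spirit - a symlink is dangling when islink() is true but exists() is false (illustrative Python; the real tool is shell):

```python
import os

def tidy(home=None):
    """Remove dangling ~/.* symlinks (sketch of 'dotfile-manager tidy')."""
    home = home or os.path.expanduser("~")
    for name in os.listdir(home):
        path = os.path.join(home, name)
        # islink() is True but exists() is False when the target is gone.
        if name.startswith(".") and os.path.islink(path) \
                and not os.path.exists(path):
            os.remove(path)
```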

dotfile-manager report

Report on any ~/.* file which isn't a symlink - those are candidates to be added to the dotfile repository in the future.
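The report step is the complement of link: anything in $HOME that is a plain dotfile rather than a symlink is worth a look (Python sketch of the shell tool's behaviour):

```python
import os

def report(home=None):
    """List ~/.* entries that are regular files, not symlinks.

    These are candidates to move into the dotfiles repository.
    (Sketch of 'dotfile-manager report'.)
    """
    home = home or os.path.expanduser("~")
    return sorted(name for name in os.listdir(home)
                  if name.startswith(".")
                  and not os.path.islink(os.path.join(home, name))
                  and os.path.isfile(os.path.join(home, name)))
```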

Right now that lets me update my own dotfiles via:

dotfile-manager update ~/.dotfiles
dotfile-manager update ~/.dotfiles-private

dotfile-manager link ~/.dotfiles
dotfile-manager link ~/.dotfiles-private

It could be extended further, but it already supports profiles - if you take "profile" to mean "input directory".

To be honest it probably needs to be perlified, rather than remaining a hokey shell script. But otherwise I can see it being useful - much more so than my existing solution, which is ~/.dotfiles/fixup.sh inside my dotfiles repository.

ObFilm: Forever Knight

| 5 comments.

 

Looks like me an Vincent caught you boys at breakfast

Sunday, 27 September 2009

It is interesting that François Marier recently posted a brief "howto" document on debugging problems caused by overly-aggressive filtering with privoxy, as I've recently been having problems with that tool.

My home network frequently changes configuration depending on what I'm concentrating upon, but every few months I'll start/cease using the following tools:

  • squid - The caching proxy server.
  • tor - The onion router.
  • privoxy - The filtering cache.

Recently I was experimenting with XSS attacks against various browsers, which meant using them for real. As not all browsers have the same anti-advert setups, I was running privoxy to filter out web-annoyances, and I spotted a major flaw in it.

Unfortunately I can only describe the problem, not reproduce it, or track it down. I'm 80% certain the bug is in privoxy, but the stack is suitably high that determining that for sure is problematic.

In short the issue is that HTTP requests would end up being sent to the wrong host:

  • I load my start page in one tab: http://www.steve.org.uk/start/
  • I click to open the following URL in another tab: http://www.perlmonks.org/?node=Newest%20Nodes
  • The request gets sent to http://steve.org.uk/?node=...

After that, clicking around consistently sends requests to the first HTTP host which was accessed successfully. So, for example, attempting to visit http://foo.com/bar/ will send the request to http://steve.org.uk/bar - which then gives a 404.

In terms of setup I use a dnsmasq DNS cache, privoxy and iceweasel from Debian unstable. From the symptoms I'm not sure if iceweasel's "KeepAlive" system is to blame, or if privoxy has a bad cache of hosts. Perhaps it is dnsmasq returning bogus DNS data, or my cable connection itself having DNS issues.
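To rule out the DNS layer, one cheap check is to compare repeated lookups for the same name, both against dnsmasq and against an upstream resolver - a minimal helper (diagnostic sketch, not something from the stack above):

```python
import socket

def resolve(host):
    """Return the sorted IPv4 addresses a resolver hands back for host.

    Running this repeatedly, and comparing against a second resolver,
    is a cheap way to check whether a cache such as dnsmasq is
    returning stale or wrong answers.  Diagnostic sketch only.
    """
    return sorted({info[4][0]
                   for info in socket.getaddrinfo(host, 80, socket.AF_INET)})
```

If the addresses are stable and correct, suspicion shifts back up the stack to privoxy or the browser's keep-alive handling.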

Anyway once the symptoms present themselves closing the browser and restarting the cache fixes it. Until the next time which might be hours or days later.

I'd report it as a bug - but I don't know where it should be. Privoxy caching things it shouldn't? iceweasel having keepalive issues? dnsmasq returning wrong DNS entries?

I'd ask "Have you seen this before, internet world?", but I guess if you had tracked it down it would be fixed by now - and it clearly isn't!

Anyway for the moment I've uninstalled privoxy.

ObFilm: Pulp Fiction

| 4 comments.

 

Hack the planet!

Tuesday, 22 September 2009

Recently I was viewing Planet Debian and there was an entry present which was horribly mangled - although the original post seemed to be fine.

It seemed obvious to me that some of the filtering which the planet software had applied to the original entry had caused it to become broken, malformed, or otherwise corrupted. That made me wonder what attacks could be performed against the planet aggregator software used on Planet Debian.

Originally Planet Debian was produced using the planet software.

This was later replaced with the actively developed planet-venus software instead.

(The planet package has now been removed from Debian unstable.)

Planet, and the Venus project which forked from it, do a great job of scrutinising their input and removing malicious content. So my only hope was to stumble across something they had missed. Eventually I discovered that the (different) filtering applied by the two feed aggregators missed the same malicious input - an image with a src parameter containing javascript, like this:

<img src="javascript:alert(1)">

When that markup is viewed by some browsers it will result in the execution of javascript. In short it is a valid XSS attack which the aggregating software didn't remove, protect against, or filter correctly.
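The class of fix needed is scheme whitelisting on URL attributes. A crude regex sketch of the idea (not the actual Planet/Venus code - real sanitizers parse the markup rather than pattern-matching it):

```python
import re

SAFE_SCHEMES = ("http:", "https:", "ftp:", "mailto:")

def _src_repl(match):
    value = match.group(2).strip().lower()
    if ":" not in value or value.startswith(SAFE_SCHEMES):
        return match.group(0)   # relative URL or whitelisted scheme
    return ""                   # drop the unsafe attribute entirely

def sanitize_src(html):
    """Strip src attributes carrying a non-whitelisted scheme such as
    javascript:.  Sketch only; it ignores whitespace/encoding tricks
    a determined attacker would try."""
    return re.sub(r'src\s*=\s*(["\'])(.*?)\1', _src_repl, html,
                  flags=re.IGNORECASE)
```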

In fairness it seems most of the browsers I tested didn't actually alert when viewing that code - but as a notable exception Opera does.

I placed a demo online to test different browsers.

If your browser executes the code there, and it isn't Opera, then please do let me know!

The XSS testing of planets

Rather than produce a lot of malicious input feeds I constructed and verified my attack entirely off line.

How? Well the planet distribution includes a small test suite, which saved me a great deal of time, and later allowed me to verify my fix. Test suites are good things.

The testing framework allows you to run tiny snippets of code such as this:

# ensure onblur is removed:
HTML( "<img src=\"foo.png\" onblur=\"alert(1);\" />",
      "<img src=\"foo.png\" />" )

Here we give two parameters to the HTML function, one of which is the input string, and the other is the expected output string - if the sanitization doesn't produce the string given as the expected result an error is raised. (The test above is clearly designed to ensure that the onblur attribute and its value is removed.)
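You can imagine the helper working something like this - a toy reimplementation with a stub sanitizer, not the actual planet code, which calls into its own sanitizer:

```python
import re

def sanitize(html):
    """Stub sanitizer: strip on* event-handler attributes.  The real
    test suite exercises planet's own sanitizer instead."""
    return re.sub(r'\s+on\w+\s*=\s*"[^"]*"', "", html)

def HTML(input_html, expected):
    """Raise if sanitizing input_html doesn't yield expected."""
    got = sanitize(input_html)
    if got != expected:
        raise AssertionError("%r != %r" % (got, expected))
```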

This was how I verified initially that the SRC attribute wasn't checked for malicious content and removed as I expected it to be.

Later I verified this by editing my blog's RSS feed to include a malicious, but harmless, extra section. This was then shown upon the Planet Debian output site for about 12 hours.

During the twelve-hour window in which the exploit was "live" I received numerous hits. Here are a couple of log entries (IP + referer + user-agent):

xx.xx.106.146 "http://planet.debian.org/" "Opera/9.80
xx.xx.74.192  "http://planet.debian.org/" "Opera/9.80
xx.xx.82.143  "http://planet.debian.org/" "Opera/9.80
xx.xx.64.150  "http://planet.debian.org/" "Opera/9.80
xx.xx.20.18   "http://planet.debian.net/" "Opera/9.63
xx.xx.42.61   "-"                         "gnome-vfs/2.16.3
..

The Opera hits were to be expected from my previous browser testing, but I'm still not sure why hits came from User-Agents identifying themselves as gnome-vfs/n.n.n. Enlightenment would be rewarding.

In conclusion the incomplete escaping of input by Planet/Venus was allocated the identifier CVE-2009-2937, and will be fixed by a point release.

There are a lot of planets out there - even I have one: Pluto - so we'll hope Opera is a rare exception.

(Pluto isn't a planet? I guess that's why I call my planet a special planet ;)

ObFilm: Hackers.

| 6 comments.

 

I am the Earl of Preston

Sunday, 6 September 2009

Paul Wise recently reported that the Planet Debian search index hadn't updated since the 7th of June. The search function is something I added to the setup, and although I don't use it very often, when I do I find it enormously useful.

Anyway normal service should now be restored, but the search index will be missing the content of anything posted while the indexer wasn't running.

Recently I tried to use this search functionality to find a post that I knew I'd written upon my blog a year or so ago, which I'd spectacularly failed to find via grep and my tag list.

Ultimately this led to my adding a search interface to my own blog entries using the namazu2 package. If I get some free time tomorrow I'll write a brief guide to setting this up for the Debian Administration website - something that has been a little neglected recently.

ObFilm: Bill & Ted's Excellent Adventure

| 3 comments.

 
