For the past few days I've been spidering the internet, merrily downloading content.
The intention behind the spidering is to record, in a database, the following pieces of information for each image the spider stumbles across (see the sketch after the list):
- The page that contained the link to this image. (i.e. the image parent)
- The image URL.
- The MD5sum of the image itself.
- The dimensions of the image.
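I've not shown the actual schema or code here, but a minimal sketch of the recording step might look like this in Python, using SQLite and Pillow; the table name and columns are invented for illustration:

```python
import hashlib
import io
import sqlite3
import urllib.request

from PIL import Image  # Pillow, used only to read the dimensions

db = sqlite3.connect("spider.db")
db.execute("CREATE TABLE IF NOT EXISTS images "
           "(parent TEXT, url TEXT, md5 TEXT, width INTEGER, height INTEGER)")

def record_image(parent_url, image_url):
    """Fetch one image and record where it was found, its MD5, and its size."""
    data = urllib.request.urlopen(image_url).read()
    md5 = hashlib.md5(data).hexdigest()
    width, height = Image.open(io.BytesIO(data)).size
    db.execute("INSERT INTO images VALUES (?, ?, ?, ?, ?)",
               (parent_url, image_url, md5, width, height))
    db.commit()
```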
I was motivated by seeing an image upon a website and thinking "Hang on, I've seen that before - but where?".
Thus far I've got details of about 30,000 images, and I can now find duplicates or answer the question "Does this image appear on the internet, and if so where?".
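Both questions are simple queries over the MD5 column; a minimal sketch, assuming the hypothetical SQLite schema above:

```python
import hashlib
import sqlite3

db = sqlite3.connect("spider.db")

# Exact duplicates: any MD5 recorded more than once.
for md5, count in db.execute(
    "SELECT md5, COUNT(*) FROM images GROUP BY md5 HAVING COUNT(*) > 1"
):
    print(md5, "appears", count, "times")

# "Have I seen this image, and where?" - hash a local copy and look it up.
wanted = hashlib.md5(open("mystery.jpg", "rb").read()).hexdigest()
for url, parent in db.execute(
    "SELECT url, parent FROM images WHERE md5 = ?", (wanted,)
):
    print(url, "linked from", parent)
```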
Obviously this is going to be trivially foiled by rotation, cropping, or even resizing. But I'm going to let the spider run for the next few days at least, to see what interesting things the data can be used for.
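One standard way to cope with resizing, at least, would be a perceptual hash rather than MD5 - this isn't what the spider does today, but an 8x8 "average hash" is only a few lines with Pillow:

```python
from PIL import Image

def average_hash(path, size=8):
    """64-bit perceptual hash: shrink to greyscale 8x8, then record
    whether each pixel is above or below the mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)

def distance(h1, h2):
    """Hamming distance: small values mean visually similar images."""
    return bin(h1 ^ h2).count("1")
```

Near-duplicates then show up as hashes within a few bits of each other, rather than needing an exact match.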
In other news I'm a little behind schedule but I'm going to be moving from Xen to KVM over the next week or ten days.
My current plan is to set up the new host on Monday and move myself there the same day. Once that's been demonstrated to work I can move the other users over one by one, probably one a day. That will allow a little bit of freedom for people to choose their downtime window, and will ensure that it's not an all-or-nothing thing.
The new management system is pretty good, but I have the advantage here in that I've worked upon about four systems for driving KVM hosting. The system allows people to enable/disable VNC access, use the serial console, and either use one of a number of pre-cooked kernels or upload their own. (Hmmm, security, you say?)
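I won't describe the system's internals here, but for flavour these are the sorts of options it ends up toggling on a KVM command line; the guest names and paths below are made up:

```python
def kvm_command(name, vnc_display=None, serial=False, kernel=None):
    """Build a KVM invocation for one guest; names and paths are illustrative."""
    cmd = ["kvm", "-name", name, "-m", "512",
           "-drive", "file=/hosting/%s.img,if=virtio" % name]
    # VNC access can be switched on (display :N) or off entirely.
    cmd += ["-vnc", ":%d" % vnc_display if vnc_display is not None else "none"]
    # A serial console, for when the network is broken.
    if serial:
        cmd += ["-serial", "pty"]
    # Boot a pre-cooked kernel instead of whatever is on the disk image.
    if kernel:
        cmd += ["-kernel", kernel, "-append", "root=/dev/vda1 console=ttyS0"]
    return cmd

print(" ".join(kvm_command("guest01", vnc_display=1, serial=True,
                           kernel="/kernels/vmlinuz-default")))
```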
ObFilm: Chasing Amy
Tags: images, kvm, kvm-hosting, projects, searching, xen, xen-hosting
Sounds a bit like what http://www.tineye.com/ is doing, except they calculate a more complex hash so it can deal with resizing, etc. It's a good idea. Let us know what happens.