It appears to have very little use, except to me, and I'm significantly better at bookmarking posts of interest these days.
If you'd like to run your own copy, the code is available, and it's pretty trivial to reimplement regardless. There are only two parts:
- Poll and archive content from the planet RSS feed, taking care to skip duplicates.
- Check /robots.txt on the source host, to avoid archiving content which should be "private".
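The two steps could be sketched roughly like this. This is a hypothetical minimal version, not the actual code: it assumes a SQLite table keyed on the post link (so duplicates are dropped by `INSERT OR IGNORE`), and uses the standard-library `urllib.robotparser` for the robots.txt check. The feed-parsing itself is left out; any RSS library would do there.

```python
import sqlite3
import urllib.robotparser
from urllib.parse import urlparse

def allowed_by_robots(url, agent="planet-archiver"):
    """Check the source host's /robots.txt before archiving a post."""
    parts = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()
    except OSError:
        return True  # robots.txt unreachable: assume archiving is fine
    return rp.can_fetch(agent, url)

def archive(entries, db_path="planet.db", allowed=allowed_by_robots):
    """Store (link, title, body) tuples, ignoring links already seen.

    `entries` would come from parsing the planet RSS feed; the schema
    and function names here are illustrative, not the real ones.
    """
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS posts
                   (link TEXT PRIMARY KEY, title TEXT, body TEXT)""")
    for link, title, body in entries:
        if not allowed(link):
            continue
        # The PRIMARY KEY on link makes duplicate handling trivial.
        con.execute("INSERT OR IGNORE INTO posts VALUES (?, ?, ?)",
                    (link, title, body))
    con.commit()
    count = con.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
    con.close()
    return count
```

Keying the table on the link means re-polling the feed is idempotent, which is what makes the "taking care of duplicates" part a one-liner.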
Once you've done that, you'll have a database populated with blog entries, and all that's left is to write a little search script.
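The search script really can be little. Assuming the illustrative SQLite schema from the sketch above (a `posts` table with `link`, `title`, and `body` columns), a plain `LIKE` query is enough for a first pass:

```python
import sqlite3

def search(db_path, term):
    """Return (link, title) pairs whose title or body mention `term`.

    A naive substring search; for anything bigger you'd want SQLite's
    FTS extension or similar, but this is the "little script" version.
    """
    con = sqlite3.connect(db_path)
    pattern = f"%{term}%"
    rows = con.execute(
        "SELECT link, title FROM posts "
        "WHERE title LIKE ? OR body LIKE ?",
        (pattern, pattern)).fetchall()
    con.close()
    return rows
```

With fifteen thousand-odd rows a linear `LIKE` scan is still instant; full-text indexing only becomes worth the bother at much larger scales.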
ObRandom: In the time it has been running it has archived 15,464 posts!