
 

Entries tagged utilities

The traffic is waiting outside

26 August 2006 21:50

A tool suggestion for moreutils

haschanged

The intention of haschanged is to test whether a file has changed.

It will do this by computing a hash of the file's contents and storing that in a file beneath ~/.haschanged.

If the new hash differs from the previously stored hash, or there is no recorded hash, it will return 0.

If the new hash and the old hash are the same then the tool will return 1.

The script is trivial, but fairly useful for a lot of things.

The only thing that I don't like is having to store the hash somewhere… (The alternative is to copy the file somewhere, or create "${file}.orig", and then run diff. The latter doesn't work for non-root users wanting to monitor a file in /etc.)
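
A minimal sketch of how the script might look (the choice of sha1sum and the cache-file naming are illustrative assumptions; only the ~/.haschanged location and the exit codes are fixed above):

#!/bin/sh
# haschanged FILE - exit 0 if FILE has changed since the last run, 1 if not.
file=$1
mkdir -p "$HOME/.haschanged"

# key the stored hash on the file's path, so distinct files don't collide
cache="$HOME/.haschanged/$(echo "$file" | sha1sum | cut -d' ' -f1)"

new=$(sha1sum < "$file" | cut -d' ' -f1)
old=$(cat "$cache" 2>/dev/null)

echo "$new" > "$cache"
[ "$new" != "$old" ]    # exit 0 when changed (or when no hash was recorded)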

| No comments

 

To read makes our speaking English good.

5 July 2008 21:50

I've set up several repositories for apt-get in the past, usually using reprepro as the backend. Each time I've come up with a different scheme to maintain them.

Time to make things consistent with a helper tool:

skx@gold:~/hg/rapt$ ls input/
spambayes-threaded_0.1-1_all.deb        spambayes-threaded_0.1-1.dsc
spambayes-threaded_0.1-1_amd64.build    spambayes-threaded_0.1-1.dsc.asc
spambayes-threaded_0.1-1_amd64.changes  spambayes-threaded_0.1-1.tar.gz

So we have an input directory containing just the package(s) we want to be in the repository.

We have an (empty) output directory:

skx@gold:~/hg/rapt$ ls output/
skx@gold:~/hg/rapt$

Now let's run the magic:

skx@gold:~/hg/rapt$ ./bin/rapt --input=./input/ --output=./output/
Data seems not to be signed trying to use directly...
Data seems not to be signed trying to use directly...
Exporting indices...

What do we have now?

skx@gold:~/hg/rapt$ tree output/
output/
|-- dists
|   `-- etch
|       |-- Release
|       `-- main
|           |-- binary-amd64
|           |   |-- Packages
|           |   |-- Packages.bz2
|           |   |-- Packages.gz
|           |   `-- Release
|           `-- source
|               |-- Release
|               |-- Sources
|               |-- Sources.bz2
|               `-- Sources.gz
|-- index.html
`-- pool
    `-- main
        `-- s
            `-- spambayes-threaded
                |-- spambayes-threaded_0.1-1.dsc
                |-- spambayes-threaded_0.1-1.tar.gz
                `-- spambayes-threaded_0.1-1_all.deb

neat.

Every time you run the rapt tool the output pool and dists directories are removed and then rebuilt to contain only the packages located in the input/ directory. (More correctly, only *.changes files are processed, not *.deb files.)
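
Since reprepro is the backend, each run is roughly equivalent to doing the following by hand. This is a sketch only: it assumes an already-configured conf/distributions under output/, and the "etch" codename is taken from the tree listing above.

# wipe and rebuild the repository from the input packages
rm -rf output/pool output/dists
for changes in input/*.changes; do
    reprepro -b output include etch "$changes"
done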

This mode of operation might strike some people as odd - but I guess it depends on whether you view "incoming" to mean "packages to be added to the existing pool", or "packages to take as the incoming input to the pool generation process".

Anyway, if it is useful to others feel free to clone it from the mercurial repository. There is no homepage yet, but the code should be readable, and there is a minimum of documentation in the script itself.

ObQuote: Buffy. Again.

| 2 comments

 

Wash your face and try again, if you survive.

3 September 2008 21:50

There are many online blacklists which are populated by volunteers. I'm looking for such a blacklist which contains records of hosts conducting portscans, ssh brute-forcing, or other similar "badness".

dshield looks good - but it doesn't make the scanning IPs available - it just shows the port data.

denyhosts allows you to upload/download a list of IPs running ssh brute-force attacks - but when I wrote my own RPC code to poll that list of IPs I found I couldn't get the full list.

I'm aware that I could run denyhosts on a spare IP, then just copy the IPs it downloads, but that feels icky...

I'm unaware of any existing service that I could use for my purposes.

So would there be any interest in a service listing only portscanning/ssh brute-force IPs? (Allowing DNS queries, XML-RPC, or rsync downloads of the submitted data.)
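
For the DNS-query option the natural interface would be a conventional DNSBL: reverse the octets of the IP and look it up beneath a zone. Something like this, where the zone name is purely hypothetical:

# is 192.0.2.1 listed? (the zone name is invented for illustration)
dig +short 1.2.0.192.blacklist.example.org
# by DNSBL convention an answer such as 127.0.0.2 means "listed";
# no answer (NXDOMAIN) means "not listed"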

I have my own reasons for wanting such a list of bad IPs - those are probably obvious - but it does seem like it would be generally useful.

I'd be willing to host a server to process the submitted reports, and make the results available, but I guess that's the easy part. The hard part is persuading people to run my "submit IP" client, which would have to understand ssh logs, iptables logs, or something similar. Ugh.

I guess between the machines I work with and the machines I host myself I've got a fair number of IPs which I could collect scans from - I could populate the database myself. But this is a perfect job for distributed submission.

ObQuote: Batoru rowaiaru

| 8 comments

 

I think you should let this one go

7 April 2009 21:50

I work with log files a lot.

Most of the logfiles I work with are in a standard format of some kind, and most often they are rotated upon a daily basis. (Examples include syslog, qpsmtpd, and Apache logfiles.)

I wish there were a general purpose way to say "grep time-range pattern logfile".

Right now, for example, I've just deployed some changes upon a cluster of hosts. Now I want to see only messages that refer to a particular area of the codebase, and only those that occurred after 23:00 - which is when I did the commit/push/pull dance.

I've written a quick hack - tgrep (time-grep) - which allows simple before/equal/after/range grepping:

# show matching lines after 23:00
tgrep \>23:00:00 -i subject /var/log/qpsmtpd/qpsmtpd.log

# show matching lines in the interval 23:00 - 23:15
tgrep 23:00:00-23:15:00 -i -r subject /var/log/qpsmtpd/
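
There is nothing magical inside: for syslog-style lines the timestamp sits in a fixed field, and HH:MM:SS strings compare correctly as plain text. The range case reduces to something like this awk fragment (the field position is an assumption about the log format):

# lines between 23:00:00 and 23:15:00 mentioning "subject",
# assuming syslog-style "Apr  7 23:00:01 host ..." lines
awk '$3 >= "23:00:00" && $3 <= "23:15:00" && tolower($0) ~ /subject/' \
    /var/log/qpsmtpd/qpsmtpd.log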

If there is a common way of doing this "properly" then I'd love to be educated; failing that, take it if it is useful (moreutils?).

ObFilm: Chasing Amy

| 8 comments

 

Oh, this should be stunning.

8 August 2009 21:50

Recently I've been writing some documentation using the docbook toolset.

"Helpfully" the docbook tools produce a nice table of contents for your documentation. For example it will produce an index.html file containing a list of chapters, list of figures, list of tables, and finally a list of examples.

For my specific use I only wanted a table of contents listing chapters; all the other lists were just noise.

Unfortunately I've produced my documentation using the naive docbook2html tool, and all the details I can find online about customising the table of contents to remove specific items refer to using xslt and other lower-level tools.

So I thought I'd cheat. Looking at the generated index.html file I noticed that the items I wish to remove have class attributes of TOC.

Is there a tool to parse HTML removing items with particular ID attributes? Or removing items having a particular CLASS?

I couldn't find one. So I knocked one up, using HTML::TreeBuilder::XPath; perhaps it will be useful to others:

html-tool --file=index.html --cut-class=foo --indent

The file index.html will be read, parsed, and all items with "class='foo'" will be removed. The output will be indented in a pretty fashion and written to STDOUT.

This example does a similar thing:

html-tool --url=http://www.steve.org.uk/ --output=x.html \
  --cut-id=top --cut-class=mbox --indent
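
Applied to the docbook output that started all this, the invocation would presumably look like the following (the class name comes from the generated index.html; writing to a new file rather than in place is my assumption):

# strip the unwanted lists from the generated table of contents
html-tool --file=index.html --cut-class=TOC --indent > index.html.new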

I dabbled with allowing you to just dump HTML sections, so you could run:

html-tool --show-class=foo --file=index.html

But that didn't seem as obviously useful, so I dyked it out. Other similar operations could make it more generally useful though - right now it's more of a html-cut than a html-tool!

ObFilm: The Breakfast Club

| 7 comments

 

I've got a sick friend. I need her help.

30 September 2009 21:50

There was a recent post by Martin Meredith asking about dotfile management.

This inspired me to put together a simple hack which allows several operations to be carried out:

dotfile-manager update [directory]

Update the contents of the named directory to the most recent version, via "hg pull" or HTTP fetch.

This could be trivially updated to allow git/subversion/CVS to be used instead.

(directory defaults to ~/.dotfiles/ if not specified.)

dotfile-manager link [directory]

For each file _foo in the named directory, create a symlink ~/.foo pointing at it.

(directory defaults to ~/.dotfiles/ if not specified.)

e.g. directory/_screenrc will be linked to from ~/.screenrc. But hostnames count too! So you can create directory/_screenrc.gold and that will be the target of ~/.screenrc on the host gold.my.flat. (A sketch of this rule follows the command list below.)

dotfile-manager tidy

This removes any dangling ~/.* symlinks.

dotfile-manager report

Report on any file ~/.* which isn't a symlink - those files might be added in the future.
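
The linking rule is the only part with any subtlety, so here is a minimal sketch of it in shell. The _foo to ~/.foo mapping and the hostname suffix come from the description above; everything else is guesswork:

#!/bin/sh
# sketch of the "link" operation (assumptions noted above)
dir=${1:-$HOME/.dotfiles}
host=$(hostname -s)

for src in "$dir"/_*; do
    name=$(basename "$src")
    case "$name" in
        *.*)
            # host-specific file, e.g. _screenrc.gold
            [ "${name##*.}" = "$host" ] || continue
            name="${name%.*}"
            ;;
    esac
    # glob order processes _screenrc before _screenrc.gold, so a
    # host-specific file overwrites the generic link on its own host
    ln -sf "$src" "$HOME/.${name#_}"
done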

Right now that lets me update my own dotfiles via:

dotfile-manager update ~/.dotfiles
dotfile-manager update ~/.dotfiles-private

dotfile-manager link ~/.dotfiles
dotfile-manager link ~/.dotfiles-private

It could be updated a little more, but it already supports profiles - if you assume "profile" means "input directory".

To be honest it probably needs to be perlified, rather than being hokey shell script. But otherwise I can see it being useful - much more so than my existing solution, which is ~/.dotfiles/fixup.sh inside my dotfiles repository.

ObFilm: Forever Knight

| 5 comments