|
Entries posted in July 2008
5 July 2008 21:50
I've set up several repositories for apt-get in the past, usually using reprepro as the backend. Each time I've come up with a different scheme to maintain them.
Time to make things consistent with a helper tool:
skx@gold:~/hg/rapt$ ls input/
spambayes-threaded_0.1-1_all.deb spambayes-threaded_0.1-1.dsc
spambayes-threaded_0.1-1_amd64.build spambayes-threaded_0.1-1.dsc.asc
spambayes-threaded_0.1-1_amd64.changes spambayes-threaded_0.1-1.tar.gz
So we have an input directory containing just the package(s) we want to be in the repository.
We have an (empty) output directory:
skx@gold:~/hg/rapt$ ls output/
skx@gold:~/hg/rapt$
Now let's run the magic:
skx@gold:~/hg/rapt$ ./bin/rapt --input=./input/ --output=./output/
Data seems not to be signed trying to use directly...
Data seems not to be signed trying to use directly...
Exporting indices...
What do we have now?
skx@gold:~/hg/rapt$ tree output/
output/
|-- dists
| `-- etch
| |-- Release
| |-- main
| |-- binary-amd64
| | |-- Packages
| | |-- Packages.bz2
| | |-- Packages.gz
| | `-- Release
| `-- source
| |-- Release
| |-- Sources
| |-- Sources.bz2
| `-- Sources.gz
|-- index.html
`-- pool
`-- main
`-- s
`-- spambayes-threaded
|-- spambayes-threaded_0.1-1.dsc
|-- spambayes-threaded_0.1-1.tar.gz
`-- spambayes-threaded_0.1-1_all.deb
neat.
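The output tree is a standard apt repository, so once it is exported over HTTP it can be consumed with a sources.list stanza along these lines (the hostname here is made up; "etch" and "main" come from the tree above):

```
deb     http://example.com/output etch main
deb-src http://example.com/output etch main
```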
Every time you run the rapt tool the output pool and dists directories are removed and then rebuilt to contain only the packages located in the input directory. (More correctly, only *.changes files are processed, not *.deb files.)
This mode of operation might strike some people as odd - but I guess it depends on whether you view "incoming" to mean "packages to be added to the existing pool", or "packages to take as the incoming input to the pool generation process".
Anyway, if it is useful to others, feel free to clone it from the mercurial repository. There is no homepage yet, but the code should be readable, and there is a minimum of documentation supplied in the script itself.
ObQuote: Buffy. Again.
Tags: debian, rapt, reprepro, utilities
|
7 July 2008 21:50
I made a tarball release of the repository creation tool I mentioned previously, because I got a couple of mails about it.
Mostly it seems to be working out OK, to the extent that I've been using it at home and at work.
Anyway there is a tarball available from the rapt site.
ObQuote: Hancock
Tags: rapt
|
8 July 2008 21:50
There are few programs I use with as much combined love & loathing as GNU screen.
Yesterday I spent a while adding a feature I've been wanting for a long time: an unbindall primitive.
In many cases I find myself using screen as a wrapper around other things. But usually I end up having to disable dangerous keybindings, to gain security or to protect users from themselves.
Typically this leads to a screenrc file looking like this:
#
# Disable these bindings.
#
bind :
bind s
bind S
bind Z
bind ^\
bind c
bind ^c
bind z
bind Z
bind B
...
Instead it would be better if I could just say:
#
# Unbind *all* keystrokes
#
unbindall
#
# Restore actions we need/want/love.
#
bind x quit
bind d detach
bind c screen
..
Anyway, thanks to a small patch I can now.
ObQuote: The Princess Bride
Tags: gnu screen, screen
|
10 July 2008 21:50
So recently there have been a few posts by people discussing the idea of distributed bug reporting systems. This is topical, because I've been working on something vaguely related - a support system.
Why is a support system, or ticketing system, related to a bug
tracker? To answer that you must first clarify what you mean by a
support system (or a bug tracker for that matter). There are two distinct types of support systems:
- "Full"
This is a system which does "everything". Each submitted
support, or bug, request has an associated status, severity, priority and work
log. It might be assigned to multiple people through its lifetime,
and it might be moved around various internal queues.
Example: Request Tracker.
- "Minimal"
A reported ticket is an email submission.
A "ticket" from start to finish is nothing more than a collection of mails upon a particular topic - there is no assigned "owner", no "priority", no object-specific attributes at all.
I have neither the time, the patience, nor the desire to create a system like the first one. But I've become increasingly dissatisfied with my current support system, roundup, for various reasons. So I need something new.
Previously I ruled out several support systems, and most of my objections still stand, although I admit I've not really looked again. I will certainly do so shortly.
So what have I done? Well I figure I care very little about queues, priorities, reports, and all that stuff. (On the basis that priority is mostly in the ticket itself, and that I'm the sole recipient of the submissions. Previously I had a partner to share them, but these days just me.)
I want something that is basically e-mail centric. That reduces the functionality of my support system to two distinct parts:
- Ticket Submission
An email to [email protected] should result in three actions:
- A message being sent back to the submitter. "Hey got your ticket. We'll fix it. Love you long time.". (Set the reply-to address appropriately!)
- A message to the site admin(s) saying "Hey, you've got a new ticket submission. Deal."
- A new ticket being created/stored somewhere.
(There are additional nice things to have - such as SMS alerts outside 10:00-17:00, etc, but those are trivial bolt-ons.)
- Ticket Updates
When a mail comes in to "[email protected]" - an address which was set up in the submission process - or when a mail arrives with the subject "[issueXXXX]", we need to append the correspondence to the existing ticket.
If that ticket doesn't exist we report an error. If the ticket is closed we re-open it.
So, what do we have? A script that is conceptually simple, and can be invoked via an Exim4 pipe transport. A script that just reads the submitted email from STDIN and only has to decide a couple of things, based pretty much on the contents of the "To:" and "Subject:" headers.
Simple, no?
OK it was actually amazingly simple once broken down. The amazing part is that it all works. I figured that I'd manage new tickets by writing them to mbox files - simple to do. Simple to understand.
So the process goes:
- New email arrives to "[email protected]"
- /home/steve/tickets/1 is created with the received message written to it.
- I get a mail to [email protected] saying "new ticket!!!".
- I reply to the ticket by opening /home/steve/tickets/1 in mutt - where my muttrc ensures that when I reply the from address is the ticket address, and there is an automatic CC.
- The cycle turns, and more discussion happens until we're all happy the ticket is "done".
- A mail to [email protected] closes the ticket by moving it from ~/tickets/ to ~/tickets/archived/.
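For the curious, the dispatch logic boils down to something like this. (This is a rough sketch rather than the actual script: the $TICKETS layout, the "issueNNNN" matching, and the printed output are my simplifications.)

```shell
#!/bin/sh
# Sketch of a single dispatch script: read one mail on STDIN and route it
# using only the To: and Subject: headers. Tickets live as numbered mbox
# files under $TICKETS, closed ones under $TICKETS/archived.

TICKETS=${TICKETS:-$HOME/tickets}

# dispatch: read a mail on STDIN, file it, and print the action taken
# ("new N" or "update N").
dispatch() {
    msg=$(cat)

    # Only the To: and Subject: headers matter for routing.
    headers=$(printf '%s\n' "$msg" | sed -n '/^To:/p;/^Subject:/p')

    # A ticket number in an "issueN@..." address or an "[issueN]" subject
    # marks an update; anything else is a new submission.
    id=$(printf '%s\n' "$headers" | sed -n 's/.*issue\([0-9][0-9]*\).*/\1/p' | head -n 1)

    if [ -n "$id" ]; then
        # Re-open a closed ticket if necessary, then append the mail.
        [ -f "$TICKETS/archived/$id" ] && mv "$TICKETS/archived/$id" "$TICKETS/$id"
        if [ -f "$TICKETS/$id" ]; then
            printf '%s\n\n' "$msg" >> "$TICKETS/$id"
            echo "update $id"
        else
            echo "no such ticket: $id" >&2
            return 1
        fi
    else
        # New submission: the next free number becomes the mbox file.
        mkdir -p "$TICKETS"
        id=1
        while [ -e "$TICKETS/$id" ]; do id=$((id + 1)); done
        printf '%s\n\n' "$msg" > "$TICKETS/$id"
        echo "new $id"
        # ...the real script would now ack the submitter and alert the admin.
    fi
}
```

Hooked up as an Exim4 pipe transport, each delivery to the support addresses runs this once with the message on STDIN.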
There are a couple of helper scripts too, but basically one script and mutt does it all.
So far it's been tested and it is rockin'! (Look at me, hangin' with the cool kids!)
Obviously it is pretty Steve-specific, and there is only a toy CGI process for viewing history online, but I'm actually feeling pretty good about it.
Once I've closed all my existing tickets I will probably migrate over to it. At least this handles (read: ignores) MIME - which makes it a clear winner over roundup, which just eats, bounces, or corrupts submitted multipart messages.
ObQuote: Flash Gordon
Tags: request tracker, roundup, support
|
14 July 2008 21:50
Yesterday I was forced to test my backup system in anger, on a large scale, for the first time in months.
A broken package upgrade meant that my anti-spam system lost the contents of all its MySQL databases.
That was a little traumatic, to say the least. But happily I have a good scheme of backups in place, and only a single MX machine was affected.
So, whilst there was approximately an hour of downtime on the primary MX the service as a whole continued to run, and the secondary (+ trial tertiary) MX machines managed to handle the load between them.
I'm almost pleased I had to suffer this downtime, because it did convince me that my split-architecture is stable - and that the loss of the primary MX machine isn't a catastrophic failure.
The main reason for panicking was that I was late for a night in the pub. Thankfully the people I was due to meet believe in flexible approaches to start times - something I personally don't really believe in.
Anyway the mail service is running well, and I've set up "instant activation now", combined with a full month of free service, which is helping attract more users.
Apart from that I've continued my plan of migrating away from Xen, and toward KVM. That is going well.
I've got a few guests up and running, and I'm impressed at how stable, fast, and simple the whole process is. :)
ObQuote: Brief Encounter
(That is a great film; and a true classic. Recommended.)
Tags: kvm, mail-scanning, xen
|
18 July 2008 21:50
Over the past few nights I've managed to successfully migrate the Debian Administration website to the jQuery javascript library.
This means that my own javascript library code has been removed, replaced, and improved!
The site itself doesn't use very much javascript - there are a couple of places where focus is set on particular elements, but other than that we're only talking about:
Still there are a couple of enhancements that I've got planned which will make the site neater and more featureful for those users who've chosen to enable javascript in their browsers.
Here's my list of previous javascript usage - out of date now that I've basically chosen to use jQuery for everything.
ObQuote: Short Circuit.
Tags: debian-administration, javascript, jquery
|
19 July 2008 21:50
I'm only a minimal MySQL user, but I've got a problem with a large table full of data and I'm hoping for tips on how to improve it.
Right now I have a table which looks like this:
CREATE TABLE `books` (
`id` int(11) NOT NULL auto_increment,
`owner` int(11) NOT NULL,
`title` varchar(200) NOT NULL,
....
PRIMARY KEY (`id`),
KEY( `owner`)
) ;
This allows me to look up all the BOOKS a USER has - because the user table has an ID and the books table has an owner attribute.
However I've got hundreds of users, and thousands of books, so I want an efficient way to fetch and page through the list of books a single user owns.
Initially I thought I could use a view:
CREATE VIEW view_steve AS select * FROM books WHERE owner=73
But that suffers from a problem - the IDs coming from the books table are discontinuous, and I'd love to be able to work with them in steps of 1. (Also, having to create a view for each user is an overhead I could live without. Perhaps some stored procedure magic is what I need?)
Is there a simple way that I can create a view/subtable which would allow me to return something like:
| id | book_id | owner | title       | ... |
| 0  | 17      | Steve | Pies        | ... |
| 1  | 32      | Steve | Fly Fishing | ... |
| 2  | 21      | Steve | Smiles      | ... |
| 3  | 24      | Steve | Debian      | ... |
Where the "id" is a consecutive, incrementing number, such that "paging" becomes trivial?
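One possible approach, sketched here untested (the session-variable renumbering is illustrative, and owner 73 is the example user from above), is to let MySQL number the rows as it returns them:

```
-- Renumber one owner's books consecutively, starting at 0.
SET @row := -1;
SELECT (@row := @row + 1) AS id,
       b.id               AS book_id,
       b.owner,
       b.title
FROM books b
WHERE b.owner = 73
ORDER BY b.id;
```

Though if the numbers are only needed for paging, a plain ORDER BY id with LIMIT offset, count on the original query may be enough without any renumbering at all.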
ObQuote: The Lost Boys
Update: without going into details, the requirement for known, static, and ideally consecutive identifiers is related to doing correct paging.
Tags: lazyweb, mysql, paging, questions
|
24 July 2008 21:50
KVM Utility
Gunnar Wolf made an interesting post about KVM today, which is timely.
He points to a simple shell script for managing running instances of KVM which is a big improvement on mine - so it is worth a look if you're doing that stuff yourself.
Once I find time I will document my reasons for changing from Xen to KVM, but barring a few irritations I'm liking it a lot.
Chronicle Theme Update
I made a new release of the chronicle blog compiler yesterday, mostly to update one of the themes.
That was for purely selfish reasons, as I've taken the time to update the antispam protection site I'm maintaining. There have been some nice changes to make it scale better, and now it is time for me to make it look prettier.
(A common theme - I'm very bad at doing website design.)
So now the site blog matches the real site.
ObQuote: Resident Evil
Tags: chronicle, kvm, mail-scanning, xen
|
28 July 2008 21:50
There should be a word for those silly little ways you can fool your body & brain. For example recently I've been having trouble with my boiler - so getting hot water is a challenge.
I find myself doing the crazy thing:
- Turn on hot tap(s)
- Stick my hands under them to see if the water is hot.
- Think to myself "Hey it is getting warmer..".
- Realise actually I just imagined it.
Lather, rinse, repeat.
Similarly there are times when you can imagine all kinds of bodily sensations. More than once I've been out walking, or sat at home, convinced that my mobile phone had vibrated in my pocket. And it hadn't at all.
I remember random conversations with people who agreed they sometimes believe their phones are vibrating when they are not. It seems to be a common thing.
Which raises the question: is this a modern thing? Ten years ago if you had something vibrating against your body you damn well knew about it ... because you were doing it deliberately!
It is only recently that it was possible to have something semi-randomly vibrating against you, without your explicit control. Right?
(OK that sounds rude. It'll be our little secret.)
ObQuote: Godfather (Pt.1)
Tags: life, random
|
30 July 2008 21:50
I've made a new release of my sift IMAP utility, which now adds a few new types of rules:
- Match message bodies against regular expressions.
- Match recipients against regular expressions.
All the "searching" rules now allow negation too, so that "from:[email protected]" and "!from:foo@bar" work as you'd expect.
Finally each of the searching rules, which require the download of the complete IMAP message, will use a local disk-cache to avoid undue overhead.
All in all I'm pretty pleased with the way the tool is being used in the wild. Each of these changes was the result of a direct user request or suggestion.
ObMovie: Dogma
Tags: sift
|