
 

Entries tagged debian

We've Been Out All Night And We Haven't Been Home,

21 June 2007 21:50

The source-searching system I was talking about previously is progressing slowly.

So far I've synced the Etch source to my local machine - 29GB in total - and this evening I've started unpacking all the source.

I'm still in the "a" section at the moment, but thanks to caching I should be able to re-sync the source archive and unpack newer revisions pretty speedily.

The big problem at the moment is that unpacking all the archives is incredibly slow. Still, I do have one new bug to report: aatv: Buffer overflow in handling environment variables.

That was found with:

rgrep getenv /mnt/mirror/unpacked | grep sprintf

(A very very very slow pair of greps. Hopefully once the unpacking has finished it will become faster. ha!)

The only issue I see at the moment is that I might not have the disk space to store an unpacked tree. I've got 100GB allocated, with 29GB taken by the source archive itself. I'll just have to hope that the unpacked source comes to less than 70GB, or do this in stages.

I've been working on a list of patterns and processes to run; I think pscan, rats, and its4 should be the first tools to run on the archive, followed by some directed use of grep.

If anybody else with more disk space and connectivity than myself is interested, I can post the script(s) I'm using to sync and unpack. Failing that I'll shut up now.
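(For the curious, the unpacking half is little more than the following sketch - the paths here are illustrative rather than my real layout:)

#!/bin/sh
# Unpack every source package found beneath a local mirror.
# MIRROR/DEST are illustrative paths.
MIRROR=/mnt/mirror/source
DEST=/mnt/mirror/unpacked

find "$MIRROR" -name '*.dsc' | while read dsc; do
    dir="$DEST/$(basename "$dsc" .dsc)"
    # Skip anything unpacked on a previous run - this is what makes
    # re-syncing and re-unpacking newer revisions speedy.
    [ -d "$dir" ] && continue
    dpkg-source -x "$dsc" "$dir"
done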

| No comments

 

When you want to go to it

4 October 2007 21:50

Here's a quick question - does there exist a stable and reliable caching proxy for APT?

Both apt-proxy and approx cause me problems on a regular basis - MD5 sum mismatches on Release files, and general hangs, stalls, and timeouts.

I've just installed and configured apt-cacher, but #437166 doesn't fill me with confidence.
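(For reference, pointing a client at any of these proxies is a one-line apt configuration; this assumes apt-cacher's default port of 3142:)

# /etc/apt/apt.conf.d/01proxy
Acquire::http::Proxy "http://localhost:3142/";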

| No comments

 

As I move on through the year

19 October 2007 21:50

Bits from the Security Team

  • We get tons of spam. If your issue hasn't received at least one reply, wait a day and resend.
  • Frequently advisories are delayed because our buildd machines are broken. We can't fix them.
  • People reporting bugs with the 'security' tag help us.
  • People reporting bugs with patches help us more.
  • People reporting bugs with patches and pointers to fixed packages they have built help us best.
  • I like pies.

I am happy to look over patches, built packages, and generally encourage people to be involved. Our team isn't huge but historically we've only added people who've done a fair bit of work first. That is both good and bad.

I could write more, and probably should, but I'll stop there for now because I'm frustrated by the HPPA build machine. Again.

ObRelated: Moritz is trying to get the archive rebuilt with security features from our compilers (e.g. -fstack-protector) included. This would be a fantastic achievement. People interested in testing kernel patches, donating buildd machines, etc., should give him a ping.

| No comments

 

Drip drip drip drip drink a little drip drip drip drip

21 October 2007 21:50

It is interesting that there have been posts about archive tools appearing upon Planet Debian recently.

Recently I set up an instance of rebuildd, which worked nicely once I'd installed the required dependencies manually.

I also run three instances of reprepro, but there, life is not such a happy picture.

I might be using reprepro incorrectly, but despite fighting with it for some time I cannot coerce the software into allowing me to upload the same version of a binary package for amd64 & i386 architectures - something I frequently want to do.
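For reference my configuration is essentially the stock one - a minimal conf/distributions along these lines (the values are illustrative):

# conf/distributions - a minimal sketch.
Codename: etch
Architectures: i386 amd64 source
Components: main
Description: Local package repository

Uploads are then imported with "reprepro -b . include etch foo.changes" - and it is the second include, of the same version built for the other architecture, which gets refused.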

On the face of it importing packages into a small database doesn't seem terribly difficult, but it is a problem I've not spent much time looking at yet.

| No comments

 

thinking everything's gonna be as sweet as pie

1 November 2007 21:50

I'm in a position where I need to rebuild a Linux kernel for a number of distributions and architectures. Currently the distributions are:

  • Debian Etch
  • Ubuntu Dapper
  • Ubuntu Edgy
  • Ubuntu Feisty
  • Ubuntu Gutsy

(For each distribution I need a collection of packages for both i386 and amd64.)

I've written a couple of scripts to automate the process - first running "make menuconfig" within a debootstrap-derived chroot for each arch & distribution pair, then later using those stored .config files to actually produce the packages via make-kpkg.
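The guts of the second script amount to something like this sketch - the chroot locations, kernel version, and details such as 32-bit personalities are all glossed over:

#!/bin/sh
# Build kernel packages inside each distribution/architecture chroot,
# re-using the .config files saved by the earlier "make menuconfig" runs.
KVER=2.6.22    # illustrative
for dist in etch dapper edgy feisty gutsy; do
    for arch in i386 amd64; do
        root="/srv/chroots/$dist-$arch"
        cp "configs/$dist-$arch.config" "$root/usr/src/linux-$KVER/.config"
        chroot "$root" sh -c \
          "cd /usr/src/linux-$KVER && make-kpkg --rootcmd fakeroot --initrd kernel_image"
    done
done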

This process, as you could imagine, takes several hours to complete. Then there's the testing ...

I'm sure there must be other people with this kind of need, but I was surprised to find nothing in my searches.

ObRandom: I'm tempted to switch from song-lyrics to film names as post titles. Undecided as yet. I guess it doesn't really matter, just gives me a small amount of amusement. Even now.

| No comments

 

When the day is through

4 November 2007 21:50

The webpages for the Debian Security Audit Project have been outdated for quite some time, mostly because they contained two static pages comprised of large lists which were annoying to update:

  • The list of security advisories we've been responsible for.
  • The list of security-sensitive bug reports we'd made which didn't require a security advisory. (i.e. packages in sid)

Last week, with the help of Rhonda, and a couple of other people, I removed the static lists and replaced them with simple data files, and a perl script to convert those data files into HTML content.
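Conceptually the script is nothing more than the following (the real data files have more fields; this layout is purely illustrative):

#!/usr/bin/perl -w
# Sketch: turn a pipe-separated list of advisories into an HTML list.
use strict;

print "<ul>\n";
while ( my $line = <> )
{
    chomp( $line );
    my ( $id, $date, $package ) = split( /\|/, $line );
    print "  <li>$date - <a href=\"$id.html\">$id</a> ($package)</li>\n";
}
print "</ul>\n";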

Now the advisories which have been released are ordered by date, and broken down into years. For example 2002, 2003, & etc. We also have a list of all the people who've been credited with at least one advisory.

There are some outstanding advisories still to be included, but otherwise I'm much happier with the process (and feel only guilt at the breakage of the translations).

There isn't much actual auditing happening at the moment, with only four advisories released in 2007 compared to many more at the peak. But I guess that is a separate problem, and one that I can do less about - short of finding more time to look at source code.

| No comments

 

Listen to me when I'm telling you

14 February 2008 21:50

So today I'm a little bit lazy and I've got the day off work. As my previous plan suggested I wanted to spend at least some of the day tackling semi-random bugs. Earlier I picked a victim: less.

less rocks, and I use it daily. I even wrote an introduction to less once upon a time.

So let's take a look at two bugs from the long-neglected pile. These two issues are basically the same:

They seem like simple ones to fix, with the same root cause. Here's an example if you want to play along at home:

 cp /dev/null testing
 gzip testing
 zless testing.gz

What do you see? I see this:

"testing.gz" may be a binary file.  See it anyway?

When I select "y" I see the raw binary of the compressed file.

So, we can reproduce it. Now to see why it happens. /bin/zless comes from the gzip package and is a simple shell script:

#!/bin/sh
# snipped a lot of text
LESSOPEN="|gzip -cdfq -- %s"; export LESSOPEN
exec less "$@"

So what happens if we run that?

$ LESSOPEN="|gzip -cdfq -- ~/testing.gz" /usr/bin/less ~/testing.gz
"/home/skx/testing.gz" may be a binary file.  See it anyway?

i.e. it fails in the same way. Interestingly this works just fine:

gzip -cdfq -- ~/testing.gz | less

So we've learnt something interesting and useful: the bug appears when LESSOPEN is involved. Which suggests we should "apt-get source less" and then "rgrep LESSOPEN ~/less-*/".

Doing so reveals the following function in filename.c:

	public char *
open_altfile(filename, pf, pfd)
	char *filename;
	int *pf;
	void **pfd;
{

/* code to test whether $LESSOPEN is set, and attempt to run the
   command if it is */

		/*
		 * Read one char to see if the pipe will produce any data.
		 * If it does, push the char back on the pipe.
		 */
		f = fileno(fd);
		SET_BINARY(f);

		if (read(f, &c, 1) != 1)
		{
			/*
			 * Pipe is empty.  This means there is no alt file.
			 */
			pclose(fd);
			return (NULL);
		}
		ch_ungetchar(c);
		*pfd = (void *) fd;
		*pf = f;
		return (save("-"));

That might not be digestible, but basically less runs the command specified in $LESSOPEN. If it can read a single character of output from that command it replaces the file it was going to read with the output of the command instead!

(i.e. Here less failed to read a single character, because our gzipped file was zero bytes long! So instead it reverted to showing the binary gzipped file.)

So we have a solution: if we want this to work we merely remove the "read a single character" test. I can't think of a circumstance in which that would do the wrong thing, so I've submitted a patch to do that.
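In terms of the code above the change amounts to something like this (sketched from memory, not the literal patch I submitted):

		f = fileno(fd);
		SET_BINARY(f);

		/*
		 * Don't insist that the pipe produces output; an empty
		 * replacement file is still a valid replacement file.
		 */
		if (read(f, &c, 1) == 1)
			ch_ungetchar(c);

		*pfd = (void *) fd;
		*pf = f;
		return (save("-"));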

Bug(s) fixed.

Incidentally if you like these kind of "debugging by example" posts, or hate them, do let me know, so I'll know whether to take notes next time or not.

| 22 comments

 

Some people get by with a little understanding

9 March 2008 21:50

Since my last example of fixing a bug received some interesting feedback (although I notice no upload of the package in question ..) we'll have another go.

Looking over my ~/.bash_history file one command I use multiple times a day is make. Happily GNU make has at least one interesting bug open:

I verified this bug by saving the Makefile in the report and running make:

skx@gold:~$ make
make: file.c:84: lookup_file: Assertion `*name != '\0'' failed.
Aborted

(OK so this isn't a segfault; but an assertion failure is just as bad. Honest!)

So I downloaded the source to make, and rebuilt it. This left me with a binary with debugging symbols. The execution was much more interesting this time round:

skx@gold:~$ ./make
*** glibc detected ***
  /home/skx/./make: double free or corruption (fasttop): 0x00000000006327b0 ***
======= Backtrace: =========
/lib/libc.so.6[0x2b273dbdd8a8]
/lib/libc.so.6(cfree+0x76)[0x2b273dbdf9b6]
/home/skx/./make[0x4120a5]
/home/skx/./make[0x4068ee]
/home/skx/./make[0x406fb2]
...
[snip mucho texto]

And once I'd allowed core-file creation ("ulimit -c 9999999") I found I had a core file to help debugging.

Running the unstripped version under gdb showed this:

(gdb) up
#5  0x00000000004120a5 in multi_glob (chain=0x1c, size=40) at read.c:3106
3106			    free (memname);

So it seems likely that this free is causing the abort. There are two simple things to do here:

  • Comment out the free() call - to see if the crash goes away (!)
  • Understand the code to see why this pointer might be causing us pain.

To get started I did the first of these: Commenting out the free() call did indeed fix the problem, or at least mask it (at the cost of a memory leak):

skx@gold:~$ ./make
make: *** No rule to make target `Erreur_Lexicale.o', needed by `compilateur'.  Stop.

So, now we need to go back to read.c and see why that free was causing problems.

The function containing the free() is "multi_glob". It has scary pointer magic in it, and it took me a lot of tracing to determine the source of the bug. In short we need to change this:

free (memname);

To this:

free (memname);
memname = 0;

Otherwise the memory is freed multiple times - once each time through the loop in that function; see the source for details.
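The underlying pattern is a classic, and worth distilling into a standalone illustration - this is not the make source, just the shape of the bug:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *memname = NULL;
    int i;

    for (i = 0; i < 3; i++)
    {
        /* Only some iterations allocate ... */
        if (i == 0)
            memname = strdup("example");

        /* ... but every iteration frees.  Without the reset below
         * the later iterations free the same pointer again: boom. */
        free(memname);
        memname = NULL;    /* free(NULL) is harmless, so this is safe */
    }
    return 0;
}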

Patch mailed.

| 5 comments

 

E.T. phone home

16 March 2008 21:50

I've just finished reading "Don't You Have Time to Think", a collection of letters written to and from Richard P. Feynman. A birthday present from my wishlist.

Previously I've read the collection of letters to/from Tolkien. (Several times actually. Very nice collection!)

It suddenly struck me that over my lifetime I've probably written fewer than 200 letters to people.

When I was young I had a couple of pen pals, and when I was entering university I was involved with a couple of play-by-mail games, which involved writing random letters about strategy & etc.

Personal letters though? I've written very few, and I think they've mostly consisted of letters to my partner/partners of the time.

(For example Megan went home for a few months at the end of a university year about two months after I initially met her. So there were many letters back and forth. Recently she spent two months working in the USA; counting eggs and avoiding alligators so again there was a flurry of written letters, maybe 20 total during the duration of her trip.)

I guess that most of my (hand)written messages to people have been in the form of postcards whilst on holiday.

A long time ago I offered to mail postcards to Debian developers. I know I sent at least two, and I received at least one back - but the thing I remember most was exchanging addresses with Amayita and getting into character set issues. Her emails, containing her Spanish address, were difficult to understand as my mutt/console refused to display the foreign character set properly.

I can't recall whether she did ultimately receive a card from me, but I'm sure she'll remind me if she did.

Anyway I have no resolution, intention, or expectation that I will suddenly start writing more physical mails to people. But I think it almost counts as something we do less of these days. The telephone and internet have become the norm.

In some ways this is fantastic. In others it is less good.

On the other hand my handwriting is so bad that maybe this isn't necessarily a problem.

ObQuote: E.T.

| 5 comments

 

Thanks for the flashback

20 March 2008 21:50

Well this has been a busy week, and no mistake.

Still I've advocated a new individual who wishes to become a Debian Developer. I guess now I get to watch second-hand to see how long the process takes!

(I messed up though; the first sponsored upload for her has the wrong mail address. It'll get REJECTed, and then we'll try again. D'oh.)

In more optimistic news this weekend I'm going to attempt to finish painting my front room. The painting of this room was started on the 3rd of February, so we're coming close to two months. A new record!

Also this weekend I must write some letters ...

Tonight will involve some Balvenie and a copy of The Godfather (part 1).

ObQuote: Eight Legged Freaks.

| No comments

 

Don't you just hate loose ends?

21 March 2008 21:50

Today I spent a while fixing some more segfault bugs. I guess that this work qualifies as either fixing RC bugs, or potential security bugs.

Anyway I did an NMU of libpam-tmpdir a while back to fix all but one of the open bugs against it.

I provided a patch for #461625 yelp: segfault while loading info documentation, which fixes the symptoms of bad info-parsing, and avoids the segfault.

I also looked into #466771 busybox cpio: double free or corruption during cpio extraction of hardlinks - but it turns out that was already fixed in Sid.

Finally I found a segfault bug open against ftp:

To reproduce this bug run:

skx@gold:~$ ftp ftp.debian.org
220 saens.debian.org FTP server (vsftpd)
Name (ftp.debian.org:skx): anonymous
331 Please specify the password
Password: [email protected]
ftp> cd debian/doc
250 Directory successfully changed.
ftp> get dedication-2.2.cn.txt dedication-2.2.de.txt dedication-2.2.es.txt ..
local: dedication-2.2.de.txt remote: dedication-2.2.cn.txt
Segmentation fault

You need to repeat the arguments about 50 times - keep adding more and more copies of the three files to the line until you get the crash.

It isn't interesting as a security issue as it is client-side only; but as a trivially reproducible issue it becomes fun to solve.

Let's build it with debugging information, and run it again. Here is what we see:

Core was generated by `./ftp/ftp ftp.debian.org'.
Program terminated with signal 11, Segmentation fault.
#0  0x00002b85ad77f1cf in fwrite () from /lib/libc.so.6
(gdb) up
#1  0x0000000000408c3e in command (fmt=0x40dd15 "TYPE %s") at ftp.c:366
366		fputs("\r\n", cout);
(gdb) up
#2  0x0000000000402c3e in changetype (newtype=3, show=<value optimized out>)
    at cmds.c:348
348			comret = command("TYPE %s", p->t_mode);
(gdb) up
#3  0x000000000040a569 in recvrequest (cmd=<value optimized out>,
    local=0x623d10 "dedication-2.2.de.txt",
    remote=0x6238d4 "dedication-2.2.cn.txt", lmode=0x40e310 "w",
    printnames=<value optimized out>) at ftp.c:935
935			changetype(type, 0);

OK, so things look trashed, but not in the middle of a copy/sprintf/similar - i.e. there is no obvious problem.

Let's take a step back. We know that the crash occurs when we send a long command line. Looking over the code we see the function main.c:makeargv(). This attempts to split the input command line string into an array of tokens.

Interestingly we see this:

char **
makeargv(int *pargc, char **parg)
{
	static char *rargv[20];
	int rargc = 0;
	char **argp;

I wonder what happens if we set the 20 to 2048? Good guess. The crash no longer occurs. (Though I'm sure it would if you entered enough tokens...)

So we know that the crash relates to the number of space-separated tokens upon the command line. If we increase the limit we'll be fine. But of course we want to fix it properly. There are two ways forward:

  • Abort handling a line if there are >15 "space" characters on the line.
  • Recode the makeargv function to work properly.

I did eventually submit a patch to the bug report which uses dynamic memory allocation, and should always work. Job done.
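The shape of that fix is roughly as follows - a simplified sketch, since the real function's signature and tokenising differ:

#include <stdlib.h>
#include <string.h>

/* Grow the argument vector on demand rather than trusting
 * a fixed-size static array. */
char **
makeargv(int *pargc, char *line)
{
	size_t cap = 16;
	int rargc = 0;
	char **rargv = malloc(cap * sizeof(char *));
	char *tok;

	if (rargv == NULL)
		return NULL;

	for (tok = strtok(line, " \t"); tok != NULL; tok = strtok(NULL, " \t"))
	{
		if ((size_t)(rargc + 1) >= cap)
		{
			char **tmp = realloc(rargv, (cap *= 2) * sizeof(char *));
			if (tmp == NULL)
			{
				free(rargv);
				return NULL;
			}
			rargv = tmp;
		}
		rargv[rargc++] = tok;
	}
	rargv[rargc] = NULL;
	*pargc = rargc;
	return rargv;
}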

I mailed the maintainer of ftp and said that unless I heard differently I'd NMU and clean up the package in a week.

All being well this entry will be nicely truncated in the RSS feeds as support for the <cut> tag was the main new feature in my previous upload of chronicle - the blog compiler I use/wrote/maintain.

ObQuote: Razor Blade Smile

| No comments

 

so you might get lucky, and you might not

7 April 2008 21:50

Emacs

One thing I do a lot is select a region of text, then have it replaced with the output of a command.

The most common job is sorting a number of lines, such as "use XX:YY;" lines in perl scripts.

Finally, having gotten annoyed enough about how clunky shell-command-on-region was, I wrote my own lisp function:
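(Something along these lines - a sketch of the idea rather than my actual function:)

;; Sketch: replace the active region with the output of a shell command.
(defun shell-replace-region (start end command)
  "Replace the region between START and END with the output of COMMAND."
  (interactive "r\nsShell command on region: ")
  ;; The final two arguments mean: send output to the current buffer,
  ;; replacing the region.
  (shell-command-on-region start end command t t))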

Only after that did I discover M-x sort-lines. D'oh. Still I guess my solution is more general, and less difficult to use. (I find the use of the Emacs prefix troublesome to type, since you have to do it in advance - I almost always forget.)

I also learnt of M-x list-matching-lines yesterday. That's a new discovery which really rocks. (I can use "^sub " to find a list of subroutines, etc.)

NEW-queue

This could be improved, and fleshed out a lot if there were any interest.

But it's neat as-is:

#!/bin/sh
#
#  Dump packages in the NEW queue.
#
#  This could be improved, perhaps:
#
#  --show-names --show-dates, etc.  Or just show all info in a table.
#
wget --quiet -O-  http://ftp-master.debian.org/new.html | \
 perl -ne 'print $1 . "\n" if ( $_ =~ /^<td valign="top" class="sid">([^<]+)<\/td>$/ )'

I thought there was something similar in the devscripts package, or contained within debian-goodies but apparently not.

ObQuote:Battle Royale

| 2 comments

 

I'm the only one qualified to remote-pilot the ship anyway.

11 April 2008 21:50

http://10.print.debian.rocks.twentygototen.org/

ObQuote: Aliens

| No comments

 

That wasn't true. Made it up. Shouldn't have done that. Sorry.

18 April 2008 21:50

Chronicle

My blog compiler received a bit of love recently, primarily because MJ Ray wanted to use it.

As mentioned before I've added a simple spooling system, and the mercurial repository now contains a simple RSS importer.

Debian Work

In other news I've been working on various Debian packages; here is a brief summary:

bash-completion

After seeing a RFH bug I closed a few bash-completion bugs, and submitted patches for a couple more.

I was intending to do more, but I'm still waiting for the package code to be uploaded to the alioth project.

javascript work

I've updated the jquery package I uploaded to follow the new "Javascript standard" - in quotes only because it is both minimal and new.

Once the alioth project has been configured I'll upload my sources.

Apache2

I've agreed to work on a couple of SSL-related bugs in the Apache 2.x package(s) - time disappeared but I hope to get that done this weekend.

Initially that was because I was hoping I could trade a little love for getting a minor patch applied to mod_vhost_alias - instead I've now copied that module into libapache2-mod-vhost-bytemark and we'll maintain our own external module.

Hardware

I've been loaned a Nokia 770 which is very nice. Having used it with vim, ssh & etc I think that I'd rather have a device with a real keyboard.

The Nokia 810 looks pretty ideal for me. I'm going to be asking around to see if I can get a donated/loaned device to play with for a while before I take the plunge and pay for one of my own.

I've got a couple more things on the go at the moment, but mostly being outdoors is more interesting to me than the alternative. Hence the downturn in writing and releasing security advisories.

I'll pick things up more fully over the coming weeks I'm sure.

ObQuote: Shaun of the Dead

| No comments

 

Offer me everything I ask for

29 April 2008 21:50

I installed Debian upon a new desktop machine yesterday, via a PXE network boot.

It was painless.

Getting xen up and running, with a 32-bit guest and a 64-bit guest each running XDMCP & VNC was also pretty straightforward.

There is a minor outstanding problem with the 32-bit xen guest though; connecting to it from dom0, via XDMCP, I see only a blank window - no login manager running.

GDM appears painlessly when I connect via VNC.

The relevant configuration file looks like this:

# /etc/gdm/gdm.conf
[security]
AllowRoot=true
AllowRemoteRoot=true

[xdmcp]
Enable=true

The same configuration on the 64-bit guest works OK for both cases.

(I like to use XDMCP for accessing the desktop of Xen guests, since it means that I get it all full-screen, and don't have to worry about shortcuts affecting the host system and not the guest - as is the case if you're connecting via VNC, etc).
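(Connecting is just a matter of running a second X server which queries the guest - e.g. the following, where the display number and hostname are illustrative:)

# Start a second X server on display :1, asking the guest for a login screen.
X :1 -query xen-guest.example.org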

Weirdness. Help welcome; I'm not 100% sure where to look.

Anyway, once again, a huge thank you to the Debian Developers, bug submitters, and anybody else involved peripherally (such as myself!) with Debian!

I love it when a plan comes together.

SSL

ObRandom: Where is the cheapest place to get an SSL certificate, for two years, which will work with my shiny Apache2 install?

Somebody, rightly, called me for not having SSL available as an option on my mail filtering website.

I've installed a self-signed certificate just now, but I will need to pay the money and buy a "real" one shortly.

So far completessl.com seems to be high in the running:

  • 1 year - £26
  • 2 years - £49

For double-bonus points they accept Paypal, which most of my customers pay with.

ObQuote: The Princess Bride

| 9 comments

 

You're not too technical, just ugly, gross ugly

7 May 2008 21:50

Well a brief post about what I've been up to over the past few days.

An alioth project was created for the maintenance of the bash-completion package. I spent about 40 minutes yesterday committing fixes to some of the low-lying fruit.

I suspect I'll do a little more of that, and then back off. I only started looking at the package because there was a request-for-help bug filed against it. It works well enough for me with some small local additions.

The big decision for the bash-completion project is how to go forwards from the current situation, where the project is basically a large monolithic script. Ideally the openssh-client package should contain the completion for ssh, scp, etc.

Making that transition will be hard. But interesting.

In other news I submitted a couple of "make-work" patches to the QPSMTPD SMTP proxy - just tidying up some minor cosmetic issues. I'm starting to get to the point where I understand the internals pretty well now, which is a good thing!

I love working on QPSMTPD. It rocks. It is basically the core of my antispam service and a real delight to code for. I cannot emphasise that enough - some projects are just so obviously coded properly. Hard to replicate, easy to recognise...

I've been working on my own pre-connection system which is a little more specialised, making use of the Class::Pluggable library - packaged for Debian by Sarah.

(The world -> Pre-Connection/Load-Balancing Proxy -> QPSMTPD -> Exim4. No fragility there then ;)

Finally I made a tweak to the Debian Planet configuration. If you have Javascript disabled you'll no longer see the "Show Author"/"Hide Author" links. This is great for people who use Lynx, Links, or other minimal browsers.

TODO:

I'm still waiting for the javascript project to be set up so that I can work on importing my jQuery package.

I still need to sit down and work through the Apache2 bugs I identified as being simple to fix. I've got it building from SVN now though; so progress is being made!

Finally this weekend I need to sit down and find the time to answer Steve's "Team Questionnaire". Leave it any longer and it'll never get answered. Sigh.

ObQuote: Shooting Fish

| 2 comments

 

Manni - you're not dead yet.

18 June 2008 21:50

Well I'm back and ready to do some fun work.

In the meantime it seems that at least one of my crash-fixes, from the prior public bugfixing, has been uploaded:

I'm still a little bit frustrated that some of the other patches I made (to different packages) were ignored, but I guess I shouldn't be too worried. They'll get fixed sooner or later regardless of whether it was "my" fix.

In other news I've been stalling a little on the Debian Administration website.

There are a couple of reasonable articles in the submissions queue - but nothing really special. I can't help thinking that the next article, being a nice round number of 600, deserves something good/special/unique? Hard to quantify, but definitely something I'm thinking. I guess I'll leave it until the weekend, and if nothing presents itself I'll just go dequeue the pending pieces.

In other news I've managed to migrate the mail scanning service into a nicely split architecture - with minimal downtime.

I'm pleased that:

  • The architecture was changed massively from a single-machine orientated service to a trivially scalable one - and that this was essentially seamless.
  • My test cases really worked.
  • I've switched from being "toy" to being "small".
  • I've even pulled in a couple of new users.

Probably more changes to come once I've had a rest (but I guess I write about that elsewhere; because otherwise people get bored!).

The most obvious change to consider is to allow almost "instant-activation". I dislike having to manually approve and set up new domains, even if it does boil down to clicking a button on a webpage - so I'm thinking I should have a system in place such that you can sign up, add your domain, and be good to go without manual involvement. (Once DNS has propagated, obviously!)

Anyway enough writing. Ice-cream calls, and then I must see if more bugs have been reported against my packages...

ObQuote: Run Lola Run.

| No comments

 

To read makes our speaking English good.

5 July 2008 21:50

I've setup several repositories for apt-get in the past, usually using reprepro as the backend. Each time I've come up with a different scheme to maintain them.

Time to make things consistent with a helper tool:

skx@gold:~/hg/rapt$ ls input/
spambayes-threaded_0.1-1_all.deb        spambayes-threaded_0.1-1.dsc
spambayes-threaded_0.1-1_amd64.build    spambayes-threaded_0.1-1.dsc.asc
spambayes-threaded_0.1-1_amd64.changes  spambayes-threaded_0.1-1.tar.gz

So we have an input directory containing just the package(s) we want to be in the repository.

We have an (empty) output directory:

skx@gold:~/hg/rapt$ ls output/
skx@gold:~/hg/rapt$

Now lets run the magic:

skx@gold:~/hg/rapt$ ./bin/rapt --input=./input/ --output=./output/
Data seems not to be signed trying to use directly...
Data seems not to be signed trying to use directly...
Exporting indices...

What do we have now?

skx@gold:~/hg/rapt$ tree output/
output/
|-- dists
|   `-- etch
|       |-- Release
|       |-- main
|           |-- binary-amd64
|           |   |-- Packages
|           |   |-- Packages.bz2
|           |   |-- Packages.gz
|           |   `-- Release
|           `-- source
|               |-- Release
|               |-- Sources
|               |-- Sources.bz2
|               `-- Sources.gz
|-- index.html
`-- pool
    `-- main
        `-- s
            `-- spambayes-threaded
                |-- spambayes-threaded_0.1-1.dsc
                |-- spambayes-threaded_0.1-1.tar.gz
                `-- spambayes-threaded_0.1-1_all.deb

neat.

Every time you run the rapt tool the output pool and dists directories are removed and then rebuilt to contain only the packages located in the incoming/ directory. (More correctly only *.changes are processed. Not *.deb.)

This mode of operation might strike some people as odd - but I guess it depends on whether you view "incoming" to mean "packages to be added to the existing pool", or "packages to take as the incoming input to the pool generation process".

Anyway if it is useful to others feel free to clone it from the mercurial repository. There is no homepage yet, but it should be readable code and there is a minimum of supplied documentation in the script itself.

ObQuote: Buffy. Again.

| 2 comments

 

Father... father, the sleeper has awakened!

10 September 2008 21:50

To solve performance problems I've now started to switch my SMTP servers from using the "forkserver" version of qpsmtpd to using the "prefork" version.

Under testing qpsmtpd-prefork performed significantly better than the qpsmtpd-forkserver for handling incoming SMTP connections.

The loadavg of one machine has dropped from a constant 2.xx to 0.4x!

I'd love to see how the asynchronous server would behave, but that would require re-writing all my plugins to work in an asynchronous manner, which would be a significant undertaking.

(It would be nice if the qpsmtpd package available in Debian allowed you to choose between the two versions of the server - I will file a wishlist bug.)

ObQuote: Dune

| No comments

 

If there isn't a movie about it, it's not worth knowing, is it?

26 November 2008 21:50

So, I've just got a portable machine. I've configured it to be a pretty minimal installation of Debian Lenny, but one thing that makes me unhappy is mail handling.

By default it came with several exim4-* packages. Now in general Exim4 rocks. But it is a daemon running all the time, and an overhead I could live without.

I looked around to find a mail transport agent that would be more suited to the machine and was surprised to find nothing suitable.

Basically I figure the machine will never generate "real" emails. Instead it will only receive mails from cron, etc. The machine will never have a real fixed IP, and so relaying mail externally is a waste of time. The mail should just go somewhere predictable and local.

There are a couple of lightweight agents which will forward to another system, but nothing seems to exist which will queue mail locally only.

So I've hacked a simple script which will do the job.

Given the spool directory /var/spool/skxmail the following command:

skxmail root < /etc/passwd

Produces this:

/var/spool/skxmail/
`-- root
    |-- cur
    |-- new
    |   `-- 1227702470.P8218M243303Q22.vain.my.flat
    `-- tmp

4 directories, 1 file

That seems to be sufficient for my needs. (I support the flag which says "read the recipient from the body".)

Of course to do this properly I'd be setgid(mailgroup). Instead I assume that local means everybody can see it and /var/spool/skxmail is mode 777. Ooops.
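The core of the script is tiny - roughly the following sketch, though the real thing differs in detail:

#!/bin/sh
# skxmail sketch: queue stdin into a per-user local Maildir.
# Maildir-safe delivery: write to tmp/ then rename into new/.
SPOOL=/var/spool/skxmail
user=$1

dir="$SPOOL/$user"
mkdir -p "$dir/tmp" "$dir/new" "$dir/cur"

name="$(date +%s).P$$.$(hostname)"
cat > "$dir/tmp/$name" && mv "$dir/tmp/$name" "$dir/new/$name"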

Still happy to share if it sounds interesting.

ObFilm: Dogma

| 8 comments

 

I am not going to sit on my ass as the events that affect me unfold to determine the course of my life

17 December 2008 21:50

So I finally got round to vote on the Debian Lenny Firmware issue.

I can't help thinking that most of the discussion so far has been a waste of time, on the grounds that developers either

  • Care a lot about the subject and already have an opinion.
  • Don't care and get bored by the discussion.

Either way the discussion has two, or more, sides talking past each other rather than any convergence upon a consensus. (Me? I'm in the middle. On the fence. Getting bored. Hitting "Delete thread" a lot. Who knows, maybe I could have my vote influenced, but right now I'm just not caring enough either way.)

Anyway enough on that subject. Tonight I have mostly been looking around to come up with a simple way to give an "overview" of mail traffic. Something like:

  • Most prolific poster.
  • Most popular thread.
  • etc etc.

Individually these jobs are easy. Making it look pretty is hard. (Also I'd like to have archives of the lists in question, which are searchable.)

Seems like nothing out there does everything I want.

ObFilm: Ferris Bueller's Day Off.

| 1 comment

 

She must suffer to her last breath.

21 December 2008 21:50

Is Debian slowly tearing itself apart, or am I just unduly dramatic?

ObFilm: Kill Bill (volume two)

| 6 comments

 

What can you do? Sparta will need sons.

23 December 2008 21:50

Since I've ranted a little recently let's do another public bugfix. The last few times people seemed to like them, and writing things down helps me keep track of where I am, what I'm doing, and how soon it will be "beer o'clock".

So I looked over the release critical bug list, looking for things that might be easy to fix.

One bug jumped out at me:

I installed the package:

skx@gold:~$ apt-get install gnomad2
..
The following NEW packages will be installed
  gnomad2 libmtp7 libnjb5 libtagc0
..

Once I copied an .mp3 file to my home directory with a .ogg suffix, I got a segfault on startup:

skx@gold:~$ cp /home/music/Audio/RedDwarf-back-to-reality.mp3 ~/foo.ogg
skx@gold:~$ gnomad2
..
LIBMTP_Get_First_Device: No Devices Attached
PDE device NULL.
TagLib: Ogg::File::packet() -- Could not find the requested packet.
TagLib: Vorbis::File::read() - Could not find the Vorbis comment header.
[segfault]
skx@gold:~$

So I downloaded the source ("apt-get source gnomad2"), and the dependencies for rebuilding ("apt-get build-dep gnomad2"). This allowed me to rebuild it locally:

skx@gold:/tmp/gnomad2-2.9.1$ ./configure --enable-debug && make
skx@gold:/tmp/gnomad2-2.9.1$ cd src/

And now it can be run under GDB.

skx@gold:/tmp/gnomad2-2.9.1/src$ gdb ./gnomad2
GNU gdb 6.8-debian
...
(gdb) run
Starting program: /tmp/gnomad2-2.9.1/src/gnomad2
...
LIBMTP_Get_First_Device: No Devices Attached
PDE device NULL.
[New Thread 0x41dcd950 (LWP 23593)]
TagLib: Ogg::File::packet() -- Could not find the requested packet.
TagLib: Vorbis::File::read() - Could not find the Vorbis comment header.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x41dcd950 (LWP 23593)]
0x00007fc89e0340b0 in taglib_tag_artist () from /usr/lib/libtag_c.so.0
(gdb)

Interestingly the crash comes from a library, libtag_c.so.0. So either:

  • The bug is really in the libtag module.
  • The gnomad2 package doesn't handle a failure case it should.

Time to guess which it is. Charitably we'll assume any library that segfaults will be quickly fixed, because it will have more users than a single program.

Moving on we can look at the gnomad2 source for mentions of taglib. Several mentions are found, but src/tagfile.c is clearly the correct place to look. That file contains the following code:

void
get_tag_for_file (metadata_t *meta)
{
  gchar *tmppath = filename_fromutf8(meta->path);
  TagLib_Tag *tag;
  const TagLib_AudioProperties *properties;
  TagLib_File *file = taglib_file_new(tmppath);

  if(file == NULL) {
    g_printf("could not open file %s", tmppath);
    g_free(tmppath);
    return;
  }
  g_free(tmppath);

  tag = taglib_file_tag(file);
  properties = taglib_file_audioproperties(file);

  gchar* artist = taglib_tag_artist(tag);
  ..

This looks like a great place to explore because opening a file to read the tags is most likely where the crash is going to be coming from.

Interestingly we see the code:

  • Calls taglib_file_new to get a handle of some kind relating to the file.
  • Tests for errors with that operation.
  • Calls taglib_file_tag to fetch tag information using the handle, and indirectly the file.
  • But then uses taglib_tag_artist to fetch the artist (?I guess?) without testing that the handle previously obtained was valid.

Let us guess that the file opening is succeeding but that the tag structure fetched via taglib_file_tag is NULL - and this causes the crash.

A quick edit:

  g_free(tmppath);

  tag = taglib_file_tag(file);
  if(tag == NULL) {
    g_printf("tag was null");
    taglib_file_free(file);  /* don't leak the handle */
    return;
  }

  properties = taglib_file_audioproperties(file);
  gchar* artist = taglib_tag_artist(tag);

Rebuild, and the segfault is gone. We have a winner. Now we just need to file a patch...

ObFilm: 300

| 7 comments

 

Guess he wasn't too popular at the end, huh?

23 December 2008 21:50

Previously I've posted a couple of running commentaries describing how I've examined and gone about fixing a couple of bugs in Debian packages I use, enjoy, or have stumbled upon by accident.

Each of these commentaries has resulted in a change or two to the affected software to make the bug vanish.

Fixing the bug is usually the hard part, but obviously it isn't the only thing you need to do. While it is fun to have a locally fixed piece of software, if you don't share the fix then the very next release will have the same bug again - and you'll spend your life doing nothing more than fixing the same bug again and again.

Generally the way I report my fixes is by sending email to the Debian bug tracker - because I usually only try to fix bugs that I see reported already. Specifically I tend to only care about:

  • Bugs in packages that I maintain.
  • Bugs that affect me personally. I'm selfish.
  • Bugs in packages I use, even if they don't affect me.
  • Segfaults. Because segfaults and security issues often go hand in hand.

So my starting point is generally an existing bug report, such as the last bug I attempted to fix:

This bug was pretty simple to track down, and once I had added a couple of lines to the source to fix it creating the patch and reporting it was pretty simple.

The way I generally work is to download the source tree of the Debian package to my local system and work with it in-place until I think I've fixed the issue. I generally get the current sources to a package by running:

apt-get source "package"

Once I'm done fixing I'll want to create a patch. A patch is just a simple way of saying:

  • open file foo.c
  • Change "xxx" on line 12 to be "yyy".
  • Add "blah blah" after line 25

Assuming I made my changes in the local source in /tmp/gnomad2-2.9.1 I'll move that somewhere safe, and download another copy of the unmodified source, so I can create a diff and ensure that I've got the change recorded correctly:

skx@gold:/tmp$ mv gnomad2-2.9.1 gnomad2-2.9.1.new
skx@gold:/tmp$ apt-get source gnomad2

Now I have two trees:

  • /tmp/gnomad2-2.9.1.new - My modified, fixed, and updated directory.
  • /tmp/gnomad2-2.9.1 - The current source in Debian's unstable branch.

Creating the patch just means running:

 diff --recursive \
      --ignore-space-change \
      --context \
 gnomad2-2.9.1 gnomad2-2.9.1.new/

This will output the diff to the console. You can save it to a file too by using tee:

 diff --recursive \
      --ignore-space-change \
      --context \
 gnomad2-2.9.1 gnomad2-2.9.1.new/  | tee /tmp/patch.diff

If the new directory isn't clean your patch will include things like:

Only in gnomad2-2.9.1.new/src: player.o
Only in gnomad2-2.9.1.new/src: playlists.o
Only in gnomad2-2.9.1.new/src: prefs.o
Only in gnomad2-2.9.1.new/src: tagfile.o
Only in gnomad2-2.9.1.new/src: util.o
Only in gnomad2-2.9.1.new/src: wavfile.o

Just edit those lines out of the diff.

So the next step is to submit that to the bug report. Simply mail the bug's address - nnn@bugs.debian.org, where nnn is the bug number - including your patch in the mail. If all goes well you'll receive an auto-reply after a while.

Finally, to make things neat, you can manipulate the bug tracker by email by sending a mail:

To: control@bugs.debian.org
Subject: updates

tags 12344 + patch
end
stop
bored now

Here is one I sent earlier.

This will ensure that the bug is reported as having a patch present in it.

Job done.

In an ideal world the next time the package is uploaded to Debian the bug will be fixed, marked as closed, and the world will be a little happier.

In a non-ideal world your patch will sit in the bug tracker for years with no further comment. If that happens there is not too much you can do, except send reminders by email, or distract yourself with a nice curry.

Happily Debian maintainers really do seem to appreciate bug fixes, and I'd say it is rare that my fixes have been ignored. It happens, but not often enough to make me give up.

ObFilm: Ghostbusters 2

| 7 comments

 

Death is... whimsical... today.

12 January 2009 21:50

I'm not sure how you can pre-announce something in a way that cannot be later faked.

The best I can imagine is you write it in a text file and post the size / hash of the file.

steve@skx:~$ ls -l 10-march-2009
-rw-r--r-- 1 steve users 234 Jan 12 21:40 10-march-2009
steve@skx:~$ sha1sum 10-march-2009
99d1b6d625ed4c15a3be2be5fec63c17941c370d  10-march-2009
steve@skx:~$ md5sum 10-march-2009
1a0e68b8fbb3b0fe30e5b4a9413ceeec  10-march-2009

I don't need anybody to keep me honest, but I welcome interesting suggestions on neater ways to pre-confirm you have content that hasn't been changed between being written and being released...?

I guess you could use GPG and a disposable key-pair, and then post the secret key afterward, but that feels kinda wrong too.

Update: of course you could just post a detached signature. D'oh.
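i.e. Publish only the signature up-front, and the content later:

# Now: create and publish the detached signature alone.
gpg --detach-sign 10-march-2009      # produces 10-march-2009.sig

# Later: release the file itself; anybody can check it against
# the signature that was published in advance.
gpg --verify 10-march-2009.sig 10-march-2009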

Shamir's Secret Sharing could be another option - posting just enough pieces of the secret to make recovery possible with the addition of one piece that was withheld until the later date. Jake wrote a nice introduction to secret sharing a couple of years ago.

ObFilm: Léon

| 12 comments

 

Nobody likes a perky goth

3 March 2009 21:50

Debian project leader elections are coming up soon. My vote will go to the candidate that:

  • Makes no promises for significant projectwide change.
  • Provides at least one mail every 8 weeks to summarise "stuff".

That's all.

I guess most people don't really care about being DPL per se; instead they stand for election to accomplish a pet project, or two. (Not that there is anything wrong with that. If your platform says "Elect me and I'll do $foo" then you've got implicit support.)

A recurring theme in platforms has been communication, and I think that over time it has got a lot better. Regardless, if there's one thing I want to see from the DPL in 2009 it is even more communication.

In other news, depending on your timezone, today's date is 3/3/9 - enjoy it while it lasts. The next similar date is 4/4/16 - seven years in the future.

ObTitle: Blood Ties

| 1 comment

 

I may have kept you chained up in that room but it was for your own good.

21 March 2009 21:50

Last week I resigned from my position as member of the Debian Security Team.

Historically several Debian teams have had members inactive for months and years at a time, and I'd rather leave of my own volition than end up listed but inactive like that.

It's been a pleasure working with all members of the team, past and current (especially Joey), and who knows I might return in the future.

If you're interested in security work then getting involved isn't difficult. It just takes time, patience, and practice.

ObFilm: The Goonies

| No comments

 

Are you sure you don't mind me going without you?

17 March 2010 21:50

Recently I received a small flurry of patches to my blog compiler from Chris Frey. These patches significantly speed up rebuilding a static blog when using Danga's memcached.

The speedup is sufficiently fast that my prior SQLite based approach is no longer required - and (re)building my blog now takes on the order of 5 seconds.

On the topic of other people's blogs I've been enjoying David Watson's recent photo challenge. I was almost tempted to join in, but I'm not sure I could manage one every day - although I can pretend I recently carried out my first real photoshoot.

I'm still taking pictures of "things/places" but I'm starting to enjoy "people" more. With a bit of luck I'll get some more people to pose in the near future, even if I have to rely upon posting to gumtree for local bodies!

ObFilm: Love Actually

| 2 comments

 

I don't like it when people yell at me for no reason at all

26 March 2010 21:50

Ubuntu always gets a lot of coverage in blogs, and the recent controversy over the realisation that it isn't a 100% community-made distribution has triggered yet another round of this.

A lot of the controversy, coverage, and attention can be laid at the feet of Canonical themselves; I think it is fair to say that the visibility, hype, advertising, and the goal of trying to be all things to all men mean that even relatively trivial issues can easily get blown out of proportion, and to a certain extent this is self-inflicted. Live by the sword media & etc ...

I think it is fair to say that Ubuntu has attracted a huge swathe of non-technical users. They want something "easy", "free", and "sexy", but more than that they want to use their computer, not develop the operating system.

When a particular bug report, with 400+ comments, hits the press we're primarily seeing a marketing failure rather than a technical one. The realisation that yes, bugs are reported, but no, the community (of users) doesn't get input into every single thing, is as it should be. If you look back over "controversy" in the past you'll see comments from the non-technical users which are tantamount to blackmail:

This should be fixed ... or I'm gonna .. install .. gentoo. yeah. really.

(Similarly you see many comments of the form "I agree", "oh noes", or "Please revert ASAP", rather than technical arguments.)

This non-technical nature of the userbase is also readily apparent if you browse through the answers to problems posted in forums, for example "Delete this file, I don't know why it works but it fixes it for me!!2!". (I've seen some truly horrific advice upon Ubuntu forums, even going so far as chmodding various parts of the system to allow users to write binaries to /bin.)

Similarly you'll see that the launchpad is full of generic linux misunderstandings and bugs that aren't "real". The unfortunate fact is that the Ubuntu bug tracker is a wasteland in many places:

  • Lots and lots and lots of users reporting bugs.
  • Those bugs being ignored for huge periods of time
    • Except for "Hey is this still present in $pending-release?"
  • The issue isn't that Ubuntu developers don't care, the issue is one of manpower.

The tight timescale of releases combined with the sheer number of incoming bug reports means that often issues are overlooked. (For example one bug that bit a colleague is #402188 - on the one hand it is a trivial bug, on the other hand it's readily apparent to users. If something like that can be missed it makes you wonder ..?) ObRandom: Ubuntu has 100 bugs open against its Vim package, some of which have been marked NEW since 2009 (i.e. untouched, ignored). By contrast the Debian vim package has far fewer bugs. I'm sure there are packages where the situation is reversed, but I don't think this is an unusual comparison.

Finally, in addition to the sheer number of bugs and the tight timescales, it has to be noted that the ratio of developers to users is minuscule, and this in turn has led to some interesting solutions. The Ubuntu PPA system (personal package archive) should be a good thing. It should allow people to submit new packages for testing, for bugfixes, and for more visibility. Instead downloading a PPA file is no different from going to download.com and downloading a random binary - sure it might be legit, but there's no oversight, no quality control, and most likely no future updates.

Ubuntu as a distribution is interesting, and I'm not trying to be overly critical - a year or two ago, had somebody thrown money at me, I might have been inclined to accept it.

I think most of the perceived problems stem from a single common source, which is largely the issue of scale. (e.g. bug reports to bug handlers; developer numbers to user numbers.)

There are many good things to be said about Ubuntu (& Canonical) in addition to the negative ones that we see in the press or that I've perceived and mentioned above. The truth is it works for a lot of people, and the growing pains will continue until it either dies or both its audience and itself matures.

Either way I don't hate Ubuntu, in the same way that I don't hate Microsoft, Oracle, Fedora, Gentoo, or other mass-entities. There are pros and cons to be made for most of them, (and of course Debian itself is no different).

However I will say that every time I see people write "If you want a sexy/shiny/easy to use Linux desktop then install Ubuntu" I glance over at my Debian Lenny desktop, marvel at how sexy, shiny and easy to use it is, and get a little bit disappointed at our own marketing failure(s).

ObFilm: Day of the Woman

| 10 comments

 

Dwayne, I think you might be colorblind.

13 April 2010 21:50

It is unfortunate that most server packages don't separate out their init scripts into separate packages:

  • foo
    • Contains the server binary, associated config files, and libraries.
  • foo-run or foo-server
    • Contains the init script(s).

Right now it's a real pain to have to modify things like /etc/init.d/ssh to launch two daemons, running on two different ports, with two different configuration files.

Running multiple copies of SMTP daemons, databases, and similar things is basically more complex than it has to be, because our packages aren't setup for it.

If you maintain a daemon please do consider this; failing that, honouring a flag such as "DISABLED=true" in /etc/default/foo would allow people to use their own /etc/init.d/foo.local initscript. (That's not perfect, but it is a step in the right direction.)
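i.e. Nothing more complex than a fragment like this at the top of the packaged initscript (illustrative):

# /etc/init.d/foo (fragment)
# Let the admin opt out in favour of their own foo.local script.
[ -r /etc/default/foo ] && . /etc/default/foo

if [ "$DISABLED" = "true" ]; then
    exit 0
fi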

ObFilm: Little Miss Sunshine.

| 12 comments

 

I miss the old Debian

11 November 2010 21:50

I miss the days when Debian was about making software work well together.

These days the user mailing lists are full of posts from users asking for help on Ubuntu [*1*], people suggesting that we copy what Ubuntu has done, and people asking for "howtos" because documentation is too scary or too absent for them to read.

Yesterday the whole "commercial spam on Planet" debate started. It's yet another example of how in-fighting[*2*] seems to be the most fun part of Debian for too many.

Me? I started and folded a company. I got Debian help and users. Some threw money at me.

Joey Hess? Started making the nice-looking ikiwiki-powered branchable.com.

Commercial? Yes. Spam? No.

I guess there is little I can do. I could quit - as over time I've become less patient dealing with the project as a whole, but simultaneously more interested in dealing with a few specific people. But I suspect the net result would be no change. Either people would say "OK, bye" or worse still offer me flattery: "Don't go - we lurve you".

Meh.

I shouldn't write when I'm annoyed, but living in a hotel will do that to you.

Footy-Mc-Foot-notes. Cos HTML is hard. Lets go shopping Eat Cake.

1

The Ubuntu forums are largely full of the blind leading the blind. Or equally often the blind being ignored.

I do believe that an Ubuntu stackoverflow site would be more useful than forums. But that's perhaps naive. People will still most often say "My computer doesn't work", missing all the useful details.

The only obvious gain is you can avoid "me too!!!" comments, and "fix this now or I'm gonna go .. use gentoo?".

2

Back a few years, when people were less civil, some mailing lists and IRC channels were unpleasant places to be.

These days we've solved half the problem: People mostly don't swear at each other.

The distractions, the threads that don't die: even if you ignore them and don't join in the "hilarity" they still have a divisive, negative effect.

| 13 comments

 

It would be nice if we could record which files applications populate or read

1 January 2011 21:50

It would be really neat if there were some tool which recorded which dotfiles an application read, used, or created.

As an example emacs uses .emacs, but won't create it. However firefox will create and fill ~/.mozilla if it isn't present, and links will create ~/.links2.

What would we do with that data? I'm not sure off the top of my head, but I think it is interesting to collect regardless. Perhaps a simple tool such as apt-file to download the data and let you search:

who-creates ~/.covers
who-creates ~/.dia

Obviously the simple use is to purge user-data when matching packages are removed - e.g. dpkg-posttrigger hook. But that's a potentially dangerous thing to do.
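Gathering the data in the first place could be as crude as watching each application run under strace and picking out the dotfiles it touches - for example:

# Log every file-open the application performs, then pick out dotfiles.
strace -f -e trace=open,openat -o /tmp/trace.log firefox
grep "$HOME/\." /tmp/trace.log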

Anyway I'm just pondering - I expect that over time applications will start switching to using "centralised" settings such as ~/.gconf2 etc.

In the meantime I've started cleaning up ~/ on my own machines - things like ~/.spectemurc, ~/.grip, etc.

ObQuote: What a long sword. I like that in a man - Blood of the Samurai (Don't be tempted; awful film.)

| 8 comments

 

This week in brief

16 January 2011 21:50

This week in brief:

I've rejoined the Debian Security Team

My first (recent) DSA was released earlier today, with assistance from various team members. (My memory of the process was poor, and some things have changed in my absence.)

BlogSpam gains a new user

The BlogSpam API is now available for users of Trac.

Finally, before I go, I've noticed several people on Planet Debian report their photo-challenges; either a picture a day or one a week. I too take pictures, and I'm happy if I get one session a month.

I suspect some of my content might be a bit too racy for publication here. If you're not avoiding friendface-style sites you can follow "highlights" easily enough - or just look at the site.

ObQuote: "Be strong and you will be renewed. Identify. " - Logan's Run (1976)

| No comments

 

Upgrading from Lenny to Squeeze

16 February 2011 21:50

Rather than waiting for a few months, as I typically do, I decided to be brave and upgrade my main virtual machine from Lenny to Squeeze. That host runs QPSMTPD, Apache, thttpd, and my blogspam server; nothing too complex or atypical.

The upgrade was mostly painless; I was interrupted several times by debconf asking me if I wished to replace configuration files I'd modified, but otherwise there were only two significant messages in the process:

crm114

crm114 warned me that its spam database and/or configuration files had changed, would most likely result in brokenness post-upgrade, and that I should do something to avoid losing mail.

Happily this was expected.

sysv-rc

It transpired I had a couple of local init scripts which didn't have dependency information successfully encoded into them, so I couldn't migrate to dependency-based bootup.

Given that this server gets a reboot maybe once every six months that wasn't really worth telling me about; but never mind. No harm done.

That aside there were no major surprises; all services seemed to start normally and my use of locally-compiled backports meant that custom services largely upgraded in a clean fashion. The only exception was my patched copy of mutt which was replaced unexpectedly. That meant my lovely mutt-sidebar was horribly full of mailboxes, rather than showing only new messages. I created a hasty backported mutt package for Squeeze and made it available. (This patch a) enables the side-bar, and b) allows you to toggle between the display of all mailboxes and those with only new mail in them. It is buggy if you're using IMAP; but works for me. I would not choose to live without it.)

Now that I've had a quick scan over the machine the only other significant change was an upgrade of the mercurial revision control system, the updated templates broke my custom look & feel and also required some Apache mod_rewrite updates to allow simple clones via HTTP. (e.g. "hg clone http://asql.repository.steve.org.uk/").

So in conclusion:

  • The upgrade from Lenny to Squeeze (i386) worked well.
  • Before you begin, running "iptables -I INPUT -p tcp --dport 25 -j REJECT" will avoid some potential surprises
    • There are probably other services worth neutering, but I tend to only do this for SMTP.
  • Keeping notes of updated template files will be useful if you make such system-wide changes. (e.g. hgwebdir templates)

ObQuote - "Hmm, upgrades " - The Matrix Reloaded (shudder).

| 2 comments

 

Goodbye, world.

29 April 2011 21:50

Today I resigned from the Debian project. The following packages are up for adoption:

I'll remove myself from Planet Debian tomorrow, assuming the keyring revocation isn't swift and merciless.

ObQuote: This space is intentionally blank.

| 18 comments

 

Continuous integration that uses chroots?

12 June 2011 21:50

I'd like to setup some auto-builders for some projects - and these projects must be built upon Lenny, Squeeze, Lucid, and multiple other distros. (i386 and amd64 obviously.)

Looking around I figure it should be simple. There are a lot of continuous integration tools out there - but when looking at them in depth it seems like they all work in temporary directories and are a little different to how I'd expect them to be.

Ultimately I want to point a tool at a repository (mercurial), and receive a status report and a bunch of .deb packages for a number of distributions.

The alternative seems to be to write a simple queue submission system, then for each job popped from the queue run a script which:

  • Creates a new debootstrap-based chroot.
  • Installs build-essential, mercurial, etc.
  • Fetches the source.
  • Runs make.
  • Copies the files produced in ./binary-out/ to a safe location.
  • Cleans up.
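
A rough sketch of what each job might look like - every path, mirror, and name here is illustrative, and error-handling is omitted:

#!/bin/sh
# Process a single build-job: fresh chroot, toolchain, fetch, build, harvest.
set -e

REPO=$1       # e.g. http://repo.example.com/project
DIST=$2       # e.g. squeeze
ROOT=$(mktemp -d /srv/build/${DIST}.XXXXXX)
OUT=/srv/results/${DIST}

# Create a new debootstrap-based chroot.
debootstrap ${DIST} ${ROOT} http://ftp.uk.debian.org/debian/

# Install build-essential, mercurial, etc.
chroot ${ROOT} apt-get install --yes build-essential mercurial

# Fetch the source and run make.
chroot ${ROOT} sh -c "hg clone ${REPO} /build && cd /build && make dependencies test build"

# Copy the files produced in ./binary-out/ to a safe location, then clean up.
mkdir -p ${OUT}
cp ${ROOT}/build/binary-out/*.deb ${OUT}/
rm -rf ${ROOT}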

Surely this wheel must already exist? I guess it's a given that we have to find build-dependencies, and that we cannot just run "pbuilder *.dsc" - as the dsc doesn't exist in advance. We really need to run "make dependencies test build", or similar.

Hudson looked promising, but it builds things into /var/lib/hudson, and doesn't seem to support the use of either chroots or schroots.

ObQuote: "I feel like I should get you another sweater." - "Friends"

| 8 comments

 

Today I migrated from 32-bit to 64-bit, in-place

7 March 2012 21:50

This evening I sat down and migrated my personal virtual machine from a 32-bit installation of Debian GNU/Linux to a 64-bit installation.

I've been meaning to make this change for a good few months, but it took me until this evening until I decided it was as good a time as any.

Mostly the process is painless:

  • Ensure you have a 64-bit kernel, with support for 32-bit binaries too.
  • Install the 32-bit compatibility libraries, such that your old binaries work.
  • Overwrite your binaries and libraries in-place so you have a 64-bit base system.
  • Patch it up afterwards.

I overwrote a lot of the libraries and binaries on the system such that I had a working 64-bit apt-get, dpkg, sash, etc, and associated libraries. Then once I had that I could use those tools to pull the rest of the system up to date.
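
The core trick is that "dpkg-deb -x" unpacks a package's contents without running any maintainer scripts, so you can splat the amd64 files over the running system. Something like this - the version numbers are purely illustrative:

# Fetch the amd64 .debs for the essential tools, then unpack them in-place.
# (dpkg-deb -x extracts file contents only; no maintainer scripts run.)
dpkg-deb -x dpkg_1.15.8.13_amd64.deb /
dpkg-deb -x apt_0.8.10.3_amd64.deb /
dpkg-deb -x sash_3.7-10_amd64.deb /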

One thing I hadn't counted on is that I needed to have a 64-bit version of bzip2 such that "apt-get update" didn't complain about errors. I suspect I could have fixed that by re-configuring my system to disable compression. Still it was easily solved.

Along the way I also shot myself in the foot by having a local caching DNS resolver, listening on 127.0.0.1, which broke. With no DNS I couldn't use apt-get - but once the problem was identified it was trivial to fix.

Anyway all seems OK now. My websites are up, email is flowing and I guess anything else can wait until the morning.

ObQuote: "Somebody's coming up. Somebody serious." - Leon

| 7 comments

 

My code makes it into GNU Screen, and now you can use it. Possibly.

21 March 2012 21:50

Via Axel Beckert I learned today that GNU Screen is 25 years old, and although development is slow it has not ceased.

Back in 2008 I started to post about some annoyances with GNU Screen. At the time I posted a simple patch to implement the unbindall primitive. I posted some other patches and fixed a couple of bugs, but although there was some positive feedback initially over time that ceased completely. Regrettably I didn't feel there was a need to maintain a fork properly, so I quietly sighed, cried, and ceased.

In 2009 my code was moved upstream into the GNU Screen repository (+documentation update).

We're now in 2012. It looks like there might be a stable release of GNU Screen in the near future, which makes my code live "for real", but in the meantime the recent snapshot upload to Debian Experimental makes it available to the brave.
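
If you're one of the brave, this is the kind of thing my change allows, via ~/.screenrc (the re-added bindings below are just an example):

# Remove every default key-binding.
unbindall

# Re-add only the ones we actually want.
escape ^Aa
bind c screen
bind n next
bind p prev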

2008 - 2012. Four years to make my change visible to end-users. If I didn't use screen every day, and still have my own local version, I'd have forgotten about that entirely.

Still I guess this makes today a happy day!

Wheee!

ObQuote: "Thanks. For a while there I thought you were keeping it a secret. " - Escape To Victory

| No comments

 

Debian-Administration.org almost migrated

6 January 2013 21:50

The new version of the Debian Administration site is almost ready now. I'm just waiting on some back-end changes to happen on the excellent BigV hosting product.

I was hoping that the migration would be a fun "Christmas Project", but I had to wait for outside help once or twice and that pushed things back a little. Still it is hard to be anything other than grateful to folk who volunteer time, energy, and enthusiasm.

Otherwise this week has largely consisted of sleeping, planting baby spider-plants, shuffling other plants around (Aloe Vera, Cacti, etc), and enjoying my new moving plant (video isn't my specific plant).

I've spent too long reworking templer such that it is now written in a modular fashion and supports plugins. The documentation has been overhauled.

The only feedback I received was that it should support inline perl - so I added that this morning via a new formatter plugin:

Title: This is my page title
Format: perl
Name: Steve
----
This is my page.  It has inline perl:

   The sum of 1 + 5 is { 1 + 5 }

This page was written by { $name }

ObQuote: "She even attacked a mime. Just found out about it. Seems the mime had been reluctant to talk. " - Hexed

| No comments

 

Debian is missing a tool, want to write it?

14 June 2013 21:50

Seeing this piece in the news, about how Debian-Multimedia.org is now unsafe, I was reminded we don't have a tool to manipulate sources.list entries.

For example:

$ apt-sources list
..
deb http://ftp.uk.debian.org/debian/ squeeze main non-free contrib
deb-src http://ftp.uk.debian.org/debian/ squeeze main

deb http://security.debian.org/ squeeze/updates main
deb-src http://security.debian.org/ squeeze/updates main
..

How about listing only my repos?

$ apt-sources list steve.org.uk
deb-src http://packages.steve.org.uk/firefox-wrapper/squeeze/ ./
deb     http://packages.steve.org.uk/firefox-wrapper/squeeze/ ./
deb     http://packages.steve.org.uk/meta/squeeze/ ./
deb-src http://packages.steve.org.uk/meta/squeeze/ ./
deb-src http://packages.steve.org.uk/minidlna/squeeze/ ./
deb     http://packages.steve.org.uk/minidlna/squeeze/ ./

Now add in a command to delete lines matching a given pattern:

# apt-sources delete debian-multimedia.org

Doesn't that seem like a tool that should exist?

I've added this quick hack to this repository which you can submit pull requests against, or use as a base.
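
For illustration the "list" sub-command needs little more than this - a sketch, assuming the standard /etc/apt locations:

#!/bin/sh
# apt-sources list [pattern] - show active deb/deb-src lines, optionally filtered.
PATTERN=${1:-.}
cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null | \
  grep -E '^[[:space:]]*deb(-src)?[[:space:]]' | \
  grep -- "${PATTERN}"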

TODO: Write the "add" handler. Neaten.

Ever felt jealous that Ubuntu users can add PPAs? Now's your chance to do something like this:

# apt-sources add "deb http://packages.steve.org.uk/lumail/wheezy/ ./"

| 11 comments

 

So I have a new desktop..

29 June 2013 21:50

So I have a new desktop computer. I installed Wheezy on it via a USB stick, and everything worked. All the hardware. Yay. I guess we take it for granted when things like sound, disks, and network cards just work these days. I remember fighting with distros in the past, where such things were not necessarily straightforward.

The only minor complication is the graphics card. I bought a cheap/random GeForce card for the new machine (£30):

$ lspci -nn | grep VGA
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF119 [GeForce GT 610] [10de:104a] (rev a1)

Booting up I get a working X.org and GNOME 3.x, but the open graphics driver is "too bad" so I get fallback GNOME, with "Applications" & "Places" menus.

Installing the proprietary driver gave me a full GNOME 3.x experience. But I didn't like it so for the moment I'm running:

  • GNOME fallback mode.
  • Bluetile.
  • Open (nvidia) drivers only.

The plan was to install awesome, or similar, but I'm just a creature of habit and I'm still cloning git/mercurial repos and selectively restoring backups.

My old desktop has been given to my partner to replace the EeeeeePC she's been using for the past year.

I'll fettle over the weekend until I'm back up and running properly; but for the moment I'm good. All my videos/music are ported across. I can print, and I have access to the repos I'm currently working on. (Mostly lumail which will have a new release over the weekend.)

| 4 comments

 

I understand volunteering is hard

5 October 2013 21:50

The tail end of this week was mostly spoiled by the discovery that libbeanstalkclient-ruby was not included in Wheezy.

Apparently it was removed because the maintainer had no time, and there were no reverse dependencies - #650308.

Debian maintainers really need to appreciate that having no official reverse dependencies doesn't mean a package is unused.

Last year I needed to redesign our company's monitoring software, because we ran out of options that scaled well. I came up with the (obvious) solution:

  • Have a central queue containing jobs to process.
    • e.g. Run a ping-test on host1.example.com
    • e.g. Run an SSH-probe on host99.example.com
    • e.g. Fetch a web-page from https://example3.net/ and test it has some text or a given HTTP status code.
    • (About 15 different test-types are available).
  • Have N workers each pull one job from the queue, execute it, and send the results somewhere.

I chose beanstalkd for my central queue precisely because it was packaged for Debian, had a client library I could use, and seemed to be a good fit. It was a good fit; a year on we're still running around 5000 tests every minute with 10 workers.

The monitoring tool is called Custodian, and I think I've mentioned it before here and on the company blog.

It looks like we'll need to re-package the Ruby beanstalk client, and distribute it alongside our project now. That's not ideal, but also not a huge amount of work.

In summary? Debian you're awesome. But libraries shouldn't be removed unless it can't be helped, because you have more users than you know.

| 4 comments

 

Sad times

10 February 2014 21:50

There are times when I'm very proud of the Debian project, the developers, the contributors, the bug-reporters, even the users.

There are times when I'm less impressed.

These days I guess I'm not qualified to comment, being an ex-developer, but I still am disappointed.

Part of me wants to rejoin the project, to see if I can help. The other part is thinking there are other choices, maybe I should look at them.

Conflict is bad.

Being conflicted is worse.

| 6 comments

 

And so it begins ...

3 July 2014 21:50

1. This weekend I will apply to rejoin the Debian project, as a developer.

2. In the meantime I've begun releasing some code which powers the git-based DNS hosting site/service.

3. This is the end of my list.

4. I lied. This is the end of my list. Powers of two, baby.

| 6 comments

 

So what can I do for Debian?

16 July 2014 21:50

So I recently announced my intention to rejoin the Debian project, having been a member between 2002 & 2011 (inclusive).

In the past I resigned mostly due to lack of time, and what has changed is that these days I have more free time - primarily because my wife works in accident & emergency and has "funny shifts". This means we spend many days and evenings together, then she might work 8pm-8am for three nights in a row, which then becomes Steve-time, and can involve lots of time browsing reddit, coding obsessively, and watching bad TV (currently watching "Lost Girl". Shades of Buffy/Blood Ties/similar. Not bad, but not great.)

My NM-progress can be tracked here, and once accepted I have a plan for my activities:

  • I will minimally audit every single package running upon any of my personal systems.
  • I will audit as many of the ITP-packages I can manage.
  • I may, or may not, actually package software.

I believe this will be useful, even though there will be limits - I've no patience for PHP and will just ignore it, along with its ecosystem, for example.

As progress today I reported #754899 / CVE-2014-4978 against Rawstudio, and discussed some issues with ITP: tiptop (the program seems semi-expected to be installed setuid(0), but if it is then it will allow arbitrary files to be truncated/overwritten via "tiptop -W /path/to/file").

(ObRandom still waiting for a CVE identifier for #749846/TS-2867..)

And now sleep.

| 4 comments

 

Draft post - 31 July 2014

31 July 2014 21:50

Yesterday I spent a while looking at the Debian code search site, an enormously useful service allowing you to search the code contained in the Debian archives.

The end result was three trivial bug reports:

#756565 - lives

Insecure usage of temporary files.

A CVE-identifier should be requested.

#756566 - libxml-dt-perl

Insecure usage of temporary files.

A CVE-identifier has been requested by Salvatore Bonaccorso, and will be added to my security log once allocated.

#756600 - xcfa

Insecure usage of temporary files.

A CVE-identifier should be requested.

Finding these bugs was a simple matter of using the code-search to look for patterns like "system.*>.*/tmp".

Perhaps tomorrow somebody else would like to have a go at looking for backtick-related operations ("`"), or the usage of popen.

Tomorrow I will personally be swimming in a loch, which is more fun than wading in code..

| 2 comments

 

Applications updating & phoning home

16 September 2014 21:50

Personally I believe that any application packaged for Debian should not phone home, attempt to download plugins over HTTP at run-time, or update itself.

On that basis I've filed #761828.

As a project we have guidelines for what constitutes a "serious" bug, which generally boil down to a package containing a security issue, causing data-loss, or being unusable.

I'd like to propose that these kinds of tracking "things" are equally bad. If consensus could be reached that would be a good thing for the freedom of our users.

(Oops, I slipped into "us", "our users"; I'm just an outsider looking in. Mostly.)

| 4 comments

 

Planning how to configure my next desktop

6 November 2014 21:50

I recently setup a bunch of IPv6-only accessible hosts, which I mentioned in my previous blog post.

In the end I got them talking to the IPv4/legacy world via the installation of an OpenVPN server - they connect over IPv6, get a private 10.0.0.0/24 IP address, and that is masqueraded via the OpenVPN-gateway.
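
The gateway-side of that is only a couple of commands - the interface name here is illustrative:

# Enable forwarding, then NAT the VPN subnet out via the legacy interface.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE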

But the other thing I've been planning recently is how to configure my next desktop system. I generally do all development, surfing, etc, on one desktop system. I use virtual desktops to organize things, and I have a simple scripting utility to juggle windows around into the correct virtual-desktop as they're launched.

Planning a replacement desktop means installing a fresh desktop, then getting all the software working again. These days I'd probably use docker images to do development within, along with a few virtual machines (such as the pbuilder host I used to release all my Debian packages).

But there are still niggles. I'd like to keep the base system lean, with few packages, but you can't run xine remotely; similarly I need mpd/sonata for listening to music, emacs for local stuff, etc, etc.

In short there is always the tendency to install yet-another package, service, or application on the desktop, which makes migration a pain.

I'm not sure I could easily avoid that, but it is worth thinking about. I guess I could configure a puppet/slaughter/cfengine host and use that to install the desktop - but I've always done desktops "manually" and servers "magically" so it's a bit of a change in thinking.

| 2 comments

 

How could you rationally fork Debian?

9 November 2014 21:50

The topic of Debian forks has come up a lot recently, and as time goes on I've actually started considering the matter seriously: How would you fork Debian?

The biggest stumbling block is that the Debian distribution contains thousands of packages, which are maintained by thousands of developers. A small team has virtually no hope of keeping up to date, importing changes, dealing with bug-reports, etc. Instead you have to pick your battle and decide what you care about.

This is why Ubuntu split things into "main" and "universe". Because this way they didn't have to deal with bug reports - instead they could just say "Try again in six months. Stuff from that repository isn't supported. Sorry!"

So if you were going to split the Debian project into "supported" and "unsupported", what would you use as the dividing line? I think the only sensible approach would be:

  • Base + Server stuff.
  • The rest.

On that basis you'd immediately drop the support burden of GNOME, KDE, Firefox, Xine, etc. All the big, complex, and user-friendly stuff would just get thrown away. What you'd end up with would be a Debian-Server fork, or derivative.

Things you'd package and care about would include:

  • The base system.
  • The kernel.
  • SSHD.
  • Apache / Nginx / thttpd / lighttpd / etc
  • PHP / Perl / Ruby / Python / etc
  • Jabberd / ircd / rsync / etc
  • MySQL / Postgres / Redis / MariaDB / etc.

Would that be a useful split? I suspect it would. It would also be manageable by a reasonably small team.

That split would also mean if you were keen on dropping any particular init-system you'd not have an unduly difficult job - your server wouldn't be running GNOME, for example.

Of course if you're thinking of integrating a kernel and server-only stuff then you might instead prefer a BSD-based distribution. But if you did that you'd miss out on Docker. Hrm.

| 10 comments

 

An experiment in (re)building Debian

20 November 2014 21:50

I've rebuilt many Debian packages over the years, largely to fix bugs which affected me, or to add features which didn't make the cut in various releases. For example I made a package of fabric available for Wheezy, since it wasn't in the release. (Happily in that case a wheezy-backport became available. Similar cases involved repackaging gtk-gnutella when the protocol changed and the official package in the lenny release no longer worked.)

I generally release a lot of my own software as Debian packages, although I'll admit I've started switching to publishing Perl-based projects on CPAN instead - from which they can be debianized via dh-make-perl.

One thing I've not done for many years is a mass-rebuild of Debian packages. I did that once upon a time when I was trying to push for the stack-smashing-protection inclusion all the way back in 2006.

Having had a few interesting emails this past week I decided to do the job for real. I picked a random server of mine, rsync.io, which stores backups, and decided to rebuild it using "my own" packages.

The host has about 300 packages installed upon it:

root@rsync ~ # dpkg --list | grep ^ii | wc -l
294

I got the source to every package, patched the changelog to bump the version, and rebuilt every package from source. That took about three hours.
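
The loop itself was nothing clever - roughly the following, assuming deb-src entries are present, and with all error-handling omitted. ("dch --local skx" is what produces the "skx1" suffix.)

#!/bin/sh
# For every installed package: fetch the source, append a local
# version-suffix, then rebuild it.
for pkg in $(dpkg --list | awk '/^ii/ {print $2}'); do
    apt-get source ${pkg} || continue
    dir=$(find . -maxdepth 1 -type d -name "${pkg}-*" | head -n1)
    ( cd "${dir}" &&
      dch --local skx "Local rebuild." &&
      apt-get build-dep --yes ${pkg} &&
      dpkg-buildpackage -us -uc )
done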

Every package has a "skx1" suffix now, and all the build-dependencies were also determined by magic and rebuilt:

root@rsync ~ # dpkg --list | grep ^ii | awk '{ print $2 " " $3}'| head -n 4
acpi 1.6-1skx1
acpi-support-base 0.140-5+deb7u3skx1
acpid 1:2.0.16-1+deb7u1skx1
adduser 3.113+nmu3skx1

The process was pretty quick once I started getting more and more of the packages built. The only shortcut was not explicitly updating the dependencies to rely upon my updates. For example bash has a Debian control file that contains:

Depends: base-files (>= 2.1.12), debianutils (>= 2.15)

That should have been updated to say:

Depends: base-files (>= 2.1.12skx1), debianutils (>= 2.15skx1)

However I didn't do that, because I suspect if I did want to do this decently, sharing the source-trees and the generated packages, the way to go would not be messing about with Debian versions; instead I'd create a new Debian release "alpha-apple", "beta-banana", "crunchy-carrot", "dying-dragonfruit", "easy-elderberry", or similar.

In conclusion: Importing Debian packages into git, much like Ubuntu did with bzr, is a fun project, and it doesn't take much to mass-rebuild if you're not making huge changes. Whether it is worth doing is an entirely different question of course.

| 2 comments

 

I eventually installed Debian on a new desktop.

7 December 2014 21:50

Recently I built a new desktop system. The highlights of the hardware are a pair of 512Gb SSDs, which were to be configured in software RAID for additional speed and reliability (I'm paranoid that they'd suddenly stop working one day). From power-on to the (GNOME) login-prompt takes approximately 10 seconds.

I had to fight with the Debian installer to get the beast working though, as only the Jessie Beta 2 installer would recognize the SSDs, which are Crucial MX100 devices. My local PXE-setup deploys both the daily testing installer and the wheezy installer; both failed to recognize the drives at all.

The biggest pain was installing grub on the devices. I think this was mostly due to UEFI things I didn't understand. I created spare partitions for it, and messed around with grub-efi, but ultimately disabled as much of the "fancy modern stuff" as I could in the BIOS, leaving me with AHCI for the SATA SSDs, and then things worked pretty well. After working through the installer about seven times I also simplified things by partitioning and installing on only a single drive, and only configured the RAID once I had a bootable and working system.

(If you've never done that it's pretty fun. Install on one drive. Ignore the other. Then configure the second drive as part of a RAID array, but mark the other half as missing/failed/dead. Once you've done that you can create filesystems on the various /dev/mdX devices, rsync the data across, and once you boot from the system with root=/dev/md2 you can add the first drive as the missing half. Do it patiently and carefully and it'll just work :)
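
In concrete terms the dance is roughly this - device names are illustrative, and you'll want to update grub and the initramfs before the reboot:

# Build a degraded RAID1 array using only the second drive.
mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb2
mkfs.ext4 /dev/md2

# Copy the running system across.
mount /dev/md2 /mnt
rsync -aHx / /mnt/

# After booting with root=/dev/md2, add the first drive's partition;
# the array then rebuilds itself in the background.
mdadm --manage /dev/md2 --add /dev/sda2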

There were some niggles though:

  • Jessie didn't give me the option of the gnome desktop I know/love. So I had to install gnome-session-fallback. I also had to mess around with ~/.config/autostart because the gnome-session-properties command (which should let you tweak the auto-starting applications) doesn't exist anymore.

  • Setting up custom keyboard-shortcuts doesn't seem to work.

  • I had to use gnome-tweak-tool to get icons, etc, on my desktop.

Because I assume the SSDs will just die at some point, and probably both on the same day, I installed and configured obnam to run backups. There is more to it (testing and similar), but this is the core of my backup script:

#!/bin/sh

# backup "/" - minus some exceptions.
obnam backup -r /media/backups/storage --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/media /

# keep files for various periods
obnam forget --keep="30d,8w,8m" --repository /media/backups/storage

| 9 comments

 

skx-www upgraded to jessie

18 April 2015 21:50

Today I upgraded my main web-host to the Jessie release of Debian GNU/Linux.

I performed the upgrade by changing wheezy to jessie in the sources.list file, then ran:

apt-get update
apt-get dist-upgrade

For some reason this didn't upgrade my kernel, which remained the 3.2.x version. That failed to boot, due to some udev/systemd issues (lots of "waiting for job: udev /dev/vda", etc, etc). To fix this I logged into my KVM-host, chrooted into the disk image (which I mounted via the use of kpartx), and installed the 3.16.x kernel, before rebooting into that.
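
For reference the recovery on the KVM-host was approximately this - the volume and partition names are illustrative:

# Expose the guest's partitions, chroot in, and install a newer kernel.
kpartx -av /dev/vg0/skx-www
mount /dev/mapper/vg0-skx--www1 /mnt
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
chroot /mnt apt-get install --yes linux-image-amd64
umount /mnt/proc /mnt/dev /mnt
kpartx -d /dev/vg0/skx-www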

All my websites seemed to be OK, but I made some changes regardless. (This was mostly for "neatness", using Debian packages instead of gems, and installing the attic package rather than keeping the source-install I'd made to /opt/attic.)

The only surprise was the significant upgrade of the Net::DNS perl-module. Nothing that a few minutes' work didn't fix.

Now that I've upgraded, the SSL-issue I had with redirections is no longer present. So it was a worthwhile thing to do.

| No comments

 

A weekend of migrations

4 May 2015 21:50

This weekend has been all about migrations:

Host Migrations

I've migrated several more systems to the Jessie release of Debian GNU/Linux. No major surprises, and now I'm in a good state.

I have 18 hosts, and now 16 of them are running Jessie. One of them I won't touch for a while, and the other is a KVM-host which runs about 8 guests - so I won't upgrade that for a while (because I want to schedule the shutdown of the guests for the host-reboot).

Password Migrations

I've started migrating my passwords to pass, which is a simple shell wrapper around GPG. I generated a new password-managing key, and began moving entries across.
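
Getting started is pleasantly small - the key-ID and entry names here are illustrative:

# Initialise the store against the new password-managing key.
pass init "Password Store Key <[email protected]>"

# Add an entry, then later copy it to the clipboard.
pass insert banking/example-bank
pass -c banking/example-bank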

I dislike that account-names are stored in plaintext, but that seems known and unlikely to be fixed.

I've "solved" the problem by dividing all my accounts into "Those that I wish to disclose post-death" (i.e. "banking", "amazon", "facebook", etc, etc), and those that are "never to be shared". The former are migrating, the latter are not.

(Yeah I'm thinking about estates at the moment, near-death things have that effect!)

| No comments

 

The Jessie 8.2 point-release broke for me

7 September 2015 21:50

I have about 18 personal hosts, all running the Jessie release of Debian GNU/Linux. To keep up with security updates I use unattended-upgrades.

The intention is that every day, via cron, the system will look for updates and apply them. Although I mostly expect it to handle security updates I also have it configured such that point-releases will be applied by magic too.
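
The point-release part means extending the origins unattended-upgrades acts upon; on Jessie that's something along these lines in /etc/apt/apt.conf.d/50unattended-upgrades (the second pattern is the addition):

Unattended-Upgrade::Origins-Pattern {
        // Security updates.
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
        // Point-releases of stable.
        "origin=Debian,codename=${distro_codename},label=Debian";
};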

Unfortunately this weekend, with the 8.2 release, things broke in a significant way - the cron daemon was left in a broken state, such that all cronjobs failed to execute.

I was amazed that nobody had reported a bug, as several people on twitter had the same experience as me, but today I read through a lot of bug-reports and discovered that #783683 is to blame:

  • Old-cron runs.
  • Scheduled unattended-upgrades runs.
  • This causes cron to restart.
  • When cron restarts the jobs it was running are killed.
  • The system is in a broken state.

The solution:

# dpkg --configure -a
# apt-get upgrade

I guess the good news is I spotted it promptly. With the benefit of hindsight the bug report does warn of this as a concern, but I guess there wasn't a great solution.

Anyway I hope others see this, or otherwise spot the problem themselves.

 

In unrelated news the seaweedfs file-store I previously introduced is looking more and more attractive to me.

I reported a documentation-related bug which was promptly handled, even though it turned out I was wrong, and I contributed CIDR support for whitelisting hosts, which was merged.

I've got a two-node "cluster" setup at the moment, and will be expanding that shortly.

I've been doing a lot of little toy-projects in Go recently. This weekend I was mostly playing with the nats.io message-bus, and tying it together with sinatra.

| No comments

 

Finding and reporting trivial security issues

22 December 2015 21:50

This week I'll be mostly doing drive-by bug-reporting.

As with last year we start by using the Debian Code Search to look for obviously broken patterns such as "system.*>.*/tmp/.*"

Once we find a fun match we examine the code and then report the bugs we find. Today that was stalin, which runs some fantastic things on startup:

(system "uname -m >/tmp/QobiScheme.tmp")
(system "rm -f /tmp/QobiScheme.tmp"))

We can exploit this like so:

$ ln -s /home/steve/HACK /tmp/QobiScheme.tmp
$ ls -l /home/steve/HACK
ls: cannot access /home/steve/HACK: No such file or directory

Now we run the script:

$ cd /tmp/stalin-0.11/benchmarks
$ ./make-hello

And we see this:

$ ls -l /home/steve/HACK
-rw-r--r-- 1 steve steve 6 Dec 22 08:30 /home/steve/HACK

For future reference the lsat package looks horrifically bad:

  • it writes multiple times to /tmp/lsat1.lsat, and although it tries to detect races I'm not convinced. Something to look at in the future.

| No comments

 

Getting ready for Stretch

25 May 2017 21:50

I run about 17 servers. Of those about six are very personal and the rest are a small cluster which is used for a single website. (Partly because the code is old and in some ways a bit badly designed, partly because "clustering!", "high availability!", "learning!", "fun!" - seriously I had a lot of fun putting together a fault-tolerant deployment with haproxy, ucarp, etc, etc. If I were paying for it the site would be both retired and static!)

I've started the process of upgrading to stretch by picking a bunch of hosts that do things I could live without for a few days - in case there were big problems, or I needed to restore from backups.

So far I've upgraded:

  • master.steve
    • This is a puppet-master, so while it is important killing it wouldn't be too bad - after all my nodes are currently setup properly, right?
    • Upgrading this host changed the puppet-server from 3.x to 4.x.
    • That meant I had to upgrade all my client-systems, because puppet 3.x won't talk to a 4.x master.
    • Happily jessie-backports contains a recent puppet-client.
    • It also meant I had to rework a lot of my recipes, in small ways.
  • builder.steve
    • This is a host I use to build packages upon, via pbuilder.
    • I have chroots setup for wheezy, jessie, and stretch, each in i386 and amd64 flavours. (Creating one is shown in the sketch after this list.)
  • git.steve
    • This is a host which stores my git-repositories, via gitbucket.
    • While it is an important host in terms of functionality, the software it needs is very basic: nginx proxies to a java application which runs on localhost:XXXX, with some caching magic happening to deal with abusive clients.
    • I do keep considering using gitlab, because I like its runners, etc. But that is pretty resource intensive.
    • On the other hand if I did switch I could drop my builder.steve host, which might mean I'd come out ahead in terms of used resources.
  • leave.steve
    • Torrent-box.
    • Upgrading was painless; I only run rtorrent, and a simple object-storage system of my own devising.
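
Creating each of the pbuilder chroots mentioned above is a one-off job along these lines - the distribution, architecture, and path vary per chroot:

sudo pbuilder create --distribution stretch --architecture i386 \
    --basetgz /var/cache/pbuilder/stretch-i386.tgz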

All upgrades were painless, with only one real surprise - the attic-backup software was removed from Debian.

Although I do intend to retry using Lars' excellent obnam in the near future, pragmatically I wanted to stick with what I'm familiar with. Borg backup is a fork of attic I've been aware of for a long time, but I never quite had a reason to try it out. Setting it up pretty much just meant editing my backup-script:

s/attic/borg/g

Once I did that, and created some new destinations all was good:

$ borg init /backups/git.steve.org.uk.borg/
$ borg init /backups/master.steve.org.uk.borg/
$ ..

Upgrading other hosts, for example my website(s), and my email-box, will be more complex and fiddly. On that basis they will definitely wait for the formal stretch release.

But having a couple of hosts running the frozen distribution is good for testing, and to let me see what is new.

| 1 comment

 

Upgraded my first host to buster

9 July 2019 12:01

I upgraded the first of my personal machines to Debian's new stable release, buster, yesterday. So far two minor niggles, but nothing major.

My hosts are controlled, sometimes, by puppet. The puppet-master is running stretch and has puppet 4.8.2 installed. After upgrading my test-host to the new stable I discovered it has puppet 5.5 installed:

root@git ~ # puppet --version
5.5.10

I was not sure if there would be compatibility problems, but after reading the release notes nothing jumped out. Things seemed to work, once I fixed this immediate problem:

 # puppet agent --test
 Warning: Unable to fetch my node definition, but the agent run will continue:
 Warning: SSL_connect returned=1 errno=0 state=error: dh key too small
 Info: Retrieving pluginfacts
 ..

This error-message was repeated multiple times:

SSL_connect returned=1 errno=0 state=error: dh key too small

To fix this, comment out the line in /etc/ssl/openssl.cnf which reads:

CipherString = DEFAULT@SECLEVEL=2

The second problem was that I use borg to run backups, once per day on most systems, and twice per day on others. I have an invocation which looks like this:

borg create ${flags} --compression=zlib  --stats ${dest}${self}::$(date +%Y-%m-%d-%H:%M:%S) \
   --exclude=/proc \
   --exclude=/swap.file \
   --exclude=/sys  \
   --exclude=/run  \
   --exclude=/dev  \
   --exclude=/var/log \
   /

That started to fail:

borg: error: unrecognized arguments: /

I fixed this by re-ordering the arguments so that the command ends with "destination path", and by changing "--exclude=x" to "--exclude x":

borg create ${flags} --compression=zlib  --stats \
   --exclude /proc \
   --exclude /swap.file \
   --exclude /sys  \
   --exclude /run  \
   --exclude /dev  \
   --exclude /var/log \
   ${dest}${self}::$(date +%Y-%m-%d-%H:%M:%S)  /

That approach works on my old and new hosts.

I'll leave this single system updated for a few more days to see what else is broken, if anything. Then I'll upgrade them in turn.

Good job!

| No comments

 

Initial server migration complete..

28 January 2020 12:20

So recently I talked about how I was moving my email to a paid GSuite account; that process has now completed.

To recap I've been paying approximately €65/month for a dedicated host from Hetzner:

  • 2 x 2Tb drives.
  • 32Gb RAM.
  • 8-core CPU.

To be honest the server itself has been fine, but the invoice is a little horrific regardless:

  • SB31 - €26.05
  • Additional subnet /27 - €26.89

I'm actually paying more for the IP addresses than for the server! Anyway I was running a bunch of virtual machines on this host:

  • mail
    • Exim4 + Dovecot + SSH
    • I'd SSH to this host, daily, to read mail with my console-based mail-client, etc.
  • www
    • Hosted websites.
    • Each different host would run an instance of lighttpd, serving on localhost:XXX running under a dedicated UID.
    • Then Apache would proxy to the right one, and handle SSL.
  • master
    • Puppet server, and VPN-host.
  • git
  • ..
    • Bunch more servers, nine total.

My plan is to basically cut down and kill 99% of these servers, and now I've made the initial pass:

I've now bought three virtual machines, and juggled stuff around upon them. I now have:

  • debian - €3.00/month
  • dns - €3.00/month
    • This hosts my commercial DNS thing
    • Admin overhead is essentially zero.
    • Profit is essentially non-zero :)
  • shell - €6.00/month
    • The few dynamic sites I maintain were moved here, all running as www-data behind Apache. Meh.
    • This is where I run cron-jobs to invoke rss2email, my google mail filtering hack.
    • This is also a VPN-provider, providing a secure link to my home desktop, and the other servers.

The end result is that my hosting bill has gone down from being around €50/month to about €20/month (€6/month for gsuite hosting), and I have far fewer hosts to maintain, update, manage, and otherwise care about.

Since I'm all cloudy now I have backups via the provider, as well as those maintained by rsync.net. I'll need to rebuild the shell host over the next few weeks as I mostly shuffled stuff around in-place in an ad-hoc fashion, but the two other boxes were deployed entirely via Ansible, and Deployr. I made the decision early on that these hosts should be trivial to relocate and they have been!

All static-sites such as my blog, my vanity site and similar have been moved to netlify. I lose the ability to view access-logs, but I'd already removed analytics because I just don't care. I've also lost the ability to have custom 404-pages, etc. But the fact that I don't have to maintain a host just to serve static pages is great. I was considering using AWS to host these sites (i.e. S3) but chose against it in the end as it is a bit complex if you want to use cloudfront/cloudflare to avoid bandwidth-based billing surprises.

I dropped MX records from a bunch of domains, so now I only receive email at steve.fi, steve.org.uk, and to a lesser extent dns-api.com. That goes to Google. Migrating to GSuite was pretty painless although there was a surprise: I figured I'd setup a single user, then use aliases to handle the mail such that:

  • debian@example -> steve
  • facebook@example -> steve
  • webmaster@example -> steve

All told I have about 90 distinct local-parts configured in my old Exim setup. Turns out that GSuite has a limit of like 20 aliases per user. Happily you can achieve the same effect with address maps. If you add an address map you can have about 4000 distinct local-parts, and reject anything else. (I can't think of anything worse than having wildcard handling; I've been hit by too many bounce-attacks in the past!)

Oh, and I guess for completeness I should say I also have a single off-site box hosted by Scaleway for €5/month. This runs monitoring via overseer and notification via purppura. Monitoring includes testing that websites are up, that responses contain a specific piece of text, DNS records resolve to expected values, SSL certificates haven't expired, & etc.

Monitoring is worth paying for. I'd be tempted to charge people to use it, but I suspect nobody would pay. It's a cute setup and very flexible and reliable. I've been pondering adding a scripting language to the notification - since at the moment it alerts me via Pushover, Email, and SMS-messages. Perhaps I should just settle on one! Having a scripting language would allow me to use different mechanisms for different services, and severities.

Then again maybe I should just pay for pingdom, or similar? I have about 250 tests which run every two minutes. That usually exceeds most services' free/cheap offerings..

| 3 comments