Entries tagged todo

You have to go on living

11 June 2007 21:50

Debconf7 TODO:

These are my personal goals for the DebCamp week:

  • Develop xen-tools to fix bugs reported by Henning.
  • Unify debootstrap.
  • Get involved in discussions of Debian init-systems.

The second item is the one that I'm most interested in right now, but I'll leave it a day to see if there is any useful feedback or any massive objections.

The debootstrap problem

Right now there are two versions of debootstrap - the "Debian one" and the "Ubuntu one".

The Ubuntu debootstrap package has support for installing additional suites: dapper, feisty, etc. (My current understanding is that the Ubuntu debootstrap can still install Etch, Sarge, etc.)

Right now if you want to bootstrap an Ubuntu system on a Debian host you're out of luck unless you install the Ubuntu debootstrap package.

There is no real reason why this should be the case. Either distribution should be able to install the other.

Why I care

I wrote/maintain xen-tools. This package allows you to create Xen guests of different distributions. Most of the time the tool will install a distribution by invoking "debootstrap ...".
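
For reference, a typical invocation looks something like this (the target directory and mirror here are illustrative):

debootstrap etch /home/xen/domains/example.com/disk http://ftp.debian.org/debian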

It would be nice if this could work upon a plain Debian system such that installing Dapper, Feisty, etc., was supported.

What We Can't Solve

There are problems with debootstrap which I'm not going to attempt to solve. The most obvious one is that Sarge's debootstrap cannot install Etch.

There are a few open bugs in the Debian BTS which I will triage though, since some of them are safe to close.

The Approach

There are a couple of approaches for this:

The hack

Download the source to Ubuntu's package. Test that it can install Sarge, Etch & Lenny. Then upload it to Debian.

What I'll do

I'll do three things:

  • Triage open bugs on debootstrap.
  • I'll compare the two scripts + manpages, to see if they've diverged.
  • I'll move the Ubuntu scripts into the Debian package.

The neat way forward

Split debootstrap into:

  • debootstrap
    • Compatibility package, to pull in the next three:
  • debootstrap-common
    • The tool.
  • debootstrap-debian
    • Contains the scripts/support for installing Sarge, Etch & Lenny.
  • debootstrap-ubuntu
    • Contains the scripts/support for installing Dapper, Feisty, etc.

This second approach should be straightforward. The first thing to do is to test that the actual debootstrap scripts differ little between the two distributions. That should be simple enough.

The next thing is to create the additional packages.

Acceptance?

For this work to be useful it needs support from the maintainers of the Debian & Ubuntu packages. (I guess mostly the Debian maintainers actually.)

Progress

The good news is that between the version of debootstrap in Ubuntu right now, debootstrap-0.3.3.3ubuntu4, and that in Debian sid, debootstrap-0.3.3.3, the code has only one minor change. There is just a minor typo-fix in the manpage too.

So I can move the scripts into the package trivially...

| No comments

 

In World War II the average age of the combat soldier was 26

13 July 2007 21:50

Bootstrapping non-Debian distributions sucks.

The only available tool appears to be rpmstrap which quite frankly fails more often than it works.

Since my xen-tools project needs to carry out this kind of operation I've been pondering the idea of writing a tool which will install CentOS/Fedora/SuSE into a directory, in a similar fashion to debootstrap.

If there's nothing out there that you can point to, then I think that will be my next project.

| No comments

 

Do you remember a time when fear

14 July 2007 21:50

I can successfully bootstrap Fedora Core 6 & 7 - to the extent that RPM and Yum both work.

This takes on the order of 70Mb and around 2 minutes - assuming the caching mechanism works.

Now I need :

  1. A cute name.
  2. To abstract the common parts of the code somehow.
  3. To get started on SuSE + CentOS.

Today: Hacking. Tomorrow: Security work.

| No comments

 

No, I don't want your number

23 November 2007 21:50

I'm still in the middle of a quandary with regards to revision control.

90% of my open code is hosted via CVS at a central site.

I wish to migrate away from CVS in the very near future, and having ummed and ahhed for a while I've picked mercurial as my system of choice. There is extensive documentation, and it does everything I believe I need.

The close runner-up was git, but on balance I've decided to choose mercurial as it wins in a few respects.
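
As for the mechanics, the conversion itself should be simple via mercurial's bundled convert extension - roughly the following, with the repository paths being illustrative (convert accepts a CVS sandbox as its source):

# enable the extension once, in ~/.hgrc:
#   [extensions]
#   convert =
cvs -d :pserver:anonymous@cvs.example.org:/cvs checkout xen-tools
hg convert xen-tools xen-tools-hg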

Now the plan. I have two options:

  • Leave each project in one central site.
  • Migrate project $foo to its own location.

e.g. My xen-tools could be hosted at mercurial.xen-tools.org, my blog compiler could live at mercurial.steve.org.uk.

Alternatively I could just leave the one site in place, ignoring the fact that the domain name is now inappropriate.

The problem? I can't decide which approach to go for. Both have pluses and minuses.

Suggestions or rationales welcome - but no holy wars on why any particular revision control system is best...

I guess ultimately it matters little, and short of mass-editing links it's 50/50.

| No comments

 

You can't hide the knives

6 December 2007 21:50

After recently intending to drop the Planet Debian search, and receiving complaints that it is still useful, it looks like there is a good solution.

The code will be made live and official upon Planet Debian in the near future.

The DSA team promptly installed the SQLite3 package for me, and I've ported the code to work with it. Once Apache is updated to allow me to execute CGI scripts it'll be moved over, and I'll export the current data to the new database.

In other news I'm going to file an ITP bug against asql as I find myself using it more and more...

| No comments

 

I hear it every day

17 January 2008 21:50

It bothers me that my Tor usage is less than I'd like because it is just so fiddly.

When it comes to privacy I want to keep things simple. I want to use tor, but I don't want to use it for things that aren't sane.

In practice that means I want to use tor for a small amount of browsing:

  • When the host is a.com, b.com, & c.com
  • When the traffic is not over SSL.

To do that I have to install privoxy, and use that with a configuration file like this:

# don't forward by default.
forward-socks4   /    .
# don't forward by default, even more so for HTTPS
forward-socks4   :443 .

# but we do want tor on these three sites:
forward-socks4   a.com/       127.0.0.1:9050 .
forward-socks4   b.com/       127.0.0.1:9050 .
forward-socks4   c.com/       127.0.0.1:9050 .

I'm using absolutely nothing else in my Privoxy configuration, so it seems like overkill.

I'd love to hear about a simple rule-based proxy-chaining tool - if there is one out there then please let me know, lazyweb.

If not it shouldn't be too hard to write one with the Net::Proxy & Net::Socks module(s), configured via something like this:

<global>
  listen 1234
  no-proxy
</global>

<sites>  
  hostname one.com
  port != 443
  proxy socks localhost 8050
</sites>

<sites>  
  hostname two.com
  port != 443
  proxy socks localhost 8050
</sites>

<sites>
  hostname foo.com
  port = 80
  proxy localhost 8000
</sites>
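
For reference, Net::Proxy's basic TCP forwarding looks roughly like the following (adapted from the module's synopsis); the per-site rules and the actual SOCKS handshake sketched above would still need custom connectors layered on top:

use strict;
use warnings;
use Net::Proxy;

# Blindly forward connections from a local port to an upstream
# host/port; the rule-matching layer would choose the "out" side.
my $proxy = Net::Proxy->new(
    {
        in  => { type => 'tcp', host => 'localhost',  port => 1234 },
        out => { type => 'tcp', host => 'remotehost', port => 9999 },
    }
);
$proxy->register();

Net::Proxy->mainloop();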

| 7 comments

 

A good cockerel always points north

11 February 2008 21:50

I spent a while yesterday thinking over the software projects that I'm currently interested in. It is a reasonably short list.

At the time I just looked over the packages that I've got installed, and the number of bugs filed against them. I'm a little disappointed to see that the bugfixes that I applied to GNU screen have been mostly ignored.

Still, I have the day off work on Thursday and Friday this week, and will probably spend it releasing the pending advisories I've got in my queue, and then fixing N bugs in a single package.

The alternative is to build a quick GPG-based mailing list manager.

I'd like a simple system which allowed users to subscribe, and only accepted GPG-signed mails. The subscriber could choose to receive their messages either signed (as-is) by the submitter or encrypted to them.

So to join you'd do something like this:

subscribe foo@example.org [encrypted]
--BEGIN PUBLIC KEY--
...
--END PUBLIC KEY--

There is the risk, with a large enough number of users, that a list could DOS the host if it had to encrypt each message for each subscriber. But if submissions were validated as being signed by a user with a known key the risk should be minimal, unless there is a lot of traffic.

The cases are simple, and sketched in code below:

  • foo-subscribe => Add the user to the list, assuming valid key data is found.
  • foo-unsubscribe => Do the reverse.
  • foo:
    • If the message is signed accept and either mail to each recipient, or encrypt on a per-recipient basis.
    • If the message is not signed, or is signed by a non-subscriber, drop it.
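
A dispatcher hung off mail aliases could be as small as this - an illustrative sketch only, assuming clearsigned mail, a pre-populated list keyring, and hand-waving the subscriber database:

#!/usr/bin/perl
# Hypothetical dispatcher: one alias per address, e.g.
#   foo:           "|/usr/local/bin/gpg-list foo"
#   foo-subscribe: "|/usr/local/bin/gpg-list foo-subscribe"
use strict;
use warnings;
use File::Temp qw(tempfile);

my $address = shift or die "Usage: gpg-list <address>\n";

# Spool the incoming message so gpg can read it.
my ( $fh, $file ) = tempfile( UNLINK => 1 );
print {$fh} $_ while <STDIN>;
close($fh);

if ( $address =~ m/-subscribe$/ ) {
    # Import the key block from the body; fail if none is found.
    system( "gpg", "--batch", "--import", $file ) == 0
      or die "No valid key data found\n";
    # ... record the sender in the subscriber database here.
}
elsif ( $address =~ m/-unsubscribe$/ ) {
    # ... remove the sender from the subscriber database here.
}
else {
    # Posts must carry a good signature from a known key;
    # anything else is dropped silently.
    system( "gpg", "--batch", "--verify", $file ) == 0
      or exit 0;
    # ... mail to each subscriber as-is, or encrypt per-recipient
    # with "gpg --batch -e -r <recipient>".
}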

There are some random hacks out there for this, including a mailman patch (did I mention how much I detest mailman yet today?) but nothing recent.

| 1 comment

 

You're not too technical, just ugly, gross ugly

7 May 2008 21:50

Well a brief post about what I've been up to over the past few days.

An alioth project was created for the maintenance of the bash-completion package. I spent about 40 minutes yesterday committing fixes for some of the low-hanging fruit.

I suspect I'll do a little more of that, and then back off. I only started looking at the package because there was a request-for-help bug filed against it. It works well enough for me with some small local additions.

The big decision for the bash-completion project is how to go forwards from the current situation where the project is basically one large monolithic script. Ideally the openssh-client package should contain the completion for ssh, scp, etc. - presumably as a snippet dropped into /etc/bash_completion.d/.

Making that transition will be hard. But interesting.

In other news I submitted a couple of "make-work" patches to the QPSMTPD SMTP proxy - just tidying up minor cosmetic issues. I'm starting to get to the point where I understand the internals pretty well now, which is a good thing!

I love working on QPSMTPD. It rocks. It is basically the core of my antispam service and a real delight to code for. I cannot emphasise that enough - some projects are just so obviously coded properly. Hard to replicate, easy to recognise...

I've been working on my own pre-connection system which is a little more specialised, making use of the Class::Pluggable library - packaged for Debian by Sarah.

(The world -> Pre-Connection/Load-Balancing Proxy -> QPSMTPD -> Exim4. No fragility there then ;)

Finally I made a tweak to the Debian Planet configuration. If you have Javascript disabled you'll no longer see the "Show Author"/"Hide Author" links. This is great for people who use Lynx, Links, or other minimal browsers.

TODO:

I'm still waiting for the javascript project to be set up so that I can work on importing my jQuery package.

I still need to sit down and work through the Apache2 bugs I identified as being simple to fix. I've got it building from SVN now though, so progress is being made!

Finally this weekend I need to sit down and find the time to answer Steve's "Team Questionnaire". Leave it any longer and it'll never get answered. Sigh.

ObQuote: Shooting Fish

| 2 comments

 

Didn't I kill you already?

16 August 2008 21:50

One of the sites that I no longer use, but have fond memories of is dotfiles.com.

It had some pretty coarse categories and allowed you to view other people's configuration files. (I have no idea how the upload worked. Probably email submission, I'd guess.)

I know that my own dotfiles have benefited from seeing other people's snippets.

Sadly it seems that the last upload to their site was back in 2006.

With all the Web2.0 lust around it would seem to be a perfect candidate for reinvention.

We need:

  • The ability to create/delete an account.
  • The ability to upload a file (<100k say)
  • The ability to tag all files with multiple arbitrary labels.
  • Possibly the ability to comment / rate / vote on submissions.
  • The ability to flag uploads as being "spam"

Somebody competent could probably knock up a reasonable hack in a day or two. I guess we have some sites out there already, like DZone snippets, snipplr, & swik, but none of those are exactly the same thing.

Consider it my challenge to the world - just don't tempt me. I've got enough to do as it is.

ObQuote: Hellboy

| 7 comments

 

Why do you keep torturing yourself?

9 July 2009 21:50

Recently I came to realise that my planning and memory skills weren't adequate for keeping track of what I want to do, and what I need to do.

For a while I've been fooling myself into thinking that "emacs ~/TODO" was a good way to keep track of tasks. It really isn't - especially if you work upon multiple machines throughout the week.

So I figured I needed something "always available", which these days mostly means an online application / website.

Over the years I've looked at many multi-user online "todo-list" applications, and inevitably they all suck. Mostly they suck because they're either too rigid or don't meet my particular way of working, living, and doing "stuff".

To my mind :

  • A todo-list should above all make it easy to add tasks.
    • If you cannot easily add tasks then you won't. So you'll never use it.
  • A task might be open or closed, but it will never be 23.55% complete.
  • A task might be "urgent" or not, but it will never be "urgent", "semi-urgent", "do soon", "do today".
  • A task might have many steps but they should either be added separately, or the steps noted in some notes.
    • I find the notion of making "task A" depend upon "task B" perilous.
  • A task belongs to one person. It cannot be moved, shared, or split.

Some of those things, such as subtasks and completion percentages, are I guess more applicable to project-management software. Those and time-sheet applications really wind me up.

With my mini-constraints in mind I sketched out a couple of prototypes. The one I was expecting to use had a typical three-pane view:

[ Task Menu ]     |  Task1: Buy potatoes
                  |  Task2: Remember to check email
  All Tasks       |  Task3: Order more cake.
  Completed Tasks |------------------------------------
  Urgent Tasks    |
                  |  [Task Details here]
  Tags:           |
   * Work         |  [Urgent | Non-Urgent ]
   * Fun          |
   * Birthdays    |  [Close Task | Re-Open Task ]
   * Parties      |
   * Shopping     |  [Notes ..]
   ...

That turned out to be a pain to implement, and also a little unwieldy. I guess trying to treat a tasklist as a collection of email is a difficult proposition at best - but more on that in my next post.

So after a quick rethink I've now come up with a simple but functional layout, based upon the notions that:

  • Adding tasks must be almost too easy.
  • Most tasks only need a one-line description.
  • Adding tags is good. Because tasks cross boundaries.
  • Adding notes is good.
  • No task should ever be deleted - but chances are we don't actually wish to view tasks older than a month. We can, but we can hide them by default.
  • When a task is closed/completed it cannot be edited.
  • All tasks belong to their owner and are non-public.

So what I've got is a multi-user system which is essentially split into four behaviours: Adding a task, viewing open tasks, viewing closed tasks, and searching for tasks.

Tasks can be tagged, have notes added to them (but never deleted/edited) and over time closed tasks fade away (though they're never deleted).
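
In perl terms each task then becomes little more than the following - a sketch of the data model rather than the actual code:

# One task: a title, an owner, a couple of flags, plus
# append-only notes and free-form tags.
my %task = (
    owner  => "steve",
    title  => "Buy potatoes",
    urgent => 0,
    closed => 0,
    added  => time(),
    tags   => ["shopping"],
    notes  => ["Maris Pipers for preference."],
);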

Some of my constraints or assumptions might change over time, but so far I'm happy with them. (e.g. I can imagine tagging an entry "public" might make it appear visible to others.)

Anyway the code is (surprise) built using minimal perl & jQuery, and you can play with it.

The site contains a demo user which you can use. I don't much care if people wish to use it for real once it is more complete, but I expect that it will either be ignored completely or be the kind of thing you wish to self-host.

With that in mind the code is currently closed, but I'll add it to my mercurial repository soon. (Maybe even tonight!)

ObSubject: Dog Soldiers

| 8 comments

 

Not even if you let me video tape it.

10 July 2009 21:50

The online todo list seems popular, or rather a lot of people logged in with the posted details and created/resolved tasks at least.

It is kinda cute to watch multiple people all using a site with only one set of login credentials - I guess this is a special case because you cannot easily delete things. Still I wonder how other sites would fare like that? Would visitors self-organise, some trashing things and others repairing the damage? (I guess we could consider Wikipedia a good example.)

Anyway I've spent a little while this morning and this lunchtime adding in the missing features and doing things suggested by users.

So now:

  • "Duration" is shown for both open & completed tasks.
  • The "home" page is removed as it added no value.
  • Tasks may be flagged as urgent.
  • (Tasks which have titles beginning with "*" are urgent by default.)
  • Searching works across tags, notes, and titles.
  • Tag name completion is pending.

I think in terms of features that I'm done. I just need to wire up creation of accounts, and the submission of tasks via email. Then un-fuck my actual code.

I guess as a final thing I need to consider email notices. I deliberately do not support or mandate "due dates" for tasks. I think I prefer the idea of an email alert being sent if a task is marked as urgent and has had no activity in the past 24 hours. (Where activity means "new note", e.g. you'd add "still working on this", or similar, to cancel the pending alert.)

Sending alerts via twitter could also be an option, although I still mostly abhor it.

I've had a brief look at both tadalist.com and rememberthemilk.com; both seem nice .. but I'm still not sure of a winner yet.

ObFilm: Chasing Amy

| 4 comments

 

The pain of a new IP address

13 October 2010 21:50

Tonight I was having some connectivity issues, so after much diagnostic time and pain I decided to reboot my router. The moment my home router came back my (external) IP address changed, and suddenly I found I could no longer log in to my main site.

Happily however I have serial console access, and I updated things such that my new IP address was included in the hosts.allow file. [*]

The next step was to push that change round my other boxes, and happily I have my own tool slaughter which allows me to make such global changes in a client-pulled fashion. 60 minutes later cron did its magic and I was back.

This reminds me that I've let the slaughter tool stagnate. Mostly because I only use it to cover my three remote boxes and my desktop, and although I received one bug report (+fix!) I never heard of anybody else using it.

I continue to use and like CFEngine at work. Puppet & Chef have been well argued against elsewhere, and I've still to investigate Bcfg2 + FAI.

Mostly I'm happy with slaughter. My policies are simple, readable, and intuitive. Learn perl? Learn the "CopyFile" primitive and you're done. For example:
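
(A guess at a minimal policy follows - the argument style is modelled on the FetchFile example in a later post, so treat the exact names as assumptions.)

# Keep a shared bashrc in place; paths are illustrative.
CopyFile( Source => "/etc/skel/.bashrc",
            Dest => "/home/skx/.bashrc",
           Owner => "skx",
           Group => "skx",
            Mode => "644" );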

By contrast the notion of state machines, functional operations, and similar seems over-engineered in other tools. Perhaps that's my bug, perhaps that's just the way things are - but the rants linked to above make sense to me and I find myself agreeing 100%.

Anyway; slaughter? What I want to do is rework it such that all policies are served via rsync and not via HTTP. Other changes, such as the addition of new primitives, don't actually seem necessary. But serving content via rsync just seems like the right way to go. (The main benefit is that recursive copies of files become trivial.)
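
Client-side that would collapse to little more than one command per run - something like this, with the module name being illustrative:

rsync -az rsync.example.com::slaughter/ /var/cache/slaughter/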

I'd also like to add the ability to mandate GPG-signatures on policies, but that's possible even now. The only step backwards I see is that currently I can serve content over SSL, but that should be fixable, even if via stunnel.


*

My /etc/hosts.allow file contains this:

ALL: 127.0.0.1
ALL: /etc/hosts.allow.trusted
ALL: /etc/hosts.allow.trusted.apache

Then hosts.allow.trusted contains:

# www.steve.org.uk
80.68.85.46

# www.debian-administration.org
80.68.80.176

# my home.
82.41.x.x

I've never seen anybody describe something similar, though to be fair it is documented. To me it just seems clean to limit the IPs in a single place.

To conclude, hosts.allow.trusted.apache is owned by root.www-data, and can be updated via a simple CGI script - which allows me to add a single IP address on the fly for the next 60 minutes. Neat.
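
The script itself can be trivial; a minimal sketch, assuming the group-write permission implied above and a companion cron job to provide the 60-minute expiry:

#!/usr/bin/perl
# Append the caller's IP to the apache-writable whitelist.
use strict;
use warnings;

my $ip = $ENV{'REMOTE_ADDR'} or die "No remote address\n";

open( my $fh, ">>", "/etc/hosts.allow.trusted.apache" )
  or die "Failed to open whitelist: $!\n";
print {$fh} $ip . "\n";
close($fh);

print "Content-type: text/plain\r\n\r\n";
print "Allowed: $ip\n";

# Expiry handled elsewhere, e.g. via cron:
#   0 * * * * root cp /dev/null /etc/hosts.allow.trusted.apache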

ObQuote: Tony is a little boy that lives in my mouth. - The Shining

| 1 comment

 

So slaughter is definitely getting overhauled

24 October 2012 21:50

There have been a few interesting discussions going on in parallel about my slaughter sysadmin tool.

I've now decided there will be a 2.0 release, and that will change things for the better. At the moment there are two main parts to the system:

Downloading policies

These are instructions/perl code that are applied to the local host.

Downloading files

Policies are allowed to download files, e.g. /etc/ssh/sshd_config templates, etc.

Both these occur over HTTP fetches (SSL may be used), and there is a different root for the two trees. For example you can see the two public examples I have here:

A fetch of the policy "foo.policy" uses the former prefix, and a fetch of the file "bar" uses the latter. (In actual live usage I use a restricted location because I figured I might end up storing sensitive things, though I suspect I don't.)

The plan is to update the configuration file to read something like this:

transport = http

#
# Valid options will be
#    rsync | http | git | mercurial | ftp
#

#
# each transport will have a different prefix
#
prefix = http://static.steve.org.uk/private

# for rsync:
#  prefix=rsync.example.com::module/
#
# for ftp:
#  prefix=ftp://ftp.example.com/pub/
#
#  for git:
#  prefix=git://github.com/user/repo.git
#
#  for mercurial
#  prefix=http://repo.example.com/path/to/repo
#

I anticipate that the HTTP transport will continue to work the way it currently does. The other transports will clone/fetch the appropriate resource recursively to a local directory - say /var/cache/slaughter. So the complete archive of files/policies will be available locally.

The HTTP transport will continue to work the same way with regard to file fetching, i.e. fetching them remotely on-demand. For all other transports the "remote" file being copied will be pulled from the local cache.

So assuming this:

transport = rsync
prefix    = rsync.company.com::module/

Then the following policy will result in the expected action:

if ( UserExists( User => "skx" ) )
{
    # copy
    FetchFile(
            Source => "/global-keys",
              Dest => "/home/skx/.ssh/authorized_keys2",
             Owner => "skx",
             Group => "skx",
              Mode => "600" );
}

The file "/global-keys" will refer to /var/cache/slaughter/global-keys which will have been already downloaded.

I see zero downside to this approach; it allows the HTTP stuff to continue to work as it did before, and it allows more flexibility. We can benefit from knowing that the remote policies are untampered with, for example via the checking built into git/mercurial, and from the speed gains of rsync.

There will also be an optional verification stage. So the code will roughly go like this:

  1. Fetch the policies using the specified transport.
  2. (Optionally) run some local command to verify the local policies - see the sketch below.
  3. Execute the policies.
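
If the verification stage ends up being a configurable command it would presumably sit alongside the transport settings - hypothetically:

transport = git
prefix    = git://github.com/user/repo.git

# hypothetical key: run after fetching; abort unless it exits zero
verify    = gpg --verify /var/cache/slaughter/policies.sig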

I'm not anticipating additional changes, but I'm open to persuasion.

| No comments

 

Expiration checking services?

31 October 2013 21:50

Today I'm recuperating, and almost back to full health.

Unfortunately I made the mistake of online-shopping, oops.

Good job I stopped myself from registering all the domains, but I did get two that I liked: spare.io & edinburgh.io.

I've updated my database to record them, but I wonder: what do other people use to remind them about the expiration dates of domains, SSL certificates, and so on?
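
For the SSL side at least a cron-able check is easy to roll by hand - something like:

# Print the certificate expiry date for a given host.
openssl s_client -connect spare.io:443 </dev/null 2>/dev/null |
    openssl x509 -noout -enddate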

I googled and didn't find a definitive free/paid service, but it seems like something lots of people need to be reminded about..

Maybe people just rely on registrars sending strident emails. (Of course the redemption period for domains makes it reasonably safe to forget for a day or two, until your customers complain and your emails start to bounce..)

| 8 comments