About Archive Tags RSS Feed

 

Entries tagged blog

Other things just make you swear and curse

26 June 2007 21:50

I find myself in need of a simple "blogging system" for a small non-dynamic site I'm putting together.

In brief I want to be able to put simple text files into "blog/", and have static HTML files built from them, with the most recent N included in an index - and each one individually linked to.

At a push I could just read "entries/*.blog", then write a perl script to extract a date + title and code it myself - but surely such a thing must already exist? I vaguely remember people using debian/changelog files as blogs a while back - that seems similar?
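To give a feel for how little is involved, here is a rough shell sketch of the sort of thing I mean. The "Title:"/"Date:" headers, the entries/ layout, and the sample files are all invented for illustration - they are not any existing format:

```shell
#!/bin/sh
# Sketch: one HTML page per entry, plus an index of the five most
# recent posts.  Each entries/*.blog file is assumed to start with
# "Title:" and "Date: YYYY-MM-DD" headers, a blank line, then the body.
set -e
mkdir -p entries output

# Two sample entries, just so the script is self-contained.
printf 'Title: First post\nDate: 2007-06-01\n\nHello, world.\n' > entries/first.blog
printf 'Title: Second post\nDate: 2007-06-20\n\nMore text.\n'   > entries/second.blog

for f in entries/*.blog; do
    title=$(sed -n 's/^Title: *//p' "$f")
    date=$(sed -n 's/^Date: *//p' "$f")
    page=$(basename "$f" .blog).html

    {
        printf '<html><body><h1>%s</h1><p>%s</p><pre>\n' "$title" "$date"
        sed '1,/^$/d' "$f"                # body: everything after the headers
        printf '</pre></body></html>\n'
    } > "output/$page"

    # Emit "date <tab> link" so the index can be sorted newest-first.
    printf '%s\t<a href="%s">%s</a><br/>\n' "$date" "$page" "$title"
done | sort -r | head -n 5 | cut -f2- > output/index.html
```

A real version would want proper HTML-escaping and templating, which is exactly why reaching for an existing tool makes sense.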

Update: NanoBlogger it is.

| No comments

 

So helpless against what is coming.

14 April 2008 13:04

I've made a new release of the chronicle blog compiler.

There are a couple of minor changes to the variables exported to the theme templates, as contributed by MJ Ray, and a new spooling system.

This works in a simple fashion, and allows you to queue up posts. For example, if you write a new entry containing the pseudo-header "Publish: 20th April 2008" and you have a crontab entry containing this:

  chronicle-spooler \
    --spool-dir=~/blog/spool/  \
    --live-dir=~/blog/data/  \
    --post-move='cd ~/blog && make upload'

It works as expected: when the spooler runs on the 20th of April the file will be moved from ~/blog/spool into ~/blog/data, and your blog will be rebuilt & uploaded.
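The core idea is tiny. Here is a throwaway sketch of it - not the real chronicle-spooler, just the same logic with invented paths, and relying on GNU date's -d option for parsing:

```shell
#!/bin/sh
# Sketch of the spooling idea: move any spooled entry whose
# "Publish:" pseudo-header date has passed into the live directory.
# The spool/ and data/ layout here is a self-contained demo.
set -e
mkdir -p spool data

printf 'Publish: 2008-04-20\n\nAn old, ready entry.\n' > spool/ready.txt
printf 'Publish: 2099-01-01\n\nA future entry.\n'      > spool/waiting.txt

now=$(date +%s)
for f in spool/*; do
    when=$(sed -n 's/^Publish: *//p' "$f")
    [ -n "$when" ] || continue          # no Publish: header - leave it alone

    # GNU date turns the header into an epoch time for comparison.
    if [ "$(date -d "$when" +%s)" -le "$now" ]; then
        mv "$f" data/                   # publication date has passed
    fi
done
```

The real tool then runs whatever you gave --post-move, so the rebuild and upload happen immediately after the file lands in the live directory.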

The implementation was different from the original suggestion, but it is nice and clean, and can be used to do other things with a little effort.

Anyway, if you see this entry the spooling system is working!

ObQuote: 30 Days of Night.

| No comments

 

I am the Earl of Preston

6 September 2009 21:50

Paul Wise recently reported that the Planet Debian search index hadn't updated since the 7th of June. The search function is something I added to the setup, and although I don't use it very often, when I do I find it enormously useful.

Anyway, normal service should now be restored, but the search index will be missing anything posted during the two months the indexer wasn't running.

Recently I tried to use this search functionality to find a post I knew I'd written on my blog a year or so ago, which I'd spectacularly failed to find via grep and my tag list.

Ultimately this led to my adding a search interface to my own blog entries using the namazu2 package. If I get some free time tomorrow I'll write a brief guide to setting this up for the Debian Administration website - something that has been a little neglected recently.
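For what it's worth, the core of that setup is just a pair of commands. The paths below are placeholders of my own, and you'd need the namazu2 package installed, so treat this as a sketch rather than a recipe:

```shell
# Build (or refresh) a full-text index over the generated HTML.
# mknmz and namazu both come from the namazu2 package; the
# ~/blog/... paths are placeholders.
mknmz -O ~/blog/index ~/blog/output

# Query the index from the command line, to sanity-check it
# before wiring up the CGI front-end.
namazu "some search terms" ~/blog/index
```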

ObFilm: Bill & Ted's Excellent Adventure

| 3 comments

 

Because I don't trust myself with you.

28 June 2010 21:50

Debian Packages

Every now and again I look over server logs and see people downloading random .deb packages from my mirrors, or from my servers, via wget or Firefox (rather than with apt-get/aptitude).

Personally I don't often download random binaries even from people I believe I can trust. Instead I'll download source and rebuild.

But it bugs me that somebody might download a work-in-progress, decide it isn't complete or otherwise good enough, and miss out on an update of awesome-sauce a day or two later.

I suspect there is no real solution to this "problem", and that including /etc/apt/sources.list.d/ entries inside a binary package to "force" an upgrade behind the scenes is a little too evil to tolerate. And yet .. something something dark-side .. something something seductive something?

Blog Update

This is my last film-subject entry. In the future I will have more accurate subjects, albeit duller ones.

The quotations still amuse me, as did the song lyrics before them, but I guess now is a good time to call it a day with that.

ObFilm: Cruel Intentions

| 9 comments

 

Security changes have unintended effects.

7 September 2012 21:50

A couple of months ago I was experimenting with adding no-new-privileges to various systems I run. Unfortunately I was surprised a few weeks later by unintended breakage.

My personal server has several "real users", and several "webserver users". Each webserver user runs a single copy of thttpd under its own UID, listening on 127.0.0.1:xxxx, where xxxx is the userid:

steve@steve:~$ id -u s-steve
1019

steve@steve:~$ sudo lsof -i :1019
COMMAND  PID    USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
thttpd  9993 s-steve    0u  IPv4 7183548      0t0  TCP localhost:1019 (LISTEN)

Facing the world I have an IPv4 & IPv6 proxy server that routes incoming connections to these local thttpd instances.

Wouldn't it be wonderful to restrict these instances, and prevent them from acquiring new privileges? Yes, I thought. Unfortunately I stumbled across a down-side: some of the servers send email, and they do that by shelling out to /usr/sbin/sendmail, which is setuid (and thus fails once the flag is set). D'oh!

The end result was choosing between:

  • Leaving "no-new-privileges" in place, and rewriting all my mail-sending CGI scripts.
  • Removing the protection such that setuid files can be executed.

I went with the latter for now, but will probably revisit this in the future.
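If you want to see the flag in action without touching any service configuration, setpriv from util-linux can set it directly. This is Linux-only, and the /proc path is just where the kernel happens to report the bit:

```shell
# Run a command with no_new_privs set, and show the kernel's view
# of the flag in /proc/self/status (reported as "NoNewPrivs: 1").
setpriv --no-new-privs grep NoNewPrivs /proc/self/status
# Any setuid helper exec'd from such a process - /usr/sbin/sendmail,
# say - runs with the caller's privileges instead of gaining root,
# which is exactly what broke my mail-sending CGI scripts.
```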

In more interesting news, I recently tried to recreate the feel of a painting as an image, which was successful. I think.

I've been doing a lot more shooting recently, even outdoors, which has been fun.

ObQuote: "You know, all the cheerleaders in the world wouldn't help our football team." - Bring it On

| 3 comments