
 

Entries tagged meta

We're all to blame

22 July 2007 21:50

Jose Carlos Garcia Sogo: Whilst breasts are everywhere I find it highly inappropriate for people to link to videos of them on Planet Debian.

I'd comment to that effect upon your post but I'm getting a 500 error from your server.

In other news, Joey Hess reminded me this evening that it is pretty much the one-year anniversary of my Xen hosting setup.

In the next few days, once I've checked dates and looked to see if we can upgrade, etc, I'll be requesting payment from those people who wish to continue.

| No comments

 

That I can't show you how

30 July 2007 21:50

Russell Coker has recently started posting random tech-tips and recipes in his blog:

To improve things in this regard I plan to increase the number of posts I write with solutions to random technical problems that I encounter with the aim of providing a resource for google searches and to randomly inform people who read my blog.

This is nice to see on Planet Debian - although I hope we continue to see the personal entries.

For anybody else who is considering posting things like this I would be delighted if you'd copy them to the Debian Administration website. There have been numerous times when I've been just about to write something on a topic, seen it posted elsewhere and figured I shouldn't do so:

  • Because it would be duplication.
  • Because it would look like plagiarism.

(Notable examples off the top of my head: Introduction to OpenVZ, Introduction to GIT, several Xen pieces.)

I don't get many submissions, which I'm getting resigned to, but it is easy and people really are grateful for new posts.

In other news, linuxlinks.com are a bunch of spammers and will be reported as such. I utterly fail to care that they've added "my software" to their list; if I cared I'd join their site and agree to receive emails from them.

| No comments

 

Children Of The Damned

13 December 2007 21:50

After a lot of hacking I've now got chronicle displaying comments upon entries.

Since my blog is compiled on my home desktop machine and comment submission happens upon a remote machine the process involves a bit of a hack:

Publish Blog

The blog is compiled and uploaded to the live location using rsync.

Wait For Comments

Once the blog is live there are embedded forms which may be used to receive comments.

The CGI script which is the target of the forms will then write each comment out to a text file, located outside the HTTP-root.

Sync Them

Prior to rebuilding the blog the next time I update, I rsync the comments directory to my local machine, such that the comments posted are included in the output.

Thus my local tree looks something like this:

~/blog/
|-- comments/
|-- data/
|-- output/
|-- Makefile
`-- chroniclerc

Here I have a Makefile to automate the import of the comments from the live site to the local comments/ directory, rebuild, and finally upload.
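For illustration, the recipe behind that Makefile might look something like the following - the hostname and paths are invented, and the chronicle invocation is simplified:

# pull down any comments submitted since the last build
rsync -az blog.example.org:comments/ comments/
# rebuild the static output, which now includes the synced comments
chronicle
# push the freshly generated pages to the live location
rsync -az output/ blog.example.org:htdocs/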

All this means that I can rebuild a blog created by a combination of plain text post files and plain text comment files.

It also means that there is a fair lag between comment submission and publication - though I guess there is nothing stopping me from auto-rebuilding and syncing every hour or two via cron...

I'll make a new release with this comment.cgi script and changes once the package hits Debian unstable...

| 4 comments

 

Offer me everything I ask for

29 April 2008 21:50

I installed Debian upon a new desktop machine yesterday, via a PXE network boot.

It was painless.

Getting Xen up and running, with a 32-bit guest and a 64-bit guest each running XDMCP & VNC, was also pretty straightforward.

There is a minor outstanding problem with the 32-bit Xen guest though; connecting to it from dom0, via XDMCP, I see only a blank window - no login manager running.

GDM appears painlessly when I connect via VNC.

The relevant configuration file looks like this:

# /etc/gdm/gdm.conf
[security]
AllowRoot=true
AllowRemoteRoot=true

[xdmcp]
Enable=true

The same configuration on the 64-bit guest works OK for both cases.

(I like to use XDMCP for accessing the desktop of Xen guests, since it means that I get it all full-screen, and don't have to worry about shortcuts affecting the host system and not the guest - as is the case if you're connecting via VNC, etc).
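(A concrete example of that kind of connection, for the curious: a nested X server such as Xephyr can query the guest over XDMCP - the guest address here is made up:)

# full-screen nested X server, logging in to the guest via XDMCP
Xephyr :1 -fullscreen -query 192.168.100.2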

Weirdness. Help welcome; I'm not 100% sure where to look.

Anyway, once again, a huge thank you to the Debian Developers, bug submitters, and anybody else involved peripherally (such as myself!) with Debian!

I love it when a plan comes together.

SSL

ObRandom: Where is the cheapest place to get an SSL certificate, for two years, which will work with my shiny Apache2 install?

Somebody, rightly, called me out for not having SSL available as an option on my mail filtering website.

I've installed a self-signed certificate just now, but I will need to pay the money and buy a "real" one shortly.
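(The self-signed certificate came from an openssl invocation along these lines - a sketch rather than the exact command, with hypothetical paths:)

# generate a throw-away self-signed certificate, valid for a year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout /etc/ssl/private/mail.key -out /etc/ssl/certs/mail.crt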

So far completessl.com seems to be high in the running:

  • 1 year - £26
  • 2 years - £49

For double-bonus points they accept Paypal, which most of my customers pay with.

ObQuote: The Princess Bride

| 9 comments

 

Please tell me what you're feeling.

24 October 2008 21:50

Twice, recently, people have commented on my post titles. I keep a real journal elsewhere where entries are either untitled, or are titled by quotations. In the past I used to keep a running total of who had guessed the source of the quote, but these days that game gets ignored.

Titles here used to be song lyrics, but these days they are film quotes.

When I started I intended to be a little interesting, and make "happy posts" contain quotes from films I enjoyed watching, and "ironic" or "ranty" posts contain quotes from films I disliked. But that didn't last for long, and I suspect nobody noticed anyway. (Why would they?!)

Regardless, that's the only explanation I'll give.

Once or twice I've had people "complain":

Your titles have no relationship to your posts.

I don't know if your post is a serious one I should read, or a trivial one I should ignore.

To those comments I have several potential responses:

  • Either I believe the posts are serious, or I believe they are not. My opinion might coincide with yours, but frankly I have no expectation either way.
  • Read the body, ignore the title. Consider it an ironic commentary on the importance of fashion, where appearance is more important than content.
  • It quickly gets old making titles of the form "Hey, asql released. Again.".
  • I find your ideas intriguing and wish to subscribe to your newsletter.

In short, if you (dis)like what I have to say I'm sure that the title of the post you disagree with is the least important part of that. I'm sure I could say more either way, but since the recent reminder I figured I should write something.

I've almost done so several times, but I find it hard to be specific. Regardless, the summary is:

"I'll keep this up, whether you like it or not, until I get bored. Regardless of how much I might respect you, and your opinion."

PS. tscreen released. Again.

ObFilm: Star Trek II: The Wrath of Khan

| 7 comments

 

It's in your nature to destroy yourselves.

6 November 2008 21:50

Elections

I've said this elsewhere, but it bears repeating:

Anybody who expects a nation to turn around overnight, due to a changing government, hasn't watched/read enough documentaries.

Television

Who is going to make documentaries when David Attenborough dies?

ObFilm: Terminator 2

| 1 comment

 

Death is... whimsical... today.

12 January 2009 21:50

I'm not sure how you can pre-announce something in a way that cannot be later faked.

The best I can imagine is you write it in a text file and post the size / hash of the file.

steve@skx:~$ ls -l 10-march-2009
-rw-r--r-- 1 steve users 234 Jan 12 21:40 10-march-2009
steve@skx:~$ sha1sum 10-march-2009
99d1b6d625ed4c15a3be2be5fec63c17941c370d  10-march-2009
steve@skx:~$ md5sum 10-march-2009
1a0e68b8fbb3b0fe30e5b4a9413ceeec  10-march-2009

I don't need anybody to keep me honest, but I welcome interesting suggestions on neater ways to pre-confirm you have content that hasn't been changed between being written and being released...?

I guess you could use GPG and a disposable key-pair, and then post the secret key afterward, but that feels kinda wrong too.

Update: of course you could post the detached signature. D'oh.
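That approach would look something like this - sign now, publish only the signature, then release the file itself on the chosen date:

# now: create and publish a detached signature of the unreleased file
gpg --armor --detach-sign 10-march-2009   # emits 10-march-2009.asc
# later: publish the file; anybody can then verify it was unchanged
gpg --verify 10-march-2009.asc 10-march-2009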

Shamir's Secret Sharing could be another option - posting just enough pieces of the secret to make recovery possible with the addition of one piece that was withheld until the later date. Jake wrote a nice introduction to secret sharing a couple of years ago.
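(Debian packages an implementation of that scheme as ssss; a rough sketch of the idea, with made-up threshold numbers:)

# split the secret into five shares, any three of which recover it
echo "content to pre-announce" | ssss-split -t 3 -n 5 -q
# publish two shares today; releasing a third on the day lets anybody run:
ssss-combine -t 3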

ObFilm: Léon

| 12 comments

 

I am the Earl of Preston

6 September 2009 21:50

Paul Wise recently reported that the Planet Debian search index hadn't updated since the 7th of June. The search function is something I added to the setup, and although I don't use it very often, when I do I find it enormously useful.

Anyway, normal service should now be restored, but the search index will be missing the content of anything posted during the two months the indexer wasn't running.

Recently I tried to use this search functionality to find a post that I knew I'd written upon my blog a year or so ago, which I'd spectacularly failed to find via grep and my tag list.

Ultimately this led to my adding a search interface to my own blog entries using the namazu2 package. If I get some free time tomorrow I'll write a brief guide to setting this up for the Debian Administration website - something that has been a little neglected recently.

ObFilm: Bill & Ted's Excellent Adventure

| 3 comments

 

Hack the planet!

22 September 2009 21:50

Recently I was viewing Planet Debian and there was an entry present which was horribly mangled - although the original post seemed to be fine.

It seemed obvious to me that some of the filtering which the planet software had applied to the original entry had caused it to become broken, malformed, or otherwise corrupted. That made me wonder what attacks could be performed against the planet aggregator software used on Planet Debian.

Originally Planet Debian was produced using the planet software.

This was later replaced with the actively developed planet-venus software.

(The planet package has now been removed from Debian unstable.)

Planet, and the Venus project which forked from it, do a great job at scrutinising their input and removing malicious content. So my only hope was to stumble across something they had missed. Eventually I discovered the (different) filtering applied by the two feed aggregators missed the same malicious input - an image with a src parameter including javascript like this:

<img src="javascript:alert(1)">

When that markup is viewed by some browsers it will result in the execution of javascript. In short it is a valid XSS attack which the aggregating software didn't remove, protect against, or filter correctly.

In fairness it seems most of the browsers I tested didn't actually alert when viewing that code - but as a notable exception Opera does.

I placed a demo online to test different browsers.

If your browser executes the code there, and it isn't Opera, then please do let me know!

The XSS testing of planets

Rather than produce a lot of malicious input feeds I constructed and verified my attack entirely offline.

How? Well the planet distribution includes a small test suite, which saved me a great deal of time, and later allowed me to verify my fix. Test suites are good things.

The testing framework allows you to run tiny snippets of code such as this:

# ensure onblur is removed:
HTML( "<img src=\"foo.png\" onblur=\"alert(1);\" />",
      "<img src=\"foo.png\" />" );;

Here we give two parameters to the HTML function, one of which is the input string, and the other is the expected output string - if the sanitization doesn't produce the string given as the expected result an error is raised. (The test above is clearly designed to ensure that the onblur attribute and its value are removed.)

This was how I verified initially that the SRC attribute wasn't checked for malicious content and removed as I expected it to be.

Later I verified this by editing my blog's RSS feed to include a malicious, but harmless, extra section. This was then shown upon the Planet Debian output site for about 12 hours.

During the twelve-hour window in which the exploit was "live" I received numerous hits. Here are a couple of log entries (IP + referer + user-agent):

xx.xx.106.146 "http://planet.debian.org/" "Opera/9.80
xx.xx.74.192  "http://planet.debian.org/" "Opera/9.80
xx.xx.82.143  "http://planet.debian.org/" "Opera/9.80
xx.xx.64.150  "http://planet.debian.org/" "Opera/9.80
xx.xx.20.18   "http://planet.debian.net/" "Opera/9.63
xx.xx.42.61   "-"                         "gnome-vfs/2.16.3
..

The Opera hits were to be expected from my previous browser testing, but I'm still not sure why some hits came from User-Agents identifying themselves as gnome-vfs/n.n.n. Enlightenment would be welcome.

In conclusion, the incomplete escaping of input by Planet/Venus was allocated the identifier CVE-2009-2937, and will be fixed by a point release.

There are a lot of planets out there - even I have one: Pluto - so we'll hope Opera is a rare exception.

(Pluto isn't a planet? I guess that's why I call my planet a special planet ;)

ObFilm: Hackers.

| 6 comments

 

There's no such thing as a wrong war

13 October 2009 21:50

Once upon a time I wrote a blog compiler, a simple tool that would read in a bunch of text files and output a blog. This blog would contain little hierarchies for tags, historical archives, etc. It would also have a number of RSS feeds too.

Every now and again somebody will compare it to ikiwiki and I'll ignore that comparison entirely, because the two tools do different things in completely different fashions.

But I was interested to see Joey talk about performance tweaks recently as I have a blog which has about 900 pages, and which takes just over 2 minutes to build from start to finish. (Not this one!)

I've been pondering performance for a while as I know my current approach is not suited to high speed. Currently the compiler reads in every entry and builds a giant data structure in memory which is walked in different fashions to generate and output pages.

The speed issue comes about because storing the data structure entirely in memory is insane, and because sometimes a single entry will be read from disk multiple times.

I've made some changes over the past few evenings such that a single blog entry will be read no more than once from disk (and perhaps zero times if Memcached is in use :) but that doesn't solve the problem of the memory usage.

So last night I made a quick hack - using my introduction to SQLite as inspiration I wrote a minimal reimplementation of chronicle which does things differently, as sketched after this list:

  • Creates a temporary SQLite database with tables: posts, tags, comments.
  • Reads every blog entry and inserts it into the database.
  • Uses the database to output pages.
  • Deletes the database.
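At the shell the equivalent steps look roughly like this - the schema is a simplified, hypothetical version of the real one:

# build a throw-away database, fill it, query it, then discard it
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE posts(id INTEGER PRIMARY KEY, date TEXT, title TEXT, body TEXT);"
sqlite3 "$db" "CREATE TABLE tags(post_id INTEGER, tag TEXT);"
sqlite3 "$db" "CREATE TABLE comments(post_id INTEGER, author TEXT, body TEXT);"
# ...one INSERT per entry/tag/comment, then output pages via queries such as:
sqlite3 "$db" "SELECT title FROM posts ORDER BY date DESC LIMIT 10;"
rm -f "$db"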

This is a significantly faster approach than the previous one - with a "make steve" job taking only 18 seconds, down from just over 2 minutes 5 seconds.

("make steve" uses rsync to pull in comments on entries, rebuilds the blog, then uses rsync to push the generated output into its live location.)

ObFilm: If...

| 6 comments

 

Because I don't trust myself with you.

28 June 2010 21:50

Debian Packages

Every now and again I look over server logs and see people downloading random .deb packages from my mirrors, or from my servers, via wget or Firefox (rather than with apt-get/aptitude).

Personally I don't often download random binaries even from people I believe I can trust. Instead I'll download source and rebuild.

But it bugs me that somebody might download a work-in-progress, decide it isn't complete or otherwise good enough, and miss out on an update of awesome-sauce a day or two later.

I suspect there is no real solution to this "problem", and that including /etc/apt/sources.list.d/ entries inside a binary package to "force" an upgrade behind the scenes is a little too evil to tolerate. And yet .. something something dark-side .. something something seductive something?

Blog Update

This is my last film-subject entry. In the future I will have more accurate subjects, albeit more dull ones.

I still amuse myself with the quotations, as I did before with the song lyrics, but I guess that now is a good time to call it a day with that.

ObFilm: Cruel Intentions

| 9 comments

 

jQuery in use upon this blog

30 August 2010 21:50

Blog Update

I've just updated the home-grown javascript I was using upon this blog to be jQuery powered.

This post is a test.

I'll need to check but I believe I'm almost 100% jQuery-powered now.

AJAX Proxies

It is a well-known fact that AJAX requests are only allowed to be made to the server the javascript was loaded from - the so-called same-origin security restriction.

To pull content from other sites, users are often encouraged to write a simple proxy:

  • http://example.com/ serves Javascript & HTML.
  • http://example.com/proxy/http://example.com allows arbitrary fetching.

Simples? No. Too many people write simple proxies which use PHP's cURL functions, or something similar, with little restriction on either the protocol or the destination of the requested resource.

Consider the following requests:

  • http://example.com/proxy.php?url=/etc/passwd
  • http://example.com/proxy.php?url=file:///etc/passwd

If you're using some form of Javascript/AJAX proxy, make sure you test for this. (ObRandom: searching Google for inurl:"proxy.php?url=http:" shows this is a real problem. l33t.)
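Testing is easy from the command line; a well-behaved proxy should refuse both of the following (hostname invented):

# both requests should be rejected: no local paths, no non-HTTP schemes
curl 'http://example.com/proxy.php?url=file:///etc/passwd'
curl 'http://example.com/proxy.php?url=/etc/passwd'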

ObQuote: "You're asking me out? That's so cute! What's your name again?" - 10 Things I Hate About You.

| No comments

 

I miss the old Debian

11 November 2010 21:50

I miss the days when Debian was about making software work well together.

These days the user mailing lists are full of posts from users asking for help on Ubuntu [*1*], people suggesting that we copy what Ubuntu has done, and people asking for "howtos" because documentation is too scary or too absent for them to read.

Yesterday the whole "commercial spam on Planet" debate started. It's yet another example of how in-fighting [*2*] seems to be the most fun part of Debian for too many.

Me? I started and folded a company. I got Debian help and users. Some threw money at me.

Joey Hess? Started making the nice-looking ikiwiki-powered branchable.com.

Commercial? Yes. Spam? No.

I guess there is little I can do. I could quit - as over time I've become less patient dealing with the project as a whole, but simultaneously more interested in dealing with a few specific people. But I suspect the net result would be no change. Either people would say "Ok, bye" or, worse still, offer me flattery: "Don't go - we lurve you".

Meh.

I shouldn't write when I'm annoyed, but living in a hotel will do that to you.

Footy-Mc-Foot-notes. Cos HTML is hard. Let's go shopping. Eat cake.

1

The Ubuntu forums are largely full of the blind leading the blind. Or equally often the blind being ignored.

I do believe that an Ubuntu stackoverflow site would be more useful than forums. But that's perhaps naive. People will still most often say "My computer doesn't work", missing all the useful details.

The only obvious gain is you can avoid "me too!!!" comments, and "fix this now or I'm gonna go .. use gentoo?".

2

Back a few years, when people were less civil, some mailing lists and IRC channels were unpleasant places to be.

These days we've solved half the problem: People mostly don't swear at each other.

The distractions, the threads that don't die: even if you ignore them and don't join in the "hilarity", they still have a divisive, negative effect.

| 13 comments

 

Testing the blog feed

3 March 2013 21:50

My previous entry, about templating, didn't make it into Planet Debian.

This entry is just a test to see if it is my fault.

| No comments

 

So I accidentally ... a service.

23 June 2014 21:50

This post is partly introspection, and partly advertising. Skip it if either annoys you.

Back in February I was thinking about what to do with myself. I had two main options: "Get a job", and "Start a service". Because I didn't have any ideas that seemed terribly interesting I asked people what they would pay for.

There were several replies, largely based around "infrastructure hosting" (which was pretty much a 50/50 split between "DNS hosting", and project hosting with something like trac, redmine, or similar).

At the time DNS seemed hard, and later I discovered there were already at least two well-regarded people doing DNS things, with revision control.

So I shelved the idea, after reaching out to both companies to no avail. (This later led to drama, but we'll pretend it didn't.) Ultimately I sought and acquired gainful employment.

Then, during the course of my gainful employment I was exposed to Amazon's Route53 service. It looked like I was going to be doing many things with this, so I wanted to understand it more thoroughly than I did. That led to the creation of a Dynamic-DNS service - which seemed to be about the simplest thing you could do with the ability to programmatically add/edit/delete DNS records via an API.
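(To give a flavour of that API: an update via Amazon's command-line client looks roughly like this - the zone ID, name, and address are all invented:)

# upsert an A-record; the essence of a dynamic-DNS update against Route53
aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
  --change-batch '{"Changes": [{"Action": "UPSERT",
    "ResourceRecordSet": {"Name": "home.example.com.", "Type": "A",
     "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.5"}]}}]}'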

As this was a random hack put together over the course of a couple of nights I didn't really expect it to be any more popular than anything else I'd deployed, but with the sudden influx of users I wanted to see if I could charge people. Ultimately many people pretended they'd pay, but nobody actually committed. So on that basis I released the source code and decided to ignore the two main missing features - lack of MX records, and lack of sub-sub-domains. (Isn't it amazing how people who claim they want "open source" so frequently mean they want something with zero cost, which they can run, and never modify or contribute toward?)

The experience of doing that though, and the reminder of the popularity of the original idea, made me think that I could do a useful job with Git + DNS combined. That led to DNS-API - GitHub-based DNS hosting.

It is early days, but it looks like I have a few users, and if I can get more then I'll be happy.

So if you want to store your DNS records in a (public) GitHub repository, and get them hosted on geographically diverse anycasted servers .. well you know where to go: GitHub-based DNS hosting.

| No comments

 

How could you rationally fork Debian?

9 November 2014 21:50

The topic of Debian forks has come up a lot recently, and as time goes on I've actually started considering the matter seriously: How would you fork Debian?

The biggest stumbling block is that the Debian distribution contains thousands of packages, which are maintained by thousands of developers. A small team has virtually no hope of keeping up to date, importing changes, dealing with bug-reports, etc. Instead you have to pick your battle and decide what you care about.

This is why Ubuntu split things into "main" and "universe". Because this way they didn't have to deal with bug reports - instead they could just say "Try again in six months. Stuff from that repository isn't supported. Sorry!"

So if you were going to split the Debian project into "supported" and "unsupported" what would you use as the dividing line? I think the only sensible approach would be:

  • Base + Server stuff.
  • The rest.

On that basis you'd immediately drop the support burden of GNOME, KDE, Firefox, Xine, etc. All the big, complex, and user-friendly stuff would just get thrown away. What you'd end up with would be a Debian-Server fork, or derivative.

Things you'd package and care about would include:

  • The base system.
  • The kernel.
  • SSHD.
  • Apache / Nginx / thttpd / lighttpd / etc.
  • PHP / Perl / Ruby / Python / etc.
  • Jabberd / ircd / rsync / etc.
  • MySQL / PostgreSQL / Redis / MariaDB / etc.

Would that be a useful split? I suspect it would. It would also be manageable by a reasonably small team.

That split would also mean if you were keen on dropping any particular init-system you'd not have an unduly difficult job - your server wouldn't be running GNOME, for example.

Of course if you're thinking of integrating a kernel and server-only stuff then you might instead prefer a BSD-based distribution. But if you did that you'd miss out on Docker. Hrm.

| 10 comments

 

Apologies for the blog-churn.

19 February 2017 21:50

I've been tweaking my blog a little over the past few days, getting ready for a new release of the chronicle blog compiler (github).

During the course of that I rewrote all the posts to have 100% lower-case file-paths. Redirection-pages have been auto-generated for each page which was previously mixed-case, but unfortunately that meant the RSS feed updated unnecessarily:

That triggered a lot of spamming, as the URLs would have shown up as being new/unread/distinct.
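(For the curious, generating those redirection-pages is only a few lines of shell - a sketch, assuming a list of the old mixed-case paths in a file:)

# drop a meta-refresh stub at each old path, pointing at its lowercased name
while read old; do
    new=$(echo "$old" | tr '[:upper:]' '[:lower:]')
    printf '<meta http-equiv="refresh" content="0; url=/%s">\n' "$new" > "output/$old"
done < mixed-case-paths.txt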

| 3 comments