
 

Entries posted in June 2014

Reflections on Lua-based email clients

2 June 2014 21:50

Until recently I was very happy with my console mail client, Lumail, thinking I'd written it in a flexible manner, with lots of Lua-callable primitives.

Now I'm beginning to suspect that I might have gone down the wrong path at some point.

The user interface, at startup, consists of a list of mailboxes. The intention is that you select a mailbox and open it, which takes you to a list of messages. (There is a cool and simple-to-use option to open the union of multiple mailboxes, which is something I use daily.)

Now the list of mailboxes is sorted alphabetically, so the user interface looks something like this:

[Screenshot: the mailbox-list UI at startup]

Now the issue that triggered my rethink:

  • Would it be possible for Lua to sort the maildir list, so that I could arbitrarily have the Maildir .people.katy at the top of the list, always?

Sure, you think, it's just a list of strings. You could pass an array to a Lua on_sort_maildirs function, and then use the returned array/table as the display order. Simple.

Simple, until you realize the user might want to do more than operate solely on the list of strings. Perhaps they wish to put folders with unread messages at the top, at which point you need a "count_unread( maildir )" function. (Which does exist.)

Anyway, the realisation I had is that the CMaildir object on the C++ side isn't exposed to the Lua side - so the (useful) member functions which should be exported/visible are not.

Really what I'm trying to say is that I've implemented and exported useful Lua primitives, but many more parts of the implementation could usefully be exported and are not. That comes from the choice I made to expose "things" rather than "objects"; if I'd exposed objects right from the start I'd be in a better place.

Oh well.

I continued to toy with a basic GUI mail-client last week, but I've pretty much written that off as a useful way to spend my time. For the moment I'll leave email alone; I've done enough, and despite niggles what I have is absolutely the best mail client for me.

(It is a shame that Mutt is so heavyweight and hard to deal with, and that notmuch never really took off.)

| 3 comments

 

setuid/setgid binaries in Debian's Wheezy release?

7 June 2014 21:50

If anybody has access to a complete mirror of the Debian Wheezy release, and is willing to share a list of all its setuid/setgid binaries, that would be greatly appreciated.

It doesn't seem to be something you can find online, so you need to manually unpack each .deb file and look at the permissions.
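For anyone who does have a pile of packages locally the inspection step is easy enough to script. Here's a rough sketch in Python - it just shells out to dpkg-deb and checks the mode column of the contents listing; the pool path is purely illustrative:

#!/usr/bin/env python3
# Rough sketch: walk a pool of .deb files and report entries whose mode
# string contains a setuid/setgid bit ("s"/"S"), using the tar-style
# listing printed by "dpkg-deb -c".  The pool path is illustrative.
import os
import subprocess
import sys

POOL = sys.argv[1] if len(sys.argv) > 1 else "/srv/mirror/pool"

for dirpath, _dirs, files in os.walk(POOL):
    for name in sorted(files):
        if not name.endswith(".deb"):
            continue
        deb = os.path.join(dirpath, name)
        listing = subprocess.run(["dpkg-deb", "-c", deb],
                                 capture_output=True, text=True, check=True)
        for line in listing.stdout.splitlines():
            fields = line.split()
            if not fields:
                continue
            mode = fields[0]              # e.g. "-rwsr-xr-x"
            if "s" in mode.lower():       # setuid and/or setgid set
                print(f"{name}: {line}")

Pointed at a complete mirror that would produce exactly the list I'm after.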

I don't have access to a (complete) local mirror, and so I cannot easily build such a list, unless I go to eBay and buy a random DVD-archive.

This list would be useful for folk wanting to direct their audits ..

| 13 comments

 

I'm still not a developer, but ..

10 June 2014 21:50

Some coding updates:

My templer static site generator has now been uploaded to CPAN, and is available as App::Templer.

I've converted most of my Dockerfiles to work with docker 1.0.0, which is nice.

I also hacked up a fun DNS-server for sharing JSON-encoded data within a LAN or other environment.

Finally I updated the blogspam-detecting site a little, on the back-end. The code is now running inside Docker containers which means I can redeploy more easily in the future.

My blog post about looking for a job received some attention via a Reddit advert I posted to /r/edinburgh + /r/sysadmin, but thus far has mostly resulted in people wanting me to write code for them .. which is frustrating.

For the moment I'm working on a fun challenge involving (email) spam-detection. That takes me back.

| 2 comments

 

I did get a job

10 June 2014 21:50

In my previous blog-post I mentioned, briefly, that I'd posted a couple of adverts on Reddit looking for work.

To give more detail I did three things:

  • I made a brief blog-post on the Debian-Administration website, highlighting what I thought were interesting/useful/expected skills and experience I have.
  • I updated the site to give that link a little prominence, because .. I can.
  • I paid Reddit $10 to advertise links to that blog-post. ($5 being the minimum you could spend on any targeted advert.)

The advertisement was set to be shown in /r/edinburgh (where I live), and /r/sysadmin (where I thought some people might look if they were struggling for help).

The advertising on Reddit was painless to set up, and the traffic stats were interesting, but even though this worked out well I'm a little loath to repeat the process - since the "non-sterling transaction fee" from my bank effectively doubled my budget.

I received a few (private) emails and comments, along with the expected grammar corrections. The end result was that I received contact from an American company founder who seemed interested.

He let me write some code to solve a fun problem, and appeared to enjoy the code I sent (Ruby code for dealing with (exim) email spam; that's as specific as I will be). The end result was a three-month contract, which we obviously hope will lead to more permanent work.

Anyway I thought this was an atypical route to finding work, and it was about a million times nicer than working with recruiters, so .. consider this documentation!

In other news it is now 10pm and I need to go to the gym and pub, in that order.

| 2 comments

 

Amazon's Route53 API is nice.

13 June 2014 21:50

It is unfortunate that some of the client libraries are inefficient, but I'm enjoying my exposure to Amazon's Route53 API.

(This is unrelated to the previous post(s) about operating a DNS service..)

For an idea of scale I host just over 170 zones at the moment.

For the first 25 zones Amazon charges $0.50 per zone each month, then $0.10 per zone after that. Which would mean:

 25 * $0.50 = $12.50
150 * $0.10 = $15.00
              ------
              $27.50

That seems reasonably .. reasonable.
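Purely as a sanity-check, here's that tariff as a throwaway function - a sketch covering only the hosted-zone charge, ignoring the separate per-query pricing:

def route53_zone_cost(zones, first_tier=25, first_rate=0.50, rest_rate=0.10):
    """Monthly hosted-zone charge: $0.50 each for the first 25 zones,
    $0.10 for each zone after that.  Query charges not included."""
    return min(zones, first_tier) * first_rate + max(zones - first_tier, 0) * rest_rate

print(route53_zone_cost(175))   # 12.5 + 15.0 -> 27.5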

| 4 comments

 

So here's a proof of concept

14 June 2014 21:50

The simplest possible DNS-based service which I could write to explore Amazon's DNS offering has to be dynamic DNS, so I set one up..

The record skx.dhcp.io can be updated to point to your current IP by running:

curl http://dhcp.io/set/efa6961c-f3dd-11e3-955b-00163e0816a2

Or to a fixed IP:

curl http://dhcp.io/set/efa6961c-f3dd-11e3-955b-00163e0816a2/1.2.3.4

The code is modular and pretty nice, and the Amazon integration is simple.
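The Route53 call behind that /set endpoint is roughly the following shape - a sketch in Python/boto3 rather than the code the site actually runs, with an invented hosted-zone ID:

# Sketch of the Route53 side of a dynamic-DNS update: map the hostname to
# the caller's IP with an UPSERT, so one call both creates and updates the
# record.  The hosted-zone ID below is invented.
import boto3

route53 = boto3.client("route53")

def set_record(name, ip, zone_id="Z0EXAMPLE1234", ttl=60):
    rtype = "AAAA" if ":" in ip else "A"   # IPv6 callers get an AAAA record
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Comment": "dynamic DNS update",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": rtype,
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": ip}],
                },
            }],
        },
    )

# set_record("skx.dhcp.io.", "1.2.3.4")

Using UPSERT keeps the web-side trivial, since there is no need to check whether the record already exists before writing it.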

(Although I need to write code to allow users to sign-up. I'll do that if it seems useful; I suspect there are already enough free ddns providers out there - though I might be the first to support IPv6 when I commit my next chunk of work!)

| 7 comments

 

DNS is now resolved

17 June 2014 21:50

I used to work for Bytemark as a sysadmin, sometimes handling support requests from clients and their end-users.

One thing that never got old was marking DNS-related tickets as "resolved", or managing to slip that word into replies.

Similarly, being married to a Finnish woman, you'd be amazed how often "Finnish" and "finished" become interchangeable.

Anyway that's enough pun-discussion.

Over the past few days I've, obviously, been playing with DNS. There are two public results:

DHCP.io

This is my simple Dynamic-DNS host, which has now picked up a few users.

I posted a token in the previous entry, and I've had fun seeing how people keep changing the IP address of the host skx.dhcp.io. I should revoke the token and actually claim the name - but to be honest it is more fun seeing it update.

What is most interesting is that I can see it being used for real - I see from the access logs some people have actually scheduled curl to run on an hourly basis. Neat.

DNS-API.org

This is a simple lookup utility, allowing DNS queries to be made via simple web requests.

Of the two sites this is perhaps the more useful, but again I expect it isn't unique.

That about wraps things up for the moment. It may well be the case that in the future there is some Git + DNS + Amazon integration for DNS-hosting, but I'm going to leave it alone for the moment.

Despite writing about DNS several times in the past the only reason this flurry of activity arose is that I'm hacking some Amazon & CPanel integration at the moment - and I wanted to experiment with Amazon's API some more.

So, we'll mark this activity as resolved, and I shall go make some coffee now this entry is Finnish.

ObRandomUpdate: At least there was a productive side-effect here - I created/uploaded to CPAN CGI::Application::Plugin::Throttle.

| 5 comments

 

The perils of the cloud..

20 June 2014 21:50

Recently two companies have suffered problems due to compromised AWS credentials:

  • Code Spaces
    • The company has effectively folded. Their AWS account was compromised, and all their data and backups were deleted.
  • Bonsai
    • Within two minutes all their instances were terminated.
    • This is still live - watch updates of the recovery process.

I'm just about to commit to using Amazon for hosting DNS for paying customers, so this is the kind of thing that makes me paranoid.

I'll be storing DNS-data in Git, and if the zones were nuked on the Amazon-side I could re-upload them, but users would be dead regardless - because they'd need to update the nameservers in whois before the re-uploaded data would be useful.

I suspect I need to upload to two DNS providers, to get more redundancy.

Currently I have a working system which allows me to push DNS records to a Git repository, and that seamlessly triggers a DNS update (i.e. a webhook triggered by github/bitbucket/whatever).

Before I publish anything I need to write more code, more documentation, and agree on pricing details. Then I'll setup a landing-page at http://dns-api.com/.

I've been challenged to find paying customers before launching, and thus far have two, which is positive.

The DHCP.io site has now been freed. I'm no longer going to try to commercialize it, instead I will only offer the Git-based product as a commercial service. On that basis I upped the service so users could manage up to five names per account, more if you mail me privately and beg ;)

(ObRandom: Google does hosted DNS with an API. They're expensive. I'm surprised I'd not heard of them doing this.)

| No comments

 

So I accidentally ... a service.

23 June 2014 21:50

This post is partly introspection, and partly advertising. Skip it if either annoys you.

Back in February I was thinking about what to do with myself. I had two main options "Get a job", and "Start a service". Because I didn't have any ideas that seemed terribly interesting I asked people what they would pay for.

There were several replies, largely based around "infrastructure hosting" (which was pretty much a 50/50 split between DNS hosting, and project hosting with something like trac, redmine, or similar).

At the time DNS seemed hard, and later I discovered there were already at least two well-regarded people doing DNS things, with revision control.

So I shelved the idea, after reaching out to both companies to no avail. (This later led to drama, but we'll pretend it didn't.) Ultimately I sought and acquired gainful employment.

Then, during the course of my gainful employment, I was exposed to Amazon's Route53 service. It looked like I was going to be doing many things with this, so I wanted to understand it more thoroughly than I did. That led to the creation of a Dynamic-DNS service - which seemed to be about the simplest thing you could do with the ability to programmatically add/edit/delete DNS records via an API.

As this was a random hack put together over the course of a couple of nights I didn't really expect it to be any more popular than anything else I'd deployed, and with the sudden influx of users I wanted to see if I could charge people. Ultimately many people pretended they'd pay, but nobody actually committed, so on that basis I released the source code and decided to ignore the two main missing features - lack of MX records, and lack of sub-sub-domains. (Isn't it amazing how people who claim they want "open source" so frequently mean they want something with zero cost, which they can run, and never modify or contribute toward?)

The experience of doing that, though, and the reminder of the popularity of the original idea, made me think that I could do a useful job with Git + DNS combined. That led to DNS-API - GitHub-based DNS hosting.

It is early days, but it looks like I have a few users, and if I can get more then I'll be happy.

So if you want to store your DNS records in a (public) GitHub repository, and get them hosted on geographically diverse anycasted servers .. well, you know where to go: GitHub-based DNS hosting.

| No comments

 

Slowly releasing bits of code

29 June 2014 21:50

As previously mentioned I've been working on git-based DNS hosting for a while now.

The site was launched for real last Sunday, and since that time I've received enough paying customers to cover all the costs, which is nice.

Today the site was "relaunched", by which I mean almost nothing has changed, except that the site looks completely different - the templates/pages/content are all wrapped up in Bootstrap now, rather than my ropy home-made table-based layout.

This week coming I'll be slowly making some of the implementation available as free-software, having made a start by publishing CGI::Application::Plugin::AB - a module designed for very simple A/B testing.
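The plugin's interface is its own affair, but the idea behind simple A/B testing is tiny: derive a stable variant from something which identifies the visitor, so that the same visitor always sees the same version of a page. A sketch of that idea in Python (nothing to do with the plugin's actual API):

# Minimal A/B bucketing: hash a stable visitor identifier so that the same
# visitor always lands in the same bucket.  Unrelated to the plugin's API.
import hashlib

def ab_variant(visitor_id, test_name="pricing-page"):
    digest = hashlib.sha1(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(ab_variant("203.0.113.7"))   # always the same answer for this visitor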

I don't think there will be too much interest in most of the code, but one piece I'm reasonably happy with is my webhook-receiver.

Webhooks are at the core of how my service is implemented:

  • You create a repository to host your DNS records.
  • You configure a webhook to be invoked when pushing to that repository.
  • The webhook will then receive updates, and magically update your DNS.

Because the webhook must respond quickly - otherwise github/bitbucket/whatever will believe you've timed out and report an error - you can't do much work on the back-end.

Instead I've written a component that listens for incoming HTTP POSTS, parses the body to determine which repository that came from, and then enqueues the data for later processing.
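An illustrative version of that receiver, in Python with Flask and redis-py rather than the code the service actually runs; the queue key is made up:

# Illustrative webhook receiver: accept the POST, work out which repository
# it came from, enqueue the payload in Redis and return straight away.
import json

import redis
from flask import Flask, request

app = Flask(__name__)
queue = redis.StrictRedis(host="localhost", port=6379)

@app.route("/webhook", methods=["POST"])
def webhook():
    payload = request.get_json(force=True, silent=True) or {}
    # GitHub and Bitbucket both send a "repository" object, though the
    # fields inside it differ; normalise to a single name here.
    repo = payload.get("repository", {})
    name = repo.get("full_name") or repo.get("name") or "unknown"
    queue.rpush("dns:jobs", json.dumps({"repo": name, "payload": payload}))
    return "queued\n", 200

if __name__ == "__main__":
    app.run(port=8000)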

A different process will be constantly polling the job-queue (which in my case is Redis, but could be beanstalkd, or similar. Hell even use MySQL if you're a masochist) and actually do the necessary magic.
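A matching consumer is equally small - again just a sketch, with do_dns_update standing in for the real zone-rebuilding logic:

# Illustrative worker: block until a job arrives, then do the slow part.
import json

import redis

queue = redis.StrictRedis(host="localhost", port=6379)

def do_dns_update(repo, payload):
    print(f"would rebuild DNS for {repo}")   # stand-in for the real work

while True:
    _key, raw = queue.blpop("dns:jobs")      # blocks until a job is queued
    job = json.loads(raw)
    do_dns_update(job["repo"], job["payload"])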

Most of the webhook processor is trivial, but handling different services (github, bitbucket, etc) while pretending they're all the same is hard. So my little toy-service will be released next week and might be useful to others.

ObRandom: New pricing unveiled for users of a single zone - which is a case I never imagined I'd need to cover. Yay for A/B testing :)

| 4 comments