Project cleanup

Friday, 27 July 2018

For the past couple of days I've gone back over my golang projects, and updated each of them to have zero golint/govet warnings.

Nothing significant has changed, but it's nice to be cleaner.

I did publish a new project, which is a webmail client implemented in golang. Using it you can view the contents of a remote IMAP server in your browser:

  • View folders.
  • View messages.
  • View attachments.
  • etc.

The (huge) omission is the ability to reply to messages, compose new mails, or forward/delete messages. Still, as a "read-only webmail" client it does the job.

Not a bad hack, but I do have the problem that my own mailserver presents ~/Maildir over IMAP and I have ~1000 folders. Retrieving that list of folders is near-instant - but retrieving that list of folders and the unread-mail count of each folder takes over a minute.

For the moment I've just not handled folders-with-new-mail specially, but it is a glaring usability hole. There are solutions; the two most obvious are:

  • Use an AJAX call to get/update the unread-counts in the background.
    • Causes regressions as soon as you navigate to a new page though.
  • Have some kind of open proxy-process to maintain state and avoid accessing IMAP directly.
    • That complicates the design, but would allow "instant" fetches of "stuff".

Anyway check it out if you like. Bug reports welcome.

| No comments

 

Automating builds via CI

Sunday, 15 July 2018

Gitlab is something I've no interest in self-hosting, and Jenkins is an abomination, so I figured I'd write down what kind of things I did and explore options again. My use-cases are generally very simple:

  • I trigger some before-steps
    • Which mostly mean pulling a remote git repository to the local system.
  • I then start a build.
    • This means running debuild, hugo, or similar.
    • This happens in an isolated Docker container.
  • Once the build is complete I upload the results somewhere.
    • Either to a Debian package-pool, or to a remote host via rsync.

Running all these steps inside a container is well-understood, but I cheat. It is significantly easier to run the before/after steps on your host - because that way you don't need to setup SSH keys in your containers, and that is required when you clone (private) remote repositories, or wish to upload to hosts via rsync over SSH.

Anyway I wrote more thoughts here:

Then I made it work. So that was nice.

To complete the process I also implemented a simple HTTP-server which will list all available jobs, and allow you to launch one via a click. The output of the build is streamed to your client in real-time. I didn't bother persisting output and success/failure to a database, but that would be trivial enough.

It'll tide me over until I try ick again, anyway. :)

| No comments

 

Odroid-go initial impressions

Friday, 13 July 2018

Recently I came across a Hacker News post about the Odroid-go, which is a tiny hand-held console, somewhat resembling a Game Boy.

In terms of hardware this is powered by an ESP32 chip, which is something I'm familiar with due to my work on Arduino, ESP8266, and other simple hardware projects.

Anyway the Odroid device costs about $40, can be programmed via the Arduino IDE, and comes by default with a series of emulators for "retro" hardware:

  • Game Boy (GB)
  • Game Boy Color (GBC)
  • Game Gear (GG)
  • Nintendo Entertainment System (NES)
  • Sega Master System (SMS)

I figured it was cheap, and because of its programmable nature it might be fun to experiment with. Though in all honesty my intentions were mostly to play NES games upon it. The fact that it is programmable means I can pretend I'm doing more interesting things than I probably would be!

Most of the documentation is available on this wiki:

The Arduino setup describes how to install the libraries required to support the hardware, but then makes no mention of the board-support required to connect to an ESP32 device.

So to get a working setup you need to do two things:

  • Install the libraries & sample-code.
  • Install the board-support.

The first is documented, but here it is again:

 git clone https://github.com/hardkernel/ODROID-GO.git \
     ~/Arduino/libraries/ODROID-GO

The second is not documented. In short you need to download the esp32 hardware support via this incantation:

    mkdir -p ~/Arduino/hardware/espressif && \
     cd ~/Arduino/hardware/espressif && \
     git clone https://github.com/espressif/arduino-esp32.git esp32 && \
     cd esp32 && \
     git submodule update --init --recursive && \
     cd tools && \
     python2 get.py

(This assumes that you're using ~/Arduino for your code, but since almost everybody does ..)

Anyway the upshot of this should be that you can:

  • Choose "Tools | Boards | ODROID ESP32" to select the hardware to build for.
  • Click "File | Examples | ODROID-Go | ...." to load a sample project.
    • This can now be compiled and uploaded, but read on for the flashing-caveat.

Another gap in the instructions is that uploading projects fails, even when you choose the correct port (Tools | Ports | ttyUSB0). To correct this you need to put the device into flash-mode:

  • Turn it off.
  • Hold down "Volume".
  • Turn it on.
  • Click Upload in your Arduino IDE.

The end result is you'll have your own code running on the device, as per this video:

Enough said. Once you've done this, turning the device on will show your text scrolling around - we've overwritten the flash with our program. Oops. No more emulation for us!

The process of getting the emulators running, or updated, is in two-parts:

  • First of all the system firmware must be updated.
  • Secondly you need to update the actual emulators.

Confusingly both of these updates are called "firmware" in various (mixed) documentation and references. The main bootloader which updates the system at boot-time is downloaded from here:

To be honest I expect you only need to do this part once, unless you're uploading your own code to it. The firmware pretty much just boots, and if it finds a magic file on the SD-card it'll copy it into flash. Then it'll either execute this new code, or execute the old/pre-existing code if no update was found.

Anyway get the tarball, extract it, and edit the two files if your serial device is not /dev/ttyUSB0:

  • eraseflash.sh
  • flashall.sh

Once you've done that run them both, in order:

$ ./eraseflash.sh
$ ./flashall.sh

NOTE: Here again I had to turn the device off, hold down the volume button, turn it on, and only then execute ./eraseflash.sh. This puts the device into flashing-mode.

NOTE: Here again I had to turn the device off, hold down the volume button, turn it on, and only then execute ./flashall.sh. This puts the device into flashing-mode.

Anyway if that worked you'll see your blue LED flashing on the front of the device. It'll flash quickly to say "Hey there is no file on the SD-card". So we'll carry out the second step - Download and place firmware.bin into the root of your SD-card. This comes from here:

(firmware.bin contains the actual emulators.)

Insert the card. The contents of firmware.bin will be sucked into flash, and you're ready to go. Turn off your toy, remove the SD-card, load it up with games, and reinsert it.

Power on your toy and enjoy your games. Again a simple demo:

Games are laid out in a simple structure:

  ├── firmware.bin
  ├── odroid
  │   ├── data
  │   └── firmware
  └── roms
      ├── gb
      │   └── Tetris (World).gb
      ├── gbc
      ├── gg
      ├── nes
      │   ├── Lifeforce (U).nes
      │   ├── SMB-3.nes
      │   └── Super Bomberman 2 (E) [!].nes
      └── sms

You can find a template here:

| 6 comments.

 

Another golang port, this time a toy virtual machine.

Monday, 2 July 2018

I don't understand why my toy virtual machine has as much interest as it does. It is a simple project that compiles "assembly language" into a series of bytecodes, and then allows them to be executed.

Since I recently started messing around with interpreters more generally I figured I should revisit it. Initially I rewrote the part that interprets the bytecodes in golang, which was simple, but now I've rewritten the compiler too.

Much like the previous work with interpreters this uses a lexer and an evaluator to handle things properly - in the original implementation the compiler used a series of regular expressions to parse the input files. Oops.

Anyway the end result is that I can compile a source file to bytecodes, execute bytecodes, or do both at once:

I made a couple of minor tweaks in the port, because I wanted extra facilities. Rather than implement an opcode "STRING_LENGTH" I copied the idea of traps - so a program can call-back to the interpreter to run some operations:

int 0x00  -> Set register 0 with the length of the string in register 0.

int 0x01  -> Set register 0 with the contents of reading a string from the user

etc.

This notion of traps should allow complex operations to be implemented easily, in golang. I don't think I have the patience to do too much more, but it stands as a simple example of a "compiler" or an interpreter.

I think this program is the longest I've written. Remember how verbose assembly language is?

Otherwise: Helsinki Pride happened, Oiva learned to say his name (maybe?), and I finished reading all the James Bond novels (which were very different to the films, and have aged badly on the whole).

| No comments

 

Hosted monitoring

Tuesday, 26 June 2018

I don't run hosted monitoring as a service, I just happen to do some monitoring for a few (local) people, in exchange for money.

Setting up some new tests today I realised my monitoring software had an embarrassingly bad bug:

  • The IMAP probe would connect to an IMAP/IMAPS server.
  • Optionally it would login with a username & password.
    • Thus it could test the service was functional.

Unfortunately the IMAP probe would never logout after determining success/failure, which would lead to errors from the remote host after a few consecutive runs:

 dovecot: imap-login: Maximum number of connections from user+IP exceeded
          (mail_max_userip_connections=10)

Oops. Anyway that bug was fixed promptly once it manifested itself, and the probe also gained the ability to validate SMTP authentication as a result of a customer request.

Otherwise I think things have been mixed recently:

  • I updated the webserver of Charlie Stross
  • Did more geekery with hardware.
  • Had a fun time in a sauna, on a boat.
  • Reported yet another security issue in an online PDF generator/converter
    • If you read a remote URL and convert the contents to PDF then be damn sure you don't let people submit file:///etc/passwd.
    • I've talked about this previously.
  • Made plaited bread for the first time.
    • It didn't suck.

(Hosted monitoring is interesting; many people will give you ping/HTTP-fetch monitoring. If you want to remotely test your email service? Far far far fewer options. I guess firewalls get involved if you're testing self-hosted services, rather than cloud-based stuff. But still an interesting niche. Feel free to tell me your budget ;)

| No comments

 

Monkeying around with interpreters - Result

Monday, 18 June 2018

So I challenged myself to write a BASIC interpreter over the weekend; unfortunately I did not succeed.

What I did was take an existing monkey-repl and extend it with a series of changes to make sure that I understood all the various parts of the interpreter design.

Initially I was just making basic changes:

  • Added support for single-line comments.
    • For example "// This is a comment".
  • Added support for multi-line comments.
    • For example "/* This is a multi-line comment */".
  • Expand \n and \t in strings.
  • Allow the index operation to be applied to strings.
    • For example "Steve Kemp"[0] would result in S.
  • Added a type function.
    • For example "type(3.13)" would return "float".
    • For example "type(3)" would return "integer".
    • For example "type("Moi")" would return "string".

Once I did that I overhauled the built-in functions, allowing callers to register golang functions to make them available to their monkey-scripts. Using this I wrote a small "standard library" with some basic math, string, and file I/O functions.

The end result was that I could read files, line-by-line, or even just return an array of the lines in a file:

 // "wc -l /etc/passwd" - sorta
 let lines = file.lines( "/etc/passwd" );
 if ( lines ) {
    puts( "Read ", len(lines), " lines\n" )
 }

Adding file I/O was pretty neat, although I only did reading. Handling looping over a file-contents is a little verbose:

 // wc -c /etc/passwd, sorta.
 let handle = file.open("/etc/passwd");
 if ( handle < 0 ) {
   puts( "Failed to open file" )
 }

 let c = 0;       // count of characters
 let run = true;  // still reading?

 for( run == true ) {

    let r = read(handle);
    let l = len(r);
    if ( l > 0 ) {
        let c = c + l;
    }
    else {
        let run = false;
    }
 };

 puts( "Read " , c, " characters from file.\n" );
 file.close(handle);

This morning I added some code to interpolate hash-values into a string:

 // Hash we'll interpolate from
 let data = { "Name":"Steve", "Contact":"+358449...", "Age": 41 };

 // Expand the string using that hash
 let out = string.interpolate( "My name is ${Name}, I am ${Age}", data );

 // Show it worked
 puts(out + "\n");

Finally I added some type-conversions, allowing strings/floats to be converted to integers, and allowing other values to be converted to strings. With the addition of a math.random function we then got:

 // math.random() returns a float between 0 and 1.
 let rand = math.random();

 // modify to make it from 1-10 & show it
 let val = int( rand * 10 ) + 1 ;
 puts( "math.random() -> ", val , "\n");

The only other significant change was the addition of a new form of function definition. Rather than defining functions like this:

 let hello = fn() { puts( "Hello, world\n" ) };

I updated things so that you could also define a function like this:

 function hello() { puts( "Hello, world\n" ) };

(The old form still works, but this is "clearer" in my eyes.)

Maybe next weekend I'll try some more BASIC work, though for the moment I think my monkeying around is done. The world doesn't need another scripting language, and as I mentioned there are a bunch of implementations of this around.

The new structure I made makes adding a real set of standard-libraries simple, and you could embed the project, but I'm struggling to think of why you would want to. (Though I guess you could pretend you're embedding something more stable than anko and not everybody loves javascript as a golang extension language.)

| No comments

 

Monkeying around with interpreters

Saturday, 16 June 2018

Recently I've had an overwhelming desire to write a BASIC interpreter. I can't think why, but the idea popped into my mind, and wouldn't go away.

So I challenged myself to spend the weekend looking at it.

Writing an interpreter is a pretty well-understood problem:

  • Parse the input into tokens, such as "LET", "GOTO", "INT:3".
    • This is called lexical analysis / lexing.
  • Take those tokens and build an abstract syntax tree.
    • The AST.
  • Walk the tree, evaluating as you go.
    • Hey ho.

Of course BASIC is annoying because a program is prefixed by line-numbers, for example:

 10 PRINT "HELLO, WORLD"
 20 GOTO 10

The naive way of approaching this is to repeat the whole process for each line. So a program would consist of an array of input-strings each line being treated independently.

Anyway reminding myself of all this fun took a few hours, and during the course of that time I came across Writing An Interpreter In Go, which seems to be well-regarded. The book walks you through creating an interpreter for a language called "Monkey".

I found a bunch of implementations, which were nice and clean. So to give myself something to do I started by adding a new built-in function rnd(). Then I tested this:

 let r = 0;
 let c = 0;

 for( r != 50 ) {
    let r = rnd();
    let c = c + 1;
 }

 puts "It took ";
 puts c;
 puts " attempts to find a random-number equalling 50!";

Unfortunately this crashed. It crashed inside the body of the loop, and it seemed that the projects I looked at each handled the let statement in a slightly-odd way - the statement wouldn't return a value, and would instead fall-through a case statement, hitting the next implementation.

For example in monkey-interpreter we see that happen in this section. (Notice how there's no return after the env.Set call?)

So I reported this as a meta-bug to the book's author. It might be that the master source is wrong, or it might be that unrelated individuals all made the same error - meaning the text is unclear.

Anyway the end result is I have a language, in go, that I think I understand and have been able to modify. Now I'll have to find some time to go back to BASIC-work.

I found a bunch of BASIC interpreters, including ubasic, but unfortunately almost all of them were missing many, many features - such as implementing operations like RND(), ABS(), COS().

Perhaps room for another interpreter after all!

| 4 comments.

 

A brief metric-update, and notes on golang-specific metrics

Saturday, 2 June 2018

My previous post briefly described the setup of system-metric collection. (At least the server-side setup required to receive the metrics submitted by various clients.)

When it came to the clients I was complaining that collectd was too heavyweight, as installing it pulled in a ton of packages. A kind Twitter user pointed out that you can get most of the stuff you need via the collectd-core package:

 # apt-get install collectd-core

I guess I should have known that! So for the moment that's what I'm using to submit metrics from my hosts. In the future I will spend more time investigating telegraf, and other "modern" solutions.

Still with collectd-core installed we've got the host-system metrics pretty well covered. Some other things I've put together also support metric-submission, so that's good.

I hacked up a quick package for automatically submitting metrics to a remote server, specifically for golang applications. To use it simply add an import to your golang application:

  import (
    ..
    _ "github.com/skx/golang-metrics"
    ..
  )

Add the import, rebuild your application, and that's it! Configuration is carried out solely via environmental variables, and the only one you need to specify is the end-point for your metrics host:

$ METRICS=metrics.example.com:2003 ./foo

Now your application will be running as usual and will also be submitting metrics to your central host every 10 seconds or so. Metrics include the number of running goroutines, application-uptime, and memory/cpu stats.

I've added a JSON-file to import as a grafana dashboard, and you can see an example of what it looks like there too:

| No comments

 

On collecting metrics

Saturday, 26 May 2018

Here are some brief notes about metric-collection, for my own reference.

Collecting server and service metrics is a good thing because it lets you spot degrading performance, and see the effect of any improvements you've made.

Of course it is hard to know what metrics you might need in advance, so the common approach is to measure everything, and the most common way to do that is via collectd.

To collect/store metrics the most common approach is to use carbon and graphite-web. I tend to avoid that as being a little more heavyweight than I'd prefer. Instead I'm all about the modern alternatives:

  • Collect metrics via go-carbon
    • This will listen on :2003 and write metrics beneath /srv/metrics
  • Export the metrics via carbonapi
    • This will talk to the go-carbon instance and export the metrics in a compatible fashion to what carbon would have done.
  • Finally you can view your metrics via grafana
    • This lets you make pretty graphs & dashboards.

Configuring all this is pretty simple. Install go-carbon, and give it a path to write data to (/srv/metrics in my world). Enable the receiver on :2003. Enable the carbonserver and make it bind to 127.0.0.1:8888.

Now configure the carbonapi with the backend of the server above:

  # Listen address, should always include hostname or ip address and a port.
  listen: "localhost:8080"

  # "http://host:port" array of instances of carbonserver stores
  # This is the *ONLY* config element in this section that MUST be specified.
  backends:
    - "http://127.0.0.1:8888"

And finally you can add a data-source of 127.0.0.1:8080 to grafana, and graph away.

The only part that I'm disliking at the moment is the sheer size of collectd. Getting metrics from your servers (uptime, I/O performance, etc.) is very useful, but it feels like installing 10MB of software to do that is a bit excessive.

I'm sure there must be more lightweight systems out there for collecting "everything". On the other hand I've added metrics exporting to my puppet-master, and similar tools very easily so I have lightweight support for that in the tools themselves.

I have had a good look at metricsd which is exactly the kind of tool I was looking for, but I've not searched too far afield for other alternatives and choices just yet.

I should write more about application-specific metrics in the future, because I've quizzed a few people recently:

  • What's the average response-time of your application? What's the effectiveness of your (gzip) compression?
    • You don't know?
  • What was the quietest time over the past 24 hours for your server?
    • You don't know?
  • What proportion of your incoming HTTP-requests were served over plain HTTP?
    • Do you monitor HTTP status-codes? Can you see how many times people were served redirects to the SSL version of your site? Will using HSTS save you bandwidth, and if so how much?

Fun times. (Terrible pun is terrible, but I was talking to a guy called Tim. So I could have written "Fun Tims".)

| 1 comment.

 

This month has been mostly golang-based

Monday, 21 May 2018

This month has mostly been about golang. I've continued work on the protocol-tester that I recently introduced:

This has turned into a fun project, and now all my monitoring is done with it. I've simplified the operation, such that everything uses Redis for storage, and there are now new protocol-testers for finger, nntp, and more.

Sample tests are as basic as this:

  mail.steve.org.uk must run smtp
  mail.steve.org.uk must run smtp with port 587
  mail.steve.org.uk must run imaps
  https://webmail.steve.org.uk/ must run http with content 'Prayer Webmail service'

Results are stored in a redis-queue, where they can be picked off and announced to humans via a small daemon. In my case alerts are routed to a central host, via HTTP POSTs, and eventually reach me via the Pushover service.

Beyond the basic network testing though I've also reworked a bunch of code - so the markdown sharing site is now golang powered, rather than running on the previous perl-based code.

As a result of this rewrite, and a little more care, I now score 99/100 + 100/100 on Google's pagespeed testing service. A few more of my sites do the same now, thanks to inline-CSS, inline-JS, etc. Nothing I couldn't have done before, but this was a good moment to attack it.

Finally my "silly" Linux security module, for letting user-space decide if binaries should be executed, can-exec has been forward-ported to v4.16.17. No significant changes.

Over the coming weeks I'll be trying to move more stuff into the cloud, rather than self-hosting. I'm doing a lot of trial-and-error at the moment with Lambdas, containers, and dynamic-routing to that end.

Interesting times.

| No comments

 
