1 April 2008 21:50
There's a tagging system which is starting to creak under the sheer number of different tags, and several back-end parts of the site make use of AJAX calls.
Most of the script lives in a single file, common.js, which I cobbled together via a process of trial and error, augmented with a little copy & paste coding.
It works. But I knew I could do better...
This was my first attempt to make a site truly dynamic and "pretty". It has succeeded in that respect, although the lack of members makes the site itself essentially a failure.
This library made it almost too easy to add flash. I liked it a lot.
Having said that, though, the sheer scope of the library, and the way it didn't fit the way that I coded, made it painful to use at times.
It works, and it works well. Like it? Yes. Love it? No.
Most of the code here is the simple kind, reverting to the way I worked on the Debian Administration site; we're talking about basic effects such as:
- show/hide a div
- make an AJAX request every now and again
- do a bit of auto-completion
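Of those three, the auto-completion is the only one whose logic can be shown outside a browser; a minimal matcher might look like this (the function and names are hypothetical, nothing from the site's actual common.js):

```javascript
// Minimal auto-completion matcher: given the text typed so far,
// return the known tags that start with it (case-insensitive).
function completeTag(prefix, knownTags) {
  var p = prefix.toLowerCase();
  return knownTags.filter(function (tag) {
    return tag.toLowerCase().indexOf(p) === 0;
  });
}

// e.g. completing "de" against a small tag list:
console.log(completeTag("de", ["debian", "design", "javascript", "jquery"]));
// → [ 'debian', 'design' ]
```

Wiring that up to a text input's keyup event and a dropdown is the browser-specific part that jQuery makes painless.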
To get more of a feel for what's out there I wrote this initially with my own code, then later migrated it to jQuery.
Quite frankly jQuery rocks. The way it works is a little strange at first, but it is so natural after a while. As an example:
// find the div called "foo" - hide it.
$("#foo").hide();
I'm liking this library a lot recently, but only time will tell if I use it more.
In conclusion I filed #473125: ITP jQuery, failing to notice the ITP already present.
ObQuote: Stand By Me
7 May 2008 21:50
Well a brief post about what I've been up to over the past few days.
An alioth project was created for the maintenance of the bash-completion package. I spent about 40 minutes yesterday committing fixes for some of the low-hanging fruit.
I suspect I'll do a little more of that, and then back off. I only started looking at the package because there was a request-for-help bug filed against it. It works well enough for me with some small local additions.
The big decision for the bash-completion project is how to go forward from the current situation, where the project is basically a large monolithic script. Ideally the openssh-client package should contain the completion for ssh, scp, etc.
Making that transition will be hard. But interesting.
In other news I submitted a couple of "make-work" patches to the QPSMTPD SMTP proxy - just tidying up minor cosmetic issues. I'm starting to get to the point where I understand the internals pretty well now, which is a good thing!
I love working on QPSMTPD. It rocks. It is basically the core of my antispam service and a real delight to code for. I cannot emphasise that enough - some projects are just so obviously coded properly. Hard to replicate, easy to recognise...
I've been working on my own pre-connection system which is a little more specialised; making use of the Class::Pluggable library - packaged for Debian by Sarah.
(The world -> Pre-Connection/Load-Balancing Proxy -> QPSMTPD -> Exim4. No fragility there then ;)
I still need to sit down and work through the Apache2 bugs I identified as being simple to fix. I've got it building from SVN now though; so progress is being made!
Finally this weekend I need to sit down and find the time to answer Steve's "Team Questionnaire". Leave it any longer and it'll never get answered. Sigh.
ObQuote: Shooting Fish
18 July 2008 21:50
ObQuote: Short Circuit.
27 November 2009 21:50
A couple of days ago I was lamenting the state of webstats, although I was a little vague as to my purpose. Specifically I was wanting to find out about the screen resolutions and user-agents viewing a couple of sites.
Of course there are drawbacks:
- You cannot capture everything; e.g. the HTTP status code isn't available.
The client-side script:
- finds the screen resolution
- finds the HTTP referer
- finds the current page's title
- then submits all of that to a server-side collection script, via a 1x1-pixel IMG.
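The collection steps boil down to building a query string and pointing an invisible image at it; roughly like this (the function name and parameter keys are my own invention, not the real script's):

```javascript
// Build the tracking-pixel URL from the values collected above.
// `collector` is the server-side script's URL; `data` holds the
// per-page values (keys here are illustrative).
function pixelUrl(collector, data) {
  var parts = [];
  for (var key in data) {
    parts.push(encodeURIComponent(key) + "=" + encodeURIComponent(data[key]));
  }
  return collector + "?" + parts.join("&");
}

// In a browser the submission itself would then be:
//   new Image(1, 1).src = pixelUrl("/stats.cgi", { ... });
console.log(pixelUrl("/stats.cgi", {
  res: "1280x1024",
  ref: "http://example.com/",
  title: "Example Page"
}));
```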
The script that receives the data writes it out to a small per-domain SQLite database, which I can then use to generate prettiness. However I suck at being pretty, in most ways, so I've only got something functional.
All of this is dynamic and most of the data is anchored to "today", as that's proof of concept enough. Were piwik not written in vile PHP I'd use that - I don't see anything similar out there which is written in Perl.
The big decision is now "Keep it dynamic" vs. "Output static pages". (vs. call off the experiment now I know that I'm safe to assume "big resolutions").
(Naming software is hard. Recent stuff I've done has had an skx prefix, primarily for google-juice. Randomly I notice that if you search for my personal site on Google's UK engine I come top. Cool.)
ObSubject: The Bourne Identity
18 April 2010 21:50
This past week has had a couple of minor software releases:
I made a new release which improves support for foreign languages; so dates can be internationalised, etc.
The online demos now include one with French month names.
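Month-name localisation of this kind only takes a few lines; a sketch (the data structure and function are illustrative, not the library's actual API):

```javascript
// Localised month names, keyed by language code.
var months = {
  en: ["January", "February", "March", "April", "May", "June",
       "July", "August", "September", "October", "November", "December"],
  fr: ["janvier", "février", "mars", "avril", "mai", "juin",
       "juillet", "août", "septembre", "octobre", "novembre", "décembre"]
};

// Format a date as "<month name> <year>" in the given language.
function formatMonth(date, lang) {
  return months[lang][date.getMonth()] + " " + date.getFullYear();
}

console.log(formatMonth(new Date(2010, 3, 18), "fr")); // → avril 2010
```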
The perl-based sysadmin tool had a minor update earlier today, after it was pointed out that I didn't correctly cope with file content checks.
I'm still pretty pleased with the way this works out, even if it is intentionally simple.
This is a simple bug-record-thingy which I was playing with recently, and I've now started using to record bugs in other projects.
I'll pretend it's a fancy distributed bug-tracker, but actually it isn't. It's nothing more than a bunch of text-files associated with a project, which have sufficiently random names that collisions are unlikely, and which thus become semi-distributed-friendly.
I need to either come up with my own which looks like galleriffic, or port the thumbnail bits over.
(I'm currently using a slightly modified version of galleriffic for my people-shots.)
30 August 2010 21:50
I've been updating this blog to be jQuery-powered. This post is a test.
I'll need to check but I believe I'm almost 100% jQuery-powered now.
It is a well-known fact that AJAX requests are only allowed to talk to the same origin as the requesting page. To pull content from other sites users are often encouraged to write a simple proxy:
- http://example.com/proxy/http://example.com allows arbitrary fetching.
Simples? No. Too many people write simple proxies which use
PHP's curl function, or something similar, with little restriction on either the protocol or the destination of the requested resource.
Consider the following requests:
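These illustrative requests (my own examples) show the two ways a naive proxy goes wrong:

```javascript
// Requests a naive proxy will happily fetch on an attacker's behalf
// (each would be appended to the /proxy/ URL shown above):
var targets = [
  "file:///etc/passwd",              // read local files
  "http://localhost/server-status",  // probe internal services
  "ftp://internal.example/"          // speak non-HTTP protocols
];

// A first, insufficient defence: restrict the protocol.
function allowedTarget(url) {
  return /^https?:\/\//i.test(url);
}

console.log(targets.map(allowedTarget)); // → [ false, true, false ]
```

Note the localhost request still passes the protocol check: as the post says, both the protocol and the destination of the requested resource need restricting.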
ObQuote: "You're asking me out? That's so cute! What's your name again?" - 10 Things I Hate About You.
2 February 2014 21:50
Last night I was up again; it's really hard to sleep when you have a bad cold.
I decided to do something fun, and allow my tweaking guide to accept comments.
Like many of my sites this is 100% static, and generated by templer, so comments are "hard".
I've seen a few people try to rewrite disqus as a general-purpose solution, and I like that idea, because I don't trust that particular service.
I wasn't so ambitious though, I just hacked up a quick sinatra server:
- "GET /comments/ID"
- Retrieves the comments on the specified identifier as a JSON array of comment-hashes.
- "POST /comments/ID"
- Appends the submitted comment to a redis set.
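The two endpoints are simple enough to model with an in-memory stand-in for the redis set (this sketch is mine, not the actual sinatra code):

```javascript
// In-memory stand-in for the per-identifier redis set.
var store = {};

// POST /comments/ID - append the submitted comment.
function postComment(id, comment) {
  (store[id] = store[id] || []).push(comment);
}

// GET /comments/ID - the comments as a JSON array of comment-hashes.
function getComments(id) {
  return JSON.stringify(store[id] || []);
}

postComment("tweaking-guide", { author: "anon", body: "Nice page!" });
console.log(getComments("tweaking-guide"));
// → [{"author":"anon","body":"Nice page!"}]
```

The client-side piece is then just a JSONP fetch of GET /comments/ID plus a form POSTing to the same identifier.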
Perhaps something for the future.
In conclusion if people wish they can now leave feedback on most of the pages :)
6 February 2014 21:50
The simple external-comments code is now complete enough for me to stop poking it on a daily basis:
- Although the comments are styled minimally you can override that with CSS.
- Although the default "Add your reply" form is ugly you can replace it with your own.
- The reply-form may go above or below the comments.
- If you add an email field then your comments will include a gravatar link.
- Comments are assumed to be in markdown now.
- The comments may be retrieved in newest-first or oldest-first order.
- There's now a simple anti-spam plugin system present.
In an ideal world the client-side code should be a jQuery plugin, but I've not worked out how to make a static method (the JSONP callback) be a member of a jQuery plugin-object. So without that I have to re-pass the options around too many places, rather than making them a member of "this".
Meh, pull requests welcome for adding new storage back-ends (redis and sqlite are supported by default), and similarly for cleanups.