Entries tagged redis

Cover your heart! Cover your heart!

13 January 2010 21:50

I've long been a fan of Danga's Memcached; I first wrote about it back in 2005.

Recently I've been looking at the persistent version, memcachedb, which is essentially a Berkeley Database using the memcached protocol as the transport layer. This gives you persistence for free, and means you can rely upon the content being present, rather than just hoping it will be there as an opportunistic cache.

However in this brave new world of NoSQL I've been mulling over different options for new projects / new toys. I have to say that memcachedb lost out for me, and that Redis is my preferred flavour of new hot.

Redis has a rich API allowing you to store interesting datatypes such as lists, and it allows both replication and distribution (neither of which I need just yet, but both are useful to have as options for the future).
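
For a flavour of what that API looks like, here's a minimal sketch using the Perl client - the key name is illustrative, and it assumes a server on localhost:

  use Redis;

  my $r = Redis->new();

  # A list makes a natural work-queue:
  $r->rpush( "queue:jobs", "resize-image-1" );
  $r->rpush( "queue:jobs", "resize-image-2" );

  # Workers pop jobs from the other end:
  my $job = $r->lpop( "queue:jobs" );    # "resize-image-1"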

The documentation is great, including a well-documented Twitter clone, and the facilities that are available are a step up from memcached - albeit obvious extensions which you'd expect if you were doing interesting things.

The redis-server package is already part of Debian GNU/Linux, and creating a .deb package of the Redis module was trivial.

So now that I'm familiar with it, what to use it for? Well I think the first thing will be the new 2010 Valentines site.

Over the years I've hosted several Livejournal Valentine sites, and this year is no different. (Why? I like matching people, and I like social stuff. This is why I did ctrl-alt-delete in the past, even if that failed it was a great learning experience and I'd do it again if I had the time/patience/more optimism).

So far the site only has a couple of users but I fully expect it to ramp up considerably before the magical day (ha!). In the past I've limited the lifetime of the site(s) from Feb 1st to Feb 14th and still managed to achieve 5000+ users. Right now I'm using flat files for storage specifically to avoid database-load issues (which caused me considerable pain one year).

I suspect the backend will be Redis-based in the very very near future, but in the meantime if you're open to amusement and use Livejournal then feel free to poke at it: ljvalentine.com.

ObFilm: Indiana Jones and the Temple of Doom

| 7 comments

 

I'm a CPAN author.

23 July 2010 21:50

As of this morning I'm a published author on CPAN!

Thus far I have only a single module to my name, but that will most likely change in the future:

CGI::Session::Driver::redis

A module for storing (CGI) session data within a Redis database.

A while back I set up a dynamic website which was 100% Redis-backed, using my redis backports for lenny, and realised I needed somewhere to store the session data too. Hence this module.
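
Usage is standard CGI::Session fare. The following is a sketch from memory rather than copied from the POD, so treat the parameter names as assumptions and check the module's documentation:

  use CGI;
  use CGI::Session;
  use Redis;

  my $cgi   = CGI->new();
  my $redis = Redis->new();    # localhost:6379

  # "driver:redis" selects the new driver, so the session data
  # ends up stored in Redis rather than on-disk.
  my $session = CGI::Session->new( "driver:redis", $cgi,
                                   { Redis => $redis } );

  $session->param( "logged_in", 1 );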

I'll create a .deb package of the module, and stick it alongside the redis server.

ObQuote: I like to keep this handy... for close encounters. -- Aliens

| No comments

 

I've accidentally written a replication-friendly filesystem

29 July 2010 21:50

This evening I was mulling over a recurring desire for a simple, scalable, and robust replication filesystem. These days there are several out there, including Gluster.

For the past year I've personally been using chironfs for my replication needs - I have /shared mounted upon a number of machines, and a write made on any of them is almost immediately reflected in the others.

This evening, when mulling over a performance problem with Gluster I was suddenly struck by the idea "Hey, Redis is fast. Right? How hard could it be?".

Although Redis is another one of those new-fangled key/value stores it has several other useful primitives, such as "SETS" and "LISTS". Imagine a filesystem which looks like this:

 /
 /srv
 /tmp
 /var/spool/tmp/

Couldn't we store those entries as members of a set? So we'd have:

  SET ENTRIES:/              -> srv, tmp, var
  SET ENTRIES:/var/spool     -> tmp
  SET ENTRIES:/var/spool/tmp -> (nil)

If you do that then "readdir(path)" becomes merely "SMEMBERS ENTRIES:$path" ("SMEMBERS foo" being "members of the set named foo"). At this point you can add and remove directories with ease.
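
As a sketch, using the Perl client for brevity (my actual code is C, as noted later):

  use Redis;
  my $r = Redis->new();

  # mkdir("/tmp/foo"): add the entry to the parent's set:
  $r->sadd( "ENTRIES:/tmp", "foo" );

  # readdir("/tmp"): just the members of the set:
  my @entries = $r->smembers( "ENTRIES:/tmp" );

  # rmdir("/tmp/foo"): remove it again:
  $r->srem( "ENTRIES:/tmp", "foo" );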

The next step, given an entry in a directory "/tmp", called "bob", is working out the most important things:

  • Is /tmp/bob a directory?
    • Read the key DIRECTORIES:/tmp/bob - if that contains a value it is.
  • What is the owner of /tmp/bob?
    • Read the key FILES:/tmp/bob:UID.
  • If this is a file what is the size? What are the contents?
    • Read the key FILES:/tmp/bob:size for the size.
    • Read the key FILES:/tmp/bob:data for the contents.
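
Again as a Perl-flavoured sketch, those metadata lookups are nothing more than GETs against the appropriate keys:

  use Redis;
  my $r = Redis->new();

  # Is /tmp/bob a directory?
  my $is_dir = defined( $r->get( "DIRECTORIES:/tmp/bob" ) );

  # Owner, size, and contents:
  my $uid  = $r->get( "FILES:/tmp/bob:UID" );
  my $size = $r->get( "FILES:/tmp/bob:size" );
  my $data = $r->get( "FILES:/tmp/bob:data" );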

So with a little creative thought you end up with a filesystem which is entirely stored in Redis. At this point you're thinking "Oooh shiny. Fast and shiny". But then you think "Redis has built in replication support..."

Not bad.

My code is a little rough and ready, using libfuse2 & the hiredis C API for Redis. If there's interest I'll share it somewhere.

It should be noted that currently there are two limitations with Redis:

  • All data must fit inside RAM.
  • Master<->Slave replication is trivial, and is the only kind of replication you get.

In real terms the second limitation is the killer. You could connect to the Redis server on HostA from many locations - but that gives you a single shared server, rather than true replication. Given that the protocol is simple this might actually be practical in the real-world. My testing so far seems "fine", but I'll need to stress it some more to be sure.

Alternatively you could bind the filesystem to the redis server running upon localhost on multiple machines - one redis server would be the master, and the rest would be slaves. That gives you a filesystem which is read-only on all but one host, but if that master host is updated the slaves see it "immediately". (Does that setup even have a name? I'm thinking of master-write, slave-read, and that gets cumbersome.)
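
For the curious, that second setup needs nothing more than a single line of configuration on each read-only host (the hostname here is illustrative):

  # /etc/redis/redis.conf on each slave:
  slaveof master.example.com 6379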

ObQuote: Please, please, please drive faster! -- Wanted

| 6 comments

 

I updated my redis-based filesystem

2 March 2011 21:50

In July last year I made a brief post about a simple filesystem I'd put together which used Redis for the storage.

At that time I thought it was a cute hack, and didn't spend too much time with it. But recently I found a use for it, so I cleaned it up, synced the C client for Redis which I use, and generally started to care again.

If it is useful you can now find it online:

The basic idea is the same as it was before, except I did eventually move to an INODE-like system. Each file/directory entry receives a unique identifier (an integer) - and then I store the meta-data in keys based upon that identifier.

This means for a file I might have keys, and values, like this:

  Key             Value
  INODE:1:NAME    The name of the file (e.g. "passwd").
  INODE:1:SIZE    The size of the file (e.g. "1661").
  INODE:1:GID     The group ID of the file's owner (e.g. "0").
  INODE:1:UID     The user ID of the file's owner (e.g. "0").
  INODE:1:MODE    The mode of the file (e.g. "0755").

To store the directory entries themselves I use a Redis "SET", which allows me to easily iterate over all the entries in each directory.
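
Putting the pieces together, creating a file looks something like this sketch - the counter-key and directory-set names here are illustrative, not necessarily the ones my code uses:

  use Redis;
  my $r = Redis->new();

  # Allocate the next free inode number:
  my $inode = $r->incr( "GLOBAL:INODE" );

  # Record the meta-data against that identifier:
  $r->set( "INODE:$inode:NAME", "passwd" );
  $r->set( "INODE:$inode:SIZE", 1661 );
  $r->set( "INODE:$inode:UID",  0 );
  $r->set( "INODE:$inode:GID",  0 );
  $r->set( "INODE:$inode:MODE", "0755" );

  # Finally link the inode into the parent directory's set:
  $r->sadd( "DIR:/etc", $inode );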

ObQuote: "They fuck up, they get beat. We fuck up, they give us pensions. " - The Wire

| 3 comments

 

Goodbye mysql ..

22 November 2011 21:50

Yesterday evening I updated my server to remove MySQL:

steve:~# dpkg --purge mysql-client-5.1 \
                      mysql-common     \
                      mysql-server-5.1 \
                      mysql-server-core-5.1 \
                      python-mysqldb        \
                      libdbd-mysql-perl     \
                      libdatetime-format-mysql-perl

Until last month I had two databases in use, one each for a pair of web-applications. As of now one is using Redis - which I'm already using for my image hosting - and the other application is using SQLite.

Until recently I had a high opinion of SQLite; although that has now been downgraded a little, it is still a thoroughly excellent piece of software. I was just surprised at the little things it was missing, to the extent that I had to rewrite my application's SQL.

Still, one less service is a good thing, and the migration wasn't too painful.

In more productive news I recently acquired a nice external flash - the Yongnuo YN-460 II is (very) cheap and cheerful, and it can be fired remotely with my triggers, so I've had a lot of fun opportunistically taking pictures and experimenting with lighting.

Most of the results are NSFW, but there are some other examples lurking around including the first time I managed to successfully capture a falling water-drop. (Not the best picture, not the most explicit effect, but fun regardless. I both can and will do better next time!)

Somebody recently asked me to write about "camera stuff under linux" and happily I declined.

Why decline? Because there are so many good tools, applications, and utilities. (I use local tools for organisation and duplicate detection, rawtherapee for RAW conversion, and GIMP for touchups.) Having many available options is fantastic, though it's something that's hard for "newcomers" to Linux to appreciate.

(Yeah I waited 90 seconds - if I remembered to add -nojava - for Netscape Navigator to start, under X11, with 8Mb of RAM. Happier days are here. Sure DRM is bad, secure boot .. an open question, but damn we have it good compared to almost any previous point in time!)

ObQuote: "Yeah, obviously it is only a tactical party. I'm only having a party to eventually get sex." - Peep Show

| 2 comments

 

Software and hardware..

13 October 2012 21:50

Software

I've been using redis for a while now. It is a fast in-memory storage system which offers persistence (unlike memcached), as well as several primitive data-types such as lists & hashes.

Anyway it crossed my mind that I don't have a backup of the data it contains, so I knocked up a simple script to dump the contents in plain-text:
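
The script amounts to something like this sketch, which walks every key and prints it according to its type (KEYS is fine here because the dataset is small):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use Redis;

  my $r = Redis->new();

  foreach my $key ( $r->keys('*') ) {
      my $type = $r->type($key);

      if ( $type eq 'string' ) {
          print "$key=", $r->get($key), "\n";
      }
      elsif ( $type eq 'set' ) {
          print "$key=[", join( ",", $r->smembers($key) ), "]\n";
      }
      elsif ( $type eq 'list' ) {
          print "$key=[", join( ",", $r->lrange( $key, 0, -1 ) ), "]\n";
      }
      else {
          warn "Unhandled key-type '$type' for key '$key'\n";
      }
  }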

In other software-news I've had some interesting and useful feedback and made two new releases of my slaughter sysadmin tool - it now contains a wee test suite and more robustness.

Hardware

I received an email last night to say that my Raspberry Pi has shipped. Ordered 24/05/2012, and dispatched 12/10/2012 - I'd almost forgotten about it.

My plan is to make it a media-serving machine, SNES emulator, or similar. Not 100% decided yet.

Finally I've taken the time to repaint my office. When I last wrote about working from home I didn't include pictures - I just described the process of using a "work computer" and a "personal computer".

So this is what my office used to look like. As you can see there are two machines and a huge desk.

With a few changes I now have an office which looks like this - the two machines are glued together with a KVM, and I have much more room behind it for another desk, more books, and similar toys. Additionally my dedication is now enforced - I simply cannot play with both computers at the same time.

The chair was used to mount the picture - usually I sit on a kneeling chair, which is almost visible.

What inspired the painting? Partly the need for more space, but mostly water damage. I had a leaking ceiling. (Local people will know all about my horrible leaking roof situation).

The end?

| 3 comments

 

Some productive work

11 January 2014 21:50

Having decided to take a fortnight off while looking for a new job, I assumed I'd spend a while coding.

Happily my wife, who is a (medical) doctor, has been home recently so we've got to spend time together instead.

I'm currently pondering projects which will be small enough to complete in a week, but large enough to be useful. Thus far I've just reimplemented the RSS -> chat bridge which I liked a lot at Bytemark.

I have my own chat-server setup, which doesn't have any users but myself. Instead it has a bunch of rooms set up, and different rooms get different messages.

I've now created a new "RSS" room, and a bunch of RSS feeds get announced there when new posts appear. It's a useful thing if you like following feeds, and happen to have a chat-room setup.

I use Prosody as my chat-server, and I use my http2xmpp code to implement a simple HTTP-POST to XMPP broadcast mechanism.

The new script is included as examples/rss-announcer and just polls RSS feeds - URLs which haven't been broadcast previously are posted to the HTTP-server, and thus get injected into the chatroom. A little convoluted, but simple to understand.

This time round I'm using Redis to keep track of which URLs have been seen already.
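
The nice property of a Redis set is that SADD reports whether the member was new, so duplicate-suppression is a one-liner (the key name and helper here are illustrative):

  use Redis;
  my $r = Redis->new();

  # SADD returns 1 the first time a URL is seen, 0 thereafter:
  if ( $r->sadd( "rss:seen", $url ) ) {
      post_to_chatroom( $url );    # hypothetical helper
  }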

Beyond that I've been doing a bit of work for friends, and have recently set up an nginx server which will handle 3000+ simultaneous connections. Not too bad, but I'm sure we can make it do better - it's another server running on BigV, which is nice to see :)

I'll be handling a few Squeeze -> Wheezy upgrades in the next week too, setting up backups, and doing some other related "consultation".

If I thought there was a big enough market locally I might consider doing that full-time, but I suspect that relying upon random work wouldn't work long-term.

| 2 comments

 

Blogspam moved, redis alternatives being examined

10 July 2014 21:50

As my previous post suggested I'd been running a service for a few years, using Redis as a key-value store.

Redis is lovely. If your dataset will fit in RAM. Otherwise it dies hard.

Inspired by Memcached, which is a simple key=value store, Redis allows for more operations: sets, hashes, etc.

As it transpires I mostly set keys to values, so it crossed my mind last night that an alternative to rewriting the service might be to juggle Redis out and replace it with something which isn't constrained by RAM.

If it were possible to have a Redis-compatible API which secretly stored the data in leveldb, sqlite, or even Berkeley DB, then that would solve my problem of RAM-constraints, and also be useful.

Looking around there are a few projects in this area: the NDS fork of redis, ssdb, etc.

I was hoping to find a Perl Redis::Server module, but sadly nothing exists. I should look at the various node.js stub-servers which exist as they might be easy to hack too.

Anyway the short version is that this might be a way forward; the real solution might be to use sqlite or postgres, but that would take a few days' work. For the moment the service has been moved to a donated guest with 2Gb of RAM, instead of the paltry 512Mb it was running on previously.

Happily the server is installed/maintained by my slaughter tool, so reinstalling took about ten minutes - the only hard part was migrating the Redis contents, and that's trivial thanks to the integrated "SLAVEOF" support. (I should write that up regardless though.)
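
For the record the migration amounts to little more than this, run against the Redis instance on the new guest (the hostname is illustrative):

  $ redis-cli slaveof old-host.example.com 6379
  ... wait for the initial sync to complete ...
  $ redis-cli slaveof no one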

| 5 comments

 

A partial perl-implementation of Redis

11 July 2014 21:50

So recently I got into trouble running Redis on a host, because the data no longer fits into RAM.

As an interim measure I fixed this by bumping the RAM allocated to the guest, but a real solution was needed. I figure there are three real alternatives:

  • Migrate to Postgres, MySQL, or similar.
  • Use an alternative Redis implementation.
  • Do something creative.

Looking around I found a couple of Redis-alternatives, but I was curious to see how hard it would be to hack something useful myself, as a creative solution.

This evening I spotted Protocol::Redis, which is a Perl module for decoding/encoding data to/from a Redis server.

Thinking "Ahah" I wired this module up to AnyEvent::Socket. The end result was predis - A perl-implementation of Redis.

It's a limited implementation which stores data in an SQLite database, and currently has support for:

  • get/set
  • incr/decr
  • del/ping/info

It isn't hugely fast, but it is fast enough, and it should be possible to use alternative backends in the future.

I suspect I'll not add sets/hashes, but it could be done if somebody was keen.

| 2 comments

 

A simple Perl alternative to storing data in Redis

16 December 2016 21:50

I continue to be a big user of Perl, and for many of my sites I avoid the use of MySQL which means that I largely store data in flat files, SQLite databases, or in memory via Redis.

One of my servers was recently struggling with RAM, and the surprising cause was "too much data" in Redis. (Surprising because I'd not been paying attention and hadn't seen how popular it was, and also because ASCII text compresses pretty well.)

Read/Write speed isn't a real concern, so I figured I'd move the data into an SQLite database, but that would require rewriting the application.

The client library for Perl is pretty awesome, and simple usage looks like this:

use Redis;

# Connect to localhost.
my $r = Redis->new();

# simple storage
$r->set( "key", "value" );

# Work with sets
$r->sadd( "fruits", "orange" );
$r->sadd( "fruits", "apple" );
$r->sadd( "fruits", "blueberry" );
$r->sadd( "fruits", "banannanananananarama" );

# Show the set-count
print "There are " . $r->scard( "fruits" ) . " known fruits\n";

# Pick a random one
print "Here is a random one " . $r->srandmember( "fruits" ) . "\n";

I figured, if I ignored the Lua support and the other more complex operations, creating a compatible API implementation wouldn't be too hard. So rather than porting my application to use SQLite directly I could just use a different client-library.

In short I change this:

use Redis;
my $r = Redis->new();

To this:

use Redis::SQLite;
my $r = Redis::SQLite->new();

And everything continues to work. I've implemented all the set-related functions except one, and a random smattering of the other simple operations.

The appropriate test-cases from the Redis client library (i.e. with all references to things I didn't implement removed) pass, and my own new tests also give me confidence.

It's obviously not a hard job, but it was a quick solution to a real problem and might be useful to others.

My image hosting site, and my markdown sharing site now both use this wrapper and seem to be performing well - but with more free RAM.

No doubt I'll add more of the simple primitives as time goes on, but so far I've done enough to be useful.

| No comments

 

I'm a bit of a git (hacker?)

28 July 2020 21:00

Sometimes I enjoy reading the source code to projects I like, use, or am about to install for the first time. This was something I used to do on a very regular basis, looking for security issues to report. Nowadays I don't have so much free time, but I still like to inspect the source code to new applications I install, and every now and again I'll find the time to look at the source to random projects.

Reading code is good. Reading code is educational.

One application I've looked at multiple times is redis, which is a great example of clean and well-written code. That said, when reading the redis codebase I couldn't help noticing that there were a reasonably large number of typos/spelling mistakes in the comments, so I submitted a pull-request:

Sadly that particular pull-request didn't receive too much attention, although a previous one updating the configuration file was accepted. I was recently reminded of these pull-requests when I was doing some other work. So I figured I'd have a quick scan of a couple of other utilities.

In the past I'd just note spelling mistakes when I came across them; usually I'd be opening each file in a project one by one and reading them from top to bottom. (Sometimes I'd just open files in emacs and run "M-x ispell-comments-and-strings", but more often I'd just notice them with my eyes.) It did strike me that if I were to do this in a more serious fashion it would be good to automate it.

So this time round I hacked up a simple "dump comments" utility, which would scan named files and output the contents of any comments (be they single-line, or multi-line). Once I'd done that I could spell-check easily:

 $ go run dump-comments.go *.c > comments
 $ aspell -c comments

Anyway the upshot of that was a pull-request against git:

We'll see if that makes its way live sometime. In case I get interested in doing this again I've updated my sysbox-utility collection to have a comments sub-command. That's a little more robust and reliable than my previous hack:

$ sysbox comments -pretty=true $(find . -name '*.c')
..
..

The comments sub-command has support for:

  • Single-line comments, for C++, as prefixed with //.
  • Multi-line comments, for C, as between /* and */.
  • Single-line comments, for shell, as prefixed with #.
  • Lua comments, both single-line (prefixed with --) and multiline between --[[ and --]].
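
To show how little is involved, here is a crude Perl sketch of the multi-line case - just a start pattern and an end pattern, matched non-greedily:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Slurp the named file, then print anything between /* and */.
  local $/ = undef;
  my $src = <>;

  while ( $src =~ m{/\*(.*?)\*/}gs ) {
      print "$1\n";
  }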

Adding new support would be trivial, I just need a start and end pattern to search against. Pull-requests welcome:

| 1 comment