
 

Entries tagged backups

You're gonna need a bigger boat.

20 December 2009 21:50

I'd like to nominate this week of the year to be:

  • National Backup Week.

(I'd pick a single date but the only date I care about is March 10th, and that has already been assigned National Fishnet Day.)

So, National Backup Week: this is the week I'd like to suggest you ensure that all the hosts, systems, and machines you care about are backed up.

Maybe not today, maybe not tomorrow, but soon they will die. If you have a system die without a backup you're screwed.

If this week is a quiet time at work, or if you're having time off, take an hour, take a few, and ensure you have backups .. current backups.

ObFilm: Jaws

| 10 comments

 

So about that off-site encrypted backup idea ..

28 September 2012 21:50

I'm just back from having spent a week in Helsinki. Despite some minor irritations (the light-switches were always too damn low) it was a lovely trip.

There is a lot to be said for a place, and a culture, where shrugging and grunting counts as communication.

Now I'm back, catching up on things, and mostly plotting and planning how to handle my backups going forward.

Filesystem backups I generally take using backup2l, creating local incremental backup archives then shipping them offsite using rsync. For my personal stuff I have a bunch of space on a number of hosts and I just use rsync to literally copy my ~/Images, ~/Videos, etc..
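
By "literally copy" I mean nothing cleverer than the following - the destination host and paths here are purely illustrative:

 rsync -avz --delete ~/Images/ backup-host:/backups/images/
 rsync -avz --delete ~/Videos/ backup-host:/backups/videos/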

In the near future I'm going to have access to a backup server which will run rsync, and pretty much nothing else. I want to decide how to archive my content to that - securely.

The biggest issue is that my images (.CR2 + .JPG) will want to be encrypted remotely, but not locally. So I guess if I re-encrypt transient copies and rsync those, the encrypted output will differ completely on every run, rsync's delta-transfer will find nothing in common, and I'll end up sending "full" changes each time. Clearly that will waste bandwidth.

So my alternative is to use incrementals, as I do elsewhere, then GPG-encrypt the tar files that are produced - simple to do with backup2l - and rsync those; something like the sketch after this list. That seems like the best plan, but requires that I have more space available locally, since:

  • I need the local .tar files.
  • I then need the .tar.gz.asc/.tar.gz.gpg files too.
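
Roughly what I have in mind is something like this - the backup directory and the key ID are illustrative, not decided:

 # Encrypt each backup2l archive to a transient .gpg copy, then ship only those.
 for tarball in /var/backups/localhost/*.tar.gz; do
     gpg --encrypt --recipient backups@example.com \
         --output "${tarball}.gpg" "${tarball}"
 done
 rsync -avz /var/backups/localhost/*.gpg backup-host:/backups/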

I guess I will ponder. It isn't horrific to require local duplication, but it strikes me as something I'd rather avoid - especially given that we're talking about rsync over a home broadband connection, which will take weeks at best for the initial copy.

| 19 comments

 

Updating my backups

6 November 2013 21:50

So I recently mentioned that I stopped myself from registering all the domains, but I did acquire rsync.io.

Today I'll be moving my last few backups over to using it.

In terms of backups I'm pretty well covered; I have dumps of databases and I have filesystem dumps too. The only niggles are the "random" directories I want to back up from my home desktop.

My home desktop is too large to back up completely, but I do want to archive some of it:

  • ~/Images/ - My archive of photos.
  • ~/Books/ - My calibre library.
  • ~/.bigv/ - My BigV details.
  • etc.

In short I have a random assortment of directories I want to back up, with pretty much no rhyme or reason to it.

I've scripted the backup right now, using SSH keys + rsync. But it feels like a mess.
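
For the curious, "scripted" currently means little more than this - the remote user and layout are illustrative:

 #!/bin/sh
 # Push each interesting directory to the rsync.io host over SSH.
 for dir in Images Books .bigv; do
     rsync -avz --delete "$HOME/$dir/" "backup@rsync.io:$dir/"
 done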

PS. Proving my inability to concentrate I couldn't even keep up with a daily blog for more than two days. Oops.

| 6 comments

 

Switched to using attic for backups

19 December 2014 21:50

Even though seeing the word attic reminds me too much of leaking roofs and CVS, I've switched to using the attic backup tool.

I want a simple system which will take incremental backups, perform deduplication (to avoid taking too much space), support encryption, and be fast.

I stopped using backup2l because the .tar.gz files were too annoying, and it was too slow. I started using obnam because I respect Lars and his exceptionally thorough testing-regime, but had to stop using it when things started getting "too slow".

I'll document the usage/installation in the future. For the moment the only annoyance is that it is contained in the Jessie archive, not the Wheezy one. Right now only 2/19 of my hosts are Jessie.
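
For the impatient, the kind of usage I have in mind is roughly this - the repository path is illustrative:

 attic init --encryption=passphrase /backups/desktop.attic           # create an encrypted repository
 attic create --stats /backups/desktop.attic::$(date +%Y-%m-%d) ~/   # take a deduplicated archive of my home directory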

| 6 comments

 

Restoring my system .. worked

2 January 2016 21:50

A while back I wrote about some issues with converting a two-disk RAID system to a one-disk system, but just to recap:

  • We knew we were moving to Finland.
  • The shared/main computer we used in the UK was old and slow.
  • A new computer in Finland would be more expensive than it should be.
  • Equally transporting a big computer from the UK would also be silly.

In the end we bought a small form-factor PC, with only a single drive, and I moved one of the two drives from the old machine into it. Then I converted the system to run happily with only a single drive, and not email every day to say "device missing".
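
(For reference, assuming Linux software RAID via mdadm, that conversion boils down to something like this - the device name is illustrative:)

 mdadm --grow /dev/md0 --raid-devices=1 --force   # shrink the degraded mirror to a one-device array
 mdadm --detail --scan                            # copy the resulting ARRAY line into /etc/mdadm/mdadm.conf
 update-initramfs -u                              # rebuild the initramfs so boot knows the new layout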

So there things stood: we had a desktop with a single drive, and I ensured that I took full daily backups via attic.

Over Christmas the two-year-old drive failed, to the extent that I couldn't even get it recognized by the BIOS, and thus couldn't pull data off it. Time to test my backups in anger! I bought a new drive, installed a minimal installation of the Jessie release of Debian onto the system, and then ran:

 cd /
 .. restore latest backup ..
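
That elided restore is essentially an attic extract of the most recent archive; something like this, using the same variables as the backup-script below (the archive name is illustrative):

 attic list attic@${remote}:/attic/storage                            # find the newest archive name
 attic extract attic@${remote}:/attic/storage::${host}-2015-12-28-01  # restore it into the current directory, i.e. /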

Two days later I'd pulled 1.3TB over the network, and once I fixed up grub, /etc/fstab, and a couple of niggles it all just worked. Rebooted to make sure the temporary.home hostname, etc, was all gone and life was good.
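
Fixing up grub and /etc/fstab was the usual post-restore dance; roughly the following, with the device name being illustrative:

 blkid                     # find the UUID of the new root filesystem
 vi /etc/fstab             # point the root entry at that UUID
 grub-install /dev/sda     # reinstall the bootloader on the new disk
 update-grub               # regenerate the grub configuration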

Restored backup! No errors! No data-loss! Perfect!

The backup-script I use every day was very very good at making sure nothing was missed:

attic create --stats --checkpoint-interval=7200 attic@${remote}:/attic/storage::${host}-$(date +%Y-%m-%d-%H) \
  --exclude=/proc      \
  --exclude=/sys       \
  --exclude=/run       \
  --exclude=/dev       \
  --exclude=/tmp       \
  --exclude=/var/tmp   \
  --exclude=/var/log   \
  /

In other news I published my module for controlling the new smart lights I've bought.

| No comments

 

Getting ready for Stretch

25 May 2017 21:50

I run about 17 servers. Of those about six are very personal, and the rest are a small cluster used for a single website. (Partly because the code is old and in some ways a bit badly designed, partly because "clustering!", "high availability!", "learning!", "fun!" - seriously I had a lot of fun putting together a fault-tolerant deployment with haproxy, ucarp, etc, etc. If I were paying for it the site would be both retired and static!)

I've started the process of upgrading to stretch by picking a bunch of hosts that do things I could live without for a few days - in case there were big problems, or I needed to restore from backups.

So far I've upgraded:

  • master.steve
    • This is a puppet-master, so while it is important, killing it wouldn't be too bad - after all my nodes are currently set up properly, right?
    • Upgrading this host changed the puppet-server from 3.x to 4.x.
    • That meant I had to upgrade all my client-systems, because puppet 3.x won't talk to a 4.x master.
    • Happily jessie-backports contains a recent puppet-client (see the sketch after this list).
    • It also meant I had to rework a lot of my recipes, in small ways.
  • builder.steve
    • This is a host I use to build packages upon, via pbuilder.
    • I have chroots setup for wheezy, jessie, and stretch, each in i386 and amd64 flavours.
  • git.steve
    • This is a host which stores my git-repositories, via gitbucket.
    • While it is an important host in terms of functionality, the software it needs is very basic: nginx proxies to a java application which runs on localhost:XXXX, with some caching magic happening to deal with abusive clients.
    • I do keep considering using gitlab, because I like its runners, etc. But that is pretty resource intensive.
    • On the other hand, if I did switch I could drop my builder.steve host, which might mean I'd come out ahead in terms of used resources.
  • leave.steve
    • Torrent-box.
    • Upgrading was painless, I only run rtorrent, and a simple object storage system of my own devising.
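
The backports installation mentioned above is nothing special; a sketch, assuming the client package keeps its usual name:

 # Enable jessie-backports, then pull the newer puppet client from it.
 echo "deb http://httpredir.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/backports.list
 apt-get update
 apt-get install -t jessie-backports puppet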

All upgrades were painless, with only one real surprise - the attic-backup software was removed from Debian.

Although I do intend to retry using Lars' excellent obnam in the near future, pragmatically I wanted to stick with what I'm familiar with. Borg backup is a fork of attic I've been aware of for a long time, but I never quite had a reason to try it out. Setting it up pretty much just meant editing my backup-script:

s/attic/borg/g

Once I did that, and created some new destinations, all was good:

$ borg init /backups/git.steve.org.uk.borg/
$ borg init /backups/master.steve.org.uk.borg/
$ ..
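
The day-to-day create command is then essentially the attic invocation from the restore post with the name swapped; a sketch, assuming the repository layout from the init commands above (the remote user is illustrative):

 borg create --stats --checkpoint-interval=7200                        \
   borg@${remote}:/backups/${host}.borg::${host}-$(date +%Y-%m-%d-%H)  \
   --exclude=/proc --exclude=/sys --exclude=/run                       \
   --exclude=/dev --exclude=/tmp --exclude=/var/tmp                    \
   --exclude=/var/log                                                  \
   /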

Upgrading other hosts, for example my website(s), and my email-box, will be more complex and fiddly. On that basis they will definitely wait for the formal stretch release.

But having a couple of hosts running the frozen distribution is good for testing, and to let me see what is new.

| 1 comment