
 

Entries tagged dns

You tortured me? You tortured me!

30 July 2009 21:50

DNS is hard, let's go shopping.

<irony>

CNAME & MX records do not mix.

</irony>
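Irony aside, the underlying rule is RFC 1034's: a CNAME may not coexist with any other record at the same name. A hypothetical zone fragment showing the mistake:

example.com.   IN CNAME   www.example.com.
example.com.   IN MX 10   mail.example.com.   ; broken - the CNAME above forbids any other data here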

ObFilm: V for Vendetta.

| No comments

 

Optimization Recipes

31 January 2014 21:50

Today I am mostly in my bed suffering from "the plague".

Between naps I've worked a little on a new site.

Hopefully it will be updated over time, attract contributions, and be useful to the world.

(Source available on github.)

| 2 comments

 

Some things on DNS and caching

29 March 2014 21:50

Although there weren't many comments on my "what would you pay for?" post, I did get some mails.

I was reminded of this via Mario Lang's post, which echoed a couple of private mails I received.

Despite DNS hosting being something I take for granted - perhaps because mine comes from Bytemark - people do seem willing to pay money for it.

Which is odd. I mean you could do it very very very cheaply if you had just four virtual machines. You can get complex and be geo-fancy, and you could use anycast on a small AS, but really? You could just deploy four virtual machines to provide a.ns, b.ns, c.ns, d.ns, and be better than 90% of DNS hosters out there.

The thing that many people mentioned was Git-backed, or Git-based, DNS. Which would be trivial if you used tinydns, and not much harder if you used bind.
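For reference, tinydns's data format is line-oriented and diff-friendly, which is what makes the git pairing so natural - e.g. an A record with a 300-second TTL is a single line (the name and address here are illustrative):

+www.example.com:1.2.3.4:300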

I suspect I'm "not allowed" to do DNS-things for a while, due to my contract with Dyn, but it might be worth checking...

ObRandom: Beat me to it - register gitdns.io, or similar, and configure hooks from github to compile tinydns records.

In other news I started documenting early thoughts about my caching reverse proxy, which now has a name: stockpile.

I wrote some stub code using node.js, and although it was functional it soon became callback hell:

  • Is this resource cachable?
  • Does this thing exist in the cache already?
  • Should we return the server's response to the client, archive to memcached, or do both?
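Each of those questions becomes another level of nesting. A purely illustrative fragment - the cache, backend, and client objects here are stand-ins, not stockpile's real code:

cache.contains( request.url, function( err, cached ) {
    if ( err ) { return fail( err ); }
    backend.fetch( request, function( err, response ) {
        // Backend down?  Fall back to whatever the cache held.
        if ( err ) { return client.send( cached ); }
        memcached.set( request.url, response.body, 300, function( err ) {
            // Stored (or not) - either way, answer the client.
            client.send( response );
        });
    });
});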

Expressing the rules neatly is also a challenge. I want the server core to be simple and the configuration to be something like:

function is_cachable( vhost, source, request, backend )
{
    /**
     * If the file is static, then it is cachable.
     */
    if ( request.url.match( /\.(jpg|png|txt|html?|gif)$/i ) ) {
        return true;
    }

    /**
     * If there is a cookie then the answer is false.
     */
    if ( request.headers.cookie ) {
        return false;
    }

    /**
     * If the backend is alive we'll pass the request through it;
     * if not we'll serve from the cache.
     */
    return ! backend.alive;
}

I can see there is value in judging cachability based on the server response, but I plan to ignore that except for headers such as "Expires:", "ETag:", etc.

Anyway callback hell does make me want to reexamine the existing C/C++ libraries out there. Because I think I could do better.

| 3 comments

 

I was beaten to the punch, but felt nothing

19 April 2014 21:50

A while back I mentioned github-backed DNS hosting.

Turns out NameCast.net does that already, and there is an interesting writeup on the design of something similar, from the same authors in 2009.

Fun to read.

In other news applying for jobs is a painful annoyance.

Should anybody wish to employ an Edinburgh-based system administrator, with a good Debian record, then please do shout at me. Remote work is an option, as is a local office, if you're nearby.

Now I need to go hide from the sun, lest I get burned again...

Good news? Going on holiday to Helsinki in a week or so, for Vappu. Anybody local who wants me should feel free to grab me, via the appropriate channels.

| 4 comments

 

Amazon's Route53 API is nice.

13 June 2014 21:50

It is unfortunate that some of the client libraries are inefficient, but I'm enjoying my exposure to Amazon's Route53 API.

(This is unrelated to the previous post(s) about operating a DNS service.)

For an idea of scale I host just over 170 zones at the moment.

For the first 25 zones Amazon charges $0.50 a month each, then $0.10 a month for each zone after that. For my 170-odd zones that works out at roughly:

25  * $0.50 = $12.50
145 * $0.10 = $14.50
              ------
              $27.00

That seems reasonably .. reasonable.
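For a taste of the API itself, here's a minimal sketch using the aws-sdk node module (illustrative; with 170-odd zones you'd need to paginate via the Marker parameter):

var AWS     = require( 'aws-sdk' );
var route53 = new AWS.Route53();

route53.listHostedZones( {}, function( err, data ) {
    if ( err ) { throw err; }
    data.HostedZones.forEach( function( zone ) {
        // Each zone has a Name, an Id, and a ResourceRecordSetCount.
        console.log( zone.Name );
    });
});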

| 4 comments

 

So here's a proof of concept

14 June 2014 21:50

The simplest possible DNS-based service which I could write to explore Amazon's DNS offering has to be dynamic DNS, so I set one up.

The record skx.dhcp.io can be updated to point to your current IP by running:

curl http://dhcp.io/set/efa6961c-f3dd-11e3-955b-00163e0816a2

Or to a fixed IP:

curl http://dhcp.io/set/efa6961c-f3dd-11e3-955b-00163e0816a2/1.2.3.4

The code is modular and pretty nice, and the Amazon integration is simple.
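How simple? An update is essentially one UPSERT call. A hypothetical sketch, again with the aws-sdk node module - the zone ID is a placeholder, and the real service may well be structured differently:

var AWS     = require( 'aws-sdk' );
var route53 = new AWS.Route53();

// Point the given name at the caller's IP address.
function set_record( name, ip, callback ) {
    route53.changeResourceRecordSets( {
        HostedZoneId: "ZEXAMPLE123",   // placeholder zone ID
        ChangeBatch: {
            Changes: [ {
                Action: "UPSERT",
                ResourceRecordSet: {
                    Name: name,
                    Type: "A",
                    TTL: 60,
                    ResourceRecords: [ { Value: ip } ]
                }
            } ]
        }
    }, callback );
}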

(Although I need to write code to allow users to sign up. I'll do that if it seems useful; I suspect there are already enough free DDNS providers out there - though I might be the first to support IPv6 when I commit my next chunk of work!)

| 7 comments

 

DNS is now resolved

17 June 2014 21:50

I used to work for Bytemark as a sysadmin, sometimes handling support requests from clients and their end-users.

One thing that never got old was marking DNS-related tickets as "resolved", or managing to slip that word into replies.

Similarly being married to a Finnish woman you'd be amazed how often Finnish and Finished become interchangeable.

Anyway that's enough pun-discussion.

Over the past few days I've, obviously, been playing with DNS. There are two public results:

DHCP.io

This is my simple Dynamic-DNS host, which has now picked up a few users.

I posted a token in a previous entry, and I've had fun seeing how people keep changing the IP address of the host skx.dhcp.io. I should revoke the token and actually claim the name - but to be honest it is more fun seeing it update.

What is most interesting is that I can see it being used for real - I see from the access logs some people have actually scheduled curl to run on an hourly basis. Neat.
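Presumably that's nothing more than a crontab entry along these lines, using the token from the previous entry:

0 * * * * curl -s http://dhcp.io/set/efa6961c-f3dd-11e3-955b-00163e0816a2 >/dev/null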

DNS-API.org

This is a simple lookup utility, allowing DNS queries to be made via simple HTTP requests.
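For example, something like this - the exact URL scheme is illustrative, a record type followed by a hostname:

curl https://dns-api.org/txt/example.com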

Of the two sites this is perhaps the more useful, but again I expect it isn't unique.

That about wraps things up for the moment. It may well be the case that in the future there is some Git + DNS + Amazon integration for DNS-hosting, but I'm going to leave it alone for the moment.

Despite writing about DNS several times in the past the only reason this flurry of activity arose is that I'm hacking some Amazon & CPanel integration at the moment - and I wanted to experiment with Amazon's API some more.

So, we'll mark this activity as resolved, and I shall go make some coffee now this entry is Finnish.

ObRandomUpdate: At least there was a productive side-effect here - I created CGI::Application::Plugin::Throttle and uploaded it to CPAN.

| 5 comments

 

So I accidentally ... a service.

23 June 2014 21:50

This post is partly introspection, and partly advertising. Skip it if either annoys you.

Back in February I was thinking about what to do with myself. I had two main options: "Get a job", and "Start a service". Because I didn't have any ideas that seemed terribly interesting I asked people what they would pay for.

There were several replies, largely based around "infrastructure hosting" (which was pretty much a 50/50 split between "DNS hosting" and project-hosting with something like trac, redmine, or similar).

At the time DNS seemed hard, and later I discovered there were already at least two well-regarded people doing DNS things, with revision control.

So I shelved the idea, after reaching out to both companies to no avail. (This later led to drama, but we'll pretend it didn't.) Ultimately I sought and acquired gainful employment.

Then, during the course of my gainful employment, I was exposed to Amazon's Route53 service. It looked like I was going to be doing many things with this, so I wanted to understand it more thoroughly than I did. That led to the creation of a Dynamic-DNS service - which seemed to be about the simplest thing you could do with the ability to programmatically add/edit/delete DNS records via an API.

As this was a random hack put together over the course of a couple of nights I didn't really expect it to be any more popular than anything else I'd deployed, but with the sudden influx of users I wanted to see if I could charge people. Ultimately many people pretended they'd pay, but nobody actually committed. So on that basis I released the source code and decided to ignore the two main missing features - lack of MX records, and lack of sub-sub-domains. (Isn't it amazing how people who claim they want "open source" so frequently mean they want something with zero cost, which they can run, and never modify or contribute toward?)

The experience of doing that, though, and the reminder of the popularity of the original idea, made me think I could do a useful job with Git + DNS combined. That led to DNS-API - GitHub-based DNS hosting.

It is early days, but it looks like I have a few users, and if I can get more then I'll be happy.

So if you want to store your DNS records in a (public) GitHub repository, and get them hosted on geographically diverse anycasted servers .. well you know where to go: Github-based DNS hosting.

| No comments

 

Slowly releasing bits of code

29 June 2014 21:50

As previously mentioned I've been working on git-based DNS hosting for a while now.

The site was launched for real last Sunday, and since that time I've received enough paying customers to cover all the costs, which is nice.

Today the site was "relaunched", by which I mean almost nothing has changed, except the site looks completely different - since the templates/pages/content is all wrapped up in Bootstrap now, rather than my ropy home-made table-based layout.

This week coming I'll be slowly making some of the implementation available as free-software, having made a start by publishing CGI::Application::Plugin::AB - a module designed for very simple A/B testing.

I don't think there will be too much interest in most of the code, but one piece I'm reasonably happy with is my webhook-receiver.

Webhooks are at the core of how my service is implemented:

  • You create a repository to host your DNS records.
  • You configure a webhook to be invoked when pushing to that repository.
  • The webhook will then receive updates, and magically update your DNS.

Because the webhook must respond quickly - otherwise github/bitbucket/whatever will believe you've timed out and will report an error - you can't do much work on the back-end.

Instead I've written a component that listens for incoming HTTP POSTs, parses the body to determine which repository it came from, and then enqueues the data for later processing.

A different process constantly polls the job-queue (which in my case is Redis, but could be beanstalkd, or similar - hell, even MySQL if you're a masochist) and actually does the necessary magic.
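A stripped-down sketch of that receive-and-enqueue pattern - assuming node.js and the classic redis client, and omitting signature-validation and error-handling that a real receiver needs:

var http  = require( 'http' );
var redis = require( 'redis' ).createClient();

http.createServer( function( request, response ) {
    var body = "";
    request.on( 'data', function( chunk ) { body += chunk; } );
    request.on( 'end', function() {
        /* Enqueue the raw payload; the worker parses it and
         * updates the DNS records at its leisure. */
        redis.rpush( 'webhooks', body, function() {
            /* Reply immediately, so github/bitbucket don't time out. */
            response.writeHead( 200 );
            response.end( "queued\n" );
        });
    });
}).listen( 8080 );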

Most of the webhook processor is trivial, but handling different services (github, bitbucket, etc) while pretending they're all the same is hard. So my little toy-service will be released next week and might be useful to others.

ObRandom: New pricing unveiled for users of a single zone - which is a case I never imagined I'd need to cover. Yay for A/B testing :)

| 4 comments

 

Robbing Peter to pay Paul, or location spoofing via DNS

17 October 2015 21:50

I rarely watched TV online when I was located in the UK, but now that I've moved to Finland, with its appalling local TV choices, it has become more common.

The biggest problem with trying to watch BBC's iPlayer, and similar services, is the location restrictions.

Not a huge problem though:

  • Rent a virtual machine.
  • Configure an OpenVPN server on it.
  • Connect from $current-country to it.

The next part is the harder one - making your traffic pass over the VPN. If you were simple you'd just say "Send everything over the VPN". But that would slow down local traffic, so instead you have to use trickery.

My approach was just to run a series of routing additions, similar to this (except I did it in the openvpn configuration, via pushed-routes):

ip -4 route add .... dev tun0
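The pushed-route equivalent in the OpenVPN server configuration looks something like this - the network below is a documentation placeholder, and you'd push one line per range you want redirected:

push "route 203.0.113.0 255.255.255.0"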

This works, but it is a pain as you have to add more and more routes. The simpler solution, which I switched to after a while, was to configure mitmproxy on the remote OpenVPN end-point, and then configure that proxy in the browser. With the proxy enabled in the browser all its traffic goes over the VPN link, but nothing else does.

I've got a network device on order, which will let me watch Netflix, etc., from my TV, and I'm led to believe it won't let you set up proxies, or similar, to bypass the region restrictions.

It occurs to me that I can configure my router to give out bogus DNS responses - if the device asks for "iplayer.bbc.com" it can return 10.10.10.10 - which is the remote host running the proxy.

I imagined this would be nice and simple, and thought I was being clever:

  • Remote OpenVPN server.
  • MITM proxy on remote VPN-host
    • Which is basically a transparent HTTP/HTTPS proxy.
  • Route traffic to it via DNS.
    • e.g. For any DNS request, if it ends in .co.uk return 10.10.10.10.

Because I can handle DNS-magic on the router I can essentially spoof my location for all the devices on the internal LAN, which is a good thing.
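If the router runs dnsmasq - an assumption, though any resolver you control will do - the bogus answers are a one-liner:

# Return 10.10.10.10 for any name ending in .co.uk
address=/co.uk/10.10.10.10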

Anyway I was reasonably pleased with the idea of using DNS to route traffic over the VPN, in combination with a transparent proxy. I was even going to blog about it, and say "Hey! This is a cool idea I've never heard of before".

Instead I did a quick google(.fi) and discovered that there are companies offering this as a service. They don't mention the proxying bit, but it's clearly what they're doing - for example OverPlay's SmartDNS.

So in conclusion I can keep my current setup, or I can use the income I receive from DNS hosting to pay for SmartDNS, or other DNS-based location-fakers.

Regardless. DNS. VPN. Good combination. Try it if you get bored.

| 8 comments