
Some more on IPv6 in ZA

Well, there we have it. Our ministers have just said that there’s absolutely no reason it should be so hard to do:

Speaking to ITWeb yesterday, acting deputy director-general of the Presidential National Commission on Information Society and Development Themba Phiri said: “The IPv6 issue is a simple transition. All you need is the equipment that enables new registration of the domain names.”

Source: this article. I’m not entirely sure which part of this worries me the most, but there are a few strong contenders in between all the apparent stupidity, cluelessness and out-of-touchness that this sort of opinion entails. But the fact that these people are policy-makers and decision-makers on things regarding technology here…well, their lack of knowledge is concerning. If their advisors are at fault for this opinion, their advisors should be fired. If their inability to recognize their advisors’ information as complete bullshit is the cause of them propagating this opinion, it’s just as concerning. How do we fix this?

[update] for the non-technical readers: basically, there’s significantly more to actually getting IPv6 onto a network than just “the registration of the domain names”. Most importantly, domain names have almost nothing to do with IPv6 (short of letting your computer know that it can reach something using IPv6 as opposed to IPv4, and where to actually find the resource you’re attempting to access).
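To see the one place where names and IPv6 do meet, compare the A (IPv4) and AAAA (IPv6) records for any dual-stacked hostname (output elided here; dig is the standard DNS lookup tool):

# the IPv4 address(es) for the name
dig +short A www.google.com
# the IPv6 address(es) for the very same name
dig +short AAAA www.google.com

The same domain name serves both protocols; no “new registration” required.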

Billing Failure

Alternative post title: No, really, I’m not kidding. I *really* want to give you money.

Part of the software stack/services that we have at my employer is a billing system for ISP environments. Given this, the typical features you would expect such a billing system to have include the ability to bill for physical connections as well as for services rendered over those connections. Possibly even just the services without the connection. Do we have all of those? Check. Other things you might want? Hmm, let’s see: online management of your products and services? Check. Ability to pay online using credit card etc.? Check. Fault logging? Check. The reason I list all these things? Because I want to show that I’m not completely unfamiliar with billing systems and/or the challenges associated with them.

With that all in mind, let’s go over to the Telkom SA billing site to pay our DSL connection bill (check out all the options you have for doing this over here).

So I do all the things required for getting my bill online [edit: I actually did this a few months ago; this post is only coming up now because the issue is causing me problems now] and then open it here, as the page says I should. Hey great, it shows me some stuff!


Not too bad so far. Let me click on that View bill online link on the side.

Well, uh, that’s nice. No listed invoices. And the My Accounts page linked to from the first paragraph takes me to the page I just navigated from. I can’t view past invoices or search for any with success either. So I have no option for immediately paying my bill, and get to go with one of the other options, which range from mildly to highly inconvenient for me. Great, thanks.

I’m attempting to contact the Telkom Online Billing department, and will be using this URL as the summary of the complaint; mostly because I’ve gotten so damned tired of having to spend 5 minutes repeating the issue to every new person in the call centre that I get redirected to each time I phone in an attempt to sort this crap out. Who knows, with any luck I might even be able to get some useful information about how other people could solve this if they also run into it.

Human-friendly headless torrents

A little while ago I was looking for a nice way to handle my automated torrent downloading on my server and consequently going through the options. Here are some that I’ve known about and used before:

  • rtorrent – a *nix command-line torrent client. Not very people-friendly at all in its default state, but a reasonably capable client
  • Deluge – a multiplatform client written in Python. Comes in web interface, daemon and GUI flavours (perhaps CLI too? I didn’t check)
  • Transmission – another multiplatform client. Comes in various flavours as well, bundled by default on Ubuntu (last time I saw, anyway)
  • µTorrent – an extremely lightweight torrent client for Windows and Mac
  • kTorrent – a KDE-based torrent client. Can’t find platform information on a quick search.
  • qBittorrent – a Qt-based multiplatform torrent client
  • Vuze – a Java-based torrent downloader. One of the granddaddies of torrent clients.

A few years ago I used kTorrent, until I had some issues with it not being quite stable yet during the KDE3 to KDE4 transition, and I tried out Transmission and Deluge for a while. Transmission’s user interface – like many other GTK applications – felt a bit like a dinky toy, but other than that the client mostly worked (I had some speed issues too that didn’t seem to come up with other clients, but I never investigated much because I didn’t care enough). Deluge was what I went to after this, flicking over to Vuze every now and then when I felt I could stomach the Java crap. Through all of this, I was still usually downloading from my desktop.

After a while I wanted to try out some other clients, and at this time became a user of qBittorrent and µTorrent, depending on what OS I was booted into. This was not ideal: I wanted my downloads to continue somewhere I didn’t have to care about them, regardless of what OS I was in. Which brings us to the list of requirements:

  • Keep downloading, no matter what OS I’m in
  • Be simple to interface with
  • Have the ability to do ratio-based download control as well as speed control
  • Give me some useful statistics and information. If there’s something wrong, I might want to try to debug and fix it (this is probably why Transmission failed, in retrospect)
  • Preferably have some way to automatically start things by following RSS/Atom, or something of the sort.
  • Play well with the hardware/resources I would be able to give it

So, requirement 1 essentially meant I wouldn’t be using X (not even Xvfb). I didn’t want to run a Windows VM either, and every now and then µTorrent under Wine would bomb out. So that leaves us with Deluge, rtorrent, Transmission and Vuze.

Having gotten annoyed at Transmission before, and not having enough RAM around to use Vuze without screwing over the rest of my ProLiant MicroServer, Deluge was next up on the list. It ends up being reasonably simple to get going and satisfies most/all of the requirements I had, but I did find some issues. The web interface can get *really* slow when you have a lot of torrents loaded, and deluged would get “stuck” sometimes, badly enough that it’d turn into a zombie process when terminated (this may or may not be because I run my server on Debian Testing). Obviously not ideal for uninterrupted autonomous work. So, as little as I wanted to, I ended up with rtorrent.

After a while of dicking around with the configs and automation bits, here’s a working setup with everything you may want to get started with.

First off, rssdler config (to fetch .torrent files over RSS):

[global]
downloadDir = /storage/rss/var/watch
workingDir = /storage/rss/var/rssdler/working
minSize = 0
log = 3
logFile = /storage/rss/var/log/rssdler-downloads.log
verbose = 4
cookieFile = /storage/rss/etc/rssdler/cookies.txt
cookieType = MozillaCookieJar
scanMins = 10
sleepTime = 2
runOnce = True

[source1]
link = http://site1.tld/rss
directory = /storage/rss/var/watch
regExTrue = Gutenberg|Baen
regExFalse = PDF

[source2]
link = http://site2.tld/rss
directory = /storage/rss/var/watch
regExTrue = Oranges
regExFalse = Apples

Most of those options are pretty self-explanatory, so the only ones I’ll comment on are regExTrue and regExFalse: these are used to decide what you want from the RSS feed, and what you don’t. Things that match regExTrue are kept, things that match regExFalse are discarded.

A momentary note about the cookie-file format too: using MozillaCookieJar means your cookie file should be in the Netscape cookie format. Tab-separated columns on the cookie item lines.
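For reference, a minimal cookies.txt might look like this (values made up for illustration; the columns are domain, subdomain flag, path, secure flag, expiry epoch, name and value, all tab-separated):

# Netscape HTTP Cookie File
.site1.tld	TRUE	/	FALSE	1893456000	session	abcdef123456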

I run rssdler from cron every 15 minutes (yes, I know it can run as a daemon; let’s just say I’m erring on the side of caution after the deluged experience on my box at the moment). The config line in my user’s crontab looks as follows:

# torrent stuff
*/15 * * * * /storage/rss/bin/rssdler.py -c /storage/rss/etc/rssdler/rssdler.cfg -r

Next up, the rtorrent config:

## general settings
# set our working directories and sessions folder so we can persist torrents across application sessions
directory = /storage/rss/var/content/downloading
session = /storage/rss/var/sessions
port_range = 1024-65535

# set initial download rates
upload_rate = 30
download_rate = 25

## scheduler
# watch for new torrents
schedule = watch_directory,5,5,load_start=/storage/rss/var/watch/*.torrent
schedule = untied_directory,5,5,stop_untied=
schedule = tied_directory,5,5,start_tied=

# time-based throttles
schedule = throttle_1,22:00:00,24:00:00,download_rate=25
schedule = throttle_2,00:00:00,24:00:00,download_rate=25
schedule = throttle_3,05:00:00,24:00:00,download_rate=25

## dht
dht = auto
peer_exchange = yes

## events
# notification
system.method.set_key = event.download.finished,notify_me,"execute=/storage/bin/mailnotify.sh,$d.get_name="

# move on complete
system.method.set_key = event.download.finished,move_complete,"execute=mv,-u,$d.get_base_path=,/storage/rss/var/content/done;d.set_directory=/storage/rss/var/content/done"

## encryption
encryption = allow_incoming,try_outgoing,enable_retry

## allow rpc remote control
scgi_port = 127.0.0.1:5001
#xmlrpc_dialect=i8
encoding_list = UTF-8

Most of the comments are in the appropriate sections, since rtorrent’s internal config language is a bit crap to read. Also note that I think my throttles are configured incorrectly at the moment; I still need to verify that. With rtorrent configured like this, I start it running under screen, which means it can continue running even when I’m logged out.
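The screen invocation itself is nothing special; something like the below does it (the session name is just my own choice):

# start rtorrent detached in its own named screen session
screen -dmS rtorrent rtorrent
# reattach later to check on it
screen -r rtorrent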

Lastly, the nginx config:

        location /RPC2 {
                scgi_pass   127.0.0.1:5001;
                include     scgi_params;
                scgi_param  SCRIPT_NAME  /RPC2;
        }

This last bit is necessary for me to use Transdroid, a very nifty Android app I found which can act as a frontend to a few different clients from one’s mobile phone; damn handy for checking up on the status of things from any random location.

And that’s it. The setup downloads things from an RSS feed automatically, notifies me with an email when any download completes, can be checked on quickly from my phone (and optionally from any of the other rtorrent web interfaces, which I didn’t check out), manages rates and ratios effectively, and I can even drop a .torrent file into the watch folder (by way of Dropbox and some symlinks, for instance) and expect the same behaviour as with any other torrent.

On Clouds and Wavey Hands

A friend of mine, Jonathan, was recently busy investigating some web technologies for bootstrapping a project he’s starting on and during his search for easy non-local database alternatives he came across this post that compares offerings from Microsoft and Amazon. Upon reading the post, the following quote caught my eye:

“Not surprisingly, the access to SimpleDB is much faster from inside Amazon’s network than from outside (about three times as fast). Searching the web can lead to some references about how Amazon has optimized its internal networks so that internal access to the different services is much faster than outside access. To me, it seems as a form of lock-in, not a desirable feature, …”

I’ve ranted a bit about a lack of infrastructure understanding before, and even so I still encounter something every now and then which leaves me amazed at how little people in general seem to care about how things work; or, otherwise put, at how they only care that they work. I’m reminded of a scene somewhere in the series of The Matrix movies:

Neo: Why don’t you tell me what’s on your mind, Councillor?
Councillor Harmann: There is so much in this world that I do not understand. See that machine? It has something to do with recycling our water supply. I have absolutely no idea how it works. But I do understand the reason for it to work.

Both parts of that statement hold true, and I feel it’s the latter part that people sometimes miss out on. To bring my point back to the original excerpt, I feel it’s somewhat silly to point out the higher-latency access without indicating that you attempted to understand what causes it, especially if you then want to jump to the next point of saying “it feels like lock-in”. Certainly it’s true that Amazon would try to improve the offering within their network, as it just makes sense to bundle a good services experience, but there are factors to consider when using this sort of service from elsewhere, factors which influence things to varying degrees. The foremost I’d list among these is physics: it takes time for data to travel from one location to another. There are various forms of media conversion likely to take place (light-to-copper, copper-to-light), there’s routing and switching which needs to happen, and there’s probably some service-specific firewalling, load-balancing and application-server interfacing happening too. The list goes from these “run of the mill” items which you’ll encounter on a regular basis to other things such as TCP setup time (which can also influence things in various ways depending on a whole other set of factors).
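If you want a feel for those factors yourself, even crude tooling is revealing. A quick sketch (the endpoint name is an assumption on my part; SimpleDB’s public endpoint was sdb.amazonaws.com, as far as I know):

# round-trip latency from your vantage point
ping -c 5 sdb.amazonaws.com
# the hops (and some of the conversions) along the way
traceroute sdb.amazonaws.com
# TCP connect time versus total request time
curl -s -o /dev/null -w 'connect: %{time_connect}s total: %{time_total}s\n' http://sdb.amazonaws.com/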

On a bigger scale, this sort of almost cargo-cult thinking is pervasive in various areas of life, and a quote from Arthur C. Clarke comes to mind: “Any sufficiently advanced technology is indistinguishable from magic”. At the end of the day, I’m a big advocate for understanding how something works, as well as for pushing boundaries and trying to improve things. So while I don’t think we should ever just sit back and be complacent with how things are, I do also think that we should strive to understand just that little bit more than we need to. It’s usually, perhaps always, that extra little bit which puts us ahead of just “churning” and into the position of actually producing something just a little bit better.

Even though that little bit might not be much, a few hundred/thousand/hundred-thousand/… of it adds up. Hopefully I’m not just preaching to the choir with my current audience, and someone else might also come across this post. And, as always, comments and thoughts are welcome!

IPConnect And You

The alternative title for this post would be “How your DSL connection actually works (if you’re in South Africa)”, but of course that’s far too long, so we won’t go for that. This is the one that was promised in a recent post; it covers a subject that is often not entirely understood when being discussed, and I figured I’d rather get it written before I forget about it.

So, first things first: PPPoE. PPPoE is the protocol that’s used to dial up your session to your ISP. Okay, actually, I’ll have to back up a bit further than that. To summarise very briefly from Wikipedia, “Digital subscriber line (DSL) is a family of technologies that provides digital data transmission over the wires of a local telephone network”. The gist of this is “fast internet over your phone line, while leaving the voice circuit free”. It accomplishes this by sending specific digital signals over the same line at a higher frequency (this is why POTS filters are used, see more here), and these digital signals are often ethernet frames transported over an ATM circuit. This very last bit isn’t important to the layman reader, except to understand that in the configuration we have in South Africa, it’s not the ideal way to manage a connection.

Now, there are two ways one can normally work with this traffic when you, the customer, dial in. The first is how it currently works: you dial in from your computer, and Telkom “terminates” the session. What “terminates” means in this instance is that their systems are the peer your communication speaks to (think tin-cans-with-string). The second (a scenario called Bitstream) is where your ISP would be the peer for your communication, and they would terminate the session on their LNS (L2TP Network Server). In either case, the dialing works by encapsulating a protocol called the Point-to-Point Protocol, or PPP, inside the ethernet frames (think of a school textbook with your note papers pushed into the book at all the relevant pages). So effectively the PPP packets carry your actual data, with the ethernet bits being the boat for the river that is the Public Switched Telephone Network, or PSTN.

As mentioned in the previous paragraph, Telkom terminates the PPPoE session here. When you dial in, their AAA servers get an access request for “alex@mydomain.co.za”, look up the AAA servers responsible for “mydomain.co.za” and send off an access request with the same information, essentially asking “is alex@mydomain.co.za allowed to dial in with this information that was given?”. If your ISP’s AAA servers respond “yes”, Telkom’s equipment will go ahead with setting up your connection.
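To make the realm bit concrete: the username is simply part of your dialer configuration. A minimal pppd/rp-pppoe sketch (the file path, interface name and username here are assumptions for illustration, not Telkom-specific values):

# /etc/ppp/peers/dsl-provider ; dial with: pon dsl-provider
plugin rp-pppoe.so
eth0
user "alex@mydomain.co.za"
noipdefault
defaultroute
persist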

Here’s where it gets sticky. Because it’s Telkom’s network terminating the connection, there isn’t a lot of control they can hand over to the ISP on how to handle customer sessions on their equipment at a national scale, so instead they preconfigure their routers with specific IP pools and routing information. This is why, if you dial from a certain line over and over again, you can quite possibly end up getting the same IP (because the device terminating your connection has a specific, finite set of IPs it could possibly give you). The configuration Telkom uses for this is designed only around IPv4, and around their equipment “forwarding” your traffic to your ISP once it has been de-encapsulated. Consequently, for various reasons (technical and otherwise), it is essentially impossible to deliver native IPv6 to an ADSL user in South Africa dialing up with PPPoE. This same configuration is also why all the static-IP products in the market require a bit “extra” on top of the normal dialing process.

The alternative configuration, Bitstream, is one where your ISP would be terminating all traffic, and could give you whatever services they are able to provide (within the constraints of their technical ability). Obviously this is the more desirable option, and it has been requested from Telkom for quite some time now.

Well, that’s it. I’ve tried not to go into an abundance of overly technical details in this post as I felt those could be better served elsewhere, but if there are any questions or remarks, please do leave a comment so that I can look into it and attempt to answer or clarify.

Terminal-based Quicksearch

The title of this post might be a bit misleading, since it’s not about the history-search feature you often find in shells such as zsh, which is the shell I use. Incidentally, if you don’t know about this feature, try it out! See below:

Press ctrl+r and start typing out a partial command that you’ve used previously, you should see it pop up on your commandline, ready for use. In zsh, this is the history-incremental-search-backward feature on the line editor, which you can see more of over on this page.
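If it isn’t bound for some reason, a single line in your ~/.zshrc takes care of it:

# bind ctrl+r to incremental history search
bindkey '^R' history-incremental-search-backward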

But as mentioned, this post is about something else. Some time ago I saw Jonathan Hitchcock mention use of the open(1) command on OSX and thought this was pretty nifty, leading me to look around for the equivalent on Linux. I came across ‘xdg-open’, which works with the freedesktop standards and thus generally respects your desktop-environment-of-choice’s application preferences. After using it a bit, I decided the name was unwieldy (since there are too many commands starting with ‘xdg-’), and aliased it to ‘xopen’, which has the benefit of being both short and easily tab-completable.
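The alias itself is a one-liner in my ~/.zshrc:

alias xopen='xdg-open'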

This has been working pretty well for me since then, and only recently did I come up with a slightly improved use for it. Every now and then I want to quickly check something online, and while I could certainly use lynx/elinks for this, they’re a bit painful to navigate with on many sites, so they’re not exactly ideal candidates. To the rescue comes my handy xopen alias!

# fire off a google search for all the given terms
function googsearch() {
  xopen "http://www.google.com/search?q=$*"
}

# the same, but quoted as one exact phrase
function googsearchphrase() {
  xopen "http://www.google.com/search?q=\"$*\""
}

Those are the functions I created; they expand quite easily on my shell, suiting both my laziness and my need for versatility/speed. The end effect is that they quickly fire off a query to Google in my preferred browser, which can be one alt-tab away or focused by default (depending on your DE config). Later I *might* investigate using another search engine, but my typical use is Google.
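By way of example (made-up queries, purely to show the expansion):

# opens http://www.google.com/search?q=rtorrent scgi nginx in your browser
googsearch rtorrent scgi nginx
# same, but with the terms quoted as one exact phrase
googsearchphrase human-friendly headless torrents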

The only downside I can see is that I can still only make it work on a local shell at this stage, so I’d have to see how I can make it work through ssh tunnels or somesuch. Maybe some sort of hack emulating a socket forward, the way agent forwarding is done? If anyone has any ideas, please post them in the comments; I’d be glad to hear about them.

Update: just for clarity, what I meant with the last paragraph is that I’d want to call this command (or something to the same effect) on a remote server, and have the query executed on my local machine.

South African IPv6 Usage

Over the past while, Simeon’s blog has had a few posts concerning IPv6, and these, alongside a few other posts that I’ve come across, essentially indicate a very sad state of IPv6 in South Africa.

A quick check on SixXS shows that while there are a whole lot of allocations, many aren’t seen on the internet at all. We (AS37105) have had our network fully IPv6-capable for quite some time, and we’ve even tested native IPv6 connectivity (dual-stack and IPv6-only) delivered to the customer over iBurst’s network on a PPPoE session. So, with all this IPv6 and no-one to send packets to, we started looking at who we could get online. We’ve had a pretty good relationship with JAWUG over the years, and as of last night we’re transiting a bit of best-effort IPv6 for them. One of our customers, SA Digital Villages, has also had an IPv6 allocation for some time, and their transit is now IPv6-enabled as well.

Here’s to hoping for more IPv6 in SA soon!


P.S. In another post I’ll explain why it’s hard to get IPv6 to a Telkom DSL customer in South Africa natively.

Time, NTP and Shiny Things

I see that Regardt beat me to the punch on this one, but we recently got a Meinberg timeserver going. It’s stratum 1, publicly accessible and speaks IPv6 fluently! We’ve added it to the pools, so if you use the pool servers you’re quite likely to end up on it sometime.
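If you aren’t using the pool yet, it’s a couple of lines in /etc/ntp.conf (these are the standard public pool zone names; pick a regional zone if you prefer):

server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst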

Zenoss – Find transforms

So I was looking around in one of my Zenoss installs some time ago to find which EventClasses I’d set up transforms in, but didn’t feel like digging around through the entire tree of EventClasses (a cursory check now reveals that there are 136 of them in my one installation). At the time, I solved the problem, extracted the data I needed, and then promptly forgot about it.

And then today I needed that info again. \o/ for IRC logs. To do this, connect to the dmd (on my system, which was installed from the Debian package, the command for this is su -c “/usr/local/zenoss/zenoss/bin/zendmd” zenoss; adjust it for your own system), and then run the following code:

# print every event class that has a transform defined
for i in dmd.Events.getSubEventClasses():
    if i.transform:
        print "%s :: \n%s\n\n" % (i.getOrganizerName(), i.transform)

This will give you a human-readable list of all your existing transforms, which makes it easy to find and re-use them.

Edit: this is confirmed working on 3.2.1 (and probably works on the rest of 3.x as well, post in the comments if it doesn’t). Thanks to jmp242 from #zenoss for testing.

DNS Platform Migration Fun

This post could go by the alternative title “Screw you, ISC, and thanks for making software that makes me hate DNS even more”. So let’s dive right in, shall we?

(to those who don’t care for the intermediate ranting and DNS explanations, page down for the tech bits)

There are various criticisms of the Domain Name System — the thing which enables anything on the internet to turn “www.google.com” or any other such name into something that is meaningful to a computer (see here) — but for the most part it works reasonably well. You set up some DNS software, perhaps battle with the config for a while, and then it works. But as a quote I’ve seen somewhere (and can’t find the origin of now with a quick search) says, “you can’t truly recommend some software [tool] until you can tell me why it sucks.” And ISC’s BIND is arguably a highly irritating piece of software, which has over the years led to a rise in popularity for various other options, amongst them some general free/open-source implementations as well as some commercial platforms.

(That’s the nice thing about diversity and openness — in this regard, an open protocol — you always get some choice and you can pick which one best suits your needs.)


Some years ago, long before my time at my current employer, there was a business requirement for some DNS support in our product suite, and BIND was chosen as the platform, since it’s a fairly well-known one. As time progresses, so do the things we do, and one day we found BIND was no longer sufficient for what we needed. Amongst others, things like a supermaster (a master from which a slave will accept all domain information, regardless of whether that slave knows of such a domain) and dynamic backend functionality were some of those needs.
Now some options like bind-dlz and friends existed, but none of these really suited us. In the end we decided upon PowerDNS, with our own custom software written to handle the dynamic things as business rules would require, and set forth on this path. Some time passed, with Rossi writing all the backend code, which we’ve since been successfully running in combination with PowerDNS for some time now.
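As an aside, to make “supermaster” concrete: in PowerDNS’s generic SQL backends it boils down to one row per trusted master. A sketch against the stock gmysql schema (IP and names made up for illustration):

-- any NOTIFY from 192.0.2.1 that lists ns1.mydomain.co.za as a
-- nameserver will now auto-provision the zone as a slave
INSERT INTO supermasters (ip, nameserver, account)
VALUES ('192.0.2.1', 'ns1.mydomain.co.za', 'internal');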


Of course, we still have all those old BIND-based installations to get upgraded, and this is that tale. Thankfully, the latest version of our platform was designed with exactly this sort of scenario in mind, since we have to inter-operate with other AXFR-speaking nameservers. So I think “let’s just use the config interface to add the migration host as a second slave, massage the data as required on there and then port that data over to the new platform”, even as a tiny voice in my head says “it’s never that simple and you know it.” About two hours later I’m found at my desk swearing violently about all manner of things, which is my outlet when dealing with frustrating software. I’d ended up trying to find out why BIND wasn’t actually slaving anything to my “new” nameserver, even though all the configs and zonefiles were right. Not just that: it had also at some point stopped slaving everything it should to the secondary nameserver, which at this point isn’t a worry since I’m replacing it anyway.


:: TECH ::

After the bits of the migration that matter had been figured out — such as fixing up the SQL output (from the handy zone2sql tool from pdns) that had some oddities due to what looked like multiple $ORIGIN statements in one file — it was pretty painless to move. There were some fun points, like handling multiple $INCLUDE statements in a zonefile, and *hattip* to Jonathan Hitchcock (for the pre- and post-insert idea) and Bryn Divey (for googling better than I).


So, sed trick 1, splitting the file into parts:

cat foo.zone | sed -n '1,/match/p' > firstbit
cat foo.zone | sed -n '1,/match/!p' > secondbit
cat firstbit secondbit > newfoo

Sed trick 2, reading in an external file to use it as the replacement text. We have:

# grep INCLUDE 10_in-addr_arpa.zone
$INCLUDE "/var/cache/bind/10_in-addr_arpa.zone.ns";
$INCLUDE "/var/cache/bind/10_in-addr_arpa.zone.mx";

We do (note that sed’s r command only appends the external file after the matching line, so we delete the matched $INCLUDE line in the same pass):

sed -i -e '/$INCLUDE.*\.ns.*$/ r 10_in-addr_arpa.zone.ns' -e '/$INCLUDE.*\.ns.*$/d' 10_in-addr_arpa.zone
sed -i -e '/$INCLUDE.*\.mx.*$/ r 10_in-addr_arpa.zone.mx' -e '/$INCLUDE.*\.mx.*$/d' 10_in-addr_arpa.zone

And tada, instant awesome. This reads in the 10_in-addr_arpa.zone.ns and .mx files for us, replacing the appropriate “$INCLUDE” lines with the contents of said files.


Another issue I ran into was having to generate the appropriate reverse-entry zones for all the public IP netblocks, and with two /21s and a /18 to worry about, that wasn’t something I was planning to do by hand if I could help it. So I employed a quick hack with ipcalc and dnspython to transform my /18 into its various component /24s, and then generate the reverses:
ipcalc 11.22.33.0/18 /24 | grep 'Network.*/24' | awk '{print $2}' | cut -d"/" -f 1
11.22.0.0
11.22.1.0
11.22.2.0
...
We can then easily manipulate these in python or sed or cut, depending on how hacky we feel, but I went with python since I was already using MySQLdb to insert the records after massaging them into the right form.
>>> import dns.reversename
>>> range = "10.22.0.0"
>>> print dns.reversename.from_address(range).to_text().split(".",1)[1]
0.22.10.in-addr.arpa.
And that’s it for the somewhat useful little tricks. There was a bit of discussion about delimited formats like this, and Piet Delport (see blogroll) hacked up a neat little delimited datatype which you can find over here. Quick usage instructions:
>>> d = delimited('foo.bar.baz', '.'); d.sort(); print d
bar.baz.foo
d[1:] -> 'bar.baz'
d[1:2] = ['x', 'y', 'z']; d -> 'foo.x.y.z.baz'
>>> d = delimited('0.2.1.10.in-addr.arpa', '.'); del d[0]; print d
2.1.10.in-addr.arpa.
And now as the sounds of Mogwai, Flunk and Placebo massage my tired noggin, it’s time for me to go to bed.