
“Maybe your problem is lint?”

And indeed it was!

One of the products we sell at work is a caching platform, something that sells quite well into many of our African clients’ networks because transit is often ridiculously expensive and every Mbps saved is USD500~2000 you can use on something else. Traditionally we’ve been deploying on HP hardware, and as of earlier this week we have some SuperMicro equipment to try out for the platform. One 1U unit, and one 3U 8-blade unit. This post is about the latter of these two.

After racking the thing and re-installing the blades (I’d taken them out to move the chassis. Side note: the clasp which holds the blades in is kinda crappy for self-locking; you need to wiggle it a bit to ensure the blade is properly in place), I started poking around on the systems. The first issue I found is that the ethernet controllers are Intel 82580s, which are not supported in the squeeze kernel we had on our PXE boot server at the time (an updated kernel which does have support is included in 6.0.4, or any version greater than 2.6.32-33). We had been informed by our supplier ahead of time that one blade was DOA and that a replacement was on the way, so I got on with preparing the other systems in the meantime (as they would form a cache cluster). Doing this, I experienced some weirdness with the power sequence. Sometimes all the blades would power on, sometimes only the first 4 bays, sometimes only 3. Sometimes I could power 5 on, one off, another one, then attempt to reverse the power sequence of the last two and not succeed. A few more combinations like this were tried, including removing a unit far enough to disconnect it from its connector and then reseating it, but suffice it to say that it didn’t make sense.
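
For reference, a quick way to check from a live/rescue shell whether the running kernel actually knows about these NICs; the commands below are generic (the 82580 family is handled by the igb driver) and the exact output will obviously differ per box:

lspci | grep -i ethernet            # should list the Intel 82580 controllers
uname -r                            # the kernel version you're actually running
modinfo igb | grep -i '^version'    # is the igb driver available, and how old is it?
dmesg | grep -i igb                 # did the driver actually claim the hardware?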

At this point I’m left with the options of removing units from the bays (to eliminate PSU overload), and of changing the order of the units in the bays to see if that makes a difference (which one would not expect, but if all the possible options have been eliminated then whatever’s left is probably the answer). As I start removing the units one by one, I notice that there’s lint on one blade’s connector. This hadn’t been there when I installed them, so I ask one of my coworkers to bring my torch over so that I can inspect the inside of the chassis. Turns out there’s a bit of lint hanging loose inside the chassis (about 6cm worth, presumably from the sleeving of one of the fans’ power connectors?), and it had somehow managed to get caught up on the connector. I remove the lint, and suddenly everything works as expected.

Lessons learned:

  • SuperMicro BMC units probably have a shared power control bus
  • If you’re seeing weird things happening, maybe it’s lint!

Dokuwiki mredirect plugin URL bug

At work we have an installation of Dokuwiki which is now slowly but surely being replaced by Jira (because some context richness is easier to get in Jira). During this migration we wanted to be able to automatically redirect people to the new Jira entry if the content has been moved. For redirecting within the wiki we’ve previously used the mredirect plugin, but we recently found that it has a bug in handling redirects to an external URL.

So, here’s the patch:

$ diff oldaction.old action.php
25c25,31
<         $url = ($p[2] == '') ? wl($p[1]) : wl($p[1]) . '#' . sectionID($p[2], $check);
---
>         if ($p[2] == '') {
>           if (preg_match("/:\/\//", $p[0])) {
>             $url = preg_replace("/^\[\[/","",preg_replace("/\]\]$/","",$p[0]));
>           } else {
>             $url = wl($p[1]);
>           }
>         } else { $url = wl($p[1]) . '#' . sectionID($p[2], $check); }
31c37
< ?>
\ No newline at end of file
---
> ?>

This is just a very basic check to see whether the matched text contains “://”, which should never appear within a Dokuwiki URL path (i.e., “http://wiki.domain.tld/ns:entry” or “http://wiki.domain.tld/ns/entry” should be the only paths one can ever run across). If it does contain this text, the Dokuwiki bracket syntax is stripped from the string and the resulting content is used verbatim as the URL to redirect to.

I’ve mailed it to the author of the plugin as well, so hopefully it’ll get patched upstream and work for other people using Dokuwiki too.

Side note: I wonder what a nice way to post small diffs like this would be, instead of just dropping them in a quote on a post. Github’s gist or somesuch?

[Update] plugin author has responded and indicated that the upstream plugin is now updated.
[Update 2] thanks Axu for pointing out that I was asleep as hell when making this patch

Some more on IPv6 in ZA

Well, there we have it. Our ministers have just said that there’s absolutely no reason it should be so hard to do:

Speaking to ITWeb yesterday, acting deputy director-general of the Presidential National Commission on Information Society and Development Themba Phiri said: “The IPv6 issue is a simple transition. All you need is the equipment that enables new registration of the domain names.”

Source: this article. I’m not entirely sure which part of this worries me the most, but there are a few strong contenders among all the apparent stupidity, cluelessness and out-of-touchness that this sort of opinion entails. But the fact that these people are policy-makers and decision-makers on things regarding technology here… well, their lack of knowledge is concerning. If their advisors are at fault for this opinion, their advisors should be fired. If their inability to recognize their advisors’ information as complete bullshit is what causes them to propagate this opinion, it’s just as concerning. How do we fix this?

[update] for the non-technical readers: basically, there’s significantly more to actually getting IPv6 onto a network than just “the registration of the domain names”. Most importantly, domain names have almost nothing to do with IPv6 (beyond being how your computer can learn that something is reachable over IPv6 as opposed to IPv4, and where to actually find the resource you’re attempting to access).
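
To illustrate the one small place where domain names do come into it: DNS simply stores separate record types for IPv4 and IPv6 addresses, and that’s roughly the extent of the overlap. A quick sketch (hostname illustrative, output omitted):

dig +short A www.example.org      # the IPv4 address(es) published for the name
dig +short AAAA www.example.org   # the IPv6 address(es), if any have been published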

Billing Failure

Alternative post title: No, really, I’m not kidding. I *really* want to give you money.

Part of the software stack/services that we have at my employer is a billing system for ISP environments. Given this, the typical features you would expect to have in your billing system for such an environment would include the ability to bill for physical connections as well as services rendered over those connections. Possibly even just having the services without the connection. Do we have all of those? Check. Other things you might want? Hmm, let’s see, online management of your products and services? Check. Ability to pay online using credit card etc? Check. Fault logging? Check. The reason I list all these things? Because I want to show that I’m not completely unfamiliar with billing systems and/or the challenges associated with them.

With that all in mind, let’s go over to the Telkom SA billing site to pay our DSL connection bill (check out all the options you have for doing this over here).

So I do all the things required for getting my bill online [edit: I actually did this a few months ago; this post is only coming up now because the issue is currently causing me problems] and then open it here, as the page says I should. Hey great, it shows me some stuff!


Not too bad so far. Let me click on that View bill online link on the side.

Well, uh, that’s nice. No listed invoices. And the My Accounts page linked to from the first paragraph takes me to the page I just navigated from. I can’t view past invoices or search for any with success either. So I have no way of immediately paying my bill, and get to go with any of the other options, which range from mildly to highly inconvenient for me. Great, thanks.

I’m attempting to contact the Telkom Online Billing department, and will be using this URL as the summary of the complaint; mostly because I’ve gotten so damned tired of having to spend 5 minutes repeating the issue to every new person in the callcenter that I get redirected to each time I phone in an attempt to sort this crap out. Who knows, with any luck I might even be able to get some useful information about how other people could solve this if they also run into it.

Human-friendly headless torrents

A little while ago I was looking for a nice way to handle my automated torrent downloading on my server and consequently going through the options. Here are some that I’ve known about and used before:

  • rtorrent – a *nix command-line torrent client. Not very people-friendly at all in its default state, but a reasonably capable client
  • Deluge – a multiplatform client written in Python. Comes in web interface, daemon and GUI flavours (perhaps CLI too? I didn’t check)
  • Transmission – another multiplatform client. Comes in various flavours as well, bundled by default on Ubuntu (last time I saw, anyway)
  • µTorrent – an extremely lightweight torrent client for Windows and Mac
  • kTorrent – a KDE-based torrent client. Can’t find platform information on a quick search.
  • qBittorrent – a Qt-based multiplatform torrent client
  • Vuze – a Java-based torrent downloader. One of the granddaddies of torrent clients.

A few years ago I used kTorrent until I had some issues with it not being quite stable yet during the KDE3 to KDE4 transition, and I tried out Transmission and Deluge for a while. Transmission’s user interface – like many other GTK applications – felt a bit like a dinky toy, but other than that the client mostly worked (I had some speed issues too that didn’t seem to come up with other clients, but never investigated much because I didn’t care for it). Deluge was what I went to after this, flicking over to Vuze every now and then when I felt I could stomach the Java crap. Through all of this, I was still usually downloading from my desktop.

After a while, I wanted to try out some other clients and at this time became a user of qBittorrent and µTorrent, depending on what OS I was booted into. This was not ideal: I wanted my downloads to continue somewhere I didn’t have to care about them, regardless of what OS I was in. Which brings us to the list of requirements:

  • Keep downloading, no matter what OS I’m in
  • Be simple to interface with
  • Have the ability to do ratio-based download control as well as speed control
  • Give me some useful statistics and information. If there’s something wrong, I might want to try to debug and fix it (this is probably why Transmission failed, in retrospect)
  • Preferably have some way to automatically start things by following RSS/Atom, or something of the sort.
  • Play well with the hardware/resources I would be able to give it

So, requirement 1 essentially meant I wouldn’t be using X (not even xvfb). I wouldn’t want to run a Windows VM either, and now and then µTorrent under Wine could bomb out. So that leaves us with Deluge, rtorrent, Transmission and Vuze.

Having gotten annoyed at Transmission before, and not having enough RAM around to use Vuze without screwing over the rest of my Proliant Microserver, Deluge was next up on the list. It ends up being reasonably simple to get going and satisfies most (if not all) of the requirements I had, but I did find some issues. The web interface can get *really* slow when you have a lot of torrents loaded, and deluged would get “stuck” sometimes, bad enough that it’d turn into a zombie process when terminated (this may or may not be because I run my server on Debian Testing). Obviously not ideal for uninterrupted autonomous work. So, as little as I wanted to, I ended up with rtorrent.

After a while of dicking around with the configs and automation bits, here’s a working setup with everything you may want to get started with.

First off, rssdler config (to fetch .torrent files over RSS):

[global]
downloadDir = /storage/rss/var/watch
workingDir = /storage/rss/var/rssdler/working
minSize = 0
log = 3
logFile = /storage/rss/var/log/rssdler-downloads.log
verbose = 4
cookieFile = /storage/rss/etc/rssdler/cookies.txt
cookieType = MozillaCookieJar
scanMins = 10
sleepTime = 2
runOnce = True

[source1]
link = http://site1.tld/rss
directory = /storage/rss/var/watch
regExTrue = Gutenberg|Baen
regExFalse = PDF

[source2]
link = http://site2.tld/rss
directory = /storage/rss/var/watch
regExTrue = Oranges
regExFalse = Apples

Most of those options are pretty self-explanatory, so the only ones I’ll comment on are regExTrue and regExFalse: these are used to decide what from the RSS feed you want, and what you don’t. Things that match regExTrue are kept, things that match regExFalse are discarded.
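
To make that concrete, here’s roughly the same filtering done with grep against a couple of made-up feed titles (the titles are purely illustrative):

# matches regExTrue and not regExFalse, so it's kept
$ echo "Baen Monthly Bundle (EPUB)" | grep -E 'Gutenberg|Baen' | grep -Ev 'PDF'
Baen Monthly Bundle (EPUB)

# matches regExFalse, so it's discarded (nothing comes out the other end)
$ echo "Baen Monthly Bundle (PDF)" | grep -E 'Gutenberg|Baen' | grep -Ev 'PDF'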

A quick note about the cookie-file format too: using MozillaCookieJar means your cookie file should be in the Netscape cookie format, with tab-separated columns on the cookie item lines.
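
For reference, a file in that format looks something like the below (values entirely made up), the columns being: domain, include-subdomains flag, path, secure flag, expiry as a unix timestamp, cookie name and cookie value, all separated by tabs:

# Netscape HTTP Cookie File
.site1.tld	TRUE	/	FALSE	1924992000	uid	abc123def456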

I run rssdler from cron every 15 minutes (yes, I know it can run as a daemon; let’s just say I’m erring on the side of caution after the deluged experience on my box at the moment). The config line for my user’s cron looks as follows:

# torrent stuff
*/15 * * * * /storage/rss/bin/rssdler.py -c /storage/rss/etc/rssdler/rssdler.cfg -r

Next up, the rtorrent config:

## general settings
# set our working directories and sessions folder so we can persist torrents across application sessions
directory = /storage/rss/var/content/downloading
session = /storage/rss/var/sessions
port_range = 1024-65535

# set initial download rates
upload_rate = 30
download_rate = 25

## scheduler
# watch for new torrents
schedule = watch_directory,5,5,load_start=/storage/rss/var/watch/*.torrent
schedule = untied_directory,5,5,stop_untied=
schedule = tied_directory,5,5,start_tied=

# time-based throttles
schedule = throttle_1,22:00:00,24:00:00,download_rate=25
schedule = throttle_2,00:00:00,24:00:00,download_rate=25
schedule = throttle_3,05:00:00,24:00:00,download_rate=25

## dht
dht = auto
peer_exchange = yes

## events
# notification
system.method.set_key = event.download.finished,notify_me,"execute=/storage/bin/mailnotify.sh,$d.get_name="

# move on complete
system.method.set_key = event.download.finished,move_complete,"execute=mv,-u,$d.get_base_path=,/storage/rss/var/content/done;d.set_directory=/storage/rss/var/content/done"

## encryption
encryption = allow_incoming,try_outgoing,enable_retry

## allow rpc remote control
scgi_port = 127.0.0.1:5001
#xmlrpc_dialect=i8
encoding_list = UTF-8

Most of the comments are in the appropriate sections, since rtorrent’s internal config language is a bit crap to read. Also note that I think my throttles are configured incorrectly at the moment; I still need to verify that. With rtorrent in this config, I start it running under screen, which means it can continue running even when I’m logged out.
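
For completeness, the screen part is nothing special; something along these lines starts it detached and lets you get back to the UI later:

screen -dmS rtorrent rtorrent    # start rtorrent in a detached session named "rtorrent"
screen -r rtorrent               # re-attach later to poke at the curses interface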

Lastly, the nginx config:

        location /RPC2 {
                scgi_pass   127.0.0.1:5001;
                include     scgi_params;
                scgi_param  SCRIPT_NAME  /RPC2;
        }

This last bit is necessary for me to use Transdroid, a very nifty Android app I found which can act as a frontend to a few different torrent clients from one’s mobile phone; extremely handy for checking up on the status of things from any random location.
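
If you want to sanity-check that the RPC path actually works before pointing Transdroid at it, something like the xmlrpc tool from the xmlrpc-c package should do the trick (hostname illustrative):

xmlrpc http://localhost/RPC2 system.listMethods    # should spit back the list of rtorrent XML-RPC methods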

And that’s it. The setup downloads things from an RSS feed automatically, notifies me with an email when any download completes, can be checked on quickly from my phone (and optionally via any of the other rtorrent web interfaces, which I didn’t check out), manages rates and ratios effectively, and I can even drop a .torrent file into the watch folder (by way of dropbox and some symlinks, for instance) and expect the same behaviour as for any other torrent.

On Clouds and Wavey Hands

A friend of mine, Jonathan, was recently busy investigating some web technologies for bootstrapping a project he’s starting on and during his search for easy non-local database alternatives he came across this post that compares offerings from Microsoft and Amazon. Upon reading the post, the following quote caught my eye:

“Not surprisingly, the access to SimpleDB is much faster from inside Amazon’s network than from outside (about three times as fast). Searching the web can lead to some references about how Amazon has optimized its internal networks so that internal access to the different services is much faster than outside access. To me, it seems as a form of lock-in, not a desirable feature, …”

I’ve ranted a bit about a lack of infrastructure understanding before, but even so I encounter something every now and then which leaves me amazed at how little people in general seem to care about how things work; or, otherwise put, at how they only care that things work. I’m reminded of one scene somewhere in the series of The Matrix movies:

Neo: Why don’t you tell me what’s on your mind, Councillor?
Councillor Harmann: There is so much in this world that I do not understand. See that machine? It has something to do with recycling our water supply. I have absolutely no idea how it works. But I do understand the reason for it to work.

Both parts of that statement hold true, and I feel that it’s the latter part that people sometimes miss out on. To bring my point back to the original excerpt, I feel it’s somewhat silly to point out the fact of higher-latency access without indicating that you attempted to understand what causes it, especially if you then want to jump to the next point of saying “it feels like lock-in”. Certainly it’s true that Amazon would try to improve the offering within their network, as it just makes sense to bundle a good services experience, but there are factors to consider when using this sort of service from elsewhere, factors which influence things to varying degrees. The foremost I’d list among these is physics: it takes time for data to get from one location to another, because there are various forms of media conversion likely to take place (light-to-copper, copper-to-light), there’s routing and switching which needs to happen, and there’s probably some service-specific firewalling, loadbalancing and application-server interfacing along the way. The list goes from these “run of the mill” items which you’ll encounter on a regular basis to other things such as TCP setup time (which can also influence things in various ways depending on a whole other set of factors).
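
As a rough back-of-the-envelope illustration of just the physics part (numbers approximate, the route length made up for the example):

# light in fibre travels at roughly 200,000 km/s (about two thirds of c)
# assume ~13,000 km of path between Johannesburg and a US-east datacentre:
#   one-way propagation delay  ~= 13,000 / 200,000 s  ~= 65 ms
#   round-trip time            ~= 130 ms
#   plus at least one extra RTT for the TCP handshake before the request is even sent
# versus a client inside the same datacentre, where the RTT is a fraction of a millisecond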

On a bigger scale, this sort of almost cargo-cult thinking is pervasive in various areas of life, and a quote from Arthur C. Clarke comes to mind: “Any sufficiently advanced technology is indistinguishable from magic”. At the end of the day, I’m a big advocate for understanding how something works, as well as for pushing boundaries and trying to improve things. So while I don’t think we should ever just sit back and be complacent with how things are, I do also think that we should strive to understand just that little bit more than we need to. I feel it’s usually, perhaps always, that extra little bit which puts us ahead of just “churning” and into the position of actually producing something just a little bit better.

Even though that little bit might not be much, a few hundred/thousand/hundred-thousand/… of it adds up. Hopefully I’m not just preaching to the choir with my current audience, but that someone else might also come across this post. And, as always, comments and thoughts welcome!

A Post Most Significantly Devoid

…of any sensationalism whatsoever. Or of any content in any form. This post is just for a URL that you can see without a crushing feeling of disappointment overwhelming you when there is, in fact, no feat or image of epic proportions in said post.

Here, have a picture of some spaghetti:

(Mental note: set up category-based RSS feeds too)

[edit] P.S. I blame daffy

Convenience Services And Consumers

(this post is now a bit late to the party…)

From Light Reading:

“Steven Glapa, senior director of field marketing at the Wi-Fi offload vendor [Ruckus Wireless], says that most operators are at least exploring how to charge for Wi-Fi now. Most, like AT&T Inc. (NYSE: T), which has 29,000 hot spots, offer it free as a value-added service today. But Glapa says operators, in general, are considering bundling in an extra cost for the off-network access into data plans and counting that usage against the data cap.”

Well, gee, imagine that. You can’t do unlimited data plans without costing it in somewhere. I’m sure that lesson was somewhere in my high school economics class ~10 years ago, although the content would have been slightly different (likely a comparison involving ice cubes in your drink would have applied). When playing in the infrastructure and/or edge services game, it’s hardly like things have no cost (or even a very minimal one) whatsoever. Perhaps this is a different scenario to the “cloud” space, where resources are extremely cheap and easy to come by. This is why some ISPs have gone with bundling services like these as VAS products on top of their normal offerings. An example of this would be M-Web in South Africa, which bundles 500MB of free hotspot data per month into their packages, available for use at any Always-On hotspot in the country.

All in all, I’m somewhat surprised it took this long for people to realize this, but I hope that the kneejerk reactions from the suppliers can be controlled somewhat, and that they rather come up with some moderately sane products to supplement internet access instead.

Oh, and can we please stop saying “free wifi” ever again? It’s a term that I’m almost convinced causes the death of baby unicorns each time it’s used.

IPConnect And You

The alternative title for this post would be “How your DSL connection actually works (if you’re in South Africa)”, but of course that’s silly long so we won’t go for that. This post is the one that was promised in a recent post, because it’s a subject that is often not entirely understood when being discussed, and I figured I’d rather get it done before I forget about it.

So, first things first: PPPoE. PPPoE is the protocol that’s used to dial up your session to your ISP. Okay, actually, I’ll have to back up a bit further than that. To summarise very briefly from Wikipedia, “Digital subscriber line (DSL) is a family of technologies that provides digital data transmission over the wires of a local telephone network”. The gist of this is “fast internet over your phone line, while leaving the voice circuit free”. It accomplishes this by sending specific digital signals over the same line at a higher frequency (this is why POTS filters are used, see more here), and these digital signals are often ethernet frames transported over an ATM circuit. This very last bit isn’t important to the layman reader, except to understand that in the configuration we have in South Africa, it’s not the ideal way to manage a connection.

Now there are two ways one can normally work with this traffic when you, the customer, dial in. The first is how it currently works: you dial in from your computer, and Telkom “terminates” the session. What “terminates” means in this instance is that their systems are the peer that your communication speaks to (think tin-cans-with-string). The second (a scenario called Bitstream) is where your ISP would be the peer for your communication, and they would terminate the session on their LNS (L2TP Network Server). In either case, how this dialing works is by encapsulating a protocol called the Point-to-Point Protocol, or PPP, inside the ethernet frames (think of a school textbook with your note papers pushed into the book at all the relevant pages). So effectively the PPP packets carry your actual data, with the ethernet bits being the boat for the river that is the Public Switched Telephone Network, or PSTN.
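
To put the nesting into a picture, the rough layering on a local ADSL line looks something like this (simplified):

your data (IP packets)
  inside PPP
    inside PPPoE
      inside ethernet frames
        carried as ATM (AAL5) cells over the DSL line itself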

As mentioned in the previous paragraph, Telkom terminates the PPPoE session here. When you’re dialing in, their AAA servers get an access request for “alex@mydomain.co.za”, look up the AAA servers responsible for “mydomain.co.za” and send off an access request with the same information, essentially asking “is alex@mydomain.co.za allowed to dial in with the information that was given?”. If your ISP’s AAA servers respond “yes”, Telkom’s equipment will go ahead with attempting to set up your connection.
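
Roughly sketched out (names matching the example above, arrows purely illustrative), the exchange looks like this:

you --(PPPoE dial-in)--------------------------> Telkom's access equipment
Telkom's access equipment --(access request)---> Telkom's AAA servers
Telkom's AAA --(proxied access request)--------> mydomain.co.za's AAA servers
mydomain.co.za's AAA --("yes, let them in")----> Telkom's AAA
Telkom's access equipment then brings your session up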

Here’s where it gets sticky. Because it’s Telkom’s network terminating the connection, there isn’t a lot of control that they can give over to the ISP on how to handle customer sessions on their equipment at a national scale, so instead they go for preconfiguring their routers with specific IP pools and routing information. This is why, if you dial from a certain line over and over again, you can quite possibly end up getting the same IP (because the device terminating your connection has a specific, finite set of IPs it could possibly give you). The configuration which Telkom uses for this is designed only around IPv4, and around their equipment “forwarding” your traffic on to your ISP once it has “de-encapsulated” it. Consequently, for various reasons (technical and otherwise), it is essentially impossible to deliver native IPv6 to an ADSL user in South Africa dialing up with PPPoE. This same configuration is also why all the static IP products in the market require a bit “extra” on top of just a normal dialing process.

The alternative configuration, Bitstream, is one where your ISP would be terminating all traffic, and could give you whatever services they are able to provide (within the constraints of their technical ability). Obviously the latter is the more desired option, and has been requested from Telkom for quite some time now.

Well, that’s it. I’ve tried not to go into an abundance of overly technical detail in this post, as I felt that could be better served elsewhere, but if there are any questions or remarks, please do leave a comment so that I can look into it and attempt to answer or clarify.

Terminal-based Quicksearch

The title of this post might be a bit misleading, since it’s not about the history-search feature you often find in shells such as zsh, which is the shell I use. Incidentally, if you don’t know about this feature, try it out! See below:

Press ctrl+r and start typing out a partial command that you’ve used previously, and you should see it pop up on your commandline, ready for use. In zsh, this is the history-incremental-search-backward feature on the line editor, which you can read more about over on this page.
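
If it isn’t already bound in your setup, the zsh incantation for it is simply:

bindkey '^R' history-incremental-search-backward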

But as mentioned, this post is about something else. Some time ago I saw Jonathan Hitchcock mention use of the open(1) command on OSX and thought this was pretty nifty, leading to me looking around for the equivalent on Linux. I came across ‘xdg-open’, which works with the freedesktop standards and thus generally respects your desktop-environment-of-choice’s application preferences. After using it a bit, I decided it was unwieldy (since there were too many commands starting with ‘xdg-’), and aliased it to ‘xopen’, which has the benefit of being both short and easily tab-completable.
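
The alias itself is about as simple as it gets; in ~/.zshrc:

alias xopen='xdg-open'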

This has been working pretty well for me since then, and only recently did I come up with a slightly improved use of it. Every now and then I want to quickly look something up online, and I could certainly use lynx/elinks for this, but they’re a bit painful to navigate on many sites, so they’re not exactly ideal candidates. To the rescue comes my handy xopen alias!

# search google for whatever arguments are given
function googsearch() {
  xopen "http://www.google.com/search?q=$*"
}

# same, but wrapped in quotes so google treats it as an exact phrase
function googsearchphrase() {
  xopen "http://www.google.com/search?q=\"$*\""
}

Those are the functions I created, and they expand quite easily on my shell, satisfying both my laziness and my need for versatility/speed. The end effect is that they quickly fire off a query to google in my preferred browser, which can be one alt-tab away or focused by default (depending on your DE config). Later I *might* investigate using another search engine, but my typical use is Google.

The only downside I can see to this is that I can still only make it work on a local shell at this stage, so I’d have to see how I can make it work through ssh tunnels or somesuch. Maybe some sort of hack emulating a socket forward, the way agent forwarding is done? If anyone has any ideas, please post them in the comments; I’d be glad to hear about them.

Update: just for clarity, what I meant with the last paragraph is that I’d want to call this command (or something to the same effect) on a remote server, and have the query executed on my local machine.
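
One way I can imagine doing it, sketched here as an untested idea rather than anything I actually run: keep a tiny listener on the local machine that opens every line it receives as a URL, and use an SSH remote forward so the remote shell can reach it. Netcat flags differ between flavours (this assumes the OpenBSD netcat), and the port is arbitrary.

On the local machine, loop forever, opening each line received as a URL:

while true; do
  nc -l 127.0.0.1 8899 | while read -r url; do xdg-open "$url"; done
done

Connect to the server with a remote forward, so that its 127.0.0.1:8899 points back at yours:

ssh -R 8899:127.0.0.1:8899 myserver

And on the remote end, a variant of the search function that just ships the URL back down the tunnel:

function googsearch() {
  printf 'http://www.google.com/search?q=%s\n' "$*" | nc 127.0.0.1 8899
}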