Monthly Archives: October 2012

Zenoss Device Listing

Handy if you ever need some form of automated report in Zenoss: here’s a code snippet that prints out a list of devices per device class, formatted in Markdown markup.

typehash = {}
for d in dmd.Devices.getSubDevices():
    # group device names by their device class path
    if d.getDeviceClassPath() not in typehash:
        typehash[d.getDeviceClassPath()] = []
    typehash[d.getDeviceClassPath()].append(d.viewName())

for key in typehash:
    # each device class becomes a Markdown heading with a bulleted list
    print "### %s\n" % (key)
    for item in typehash[key]:
        print "*   %s" % (item)
    print ""

Run it in the Zenoss console, or anything else with a dmd connection. Tested on 2.5.x, but it should be trivial to port it forward if it breaks.

(PS: I could probably fix up that last bit to use .iteritems() instead…)
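Untested, but I’d guess something along these lines (same typehash as above; iteritems() being the Python 2 spelling):

for key, items in typehash.iteritems():
    # walk key/value pairs directly instead of re-indexing typehash
    print "### %s\n" % (key)
    for item in items:
        print "*   %s" % (item)
    print ""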

Post-jump, things everywhere die

It’s always a bit interesting to check out stats for things on the internet after a major event like Felix Baumgartner’s jump from space today. Here’s a graph for JINX, one of the two .za INXs.

Interestingly, it seems to have all come from only one of the two GGCs (Google Global Caches) that are visible via that path.

Elsewhere we see similar drops, and also some of the GGCs dying (502s were served up for a while). Here are the CAR-IX stats:

During the stream, some people were handed off to Akamai edge nodes. I don’t have that much information on what was happening inside all the various networks, though. It will be interesting to see that come to light soon from the likes of Renesys and such.

Update: I see the first image from this page made it onto one of the .za news syndicators without credit. Nice of them.

Old tricks, new coins; same problems

In setting up some mirrors recently, I’ve come to learn that rsync’s algorithm by default doesn’t deal well with long-haul TCP combined with lots of small files. And when I say “doesn’t deal well”, I mean in the region of “transfers at about a third of what a single-session TCP speed test manages over the same path”. I still need to set aside some time to find a nice set of optimization flags to tweak that up a bit more; that’s a later worry, though a first guess follows.
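For what it’s worth, an untested starting point (hostname hypothetical; the flags are standard rsync, but I haven’t measured this combination):

# untested first guess; -W (--whole-file) skips the delta
# algorithm, which buys nothing on an initial copy anyway
sender$ rsync -a -W --progress mirror/ mirror.example.net:/srv/mirror/

For masses of small files, running a few of these in parallel over separate subdirectories probably helps more than any single flag, since per-file round trips are what hurt on a long path.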

So, the point of this post:

receiver$ nc -l 1234 | pv | tar xf -
sender$ tar cf - moz | pv | nc receiver.domain.tld 1234

And there you have a minimal-CPU-usage streaming file delivery setup. Of course, this doesn’t deal with resuming an interrupted stream, verification, or anything else; it just gets the bulk over. After that you can run an rsync with big block lengths over the lot, and get any damaged files fixed using that.
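For that fix-up pass, something like this should do (same hosts and directory as above; -B/--block-size sets the delta block length, and the 128 KiB value is just a guess):

# hedged sketch of the repair pass: -c forces full checksums, since
# tar preserves sizes and mtimes and rsync's quick check would
# otherwise skip everything; -B 131072 uses big delta blocks
sender$ rsync -a -c -B 131072 moz/ receiver.domain.tld:moz/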

And the “old trick” part of this? As far as I know, this is pretty much how the whole tape drive thing used to work[0]. I’ve just added some internet in the middle.

[0] – I wouldn’t know for certain; it was before my time.