
What does puppet manage on a node?

Sometimes it’s nice to figure out which resources on a machine are being managed by Puppet.  Puppet keeps a state file called localconfig.yaml, in either YAML or Marshal format; it’s full of useful information, so I wrote a quick script to parse it and show you what’s being managed.

Typical output is:

Classes included on this node:
        nephilim.ml.org
        common::linux
        <snip>

Resources managed by puppet on this node:
        service{smokeping: }
                defined in common/modules/smokeping/manifests/service.pp:6

        file{/etc/cron.d/mrtg: }
                defined in common/modules/puppet/manifests/init.pp:201
<snip>

It will show all classes and all resources, including where in your manifests each resource comes from.  Unfortunately, for resources created by defines it shows the define as the source, but I guess you can’t have it all.

You can get the code here; it’s pretty simple, just pass it the path to your localconfig.yaml file.  It supports both YAML and Marshal formats.

The file also holds every property of each resource, so you can easily extend this to print a lot of other information; just use something like pp to dump the contents of the Puppet::TransObject objects to see what’s possible.

SSH socks proxies hanging

I use SSH’s SOCKS proxy feature a lot; in fact I use it all the time.  Most of my browsing, IM and so on goes out over it via my hosted virtual machines.
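
For reference, the setup I mean is just SSH’s dynamic port forwarding; a minimal sketch (the user and host name are placeholders) looks like:

```shell
# Open a SOCKS proxy on local port 1080, tunnelled through a remote
# host; -f backgrounds ssh, -N runs no remote command, -C compresses,
# -q keeps it quiet.
ssh -D 1080 -f -C -q -N user@vm.example.net

# Then point the browser, IM client, etc at SOCKS5 localhost:1080.
```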

I do this to simplify my life for things like firewall rules, and also to get around things like age blocks on mobile networks.  I work for a site deemed adult by most of them, so I can’t even see my Nagios without age verifying.

Recently they have been driving me nuts: every now and then the whole session would just lock up and sit there doing nothing.  I’ve not seen this happen before and was a bit stumped.

Turns out, it chooses to speak TCP/53 sometimes instead of UDP/53 for resolving.  I’m not sure why exactly; I’ve not tried to figure out what queries cause this, though I know there are limits on response sizes which will force it to go over TCP.  Why it’s only started doing this now I don’t know; maybe an update changed the behavior.  I’ve never had TCP/53 open on the cache.

My firewall was blocking TCP/53 to the local cache, so this would lock up the whole SSH session.  Maybe the whole ssh process is single threaded, so waiting in SYN_SENT just hangs everything.  That’s a bit sucky; I might need a better proxy.
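
The fix on my side was just a firewall change; a hedged sketch with iptables, run on the cache host, would be something like:

```shell
# Allow DNS queries to the local cache over TCP as well as UDP, so
# resolvers that fall back to TCP/53 don't hang in SYN_SENT.
iptables -A INPUT -p udp --dport 53 -j ACCEPT
iptables -A INPUT -p tcp --dport 53 -j ACCEPT
```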

Imposter Alert!

You’d be thinking, based on the last two posts, that someone is trying to convince the world that I’ve gone mad and do actually like Debian.

Actually, I am letting some other people guest blog here.  The first is Mark Webster, aka LSD.  He’s a developer, systems dude and all-round kewl guy working in London on all sorts of interesting stuff, most recently optimizing Linux kernels to get insane amounts of packets per second out of them.

Look out for more great posts from Mark hopefully detailing more of his experiences tuning kernels and such.

I’d also be interested to hear from other like-minded people who want to guest blog here.  Over the next while I’ll take out some of the links and stuff that makes this site personal, to make it more friendly to guest bloggers.

Compiling Custom Kernels in Debian

<historylesson>

Things aren’t how they used to be.

It seems I spent most of the ’90s and the early ’00s (damn you Y2K!) building my own kernels. The preposterous thought of sticking with the default kernel of your chosen distribution simply never crossed a lot of minds. For one thing, your hardware was pretty much guaranteed not to work out of the box, and RAM was expensive! It was in your best interests to tweak the hell out of the settings to get the best performance out of your hardware, and to avoid compiling in any unnecessary code or modules, to reduce your system memory footprint.

This involved much learning of new terminology and options, many failures and unbootable systems, and many trips to the local steakhouse or coffee shop, since each compile would take hours on your trusty 80386 or 80486.

Things aren’t the way they used to be, thank goodness. Chances are, the vanilla kernel you received with your latest Masturbating Monkey Ubuntu 13.0 installation performs well and works with most of your hardware. You don’t need to roll your own kernel. In fact, you should probably avoid it, especially if you’re thinking of installing servers in a live production environment.

Well, usually. Sometimes, you really need to tweak some code to get the feature or performance you were counting on, or try out some awesome new patch which might just revolutionise the systems you are developing.

</historylesson>

I’m a sucker for Debian. I love dpkg and apt(-itude). The package manager is powerful and I enjoy using it. I like everything except building packages from source. It’s rarely as straightforward as it should be, and sometimes it’s incredibly difficult to obtain a package which installs the same way and into the same places with the same features as the upstream pre-built package that you’re supposedly building.

Building kernel packages is worse yet. Far worse. When rolling your own kernel, especially if you don’t want the package manager to install the latest version over your own, you are forced to play ball with apt, and you must play by apt’s rules.

I’ve tried a multitude of incantations of dpkg-buildpackage, debuild, make-kpkg, etc. All I want to be able to do is patch the kernel source, make some changes, append a custom version tag, and build a .deb which I can safely install, yet each HOWTO or set of instructions I tried failed to do this to my (misguided?) specifications. I had particularly nasty problems with the grub update post-inst scripts in all cases.
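
For the record, the shape of what I’m after, using kernel-package’s make-kpkg, is roughly this (the source path, patch file and version tag are examples, not a tested recipe):

```shell
# Patch and configure the kernel source; the patch file is hypothetical.
cd /usr/src/linux-source-2.6.26
patch -p1 < ~/my-feature.patch
make menuconfig                 # tweak the config

# Build a .deb with a custom version tag appended, so the result is
# distinguishable from (and not silently replaced by) the distro kernel.
make-kpkg --rootcmd fakeroot --initrd --append-to-version=+custom1 kernel_image

# Install the result; the post-inst scripts should update grub.
dpkg -i ../linux-image-2.6.26+custom1_*.deb
```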


Introductions

Hello World.

I’m Mark, and I do a lot of programming, designing of systems, working ridiculous hours and ranting about many things, and I am frequently guilty of re-inventing wheels (which shall henceforth be referred to as either ‘improvement’ or ‘learning’ :-). Currently, I am involved in re-inventing, er, designing a new suite of telephone conference call back-end systems for a rapidly expanding conference call company, from the ground up.
Doing this kind of work on the carrier grade level involves the convergence of a lot of technologies, and there’s a truckload of R&D involved, which is bloody fantastic since I get bored too easily when there’s nothing new to learn, or not enough diversity.
I arrived in this part of the IT industry after doing some weird things:
  • Five years in the games industry (hellishly boring; trust me – I’ll explain why another time)
  • A few years developing bespoke systems, providing services and Linux “appliances” for businesses around South Africa
  • Loads of freelance development on various platforms (mobile handsets, 8-bit embedded systems, even Windows apps)
  • An entire childhood & adolescence involved (misspent?) in the demoscene. Epic fun. Low-level programming, register fiddling, cycle counting and reverse engineering is the shit!
Anyway, talk is cheap, and I’ve clogged the Intertubes quite enough!
Coming up next, something that has been a constant thorn in my side as a wretched Debian user: building custom kernels.