by R.I. Pienaar | Sep 9, 2009 | Front Page
Reductive Labs recently released version 0.25.0 of Puppet. There were some teething problems, but I got most of them sorted out by upgrading Mongrel.
Today I decided to upgrade 60 of my nodes, and they have slowly been running through a for loop all day. The results on machines with many file resources are staggering.
One node that uses snippets heavily has 450 file resources and used to take a while to run; here are some before and after stats:
Before:

Finished catalog run in 235.63 seconds
Finished catalog run in 231.43 seconds

After:

Finished catalog run in 42.74 seconds
Finished catalog run in 32.14 seconds
Very impressed.
by R.I. Pienaar | Sep 1, 2009 | Front Page
I did a fresh install of Snow Leopard on my MacBook and soon realized my Samba shares were broken in Finder, though they still worked from the CLI.
Worse still, once I tried to access the shares Finder would basically be dead; you'd need to Force Quit it to make it work again.
Eventually I reached for tcpdump and Wireshark and found it was the pesky .DS_Store files again: it seems my QNAP was denying access to them, and Finder did not cope well with this.
A quick bit of hackery in my smb.conf solved it. The offending lines were:
veto files = /.AppleDB/.AppleDouble/.AppleDesktop/.DS_Store/:2eDS_Store/Network Trash Folder/Temporary Items/TheVolumeSettingsFolder/.@__thumb/.@__desc/
delete veto files = yes
Once I removed the .DS_Store entries from the veto list and restarted Samba, my shares were working again in Snow Leopard. A bit annoying, but not too hard in the end.
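For reference, here is roughly what the veto line could look like with the .DS_Store entries dropped; this is a sketch only, assuming the rest of the QNAP defaults stay as they are:

veto files = /.AppleDB/.AppleDouble/.AppleDesktop/Network Trash Folder/Temporary Items/TheVolumeSettingsFolder/.@__thumb/.@__desc/
delete veto files = yes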
by R.I. Pienaar | Aug 31, 2009 | Code, Front Page
Often while writing Puppet manifests you find yourself needing data: things like the local resolver, SMTP relay, SNMP contact, root aliases and so on. Once you start thinking about it, the amount of data you deal with is quite staggering.
It’s strange then that Puppet provides no way to work with this in a flexible way. By flexible I mean:
- A way to easily retrieve it
- A way to choose data per host, domain, location, data center or any other criterion you could possibly wish for
- A way to provide defaults that allow your code to degrade gracefully
- A way to make it a critical error should expected data not exist
- A way that works with LDAP nodes, External Nodes or normal node{} blocks
This is quite a list of requirements, and in vanilla Puppet you'd need case statements, if statements and the like to meet it.
For example, here's a use case: set the SNMP contact and root user alias. Some machines belonging to a specific client should have different contact details than others, and some individual machines should have different details again. There should be a fallback default should nothing be set specifically for a host.
You might attempt to do this with case and if statements:
class snmp::config {
    if $fqdn == "some.box.your.com" {
        $contactname  = "Foo Sysadmin"
        $contactemail = "sysadmin@foo.com"
    }

    if $domain == "bar.com" {
        $contactname  = "Bar Sysadmin"
        $contactemail = "sysadmin@bar.com"
    }

    if $location == "ldn_dc" && (! $contactname && ! $contactemail) {
        $contactname  = "London Sysadmin"
        $contactemail = "ldnops@your.com"
    }

    if (! $contactname && ! $contactemail) {
        $contactname  = "Sysadmin"
        $contactemail = "sysadmin@you.com"
    }
}
You can see that this might work, but your data ends up scattered all over the code, and soon enough you'll be nesting selectors in case statements inside if statements. It's totally unwieldy, not to mention not reusable throughout your code.
Worse, if you wish to add more specifics in the future you will need tools like grep and find to track down every place in your code where you do this and update them all. You could of course put all this logic in one file, but it would be awful; I've tried, and it's not viable.
What we really want is just the following. It should take care of all the code above, and you should be able to call it wherever you want with complete disregard for the specifics of the overrides in the data:
$contactname = extlookup("contactname")
$contactemail = extlookup("contactemail")
I've battled for ages with ways to deal with this and have come up with something that fits the bill perfectly. I've been using it and promoting it for almost a year now, and so far it has been a total life saver.
Sticking with the example above, first we should configure a lookup order that will work for us; here is what I actually use:
$extlookup_precedence = ["%{fqdn}", "location_%{location}", "domain_%{domain}", "country_%{country}", "common"]
This sets up the lookup code to first look for data specified for the host, then for the location the host is in, then the domain, then the country, and eventually a set of defaults. For a host some.box.your.com in location ldn_dc with domain your.com and, say, country uk, the files consulted would be some.box.your.com.csv, location_ldn_dc.csv, domain_your.com.csv, country_uk.csv and finally common.csv.
My current version of this code uses CSV files to store the data simply because it was convenient and universally available with no barrier to entry. It would be trivial to extend the code to use a database, LDAP or other system like that.
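The script also needs to be told where the CSV files live; a datadir setting along these lines is assumed here, so check the comments in the script for the exact variable name and path:

$extlookup_datadir = "/etc/puppet/manifests/extdata"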
For my example, if I put the following into the file some.box.your.com.csv:
contactemail,sysadmin@foo.com
contactname,Foo Sysadmin
And if I put this in common.csv:
contactemail,sysadmin@you.com
contactname,Sysadmin
The lookup code will use this data whenever extlookup("contactemail") gets called on that machine, but will use the default when called from other hosts. If you follow the logic above you'll see this completely replaces the earlier case and if statements with simple data files.
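The domain-level override from the original class maps onto a data file in exactly the same way; given the precedence above, a file called domain_bar.com.csv containing the following would do it:

contactemail,sysadmin@bar.com
contactname,Bar Sysadmin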
Using a system like this you can model all your data needs, and keep your location-, machine- and domain-specific data outside of your manifests.
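Put together, the snmp::config class from earlier shrinks to something like this sketch; the comments describe the behaviour under the precedence configured above:

class snmp::config {
    # per-host, per-domain and per-location overrides live in the CSV
    # files; common.csv supplies the fallback values, and a key that is
    # missing everywhere, with no default given, is a critical error
    $contactname  = extlookup("contactname")
    $contactemail = extlookup("contactemail")
}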
The code is very flexible; you can reuse variables from your code inside your data, for example:
ntpservers,1.pool.%{country}.ntp.org,2.pool.%{country}.ntp.org
In this case, if you have $country defined in your manifest, the code will use that variable and substitute it into the answer. This snippet of data also shows that arrays are supported.
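As a hypothetical example of consuming that array, where the module and template path are assumptions and not from the original:

$ntpservers = extlookup("ntpservers")

file { "/etc/ntp.conf":
    # an ERB template could iterate over the ntpservers array to
    # emit one server line per entry
    content => template("ntp/ntp.conf.erb"),
}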
Here is another use case:
package{"screen":
ensure => extlookup("pkg_screen", "absent")
} |
package{"screen":
ensure => extlookup("pkg_screen", "absent")
}
This code ensures that, unless otherwise specified, screen is not installed on any of my servers. I could now decide that all machines in a domain, a location or a country, or specific hosts, should have screen installed, simply by setting it to present in the relevant data file.
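For instance, given the precedence above, installing screen on every machine in bar.com would just mean adding a line like this to domain_bar.com.csv:

pkg_screen,present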
This makes the code not only configurable, but configurable in a way that can suit every possible case, since everything hangs off the precedence defined above. If your use case does not rely on countries, for example, you can just replace the country entry with whatever works for you.
I use this code in all my manifests and it has helped me build an extremely configurable set of them. It has proven very flexible: I can use the same code for different clients in different industries, with different needs and network layouts, without changing it.
The code as it stands is available here: http://www.devco.net/code/extlookup.rb
Follow the comments in the script for install instructions and full usage guides.
by R.I. Pienaar | Aug 31, 2009 | Code, Front Page
Yesterday I released version 0.4 of my Ruby PowerDNS development framework.
Version 0.3 was feature complete but lacked decent error handling in some cases, which resulted in weird, unexplained crashes when things didn't work as hoped; for example, syntax errors in records could kill the whole thing.
Version 0.4 is a big push for stability. I've added tons of exception handling, and there should now be very few cases of unexpected termination; I know of only one, which is when the log can't be written to. All other cases should be logged and recovered from in some hopefully sane way.
I've written loads of unit tests using Test::Unit and have created a little testing harness that can be used to test your records without putting them on the server. Using this you can, for example, test GeoIP-based records easily, since you can specify any source address.
Overall I think this is a production ready release; it would be a 1.0 release were it not for some features I wish to add before calling it that. Those features concern logging stats and consuming data from external sources, and they will be my next priorities.
by R.I. Pienaar | Aug 24, 2009 | Uncategorized
So I use Hetzner a lot for my machines; I have about 10 to 15 of their machines now across various clients and am mostly quite happy with them. They provide a service that matches the price, i.e. good enough.
One area of their service really grates on me though: they give you old machines, and when those machines fail they replace them with other old machines; likewise for drives and so on.
On more than one occasion now I have had hard drives fail, only to see them replaced with other shitty drives. Each time they claim the drives are well tested, and each time they pull the old 'it could be the cable' trick before finally replacing the machine and the drive.
Since this has happened to me every single time I've changed a disk so far, I have to wonder whether this is everyone's experience.
From where I sit it's simple: they made a choice to take out drives reported broken by someone, test them, and put them back when their tests fail to find a problem. They do this to save money, knowing full well that the drives will fail; all they're doing is shifting the risk onto their clients, while the clients keep subsidizing their expansion.
So, given that this is the quality of service they're aiming at, surely once this policy bites a good long-standing customer, offering some kind of payback for the inconvenience would be good business practice? Apparently not.
This is pretty poor: even after I complained to them, they swapped my chassis and once again put a disk with more than 6,000 hours under its belt in my machine.
So I guess you need to be pretty sure your softraids are set up properly if you want to use this company; their support stance is clear:
I’m sorry but we don’t promise anywhere that we built always new hardware into our servers. I can only ensure you that all hardware is always well tested and without any problem before we build it into a server.
I.e., screw you, we don't care about any amount of evidence or repeated failures, we take zero responsibility for our equipment, and we'll just keep taking your money.