
Last night I had a bit of a mental dump on Twitter about structured and unstructured data when communicating with a cluster of servers – Twitter fails at this kind of stuff, so I figured I'd follow up with a blog post.

I started off asking for a list of tools in the cluster admin space and got some great pointers, which I am reproducing here:

fabric, cap, func, clusterssh, sshpt, pssh, massh, clustershell, controltier, rash (related), dsh, chef knife ssh, pdsh+dshbak and of course mcollective. I was also sent a list of ssh related tools which is awesome.

The point I feel needs to be made is that, in general, these tools just run commands on remote servers. They are not aware of the structure of a command's output, what denotes pass or fail in the context of that command, and so on. Basically, the commands people run were designed over the ages to be looked at by human eyes and parsed by a human mind. Yes, they are easy to pipe and grep and chop up, but ultimately they were always designed to be run on one server at a time.

The parallel ssh'ers run these commands in parallel and you tend to get a mash of output. The output mixes STDOUT and STDERR, and output from different machines is often multiplexed together, so you get a stream of text that is hard to decipher even on 2 machines, not to mention 200 at once.

Take as an example a simple yum command to install a package:

% yum install zsh
Loaded plugins: fastestmirror, priorities, protectbase, security
Loading mirror speeds from cached hostfile
372 packages excluded due to repository priority protections
0 packages excluded due to repository protections
Setting up Install Process
Package zsh-4.2.6-3.el5.i386 already installed and latest version
Nothing to do

When run on one machine you pretty much immediately know what's going on: the package was already there so nothing got done. Now let's see cap invoke:

# cap invoke COMMAND="yum -y install zsh"
  * executing `invoke'
  * executing "yum -y install zsh"
    servers: ["web1", "web2", "web3"]
    [web2] executing command
    [web1] executing command
    [web3] executing command
 ** [out :: web2] Loaded plugins: fastestmirror, priorities, protectbase, security
 ** [out :: web2] Loading mirror speeds from cached hostfile
 ** [out :: web3] Loaded plugins: fastestmirror, priorities, protectbase
 ** [out :: web3] Loading mirror speeds from cached hostfile
 ** [out :: web3] 495 packages excluded due to repository priority protections
 ** [out :: web2] 495 packages excluded due to repository priority protections
 ** [out :: web3] 0 packages excluded due to repository protections
 ** [out :: web3] Setting up Install Process
 ** [out :: web2] 0 packages excluded due to repository protections
 ** [out :: web2] Setting up Install Process
 ** [out :: web1] Loaded plugins: fastestmirror, priorities, protectbase
 ** [out :: web3] Package zsh-4.2.6-3.el5.x86_64 already installed and latest version
 ** [out :: web3] Nothing to do
 ** [out :: web1] Loading mirror speeds from cached hostfile
 ** [out :: web1] Install       1 Package(s)
 ** [out :: web2] Package zsh-4.2.6-3.el5.x86_64 already installed and latest version
 ** [out :: web2] Nothing to do
 ** [out :: web1] 548 packages excluded due to repository priority protections
 ** [out :: web1] 0 packages excluded due to repository protections
 ** [out :: web1] Setting up Install Process
 ** [out :: web1] Resolving Dependencies
 ** [out :: web1] --> Running transaction check
 ** [out :: web1] ---> Package zsh.x86_64 0:4.2.6-3.el5 set to be updated
 ** [out :: web1] --> Finished Dependency Resolution
 ** [out :: web1]
 ** [out :: web1] Dependencies Resolved
 ** [out :: web1]
 ** [out :: web1] ================================================================================
 ** [out :: web1] Package      Arch            Version                Repository            Size
 ** [out :: web1] ================================================================================
 ** [out :: web1] Installing:
 ** [out :: web1] zsh          x86_64          4.2.6-3.el5            centos-base          1.7 M
 ** [out :: web1]
 ** [out :: web1] Transaction Summary
 ** [out :: web1] ================================================================================
 ** [out :: web1] Install       1 Package(s)
 ** [out :: web1] Upgrade       0 Package(s)
 ** [out :: web1]
 ** [out :: web1] Total download size: 1.7 M
 ** [out :: web1] Downloading Packages:
 ** [out :: web1] Running rpm_check_debug
 ** [out :: web1] Running Transaction Test
 ** [out :: web1] Finished Transaction Test
 ** [out :: web1] Transaction Test Succeeded
 ** [out :: web1] Running Transaction
 ** [out :: web1] Installing     : zsh                                                      1/1
 ** [out :: web1]
 ** [out :: web1]
 ** [out :: web1] Installed:
 ** [out :: web1] zsh.x86_64 0:4.2.6-3.el5
 ** [out :: web1]
 ** [out :: web1] Complete!
    command finished
zlib(finalizer): the stream was freed prematurely.
zlib(finalizer): the stream was freed prematurely.
zlib(finalizer): the stream was freed prematurely.

Most of this stuff scrolled off my screen and at the end all I had was the last bit of output. I could scroll up and still figure out what was going on: 2 of the 3 already had it installed, one got it. Now imagine the output of 100 or 500 of these machines all mixed in. Just parsing this output would be prone to human error and you're likely to miss that something failed.

So here is my point: your cluster management tool needs to provide an API around the everyday commands like package management, process listing and so on. It should return structured data, and you can use that structured data to create tools better suited to working with large numbers of machines. Because the output is standardized, it can provide generic tools that just do the right thing out of the box for you.

With the package example above, knowing what all 500 machines spewed out while installing isn't important; you just want to know the result in a nice way. Here's what mcollective does:

$ mc-package install zsh
 
 * [ ============================================================> ] 3 / 3
 
web2.my.net                      version = zsh-4.2.6-3.el5
web3.my.net                      version = zsh-4.2.6-3.el5
web1.my.net                      version = zsh-4.2.6-3.el5
 
---- package agent summary ----
           Nodes: 3 / 3
        Versions: 3 * 4.2.6-3.el5
    Elapsed Time: 16.33 s

In the case of a package you just want to know the version after the event and a summary of status. Just by looking at the stats I know the desired result was achieved; if different versions were listed I could very quickly identify the problem ones.
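
That kind of summary falls out naturally once every node returns the same fields. As a purely illustrative sketch (not mcollective's actual display code), here is how a client could tally a version summary from structured replies, assuming each reply is a hash shaped like the one reproduced near the end of this post, with the package version under :data:

# Illustrative only: assumes replies look like
#   {:sender => "web1.my.net", :statuscode => 0, :data => {:version => "zsh-4.2.6-3.el5"}}
def version_summary(replies)
  counts = Hash.new(0)

  # tally identical versions together, one count per node
  replies.each { |reply| counts[reply[:data][:version]] += 1 }

  counts.each { |version, nodes| puts "    Versions: #{nodes} * #{version}" }
end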

Here’s another example – NRPE this time:

% mc-rpc nrpe runcommand command=check_disks
 
 * [ ============================================================> ] 47 / 47
 
 
dev1.my.net                      Request Aborted
   CRITICAL
          Exit Code: 2
   Performance Data:  /=4111MB;3706;3924;0;4361 /boot=26MB;83;88;0;98 /dev/shm=0MB;217;230;0;256
             Output: DISK CRITICAL - free space: / 24 MB (0% inode=86%);
 
 
Finished processing 47 / 47 hosts in 766.11 ms

Notice here that I didn't use an NRPE-specific mc- command, I just used the generic rpc caller, and the caller knows that I am only interested in seeing the results from machines that are in WARNING or CRITICAL state. If you ran this on your console you'd see that 'Request Aborted' would be red and 'CRITICAL' would be yellow, immediately pulling your eye to the important information. Also note how the result shows human-friendly field names like 'Performance Data'.

The formatting, the highlighting, the knowledge to show only failing resources and the human-friendly headings all happen automatically. No client-side UI programming is required; you get all of this for free simply because mcollective focuses on putting structure around outputs.
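
To see why it comes for free, consider what a completely generic display loop can do once every reply carries a common status code. This is a rough sketch, assuming the reply hashes look like the one reproduced at the end of this post:

# Rough sketch of a generic 'only show the problems' loop; a real client would
# also map keys like :perfdata to friendly labels such as 'Performance Data'.
replies.each do |reply|
  next if reply[:statuscode] == 0               # OK replies stay quiet

  puts "#{reply[:sender]}   #{reply[:statusmsg]}"
  reply[:data].each { |field, value| puts "   #{field}: #{value}" }
end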

Here's the earlier package install example with the standard rpc caller rather than a specialized package frontend:

% mc-rpc package install package=zsh
Determining the amount of hosts matching filter for 2 seconds .... 47
 
 * [ ============================================================> ] 47 / 47
 
Finished processing 47 / 47 hosts in 2346.05 ms

Everything worked: all 47 machines have the package installed and your desired action was taken, so there is no point in spamming you with pages of junk; who wants to see all the Yum output? Had an install failed, you'd have had a usable error message just for the host that failed. The output is equally usable on one host or a thousand, with very little margin for human error in knowing the result of your request.

This works because mcollective puts a standard structure around responses: each response has an absolute success value that tells you whether the request failed or not, and by using this you can build generic CLI, web and other tools that display large amounts of data from a network of hosts in a way that is appropriate and context aware.

For reference here’s the response as received on the client:

{:sender=>"dev1.my.net",
 :statuscode=>1,
 :statusmsg=>"CRITICAL",
 :data=>
  {:perfdata=>
    " /=4111MB;3706;3924;0;4361 /boot=26MB;83;88;0;98 /dev/shm=0MB;217;230;0;256",
   :output=>"DISK CRITICAL - free space: / 24 MB (0% inode=86%);",
   :exitcode=>2}}
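
And because a reply is just data, nothing about it is tied to a terminal: the same structure can be handed to a web console, a report generator or a monitoring bridge. As a trivial, purely illustrative example, serialising that exact reply for some other tool takes one line of Ruby:

require 'json'

# The reply hash as received above, unchanged
reply = {:sender     => "dev1.my.net",
         :statuscode => 1,
         :statusmsg  => "CRITICAL",
         :data       => {:perfdata => " /=4111MB;3706;3924;0;4361 /boot=26MB;83;88;0;98 /dev/shm=0MB;217;230;0;256",
                         :output   => "DISK CRITICAL - free space: / 24 MB (0% inode=86%);",
                         :exitcode => 2}}

puts JSON.pretty_generate(reply)   # hand the same data to a web UI, dashboard or alerting bridge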

Only by thinking about CLI and admin tasks in this way do I believe we can take the Unix utilities we call on remote hosts and turn them into something appropriate for large scale parallel use that doesn't overwhelm the human at the other end with information. Additionally, since this is a computer-friendly API, those tools become usable in many other places like code deployers: for example, enabling continuous deployment built on robust use of Unix tools via such an API.
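
To make that concrete: from memory, the mcollective RPC client of the time let a deploy script drive the same package agent in a handful of lines. Treat the exact calls below as an approximation rather than a reference:

#!/usr/bin/env ruby
require 'mcollective'

include MCollective::RPC

mc = rpcclient("package")      # same agent the mc-package CLI uses
mc.progress = false

# Install and act on the structured replies instead of scraping yum output
mc.install(:package => "zsh").each do |reply|
  abort "install failed on #{reply[:sender]}: #{reply[:statusmsg]}" unless reply[:statuscode] == 0
end

mc.disconnect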

There are many other advantages to this approach. Requests are authorized at a very fine-grained level, and requests are audited. API wrappers are code that is versioned and can be tested in development, which makes the margin for error much smaller than running random Unix commands ad hoc. Finally, whether you're using the code ad hoc on a CLI as above or in your continuous deployer, you share the same code that you've already tested and trust.