R.I.Pienaar

MCollective Async Result Handling

08/19/2012

This is a post in a series of posts I am doing about MCollective 2.0 and later.

Overview


The kind of application I tend to show with MCollective is very request-response orientated. You request some info from nodes and it shows you the data as they reply. This is not the typical thing people tend to do with middleware; instead they create receivers for event streams, processing those into databases or using the middleware as a job queue.

The MCollective libraries can be used to build similar applications and today I’ll show a basic use case for this. It’s generally really easy to create a consumer for a job queue using middleware, as covered in my recent series of blog posts. It’s much harder when you want to support multiple middleware brokers, pluggable payload encryption and different serializers, and add some Authentication, Authorization and Auditing into the mix; it soon becomes a huge undertaking.

MCollective already has a rich set of plugins for all of this so it would be great if you could reuse these to save yourself some time.

Request, but reply elsewhere


One of the features we added in 2.0.0 is awareness, in the core MCollective libraries, of the classical reply-to behaviour common to middleware brokers. Every request now specifies a reply-to target and the nodes send their replies there; this is how we get replies back from nodes, and if the broker supports it this is typically done using temporary private queues.

But it’s not restricted to this, let’s see how you can use this feature from the command line. First we’ll set up a listener on a specific queue using my stomp-irb application.

% stomp-irb -s stomp -p 6163
Interactive Ruby shell for STOMP
 
info> Attempting to connect to stomp://rip@stomp:6163
info> Connected to stomp://rip@stomp:6163
 
Type 'help' for usage instructions
 
>> subscribe :queue, "mcollective.nagios_passive_results"
Current Subscriptions:
        /queue/mcollective.nagios_passive_results
 
=> nil
>>

We’re now receiving all messages on /queue/mcollective.nagios_passive_results, let’s see how we get all our machines to send some data there:

% mco rpc nrpe runcommand command=check_load --reply-to=/queue/mcollective.nagios_passive_results
Request sent with id: 61dcd7c8c4a354198289606fb55d5480 replies to /queue/mcollective.nagios_passive_results

Note this client recognised that you’re never going to get replies so it just publishes the request(s) and shows you the outcome. It’s really quick and doesn’t wait for or care about the results.

And over in our stomp-irb we should see many messages like this one:

<<stomp>> BAh7CzoJYm9keSIB1QQIewg6CWRhdGF7CToNZXhpdGNvZGVpADoMY29tbWFu
ZCIPY2hlY2tfbG9hZDoLb3V0cHV0IihPSyAtIGxvYWQgYXZlcmFnZTogMC44
MiwgMC43NSwgMC43MToNcGVyZmRhdGEiV2xvYWQxPTAuODIwOzEuNTAwOzIu
MDAwOzA7IGxvYWQ1PTAuNzUwOzEuNTAwOzIuMDAwOzA7IGxvYWQxNT0wLjcx
MDsxLjUwMDsyLjAwMDswOyA6D3N0YXR1c2NvZGVpADoOc3RhdHVzbXNnIgdP
SzoOcmVxdWVzdGlkIiU2MWRjZDdjOGM0YTM1NDE5ODI4OTYwNmZiNTVkNTQ4
MDoMbXNndGltZWwrBwjRMFA6DXNlbmRlcmlkIgl0d3AxOgloYXNoIgGvbVdV
V0RXaTd6a04xRWYrM0RRUWQzUldsYjJINTltMUdWYkRBdWhVamJFaEhrOGJl
Ykd1Q1daMnRaZ3VBCmx3MW5DeXhtT2xWK3RpbzlCNFBMbnhoTStvV3Z6OEo4
SVNiYTA4a2lzK3BVTVZ0cGxiL0ZPRVlMVWFPRQp5K2QvRGY3N2I2TTdGaGtJ
RUxtR2hONHdnZTMxdU4rL3hlVHpRenE0M0lJNE5CVkpRTTg9CjoQc2VuZGVy
YWdlbnQiCW5ycGU=

What you’re looking at is a base64 encoded, serialized MCollective reply message. In this case the reply is signed using an SSL key for authenticity and has the whole MCollective reply in it.
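If you’re curious you can peek inside such a capture yourself. Here is a minimal sketch, assuming the default Marshal serializer and that you saved the base64 text to a file; reply.b64 is just an example name:

require 'base64'
require 'pp'

# Read the captured base64 text and decode it back into the serialized
# message, then deserialize the envelope. This assumes the default
# Marshal serializer; if your setup serializes with YAML, swap
# Marshal.load for YAML.load.
raw = Base64.decode64(File.read("reply.b64"))
envelope = Marshal.load(raw)

pp envelope.keys   # expect keys like :body, :hash, :senderid, :senderagent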

MCollective to Nagios Passive Check bridge


So, as you might have guessed from the use of the NRPE plugin and the queue name I chose, the next step is to connect the MCollective NRPE results to Nagios using its passive check interface:

require 'mcollective'
require 'pp'
 
# where the nagios command socket is
NAGIOSCMD = "/var/log/nagios/rw/nagios.cmd"
 
# to mcollective this is a client, load the client config and
# inform the security system we are a client
MCollective::Applications.load_config
MCollective::PluginManager["security_plugin"].initiated_by = :client
 
# connect to the middleware and subscribe
connector = MCollective::PluginManager["connector_plugin"]
connector.connect
connector.connection.subscribe("/queue/mcollective.nagios_passive_results")
 
# consume all the things...
loop do
  # get a mcollective Message object and configure it as a reply
  work = connector.receive
  work.type = :reply
 
  # decode it, this will go via the MCollective security system
  # and validate SSL etcetc
  work.decode!
 
  # Now we have the NRPE result, just save it to nagios
  result = work.payload
  data = result[:body][:data]
 
  unless data[:perfdata] == ""
    output = "%s|%s" % [data[:output], data[:perfdata]]
  else
    output = data[:output]
  end
 
  passive_check = "[%d] PROCESS_SERVICE_CHECK_RESULT;%s;%s;%d;%s" % [result[:msgtime], result[:senderid], data[:command].gsub("check_", ""), data[:exitcode], output]
 
  begin
    File.open(NAGIOSCMD, "w") {|nagios| nagios.puts passive_check }
  rescue => e
    puts "Could not write to #{NAGIOSCMD}: %s: %s" % [e.class, e.to_s]
  end
end

This code connects to the middleware using the MCollective Connector Plugin, subscribes to the specified queue and consumes the messages.

You’ll note there is very little being done here that’s actually middleware related; we’re just using the MCollective libraries. The beauty of this code is that if we later wish to employ a different middleware, a different security system, or configure our middleware connections to use TLS to ActiveMQ, nothing has to change here. All the hard stuff is done in the MCollective config and libraries.

In this specific case I am using the SSL plugin for MCollective, so the message is signed and no-one can edit the results in a MITM attack on the monitoring system. This came for free: I didn’t have to write any code here to get this ability, I just use MCollective.
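Enabling that is purely a configuration concern rather than a code one. As a reference, a rough sketch of the relevant client.cfg lines for the SSL security plugin; the key paths are examples and need to match your own setup:

securityprovider = ssl
plugin.ssl_server_public = /etc/mcollective/ssl/server-public.pem
plugin.ssl_client_private = /etc/mcollective/ssl/rip-private.pem
plugin.ssl_client_public = /etc/mcollective/ssl/rip-public.pem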

Scheduling Nagios Checks and scaling them with MCollective


Now that we have a way to receive check results from the network, let’s look at how we can initiate checks. I’ll use the very awesome Rufus Scheduler Gem for this.

I want to create something simple that reads a config file of checks and repeatedly requests my nodes, possibly matching mcollective filters, to do NRPE checks. Here’s a sample checks file:

nrpe "check_load", "1m", "monitored_by=monitor1"
nrpe "check_swap", "1m", "monitored_by=monitor1"
nrpe "check_disks", "1m", "monitored_by=monitor1"
nrpe "check_bacula_main", "6h", "bacula::node monitored_by=monitor1"

This will check load, swap and disks on all machines monitored by this monitoring box and do a bacula backup check on machines that have the bacula::node class included via puppet.

Here’s a simple bit of code that takes the above file and schedules the checks:

require 'rubygems'
require 'mcollective'
require 'rufus/scheduler'
 
# (ab)use mcollective logger...
Log = MCollective::Log
 
class Scheduler
  include MCollective::RPC
 
  def initialize(destination, checks)
    @destination = destination
    @jobs = []
 
    @scheduler = Rufus::Scheduler.start_new
    @nrpe = rpcclient("nrpe")
 
    # this is where the magic happens, send all the results to the receiver...
    @nrpe.reply_to = destination
 
    instance_eval(File.read(checks))
  end
 
  # helper to schedule checks, this will create rufus jobs that does NRPE requests
  def nrpe(command, interval, filter=nil)
    options = {:first_in => "%ss" % rand(Rufus.parse_time_string(interval)),
               :blocking => true}
 
    Log.info("Adding a job for %s every %s matching '%s', first in %s" % [command, interval, filter, options[:first_in]])
 
    @jobs << @scheduler.every(interval.to_s, options) do
      Log.info("Publishing request for %s with filter '%s'" % [command, filter])
 
      @nrpe.reset_filter
      @nrpe.filter = parse_filter(filter)
      @nrpe.runcommand(:command => command.to_s)
    end
  end
 
  def parse_filter(filter)
    new_filter = MCollective::Util.empty_filter
 
    return new_filter unless filter
 
    filter.split(" ").each do |filter|
      begin
        fact_parsed = MCollective::Util.parse_fact_string(filter)
        new_filter["fact"] << fact_parsed
      rescue
        new_filter["cf_class"] << filter
      end
    end
 
    new_filter
  end
 
  def join
    @scheduler.join
  end
end
 
s = Scheduler.new("/queue/mcollective.nagios_passive_results", "checks.txt")
s.join

When I run it I get:

% ruby schedule.rb
info 2012/08/19 13:06:46: activemq.rb:96:in `on_connecting' TCP Connection attempt 0 to stomp://nagios@stomp:6163
info 2012/08/19 13:06:46: activemq.rb:101:in `on_connected' Conncted to stomp://nagios@stomp:6163
info 2012/08/19 13:06:46: schedule.rb:25:in `nrpe' Adding a job for check_load every 1m matching 'monitored_by=monitor1', first in 36s
info 2012/08/19 13:06:46: schedule.rb:25:in `nrpe' Adding a job for check_swap every 1m matching 'monitored_by=monitor1', first in 44s
info 2012/08/19 13:06:46: schedule.rb:25:in `nrpe' Adding a job for check_disks every 1m matching 'monitored_by=monitor1', first in 43s
info 2012/08/19 13:06:46: schedule.rb:25:in `nrpe' Adding a job for check_bacula_main every 6h matching 'bacula::node monitored_by=monitor1', first in 496s
info 2012/08/19 13:07:22: schedule.rb:28:in `nrpe' Publishing request for check_load with filter 'monitored_by=monitor1'
info 2012/08/19 13:07:29: schedule.rb:28:in `nrpe' Publishing request for check_disks with filter 'monitored_by=monitor1'
info 2012/08/19 13:07:30: schedule.rb:28:in `nrpe' Publishing request for check_swap with filter 'monitored_by=monitor1'
info 2012/08/19 13:08:22: schedule.rb:28:in `nrpe' Publishing request for check_load with filter 'monitored_by=monitor1'

All the checks are loaded and splayed a bit so they don’t cause a thundering herd, and you can see the schedule is honoured. In my nagios logs I can see the passive results being submitted by the receiver.

MCollective NRPE Scaler


So taking these ideas I’ve knocked up a project that does this with somewhat better code than the above; it’s still in progress and I’ll blog about it later. For now you can check out the code on GitHub; it includes all of the above, better integrated, and should serve as a more complete example than I can realistically post on a blog.

There are many advantages to this method that come specifically from combining MCollective and Nagios. The Nagios scheduler visits hosts one by one, meaning you get a moving view of status over a 5 minute resolution. Using MCollective to request the check on all your hosts means you get a 1 second resolution: all the load averages Nagios sees are from the same narrow time period. Receiving results on a queue has scaling benefits, and the MCollective libraries are already multi-broker aware and support failover to standby brokers, which means this isn’t a single point of failure.
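That broker failover is again configuration rather than code. A minimal sketch using the ActiveMQ connector with a standby broker; the hostnames here are examples:

connector = activemq
plugin.activemq.pool.size = 2
plugin.activemq.pool.1.host = stomp1.example.net
plugin.activemq.pool.1.port = 6163
plugin.activemq.pool.2.host = stomp2.example.net
plugin.activemq.pool.2.port = 6163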

Conclusion


So we’ve seen that we can reuse much of the MCollective internals and plugin system to set up a dedicated receiver of MCollective produced data, and I’ve shown a simple use case where we’re requesting data from our managed nodes.

Today what I showed kept the request-response model but split the traditional MCollective client into two parts: one scheduling requests and another processing results. These parts could even be on different machines.

We can take this further and simply connect two bits of code together, flowing arbitrary data between them while securing the communications using the MCollective protocol. A follow-up blog post will look at that.

MCollective Batched Requests

07/23/2012

This is a post in a series of posts I am doing about MCollective 2.0 and later.

We’ve discussed Direct Addressing Mode before and today I’ll show one of the new features this mode enables.

Overview


MCollective is very fast, which is usually great. Sometimes though, such as when you’re restarting webservers, the speed and concurrency can be a problem. Restarting all your webservers at the same time is generally a bad idea.

In the past the general way to work around this was to use a fact like cluster=a to cut your server estate into named groups and then only address them based on that. This worked OK but was clearly not the best possible outcome.

Apart from this, the concurrency also meant that once a request was sent you could not ^C out of it. Any mistake made was final and processing could not be interrupted.

Since MCollective 2.0 has the ability to address nodes directly without broadcasting, it has become much easier to come up with a good solution to these problems. You can now construct RPC requests targeted at hundreds of nodes but ask MCollective to communicate with them in smaller batches, with a configurable sleep in between batches. You can ^C at any time and only batches that have already received requests will be affected.

Using on the CLI


Using this feature on the CLI is pretty simple, all RPC clients have some new CLI options:

% mco service restart httpd --batch 10 --batch-sleep 2
Discovering hosts using the mongo method .... 26
 
 * [============================================================> ] 26 / 26
 
.
.
.
 
Finished processing 26 / 26 hosts in 6897.66 ms

What you will see when running it on the CLI is that the progress bar advances in groups of 10, pauses 2 seconds and then does the next 10. In this case you could ^C at any time and only the machines in earlier batches and the 10 in the current batch will have restarted; future nodes would not yet be affected in any way.

Under the hood MCollective detects that you want to do batching, forces the system into Direct Addressing Mode and makes batches of requests. The requestid stays the same throughout, auditing works, results work exactly as before and display behaviour does not change apart from progressing in steps.

Using in code


Naturally you can also use this from your own code, here’s a simple script that does the same thing as above.

 1 #!/usr/bin/ruby
 2
 3 require 'mcollective'
 4 include MCollective::RPC
 5
 6 svcs = rpcclient("service")
 7
 8 svcs.batch_size = 10
 9 svcs.batch_sleep_time = 2
10
11 printrpc svcs.restart(:service => "httpd")

The key lines here are lines 8 and 9, which have the same behaviour as --batch and --batch-sleep.

MCollective Pluggable Discovery

07/06/2012

This is a post in a series of posts I am doing about MCollective 2.0 and later.

In my previous post I detailed how you can extend the scope of the information MCollective has available about a node using Data Plugins. Those were node-side plugins; today we’ll look at ones that run on the client.

Background


Using the network as your source of truth works for a certain style of application, but as I pointed out in an earlier post there are kinds of application where that is not appropriate. If you want to build a deployer that rolls out the next version of your software you probably want to provide it with a list of nodes rather than have it discover against the network; this way you know a deploy failed because a node is down rather than the node simply not being discovered.

These plugins give you the freedom to discover against anything that can give back a list of nodes with mcollective identities. Examples are databases, CMDBs, something like Noah or Zookeeper, etc.

To get this to work requires Direct Addressing, I’ll recap an example from the linked post:

c = rpcclient("service")
 
c.discover :nodes => File.readlines("hosts.txt").map {|i| i.chomp}
 
printrpc c.restart(:service => "httpd")

In this example MCollective is reading hosts.txt and using that as the source of truth, attempting to communicate only with the hosts discovered against that file. This, as was covered in the previous post, is in stark contrast with MCollective 1.x which had no choice but to use the network as its source of truth.

Building on this we’ve built a plugin system that abstracts this away into plugins you can use on the CLI, the web and so on. Once activated, MCollective usage on the CLI and any existing code can use these plugins without code changes.

Using Discovery Plugins


Using these plugins is the same as you’d always do discovery; in fact, as of version 2.1.0, if you use mcollective you’re already using this plugin. Let’s see:

% mco rpc rpcutil ping
Discovering hosts using the mc method for 2 second(s) .... 26
 
 * [============================================================> ] 26 / 26
.
.
---- rpcutil#ping call stats ----
           Nodes: 26 / 26
     Pass / Fail: 26 / 0
      Start Time: Fri Jul 06 09:47:06 +0100 2012
  Discovery Time: 2002.07ms
      Agent Time: 311.14ms
      Total Time: 2313.21ms

Notice the discovery message says it is using the “mc” method; this is the traditional broadcast mode as before. It’s the default mode and will remain the default mode.

Let’s look at the generic usage of the hosts.txt above:

% mco rpc rpcutil ping --nodes hosts.txt -v
Discovering hosts using the flatfile method .... 9
 
 * [============================================================> ] 9 / 9
.
.
---- rpcutil#ping call stats ----
           Nodes: 9 / 9
     Pass / Fail: 9 / 0
      Start Time: Fri Jul 06 09:48:15 +0100 2012
  Discovery Time: 0.40ms
      Agent Time: 34.62ms
      Total Time: 35.01ms

Note the change in the discovery message: it is now using the flatfile discovery method and doesn’t have a timeout. Take a look at the Discovery Time statistic; the flatfile example took a fraction of a second versus the usual 2 seconds spent discovering.

There’s a longer form of the above command:

% mco rpc rpcutil ping --disc-method flatfile --disc-option hosts.txt
Discovering hosts using the flatfile method .... 9
.
.

So you can pick a discovery method, and methods can take options. You can figure out what plugins you have available using the plugin application:

% mco plugin doc
Please specify a plugin. Available plugins are:
.
.
Discovery Methods:
  flatfile        Flatfile based discovery for node identities
  mc              MCollective Broadcast based discovery
  mongo           MongoDB based discovery for databases built using registration
  puppetdb        PuppetDB based discovery

And more information about a plugin can be seen:

% mco plugin doc mc
MCollective Broadcast based discovery
 
      Author: R.I.Pienaar <rip@devco.net>
     Version: 0.1
     License: ASL 2.0
     Timeout: 2
   Home Page: http://marionette-collective.org/
 
DISCOVERY METHOD CAPABILITIES:
      Filter based on configuration management classes
      Filter based on system facts
      Filter based on mcollective identity
      Filter based on mcollective agents
      Compound filters combining classes and facts

The discovery methods have capabilities that declare what they can do. The flatfile one, for example, has no idea about classes, facts etc., so its capabilities would only be identity filters.

If you decide to always use a different plugin than mc as your discovery source you can set it in client.cfg:

default_discovery_method = mongo

The RPC API can obviously also choose a method and supply options; the code below forces the flatfile mode:

c = rpcclient("service")
 
c.discovery_method = "flatfile"
c.discovery_options << "hosts.txt"
 
printrpc c.restart(:service => "httpd")

This has the same effect as mco rpc service restart service=httpd --dm=flatfile --do=hosts.txt

Writing a Plugin


We’ll look at the simplest plugin, the flatfile one. This plugin ships with MCollective but it’s a good example.

This plugin will let you issue commands like:

% mco service restart httpd
% mco service restart httpd -I some.host
% mco service restart httpd -I /domain/ -I /otherdomain/

So your basic identity filters with regular expression support or just all hosts.

 1 module MCollective
 2   class Discovery
 3     class Flatfile
 4       def self.discover(filter, timeout, limit=0, client=nil)
 5         unless client.options[:discovery_options].empty?
 6           file = client.options[:discovery_options].first
 7         else
 8           raise "The flatfile discovery method needs a path to a text file"
 9         end
10
11         raise "Cannot read the file %s specified as discovery source" % file unless File.readable?(file)
12
13         discovered = []
14         hosts = File.readlines(file).map{|l| l.chomp}
15
16         unless filter["identity"].empty?
17           filter["identity"].each do |identity|
18             identity = Regexp.new(identity.gsub("\/", "")) if identity.match("^/")
19
20             if identity.is_a?(Regexp)
21               discovered = hosts.grep(identity)
22             elsif hosts.include?(identity)
23               discovered << identity
24             end
25           end
26         else
27           discovered = hosts
28         end
29
30         discovered
31       end
32     end
33   end
34 end

Past the basic boilerplate, lines 5 to 11 deal with the discovery options. You’ll notice discovery options is an array: users can pass --disc-option many times and each call just gets appended to this array. We’ll just take one flat file and raise if you didn’t pass a file or if the file can’t be read.

Lines 13 and 14 set up an empty array that the selected nodes will go into and read all the hosts found in the file.

Lines 16 and 17 check whether we got anything in the identity filter; if not, we set the discovered list to all hosts in the file in line 27. The filters are arrays, so in the case of multiple -I options being passed you will have multiple entries here, and line 17 loops over all the filters. You do not need to worry about someone accidentally setting a Class filter, as MCollective will know from the DDL that you are incapable of doing class filters and will just not call your plugin with those.

The body of the loop in lines 18 to 25 just does regular expression matching or exact matching over the list, and anything found gets added to the discovered list.

In the end we just return the list of discovered nodes; you do not need to worry about duplicates in the list or sorting it or anything.
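If you want to poke at a plugin like this in isolation you can hand it a stand-in for the client object. A rough sketch, assuming the plugin file has been loaded; the OpenStruct stub is hypothetical and just mimics the options hash the real client passes in:

require 'mcollective'
require 'ostruct'
require 'pp'

# Stand-in for the RPC client; discover() only reads its options hash.
client = OpenStruct.new(:options => {:discovery_options => ["hosts.txt"]})

# An empty filter means no -I was given, so every host in the file matches.
filter = MCollective::Util.empty_filter

pp MCollective::Discovery::Flatfile.discover(filter, 2, 0, client)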

For the automatic documentation generation and input validation you saw earlier to work, you need to create a DDL file that describes the plugin and the data it can accept and return. Here’s the DDL for this plugin:

 1 metadata    :name        => "flatfile",
 2             :description => "Flatfile based discovery for node identities",
 3             :author      => "R.I.Pienaar <rip@devco.net>",
 4             :license     => "ASL 2.0",
 5             :version     => "0.1",
 6             :url         => "http://marionette-collective.org/",
 7             :timeout     => 0
 8
 9 discovery do
10     capabilities :identity
11 end

The metadata block is familiar. Set the timeout to 0 if there’s no timeout; MCollective will then not inform the user about a timeout in the discovery message. Lines 9 to 11 declare the capabilities; possible capabilities are :classes, :facts, :identity, :agents and :compound. Technically :compound isn’t usable by your plugins, as MCollective will force the mc plugin when you use any -S filters since those might contain references to data plugins that have to be resolved using the nodes as the source of truth.
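For comparison, a plugin that understands more filter types would declare them as a list. A sketch of what a more capable discovery block might look like:

discovery do
    capabilities [:classes, :facts, :identity, :agents]
end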

Finally store this in a directory like below and you can package it into a RPM or a Deb:

% tree flatfile
flatfile
└── discovery
    ├── flatfile.ddl
    └── flatfile.rb
% cd flatfile
% mco plugin package
Created package mcollective-flatfile-discovery
% ls -l *rpm
-rw-rw-r-- 1 rip rip 2893 Jul  6 10:20 mcollective-flatfile-discovery-0.1-1.noarch.rpm

Install this plugin on all your clients and it will be available to use; if you do not want to use the packages just dump the files in $libdir/discovery/.

Available Plugins


There are a few plugins available now; you saw the mc and flatfile ones here.

If you use the MongoDB based discovery system there is a fully capable discovery plugin that can work against a local MongoDB instance. This plugin has all the capabilities possible with full regular expression support and full sub collective support. I use this as my default discovery method now.

We’re also working on a PuppetDB one; it is not quite ready to publish as I am waiting for PuppetDB to get wildcard support. And finally there is a community plugin that discovers using Elastic Search.

Conclusion


These plugins conclude the big rework done on MCollective discovery. You can now mix and match any source of truth you like, even ones we as MCollective developers are not aware of, since you can write your own plugins.

Use the network when appropriate, use databases or flat files when appropriate and you can switch freely between modes during the life of a single application.

Using these plugins is fun as they can be extremely fast. The short 1 minute video embedded below (click here if it’s not shown) shows the mc, puppetdb and mongodb plugins in action.

Version 2.1.0 made these plugins available, we’re looking to bump the Production branch to support these soon.

MCollective 2.1 – Data Plugins for Discovery

06/30/2012

This is a post in a series of posts I am doing about MCollective 2.0 and later.

In my previous post I covered a new syntax for composing discovery queries and right at the end touched on a data plugin system. Today I’ll cover those in detail and show you how to write and use such a plugin.

Usage and Overview


These plugins allow you to query any data available on your nodes. Examples might be stat() information for a file, sysctl settings or Augeas matches; really anything that exists on your managed nodes and that you could potentially interact with from Ruby can be used as discovery data. You can write your own and distribute them, and we ship a few with MCollective.

I’ll jump right in with an example of using these plugins:

$ mco service restart httpd -S "/apache/ and fstat('/etc/rsyslog.conf').md5 = /51b08b8/"

Here we’re using the -S discovery statement so we have full boolean matching. We match machines with the apache class applied and then do a regular expression match over the MD5 string of the /etc/rsyslog.conf file; any machines meeting both conditions are discovered and apache is restarted.

The fstat plugin ships with MCollective 2.1.0 and newer, ready to use. We can have a look at our available plugins:

$ mco plugin doc
.
.
Data Queries:
  agent           Meta data about installed MColletive Agents
  augeas_match    Augeas match lookups
  fstat           Retrieve file stat data for a given file
  resource        Information about Puppet managed resources
  sysctl          Retrieve values for a given sysctl

And we can get information about one of these plugins; let’s look at the agent one:

$ mco plugin doc agent
Agent
=====
 
Meta data about installed MColletive Agents
 
      Author: R.I.Pienaar <rip@devco.net>
     Version: 1.0
     License: ASL 2.0
     Timeout: 1
   Home Page: http://marionette-collective.org/
 
QUERY FUNCTION INPUT:
 
              Description: Valid agent name
                   Prompt: Agent Name
                     Type: string
               Validation: (?-mix:^[\w\_]+$)
                   Length: 20
 
QUERY FUNCTION OUTPUT:
 
           author:
              Description: Agent author
               Display As: Author
 
           description:
              Description: Agent description
               Display As: Description
 
           license:
              Description: Agent license
               Display As: License
 
           timeout:
              Description: Agent timeout
               Display As: Timeout
 
           url:
              Description: Agent url
               Display As: Url
 
           version:
              Description: Agent version
               Display As: Version

This shows what query the plugin is expecting and what data it returns, so we can use this to discover all machines with version 1.6 of a specific MCollective agent:

$ mco find -S "agent('puppetd').version = 1.6"

And if you’re curious what exactly a plugin would return you can quickly find out using the rpcutil agent:

% mco rpc rpcutil get_data query=puppetd source=agent
 
devco.net                                
         agent: puppetd
        author: R.I.Pienaar
   description: Run puppet agent, get its status, and enable/disable it
       license: Apache License 2.0
       timeout: 20
           url: https://github.com/puppetlabs/mcollective-plugins
       version: 1.6

Writing your own plugin


Let’s look at writing a plugin. We’re going to write one that can query a Linux sysctl value and let you discover against that. We’ll want this plugin to activate only on machines where /sbin/sysctl exists.

When we’re done we want to be able to do discovery like:

% mco service restart iptables -S "sysctl('net.ipv4.conf.all.forwarding').value=1"

To restart iptables on all machines with that specific sysctl enabled. Additionally we’d be able to use this plugin in any of our agents:

action "query" do
   reply[:value] = Data.sysctl(request[:sysctl_name]).value
end

So these plugins really are nicely contained reusable bits of data retrieval logic shareable between discovery, agents and clients.

This is the code for our plugin:

 1 module MCollective; module Data
 2   class Sysctl_data<Base
 3     activate_when { File.exist?("/sbin/sysctl") }
 4
 5     query do |sysctl|
 6       out = %x{/sbin/sysctl #{sysctl}}
 7
 8       if $?.exitstatus == 0
 9         value = out.chomp.split(/\s*=\s*/)[1]
10
11         if value
12           value = Integer(value) if value =~ /^\d+$/
13           value = Float(value) if value =~ /^\d+\.\d+$/
14
15           result[:value] = value
16         end
17       end
18     end
19   end
20 end;end

These plugins have to be called Something_data and they go in the libdir, in data/something_data.rb.

On line 3 we use the activate_when helper to ensure we don't enable this plugin on machines without sysctl. This is the same confinement system you might have seen in Agents.

In lines 5 to 18 we run the sysctl command and do some quick and dirty parsing of the result, ensuring we return Integers and Floats so that numeric comparison works fine on the CLI.

You'd think we need to do some input validation here to avoid bogus data or shell injection, but below you will see that the DDL defines validation and MCollective will validate the input for you prior to invoking your code. This validation happens on both the server and the client. DDL files also help us generate the documentation you saw above and native OS packages, and in some cases they enable command line completion and web UI generation.

The DDL for this plugin would be:

metadata    :name        => "Sysctl values",
            :description => "Retrieve values for a given sysctl",
            :author      => "R.I.Pienaar <rip@devco.net>",
            :license     => "ASL 2.0",
            :version     => "1.0",
            :url         => "http://marionette-collective.org/",
            :timeout     => 1
 
dataquery :description => "Sysctl values" do
    input :query,
          :prompt      => "Variable Name",
          :description => "Valid Variable Name",
          :type        => :string,
          :validation  => /\A[\w\-\.]+\z/,
          :maxlength   => 120
 
    output :value,
           :description => "Kernel Parameter Value",
           :display_as  => "Value"
end

This stuff is pretty normal; anyone who has written any MCollective agents will have seen these, and the input, output and metadata formats are identical. The timeout is quite important: if your plugin is doing something like talking to Augeas, set this timeout to a longer period. When doing discovery the client will wait an appropriate period of time based on these timeouts.

With the DDL deployed to both the server and the client you can be sure people won't be sending you nasty shell injection attacks and if someone accidentally tries to access a non existing return they'd get an error before sending traffic over the network.

You're now ready to package up this plugin; we support creating RPMs and Debs of mcollective plugins:

% ls data
sysctl_data.ddl  sysctl_data.rb
% mco plugin package
Created package mcollective-sysctl-values-data
% ls -l
-rw-rw-r-- 1 rip rip 2705 Jun 30 10:05 mcollective-sysctl-values-data-1.0-1.noarch.rpm
% rpm -qip mcollective-sysctl-values-data-1.0-1.noarch.rpm
Name        : mcollective-sysctl-values-data  Relocations: (not relocatable)
Version     : 1.0                               Vendor: Puppet Labs
Release     : 1                             Build Date: Sat 30 Jun 2012 10:05:24 AM BST
Install Date: (not installed)               Build Host: devco.net
Group       : System Tools                  Source RPM: mcollective-sysctl-values-data-1.0-1.src.rpm
Size        : 1234                             License: ASL 2.0
Signature   : (none)
Packager    : R.I.Pienaar <rip@devco.net>
URL         : http://marionette-collective.org/
Summary     : Retrieve values for a given sysctl
Description :
Retrieve values for a given sysctl

Install this RPM on all your machines and you're ready to use your plugin. The version and metadata like author and license in the RPM come from the DDL file.

Conclusion


This is the second of a trio of new discovery features that massively revamped the capabilities of MCollective discovery.

Discovery used to be limited to only CM Classes, Facts and Identities; now the possibilities are endless as far as data residing on the nodes goes. This is only available in the current development series - 2.1.x - but I hope this one will be short and we'll get these features into the production supported code base soon.

In the next post I'll cover discovering against arbitrary client side data - this was arbitrary server side data.

MCollective 2.0 – Complex Discovery Statements

06/23/2012

This is a post in a series of posts I am doing about MCollective 2.0 and later.

In the past discovery was reasonably functional; certainly at the time I first demoed it around 2009 it was very unique. Now other discovery frameworks exist that do all sorts of interesting things, so we made 3 huge improvements to discovery in MCollective that again set it apart from the rest, these are:

  • Complex discovery language with full boolean support etc
  • Plugins that let you query any node data as discovery sources
  • Plugins that let you use any data available to the client as discovery sources

I’ll focus on the first one today. A quick example will be best.

$ mco service restart httpd -S "((customer=acme and environment=staging) or environment=development) and /apache/"

Here we are a hypothetical hosting company and we want to restart all the apache services for development. One of the customers though uses their staging environment as development, so it’s a bit more complex. This discovery query will find the acme customer’s staging environment and development for everyone else, and then select the apache machines out of those.

You can also do excludes and some other bits; these 2 statements are identical:

$ mco find -S "environment=development and !customer=acme"
$ mco find -S "environment=development and not customer=acme"

This basic form of the language can be described with the EBNF below:

compound = ["("] expression [")"] {["("] expression [")"]}
expression = [!|not] statement ["and"|"or"] [!|not] statement
char = A-Z | a-z | < | > | => | =< | _ | - | * | / { A-Z | a-z | < | > | => | =< | _ | - | * | / }
int = 0|1|2|3|4|5|6|7|8|9 {0|1|2|3|4|5|6|7|8|9}

It’s been extended since but more on that below and in a future post.

It’s very easy to use this filter in your code; here’s a Ruby script that sets the same compound filter and restarts apache:

#!/usr/bin/ruby
 
require "mcollective"
 
include MCollective::RPC
 
c = rpcclient("service")
c.compound_filter '((customer=acme and environment=staging) or environment=development) and /apache/'
 
printrpc c.restart(:service => "httpd")

These filters are combined with other filters, so you’re welcome to mix in Identity filters etc. using the other filter types and they will be evaluated additively, as the sketch below shows.
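A minimal sketch of that, mixing the compound filter with an identity filter; the hostname pattern is just an example and both filters must match:

c = rpcclient("service")
 
c.compound_filter 'environment=development and /apache/'
c.identity_filter '/^web\d+/'
 
printrpc c.status(:service => "httpd")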

These filters also support querying node data; a simple example of such a query can be seen here:

$ mco service restart httpd -S "fstat('/etc/httpd/conf/httpd.conf').md5 = /51b08b8/"

This will match all machines with a certain MD5 hash for the apache config file and restart them. More on these plugins in the next post, where I’ll show you how to write your own and use them.
