In the past there was a puppet-labs project called puppet-dashboard, which would let you see the state of your managed nodes. Having even a very basic "report user-interface" is pretty neat when you're pushing out a change and want to watch it being applied across your fleet of hosts.
There are some other neat features too, such as letting you identify failures easily, and spot nodes that haven't reported in recently.
This was later spun out into a community-supported project, which is now largely stale.
Having a dashboard is nice, but the current state of that software is not great. Happily the underlying mechanism is pretty simple:
- Puppet runs on a node.
- The node reports back to the puppet-master what happened.
- The puppet-master can optionally HTTP-post that report to the reporting node.
The reporting node can thus receive real-time updates and do what it wants with them (a minimal receiver is sketched below). You can even sidestep the extra server if you wish:
- The puppet-master can archive the reports locally.
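To show just how little is involved, here's a minimal sketch of such a reporting node in Python: a tiny HTTP server that accepts whatever the puppet-master POSTs and archives the raw YAML to disk. It assumes you've pointed the master's http report-processor at it (something like reports = store,http plus a matching reporturl in puppet.conf); the port and storage directory below are arbitrary choices, not anything puppet requires.

```
#!/usr/bin/env python3
# Minimal sketch of a "reporting node": accept the YAML reports which the
# puppet-master POSTs at us and archive them to disk for later parsing.
# The port and ARCHIVE path are arbitrary; adjust to taste.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

ARCHIVE = Path("/var/tmp/puppet-reports")   # hypothetical storage location

class ReportHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)

        # Save the raw YAML; parsing can happen later, offline.
        ARCHIVE.mkdir(parents=True, exist_ok=True)
        name = time.strftime("%Y%m%d%H%M%S") + ".yaml"
        (ARCHIVE / name).write_bytes(body)

        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ReportHandler).serve_forever()
```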
As for the local archiving, on my puppet-master the reports directory looks like this:
root@master /var/lib/puppet/reports # ls | tail -n4
smaug.dh.bytemark.co.uk
ssh.steve.org.uk
www.dns-api.com
www.steve.org.uk
Inside each directory is a bunch of YAML files which describe the state of the host, and the recipes that were applied. Parsing those is pretty simple; the hardest part would be making a useful/attractive GUI. But happily we have the existing one to "inspire" us.
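As a taste of how simple, here's a sketch in Python. The only wrinkle is that the files are written by Ruby, so they carry tags such as !ruby/object:Puppet::Transaction::Report which a stock parser will refuse; telling PyYAML to flatten those into plain dicts and strings is enough to get at the data:

```
#!/usr/bin/env python3
# Sketch: load a puppet report without invoking any Ruby object magic.
# Requires PyYAML ("pip install pyyaml").
import sys
import yaml

class ReportLoader(yaml.SafeLoader):
    """SafeLoader that flattens Ruby-specific tags into plain data."""

def ruby_object(loader, suffix, node):
    # e.g. !ruby/object:Puppet::Transaction::Report -> dict
    return loader.construct_mapping(node, deep=True)

def ruby_symbol(loader, suffix, node):
    # e.g. !ruby/sym changed -> "changed"
    return loader.construct_scalar(node)

ReportLoader.add_multi_constructor("!ruby/object:", ruby_object)
ReportLoader.add_multi_constructor("!ruby/sym", ruby_symbol)

if __name__ == "__main__":
    with open(sys.argv[1]) as handle:
        report = yaml.load(handle, Loader=ReportLoader)
    # Show which top-level fields this particular report contains.
    print(sorted(report))
```

Point it at one of the files above and you get back a plain dictionary of fields (host, status, metrics, and so on) to build upon.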
I think I just need to write down a list of assumptions and see if they make sense. After all, the existing installation(s) won't break; it's just a matter of deciding whether this is a useful/worthwhile way to spend some time.
- Assume you have 100+ hosts running puppet 4.x
- Assume you want a broad overview:
- All the nodes you're managing.
- Whether their last run triggered a change, resulted in an error, or logged anything.
- If so what changed/failed/was output?
- For each individual run you want to see the details of what happened.
- Assume you don't want to keep history indefinitely, just the last 50 runs or so of each host.
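That last assumption is cheap to satisfy if you work straight from the files on disk. Here's a sketch of pruning the archive, assuming the /var/lib/puppet/reports layout shown earlier and the 50-run limit from the list above:

```
#!/usr/bin/env python3
# Sketch: prune the report archive so each host keeps only its newest
# 50 runs.  Both the directory and the limit are assumptions, not
# anything puppet itself mandates.
from pathlib import Path

REPORT_DIR = Path("/var/lib/puppet/reports")
KEEP = 50

for host_dir in sorted(REPORT_DIR.iterdir()):
    if not host_dir.is_dir():
        continue
    # Report filenames are timestamps (e.g. 201707291813.yaml), so a
    # lexical sort is also a chronological one.
    reports = sorted(host_dir.glob("*.yaml"))
    for stale in reports[:-KEEP]:
        stale.unlink()
```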
Beyond that you might want to export data about the managed nodes themselves. For example you might want a list of all the hosts which have "bash" installed on them, or all nodes with the local user "steve". I've written that stuff already, as it is very useful for auditing and the like.
The hard part is that to get this extra data you need to include a puppet module to collect it. I suspect a new dashboard would be broadly interesting, but without that extra detail it might not be compelling: you can't point at a slightly more modern installation and say "Yes, this is worth migrating to." But if you have the extra meta-data you can say:
- Give me a list of all hosts running wheezy.
- Give me a list of all hosts running exim4 version 4.84.2-2+deb8u4.
And that facility is very useful when you have shellshock, or something similar, knocking at your door.
Anyway, as a hacky start I wrote some code to parse reports, avoiding the magic object-fu that the YAML would usually invoke. The end result is this:
root@master ~# dump-run www.steve.org.uk
www.steve.org.uk
Puppet Version: 4.8.2
/var/lib/puppet/reports/www.steve.org.uk/201707291813.yaml
Runtime: 2.16
Status: changed
Time: 2017-07-29 18:13:04 +0000
Resources
total -> 176
skipped -> 2
failed -> 0
changed -> 3
out_of_sync -> 3
scheduled -> 0
corrective_change -> 3
Changed Resources
Ssh_authorized_key[skx@shelob-s-fi] /etc/puppet/code/environments/production/modules/ssh_keys/manifests/init.pp:17
Ssh_authorized_key[skx@deagol-s-fi] /etc/puppet/code/environments/production/modules/ssh_keys/manifests/init.pp:22
Ssh_authorized_key[[email protected]] /etc/puppet/code/environments/production/modules/ssh_keys/manifests/init.pp:27
Skipped Resources
Exec[clone sysadmin utils]
Exec[update sysadmin utils]
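Most of that summary maps directly onto fields inside the report. Here's a rough sketch of pulling out the counters and the changed/skipped resources, reusing the loader trick from above. Note that the key names used here (metrics, values, resource_statuses, changed, skipped, file, line) are my reading of the 4.x report format, so check them against your own YAML:

```
#!/usr/bin/env python3
# Sketch: print a dump-run-style summary from a single report file.
# The key names below are assumptions about the puppet 4.x report
# format - verify them against a real report before relying on this.
import sys
import yaml

class ReportLoader(yaml.SafeLoader):
    """Flatten Ruby-specific tags into plain Python data."""

ReportLoader.add_multi_constructor(
    "!ruby/object:", lambda l, s, n: l.construct_mapping(n, deep=True))
ReportLoader.add_multi_constructor(
    "!ruby/sym", lambda l, s, n: l.construct_scalar(n))

with open(sys.argv[1]) as handle:
    report = yaml.load(handle, Loader=ReportLoader)

print(report.get("host"))
print("Puppet Version:", report.get("puppet_version"))
print("Status:", report.get("status"))

# Resource counters: each metric holds a list of [name, label, value].
for name, _label, value in report["metrics"]["resources"]["values"]:
    print(f"  {name} -> {value}")

# Changed / skipped resources, with the manifest that declared them.
for title, status in report.get("resource_statuses", {}).items():
    if status.get("changed"):
        print("Changed:", title, f'{status.get("file")}:{status.get("line")}')
    elif status.get("skipped"):
        print("Skipped:", title)
```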
Tags: puppet, puppet-dashboard