This was a very cool conference. I picked up a lot of useful information, both about the open source tool Puppet itself and about infrastructure ideas in general.
What also made this conference unique is how honest the Puppet team and community were about the project’s strengths and weaknesses. Those who have deployed Puppet on a larger scale (MessageOne and Google) seemed to go through the same iterations in attempting to scale out their puppetmasters. First WEBrick (which is what I’m currently running Puppet with :) ), which is hated by all since it’s a single-process, single-threaded web server that can only handle one request at a time. Then Mongrel, where you have to manage a mongrel cluster script, feed it lots of memory, and then throw an Apache proxy server in front of it all. Now, people are starting to settle on Passenger/mod_rack, which is what I spent most of yesterday looking into and setting up. This lets Apache mount Puppet as a Rack application, so you don’t actually have to run puppetmasterd. It still requires some decent hardware, and I’m currently running my puppetmaster on a VM with 2GB of memory, so I’ll have to watch out for that. Chris, the one who introduced me to Puppet, said he still uses WEBrick for all of his DB, Tomcat, and Apache servers (I think he said something like 200 systems), and it has been working out nicely. He, like the guys at Google, also doesn’t run puppet as a daemon.
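For reference, a Passenger-based puppetmaster looks roughly like the Apache vhost below. This is just a sketch of the approach, not a tested config; every path, version, and hostname in it is an assumption you’d adjust for your own install:

```apache
# Sketch of an Apache + Passenger puppetmaster vhost -- all paths and names
# here are assumptions for illustration, not a verified configuration.
# Passenger serves the Rack app in /etc/puppet/rack (its config.ru loads
# puppetmasterd), so no standalone puppetmasterd process runs.

Listen 8140
<VirtualHost *:8140>
    SSLEngine on
    # Reuse the certs puppetmasterd already generated (paths assumed)
    SSLCertificateFile    /etc/puppet/ssl/certs/puppet.example.com.pem
    SSLCertificateKeyFile /etc/puppet/ssl/private_keys/puppet.example.com.pem
    SSLCACertificateFile  /etc/puppet/ssl/ca/ca_crt.pem
    SSLVerifyClient optional

    DocumentRoot /etc/puppet/rack/public
    RackBaseURI /
</VirtualHost>
```

Clients keep pointing at port 8140 as usual; Apache and Passenger simply replace WEBrick or Mongrel as the front end.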
Anyway, the point is, we learned a lot about the project, way more than if a sales person had come to us and just told us the things Puppet does well, or how it operates on paper (cough LANDesk cough). It was really awesome to talk with Andrew Pollock and Nigel Kersten from Google. See, I was a little unsure about Puppet in our environment, where we have multi-purpose servers, compute servers, and desktops to manage. At first glance, it seemed that most of the Puppet users out there have a homogeneous environment, and Andrew (Shafer) had stressed the concept of single-role servers. After talking with them, I felt a lot more comfortable pursuing Puppet across our servers and desktops. Did I mention they were super cool and friendly?
We also learned a lot about the Puppet developers, which had its own interesting advantage. I have a lot of respect for what Luke Kanies has been able to do, and by the end of the conference he had shown significant mastery of his work, as well as some humility in admitting what he has not been able to do and why. I was a little put off the first day, though, when both he and Andrew came off as a little arrogant and crass. It did make me step back and think, “Is this project going to be well managed in the future with personalities like this in charge? Is their answer of ‘don’t do that!’ tongue in cheek, or are they not supportive of a diverse environment?” In the end, I have more respect for the project than ever, and with it still being a young project, I hope they listened to some of the feedback. I can’t wait to see where it ends up in the next year.
Andrew, the Puppet Andrew, came up to us a lot during the conference, and he was fun to talk to; he’s very academic and had a lot of abstract concepts to discuss. He also said this was the first conference he has arranged, and I think he did a fantastic job. Jenny commented that this was the first conference where she lasted the entire duration, which says a lot about the pacing and content of PuppetCamp. I felt the same way: every session was incredibly engaging, and how Andrew had set up the democratic and chaotic Open Sessions was very impressive. Let’s put it this way: I even got up there and pitched a topic, which is something I would never have done before. Hurray for me stepping outside of my comfort zone!
Warning: side topic!
Now that I’ve had the weekend to google all the cool technologies I was exposed to, I’m also reminded why I really like having a FreeBSD server at my disposal. They had talked about CouchDB, so on a whim I did a
~> cd /usr/ports
/usr/ports> make search name=couchdb
Info: A document database server, accessible via a RESTful JSON API
B-deps: ca_root_nss-3.11.9_2 curl-7.19.6_1 erlang-lite-r13b01_6,1 gettext-0.17_1 gmake-3.81_3 icu-3.8.1_2 libiconv-1.13.1 libtool-2.2.6a nspr-4.8 perl-5.8.9_3 spidermonkey-1.7.0
R-deps: ca_root_nss-3.11.9_2 curl-7.19.6_1 erlang-lite-r13b01_6,1 gettext-0.17_1 gmake-3.81_3 icu-3.8.1_2 libiconv-1.13.1 libtool-2.2.6a nspr-4.8 perl-5.8.9_3 spidermonkey-1.7.0
Info: Simple Librairy to Allow Python Applicationto Use CouchDB
B-deps: py26-httplib2-0.5.0 py26-py-restclient-1.3.2 py26-setuptools-0.6c9 python26-2.6.2_3
R-deps: py26-httplib2-0.5.0 py26-py-restclient-1.3.2 py26-setuptools-0.6c9 python26-2.6.2_3
I did a ‘make install’, and I had a cool little CouchDB up and running. What is also cool is that FreeBSD gives you very helpful information when you install something. For example, this is what is printed out when you install the CouchDB port:
===> COMPATIBILITY NOTE:
CouchDB is still pre-stable; between 0.8 and 0.9 the database format
changed which breaks BC. In current trunk, the format changed again, so
please double-check in case you are updating an existing installation.
More info:
* http://wiki.apache.org/couchdb/Breaking_changes?action=show&redirect=BreakingChanges
* http://wiki.apache.org/couchdb/BreakingChangesUpdateTrunkTo0Dot9
See, isn't that helpful? Best of all, I didn't have to enable additional repositories, or fetch the source and its dependencies manually and then figure out which configure script flags to use... FreeBSD makes it easy, and since it automatically builds against what you already have installed, it's an incredibly stable build. Removing it is pretty simple as well, just:
> pkg_deinstall -R couchdb
---> Deinstalling 'couchdb-0.9.0_1,1'
---> Deinstalling 'erlang-lite-r13b02,1'
[Updating the pkgdb in /var/db/pkg ... - 118 packages found (-1 +0) (...) done]
---> Deinstalling 'curl-7.19.6_1'
[Updating the pkgdb in /var/db/pkg ... - 117 packages found (-1 +0) (...) done]
---> Deinstalling 'ca_root_nss-3.11.9_2'
---> Deinstalling 'spidermonkey-1.7.0'
---> Deinstalling 'nspr-4.8'
[Updating the pkgdb in /var/db/pkg ... - 116 packages found (-1 +0) (...) done]
---> Deinstalling 'gmake-3.81_3'
[Updating the pkgdb in /var/db/pkg ... - 115 packages found (-1 +0) (...) done]
---> Deinstalling 'perl-threaded-5.8.9_3'
[Updating the pkgdb in /var/db/pkg ... - 114 packages found (-1 +0) (...) done]
---> Deinstalling 'gettext-0.17_1'
---> Deinstalling 'libiconv-1.13.1'
---> Deinstalling 'icu-3.8.1_2'
---> Deinstalling 'libtool-2.2.6a'
** Listing the failed packages (-:ignored / *:skipped / !:failed)
! curl-7.19.6_1 (pkg_delete failed)
! ca_root_nss-3.11.9_2 (pkg_delete failed)
! perl-threaded-5.8.9_3 (pkg_delete failed)
! gettext-0.17_1 (pkg_delete failed)
! libiconv-1.13.1 (pkg_delete failed)
This does an upward recursive dependency removal. If a dependency is still required by another installed package, it won't get removed. For example, if Perl were a dependency of the package being removed, it wouldn't be deleted as long as other installed packages still use it. This is smart. So, above, the packages that failed to deinstall were the ones still required by other installed packages.
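To make that behavior concrete, here is a toy shell sketch of the idea. The package names and dependency lists below are made up for illustration; this is not how pkg_deinstall is actually implemented, just the recursive skip-if-still-needed logic:

```shell
#!/bin/sh
# Toy model of recursive dependency removal: delete a package, then delete
# each of its dependencies unless some other installed package still needs it.

# Invented dependency graph (deps_<pkg> lists what <pkg> depends on).
deps_couchdb="curl spidermonkey"
deps_curl="ca_root_nss"
deps_spidermonkey="nspr"
deps_othertool="curl"          # othertool still needs curl
INSTALLED="couchdb curl spidermonkey ca_root_nss nspr othertool"

deps_of() { eval echo \"\$deps_$1\"; }   # empty if the package has no deps

# Is $1 listed as a dependency of any still-installed package?
still_needed() {
  for p in $INSTALLED; do
    case " $(deps_of "$p") " in *" $1 "*) return 0 ;; esac
  done
  return 1
}

remove() {                     # remove $1, then recurse into its dependencies
  echo "---> Deinstalling '$1'"
  INSTALLED=$(echo " $INSTALLED " | sed "s/ $1 / /")
  for d in $(deps_of "$1"); do
    if still_needed "$d"; then
      echo "! $d (skipped: still required)"
    else
      remove "$d"
    fi
  done
}

remove couchdb
```

Running this removes couchdb, spidermonkey, and nspr, but leaves curl (and therefore ca_root_nss) alone because the invented othertool still depends on curl, which mirrors the “failed” lines in the real pkg_deinstall output above.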
Speaking of package management: have you ever installed something that ended up pulling in a few dozen dependencies, then wanted to uninstall that package with a “rpm -e cba8” or something equivalent? What about all the other cruft that came along with it? You would have to keep track of each dependency, specify all of them, and hope you don’t break another program. FreeBSD has a few tools to do this; one in particular, portmaster, can remove all ports that were once a dependency but are no longer used:
> portmaster -s
Information for neon28-0.28.4:
An HTTP and WebDAV client library for Unix systems
===>>> neon28-0.28.4 is no longer depended on, delete? [n] y
===>>> Delete old and new distfiles for www/neon28
without prompting? [n] y
===>>> Running pkg_delete -f neon28-0.28.4
Information for rubygem-actionwebservice-1.2.6:
I ended up removing 4 packages that were no longer used.
While CentOS and RHEL seem to be the bigger Puppet consumers, I’m still a big proponent of FreeBSD, and at work it has allowed me to quickly build an Apache + Puppet + Passenger/mod_rack stack with only the minimal dependencies installed. So, the Puppet server is still pretty lean, which means updates are smaller and faster. It still surprises me that FreeBSD is relatively unknown, even though Netcraft always lists it among the domains with the best uptime, and it has consistently grown over the years. Why do I feel like an AmigaOS fan sometimes?
Hmm, it is sort of weird that this turned into a FreeBSD ports management entry :)
Okay, final word: PuppetCamp09 was freaking awesome. There were a lot of smart developers and sysadmins there. We even got a very cool git howto, which I found useful. The crowd was very diverse, which is unusual for a conference focused on one particular project.