I'm trying to do some research for a customer who has a problem with V1Bugzilla, the integration between VersionOne and Bugzilla. Bugzilla was working the last time I needed it for testing. Now it's not. It's hard to tell what happened because I share one instance across development, testing, and support. We do that because it takes a couple of hours to set up an instance. Not only do we maintain a virtual machine with the latest stable build of Bugzilla, but we also have to maintain previous versions so we can support customers who are still on them. To complicate matters even further, V1Bugzilla requires changes to Bugzilla, so it's not even "plain, vanilla, out-of-the-box" Bugzilla that needs to get installed.
Packages and Installers
Of course, there is a Bugzilla apt package for Debian/Ubuntu and an unofficial Windows installer. Unfortunately, both are only for Bugzilla 3, even though Bugzilla 4 has been available for some time. And V1Bugzilla is packaged as a zip, so there is still a manual step to get the right Perl code from the zip into the right Bugzilla subdirectory.
Scripting can take me a bit further. Now I can get the right Perl code from the V1Bugzilla zip into the right Bugzilla subdirectory. But when I have Bugzilla installed on a VM and want the integration local (for example, when coding it), I still have to manually configure the integration to point to the Bugzilla instance. And I still have to find a way to run the scripts on new hosts. Scripting makes setup highly automated, but not fully automated.
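The copy step itself is easy to script. Here is a minimal Ruby sketch, assuming hypothetical paths: the V1Bugzilla zip has already been unpacked, and the Perl code lands in Bugzilla's extensions subdirectory (both the paths and the target subdirectory are assumptions for illustration, not the actual V1Bugzilla layout).

```ruby
require "fileutils"

# Copy the integration's Perl modules from an unpacked V1Bugzilla zip
# into a Bugzilla install. The "extensions" target directory is an
# assumption for illustration.
def install_integration(src_dir, bugzilla_dir)
  dest = File.join(bugzilla_dir, "extensions")
  FileUtils.mkdir_p(dest)
  # Find every Perl module under the unpacked zip and copy it into place.
  Dir.glob(File.join(src_dir, "**", "*.pm")).each do |file|
    FileUtils.cp(file, dest)
  end
  dest
end
```

This covers the file copy, but not pointing the integration at a particular Bugzilla instance; that part is still manual.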
The solution involves the notion of DevOps, which DevOps.com characterizes as "Helping finish what Agile development started." To put it differently, DevOps is the application of Agile development practices to infrastructure. DevOps tools are emerging that treat infrastructure as code, so people can use version control to tag, branch, and release infrastructure configurations, just like source code. This means infrastructure configuration can have a lifecycle with development, testing, and deployment, and it enables highly automated testing. The result is configuration management through automation, not documentation. Chef is one of the tools that helps implement DevOps principles. It is an open-source product with a commercial offering on top. With hundreds of open-source "cookbooks", there is a rich community sharing automation and understanding.
Chef-Client: Distributed Processing
Chef-client is aware of both packages/installers and scripts, so it can call both if you already have these building blocks. Chef-client can also process cookbooks and recipes, which are themselves cross-platform programs written in a Ruby DSL. These cookbooks and recipes use a declarative model of programming instead of an imperative one, so reading the program tells you how you want a target configured, not how to do the configuration. Chef-client gets these cookbooks and recipes from a central server, so all you have to do on a new host is install chef-client and it will do the rest.
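To make the declarative style concrete, here is a sketch of what a recipe for a Bugzilla node might look like. The cookbook layout, package name, and file paths are assumptions for illustration; the point is that each resource states a desired end state, and chef-client decides whether anything needs to change.

```ruby
# Hypothetical recipes/default.rb in a "bugzilla" cookbook.
# Declarative: each block describes the state we want, not the steps.

# Ensure the Bugzilla package is installed (package name is an assumption).
package "bugzilla3"

# Render Bugzilla's configuration from a template in the cookbook.
template "/var/www/bugzilla/localconfig" do
  source "localconfig.erb"
  owner  "www-data"
  mode   "0640"
end

# Make sure the web server is enabled and running.
service "apache2" do
  action [:enable, :start]
end
```

Running the same recipe twice is safe: if the package is already installed and the file already matches, chef-client does nothing.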
Chef-Server: Centralized Configuration
Chef-server holds the library of available cookbooks and recipes, but it also knows what to run on which nodes (hosts) and how to configure those nodes. Chef-clients can query this central store for the information they need in the cookbooks and recipes they run. For example, the recipe for the V1Bugzilla integration can query the chef-server for a Bugzilla instance.
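A sketch of what that query might look like inside a hypothetical V1Bugzilla recipe, using Chef's search API to find a node whose run list includes the bugzilla recipe. The config path, template name, and attribute usage are assumptions for illustration:

```ruby
# Ask the chef-server which nodes run the "bugzilla" recipe,
# and take the first match.
bugzilla_node = search(:node, "recipes:bugzilla").first

# Point the integration's config (path is an assumption) at that
# node's Bugzilla URL.
template "/opt/v1bugzilla/config.xml" do
  source "config.xml.erb"
  variables(bugzilla_url: "http://#{bugzilla_node["fqdn"]}/bugzilla")
end
```

This is what makes the central server more than a file store: the recipe discovers its Bugzilla instance at run time instead of having it hard-coded per host.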
DevOps is not Magic
Without Chef, it takes hours to configure each Bugzilla instance. With Chef, the configuration takes minutes. However, I do have to invest time in writing recipes and cookbooks and in configuring chef-server. My first time through, it took days to get something working. Mostly, that was the overhead of learning the Chef DSL and framework. I expect the next target to take less time. Yet there is still the matter of prioritization. The upfront investment needs to be balanced against the frequency and pain of manually configuring a new target.