This post is from the XebiaLabs blog and has not been updated since the original publish date.
Part 1: What's Next with Continuous Integration & Continuous Delivery
As more and more teams advance from Continuous Integration into Continuous Delivery, we sat down with Sarah Goff-Dupont, Product Marketing Manager for Atlassian, maker of Bamboo, to talk about where it's headed. The following is the first in a 2-part series.

What trends in Continuous Integration (CI) are you seeing at Atlassian?

The two biggest trends we're seeing in our own CI practice are relentless integration on all our active code branches (of which there are LOTS), and getting ever more aggressive about tightening the feedback loop with faster builds.

Atlassian dev teams have adopted the practice of "story branching", where a branch is created for every user story or bug fix implemented. This would have been a huge pain with a centralized VCS like Subversion, but our switch to Git makes it feasible (even sensible!) because of Git's great support for branching and merging. These branches are typically active for only a few days, and there used to be a lot of overhead in setting up CI for them, so it didn't always get done. This resulted in quite a few "surprises" being introduced into the main code line, and it's one of the reasons we added better support for this workflow in Bamboo, Atlassian's Continuous Integration server. Now, with the ability to discover new branches in your repo and automatically apply your CI plan to them, we get the best of both worlds: isolated development on branches so the main line isn't polluted, AND no-overhead Continuous Integration on those branches so we're confident about what we merge to main when it's time.

With regard to faster feedback, we are constantly measuring, analyzing and iterating on our build processes. We have builds that run as many as 25 batches of tests in parallel. We have "inner-loop" builds that simply compile and run unit tests as a sanity check; if those succeed, they trigger "outer-loop" builds that go through the longer-running functional test suites.
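The inner-loop/outer-loop pattern described above can be sketched as a small orchestration script. This is a hedged illustration, not Bamboo's actual implementation: the shell commands, batch split, and function names are all assumptions, and a real CI server would handle scheduling, agents, and reporting.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run(cmd):
    """Run one build step as a shell command; return True on success."""
    return subprocess.run(cmd, shell=True).returncode == 0

def inner_loop(compile_cmd, unit_test_cmd):
    """Fast sanity check: compile and run unit tests only."""
    return run(compile_cmd) and run(unit_test_cmd)

def outer_loop(batch_cmds):
    """Longer-running functional suites, split into parallel batches
    (Bamboo-style parallelism, simulated here with threads)."""
    with ThreadPoolExecutor(max_workers=len(batch_cmds)) as pool:
        return all(pool.map(run, batch_cmds))

# Hypothetical usage -- only trigger the outer loop when the inner loop is green:
# if inner_loop("make compile", "make unit-test"):
#     outer_loop([f"make functional BATCH={i}" for i in range(25)])
```

The key design point from the interview is the short-circuit: the expensive parallel batches never start unless the cheap compile-and-unit-test pass is green, which keeps the feedback loop tight.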
We have developed a plugin for Bamboo that measures how long each individual build step takes and graphs it so you can easily identify bottlenecks. We have also done the yeoman's work of inspecting our tests to ensure they are efficiently designed. One team was able to cut their build time by 20% by refactoring a single test utility that was running through unnecessary steps. 20%!

What is your vision for the future of CI?

My near-term vision for CI is simply wider adoption of story branching, shored up by a robust suite of tests that run several times a day on all those branches, plus automatically provisioned environments where the build is automatically deployed so UI-level and remote API tests can be run. And the pieces are pretty much available to us now. Automated environment creation and provisioning is probably the least mature link in that chain, but it's maturing fast. The good folks at Amazon AWS, Heroku, and elsewhere are making cloud-based environments so easy and cheap that they are essentially disposable. Couple that with tools like Puppet, Chef and Capistrano, and automated provisioning becomes a reality. Automated deployment is another area seeing growing adoption, and it's being better served by tools like XL Deploy and LiveRebel. It sounds cheesy, but in many ways the future is now. The software community has a pretty good idea of what the best practices are for a killer CI system, and we have ever-improving tools to support them -- now we just need more software teams to actually put all this to use.

What is the biggest possible improvement you think could be made to CI?

Thinking farther ahead, I'd love to see some kind of tool that grabs stack traces from production logs, matches them up with the user actions that led to them, then uses those reverse-engineered reproduction steps to auto-generate a new test for your suite.
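The build-step timing idea mentioned above could work roughly like this sketch. It is not the actual Bamboo plugin; the step names, the `timed_step` helper, and the sleep durations are invented purely to illustrate timing each step and sorting to surface bottlenecks.

```python
import time
from contextlib import contextmanager

durations = {}

@contextmanager
def timed_step(name):
    """Record how long a named build step takes (hypothetical helper)."""
    start = time.perf_counter()
    try:
        yield
    finally:
        durations[name] = time.perf_counter() - start

def bottlenecks(durations, top=3):
    """Return the slowest steps, longest first -- the graph's worst offenders."""
    return sorted(durations.items(), key=lambda kv: kv[1], reverse=True)[:top]

# Invented example steps; real steps would be checkout, compile, tests, etc.
with timed_step("checkout"):
    time.sleep(0.01)
with timed_step("unit-tests"):
    time.sleep(0.05)
with timed_step("compile"):
    time.sleep(0.02)

for name, secs in bottlenecks(durations):
    print(f"{name}: {secs:.2f}s")
```

Graphing these per-step durations over successive builds is what makes a regression like the 20% test-utility slowdown jump out.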
There are tools out there that auto-generate test cases based on static analysis of your code, and tools that analyze huge amounts of data from logs. But I'm unaware of any that combine those concepts and generate tests based on actual user behavior in the wild.

What added functionality will CI require as organizations move to public, private and hybrid cloud environments? Will organizations use a single product or platform to manage the “code to cache” cycle in a modern cloud environment, or will we see integrated combinations of “best of breed” products?

It's a lot to ask of any one tool to handle that whole cycle and do it really well. Tools already exist that handle small chunks of that cycle really well, so I think it's a matter of integrating and orchestrating them. The role of CI servers is to conduct that orchestra (if you will) and report on it. For example, Bamboo doesn't do the nuts-and-bolts compilation of your code -- Maven or Ant does. The same is true of tools that handle environment/server creation, deployments and code analysis. CI servers give those instruments their cue and collect all the data from them. It's a paradigm that allows for both power and flexibility. So I think we'll see more and more tools that can connect to each other and allow teams to create the code-to-cache ecosystem that works best for them.

You mentioned a lot of recent cases of customers looking to move from Continuous Integration to Continuous Delivery (CD) and deployment. How are they justifying this step? What is the business case for CD?

Better quality and less wasted time is the crux of it. First, your deployments are repeatable and reliable because a script or tool is going to perform the same steps in the same order every time -- not something we can say for humans deploying code manually. So you spend less time untangling things when a deploy goes awry.
And with deploy steps codified like that, there's no need to panic when your favorite sysadmin comes down with the flu on release day. You don't have to reverse engineer the process or have them walk you through it over the phone while in a fever-induced delirium. Second, it's simply more efficient to have a machine perform repeatable deploy steps while your Ops team focuses on more interesting work. You also get increased traceability with automated deploys, which is really important for the financial and health care industries, and for anyone concerned with PCI and SOX compliance.

And just the act of delivering fresh builds frequently tightens the feedback loop for developers and reduces context-switching, even if your continuous delivery scheme only covers pre-production environments. If a developer's commit breaks something unexpected, an integration test can catch it within a few minutes and a fix can be deployed shortly thereafter. Without continuous delivery, that cycle could take days, during which time the developer's brain has already moved fully on to another piece of code. Frequent, automated deploys are a huge time-saver in the long run.

… Check out the second part in the series!

_______________________________________________________________
As Product Marketing Manager for Atlassian Bamboo, Sarah has been working in software for over 10 years as a manual tester, automated test engineer, scrum master and now as a marketer for Atlassian Bamboo. As a champion of Agile development and automation, she loves talking to developers about their triumphs and their frustrations, then blending those insights with her own experience and sharing it. It's all about making life easier for the nerds.