This post is from the XebiaLabs blog and has not been updated since the original publish date.
Think You’re Doing DevOps? Think Again
What’s the point of DevOps? While there are many answers to that question, I think it boils down to something pretty simple: breaking down the walls between everyone in the software delivery chain (Dev, test, Ops, release managers, etc.) in order to deliver software from code to production (or idea to business value) faster, more efficiently, and more reliably.
With that definition in mind—and at the risk of being pelted with tomatoes—I’m going to say two things:
1. If you’re using Jenkins for continuous integration (CI) to get code to your Dev server and calling it DevOps, you’re not doing DevOps.

2. If you’re using a configuration automation tool like Puppet or Chef to provision your environments and calling it DevOps, you’re not doing DevOps either.

Both approaches involve significant scripting overhead, which pulls your talented (and expensive!) software engineers away from building new features and fixing defects. So, while CI and configuration automation are super important, they’re just individual pieces that solve discrete problems in the software delivery process. DevOps requires thinking holistically about the entire pipeline, including all its people, tools, and processes, from end to end, so that it’s optimized to bring value to the business.
How Do I Get to DevOps?

To achieve the benefits of DevOps, you need these key things:

1. Visibility across people, processes, and tools. If you can’t see what’s going on across the whole development and delivery chain, you can’t optimize it for speed and quality. Where are the bottlenecks? Is a release at risk and why? With unintegrated point tools, you’re left with an incomplete picture of your application delivery process. Enterprises require an end-to-end orchestration platform that connects all the tools in the software development lifecycle.
2. Fully automated deployments. Point tools are code based. If you depend on them for deployments, your developers will spend more time writing and maintaining unique deployment scripts than creating software. Developers won’t be happy, handoffs between Dev and Ops will still be painful, and you’ll never be able to standardize your deployment process.
3. Empowerment of everyone in the pipeline. Software delivery is not just the responsibility of development teams. As you move closer to production, it also involves designers, product managers, architects, operations directors, release managers, QA staff, business analysts, and more. Products that are purely code-centric don’t meet the needs of large enterprises where people of varying skills and responsibilities are required for getting software to production and ultimately to customers. In addition to your developers, it’s important to empower these other critical team members to work efficiently.
4. The ability to scale across an enterprise. Back to scripting. In an enterprise, you can’t standardize across teams and users working on hundreds of applications in hundreds of different deployment environments if your developers are writing scripts. Some vendors offer workflow capabilities for planning deployments (or include a few out-of-the-box deployment steps), but you’ll just go from writing and maintaining a mass of scripts to writing and maintaining all or most of the steps in your workflows. That won’t scale either. No matter how talented your developers are or how many of them you have, it is absolutely inefficient to use scripting or workflows to create software release pipelines.
5. Process controls for ensuring and tracking compliance. How do you ensure you’re meeting compliance obligations if you have hundreds of different deployment processes? How do you keep an audit trail if you don’t have an efficient way to keep track of and report on all the intricacies of who did what and when every time an application or a deployment environment changes? These are unavoidable requirements in an enterprise, especially in highly regulated industries like financial services. Open source and point solutions alone are not enough to meet these demands.
6. Intelligence about how well the pipeline is working. When everything in the delivery chain is connected, you can analyze the effectiveness of the pipeline and see whether it’s yielding business value. Are we delivering faster? Did we meet our release goals? Which features are customers adopting? You can’t answer these questions with unintegrated point tools.
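To make the ideas behind standardized deployments (point 2) and compliance tracking (point 5) concrete, here is a minimal, hypothetical Python sketch (illustrative only, not XebiaLabs code) of a deployment that runs the same declarative steps for every application and records an audit trail of who did what and when as a side effect:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    """Who did what, and when — the raw material of a compliance report."""
    user: str
    action: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class Deployment:
    """One standardized deployment: the same steps run for every app,
    and every state change is recorded automatically."""
    STEPS = ["validate", "stage-artifacts", "deploy", "smoke-test"]

    def __init__(self, app: str, version: str, environment: str):
        self.app = app
        self.version = version
        self.environment = environment
        self.audit_trail: list[AuditEvent] = []

    def run(self, user: str) -> None:
        for step in self.STEPS:
            # In a model-based platform, each step is a reusable task,
            # not a per-team shell script.
            self.audit_trail.append(AuditEvent(user=user, action=step))

d = Deployment("payments-service", "2.4.1", "production")
d.run(user="release-manager")
print([e.action for e in d.audit_trail])
# → ['validate', 'stage-artifacts', 'deploy', 'smoke-test']
```

The point of the sketch: because the steps and the audit record live in the platform rather than in per-team scripts, every team gets the same process, and the compliance trail comes for free.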
Check out this white paper to learn the 11 DevOps “black holes” you can easily get sucked into… and how to avoid them!
Application Release Automation – The Closest Thing to DevOps?

Full-fledged DevOps calls for a framework that unifies all your DevOps tools, so you can automate, organize, control, and scale your software delivery pipeline. That’s where Application Release Automation (ARA)—sometimes called Continuous Delivery and Release Automation (CDRA) or Application Release Orchestration (ARO)—has proven to be the great “unifier” of all the steps and tools in the software delivery cycle. ARA may not be the most descriptive term, but it enables a decidedly effective path forward for organizations looking to truly deliver on the promise of DevOps.

Application Release Automation integrates all the products and processes required to get application code into production. It connects related point tools, such as continuous integration and provisioning, so companies can standardize deployments—including compliance requirements—across teams and projects.

[Image: The XebiaLabs DevOps Platform for Application Release Automation]

By offering one central place to manage the entire chain, ARA provides thirty-thousand-foot visibility. Everyone can precisely determine the status of a release, find bottlenecks, manage tasks, keep an audit trail, and do just about anything else needed to keep the end-to-end process well-oiled and efficient. Data collected from the pipeline, along with features for continually measuring and reporting on that data, make it easy to figure out whether the delivery process links up with business priorities, even in pipelines with hundreds of interconnected applications and many different tools and teams. With all this automation, standardization, and visibility, even an extremely complex enterprise can organize, optimize, and control the pipeline, and deliver great software while scaling it across a large organization.
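As a rough illustration of what “orchestrating the point tools from one place” means, here is a hypothetical Python sketch (the stage and tool names are invented for the example) of a release pipeline object that wraps existing tools and surfaces status and bottlenecks in a single view:

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    tool: str              # the point tool this stage wraps (CI, provisioning, ...)
    status: str = "pending"  # pending | running | done | blocked

class ReleasePipeline:
    """Connects existing point tools into one orchestrated release,
    so status and bottlenecks are visible in a single place."""

    def __init__(self, name: str, stages: list[Stage]):
        self.name = name
        self.stages = stages

    def status(self) -> dict[str, str]:
        """One consolidated view of the whole chain."""
        return {s.name: s.status for s in self.stages}

    def bottlenecks(self) -> list[str]:
        """Where is the release stuck right now?"""
        return [s.name for s in self.stages if s.status == "blocked"]

pipeline = ReleasePipeline("release-2024.1", [
    Stage("build", tool="Jenkins", status="done"),
    Stage("provision", tool="Puppet", status="done"),
    Stage("deploy", tool="deployment automation", status="blocked"),
    Stage("sign-off", tool="manual approval"),
])
print(pipeline.bottlenecks())  # → ['deploy']
```

The orchestration layer doesn’t replace the point tools; it wraps them, which is why a release manager or auditor can answer “where are we stuck?” without reading anyone’s scripts.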
Let’s not forget that, while technology is important, nurturing a collaborative culture lies at the heart of DevOps. ARA allows everyone in the pipeline to see the end goal, cooperate to reach that goal, and understand their impact.
Avoiding Dead-Ends

Some people argue that you can extend Jenkins, Puppet, or Chef with custom scripts to get code into production. Sure, doing that might look like DevOps if you’re dealing with just one project, but it won’t move a company with hundreds (or more commonly, thousands) of projects and teams closer to bridging the gap between Dev and Ops. In fact, it will actually move it further away from that goal by turning its most expensive developers into infrastructure engineers writing plumbing code, which, by the way, the Ops staff will not understand and won’t be able to maintain. Then the next team will have to start from scratch, condemning the delivery process to countless silos because it will be too cumbersome and time intensive to write all that code. That’s a DevOps killer.

To recap: you cannot achieve the promise of DevOps by solving one bottleneck at a time using a single tool and extending that tool with custom scripts. It will just make things worse. The good news is, you can keep your point tools to solve specific issues. But to improve the agility, velocity, and reliability of your delivery process, you need a framework that brings it all together. That’s ARA.
- The Forrester Wave: Continuous Delivery and Release Automation
- Gartner 2017 Magic Quadrant: Application Release Automation
- 8 Things To Know Before Starting Application Release Automation (ARA)
- How Application Release Automation Helps You Deploy to the Cloud
- The XebiaLabs DevOps Platform: Scaling Enterprise Software Delivery for Cloud, Container, and Legacy Environments