Introduction
Last time, when I described how to build a Jenkins server on CentOS to work with CollabNet TeamForge, I promised a follow-up on how to use the integration. In this post, I’ll take a look at what it means to integrate Jenkins with TeamForge and what each of the plugins I had you install in my previous post does. With the exception of xframe-options, the plugins all come from CollabNet and help TeamForge and Jenkins work together. The integration between TeamForge and Jenkins enables four things:
- Using TeamForge as the source for access control to Jenkins and Jenkins projects
- Tracking builds that Jenkins runs, either back to TeamForge source code commits or other activities, or simply reporting a build status to a tracker artifact
- Uploading documents, such as build logs or other output from the build process, into the TeamForge Documents manager
- Uploading build products, such as JAR, WAR, or .exe files, into the TeamForge File Release System
These are all important aspects of bringing Jenkins, its jobs, and the build products it generates into an Application Lifecycle Management workflow.
Let’s start by taking a look at how TeamForge provides and enforces user authentication and access controls for Jenkins.
Access Control
Jenkins is a powerful tool that looks deceptively easy to use, and out of the box it is wide open to all users. But, as with many things, with great power comes great responsibility. Jenkins can be a bit fragile, as can the jobs it runs. Even within my small team here, we’ve made changes in Jenkins that caused our colleagues heartburn when we inadvertently broke something they were using. In an enterprise setting, this can cause havoc. The CollabNet TeamForge integration with Jenkins provides security and role-based access control (RBAC) to Jenkins for TeamForge users, making sure that only those with the experience and permissions to do so can break your Jenkins jobs.
Sign On
After installing the CollabNet TeamForge plugin for Jenkins, TeamForge can be set up as the Security Realm for Jenkins. This makes TeamForge the single sign-on source for Jenkins: signing in to Jenkins requires TeamForge credentials rather than separate Jenkins credentials. The value here is in minimizing the number of logins that need to be managed, and it is also the prerequisite for using TeamForge RBAC to control what people can do in Jenkins. The screenshot of the Jenkins Configure Global Security page below shows how it looks.
RBAC
When Jenkins has been configured as above to use TeamForge for authentication, you can require within a Jenkins job that access to the job be controlled by roles and permissions within a TeamForge project. In the screenshot below, I’ve specified a project name and had Jenkins create roles in that project. Specifying a project name restricts access to members of that project who have the appropriate roles. If the Project field is left blank, access to the build job is wide open.
Jenkins Project with TeamForge authentication
Checking the “Create Jenkins roles in CollabNet TeamForge for this project” box adds five roles to TeamForge:
- Hudson Build/Cancel
- Hudson Configure
- Hudson Delete
- Hudson Promote
- Hudson Read
Like any other roles, these can be assigned to various project members using the Permissions User-Role Matrix in the TeamForge Project Admin menu.
Build Tracking
The quality of work from a development group improves when the software is built regularly and everybody on the team knows how the build is going. This was true 20 years ago, when I was last running builds for a large project that built once per day, and it’s the underlying principle of the Continuous Integration (CI) build practice of agile development. TeamForge provides useful capabilities to help a team stay on top of its Jenkins builds.
Traceability to Commit, Workitem, Code Review
Using TeamForge EventQ (formerly known as Orchestrate), a Jenkins build job can notify TeamForge when a build completes. Because Jenkins has a lot of information at this moment in the process, capturing status at this point is very valuable: Jenkins knows whether the build succeeded or failed, as well as the commit that triggered it. By passing this information back to EventQ, EventQ can create the traceability chain from the build back to the commit, any code reviews, and the task, story, or defect that defines the reason for the change. For a good build, this is the documentation needed to show that all the work for the task was done; for a failing build, it lets everyone on the team see which commit caused the failure and what was being worked on, so the entire team can make sure the build gets back on track. By turning on “Notify TeamForge Orchestrate when a build completes” in Jenkins and setting up a Build Server EventQ source associated with the source code repository being used, you can have EventQ capture traceability information across the lifecycle, as shown in the figure below.
Traceability from build to commit to code review and back to the workitem (task) that caused the change.
Build Status Tracker Update
The EventQ method above provides both status and traceability, which gives an extra level of assurance that everything has been done. Sometimes, however, all that’s really needed is the status. The time-honored approach of having a build artifact that is created or updated based on some criteria is easily done in TeamForge by enabling the Build Status Tracker Update, which causes an artifact in TeamForge to be updated. The options are self-explanatory: just pick the type of Tracker artifact you want created or updated, and Jenkins will update the artifact:
With this in place, a tracker Defect artifact is updated every time a build runs, with the log of the build as an attachment.
Build Tracker Artifact
Document and Log File Upload
When a build completes, whether successful or not, there may be times when it is advantageous to capture the build log. This is especially valuable for builds later in the cycle, such as release builds; it’s probably overkill for a CI build. There may be other documents created by the build process that you want to capture as well, such as Javadoc or other generated documentation.
Document Uploader Settings
There are a few fields that may require some thought and care when providing the values. When setting up things like this in TeamForge, I find it easiest to have information about the build and paths open in a separate browser tab or window.
- Project – Easiest to copy this from the TeamForge project home page.
- Upload Folder Path – This path is relative to the Root Folder of the Documents area in the project. For example, if you want a document to go into the folder for Release 1 that’s under the folder Product 1, the path would be /Product 1/Release 1. Spaces are allowed. The settings shown above (/Product 1/Release 1, uploading the build log) will result in documents like this:
- File Patterns to Upload – This one takes a bit more care. The plugin uses Ant-style path patterns, which resemble regular expressions but are simpler and more limited:
- ? matches one character
- * matches zero or more characters
- ** matches zero or more ‘directories’ in a path
With this, a pattern like s*.xml will upload settings.xml from the current directory, while **/s*.xml will upload settings.xml (and anything else matching the pattern) from any directory in the tree. Be careful with these patterns; they can cause build times to jump significantly as the matcher scans the entire Jenkins workspace. Once documents have been uploaded, they can go through the normal document review process if that’s appropriate.
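If you want to experiment with these wildcard rules before committing a pattern to your job configuration, Java’s built-in glob PathMatcher behaves very similarly to Ant-style patterns (though not identically; for example, Ant’s leading **/ also matches files in the root directory, while Java’s glob requires the slash to be present). The class name and sample paths below are just for illustration:

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;

public class PatternDemo {
    // Return true if the given path matches the glob-style pattern.
    static boolean matches(String pattern, String path) {
        PathMatcher m = FileSystems.getDefault().getPathMatcher("glob:" + pattern);
        return m.matches(Path.of(path));
    }

    public static void main(String[] args) {
        System.out.println(matches("s*.xml", "settings.xml"));        // true: * within one name
        System.out.println(matches("s*.xml", "conf/settings.xml"));   // false: * stops at '/'
        System.out.println(matches("**/s*.xml", "a/b/settings.xml")); // true: ** spans directories
        System.out.println(matches("s?ttings.xml", "settings.xml"));  // true: ? matches one char
    }
}
```

Trying candidate patterns against a few representative workspace paths this way is a quick sanity check before a misfired pattern makes the plugin walk your whole workspace.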
Binary File Release Upload
TeamForge provides a binary File Release management capability. The feature is similar to a binary artifact repository such as Artifactory or Nexus, though with fewer features than those dedicated tools, and it is tightly integrated into TeamForge and the Jenkins plugin. In TeamForge, these binaries are organized around Packages and Releases, and the Releases are the same ones that can be used elsewhere in TeamForge (for example, on a Tracker artifact) as a release identifier. While the File Releases pages define the structure of Packages and Releases and the planning folders that go with them, what your build uploads into this structure is defined in the Jenkins job:
Upload all WAR files
In this example, any file with the .war extension anywhere in the Jenkins workspace will be uploaded to the Release 1 area of the Game of Life package. Jenkins also puts these URLs in the Build Record, making it easy to fetch the build result for any build:
Release download from build history
Release Download from Build Record
These files will have a unique URL that can be accessed by ARA (Application Release Automation) tools or infrastructure-as-code tools to facilitate DevOps initiatives around Continuous Deployment.
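As a rough sketch of how a deployment script might pull a build product by its unique URL, here is a minimal Java downloader. The TeamForge URL shown in the comment is hypothetical (and a real server will also require authentication); the demo in main copies a local temp file through a file:// URL so it runs self-contained:

```java
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class FetchArtifact {
    // Copy the resource at the given URL to a local destination file.
    static Path fetch(String url, Path dest) throws Exception {
        try (InputStream in = URI.create(url).toURL().openStream()) {
            Files.copy(in, dest, StandardCopyOption.REPLACE_EXISTING);
        }
        return dest;
    }

    public static void main(String[] args) throws Exception {
        // In a real pipeline the URL would be the artifact's unique File Release
        // System link, e.g. https://teamforge.example.com/sf/frs/... (hypothetical).
        Path src = Files.createTempFile("build", ".war");
        Files.writeString(src, "demo-artifact");
        Path out = fetch(src.toUri().toString(), Files.createTempFile("fetched", ".war"));
        System.out.println(Files.readString(out)); // prints "demo-artifact"
    }
}
```

An ARA tool would do essentially the same thing: resolve the release URL for the build it wants, authenticate, and stream the binary down to the deployment target.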
Wrap Up
The TeamForge Jenkins plugins together do a lot of things that can be valuable to your development and deployment initiatives. Hopefully I’ve covered them in enough detail that you can put them to use yourself. We’re always interested in what our customers find useful, so as you use these features (or decide they don’t have value for you), please post a comment and let us know what you’re using, and whether there are features you’d like to see in these plugins that aren’t there today. I’ve tried to make this article technically accurate, but if you find something that’s not right, please leave a comment about that as well and I’ll update the post. In the meantime, have fun with the Jenkins plugins for CollabNet TeamForge!