Introduction
Stop me if you’ve heard this one before. You have multiple ALM platforms and development point tools that all have to work together. You decided on a data synchronization approach so that you can benefit from unified reporting, save license costs, and still enable all stakeholders to use their favorite tools on their favorite platform while having access to the full data. The pilot goes well, demonstrates the benefits you hoped for, and you decide to go into production. For the first projects you sync, things work out as planned, but the more projects you add, the more you feel the synchronization overhead (waste, in lean terminology) in two main areas:
- Resource bloat on the sync server: the more projects (requirement types, trackers) you sync, the longer it takes for artifact updates to come through and the more resources (memory, CPU) you have to dedicate to the sync server.
- Administrative overhead: the more projects you sync, the more time you have to spend defining field transformation rules and keeping them in sync with the metadata of all connected systems.
Our way to lean – build, measure, learn
Many customers have told us about these challenges, and our Software Lifecycle Integration Platform (CCF) gained more and more features to deal with them. In the very early days of CCF (2008), we came up with a round-robin streaming algorithm that makes sure all synchronized trackers are treated equally; in other words, a burst of activity in one tracker (e.g. a mass update) does not starve sync activity for trackers with lower update frequency.
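The original algorithm is not reproduced in this post, but the idea of round-robin streaming can be sketched in Python. The tracker names, pending updates, and per-turn budget below are made-up illustration data, not CCF internals:

```python
from collections import deque

def round_robin_sync(trackers, budget_per_turn=5):
    """Poll each tracker in turn, shipping at most `budget_per_turn`
    pending updates per visit, so a burst in one tracker cannot
    starve the others."""
    queue = deque(trackers)
    shipped = []
    while queue:
        tracker = queue.popleft()
        batch = tracker["pending"][:budget_per_turn]
        tracker["pending"] = tracker["pending"][budget_per_turn:]
        shipped.extend(batch)
        if tracker["pending"]:   # still work left: requeue at the back
            queue.append(tracker)
    return shipped

# A mass update in "defects" (20 items) next to 2 items in "features":
trackers = [
    {"name": "defects",  "pending": [f"D{i}" for i in range(20)]},
    {"name": "features", "pending": ["F0", "F1"]},
]
order = round_robin_sync(trackers)
```

With a naive FIFO, both feature updates would wait behind all 20 defect updates; here they ship on the second turn.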
Around the same time, we started collecting sync best practices from our customers on a wiki page, most importantly in this context the practices of standardizing on tracker metadata layout, defining a common set of fields to be used across trackers, and establishing a change process for metadata changes. To benefit efficiently from standardized repository layouts, we introduced field mapping templates in CCF 2.0. With that feature, you only have to define a field mapping for a certain tracker type once and can reuse it across projects. In CCF 2.1, we reduced the administrative overhead of field mapping creation and maintenance again by releasing our reverse mapping wizard.
While those features helped a lot with bidirectional defect syncing against Quality Center, requirements sync proved more challenging. While there is only one defect type per QC project, our average QC requirements customer has more than five different requirement types per project. Consequently, the resource bloat and the administrative overhead for a project using requirements were at least five times as high as for defect sync. When we discussed with our customers how to solve this problem, we came up with two approaches:
- Make it possible to shift load from one synchronization server to another: this is one of the main features of CCF 2.2.1 and enables our customers to cope with hundreds of projects and millions of artifacts.
- Make it possible to treat all requirement types of a QC project as one: one field mapping, one repository mapping, one target TeamForge tracker. This is what the main feature of CCF 2.3 is all about.
The rest of this blog post is pretty technical. If you are not interested in the nitty-gritty details, you will be happy to hear that with CCF 2.3, you can handle requirement synchronization between TeamForge and Quality Center with approximately 20 percent of the computing resources and 20 percent of the field mapping administration overhead required before. If you would like to learn how exactly this works, read on.
Technical Details on Our Latest Product Iteration
Before CCF 2.3, our sync platform could only sync QC requirements to TF trackers based on QC requirement types, i.e. a separate repository mapping had to be created for each requirement type.
This approach created the scalability issues mentioned above, namely:
- Even if you wanted to sync multiple QC requirement types to the same TF tracker, you needed multiple repository mappings and associated field mappings.
- Every repository mapping direction resulted in a QC connection, which caused OS resource constraints and stability issues due to QC’s COM API and its 32-bit address space limitations (the practical maximum of open QC connections appears to be around 150 per process).
From CCF 2.3 on, we can sync all QC requirements of a project using one repository mapping (and only two QC connections in the bidirectional case). To do this, we introduced an artificial requirement type “ALL” which can be selected when you create a new repository mapping.
This results in repository mappings for requirement type “ALL” which are responsible for all QC requirements of the project.
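CCF’s internals are not shown in the post, but the effect of the artificial “ALL” type can be illustrated with a small sketch. The mapping records and lookup function below are hypothetical, not CCF’s actual data model:

```python
def find_mapping(mappings, requirement_type):
    """Return the first repository mapping responsible for a QC
    requirement type; a mapping registered for "ALL" matches any type."""
    for m in mappings:
        if m["req_type"] in ("ALL", requirement_type):
            return m
    return None

# Before CCF 2.3: one mapping (plus field mapping) per requirement type.
# From 2.3 on, a single "ALL" mapping covers them all:
mappings = [{"req_type": "ALL", "target": "TF tracker 'Requirements'"}]
for req_type in ("Folder", "Functional", "Testing", "Business", "Undefined"):
    assert find_mapping(mappings, req_type) is mappings[0]
```

One record instead of five is exactly where the roughly 80 percent reduction in mappings and connections comes from.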
Field mapping configuration for the corresponding repository mapping directions is supported by all three field mapping types, i.e. handwritten XSLT, MapForce-based field mappings, and our own graphical mapping rule wizard.
For handwritten XSLT, we provide sample files that show how to deal with multiple requirement types at once. The example files are:
QC2TF: samples/QC2TF/xslt/sample_QC_ALL_Req_2_TF_Tracker.xsl
TF2QC: samples/TF2QC/xslt/Sample_TF_Tracker_2_QC_ALL_Req.xsl
Let’s have a look at the Quality Center to TeamForge XSLT file.
(Excerpt, lines 47–53 of the sample XSLT: the requirement type is written into a flexField named QC_REQUIREMENT_TYPE.)
The field RQ_TYPE_ID contains the actual requirement type of the artifact currently being transformed.
Our XSLT script assumes that you have a flex field called “QC_REQUIREMENT_TYPE” in TeamForge. This flex field is used to display the “real” requirement type (e.g. Folder, Functional) of the corresponding QC artifact in TeamForge’s artifact data. Depending on your particular scenario, you may use a different field or decide not to map this information at all.
You will probably decide, based on the value of RQ_TYPE_ID, how to map the other fields, or whether to ignore requirements of certain types completely (by setting targetArtifactAction to “ignore”):
(Excerpt, lines 22–32 of the sample XSLT: conditional mapping based on RQ_TYPE_ID.)
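The sample XSLT is not reproduced in this post, but the dispatch logic can be sketched in Python. RQ_TYPE_ID, QC_REQUIREMENT_TYPE, and targetArtifactAction come from the post; RQ_REQ_NAME is QC’s standard name column; the ignore list and the mapped fields are illustrative assumptions:

```python
def transform_qc_requirement(artifact):
    """Map a QC requirement to a TF artifact, branching on RQ_TYPE_ID.
    Types in IGNORED get targetArtifactAction="ignore" so the sync
    skips them; everything else records its type in the
    QC_REQUIREMENT_TYPE flex field."""
    IGNORED = {"Folder"}  # example choice: don't sync folder requirements
    req_type = artifact["RQ_TYPE_ID"]
    if req_type in IGNORED:
        return {"targetArtifactAction": "ignore"}
    return {
        "targetArtifactAction": "create",
        "title": artifact["RQ_REQ_NAME"],
        "QC_REQUIREMENT_TYPE": req_type,  # flex field on the TF side
    }

folder = transform_qc_requirement({"RQ_TYPE_ID": "Folder", "RQ_REQ_NAME": "UI"})
func = transform_qc_requirement({"RQ_TYPE_ID": "Functional", "RQ_REQ_NAME": "Login"})
```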
If you map from TeamForge to Quality Center, you would use a flex field like QC_REQUIREMENT_TYPE or something similar (e.g. a single-select drop-down list) to define what kind of requirement to create in Quality Center:
(Excerpt, lines 45–50 of the sample XSLT: the flex field value is written into QC’s RQ_TYPE_ID field.)
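The reverse direction can be sketched the same way (again hypothetical Python, not the actual XSLT; the fallback type is an assumption):

```python
def transform_tf_artifact(artifact):
    """Reverse direction: the QC_REQUIREMENT_TYPE flex field on the TF
    artifact decides which requirement type to create in QC."""
    return {
        "RQ_TYPE_ID": artifact.get("QC_REQUIREMENT_TYPE", "Undefined"),
        "RQ_REQ_NAME": artifact["title"],
    }

req = transform_tf_artifact({"title": "Login", "QC_REQUIREMENT_TYPE": "Functional"})
```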
For MapForce-based field mappings and our own graphical mapping rules, we show all QC requirement fields irrespective of requirement type; in other words, we display the superset of available fields for all requirement types whose artifacts may be transformed.
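“Superset” here simply means the union of the per-type field sets, which the following sketch illustrates (RQ_TYPE_ID and RQ_REQ_NAME are standard QC columns; the other field assignments are made up):

```python
# Illustrative field sets per QC requirement type.
fields_by_type = {
    "Functional": {"RQ_TYPE_ID", "RQ_REQ_NAME", "RQ_REQ_PRIORITY"},
    "Testing":    {"RQ_TYPE_ID", "RQ_REQ_NAME", "RQ_REQ_REVIEWED"},
    "Folder":     {"RQ_TYPE_ID", "RQ_REQ_NAME"},
}

# The mapping UI shows the union (superset) of all per-type fields:
superset = set().union(*fields_by_type.values())
```

A field mapping built against this superset works for every requirement type the “ALL” repository mapping may deliver.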
As in the handwritten case, the QC requirement type of the artifact currently being transformed can be derived (QC2TF) or set (TF2QC) using the RQ_TYPE_ID field.
Apart from the special “ALL” requirement type and the special treatment of RQ_TYPE_ID, there is no difference to ordinary repository mappings: you can pause and resume them, use field mapping templates and the reverse mapping wizard, and apply all the artifact transformation and filtering techniques you are used to.
Endnotes
CCF has already come a long way in making bidirectional artifact synchronization leaner and leaner by reducing administrative and resource overhead. With our most recent product iteration, CCF 2.3, we took that philosophy to a new level by reducing field mapping rule maintenance effort by up to 80 percent and making it possible to sync more than five times as many Quality Center requirement types on the same sync server. As our build, measure, learn process is a closed feedback loop, we won’t stop here; check out our newest features and tell us what to do next to make bidirectional artifact synchronization even leaner.
Acknowledgments
This blog post was inspired by Eric Ries’ book “The Lean Startup”. Please forgive me if I stretched its metaphors and structural similarities too much.