Last Updated Nov 13, 2014 — Enterprise Agile Planning expert

As I promised in my last blog post, I would like to continue the topic of PCI-DSS 3.0, which has been in effect since January 2014. As the January 2015 deadline for meeting the 3.0 requirements rapidly approaches, many companies are working to address areas of the standard that were somewhat neglected in the 2.0 version, such as development tools and processes. It is important to keep building a better understanding of the standard and its effect on the use of development technologies in highly regulated industries. Let’s take a closer look at additional PCI-DSS 3.0 requirements.


In my last blog post I shared the story of my friend who is currently exploring a migration from SVN to Git and is in the process of understanding the impact of PCI-DSS on his company’s SCM technology choice and the scope of the project.

Since my last blog post, my friend has made a lot of progress. He met with his company’s security and compliance officers to formalize the requirements-gathering process for the SVN to Git migration. As part of this effort, his team looked closely at the compliance requirements and discovered new implications to be aware of. In particular, they found that even though the standard itself makes no explicit technology recommendation, the 3.0 version is a lot more prescriptive, which may lead them to a different third-party technology selection.

Another interesting realization occurred during a discussion between engineering and the customer account owners, and it opened a whole new can of worms for the Git team. Their engineering department is automating the delivery of customer-requested hotfixes to their hosted environments. With this process in place, once a code fix is verified and submitted, it is automatically applied to the particular client’s target environment. While this seemingly had nothing to do with the SCM technology choice, after digging into PCI-DSS 3.0 they realized that in their particular case, automating patch delivery can amplify compliance-related risks.

But wait…isn’t automation a good thing? Well, it depends. If it introduces a major risk, then maybe you should not rush into it, at least until you have found a way to mitigate that risk. Let’s go back to the standard and carefully examine the relevant requirements to see what the risks are and what can be done about them.

Requirement 6

Develop and maintain secure systems and applications

“Unscrupulous individuals use security vulnerabilities to gain privileged access to systems. Many of these vulnerabilities are fixed by vendor-provided security patches, which must be installed by the entities that manage the systems. All systems must have all appropriate software patches to protect against the exploitation and compromise of cardholder data by malicious individuals and malicious software.

Note: Appropriate software patches are those patches that have been evaluated and tested sufficiently to determine that the patches do not conflict with existing security configurations. For in-house developed applications, numerous vulnerabilities can be avoided by using standard system development processes and secure coding techniques.”

6.3.1

Remove development, test and/or custom application accounts, user IDs, and passwords before applications become active or are released to customers.

This requirement has a huge implication for my friend’s company because they are fully automating the way a fix travels from the testing environment to production. The company’s developers have a habit of hard-coding their testing usernames and passwords into configuration files, and these files normally live inside their local branches. When they submit code fixes, nobody remembers to remove these config files. Of course, there is a paper policy that tells them not to do that! The problem is that the policy is not easily enforceable in distributed Git environments, which are designed for peer integrations.

Developers pick up environment changes from each other, voluntarily or involuntarily, when they perform integrations and local merges. So when you get a “bad” config file from a fellow developer, you may never realize it: as long as your code doesn’t break and the other configuration parameters keep working, nothing looks wrong. This means every developer has a “hidden” power to submit these “user IDs and passwords” without knowing their local branch is “infected”. When your company is on a three-month release schedule, the risk of this happening is not very high. With test-to-production (T2P) automation in place, the risk of bad configuration files being happily propagated to production is a lot higher.
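For illustration, here is a quick spot check a developer could run against their own working tree before submitting a fix. The file patterns and the “password” keyword are assumptions about what such config files typically contain, not something prescribed by the standard:

```
# Hypothetical spot check: look for hard-coded passwords in tracked config files
git grep -n -E -i 'password[[:space:]]*=' -- '*.properties' '*.xml' '*.conf'
```

If this turns up hits in files that are about to be pushed, the credentials should be moved out of version control (for example, into environment-specific settings) before the fix continues down the pipeline.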

Now that we understand the risk, let’s see what can be done about it. A popular way to handle this problem in Git environments is to rewrite history and force-push to get rid of the bad files; Git gives you tools such as ‘filter-branch’ and interactive rebase to do history rewrites effectively. However, automation makes human errors very hard to catch, and the bad files can sit undetected in the latest branch, and in production, for months. So why develop a workaround instead of dealing with the problem at its core, the policy enforcement issue? The ideal approach is to have a process in place that enforces strict checks for the presence of bad config files before changes from local branches are accepted.
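For completeness, here is roughly what that history-rewrite workaround looks like once a leaked file has already been discovered. The config file path is a hypothetical example:

```
# Rewrite every branch so the leaked config file disappears from history entirely
git filter-branch --force --index-filter \
  'git rm --cached --ignore-unmatch config/test-credentials.properties' \
  --prune-empty -- --all

# Force-push the rewritten history; every clone must then be re-cloned or rebased,
# and the leaked credentials themselves should still be rotated
git push origin --force --all
```

This works, but it is disruptive for every developer with a clone of the repository, and it does nothing to stop the next leak, which is exactly why prevention at the gate is the better answer.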

Luckily, the solution can be found in the technology selected for version control. Modern code control systems can play an active role in making a paper policy enforceable, ensuring that all conditions regarding code quality and compliance are met before a commit is merged into the master branch. Only then is it safe to trigger a pipeline that will eventually promote it into production.
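To make the idea concrete, here is a minimal, hand-rolled sketch of such a check written as a plain Git pre-receive hook. The file extensions and the credential pattern are illustrative assumptions, and this is not how the Gerrit feature described below is configured:

```
#!/bin/sh
# Sketch of a server-side pre-receive hook that rejects pushes which appear
# to introduce hard-coded passwords in config files. Patterns are illustrative.
zero="0000000000000000000000000000000000000000"

while read oldrev newrev refname; do
  # Skip branch deletions
  [ "$newrev" = "$zero" ] && continue

  # For brand-new branches, diff the pushed tip against the empty tree
  if [ "$oldrev" = "$zero" ]; then
    oldrev=$(git hash-object -t tree /dev/null)
  fi

  for f in $(git diff --name-only "$oldrev" "$newrev"); do
    case "$f" in
      *.properties|*.xml|*.conf)
        if git show "$newrev:$f" 2>/dev/null | grep -Eiq 'password[[:space:]]*='; then
          echo "Push to $refname rejected: possible hard-coded credential in $f" >&2
          exit 1
        fi
        ;;
    esac
  done
done
```

Maintaining scripts like this by hand quickly becomes its own burden, which is where a purpose-built quality gate mechanism comes in.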

This task is very easy with CollabNet’s Git backend, Gerrit. CollabNet offers a code quality gate wizard for Gerrit. It comes with a collection of predefined policies and lets you graphically design your own quality gates. If you know how to define email filter rules, you will be able to set up the code quality wizard just as easily. Compliance is just one use case; you can define a variety of other rules with it: four-eye peer review, legal approval of copyright file changes, senior staff sign-off, feature acceptance criteria, and so on. Whatever your code quality gates look like, you can enforce them without having to write a single line of code. Johannes Nicolai published a few excellent blog posts on this topic earlier; they explain the feature in great detail.

The PCI-DSS requirements have many other implications, and we will continue the discussion in my next blog post. See you in a week!

Related Posts

Migrating from Subversion to Git: What Your PCI-DSS Guy Will Not Tell You, Part 1

Scaling Compliance with Git: What Your PCI-DSS Guy Will Not Tell You, Part 3

Follow CollabNet on Twitter and LinkedIn for more insights from our industry experts.
