
This post is from the XebiaLabs blog and has not been updated since the original publish date.

Last Updated Jan 21, 2019 — DevOps Expert

Best Practices for Custom Deployments with the XebiaLabs DevOps Platform



Whenever I give a demo of, or a training class on, the XebiaLabs DevOps Platform, I mention that all actions in deploying an application can be broken down into two categories: moving data (files) and executing commands.

In a typical deployment, the XL Deploy module of the XebiaLabs DevOps Platform performs these two actions on a remote system, to which a connection is made via SSH for Unix/Linux operating systems and WinRM for Windows.

XL Deploy has a convenient control task called "Check Connection" that proves that these two actions can be performed successfully. It transfers a dummy data file and runs a command to list the contents of a temporary directory. It proves that the protocols are correctly configured, firewalls and ports are open, and the login credentials are valid.
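In shell terms, the check amounts to something like this local sketch (not XL Deploy's actual implementation; the temporary directory stands in for the remote host's working directory):

```shell
#!/bin/sh
# Local sketch of the two actions "Check Connection" verifies on a remote
# host: (1) data can be moved, (2) a command can be executed.
workdir=$(mktemp -d)

# Action 1: transfer a dummy data file (done over SSH/WinRM in the real check)
echo "connection check" > "$workdir/dummy.txt"

# Action 2: run a command that lists the contents of the temporary directory
ls "$workdir"

rm -rf "$workdir"
```

If both actions succeed, any deployment built from file transfers and commands can, in principle, succeed too.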

So are we ready to deploy?

Some users are tempted at this point to structure a deployment using plugins that provide these exact two actions in their most basic form:  the "command" plugin with its cmd.Command object type and the "file" plugin with its file.File object type.
A user goes into XL Deploy and configures a deployment in this fashion:

  1. Commands that precede the file transfers

cmd.Command object with order = 45
cmd.Command object with order = 55

  2. File transfers

file.File object for the first file with default order 60
file.File object for the second file with default order 60
file.File object for the third file with default order 60
file.File object for the fourth file with default order 60

  3. Commands that follow the file transfers

cmd.Command object with order = 65
cmd.Command object with order = 75
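XL Deploy sequences the deployment plan by sorting every contributed step on its order number, lowest first, which is why orders 45 and 55 land before the default file order of 60, and 65 and 75 land after it. A rough sketch of that sequencing (step list hypothetical):

```shell
#!/bin/sh
# Sketch: steps execute in ascending order number, so 45 and 55 run before
# the file transfers at 60, and 65 and 75 run after them.
printf '%s\n' \
  '65 command after transfers' \
  '45 command before transfers' \
  '60 file transfer' \
  '75 command after transfers' \
  '55 command before transfers' \
  | sort -n
```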

What are the pros and cons of this approach?

As seen in the next two images, the objects involved in the deployment are:

  • Easy to configure
  • Easy to read and comprehend

For the command object, the user simply enters the command to execute in the command line property. For the file object, the user uploads the file and, at minimum, indicates the target path. The example below uses a placeholder for the latter.

On the other hand, this approach has some shortcomings:

  • Fragile. You're working with an open command line susceptible to errors.
  • Incomplete. The commands don't take rollbacks and reruns into account effectively.
  • Not portable. You'll have to rewrite the commands for another OS such as Windows.
  • Not cohesive. Nothing identifies that these items belong together when they are part of a package containing other deployables.
  • Doesn't provide the benefits of XL Deploy's object model. For example, if you wanted to "subclass" this configuration for slightly different behavior, you would have to rewrite it.

Best Practices

XebiaLabs recommends the following best practices for XL Deploy when it comes to deployments not already supported by an existing plugin.

Combine the file artifacts, both text and binary, into a single zip-style archive. 

This follows  the same rationale for bundling application files together as a jar, war, or ear file. They move together through your CI/CD pipelines as a single unit, developed and deployed together. Pack them in the build job, and unpack them when they reach their final home on the target system.

Make use of classpath resources when able. 

These might be installation binaries or control templates that don't change between deployed versions and therefore don't have to be bundled into the actual application files.

Control the commands required for your deployment with xl-rules and classpath scripts. 

The scripts are easily parameterized, and XL Deploy can send the Linux or Windows version of a script depending on the OS targeted.
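As a sketch of that idea (simplified, not XL Deploy's actual resolution logic), the rules file names a base script path and the variant matching the target OS is chosen at deployment time:

```shell
#!/bin/sh
# Simplified sketch of per-OS script selection: the same base path maps to a
# .sh script on Unix/Linux targets and a .bat script on Windows targets.
resolve_script() {
  base=$1
  os=$2
  case "$os" in
    UNIX)    echo "$base.sh" ;;
    WINDOWS) echo "$base.bat" ;;
    *)       echo "unsupported OS: $os" >&2; return 1 ;;
  esac
}

resolve_script demo/createBestPracticeUpload UNIX      # -> demo/createBestPracticeUpload.sh
resolve_script demo/createBestPracticeUpload WINDOWS   # -> demo/createBestPracticeUpload.bat
```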

A Plugin Example

Now we'll begin constructing a plugin by working in the XL-DEPLOY-SERVER/ext directory.  As a first step, add a definition to your synthetic.xml:

<?xml version="1.0" ?>
<synthetic xmlns="http://www.xebialabs.com/deployit/synthetic">
    <type type="demo.DeployedBestPractices" extends="udm.BaseDeployedArtifact"
        deployable-type="demo.BestPractices" container-type="overthere.Host">
        <generate-deployable type="demo.BestPractices" extends="udm.BaseDeployableArtifact" />
        <property name="targetdir1" />
        <property name="targetdir2" />
        <property name="targetdir3" />
        <property name="targetdir4" />
    </type>
</synthetic>

Adding a definition to our synthetic.xml gives us the ability to define a custom artifact, along with some properties for the deployment, in this case the target directories for each of the files when we unzip them. Of course, you can define any properties necessary for the deployment, making use of such data-types (kinds) as strings, integers, booleans, key-value maps, or even references to other objects.
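For illustration, a property block exercising a few other kinds might look like this (names and defaults hypothetical; a property with no kind attribute defaults to string):

```xml
<!-- Hypothetical properties showing other available kinds -->
<property name="targetdir1" />                                    <!-- kind defaults to string -->
<property name="restartService" kind="boolean" default="true" />
<property name="timeoutSeconds" kind="integer" default="60" />
<property name="extraSettings" kind="map_string_string" required="false" />
```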
Now, on to defining the deployment behavior via xl-rules:

<?xml version="1.0"?>
<rules xmlns="http://www.xebialabs.com/xl-deploy/xl-rules">
    <rule name="demo.DeployedBestPractices.Create" scope="deployed">
        <conditions>
            <type>demo.DeployedBestPractices</type>
            <operation>CREATE</operation>
        </conditions>
        <steps>
            <!-- Command script paths are illustrative; only createBestPracticeUpload
                 is shown by name later in this post -->
            <os-script>
                <description>Execute script 1</description>
                <script>demo/createBestPracticeCommand1</script>
                <order>45</order>
                <upload-artifacts>false</upload-artifacts>
            </os-script>
            <os-script>
                <description>Execute script 2</description>
                <script>demo/createBestPracticeCommand2</script>
                <order>55</order>
                <upload-artifacts>false</upload-artifacts>
            </os-script>
            <os-script>
                <description>Handle the files</description>
                <script>demo/createBestPracticeUpload</script>
                <order>60</order>
                <upload-artifacts>true</upload-artifacts>
            </os-script>
            <os-script>
                <description>Execute script 3</description>
                <script>demo/createBestPracticeCommand3</script>
                <order>65</order>
                <upload-artifacts>false</upload-artifacts>
            </os-script>
            <os-script>
                <description>Execute script 4</description>
                <script>demo/createBestPracticeCommand4</script>
                <order>75</order>
                <upload-artifacts>false</upload-artifacts>
            </os-script>
        </steps>
    </rule>
</rules>

We have replaced each of our four commands with an os-script section, pointing to a script in the XL Deploy classpath. Each os-script tag represents a step, and for each one there are four properties that will be applied to it: a description, a script path, an order number, and a boolean to tell XL Deploy whether or not to upload the artifact(s) in the object.

The ext directory can now be structured like this:

├── demo
│  ├──
│  ├──
│  ├──
│  ├──
│  └──
├── readme.txt
├── synthetic.xml
└── xl-rules.xml
1 directory, 8 files

And the four "command" scripts now look like this:

$ cat demo/
echo "Executing command 1 on ${}"
$ cat demo/
echo "Executing command 2 on ${}"
$ cat demo/
echo "Executing command 3 on ${}"
$ cat demo/
echo "Executing command 4 on ${}"

Note the FreeMarker references to the deployed object and to the objects it references. With the appropriate chain of reference pointers, you can reach any object in XL Deploy's model.
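For instance, a template line can chain from the deployed object through to its container (an illustrative fragment; `deployed.name` and `deployed.container.address` follow XL Deploy's model, where the container here is an overthere.Host):

```
# Illustrative .sh.ftl template line; FreeMarker resolves the reference
# chain before the script is sent to the target host
echo "Deploying ${deployed.name} to host ${deployed.container.address}"
```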

And we have added an upload step to handle the files within our zip archive and move them to their proper directories, taking advantage of FreeMarker templating for properties of the deployed object:

$ cat demo/
unzip -o ${deployed.file.path} myfile1 -d ${deployed.targetdir1}
unzip -o ${deployed.file.path} myfile2 -d ${deployed.targetdir2}
unzip -o ${deployed.file.path} myfile3 -d ${deployed.targetdir3}
unzip -o ${deployed.file.path} myfile4 -d ${deployed.targetdir4}

Here is our deployment output for this script, which illustrates how XL Deploy uploads the script and the artifact to a temporary directory and then executes the script from there on the host's operating system.

Notice that the rules file only specified demo/createBestPracticeUpload, without the .sh or .ftl extensions. Since XL Deploy knows this deployment is going to a Linux system, it looks for the version of the script with the .sh extension. The .ftl extension directs XL Deploy to process the script through FreeMarker.

If we wanted to deploy to Windows, we would have included a .bat or .cmd version of the script.
So we end up with the same result as we had with the four command objects and the four file objects. And there are many more options available in XL Deploy to make this example fully functional:

  • This type can be subclassed for variations on the core behavior.
  • Rollback behavior can be controlled with additional rules. See the rules reference for all the options available with XL Rules.
  • Parameterization is very flexible with FreeMarker.
