Posts tagged continuous delivery


Today’s post will be about another proof-of-concept I’ve been doing recently — using Puppet to manage the test lab (and more). By the way, if you’re interested in working for me, here’s the job description.

What is Puppet?

Puppet is infrastructure management software that lets you control the configuration of multiple servers from one central place. The configuration is defined in a declarative way via so-called manifests. A manifest is a collection of resource definitions, and each resource describes the desired state of one thing, e.g. a file with name X should exist and have this or that content, or service Y should be running.
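For illustration, a minimal manifest with two such resources might look like this (the file path and service name are made-up examples, not from any real setup):

```puppet
# Ensure a file exists with the given content...
file { 'c:/inetpub/wwwroot/app/web.config':
  ensure  => file,
  content => '<configuration />',
}

# ...and that a Windows service is running and starts on boot
service { 'W3SVC':
  ensure => running,
  enable => true,
}
```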

Puppet consists of two components, an agent and a server (a.k.a. master). The agent needs to be installed on each managed machine and its purpose is to apply the manifests sent by the master to the local machine. The agent software is free (Puppet Open Source) and can run on any OS. The master, on the other hand, is part of Puppet Enterprise and obviously is not free software.

Another interesting thing about Puppet is the Forge. It is a place where the community can exchange Puppet modules (packaged, reusable configuration elements).

Last but not least, there is the idea of master-less Puppet. In this scenario there is no central server and agents get their manifests straight from some package repository, or even have the manifests pushed to them (e.g. using Pulp).

Puppet for Windows

It’s probably not a surprise that Puppet is focused on non-Microsoft operating systems, in particular the Red Hat and Debian Linux distributions. Support for Windows is not as complete, but all the important parts are working (e.g. file manipulation, service management, package installation). The only problem might be that the Puppet master is not available for Windows. That would pose a challenge for me (and our IT department) if we wanted to use it, but… this slide explains why we’ve chosen the master-less way. One more reason for going that route is the fact that I’d like to keep my manifests in the source code repository. But I am getting ahead of myself.

Puppet in a test lab

Why do we even need Puppet to manage our test lab? We decided that for each project we run, we automatically create two virtual environments, one for automated and one for manual testing. Spinning up these environments should be effortless and repeatable. This leads directly to Puppet or similar technologies. A big advantage is that, for projects for which we also run the production environment, we can use the very same process to manage the production VMs.

In order to deploy Puppet in the master-less way, one needs to implement the manifest distribution oneself. Since Octopus Deploy, our favorite deployment engine, uses NuGet for packaging, we decided to use the same package format for distributing the manifests. But first, how do you know which manifests should go where? We devised a very simple schema that allows us to describe our machines like this:

	<Machine name="Web">
			<Role name="Web"/>
			<Role name="App"/>
	</Machine>
	<Machine name="Web2">
			<Role name="Web"/>
			<Role name="App"/>
	</Machine>

And their roles in terms of manifests:

	<Role name="Web">
			<Manifest file="Web.pp"/>
			<Manifest file="Common.pp"/>
			<Module name="joshcooper-powershell"/>
	</Role>
	<Role name="App">
			<Manifest file="App.pp"/>
			<Manifest file="Common.pp"/>
	</Role>

These files are part of a so-called infra repository. We have one such (git) repo for each Team Project. The infra repo also contains Puppet modules and manifests in a folder structure like this:

|- machines.xml
|- roles.xml
|- Modules
|  |- joshcooper-powershell
|  |  |- Modulefile
|  |  \- ...
|  \- puppetlabs-dism
|     \- ...
\- Manifests
   |- app.pp
   |- web.pp
   |- common.pp
   \- ...
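To make the machine/role mapping concrete, here is a sketch of the resolution logic in Python (purely illustrative; wrapping root elements around the XML snippets shown above are an assumption):

```python
import xml.etree.ElementTree as ET

# Inlined copies of the two configuration files from the schema above;
# the <Machines>/<Roles> root elements are assumed for well-formedness.
machines_xml = """
<Machines>
  <Machine name="Web"><Role name="Web"/><Role name="App"/></Machine>
</Machines>
"""

roles_xml = """
<Roles>
  <Role name="Web"><Manifest file="Web.pp"/><Manifest file="Common.pp"/></Role>
  <Role name="App"><Manifest file="App.pp"/><Manifest file="Common.pp"/></Role>
</Roles>
"""

def manifests_for(machine_name):
    """Return the manifests that belong in the given machine's package."""
    machines = ET.fromstring(machines_xml)
    roles = ET.fromstring(roles_xml)
    role_map = {r.get('name'): [m.get('file') for m in r.findall('Manifest')]
                for r in roles.findall('Role')}
    result = []
    for machine in machines.findall('Machine'):
        if machine.get('name') == machine_name:
            for role in machine.findall('Role'):
                for manifest in role_map[role.get('name')]:
                    if manifest not in result:  # de-duplicate shared manifests
                        result.append(manifest)
    return result

print(manifests_for('Web'))  # ['Web.pp', 'Common.pp', 'App.pp']
```

Note how Common.pp, referenced by both roles, ends up in the package only once.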

On our lovely TeamCity build server we run a PowerShell script that creates one NuGet package for each module (using the Modulefile as a source of metadata) and one package for each machine. It uses the XML files to calculate which manifests should be included in each package. We also use the module information in the role definition file to define dependencies of the machine packages, so that when we do

nuget install INFN1069.Infra.Web.1.0.0

on the target machine, NuGet automatically fetches the modules the manifests depend on. I’ll leave the exercise of writing such a PowerShell script to the reader. Last but not least, we need another small script that will run periodically on each machine in the test lab. This script should download the packages and call

puppet apply [folder with manifests] --modulepath=[folder with modules]

to apply the latest manifests.
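A machine package produced by such a script could have a .nuspec along these lines (a sketch; the package ID echoes the example above, while the version numbers and file layout are assumptions):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>INFN1069.Infra.Web</id>
    <version>1.0.0</version>
    <authors>infra-build</authors>
    <description>Puppet manifests for the Web machine</description>
    <dependencies>
      <!-- module packages referenced by the machine's roles -->
      <dependency id="joshcooper-powershell" version="1.0.0" />
    </dependencies>
  </metadata>
  <files>
    <file src="Manifests\web.pp" target="Manifests" />
    <file src="Manifests\app.pp" target="Manifests" />
    <file src="Manifests\common.pp" target="Manifests" />
  </files>
</package>
```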


I’m hiring a DevOps kinda person

Infusion is rethinking its build & deployment process in the DevOps spirit. We are (and in particular, I am, because that’s my team) looking for a young and ambitious person who would start by doing implementation tasks around the process and gradually take on more responsibility for defining its vision as the process moves into the continuous improvement stage. In the long run, this person would be responsible for evolving the process.

Key Responsibilities and Duties

  • Develop custom tools for integrating COTS products from the build & deployment area (e.g. TFS, TeamCity)
  • Develop scripts for automating every aspect of the dev, build & deployment process
  • Evaluate, configure and deploy build & deployment tools
  • Administer the UAT build & deployment infrastructure (e.g. TFS, TeamCity, Octopus) where new changes to the process are applied and tested before going live to production infrastructure
  • Prepare VM images for both workstations and servers using tools like Puppet

Key Skills & Experiences

  • Bachelor’s degree in computer science, computer engineering or related education.
  • Strong focus on automation (in every area)
  • C#
  • PowerShell
  • Experience in configuring/administering TeamCity is a plus
  • Experience in configuring/administering TFS is a plus
  • Experience in configuring Puppet is a plus
  • Experience in Ruby is a plus

Last, but not least, all the tools this team creates need to be an example of perfection in terms of development, build, test and deployment, in order to show the rest of the organization that the team uses the practices it preaches and that these practices do have value.


Go as continuous delivery tool for .NET

Following my previous post regarding a possible design of a continuous delivery scheme for an ISV, I’d like to focus today on ThoughtWorks Go. This tool used to be quite expensive, but just a few days ago ThoughtWorks made it completely free and open source (under the Apache 2.0 license). Because of this dramatic price drop I thought that I would give Go a second chance and try to replicate the same stuff I did with TeamCity. Let me share my insights after spending a few days with Go.


The name

The name is probably Go’s biggest problem. It is absolutely impossible to google for any information regarding it. Try ‘NUnit Go‘ for example. Really, these days when choosing a name for a product one should think about its googleability.


Installation

As we’re a .NET shop, I installed Go on my Windows machine. It was quick and easy. Good job here. Same for installing the agents.


Documentation

Go’s docs are very clean and nice, but I have the impression that there’s more chrome than content in them, if you know what I mean. Take NUnit integration for example. The only thing I found was the information that Go ‘supports NUnit out of the box’. It turned out that by ‘support’ they mean it can process NUnit’s TestResult.xml file and display an ugly (yes, I mean very ugly) test summary on the release candidate details page. In order to generate this file I need to run NUnit on my own using the task ‘framework’ (more on that later). Of course, I need to install the NUnit runner on the agent first.

By the way, there are quite a lot of video how-tos, but personally I don’t think that’s what devs are looking for. On the good side, the HTTP API is very well documented.

Last but not least, I have a feeling that Go’s docs lack transparency a bit, especially compared to Octopus. I mean, things like the protocol between the server and the agents, and why it is secure, should be better explained, so that I as an ISV can use them to convince my clients to use Go.


Pipelines

Go has a concept of a pipeline which lets you define complex build and deployment workflows. Each pipeline has one or more stages executed sequentially, either automatically or with manual approval. Each stage consists of multiple jobs which can be executed in parallel on multiple agents. Finally, each job is a sequence of tasks.

To add even more possibilities, pipelines can be chained together so that completion of one pipeline kicks off another one. Pretty neat. I really like it. The sequential-parallel-sequential design is clean and easy to understand: expressive enough to implement complex processes and constrained enough not to let these processes become a pile of ugly spaghetti.
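For reference, this is roughly what such a pipeline looks like in Go’s XML configuration (a from-memory sketch with made-up names, not a verbatim config):

```xml
<pipeline name="MyApp">
  <materials>
    <git url="https://example.com/myapp.git" />
  </materials>
  <stage name="Commit">
    <jobs>
      <job name="Build">
        <tasks>
          <exec command="msbuild">
            <arg>MyApp.sln</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
  <stage name="Deploy">
    <approval type="manual" />
    <jobs>
      <job name="DeployToQA">
        <tasks>
          <exec command="powershell">
            <arg>deploy.ps1</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>
```

The stages run sequentially, the jobs within a stage in parallel, and the tasks within a job sequentially.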


Agents

Go’s agents are universal. They can execute any shell command for you and pass the results back to the server. They have no built-in intelligence like TeamCity’s (build-specific) or Octopus’s (deployment-specific) agents and can be used for both building and deploying. Plus, they are free. Good job.


Tasks

Tasks are, in my opinion, the second biggest failure in Go (just after the name). A task can be either an Ant or NAnt script or… any shell command you can imagine. While I appreciate the breadth of possibilities that comes from being able to execute just about anything, I really don’t like the fact that I have to do everything myself.

Do you, like me, enjoy TeamCity’s MSBuild configuration UI? Or its assembly version patching feature? Or maybe its visual NUnit runner configurator? There is nothing like this here. To be fair, there is a concept of a command repository which allows you to import frequently used command examples, but it really isn’t comparable to TeamCity.

What surprised me is that there seems to be no plug-in system for tasks, and certainly no lively plug-in ecosystem. I would expect that if ThoughtWorks made the decision to focus on workflow and agents (which are really good), they would publish and document an API that would allow people to easily write custom task types as plug-ins. For example, if I installed an NUnit plug-in into my Go, I would expect the NUnit runner to be deployed automatically to my agents.


The verdict

I managed to build a simple pipeline that builds my source code, packages it up into NuGets (using OctoPack) and runs the unit tests. It’s for sure doable, but it’s way more work compared to TeamCity. Because I don’t like the role of a release manager who owns the build and deployment infrastructure, and prefer teams to own their own stuff, I made the decision to drop Go and focus on TeamCity. It is much friendlier, and I don’t want to scare people when I am helping them set up their builds. If ThoughtWorks, or the community that will probably form around Go, gives some love to defining tasks, I will consider switching to Go in the future. Go is definitely worth observing but, in my opinion, for a .NET shop it is not yet worth adopting.

To be fair, TeamCity is not a perfect tool either. To be able to use it we have to overcome two major problems:

  • No support for defining deployment pipelines (everything is a build type). Bare TeamCity lacks higher-level concepts
  • While TeamCity’s base price is reasonable, the per-agent price is insane if one wants to use agents to execute long-running tests (e.g. acceptance tests)

More on dealing with these problems in the following posts.


Evaluating OctopusDeploy in context of an ISV

It is quite obvious that all these continuous delivery and deployment automation tools are a very good fit for organizations that develop software for themselves, either for internal use or to be published in a software-as-a-service way. It is not so when it comes to an ISV, which is Microsoft’s name for a company that uses its tools and platforms to develop custom software for other organizations. I work for Infusion, which is more or less this kind of company. A big part of our business is developing custom software. We have quite a lot of clients and each engagement is different, also in terms of the responsibilities around deploying and hosting the app. Possible scenarios range from just handing over the code (not very frequent), through handing over binaries and assisting in production deployment, all the way to maintaining the whole infrastructure and taking care of deployments and maintenance on behalf of the client. Clearly, there is no standard way of doing things at Infusion (which is of course good).

One to rule them all

On the other hand, there is a huge need to bring some sanity into the release process. We can’t just reinvent the wheel every single time we approach the go-live date. As part of an initiative aimed at standardizing the release process, we’ve been evaluating multiple products. One of them is OctopusDeploy. I have had the pleasure of working with Octopus a few times before, so I have full confidence in the product. What I want now is to confirm that it can be used in our ISV scenario. The first step was coming up with the following diagram:

[Diagram: Continuous Delivery in an ISV scenario]


The left part of the picture is our ISV. There is a developer there who commits to the source code repository. Then a CI server (likely to be TeamCity) builds the code and runs the unit tests, in a process called the Commit Stage in continuous delivery lingo. The Commit Stage is designed to provide a short feedback loop for the developers, so only the fastest tests (no DB or any external access) can be run as part of it. If this stage succeeds, the second one kicks in, where TC asks Octopus to deploy the code to the integration environment. Depending on the concrete scenario, it might be either a single integration tests assembly or a complete application along with some kind of test scripts. The bottom line is, we execute integration tests in this environment rather than on the CI server. It allows us to more closely mimic real-world scenarios and also frees up the CI agent while the lengthy tests are being executed.
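The ‘TC asks Octopus’ step is typically just a build step invoking Octopus’s command-line tool; a hedged sketch (the server URL, project name, version and API key are all placeholders):

```
rem Create a release in Octopus and deploy it to the integration environment
octo create-release --project=MyApp --version=1.0.0 ^
     --deployto=Integration ^
     --server=http://octopus.example.com --apiKey=API-XXXXXXXX
```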

Human factor

When the tests are done, the result file (an XML) is uploaded and imported into TeamCity. If the results are good, TC kicks off another job that asks Octopus to deploy the app to the internal QA environment, where our lovely QA specialists can play with it a bit. Whenever necessary, we can easily add more environments/test types to the process (such as performance tests or usability tests), but in most of our projects these two environments should be enough. After the tests are done, the QA engineer can mark the particular build as OK, allowing the Team Lead or the Project Manager to publish the build package to the customer.

Crossing the gap

The publishing format used is NuGet (which is a flavour of the Open Packaging Conventions, OPC). In order to achieve the desired level of security, NuGet packages are digitally signed by the build server with our company’s certificate. Although signing is not supported by NuGet, it does not interfere with it in any way. Published packages are transferred to the customer’s NuGet repository, from which the customer’s staff can deploy them to either UAT or production environments.


Same process everywhere

The deployment process is shared between all the environments on both sides (our ISV and the customer) to ensure flawless deployments to production. The process is defined in Octopus and can be synchronized between the ISV’s and the customer’s Octopus instances using its great REST API.
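As an illustration, the API is plain HTTP with an API-key header, so even listing the projects (from which the deployment process definitions are reachable via links) is a one-liner (the server URL and key are placeholders):

```
curl -H "X-Octopus-ApiKey: API-XXXXXXXX" http://octopus.example.com/api/projects
```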

Bright future?

Although we are still in the early stages of the implementation, it looks like this process can be used without major modifications to deploy any kind of web application we do (on-premise, Azure-hosted and SharePoint). The key to success in our initiative is the ability to convince our customers to install Octopus on their infrastructure. Luckily, Octopus has very thorough documentation regarding the security features it includes, which can be used to dispel customers’ fears of automated deployment. The other important thing is the fact that Octopus is free for small installations, so the initial cost for the customer is close to zero, making the entry barrier smaller. We hope that as soon as our customers start using it, they will love it and will include it as a first-class citizen in their IT infrastructure.

In the ideal world (for us, an ISV), each of our customers maintains their own instance of Octopus along with their environment configuration and release process (e.g. UAT, staging, production), and we agree on a standard way of publishing packages and synchronizing the deployment process.


TFS 2010 and multiple projects output

This post is a part of an automated deployment story.

Some time ago we were forced to switch from TFS 2008 to TFS 2010. I must emphasise here that choosing TFS as the source code repository and CI software was not my choice in the first place. We were forced to use it. Anyway, we wanted to move because the new system was in the same physical network as the whole environment, which would make transporting binary packages much easier and safer.

Apart from obvious changes, like replacing MSBuild with a workflow, there is one thing that is more subtle but has such a tremendous impact that it nearly blocked our adoption of 2010.

It turns out that 2010 overrides the bin directory when building projects, so that the output of all compilations goes to a single folder. By doing so, it saves a lot of the effort of copying copy-local binaries here and there. There is a downside, however. Having one output directory makes it really hard to build more than one application in the solution. The result is that all the binaries are mixed together and you can’t figure out (from the results alone) which one belongs to which application (and which are shared).

The problem is less dramatic with web applications because they have a publish feature out of the box. What publish does is, essentially, gather all the files related to the app and put them into a zip file. It also gathers all the directly and indirectly referenced binaries, which is cool.

What about console applications then? In the obj directory you can find only the result of compiling the application itself. The libraries it depends on are not there. How can we find them?

We can use the very same file that is used to publish web applications. It is called Microsoft.WebApplication.targets and it is located in the MSBuild folder in Program Files. All you need to do is strip out all the stuff that does not apply to console apps. Here’s what remains of it:

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="Microsoft.WebApplication.Build.Tasks.CopyFilesToFolders" AssemblyFile="Microsoft.WebApplication.Build.Tasks.dll" />

  <!-- AfterTargets="Build" hooks the target into every compilation -->
  <Target Name="PackageBinaries" DependsOnTargets="ResolveReferences" AfterTargets="Build">
    <!-- Log -->
    <Message Text="Generating binary package for $(MSBuildProjectName)" />

    <!-- Copy any referenced assemblies (the destination under obj is one possible choice) -->
    <Copy SourceFiles="@(ReferenceCopyLocalPaths)"
          DestinationFolder="$(ProjectDir)obj\Package\"
          RetryDelayMilliseconds="$(CopyRetryDelayMilliseconds)" />

    <!-- Copy content files -->
    <Copy SourceFiles="@(Content)" Condition="'%(Content.Link)' == ''"
          DestinationFolder="$(ProjectDir)obj\Package\%(Content.RelativeDir)"
          RetryDelayMilliseconds="$(CopyRetryDelayMilliseconds)" />
  </Target>
</Project>


It automatically hooks into the compile process, calculates the list of transitive dependencies of your project, and fetches their binary forms from the common output location.

Now all you need to do is include the file in your .csproj files just below this line:

<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

Then, after the build is finished, the obj folders of your projects are populated with all the necessary files.
