Archive for the 'continuous integration' Category

Published by breki on 18 Oct 2010

Web Testing & Gallio: A Little Helpful Trick

When doing automated testing of Web apps using unit testing frameworks, it can be a pain in the butt to pinpoint the proper HTML element. A lot of times tests fail because you used a wrong locator, but since the browser automatically closes after the test, you don’t have access to the HTML code of the page to see what’s actually there.

Fortunately, Gallio provides a class called TestContext which contains information about the currently running test and which you can use to determine whether the current test has failed. This can then be used to run your custom handling code during the test teardown:

        protected virtual void Teardown()
        {
            if (TestContext.CurrentContext.Outcome.Status == TestStatus.Failed)
            {
                using (TestLog.BeginSection("Failed web page HTML"))
                {
                    // "selenium" is the Selenium RC client instance used by the test
                    TestLog.WriteLine(selenium.GetHtmlSource());
                }
            }
        }

In the above snippet, we record the current Web page’s HTML code into Gallio’s log (the TestLog class). To avoid spamming the log, we do this for failed tests only.

Gallio provides a powerful framework which I think is very much underused, mostly because the documentation is not very detailed (to say the least).

Published by breki on 12 Nov 2009

Changing The Build Server

Recently I upgraded my home development machine to Windows 7. It is useful to do these clean installs from time to time, since they force you to take a step back from your current configuration and consider whether something new might serve you better.

So yesterday while I was preparing a new release of GroundTruth, I was missing my CruiseControl.NET server and started to reinstall it. After hitting a few snags I got bored with the whole idea: I constantly keep resolving the same issues with CC.NET installations (since I use it both at home and at work). Remembering I once played with Hudson and liked it, I decided to give it a serious try.

Setting Up Hudson

The installation was really easy – just download the .war file and run it using java. I did have to move the home directory from the default user’s profile directory to my data disk in order to make the whole installation more portable.

This is what I like about Hudson: no hassle with XML configuration files – you can configure it through a user-friendly Web GUI. Also, you don’t need IIS – Hudson has its own integrated web server. And it even provides a button for installing Hudson as a Windows service!
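For reference, the whole setup described above boils down to a couple of commands. The home directory path and port here are just an illustration of my setup, not something Hudson requires:

```bat
rem Point Hudson's home directory away from the user profile
rem so the installation stays portable (the path is an example).
set HUDSON_HOME=D:\Data\Hudson

rem Run Hudson's integrated web server; the port is configurable.
java -jar hudson.war --httpPort=8080
```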

The only real drawback with Hudson is that it’s a Java application and if you want to extend it with your own plug-ins, you need to write Java code. Which is fine, but not as accessible for .NET developers. But lately I’ve started using the build server just for building, labeling and packaging and I don’t really need any special plug-ins for that. Anyway, Hudson already has a lot of plug-ins, some of them even for .NET, so I don’t think I will need to write my own any time soon.

Good Bye Cruising

I’ve had quarrels with CC.NET before. Now I think of CC.NET as a nice introduction to the CI world, but after a while you need something else. My view is that the .NET world needs a new open source CI project, one which would build upon the experience of CC.NET, both positive and negative. This is what I would like to see:

  • No hassle: just copy a single executable and run it. No IIS setup, no nothing. Web server comes with the package. Windows service installation with a single mouse click.
  • Portability: server configuration has to be separated from the server executables. If a new version of the build server arrives, the upgrade should be as simple as overwriting a single executable file. No messing around with Web.configs, dashboard.configs etc.
  • Web-based installation & upgrade: I like how WordPress is doing things: you can upgrade your WordPress installation using the Web dashboard. It would also be nice to be able to install a plug-in just by pointing to its home URL.
  • Simple GUI: simple as in “Google search simple”. 95% of the time you only need a build radiator and nothing else. Everything else should be accessible, but not on the first page. And think simple permalinks. BTW: in my view, even Hudson’s GUI could be improved in this aspect.
  • Interactive GUI: I want to see a live build log, without manual refreshing. More Ajax please.

Published by breki on 16 Jun 2009

Gallio: Starting And Stopping Selenium Server Automatically During Testing Using AssemblyFixture

UPDATE (June 17th): I’ve updated the code, see the reasons for it at the end of the post.

In previous projects I worked on we made sure the Selenium Java server was running by manually starting it on our machines (both developers’ and build ones). This was cumbersome: restarting the build server meant we had to log on to the server after the reboot and run the Selenium server again. Of course, a lot of times we forgot to do this, which caused the build to fail.

This got me into thinking: is there a way in Gallio to specify some initialization (and cleanup) actions on the test assembly level? And of course, the answer is yes: using the AssemblyFixture attribute. This is what I like about Gallio/MbUnit: most of the time the feature requests I come up with are actually already implemented.

So anyway, you can specify this attribute on a class and then add FixtureSetUp and FixtureTearDown attributes on its methods. These will be executed at the test assembly level: setup methods run before any test fixtures and teardown methods run just before the test assembly is unloaded by the test runner.

I then used this nice feature to start the Selenium server and then dispose of it after tests:

[AssemblyFixture]
public class SeleniumTestingSetup : IDisposable
{
    [FixtureSetUp]
    public void Setup()
    {
        seleniumServerProcess = new Process();
        seleniumServerProcess.StartInfo.FileName = "java";
        seleniumServerProcess.StartInfo.Arguments =
            "-jar ../../../lib/Selenium/selenium-server/selenium-server.jar -port 6371";
        seleniumServerProcess.Start();
    }

    [FixtureTearDown]
    public void Teardown()
    {
        Dispose();
    }

    /// <summary>
    /// Performs application-defined tasks associated with freeing, releasing, or
    /// resetting unmanaged resources.
    /// </summary>
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    /// <summary>
    /// Disposes the object.
    /// </summary>
    /// <param name="disposing">If <code>false</code>, cleans up native resources.
    /// If <code>true</code>, cleans up both managed and native resources.</param>
    protected virtual void Dispose(bool disposing)
    {
        if (false == disposed)
        {
            if (disposing)
                DisposeOfSeleniumServer();

            disposed = true;
        }
    }

    private void DisposeOfSeleniumServer()
    {
        if (seleniumServerProcess != null)
        {
            // kill the java process and wait for it to exit
            // (see the note below on why killing is used)
            seleniumServerProcess.Kill();
            bool result = seleniumServerProcess.WaitForExit(10000);
            seleniumServerProcess = null;
        }
    }

    private bool disposed;
    private Process seleniumServerProcess;
}

Note that the class is disposable – this ensures the Selenium server is stopped even if you run tests in the debugger and then force the debugger to stop before finishing the work. The Dispose method calls DisposeOfSeleniumServer, which does the actual work of killing the process and disposing of the evidence.

NOTE: This is a second version of the code. I needed to update the old one because I noticed that when running the tests in CruiseControl.NET, the Selenium server java process was not stopped properly. The only way I could stop it was by killing it, which in general isn’t a good practice. The unfortunate side effect of this “killing” is that the CruiseControl.NET service cannot be stopped normally – it also has to be killed when you need to restart it. I’ll try to solve this problem in the future.

Published by breki on 09 Jun 2009

FxCop: How To Use It


This is the second post in my “guidelines” series. Some information here is specific to the projects I’m working on, and some is more general and thus applicable to any project.

FxCop is a free static source code analysis tool created by Microsoft. It analyses your built assemblies and reports any issues it encounters. It has a set of rules (which can be turned on/off) which it uses to detect these issues. FxCop comes both in the command-line form (FxCopCmd) and as a Windows GUI.


The general approach in our build scripts is: the script compiles the code and then immediately runs the FxCop analysis. If it detects any issues, it automatically runs the GUI and opens the relevant FxCop project. NOTE: you need to press the F5 key (rebuild project) in order for FxCop to show the latest results!

When run “headless” (on the build server), the script of course doesn’t open the GUI – it just simply fails and then kindly informs the offender via email.
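A minimal sketch of that build step as a Windows batch fragment – the file names and the FxCop project name are assumptions for illustration, not our actual script:

```bat
rem Run the FxCop analysis right after compilation.
FxCopCmd.exe /project:MySolution.fxcop /out:FxCopReport.xml /summary

rem On a developer machine, open the GUI on failure;
rem on the build server this branch is skipped and the build simply fails.
if errorlevel 1 start FxCop.exe MySolution.fxcop
```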

Resolving Issues

There are basically three ways to resolve issues detected by FxCop, described in the following subsections.

Fixing The Code

Fixing the code where the issue has been detected is the preferred approach.

Applying Suppression Attribute On The Code

SuppressMessageAttribute can be used to suppress reporting of a specific defect by FxCop:

[SuppressMessage("Microsoft.Performance", "CA1822:MarkMembersAsStatic")]

You can autogenerate these suppression lines by right-clicking on the issue and selecting Copy As -> Suppress Message. This copies the suppress attribute into the clipboard so you can paste it into the offending code. Depending on the type of the issue, the attribute has to be placed in front of a class/interface, method, property or field.

You should only suppress issues which you think

  1. aren’t really issues,
  2. aren’t relevant, or
  3. flag code that has been designed that way on purpose.

On all other occasions, it is better to fix the code.
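For illustration, suppressing the CA1822 issue on a method could look like this – the class and method names here are made up; the attribute itself lives in System.Diagnostics.CodeAnalysis:

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

public class ReportPrinter
{
    // The method doesn't touch instance state, but we keep it non-static
    // for API consistency, so we suppress the "mark member as static" rule.
    [SuppressMessage("Microsoft.Performance", "CA1822:MarkMembersAsStatic")]
    public void PrintHeader()
    {
        Console.WriteLine("Report");
    }
}
```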

Excluding Issues In FxCop GUI

Sometimes (in rare situations) it is difficult to suppress an issue. In that case you can exclude it in the FxCop GUI by right-clicking on the issue(s) and selecting Exclude.

NOTE: if you do any excluding in the GUI, don’t forget to save the FxCop project before exiting!

Language Issues

Since FxCop checks the spelling of class, method and other names in the source code, some FxCop issues are related to the English dictionary. For example, on the latest project I’m working on, FxCop reported a spelling error for classes containing the MVC string, since the “word” MVC isn’t in the standard English dictionary.

This can be resolved by adding such words to the CustomDictionary.xml file, which is stored in the root directory of the project’s solution:

<?xml version="1.0" encoding="utf-8" ?>
<Dictionary>
  <Words>
    <Recognized>
      <Word>MVC</Word>
    </Recognized>
  </Words>
</Dictionary>

Adding FxCop Analysis On New VS Projects

When creating new projects in the solution, you have to do two things:

  1. Add the CODE_ANALYSIS compilation symbol in the project (using Visual Studio). Don’t forget to add it both for Debug and Release configurations. Without this symbol, FxCop will ignore any of your Suppress attributes in the code.
  2. Add the project to the FxCop project file, example:
<Target Name="$(ProjectDir)/MyTool/bin/MyTool.dll" Analyze="True" AnalyzeAllChildren="True" />
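The first step amounts to an entry like this in the .csproj file (one per build configuration; the exact symbol list depends on your project):

```xml
<!-- In both the Debug and Release PropertyGroup elements -->
<DefineConstants>DEBUG;TRACE;CODE_ANALYSIS</DefineConstants>
```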

In general, I tend not to add FxCop analysis for unit test projects, because unit test classes usually violate certain rules on purpose (or by design). One example is test methods which don’t reference any instance members, for which FxCop reports the “mark member as static” issue.


There are some other minor “quirks” which can occur, I’ll try to add additional guidance if/when my team members encounter them :).

Published by breki on 08 Jun 2009

Gallio: Filter Attributes For Test Methods

There are three attributes which function as filters when running tests using any of Gallio’s test runners:

  • Pending: tests which are in development and currently don’t run should be marked with the Pending attribute. This means the test runner will skip them when running the build script.
  • Ignore: this attribute is used for marking tests which are never to be run (they are kept in the code as a history). In general, it is a good practice to avoid such tests – you can get the history from your source control.
  • Explicit: tests marked with this attribute will only run when selected individually in the VS (Resharper, TestDriven.NET). They will not be run as part of the build script. Explicit tests are usually those which depend on a certain external system which cannot be guaranteed to be available at all times – we don’t want such tests to cause failures in our builds.

It is a good practice to supply these attributes with a string argument describing the reasons for marking the test.
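A quick sketch of how this looks in MbUnit test code – the fixture name, test names and reason strings are invented for illustration:

```csharp
using MbUnit.Framework;

[TestFixture]
public class SubscriptionTests
{
    [Test]
    [Pending("Tariff calculation logic is still being implemented")]
    public void CalculatesNightTariff()
    {
        // skipped by the test runner until the attribute is removed
    }

    [Test]
    [Explicit("Depends on the external SMS gateway being online")]
    public void SendsSmsThroughGateway()
    {
        // runs only when selected individually in the IDE
    }
}
```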

UPDATE: Jeff Brown kindly provided some additional information about these attributes:

Tests marked [Ignored] and [Pending] will show as Warning annotations in the test report in addition to being skipped. In ReSharper they will also be shown with a yellow warning stripe to remind you that they are there and need to be looked at.

You can also add your own annotations to various code elements with the [Annotation] attribute.

This is my first article in the “guidelines” series I plan to write in the future. I want to maintain these guidelines separate from concrete project’s documentation since in the past I always had to copy this kind of stuff from one project’s wiki to another.

Published by breki on 05 Mar 2009

Fix: Slow Debugging In Visual Studio


I’ve just found the cure for slow debugging in Visual Studio. By “slow” I mean waiting a couple of seconds after each debugger step. The solution was suggested by Jeff Brown in one of the Gallio Google Groups threads: turning off the “Enable property evaluation…” setting in the Debugger options (Tools -> Options -> Debugging):

Visual Studio Debugger Options

After turning this off, I don’t notice any real delay between debugging steps. The downside is that you won’t get automatic updates of objects’ property values in Watch and other debugger windows. Instead you get a nice little Refresh button for each of the properties, and you’ll need to click it to get the current value:

Visual Studio Debugger 2

I think this is a minor nuisance compared to the substantial increase in debugging speed. Not that I’m a big fan of heavy debugger usage anyway. To quote Scott Bellware:

Debugging code is a slow, time consuming process.  Time spent in a debugger is sloth time.  You might be thinking that you’re perfectly effective in a debugger and that you don’t have any objections to doing code validation in a debugger rather than in a well-factored unit test.  This is merely an assumption fed by how habituated you are to using a debugger.  Without having a TDD practice, you have no basis of comparison for how ineffective debugging is compared to writing well-factored unit tests for well-factored code.

Also check out Jeremy D. Miller’s posts about TDD and debugging.

Published by breki on 02 Feb 2009

CCNet Filtered Source Control: Ready To Shoot My CI Server


UPDATE (April 1st 2009): no, it’s not an April fool’s joke… The fun continues with the new version of CCNet… read the update below

After about two hours of exercises in futility I finally managed to persuade CruiseControl.NET to start the build when a particular file on the disk changes. I was just about ready to give up and start implementing my own CI software when the following configuration finally managed to do what I wanted:

<sourcecontrol type="filtered">
    <sourceControlProvider type="filesystem">
        <repositoryRoot>C:\Temp\</repositoryRoot>
    </sourceControlProvider>
    <inclusionFilters>
        <pathFilter>
            <pattern><!-- the one filter value that worked; the exact
                 value was not preserved here --></pattern>
        </pathFilter>
    </inclusionFilters>
</sourcecontrol>

Notice the <pathFilter> tag? I have no idea why only this particular filter value works. The file path I wanted to cover is C:\Temp\CopyAndRun.bat. I tried several other (and more logical) filter values, like:

  • CopyAndRun.bat
  • /CopyAndRun.bat
  • \CopyAndRun.bat
  • C:\Temp\CopyAndRun.bat

… and some others, but to no avail: CCNet reported that none of the modified files matched the specified filter. Needless to say, I couldn’t find any relevant documentation or samples for this situation.

I’m more and more of the opinion that CCNet, although powerful and flexible, is pretty horrible to configure and maintain. Maybe it’s really time for me to start working on my own solution. Well, to be honest, I’ve already made the first steps…

UPDATE (April 1st 2009): it turns out the new v1.4.3 version decided to do things differently… and breaks the existing behavior, again in an untraceable way. The configuration block I posted above doesn’t work anymore; the new configuration looks like this:

<sourcecontrol type="filtered">
    <sourceControlProvider type="filesystem">
        <!-- reconstructed from the notes below -->
        <repositoryRoot>C:\Temp</repositoryRoot>
    </sourceControlProvider>
    <inclusionFilters>
        <pathFilter>
            <pattern>CopyAndRun.bat</pattern>
        </pathFilter>
    </inclusionFilters>
</sourcecontrol>


  • repositoryRoot now must not have a trailing backslash
  • pattern now works without the ** wildcard.

Again, it is very hard to determine the right configuration. The only help (if it is help at all) is the CCNet service log file, but it doesn’t really tell you why a particular file does not match the filter criteria. Urghhhhh…

Published by breki on 16 Jan 2009

Gallio: Running Tests In Parallel



Yesterday we finally managed to get our tests to run using our acceptance tests framework. I promise to write more about it some other time, but I’ll make a quick introduction now.

First let’s start with the name of the framework: Accipio. The idea of Accipio is to specify acceptance tests in an XML form which is then automatically translated into MbUnit test code. I guess you can call it a lightweight FitNesse – there’s no Wiki, all test specifications are stored in XML files (which are then source-controlled). The XML is quite simple (you can see some initial brainstorming samples here).

But that’s not what I wanted to talk about now.

Time Is Money, They Say

While running the tests we determined that the whole test process took too long. We had around 100 test cases, each of which had to wait for 10 seconds after the initial action before it could assert the test conditions, which means at least 20 minutes of total test running time (and we expect many more test cases to be written in the future). We’ll try to refactor the code so that this wait period is not necessary, but nevertheless these tests are written in a way that should allow parallel execution without any negative effects. In fact, the parallelization would be welcome, since it mimics the "real" situation in production.

Luckily, with a little Googling we found a thread on the gallio-dev forum called "best way to parallelize a test suite" in which Jeff Brown (Gallio architect) discusses a new experimental addition to Gallio – the Parallelizable attribute. It can be applied both to test fixtures and to test methods. From what I discerned, the Parallelizable attribute applied to test fixtures means that two or more fixtures can run in parallel, while marking test methods Parallelizable means that two or more test methods in the same fixture run in parallel (I’ve simplified the description a little here; for more details please read the mentioned thread).

We needed the second option (parallelization of methods), so I downloaded the latest Gallio package and marked all of our test methods with the Parallelizable attribute…

[Test]
[Parallelizable]
[Metadata("UserStory", "SMSUi.AddSubs.Schedule")]
public void AddScheduledSubs()

…and ran the tests with the Gallio.Echo runner. So far so good – the tests do run in parallel, although occasionally the runner throws some exceptions; we’ll need to investigate this further (after all, this is an experimental feature, so I expect it to break once in a while ;).

You can set the rough number of concurrent threads that will process test code by setting the DegreeOfParallelism value:

[FixtureSetUp]
public void FixtureSetup()
{
    Gallio.Framework.Pattern.PatternTestGlobals.DegreeOfParallelism = 20;
}

NOTE: I’ve added this code to the fixture setup method because I didn’t know any better place to put it. Also, it would be better not to hard-code it like I did, use a configuration file instead.
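One way to avoid the hard-coding, sketched here with an invented appSettings key (not a Gallio convention), is to read the value from the test assembly’s configuration file:

```csharp
using System.Configuration;

public static class ParallelismConfig
{
    // Reads the degree of parallelism from App.config; the key name
    // "DegreeOfParallelism" is an assumption made for this example.
    public static int Read(int defaultValue)
    {
        string setting = ConfigurationManager.AppSettings["DegreeOfParallelism"];

        int value;
        if (int.TryParse(setting, out value))
            return value;

        return defaultValue;
    }
}
```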


Not all tests can be executed in parallel, of course. Once we integrate Selenium into acceptance testing we will have to be careful when selecting which tests should be parallelized and which should not – until we give the Selenium Grid a try, we will have to run Web UI tests on a single thread. I guess the best thing to do is keep non-parallelizable tests in fixtures separate from parallelized ones.

Published by breki on 15 Jan 2009

Gallio: Setting Test Outcome Any Way You Like



In one of my previous posts I discussed using the Assert.Inconclusive() method to mark tests as "not finished yet" during the execution of the test (as opposed to declaring test outcomes using attributes such as [Pending]).

It turns out there’s a better way to do this (which Jeff Brown kindly pointed me to):

   throw new SilentTestException(TestOutcome.Pending, "To be implemented.");

SilentTestException allows you to mark the test with any outcome you like. Which is exactly what I was looking for.

Published by breki on 14 Jan 2009

Brainstorming: Distributed Continuous Integration System



This is the second part of my brainstorming "session" about automating software deployment and execution on a remote computer from yesterday. I did some investigation and found a very useful resource (part of Paul Duvall’s excellent "Automation for the people" series) about patterns for automatic deployment (also see the newest article here). Also check out SmartFrog’s Patterns Of Deployment, which covers deployment topics extensively.

Paul’s approach is to use SCP for secure copying (distributing) of files and SSH for remotely invoking processes. The advantages I see in this approach are that it uses standard protocols and is very flexible, including support for public key infrastructure (PKI). The only thing is that I don’t know which SCP and SSH tools to use on Windows – I’m looking for free software which would be easy to set up both for clients and for servers. And by "easy to set up" I don’t mean MSI installations – I would like a simple copy-and-run type of installation.

A commenter Jean-Philippe Daigle also suggested among other things using rsync for publishing new build packages to other servers. There is also an rsync alternative for Windows called Unison. Again, I’m not sure how much work is needed to set these things up before using them for CI purposes. And also I see a problem of using a bunch of tools each of which solves only a part of the problem – from my experience this results in a CI setup which is fairly brittle.

Feedback Problem

I thought a little about what the continuous integration (CI) process would look like if I implemented the Agent service described in the previous article. The main problem I detected was the lack of good feedback from such an Agent – if it were to run our long-running integration tests, we wouldn’t be able to see the progress like it is possible when using CI servers like CruiseControl.NET (CCNET).

I started looking at the problem from a wider perspective – what we really need is a distributed CI system which would be able to provide control and monitoring of CI builds from a single point. This is opposed to how things are done with CCNET – you have to set up a separate project for each stage of the CI build and you also have to install separate instances of CCNET on each server that is a part of the build configuration (main build server, integration server, database server etc.). This makes setting up CI for a project quite time consuming.


So what would this CI system have to offer to be useful?

  • CI process should be treated integrally: if the CI process consists of several stages (some of which may run in parallel or even on different computers), it should be treated as a single unit (internally separated into several stages). The configuration would specify the CI workflow and then the CI system would take care of all the necessary wiring. That doesn’t mean that you wouldn’t be able to get the feedback for each individual stage, if you wanted.
  • A single configuration point: the configuration for all of these stages should be located in a single place (in the main build service, let’s call it a Controller), not spread out over several computers.
  • Native support for version-controlled CI project configuration file(s): the Controller should only have to have the basic configuration (like how to fetch files from the version control system (VCS)). The rest of the CI configuration would be retrieved from VCS and thus could be dynamically updated with each new version.
  • A single monitoring point: the Controller would have to provide some kind of Web user interface which would display the status of the CI process (including statuses for all of the CI process’ stages on all other servers). The UI would have to be simple, just displaying raw build outputs (although the support for CCNET-like XSLT transformations could be provided).
  • Minimum installation and configuration friction: all of the CI system’s components should be able to run with minimal prerequisites. Edit the configuration file and run the Controller executable from a command line – this should be enough. I’m thinking about an integrated Web server (so no extra Web server installation would be needed) and automatic pushing of CI Agents to all of other servers which are part of the CI process.
  • Support for automatic deployment: the ability to remotely deploy and run software (the essence of yesterday’s post) as part of a CI process stage should be intrinsic. 

This ends my brainstorming session for today.
