Archive for the 'testing' Category

Published by breki on 18 Oct 2010

Web Testing & Gallio: A Little Helpful Trick

When doing automated testing of Web apps using unit testing frameworks, it can be a pain in the butt to pinpoint the proper HTML element. A lot of times tests will fail because you used a wrong locator, but since the browser automatically closes after the test, you don’t have access to the HTML code of the page to see what’s actually there.

Fortunately, Gallio provides a class called TestContext which contains information about the currently running test and which you can use to determine whether the test has succeeded or failed. This can then be used to run your custom handling code during the test teardown:

[TearDown]
protected virtual void Teardown()
{
    // record the page HTML in the Gallio test log, but only for failed tests
    if (TestContext.CurrentContext.Outcome.Status == TestStatus.Failed)
    {
        using (TestLog.BeginSection("Failed web page HTML"))
            TestLog.Write(WebDriver.PageSource);
    }
}

In the above snippet, we record the current Web page’s HTML code into Gallio’s log (the TestLog class). To avoid spamming the log, we do this for failed tests only.
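For context, here is a minimal sketch of how such a teardown might sit in a shared base fixture. The WebTestFixtureBase class and the Firefox driver choice are my own illustrative assumptions, not part of the original setup:

using Gallio.Framework;
using Gallio.Model;
using MbUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

public abstract class WebTestFixtureBase
{
    protected IWebDriver WebDriver { get; private set; }

    [SetUp]
    protected virtual void Setup()
    {
        WebDriver = new FirefoxDriver();
    }

    [TearDown]
    protected virtual void Teardown()
    {
        try
        {
            // log the page HTML only when the test has failed
            if (TestContext.CurrentContext.Outcome.Status == TestStatus.Failed)
            {
                using (TestLog.BeginSection("Failed web page HTML"))
                    TestLog.Write(WebDriver.PageSource);
            }
        }
        finally
        {
            // always close the browser, even when logging throws
            WebDriver.Quit();
        }
    }
}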

Gallio provides a powerful framework which I think is very much underused, mostly because the documentation is not very detailed (to say the least).

Published by breki on 20 Jun 2010

Rewriting Code

I’m working on some new Maperitive commands and they require a bit of an extension of the command-line interface. I’ve tried to extend the existing command-line parser code, but it feels like tearing off fingernails. The funny thing is that I rewrote this part of the code a couple of months ago, but it still smells. It’s a typical case of extending some functionality slowly, bit by bit, without doing it properly – and the crappy code piles up.

So after considering all the options, I’ve decided to do a new rewrite of this code. It’s going to take some time, but it will make me feel better – and that’s the most important thing.

Luckily I have a dozen existing unit tests which will be very helpful in making the new code run correctly. This is one of the best reasons to write unit tests, people!

Published by breki on 01 Mar 2010

The Delicate Dance Of Transience


I’m in the final stages of polishing Maperitive before the first release. Well, not really polishing – I still have a bunch of tasks to do, but I think I can see the light at the end of the tunnel.

As I stated a while back, one of the reasons I decided to do a major rewrite of the Kosmos code was to introduce inversion of control (IoC) into the architecture. Maperitive (ex-Kosmos) now relies heavily on Castle Windsor to do the wiring. I can say that investing in IoC (and sticking to the SOLID principles) is now starting to pay dividends: I can add new features with more ease and they tend to affect a relatively minor part of the “old” code.

The problem with IoC containers like Castle Windsor starts when the number of registered components exceeds what you can handle in your brain (Maperitive currently has about 130 components in its IoC container). Then the whole thing takes on a life of its own: even if you write extensive unit tests for the individual parts, it gets more and more difficult to predict how the “organism” will behave as a whole.

When used in a Windows desktop application like Maperitive, the problem gets compounded by the intricate dependencies between different lifestyles of components. This is what I want to discuss in this post.

I’ll restrict the discussion to singletons and transients, since these are the only two lifestyles I use in Maperitive. I’ll give examples of components in terms of a mapping application (which Maperitive is), but I think it won’t be a problem to translate this to your own domain. I’ll start with the simple case of…

Singletons Depending On Singletons

This is the easy one: singletons are components which typically share the same lifespan as the application. A Map is a good example of a singleton: the user expects the map to be available at all times (after all, what would Google Earth look like without the map?). The user typically uses a mouse to move around the map, and since we luckily have only one mouse on our machines, this is an example of another singleton (I’m oversimplifying things a bit here, bear with me). So the Map subscribes to the Mouse events, and this can be seen as a singleton-to-singleton dependency.

These kinds of dependencies are easy to handle, since the number of created objects is typically small and they all share the same lifespan. Also, singletons are (typically) created automatically by the IoC container at the start of the application, so you don’t really need to explicitly create and destroy them yourself.
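To make this concrete, here is a minimal Windsor sketch – the Map and Mouse types are hypothetical stand-ins for the components described above, not Maperitive’s actual code:

using System;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public class Mouse
{
    public event EventHandler Moved = delegate { };
    public void Move() { Moved(this, EventArgs.Empty); }
}

public class Map
{
    public Map(Mouse mouse)
    {
        // singleton-to-singleton wiring: both objects live as long as the
        // application, so subscribing here poses no lifetime problems
        mouse.Moved += (sender, e) => { /* pan the map */ };
    }
}

public static class ApplicationWiring
{
    public static IWindsorContainer Build()
    {
        IWindsorContainer container = new WindsorContainer();
        // singleton is actually Windsor's default lifestyle; it is spelled
        // out here for clarity
        container.Register(
            Component.For<Mouse>().LifeStyle.Singleton,
            Component.For<Map>().LifeStyle.Singleton);
        return container;
    }
}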

Transients Depending On Singletons

This is the second common relationship in a desktop application. You have a short-term task to do (usually as a response to the user’s actions) and you create a transient object to do it. One example would be a WebClient which downloads the map data from the server: once the download is done, you can kill the client (hmmm, that sounds tempting).

Transients are typically created using factories. Castle Windsor offers a cool feature called TypedFactoryFacility which makes creating factories a lot easier (you don’t need to write the actual creation code, just specify your factory interface according to the facility’s conventions).
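For illustration, such a typed factory might look like this – the DownloadWebClient and factory names are hypothetical, and Windsor generates the factory implementation at runtime:

using Castle.Facilities.TypedFactory;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public class DownloadWebClient { /* hypothetical transient component, fleshed out below */ }

// just an interface – no implementation needed; each Create() call
// resolves a fresh transient DownloadWebClient from the container
public interface IDownloadWebClientFactory
{
    DownloadWebClient Create();
}

public static class FactoryWiring
{
    public static IWindsorContainer Build()
    {
        IWindsorContainer container = new WindsorContainer();
        container.AddFacility<TypedFactoryFacility>();
        container.Register(
            Component.For<DownloadWebClient>().LifeStyle.Transient,
            Component.For<IDownloadWebClientFactory>().AsFactory());
        return container;
    }
}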

A WebClient needs to know whether it has to provide credentials for the Web proxy. We can store this information in a Configuration object, which in our case is a singleton.

A transient component states its dependency on a singleton through constructor or property injection. Again, this scenario should be easy to handle, since the transient object has a shorter lifespan than the singleton it depends upon.
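Continuing the sketch, the hypothetical DownloadWebClient would declare its dependency on the Configuration singleton through its constructor:

public class Configuration
{
    // a singleton holding application-wide settings
    public bool ProvideProxyCredentials { get; set; }
}

public class DownloadWebClient
{
    private readonly Configuration configuration;

    // injecting a singleton into a transient is unproblematic: the singleton
    // is guaranteed to outlive this short-lived object
    public DownloadWebClient(Configuration configuration)
    {
        this.configuration = configuration;
    }
}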

Transients Depending On Transients

This is where things can get messy. There are two possible scenarios when a transient object needs another transient object:

  1. They both share the same lifespan. Let’s say our WebClient wants to check if the internet connection is up before sending a request to the download server. It can use a Ping service to ping google.com (it’s immoral, I know). Since both WebClient and Ping will be used for a short time, you can inject the Ping dependency through WebClient’s constructor. When WebClient dies, Ping should die with it.
  2. The lifespans overlap, but are not the same. Example: our WebClient has detected a communication problem with the download server and sends a notice to the user. The notice is displayed as a modeless dialog, which can stay on the screen until the user clicks the Close button. So Notice is a transient object which does not die together with its creator, WebClient: once WebClient notifies the user, it is no longer needed. Notice, however, will live on until the user chooses to close it.

In the second case, having the Notice dependency in WebClient’s constructor or property is not an ideal option, for two reasons:

  1. It is not guaranteed the notice will even be used: if the download went OK, there is no need to notify the user. If your Notice component is expensive to create, this could be an issue.
  2. By using constructor injection for a transient component, you are effectively claiming ownership of that component. This conflicts with the fact that the component will live on even after your main component dies.

How do you solve this problem? My suggestion is to use factories – factories are usually singleton objects, so you end up with the “transient depending on singleton” case. Even better: Castle Windsor’s TypedFactoryFacility also offers a mechanism to release components created with such a factory.
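A sketch of that release mechanism, with hypothetical Notice names: by the facility’s default convention, methods returning a component resolve it, while void methods release the component passed to them:

public interface INotice
{
    void Show(string message);
}

// Windsor generates the implementation of this factory interface
public interface INoticeFactory
{
    // resolves a new transient INotice from the container
    INotice CreateNotice();

    // void methods are treated as release methods by the default
    // typed-factory component selector
    void Release(INotice notice);
}

The WebClient can then create the Notice through the factory and hand it over to the UI, and whoever closes the dialog releases it back to the container – decoupling the Notice’s lifetime from its creator’s.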

The last (and most problematic) case is…

Singletons Depending On Transients

The problem here is that your supposedly transient object will live for the whole duration of the application’s lifetime. While in some cases this could be a valid scenario, in a lot of cases it’s merely an oversight by the developer (especially when confronted with a huge number of components in a system).
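A contrived sketch of the smell, reusing the hypothetical types from above:

public class TileCache // registered as a singleton
{
    private readonly DownloadWebClient client;

    // the "transient" injected here is resolved once and then held for the
    // entire application lifetime, silently defeating its transient lifestyle
    public TileCache(DownloadWebClient client)
    {
        this.client = client;
    }
}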

Windsor Container Inspector

The solution I used in Maperitive is to write a unit test which checks for singleton-to-transient relationships in my container and fails if it finds any:

[Test]
public void SingletonsShouldNotRelyOnTransients()
{
    IWindsorContainer container = Program.CreateWindsorContainer(false);

    WindsorContainerInspector inspector = new WindsorContainerInspector(container);
    IList<KeyValuePair<ComponentModel, ComponentModel>> dependencies =
        inspector.FindDependencies(
            (a, b) => (a.LifestyleType == LifestyleType.Singleton
                       || a.LifestyleType == LifestyleType.Undefined)
                      && b.LifestyleType == LifestyleType.Transient);

    Assert.AreEqual(0, dependencies.Count);
}

Of course, the condition for the test failure could be modified a bit to exclude any “special cases”.

The code uses the WindsorContainerInspector, a little utility class I wrote to assist in inspecting the container:

public class WindsorContainerInspector
{
    public WindsorContainerInspector(IWindsorContainer container)
    {
        this.container = container;
    }

    /// <summary>
    /// Walks the kernel's dependency graph and returns all component pairs
    /// for which the supplied predicate returns true.
    /// </summary>
    public IList<KeyValuePair<ComponentModel, ComponentModel>> FindDependencies(
        Func<ComponentModel, ComponentModel, bool> dependencyPredicate)
    {
        List<KeyValuePair<ComponentModel, ComponentModel>> dependencies =
            new List<KeyValuePair<ComponentModel, ComponentModel>>();

        foreach (GraphNode node in container.Kernel.GraphNodes)
        {
            ComponentModel dependingNode = (ComponentModel)node;

            foreach (GraphNode depender in node.Dependents)
            {
                ComponentModel dependerNode = (ComponentModel)depender;

                if (dependencyPredicate(dependingNode, dependerNode))
                    dependencies.Add(
                        new KeyValuePair<ComponentModel, ComponentModel>(
                            dependingNode, dependerNode));
            }
        }

        return dependencies;
    }

    private readonly IWindsorContainer container;
}

Right now WindsorContainerInspector offers only one method, FindDependencies, which looks for dependencies based on the supplied predicate. I already have some ideas about other possible problems to detect in the container, but I’ll write about those next time.

Published by breki on 16 Jun 2009

Gallio: Starting And Stopping Selenium Server Automatically During Testing Using AssemblyFixture

UPDATE (June 17th): I’ve updated the code, see the reasons for it at the end of the post.

In previous projects I worked on, we made sure the Selenium Java server was running by starting it manually on our machines (both the developers’ and the build ones). This was cumbersome: restarting the build server meant we had to log on to it after the reboot and run the Selenium server again. Of course, a lot of times we forgot to do this, which caused the build to fail.

This got me thinking: is there a way in Gallio to specify some initialization (and cleanup) actions at the test assembly level? And of course, the answer is yes: using the AssemblyFixture attribute. This is what I like about Gallio/MbUnit: most of the time, the feature requests I come up with are already implemented.

So anyway, you can specify this attribute on a class and then add FixtureSetUp and FixtureTearDown attributes to its methods. These will be executed at the test assembly level: setup methods run before any test fixtures, and teardown methods run before the test assembly is unloaded by the test runner.

I then used this nice feature to start the Selenium server and then dispose of it after tests:

[AssemblyFixture]
public class SeleniumTestingSetup : IDisposable
{
    [FixtureSetUp]
    public void Setup()
    {
        // launch the Selenium RC server as a child Java process
        seleniumServerProcess = new Process();
        seleniumServerProcess.StartInfo.FileName = "java";
        seleniumServerProcess.StartInfo.Arguments =
            "-jar ../../../lib/Selenium/selenium-server/selenium-server.jar -port 6371";
        seleniumServerProcess.Start();
    }

    /// <summary>
    /// Performs application-defined tasks associated with freeing, releasing, or
    /// resetting unmanaged resources.
    /// </summary>
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    /// <summary>
    /// Disposes the object.
    /// </summary>
    /// <param name="disposing">If <code>false</code>, cleans up native resources. 
    /// If <code>true</code> cleans up both managed and native resources</param>
    protected virtual void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
                DisposeOfSeleniumServer();

            disposed = true;
        }
    }

    private void DisposeOfSeleniumServer()
    {
        if (seleniumServerProcess != null)
        {
            try
            {
                seleniumServerProcess.Kill();
                // give the server process up to 10 seconds to exit
                seleniumServerProcess.WaitForExit(10000);
            }
            finally 
            {
                seleniumServerProcess.Dispose();
                seleniumServerProcess = null;
            }
        }
    }

    private bool disposed;
    private Process seleniumServerProcess;
}

Note that the class is disposable – this ensures the Selenium server is stopped even if you run the tests in the debugger and then force the debugger to stop before it finishes the work. The Dispose method calls DisposeOfSeleniumServer, which does the actual work of killing the process and disposing of the evidence.

NOTE: This is the second version of the code. I needed to update the old one because I noticed that when running the tests in CruiseControl.NET, the Selenium server Java process was not stopped properly. The only way I could stop it was by killing it, which in general isn’t a good practice. The unfortunate side effect of this “killing” is that the CruiseControl.NET service cannot be stopped normally – it also has to be killed when you need to restart it. I’ll try to solve this problem in the future.

Published by breki on 08 Jun 2009

Gallio: Filter Attributes For Test Methods

There are three attributes which function as filters when running tests using any of Gallio’s test runners:

  • Pending: tests which are in development and currently don’t run should be marked with the Pending attribute. This means the test runner will skip them when running the build script.
  • Ignore: this attribute is used for marking tests which are never to be run (they are kept in the code as history). In general, it is a good practice to avoid such tests – you can get the history from your source control.
  • Explicit: tests marked with this attribute will only run when selected individually in Visual Studio (ReSharper, TestDriven.NET). They will not be run as part of the build script. Explicit tests are usually those which depend on a certain external system which cannot be guaranteed to be available at all times – we don’t want such tests to cause failures in our builds.

It is a good practice to supply these attributes with a string argument describing the reasons for marking the test.
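For example (the test names and reasons here are hypothetical):

[Test]
[Pending("Waiting for the payment gateway stub to be implemented")]
public void RefundIsAppliedToInvoice()
{
    // ...
}

[Test]
[Explicit("Depends on the external SMS gateway, which is not always available")]
public void SmsIsDeliveredThroughLiveGateway()
{
    // ...
}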

UPDATE: Jeff Brown kindly provided some additional information about these attributes:

Tests marked [Ignored] and [Pending] will show as Warning annotations in the test report in addition to being skipped. In ReSharper they will also be shown with a yellow warning stripe to remind you that they are there and need to be looked at.

You can also add your own annotations to various code elements with the [Annotation] attribute.

This is my first article in the “guidelines” series I plan to write in the future. I want to maintain these guidelines separately from any concrete project’s documentation, since in the past I always had to copy this kind of stuff from one project’s wiki to another.

Published by breki on 05 Mar 2009

Fix: Slow Debugging In Visual Studio


I’ve just found the cure for slow debugging in Visual Studio. By “slow” I mean waiting a couple of seconds after each debugger step. The solution was suggested by Jeff Brown in one of the Gallio Google Groups threads: turning off the “Enable property evaluation…” setting in the debugging options (Tools -> Options -> Debugging):

[Screenshot: the Visual Studio debugging options dialog]

After turning this off, I don’t notice any real delay between debugging steps. The downside is that you won’t get automatic updates of the values of objects’ properties in the Watch and other debugger windows. Instead, you get a nice little Refresh button next to each of the properties, and you’ll need to click it to get the current value:

[Screenshot: Refresh buttons next to property values in the Watch window]

I think this is a minor nuisance compared to the substantial increase in debugging speed. Not that I’m a big fan of heavy debugger usage anyway. To quote Scott Bellware:

Debugging code is a slow, time consuming process.  Time spent in a debugger is sloth time.  You might be thinking that you’re perfectly effective in a debugger and that you don’t have any objections to doing code validation in a debugger rather than in a well-factored unit test.  This is merely an assumption fed by how habituated you are to using a debugger.  Without having a TDD practice, you have no basis of comparison for how ineffective debugging is compared to writing well-factored unit tests for well-factored code.

Also check out Jeremy D. Miller’s posts about TDD and debugging.

Published by breki on 16 Jan 2009

Gallio: Running Tests In Parallel


Introduction

Yesterday we finally managed to get our tests running using our acceptance testing framework. I promise to write more about it some other time, but I’ll make a quick introduction now.

First, let’s start with the name of the framework: Accipio. The idea behind Accipio is to specify acceptance tests in an XML form which is then automatically translated into MbUnit test code. I guess you could call it a lightweight FitNesse – there’s no wiki, all test specifications are stored in XML files (which are then source-controlled). The XML is quite simple (you can see some initial brainstorming samples here).

But that’s not what I wanted to talk about now.

Time Is Money, They Say

While running the tests we determined that the whole test process took too long. We had around 100 test cases, each of which had to wait for 10 seconds after the initial action before it could assert the test conditions, which means at least 20 minutes of test running time (and we expect many more test cases to be written in the future). We’ll try to refactor the code so that this wait period is not necessary, but nevertheless these tests are written in a way that should allow parallel execution without any negative effects. In fact, parallelization would be welcome, since it mimics the “real” situation in production.

Luckily, with a little Googling we found a thread on the gallio-dev forum called “best way to parallelize a test suite” in which Jeff Brown (Gallio architect) discusses a new experimental addition to Gallio – the Parallelizable attribute. It can be applied both to test fixtures and to test methods. From what I discerned, applying Parallelizable to test fixtures means that two or more fixtures can run in parallel, while marking test methods as Parallelizable means that two or more test methods in the same fixture can run in parallel (I’ve simplified the description a little here; for more details please read the mentioned thread).

We needed the second option (parallelization of methods), so I downloaded the latest Gallio package and marked all of our test methods with the Parallelizable attribute…

[Test]
[Metadata("UserStory", "SMSUi.AddSubs.Schedule")]
[Parallelizable]
public void AddScheduledSubs()
{
    …

…and ran the tests with the Gallio.Echo runner. So far so good – the tests do run in parallel, although occasionally the runner throws some exceptions; we’ll need to investigate this further (after all, this is an experimental feature, so I’m expecting it to break once in a while ;).

You can set the approximate number of concurrent threads that will process the test code by setting the DegreeOfParallelism value:

[FixtureSetUp]
public void FixtureSetup()
{
    Gallio.Framework.Pattern.PatternTestGlobals.DegreeOfParallelism = 20;
}

NOTE: I’ve added this code to the fixture setup method because I didn’t know of a better place to put it. Also, it would be better not to hard-code the value like I did – use a configuration file instead.
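For instance, a minimal sketch of reading the value from the standard appSettings section – the “DegreeOfParallelism” key is a hypothetical entry in the test project’s App.config, and you need a reference to the System.Configuration assembly:

[FixtureSetUp]
public void FixtureSetup()
{
    // fall back to Gallio's default when the key is missing
    string setting =
        System.Configuration.ConfigurationManager.AppSettings["DegreeOfParallelism"];
    if (!string.IsNullOrEmpty(setting))
        Gallio.Framework.Pattern.PatternTestGlobals.DegreeOfParallelism =
            int.Parse(setting, System.Globalization.CultureInfo.InvariantCulture);
}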

Conclusion

Not all tests can be executed in parallel, of course. Once we integrate Selenium into the acceptance testing, we will have to be careful when selecting which tests should be parallelized and which should not – until we give Selenium Grid a try, we will have to run Web UI tests on a single thread. I guess the best thing to do is to keep non-parallelizable tests in fixtures separate from the parallelized ones.

Published by breki on 15 Jan 2009

Gallio: Setting Test Outcome Any Way You Like


In one of my previous posts I discussed using the Assert.Inconclusive() method to mark tests as “not finished yet” during the execution of the test (as opposed to declaring test outcomes using attributes such as [Pending]).

It turns out there’s a better way to do this (which Jeff Brown kindly pointed me to):

   throw new SilentTestException(TestOutcome.Pending, "To be implemented.");

SilentTestException allows you to mark the test with any outcome you like. Which is exactly what I was looking for.
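In context, a minimal sketch of a test using it (the test name is hypothetical):

[Test]
public void ScheduledSubscriptionIsRenewed()
{
    // marks this test's outcome as Pending without failing the test run
    throw new SilentTestException(TestOutcome.Pending, "To be implemented.");
}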

Published by breki on 09 Jan 2009

MbUnit: Inconclusive Test Results


UPDATE: Jeff Brown pointed me to a better way of doing some of the things discussed in this post, so I’ve updated the post.

One of the lesser known (and lesser documented) features of MbUnit and Gallio is marking tests as inconclusive:

[Test]
public void InconclusiveTest()
{
    if (WeDeterminedTheTestCannotBeRun)
        Assert.Inconclusive("Inconclusive message");

    // even if this throws, the outcome stays inconclusive
    WeThrowAnExceptionButItDoesNotMatter();
}

By calling Assert.Inconclusive() we tell the test runner it should mark this test case as inconclusive. Assert.Inconclusive() does not throw any exceptions and the test continues to run, but even if the later code throws an exception or some assert fails, the test outcome will still be marked as inconclusive:

      456 run, 455 passed, 0 failed, 1 inconclusive, 1 skipped (1 ignored)

The build will not fail if we have one or more inconclusive tests. How does this come in handy? Sometimes you have tests which access certain external resources like internet pages. You want to be able to run such tests without causing the build to fail if the internet connection is temporarily unavailable (I’m not saying this is a good pattern for writing tests, just giving an example). One way to do this is to first check the internet connection and mark the test as inconclusive if the connection is not available.
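A minimal sketch of that connection check – the test name is hypothetical, and NetworkInterface.GetIsNetworkAvailable() only tells you that some network connection is up, not that the target server is reachable:

[Test]
public void DownloadsDataFromLiveServer()
{
    // skip (rather than fail) the test when no network connection is available
    if (!System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable())
        Assert.Inconclusive("Internet connection is not available.");

    // ... the actual test code follows ...
}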

The second scenario (which we actually use in our acceptance testing framework) is when you rely on certain test facility methods in order to execute the tests. Since these methods are often implemented in parallel with the actual test code, we want to be able to mark them as unavailable until they are finished. We do this, again, by invoking Assert.Inconclusive() inside such methods, which causes all test code that uses these methods to have an inconclusive test result.

An alternative would be throwing NotImplementedExceptions, but we want to separate tests which actually failed from those which are not fully implemented.

There is a better way: Gallio: Setting Test Outcome Any Way You Like

Published by breki on 08 Jan 2009

Asserting That An Assertion Has Failed


UPDATE: Jeff Brown, one of the architects of MbUnit and Gallio, responded to my post suggesting other (=better) ways of checking assertions. I’ve added his suggestions at the bottom of the post.

We’re developing an acceptance testing framework (which I will write more about when it reaches some level of maturity) which will use MbUnit and Gallio to execute the test code. We developed some utility assertions of our own (which in turn use MbUnit’s Assert* methods) and we wanted to test them using MbUnit. So basically we wanted to unit test the unit test code ;).

An interesting problem occurred when we wanted to test that one of these assertions actually fails under certain conditions. MbUnit throws an AssertionException when an assertion fails, but this gets eaten by test runners as an indicator that the test case has failed (obviously). Of course, we didn’t want the test to fail, since we expect our assertion method to fail… OK, I know it sounds complicated, so let me show you the code instead of blabbering too much:

try
{
    runner.AssertSmsReceived("incorrect sms");
    Assert.Fail("Exception should have been thrown here");
}
catch (AssertionException ex)
{
    // this is to filter out an assertion for wrong SMS received.
    Assert.IsFalse(ex.Message.Contains("Exception should have been thrown here"));
}

Explanation: we expect runner.AssertSmsReceived() to throw an AssertionException. That’s why we catch this exception afterwards. But if the method has not failed, we force the failure with Assert.Fail(). Since both conditions throw the same type of exception (AssertionException), we check its message contents to find out which condition was actually met.

There’s probably a better way to do this, but I haven’t found it (other than throwing a different type of exception instead of calling Assert.Fail(), but I wanted to avoid that because Assert.Fail() gives a cleaner test result output). Or it’s just too late in the day for me to think…

Yes, there’s a better way (thanks Jeff):

// check if the received message is correct
AssertionFailure[] failures = AssertionHelper.Eval(
    () => runner.AssertSmsReceived("incorrect sms"));
Assert.AreEqual(1, failures.Length);
Assert.IsTrue(failures[0].Message.Contains("did not receive an expected SMS message"));

Jeff also posted a helper class which the Gallio guys use for testing MbUnit v3 asserts:

[TestFrameworkInternal]
public static AssertionFailure[] Capture(Gallio.Action action)
{
    AssertionFailure[] failures = AssertionHelper.Eval(action);

    if (failures.Length != 0)
    {
        using (TestLog.BeginSection("Captured Assertion Failures"))
        {
            foreach (AssertionFailure failure in failures)
                failure.WriteTo(TestLog.Default);
        }
    }

    return failures;
}
