Archive for the 'testing' Category

Published by breki on 15 Dec 2008

Continuous Integration Builds – Do’s And Don’ts


Since I haven’t posted anything on development stuff for a long time, I decided to write about something I think is very important – doing continuous integration (CI) builds the right way. Below is a collection of advice I’ve gathered over the years on what to do and what to avoid when developing build scripts if you’re oriented towards CI principles:

  • DO make the main build procedure easy to run. Create a “Build.bat” batch file that is obvious and simple to run. And place it in an obvious place. And make the main build target the default one. There’s nothing worse than spending the whole day digging through the code to find out how to build it (and this is a common problem once the main bulk of developers leave a project and someone else arrives a year later and is given the task of maintaining it).
  • DO document the main build procedure. No Asimov’s Foundation series here, just a few short sentences describing the build targets. Or you could simply add “description” attributes to your NAnt/MSBuild targets.
  • DO use the same build steps for local (developer’s) builds and CI server builds. The developer must be able to verify the build before he/she commits the code to the repository. Having a different procedure running on the CI server just means that sooner or later you’ll end up with half of your CI builds marked as failed because developers could not reproduce them on their local machines.
  • DO keep the main (“indicator”) build running under 10 minutes. Developers need to be able to detect any problems with their code changes as soon as possible. Forcing them to sit around and wait half an hour for feedback will only result in fewer commit cycles and more integration problems. “Continuous” means continuous, not “I’ll do it after lunch”.
  • DO separate the build procedure into several stages if your build takes longer than 10 minutes. The indicator build should make sure the code is built, analyzed, unit-tested and packaged. Everything else can be moved to the next stage(s), so…
  • DO separate tests into unit and integration/acceptance tests. Unit tests typically work on isolated classes (typically using mocks) and do not use external resources like databases, Web services and similar. This means they are fast, which makes them ideal for the first stage of the build. Integration tests, on the other hand, test interactions between various parts of the system. Mocks are still used, but only to mimic certain external systems. These tests typically run against an actual database, which means setting them up and running them is slow, so they should be moved to the later stages of the build (or even physically to a different CI server so the build can be parallelized). While we’re on the subject…
  • DO use test categories to separate unit tests from integration tests. This way you can have both types of tests in the same assembly while telling the test runner (like Gallio or NAnt) to run only certain categories of tests (see the short example after this list).
  • DO select some of the important integration tests as “smoke” tests and run them before all other tests. This way you won’t have to wait for all of the tests to finish before detecting that the build has failed. The important thing is to make the build fail fast so that bugs can be fixed as soon as possible.
  • DO use FxCop, StyleCop and similar tools – and right from the start of the project. These tools can be a very useful way to auto-review the code of less experienced developers in the team. Which means less work for the project lead – he can concentrate on the substance of the code and leave the form to be polished by the authors themselves. And FxCop can sometimes really discover bugs which would be difficult to detect otherwise.
  • DON’T rely on developers’ locally installed 3rd party libraries and tools to build the code. Instead, store all of the libraries and tools you need (OK, I’m not talking about VisualStudio and SQL Server here ;) ) under source control and reference them from there. Also, communicate to the developers the potential problems of having these tools installed in the GAC – the GAC is your enemy! Why? By keeping the 3rd party stuff under the project’s source control, your team has control over which versions of these tools are actually used in the project. Since people tend to work on different projects (sometimes at the same time), relying on their local environment means that sooner or later there will be a conflict or a hidden bug because of different versions. In our experience there’s really no need for any of the commonly used stuff (NUnit, MbUnit, Gallio, NCover, Sandcastle, FxCop… just take a look at the lib directory of one of our open source projects) to be installed on the machine in order to use it. One notable exception is TestDriven.NET, but then again, you don’t really use it to run builds from the command line.
  • DO separate the build script into a common one and one specific to the individual project. This will make your common script reusable for other projects and you will also have a chance to polish the script in small steps. The project-specific script should just define the stuff that’s unique for the project (for example, what files to include in the build package) and then call the common script to do the rest of the work. You can see a sample common NAnt script and a project-specific script which I managed to construct over 2 years of working on different .NET projects. Feel free to abuse it. We’re moving to a different building tool anyway, but I’ll write about this some other time.
  • DON’T spam your developers! Do not send them e-mail messages for each successful CI build. Set up the CI server to send e-mail notifications on failed and fixed builds only. This is the only way to make sure they are informed about problems with the build. Otherwise they’ll just ignore all CI e-mails altogether.
  • DO treat the database as just a script. Some people think of databases as some evil deities which, once set up, should be left alone or they will inflict some horrible curses on the developer who dares to touch them. Databases are just text files containing the SQL code and data and should be stored under the source control. I would go even one step further and say that they should be recreated/re-migrated as part of each build. This of course means that every developer should have a database engine installed on their development machine. But this is the only way to make sure your .NET code and DB code are synchronized. And that your SQL scripts work! Some would say that they have huge databases with millions of records and they can’t afford to obliterate them 10 times a day, but I would ask them: do you really need 1 million user records to test the code for reading user records? Don’t confuse integration tests and performance tests. Performance tests are usually not part of the CI build because they take too long and setting them up can be a bit of a pain.
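
To illustrate the test categories point from the list above, here’s a minimal sketch of how unit and integration tests can live in the same assembly while the runner is told which categories to execute in which build stage. The class, method and category names are made up for the example; I’m using NUnit’s [Category] attribute here, and MbUnit/Gallio offer an equivalent:

    using NUnit.Framework;

    [TestFixture]
    public class UserNameTests
    {
        // fast, isolated test: no category, so it runs in the first (indicator) stage
        [Test]
        public void TrimsWhitespaceFromUserName ()
        {
            Assert.AreEqual ("breki", "  breki  ".Trim ());
        }

        // slow test that would hit a real database: tagged so the runner can
        // defer it to a later build stage (or even a separate CI server)
        [Test]
        [Category ("Integration")]
        public void SavesUserToDatabase ()
        {
            // ... exercise the real repository against a real database here ...
        }
    }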


Published by breki on 21 Apr 2008

White Facade – Fluent Interface For Writing Tests Using White

A few days ago I wrote about my first experiences in using White. I mentioned I wanted to create a facade around the White API in order to avoid constantly retyping the same code for common use cases. Since fluent interfaces are in fashion now, I decided to create a fluent facade. Just joking, of course.

I’ll present the facade through a few tests of the Kosmos GUI, which is currently under construction. Kosmos GUI is an MDI application for displaying maps from OpenStreetMap geo data. You have to load a Kosmos project, which contains information about where to find the data and how to render it. So after starting the application, the first user action would typically be to open the project:

Kosmos1

After clicking the File|Open Project menu item, a dialog for selecting the Kosmos project file appears:

Kosmos2

Once you have selected the project file, a progress box is shown with an option to cancel the operation:

Kosmos3

If you let it finish, a map appears and some additional menu items are now available:

Kosmos4

Test Code

Here’s a sample test case which goes through the above steps:

[Test]
public void OpenProject()
{
   facade.ClickMenu ("File", "Open Project…")
      .FileDialog ("Open Kosmos Project File",
         @"D:\MyStuff\projects\OsmUtils\trunk\Data\Kosmos\KosmosProjectExample1.xml")
      .MainWindow ().ModalDialog ("Loading Kosmos Project")
      .MainWindow().Menu ("View", "Zoom In").AssertIsEnabled (true)
      .ClickMenu ("View", "Zoom In")
      .ClickMenu ("View", "Zoom Out")
      .ClickMenu ("View", "Zoom All");
}

After loading the project the test case executes some of the zoom functions available in the menu. And here’s a test case which cancels the project loading (and then checks that the View|Zoom In menu item is still not enabled):

[Test]
public void OpenProjectCanceled ()
{
   facade.ClickMenu ("File", "Open Project…")
      .FileDialog ("Open Kosmos Project File",
         @"D:\MyStuff\projects\OsmUtils\trunk\Data\Kosmos\KosmosProjectExample1.xml")
      .MainWindow().ModalDialog ("Loading Kosmos Project").Button ("Cancel").Click()
      .MainWindow ().Menu ("View", "Zoom In").AssertIsEnabled (false);
}

Hopefully the code is self-explanatory. The basic idea is for the facade to remember the current window or other UI element to perform actions on. You can change the active window at any moment. The facade is also supplied with some basic assertion methods useful in the test code (they throw InvalidOperationExceptions, so you can use them with any unit test framework).
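
Just to illustrate the mechanism, here’s a rough sketch (not the actual WhiteFacade source; it also assumes the underlying White menu item exposes an Enabled flag): each fluent method acts on the UI element the previous call selected and returns the facade itself, while assertion methods throw a plain InvalidOperationException so the facade stays independent of any particular test framework:

public WhiteFacade AssertIsEnabled (bool expectedEnabled)
{
    // 'currentMenuItem' stands for whatever UI item the previous fluent call
    // (e.g. Menu()) remembered; the Enabled check is an assumption about White's API
    if (currentMenuItem.Enabled != expectedEnabled)
        throw new InvalidOperationException (
            string.Format ("Expected the menu item to be {0}.", expectedEnabled ? "enabled" : "disabled"));

    // returning the facade itself is what makes the chaining possible
    return this;
}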

Oh I almost forgot. This is how you instantiate the facade:

WhiteFacade facade = WhiteFacade.Run (applicationFilePath);

And if the facade does not offer enough for you, you can always access the underlying White Application object through the facade.Application property.

Download

You can download the latest version of the WhiteFacade class from here. I have to warn you though: the class is in its infancy and will evolve as I work on testing the Kosmos GUI. It only covers some basic use cases, but I think those common cases represent the majority of GUI test code. I just wanted to give you an idea of how to make writing GUI test code as painless as possible.

Published by breki on 19 Apr 2008

Using White To Test Kosmos GUI

After a few weeks of work on my new framework for multi-document interfaces called BrekiViews (yes, I know I promised to write a little about it, but I just didn’t have enough time and concentration to put the brainstorms in my head into some coherent words) and refactoring the Kosmos code to use it, I’m finally able to show some basic stuff in the new Kosmos GUI application. It’s all pretty simple at the moment; I’ve got just one menu item working: File | Open Project. But I wanted to use this opportunity to start using White from ThoughtWorks to write automated GUI tests, since I was looking forward to trying White out.

Documentation (Or The Lack Of It)

Unfortunately, the first impressions aren’t that good. The worst problem is the lack of documentation. There are some pages written on the project’s CodePlex site and I also stumbled on a few posts with some more examples of how to use it, but in general the API is undocumented. The NDoc documentation that comes with the latest release contains just the API reference, without any human-written documentation. It even states that it’s based on .NET Framework 1.1, which was obviously autogenerated and is probably wrong, since the API includes generics. I haven’t checked the source code directly, though; maybe there is something there.

Yes, I know, it’s an open source project and I shouldn’t expect too much, but I still think that a project whose whole purpose is to be used through an API should contain at least some documentation. Otherwise you spend a lot of time experimenting with the API just to get results.

Keyboard Problems

Like most other UI testing frameworks, White too suffers from quirks. I wanted to start the “Open Project” menu action using the keyboard shortcut, so I tried with:

mainWindow.Keyboard.HoldKey (KeyboardInput.SpecialKeys.CONTROL);
mainWindow.Keyboard.Enter ("o");

Which ended up forcing me to restart the computer, since the OS started acting very funny, as if the Ctrl key was still being pressed. I guess I should have added

mainWindow.Keyboard.LeaveKey (KeyboardInput.SpecialKeys.CONTROL);

to release the Ctrl key, but I’m not sure, since I couldn’t find any documentation on using the keyboard in White. Anyway, a helper method that would accept a shortcut as a string (something like KeyboardInput.Press ("Ctrl+O")) would be very helpful.
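
In the meantime, a small helper along these lines would at least prevent the stuck-modifier problem. The method name is mine, not part of White’s API; it just wraps the three calls shown above and guarantees the release:

public static void PressShortcut (Window window, KeyboardInput.SpecialKeys modifier, string key)
{
    window.Keyboard.HoldKey (modifier);
    try
    {
        window.Keyboard.Enter (key);
    }
    finally
    {
        // always release the modifier, even if Enter() throws
        window.Keyboard.LeaveKey (modifier);
    }
}

With it, the Ctrl+O attempt becomes PressShortcut (mainWindow, KeyboardInput.SpecialKeys.CONTROL, "o");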

Clicking The Menus

After abandoning the keyboard approach, I tried to start “Open Project” by clicking on the menu item. I did some experimenting and finally managed to write a basic test; here’s the whole test fixture:

   [TestFixture]
    public class BasicTests
    {
        [Test]
        public void OpenProject()
        {
            Menu fileMenu = mainWindow.MenuBar.MenuItem ("File");
            fileMenu.Click();
            Menu openProjectMenu = fileMenu.SubMenu ("Open Project…");
            openProjectMenu.Click();
        }

        [SetUp]
        public void Setup()
        {
            application = Application.Launch (@"..\..\..\Kosmos.Gui\bin\Debug\Kosmos.Gui.exe");
            mainWindow = application.GetWindow ("Kosmos GUI", InitializeOption.NoCache);

            Assert.IsNotNull(mainWindow);
            Assert.IsTrue (mainWindow.DisplayState == DisplayState.Restored);
        }

        [TearDown]
        public void Teardown()
        {
            application.Kill();
        }

        private Application application;
        private Window mainWindow;
    }

White Facade

From my experience with other UI testing frameworks, if you intend to do more than a few simple UI tests it is a good idea to create some sort of facade around the testing API. This way you simplify the interface to the common use cases you typically need and eliminate any nasty quirks (like memory or resource leaks). Bil Simser has done something to that effect for White, although I don’t share his views on (not) using Setup and Teardown methods in unit tests.

Even better: why not implement a fluent interface facade? Something like

WhiteFacade.Run(kosmosGuiPath).Press ("Ctrl+O")

or

WhiteFacade.Run(kosmosGuiPath).ClickMenu ("File", "Open Project…")

would be much nicer than the above code, don’t you think? Okay, I’m oversimplifying, I know :), but I will try to do something in the direction of a fluent interface facade (something along the lines of the sketch below), since I don’t intend to abandon White just yet.
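
Just to give an idea of the direction (this is only a sketch, not the eventual facade code): ClickMenu would simply wrap the verbose menu-clicking code from the fixture above, with mainWindow being a field the facade keeps internally:

public WhiteFacade ClickMenu (string menuName, string subMenuName)
{
    // same steps as in the BasicTests fixture above, just hidden behind the facade
    Menu menu = mainWindow.MenuBar.MenuItem (menuName);
    menu.Click ();
    menu.SubMenu (subMenuName).Click ();

    // return the facade so the next call can be chained
    return this;
}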

Published by breki on 15 Feb 2008

Friday Goodies – 15. February

.NET Development

  • FastSharp – running C# code inside a text box
  • NDesk.Options – a callback-based program option parser for C#


Published by breki on 05 Feb 2008

Rhino.Mocks Do() handler using anonymous delegates

A reminder post for myself on how to use the Rhino.Mocks Do() handler together with anonymous delegates. This is useful when you want to set a method call expectation and the method accepts some complex type as a parameter. Using assertions inside the delegate, you can check the parameter’s value with your own custom logic. There are other ways of achieving this, of course, but this one is elegant for simple (few lines of code) cases.

The example code starts with defining an IStringProcessor interface (which will be mocked):


    public interface IStringProcessor
    {
        void ProcessString (string value);
    }

This is an example interface, just to demonstrate the power of the Do() handler.

In the test method we want to make sure that all calls to the IStringProcessor.ProcessString() method provide strings which contain the ‘text’ substring. We then do the test by calling this method with three different string parameters:
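
(In the snippet below, mocks is a Rhino.Mocks MockRepository instance, e.g. a private MockRepository mocks = new MockRepository(); field initialized before each test.)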


    [Test]
    public void RhinoDoExample ()
    {
        IStringProcessor mockProcessor = mocks.CreateMock <IStringProcessor> ();

        mockProcessor.ProcessString (null);
        LastCall.IgnoreArguments ().Do ((Action<string>)delegate (string value)
            {
                Assert.IsTrue (value.Contains ("text"));
            }).Repeat.Any();

        mocks.ReplayAll();

        mockProcessor.ProcessString ("This is a dummy text.");
        mockProcessor.ProcessString ("This is a dummy text2.");
        mockProcessor.ProcessString ("This is a dummy.");

        mocks.VerifyAll();
    }

The test method will fail because of the third call: the string “This is a dummy.” does not contain the required ‘text’ substring.

Published by breki on 01 Feb 2008

Friday Goodies – 01. February

