Exact Magic Software, LLC: Precision tools for serious developers

Archive for April, 2007

Orcas Test Support

Looking forward to future versions, we’re looking at bridging the gap to provide NUnit, MBUnit, Selenium, and maybe CSUnit testing support inside the Team System test framework. I’m assuming folks will start moving to the official MS Test solution as Orcas rolls out, which leaves you in a pickle: what do you do with all your NUnit tests? Convert them? That seems like a lot of work, and possibly error prone. And what about third-party components you use that ship with integrated tests, or multiple project teams on multiple schedules?

So, that leaves me with the following theory – it’s better to integrate with what will be the omnipresent MS Test solution by making plug-ins that run all your existing tests as-is, without modification. That way you can mix and match the best test tool for each job, yet have all the results merge together into the one collected test output provided by the Orcas test system.

  • Mark the failure line in the editor
  • Bridge Integration
    • NUnit
    • MBUnit
    • CSUnit
    • Selenium
  • Allow ‘Debug Tests’ from the context menu in the editor
  • Provide a Test Statistics panel with Pass/Fail/Inconclusive ratios and charts
  • Custom Test Types
    • Repeating / Timed
    • ‘With Coverage’
    • ‘With Performance’

What do you think?

  Permalink |  Comments[2]

NUnit Integrated

TestMatrix is all about making NUnit easier and faster to use while you are programming. It’s most useful if you are already using NUnit for testing and are looking for a faster way to run your tests, gather coverage, and debug your tests without shuffling through a lot of windows or external tools: it gives you direct feedback in the editor about your test cases. It’s a free trial to download – give it a shot and it’ll save you time testing.

We’ve got:

  • NUnit 2.4
  • Code Coverage, Memory, and Performance Profiling
  • Full 64-bit support (better than NCover!)

From your test cases, just right click and run. The menu even shows you which test you’re about to run, and whether you are in a [TestFixture] or a [TestCase].
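
As a concrete example, here is the kind of NUnit 2.4 fixture those context-menu entries key off. The fixture and test names are hypothetical (ours, not from any shipping project); any [TestFixture] in your solution works the same way. Build against nunit.framework.dll:

```csharp
// Hypothetical NUnit 2.4 fixture; requires a reference to nunit.framework.dll.
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class StackTests
{
    [Test]
    public void PushThenPop_ReturnsSameItem()
    {
        // Pushing an item and popping it should return the same value.
        Stack<int> stack = new Stack<int>();
        stack.Push(42);
        Assert.AreEqual(42, stack.Pop());
    }

    [Test]
    public void NewStack_IsEmpty()
    {
        // A freshly constructed stack holds no items.
        Assert.AreEqual(0, new Stack<int>().Count);
    }
}
```

Right-click inside PushThenPop_ReturnsSameItem and you run just that test; right-click on the class and you run the whole fixture.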

Right there in the editor, the test result is marked with a colored indicator (and you can pick the colors).

And most convenient: when you have a failed test, the failure and its root cause are shown to you right there in the editor with the ‘fingers’ test-failure marker and a tooltip.

And, we’ve got an NUnit GUI-style test ‘tree’ as a docking window.

  Permalink |  Comments[0]

More than One Way to Test it

Sometimes when I’m testing, I just want to smoke test the entire solution, particularly when I’ve just synced up multiple changes from the rest of the team. This is when I use TestMatrix | Test Explorer and pick the solution in the drop-down.

From here I can just hit the play arrows and run the entire solution. See the solution icon, and the nested assembly icon? I can click around and explore the tests visually as I would with NUnit GUI, but one notch more conveniently, avoiding the window shuffle of toggling back and forth.

I also tend to work in one assembly/project at a time, making modular changes. This is where selecting a single project makes sense. The drop-down is filtered to show only those projects that reference NUnit, to keep it nice and short. So whether exploring or running tests, I can work on the whole solution or a single project.

If I’m just working in a single class, doing pure TDD unit testing as I code, I tend to run the tests right from the editor with the context menu or Ctrl-R,T as a hotkey (remember it as Control Run Test).

And having run the test, the test results show up right there in the editor.

I think about it in terms of big test runs and small test runs. Big runs I tend to do with the explorer, mainly having been hooked on NUnit GUI early on. Small runs I tend to do right in the editor, similar to TestDriven.NET – but with graphical feedback right on the test case. Pass/Fail is a bit more ‘in your face’. Particularly on failed tests, which is what it’s all about. Just hover over the ‘fingers’ – the 5 horizontal markers. This shows you exactly where a test failed, and prints out the message and stack trace. You can see why your test failed without a lot of hopping around.

  Permalink |  Comments[1]

Punishing Test Coverage

Looking around for something complicated to test that folks actually use, I decided to pick on Lucene.NET. My idea was to make sure the code coverage works well on what is a complex piece of software with some really complex tests – and that it isn’t so painfully slow that nobody would ever actually use it.

I ran the tests in Lucene 1.9rc1. After a little bit of setup, and after changing the NUnit references to 2.4 using the Fix NUnit References option in the Test Explorer, I ran the entire Lucene test solution using the Test Explorer. 301 tests ran with coverage in 3 minutes 9 seconds. There were a few failures in there – it looks like the Lucene tests make at least a few web connections that I hadn’t set up my local IIS to respond to.

As a comparison, I ran again with coverage disabled – I just hit the toggle button in the Test Explorer. 2 minutes 8 seconds.

Coverage does have some overhead, since it computes activity on each individual line, but not an overwhelming overhead. I’ve had profiler/coverage experiences in the past with 400% to 800% overhead – this one is just under 50% (189 seconds versus 128).
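
As a quick sanity check on that figure – a throwaway snippet, not part of TestMatrix – the timings above work out like this:

```csharp
using System;

class CoverageOverhead
{
    static void Main()
    {
        // Timings from the Lucene.NET run above.
        int withCoverage = 3 * 60 + 9;     // 189 seconds
        int withoutCoverage = 2 * 60 + 8;  // 128 seconds

        // Extra time as a percentage of the uninstrumented run.
        double overheadPercent =
            100.0 * (withCoverage - withoutCoverage) / withoutCoverage;

        Console.WriteLine("{0:F0}% overhead", overheadPercent);  // prints "48% overhead"
    }
}
```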

  Permalink |  Comments[0]

It Was Mailframe

After having had Mailframe and TestRunner on the market for 3 years now, we’ve decided to take things a bit more ‘company’. First off – Mailframe doesn’t say anything about what we’re doing; Exact Magic does. We’re looking to provide programmers with tools that work for you: taking the drudgery out, collecting data in the background, automating common tasks. We want it to seem like magic.

StudioTools is the combination of TestRunner and CodeSpell, along with some new and rather cool navigation tools that let you get right to your types (Alt-G) or open a file by quick name match (Alt-O). We’ve incorporated feedback from hundreds of TestRunner 2005 users. The biggest thing: we made it a lot faster. Coverage and profiling are really fast – in our solutions, we’ve seen overhead as low as 25% when running unit tests to collect coverage metrics. The idea here is coverage so fast you can leave it on all the time. Folks who have licenses for TestRunner 2005 or CodeSpell 2005 should contact us about upgrade pricing.

  Permalink |  Comments[2]