
Sunday, 01 November 2009

Code Metrics

NOTE: This is a repost of an old post as I am moving onto the Blogger platform

I've been wanting to write a post about code metrics for quite a while now - mostly to organize my thoughts on the topic, as it is something I want to introduce at work, but also to get feedback from other people on whether and how they are using metrics to help them craft quality code.  After reading Jeremy Miller's post on the topic, I thought I might as well take the plunge.  I'll start by musing over which metrics I have found useful, then continue by looking at tool support for generating these metrics, and finish off by considering when to use them.


I am not going to cover all the different metrics in detail, but will instead highlight what seem to me to be the most useful ones and refer to articles where other people have done an excellent job of covering these metrics in detail.  Here are the reference articles that I used:
  1. Robert C. Martin's article on OO design quality metrics [1]
  2. Wikipedia's summary of these package metrics [2]
  3. Kirk Knoernschild's excellent introductory article on metrics with sample refactorings included [3]
  4. Patrick Smacchia's (developer of NDepend software) excellent coverage on all the types of metrics supported by NDepend [4]
  5. Write up on the software metrics supported by the Software Design Metrics software [5]

Size metrics

Size metrics are consistently good indicators of fault-proneness: large methods/classes/packages contain more faults [5].
  • Source Lines Of Code (SLOC) measures the number of lines of code.  To be really useful, comment lines and statements broken across multiple lines need to be factored out.  Some people refer to this as logical LOC vs. physical LOC. 
"2 significant advantages of logical LOC over physical LOC are:
  • Coding style doesn’t interfere with logical LOC. For example the LOC won’t change because a method call is spawn on several lines because of a high number of argument.
  • logical LOC is independent from the language. Values obtained from assemblies written with different languages are comparable and can be summed." [4]
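As a rough illustration of the difference (a naive Python sketch, not how NDepend or any real tool computes it; production tools work on IL or parse trees):

```python
def count_loc(source):
    """Naive physical vs. logical LOC: blank lines and // comment lines
    are excluded from the logical count. A real logical LOC would also
    fold a statement spanning several physical lines into one."""
    physical = logical = 0
    for line in source.splitlines():
        physical += 1
        stripped = line.strip()
        if stripped and not stripped.startswith("//"):
            logical += 1
    return physical, logical

code = """int Add(int a,
        int b)
{
    // sum the arguments
    return a + b;
}"""
print(count_loc(code))  # -> (6, 5): the comment line is not counted as logical
```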

Complexity metrics

There is a direct correlation between complexity and the defect rate of software, so keeping code simple is a solid first step toward lowering the defect rate of software [3].
  • Cyclomatic Complexity (CC) measures code complexity by counting the number of linearly independent paths through code. Complex conditionals and boolean operators increase the number of linear paths, resulting in a higher CC. Methods with a CC of five or higher are good refactoring candidates to help ensure code remains easy to understand [3]
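To make the count concrete, here is a small, made-up function in Python with its decision points annotated (CC = 1 + number of decision points):

```python
def shipping_cost(weight, express, country):
    # Decision points: if (1), elif (2), the `and` (3), if (4) => CC = 1 + 4 = 5
    if weight <= 0:
        raise ValueError("weight must be positive")
    elif express and country == "US":
        cost = 20
    else:
        cost = 5
    if weight > 10:
        cost += 10
    return cost

# With CC = 5 this function already meets the refactoring threshold above.
print(shipping_cost(12, True, "US"))  # -> 30
```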

Package metrics

Coupling/Dependency metrics
Excessive dependencies between packages compromise architecture and design. Complex dependencies inhibit the testability of your system and present numerous other challenges, as described in [1] and [3].
  • Afferent Coupling (Ca) measures the number of types outside a package that depend on types within the package (incoming dependencies). High afferent coupling indicates that the concerned packages have many responsibilities. [1]
  • Efferent Coupling (Ce) measures the number of types inside a package that depend on types outside of the package (outgoing dependencies). High efferent coupling indicates that the package in question depends heavily on other packages. [1]
It stands to reason that:
"...afferent and efferent coupling allows you to more effectively evaluate the cost of change and the likelihood of reuse. For instance, maintaining a module with many incoming dependencies is more costly and risky since there is greater risk of impacting other modules, requiring more thorough integration testing. Conversely, a module with many outgoing dependencies is more difficult to test and reuse since all dependent modules are required ... Concrete modules with high afferent coupling will be difficult to change because of the high number of incoming dependencies. Modules with many abstractions are typically more extensible, so long as the dependencies are on the abstract portion of a module." [3]
  • Instability (I) measures the ratio of efferent coupling (Ce) to total coupling: I = Ce / (Ce + Ca). This metric is an indicator of the package's resilience to change. The range for this metric is 0 to 1, with I=0 indicating a completely stable package and I=1 indicating a completely unstable package. [1]
  • Abstractness (A) measures the ratio of the number of internal abstract types (i.e. abstract classes and interfaces) to the total number of internal types. The range for this metric is 0 to 1, with A=0 indicating a completely concrete package and A=1 indicating a completely abstract package. [1]
  • Distance from main sequence (D) measures the perpendicular normalized distance of a package from the idealized line A + I = 1 (called the main sequence). This metric is an indicator of the package's balance between abstractness and stability. A package squarely on the main sequence is optimally balanced with respect to its abstractness and stability. Ideal packages are either completely abstract and stable (I=0, A=1) or completely concrete and unstable (I=1, A=0). The range for this metric is 0 to 1. [1][4]
"A value approaching zero indicates a module is abstract in relation to its incoming dependencies. As distance approaches one, a module is either concrete with many incoming dependencies or abstract with many outgoing dependencies. The first case represents a lack of design integrity, while the second is useless design." [3]
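Putting the three formulas above together, a quick Python sketch with made-up package numbers:

```python
def instability(ca, ce):
    # I = Ce / (Ce + Ca): 0 = completely stable, 1 = completely unstable
    return ce / (ce + ca)

def abstractness(abstract_types, total_types):
    # A = internal abstract types / total internal types
    return abstract_types / total_types

def distance_from_main_sequence(a, i):
    # Normalized distance from the idealized line A + I = 1
    return abs(a + i - 1)

# Hypothetical package: 2 incoming dependencies, 8 outgoing,
# 1 abstract type out of 10
i = instability(ca=2, ce=8)    # 0.8: mostly depends on others
a = abstractness(1, 10)        # 0.1: mostly concrete
print(round(distance_from_main_sequence(a, i), 2))  # -> 0.1, close to the main sequence
```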
Cohesion metrics
"A low cohesive design element has been assigned many unrelated responsibilities. Consequently, the design element is more difficult to understand and therefore also harder to maintain and reuse. Design elements with low cohesion should be considered for refactoring, for instance, by extracting parts of the functionality to separate classes with clearly defined responsibilities." [5]
  • Relational Cohesion (H) measures the average number of internal relationships per type. Let R be the number of type relationships that are internal to this package (i.e. that do not connect to types outside the package). Let N be the number of types within the package. H = (R + 1) / N. The extra 1 in the formula prevents H=0 when N=1. The relational cohesion represents the relationships that this package has between its types.  As classes inside a package should be strongly related, the cohesion should be high. On the other hand, values that are too high may indicate over-coupling. A good range for relational cohesion is 1.5 to 4.0; packages where H < 1.5 or H > 4.0 might be problematic. [4]
  • Lack of Cohesion of Methods (LCOM) The single responsibility principle states that a class should not have more than one reason to change. Such a class is said to be cohesive. A high LCOM value generally pinpoints a poorly cohesive class [4]
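The relational cohesion formula is easy to sanity-check with a small Python sketch (the numbers are made up):

```python
def relational_cohesion(internal_relationships, types):
    # H = (R + 1) / N; the extra 1 avoids H = 0 for a one-type package
    return (internal_relationships + 1) / types

# Hypothetical package with 5 types and 9 internal relationships
h = relational_cohesion(9, 5)
print(h, 1.5 <= h <= 4.0)  # -> 2.0 True: inside the suggested 1.5..4.0 band
```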
Inheritance metrics
"Deep inheritance structures are hypothesized to be more fault-prone. The information needed to fully understand a class situated deep in the inheritance tree is spread over several ancestor classes, thus more difficult to overview.  Similar to high export coupling, a modification to a design element with a large number of descendents can have a large effect on the system." [5]
  • Depth of Inheritance Tree (DIT) measures the number of base classes for a class or structure.  Types where DIT is higher than 6 might be hard to maintain. However, this is not a hard rule, since your classes might sometimes inherit from third-party classes which already have a high depth of inheritance. [4]


When it comes to tools, the Mercedes-Benz of .NET code metrics tools, from my point of view, has to be NDepend 2.0.  NDepend 2 provides more than 60 metrics (including all of the metrics listed above) and integrates into your automated build via support for MSBuild, NAnt and CruiseControl.NET.  Browse to here for a sample report and here for a demo on how to integrate it into your build. 
There is a visual GUI (VisualNDepend) that allows you to browse your code structure and evaluate the metrics, as well as a console application with which you can generate the metrics.  Patrick has also created CQL (Code Query Language), which allows NDepend to treat your code as a database, with CQL being the query language with which you can check assertions against this database. CQL is consequently similar to SQL and supports the typical SELECT TOP FROM WHERE ORDER BY patterns. Here is an example of a CQL query:

WARN IF Count > 0 IN SELECT METHODS WHERE NbILInstructions > 200 ORDER BY NbILInstructions DESC
// METHODS WHERE NbILInstructions > 200 are extremely complex and
// should be split in smaller methods.
How cool is this! To quote:
"CQL constraints are customisable and typically tied with a particular application. For example, they can allow the specification of customized encapsulation constraints, such as, I want to ensure that this layer will never use this other layer or I want to ensure that this class will never be instantiated outside this particular namespace."
VisualNDepend also provides a CQL editor which supports IntelliSense and verbose compile error descriptions to make writing CQL queries a lot easier.  Enough said!  Browse to here for a complete overview of the NDepend 2 features.
Other tools that you can have a look at include SourceMonitor, DevMetrics, Software Design Metrics and vil, to name a few. vil does not support .NET 2.0 and does not seem to be under active development. DevMetrics, after being open-sourced, seems to have stagnated, with no visible activity on SourceForge. SourceMonitor is under active development and supports a variety of programming languages; however, it supports only a small subset of the metrics mentioned, which does not include important metrics like efferent and afferent coupling. Software Design Metrics takes a novel approach in that it measures complexity based on the UML models for the software. This has the advantage of being language independent, but you obviously need UML models to run the analysis.

When to use

When should one use these metrics? I agree with Jeremy Miller in his post that the metrics should not replace the visual inspection/QA process and be performed in addition to it. It would be nice to have these metrics at hand to assist in the QA process though. I also agree with Frank Kelly in his post that a working system with no severity 1/2 errors and happy end users are more important than getting the right balance of Ca/Ce or whatever metric you are interested in.
I think I will stick with an approach of identifying a subset of useful metrics and using these as part of an overall process of static code analysis on a regular basis. When I say regular basis I feel it should be part of your continuous build process to prevent people from committing code into your repository that does not satisfy your constraints. With a tool like NDepend you can create your own custom level of acceptance criteria by which the build will fail/succeed and exclude metrics that you feel should not apply to your code base.
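To sketch what such an acceptance gate could look like (the metric names and thresholds below are purely hypothetical, not NDepend's), a build step could collect metric values and fail whenever one exceeds an agreed limit:

```python
# Hypothetical quality-gate thresholds; a CI step fails the build when violated
THRESHOLDS = {
    "max_cyclomatic_complexity": 15,
    "max_method_loc": 80,
    "max_instability": 0.9,
}

def check_metrics(metrics):
    """Return a list of threshold violations; an empty list means the gate passes."""
    violations = []
    for name, limit in THRESHOLDS.items():
        if metrics.get(name, 0) > limit:
            violations.append("%s = %s exceeds limit %s" % (name, metrics[name], limit))
    return violations

report = {"max_cyclomatic_complexity": 22, "max_method_loc": 60, "max_instability": 0.5}
for violation in check_metrics(report):
    print(violation)  # a real build script would exit non-zero here
```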
As mentioned, the code metrics should form part of a bigger quality process that includes:
  • Visual inspection/QA via peer code reviews (as mentioned having the metrics via a tool like VisualNDepend can greatly assist in the QA)
  • Automated code standards check (I prefer FxCop)
  • Automated code metric check (NDepend seems like the tool to use here)
  • Automated code coverage statistics (I prefer NCover and NCoverExplorer)
Well, that basically covers my thoughts on the topic for now. I'm interested to know how many people are actively using metrics and what metrics they are using. I'd also like to know what processes or tools people are using to generate these metrics on their code. Let me know what's working for you in your environment.

Code Reviews

NOTE: This is a repost of an old post as I am moving onto the Blogger platform

Code reviews are a proven, effective way to minimize defects, improve code quality and keep code maintainable.  They encourage team collaboration and assist with mentoring developers. Yet not many projects employ code reviews as a regular part of their development process. Why is this?  Programmer egos and the hassle of packaging source code for review are often cited as reasons for not doing code reviews.

I feel that code reviews should form part of a good code quality control process that includes:

  • Visual inspection/QA via peer code reviews
  • Automated code standards check (I prefer FxCop)
  • Automated code metrics check (I prefer NDepend)
  • Automated code coverage statistics (I prefer NCover and NCoverExplorer)

In this post I will spend some time looking at code reviews.  I'll start by considering different styles of code review and some code review metrics. I'll then move on to some thoughts on review frequencies and best practices for peer code review.  I'll finish off by considering some tools that can assist with the code review process itself.


NOTE: This is a repost of an old post as I am moving onto the Blogger platform

In previous posts about Code Metrics and Code Reviews, I explored some metrics and techniques that I felt should form part of any good software quality control process.  One of the tools that I mentioned is FxCop.  In this post I take a closer look at FxCop.  I start by looking at how FxCop works and how you can fit it into your development process.  I then consider the different rule sets to use and look at how you can utilise FxCop to guide your VS 2003/2005/2008 development efforts.  I finish off the article by linking to articles that show you how to develop your own custom FxCop rules.

Continuous Integration in .NET: From Theory to Practice

NOTE: This is a repost of an old post as I am moving onto the Blogger platform

Continuous Integration (CI) is a popular incremental integration process whereby each change made to the system is integrated into the latest build. These integration points can occur continuously on every commit to the repository or at regular intervals, like every 30 minutes.  They should, however, not be more than a day apart, i.e. you need to integrate at least once daily.  In this article I take a closer look at CI.  The article is divided into two main sections: theory and practice.  In the theory section, I consider some CI best practices, look at the benefits of integrating frequently and reflect on some recommendations for introducing CI into your environment.  The practice section provides an in-depth example of how to implement a CI process using .NET development tooling.  I conclude the article by providing some additional, recommended reading.



I used the following references in creating the article:

Continuous Integration in .NET: From Theory to Practice 2nd Edition

NOTE: This is a repost of an old post as I am moving onto the Blogger platform

Last year I created a guide on implementing Continuous Integration (CI) for a .NET 2.0 development environment.  The guide illustrates how to create a complete CI setup using VS 2005 and MSBuild (no NAnt) together with tools like FxCop, NCover, TypeMock, NUnit, Subversion, InstallShield, QTP, NDepend, Sandcastle and CruiseControl.NET.  The good news is that I spent some time during the last 2 weeks greatly improving the setup for use on a new VS 2008 project, and I have decided to release a 2nd edition of the guide covering the much improved setup.  Instead of creating another series of blog posts to cover the content, I'm releasing the 2nd edition only as a downloadable PDF guide together with all the associated code and build artefacts.  This will allow new teams to get up and running with CI a lot quicker.

For readers of the first edition of the guide, the most notable differences between the second edition and the first edition of the guide are:

  1. Updated to use VS 2008, .NET 3.5 and MSBuild 3.5 (including new MSBuild features like parallel builds and multi-targeting).
  2. All tools (NUnit, NDepend, NCover etc.) are now stored in a separate Tools folder and kept under source control. The only development tools a developer needs to install are VS 2008, SQL Server 2005 and Subversion. The rest of the tools are retrieved from the mainline along with the latest version of the source code.
  3. Added the CruiseControl.NET configuration (custom style sheets, server setup etc.) to source control and created a single step setup process for the build server. This greatly simplifies the process of setting up a new build server.
  4. Changed from using InstallShield to Windows Installer XML (WiX) for creating a Windows installer (msi).
  5. Added support for running MbUnit tests in addition to the NUnit tests.
  6. Added support for running standalone FxCop in addition to running VS 2008 Managed Code Analysis.
  7. Added targets to test the install and uninstall of the Windows installer created.
  8. Consolidated the CodeDocumentationBuild to become part of the DeploymentBuild.
  9. Removed the QTP integration as this was not a requirement for the new project. If you want to integrate QTP, please refer to the QtpBuild of the first edition of the guide.
  10. Used the latest version of all the tools available.  The tools used in the guide are VS 2008, Subversion, CruiseControl.NET, MSBuild, MSBuild.Community.Tasks, NUnit/MbUnit, FxCop, TypeMock/Rhino.Mocks, WiX, Sandcastle, NCover, NCoverExplorer and NDepend.

I hope you find it to be a useful resource for assisting you with creating your own CI process by harnessing the power of MSBuild!  If you have any questions, additional remarks or any suggestions, feel free to drop me a comment.


Here are the links:
  1. PDF Guide
  2. Code and Build artifacts

Saturday, 31 October 2009

My Ultimate .NET Development Tools 2010 Edition

Here is my 2010 updated list of development tools that I prefer to use when doing .NET development.  I specifically decided to not include any third party control/report libraries.  I focus instead on the tools that assist me in crafting high-quality code quickly and effectively.


  • IDE = Develop/generate/refactor code within the VS IDE or separate IDE
  • SCM = Software Configuration Management (Source Control etc.)
  • TDD = Test Driven Development
  • DBMS = Database Management Systems
  • CI = Continuous Integration
  • FR = Frameworks (Persistence, AOP, Inversion of Control, Logging etc.)
  • UT = Utility Tools
  • CA = Code Analysis (Static + Dynamic)
  • TC = Team Collaboration (Bug tracking, Project management etc.)
  • MD = Modelling
  • QA = Testing Tools
  • DP = Deployment (Installations etc.)



* = free/open source
  1. [IDE] Visual Studio 2010 Premium Edition
  2. [IDE] ReSharper for refactoring, unit test runner and so much more
  3. [IDE] CodeSmith for generating code.  Also consider T4 with Clarius’s Visual T4 Editor.  
  4. [IDE]* GhostDoc for inserting xml code comments
  5. [IDE] Altova Xml Suite for any xml related work.  XmlPad is the best, free alternative I know of.
  6. [DBMS] SqlServer 2008 for DBMS
  7. [SCM]* Subversion for source control
  8. [SCM]* TortoiseSVN as windows shell extension for Subversion
  9. [SCM] VisualSVN for integration of TortoiseSVN into VS.  AnkhSVN is the best, free alternative I know of.
  10. [SCM]* KDiff3 for merging
  11. [TDD]* NUnit as preferred xUnit testing framework
  12. [TDD]* moq as mock framework.
  13. [TDD] NCover for code coverage stats
  14. [CI]* TeamCity as build server
  15. [CI]* MSBuild Extension Pack for additional MSBuild tasks.
  16. [FR]* log4net as logging framework.  Also see Log4View for an excellent UI for the log files.
  17. [FR]* ANTLR and ANTLRWorks for creating custom DSL’s.
  18. [FR] PostSharp as Aspect Oriented Programming framework
  19. [FR]* Ninject as IoC container
  20. [FR]* RunSharp for generating IL at run-time
  21. [FR] MindScape LightSpeed as my Object-Relational-Mapper.  NHibernate is the best free alternative I’m aware of. 
  22. [UT]* Reflector to drill down to the guts of any code library (also check-out the nice plug-ins)
  23. [UT] Silverlight Spy to dissect any Silverlight application.
  24. [UT] RegexBuddy for managing those difficult regular expressions.  Regulator is the best, free alternative I know of. 
  25. [UT]* LINQPad as an easy way to query SQL databases using LINQ and as a general scratchpad application to test C#/VB.NET code snippets.
  26. [UT]* Fiddler to debug all your HTTP traffic in IE.   Also see the neXpert plugin for monitoring performance problems.
  27. [UT]* Firebug to assist with testing web applications running in Firefox. Also see YSlow add-on for performance testing and Web Developer add-on for additional Firefox web development tools.
  28. [CA]* FxCop to enforce .NET coding guidelines
  29. [CA] NDepend to get all the static code metrics I'd ever want
  30. [CA] ANTS Profiler for performance and memory profiling
  31. [MD] Enterprise Architect to do UML Modelling and Model Driven Design if required. Alternatively use Visio with these simple templates
  32. [MD]* FreeMind as mind mapping tool
  33. [TC]* ScrewTurn Wiki for team collaboration
  34. [QA]* Eviware soapUI for functional and load testing of SOA web services
  35. [QA]* Telerik WebAii Testing Framework for automated regression testing of Web 2.0 apps
  36. [DP]* Windows Installer XML (WiX) for creating Windows Installers

Migrating blog onto Blogger

I am in the process of migrating my old blog onto Blogger, as the current host seems to be extremely slow these days and is attracting a lot of spam as well.  Those of you using my FeedBurner subscription won’t have to do a thing, as I’ve rerouted it to my new Blogger site.  I’ll be re-posting some of the most popular content of my old blog just to ensure the continuity of the information going forward.  Sorry for the inconvenience :-|

Integrating your Silverlight Test Run Results into TeamCity

We’ve been using TeamCity with great success at our company to do continuous integration.  We have build configurations defined for building and deploying our On Key suite of software, for running our suites of tests, and for generating our static code analysis metrics. 


One of the problems we faced was integrating the Silverlight UI test results generated using the Microsoft Silverlight Test Framework into TeamCity.  TeamCity comes with out-of-the-box support to automatically pick up and display the test results generated in the NUnit and MSTest report formats.  However, the Microsoft Silverlight Test Framework works differently to NUnit and MSTest, as it runs as a Silverlight application.  The same Silverlight security sandbox restrictions therefore also apply to the Silverlight Test Framework.  This makes it more difficult to get the test results out of the framework, as the results are not written to a file as with NUnit or MSTest.  Fortunately TeamCity allows you to integrate the test results of any framework by writing special ServiceMessage commands as part of your build output.  TeamCity will listen for these commands and interpret and display the information as part of your build results on the portal. 
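For a flavour of what these service messages look like, here is a small Python sketch of the format and its escaping rules ('|' is TeamCity's escape character; quotes, brackets and line breaks inside attribute values must be escaped):

```python
def tc_escape(value):
    # '|' is the escape character, so it must be replaced first
    for raw, escaped in (("|", "||"), ("'", "|'"), ("\n", "|n"),
                         ("\r", "|r"), ("[", "|["), ("]", "|]")):
        value = value.replace(raw, escaped)
    return value

def service_message(command, **attrs):
    # e.g. ##teamcity[testFailed name='...' message='...']
    body = " ".join("%s='%s'" % (k, tc_escape(v)) for k, v in attrs.items())
    return "##teamcity[%s %s]" % (command, body)

print(service_message("testFailed", name="LoginTest",
                      message="expected [200] got 500"))
# -> ##teamcity[testFailed name='LoginTest' message='expected |[200|] got 500']
```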

A Solution That Works

So we initially solved the problem by automating the test run and screen scraping the results from the Silverlight test run page.  From the screen-scraped results we created TeamCity service messages and wrote these messages as part of our build output.  I got the screen scraping idea from this blog post by Jonas Follesoe.  This worked, but it was a clumsy solution at best, as parsing the HTML and looking for specific divs to find out whether the tests failed was very error prone.  We also had no guarantee that the HTML format for the test report would stay the same between subsequent releases of the Silverlight Test Framework.  However, the real show stopper was that we ran into an issue with TeamCity when the tests failed.  The problem occurred because the test exception would contain invalid XML characters.  When TeamCity tried to report the failed tests, the XML-RPC communication between the build agent and the TeamCity server would fail due to the invalid characters in the XML stream.  This resulted in the build going into a loop, as the TeamCity server was not able to pick up any response from the build agent it was trying to run the build on.
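Incidentally, one way to guard against that show stopper is to strip the characters that XML 1.0 forbids before the test output ever reaches an XML stream. A small Python sketch of the idea:

```python
import re

# XML 1.0 only allows #x9, #xA, #xD and #x20-#xD7FF, #xE000-#xFFFD
# (plus the supplementary planes, which this simple sketch ignores);
# anything else breaks an XML parser.
_INVALID_XML = re.compile("[^\x09\x0a\x0d\x20-\ud7ff\ue000-\ufffd]")

def strip_invalid_xml(text):
    """Remove characters that would corrupt an XML stream, e.g. control
    characters embedded in a test exception message."""
    return _INVALID_XML.sub("", text)

print(strip_invalid_xml("assert failed\x00\x08 at line 42"))  # -> assert failed at line 42
```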

A Solution That Rocks

Instead of trying to panel-beat the existing solution, I decided to investigate alternatives, and started by using trusty Reflector to look at the source code of the Silverlight Test Framework.  I wanted to see whether there was a more elegant way in which to report on the test results.  (Btw, you don’t need to use Reflector – you can download the source code of the Silverlight Test Framework, as it is included as part of the Silverlight Toolkit.)  Sure enough, I discovered that the framework already included the necessary extensibility points for plugging in your own reporting mechanism.  The LogProvider base class provides the core API for creating your own logger that the Silverlight Test Framework will then call into for you to process the test results.  There is also a TestReportService that seems like the mechanism to use to write the log output by exporting it via a service call.

So the idea was to implement my own TeamCityServiceMessageLogProvider that would write the Test results into ServiceMessages that TeamCity understands.  The implementation turned out to be quite straightforward (download from here).  The Logger opens up an IsolatedStorageFileStream and just writes the test results received to the stream.  You plug the logger into the Silverlight Test Framework by adding it as an additional LogProvider in your App startup:

private void Application_Startup(object sender, StartupEventArgs e)
{
   UnitTestSettings settings = UnitTestSystem.CreateDefaultSettings();
   settings.LogProviders.Add(new TeamCityServiceMessageLogProvider());

   RootVisual = UnitTestSystem.CreateTestPage(settings);
}
So this took care of getting the results in the right format, but I still had to figure out how to get the results out of the Silverlight Test Framework into TeamCity.

Some further reflectoring showed that these extensibility areas also seemed to already exist within the Silverlight Test Framework, but I was unable to get them wired up and working correctly.  So I resorted to implementing my own solution by using a WebClient to upload the streamed results onto our web server for reporting to TeamCity. The first thing I had to do was to hook into the Publishing event of the Test framework to allow my custom logger to upload the results.  For this I had to implement the ITestSettingsLogProvider interface that provides an Initialize method that will be invoked by the Silverlight Test Framework.  I then hooked into the Publishing event as follows:

public void Initialize(TestHarnessSettings settings)
{
  Settings = settings;
  UnitTestHarness testHarness = Settings.TestHarness as UnitTestHarness;
  if (testHarness != null)
    testHarness.Publishing += ((sender, e) => PostTestResults());
  Store = IsolatedStorageFile.GetUserStoreForApplication();
  Stream = Store.CreateFile(LogFile);
  Writer = new StreamWriter(Stream);
}
In the PostTestResults method, I upload the results onto the server:
private void PostTestResults()
{
   WebClient client = new WebClient();
   client.OpenWriteCompleted += (sender, e) =>
   {
       Stream input = e.UserState as Stream;
       Stream output = e.Result;

       byte[] buffer = new byte[4096];
       int bytesRead = 0;
       input.Position = 0;
       while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
           output.Write(buffer, 0, bytesRead);

       // Closing the stream completes the upload request
       output.Close();
   };

   client.OpenWriteAsync(new Uri("http://localhost/OK52/UnitTest.aspx?Results=true", UriKind.Absolute), "POST", Stream);
}
Notice the use of the Results=true query string parameter.

On the server side, we have a UnitTest.aspx web page that we use as the start page for our Silverlight Tests.  A nice feature that I picked up from another excellent Jonas Follesoe post is that you can pass query string parameters onto the test page and use these as initialization parameters for your Silverlight application.  We use this to restrict the suite of tests being run by using the Tag feature of the Silverlight Test Framework -  i.e. http://localhost/OK52/UnitTest.aspx?Tag=Staff to only run the Staff tests.  We also use this to configure the user that is logging on to the system – i.e. http://localhost/OK52/UnitTest.aspx?Tag=Staff&User=Carel&Password=secret.  When reporting the test results, we post to the same UnitTest.aspx page but send through the Results=true query string parameter to indicate that we want to upload the test results onto the server.  The results are then written to a file on the server for further processing as illustrated below:

public void Page_Load(object sender, EventArgs e)
{
  if (string.IsNullOrEmpty(Request.QueryString["Results"]))
  {
    // Normal test run: pass the query string values on to the Silverlight control
    if (!string.IsNullOrEmpty(Request.QueryString["Tag"]))
      Tests.InitParameters = "Tag=" + Request.QueryString["Tag"] + ",";
    if (!string.IsNullOrEmpty(Request.QueryString["UserName"]))
      Tests.InitParameters += "UserName=" + Request.QueryString["UserName"] + ",";
    if (!string.IsNullOrEmpty(Request.QueryString["Password"]))
      Tests.InitParameters += "Password=" + Request.QueryString["Password"] + ",";
    if (!string.IsNullOrEmpty(Request.QueryString["DB"]))
      Tests.InitParameters += "DB=" + Request.QueryString["DB"] + ",";
  }
  else
  {
    // Results=true: save the uploaded test results to a log file
    StreamReader inStream = new StreamReader(Context.Request.InputStream);
    string filePath = Server.MapPath(@"~\Logs\SLTests.log");
    using (FileStream outstream = File.Open(filePath, FileMode.Create, FileAccess.Write))
    {
      // Read from the input stream in 4K chunks and save to the output stream
      const int bufferLen = 4096;
      char[] buffer = new char[bufferLen];
      int bytesRead = 0;
      while ((bytesRead = inStream.Read(buffer, 0, bufferLen)) > 0)
      {
        byte[] bytes = Encoding.UTF8.GetBytes(buffer, 0, bytesRead);
        outstream.Write(bytes, 0, bytes.Length);
      }
    }
  }
}
Once the file has been uploaded, it is a simple matter of echoing the test results back to the TeamCity server by writing them to the NUnit TestContext: 
[TestFixture]
public class RunOnKey5ClientIntegrationTests : BaseTest
{
  private const string OnKeyUri = "http://localhost:80/OK52/UnitTest.aspx";
  private const string LogPath = @"..\..\..\Pragma.OnKey5.Server\Logs\";
  private const string SilverlightTestsLog = "SLTests.log";

  [Test, Category("UI")]
  public void RunSilverlightTests_UsingTeamCityServiceMessageLogger_ToShowResultsOnTeamCityPortal()
  {
    // Clear the old results
    if (File.Exists(LogPath + SilverlightTestsLog))
      File.Delete(LogPath + SilverlightTestsLog);
    using (FileSystemWatcher watcher = new FileSystemWatcher(LogPath, SilverlightTestsLog))
    {
      // Navigate to the VDir hosting the tests
      ActiveBrowser.NavigateTo(new Uri(OnKeyUri));
      watcher.EnableRaisingEvents = true;
      WaitForChangedResult result = watcher.WaitForChanged(WatcherChangeTypes.Created);
      if (!result.TimedOut)
      {
        // Echo the TeamCity ServiceMessages so that the test results can be picked up by the portal
        string[] lines = File.ReadAllLines(LogPath + SilverlightTestsLog, Encoding.UTF8);
        foreach (string line in lines)
          Console.WriteLine(line);
      }
    }
  }
}
You’ll notice that the whole test run is managed as an NUnit test fixture that is flagged with “UI” as a Category.  This allows me to use nunit-console.exe to run the specific tests using the /include=UI command line parameter.  I use the FileSystemWatcher class to wait until the SLTests.log file has been published and then echo the results to TeamCity by writing them to the NUnit TestContext.


I hope you’ll find this information useful for publishing the results of your own Silverlight test runs during your automated builds.  Jeff Wilcox has mentioned that more of the build automation extensibility points of the Silverlight Test Framework will be made known in future releases.  Until then, this is one way of doing it.