<h2>Creating Software</h2>
<h3>How we do Git branching (2013-09-30)</h3>
<div dir="ltr" style="text-align: left;" trbidi="on">
At Headfitted we face the typical challenges of a development cycle: versioning our source code, supporting a release/bugfixing cycle, and, because we work with distributed development teams, finding a branching model that works well across physical locations and time zones. For this we have settled on <a href="http://git-scm.com/" target="_blank">Git</a> and <a href="https://github.com/" target="_blank">GitHub</a> as our preferred version control tools. Taking inspiration from how big open source projects handle challenges much like ours, and from the excellent <a href="http://nvie.com/posts/a-successful-git-branching-model/" target="_blank">A successful Git branching model</a> post, we have developed a branching model and review workflow that works well for us. Let me share some of the details.<br />
<br />
<h4 style="text-align: left;">
Main branches</h4>
<div style="text-align: left;">
Any project will start out with these branches:<br />
<br /></div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhUeHcegBcAJJtukBIIXV_DO0gq7-tWDutTuQPBhLBx0hmoLa7MQfhCvafhU58jkw5_1mdUVKzTaIP7ibFy6Yuo4_tkG8TrU_yoiseH3uuRJipY8EgC2Zzr4lJcUkN7v-fAcyneKqY1R_c/s1600-h/masterdevelop11.png"><img align="left" alt="masterdevelop" border="0" height="130" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIywfrn-wDVr2hwsrRD0fnLW7Z499Kq8ye7gfAmU88we4sdv5XzM90VYuZgYLSte8Tuqb3YjP1_4EDuAAacnkVkAUz96cL26u5yEn_c6UX6yxVsafQMnFPCU7ehjnSHQx47LGEDYSx-kQ/?imgmax=800" style="background-image: none; border-width: 0px; display: inline; float: left; margin: 5px 15px 5px 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="masterdevelop" width="86" /></a>These are our main branches. Current development goes into develop, and when we do a release, develop is merged into master. In our <a href="http://www.jetbrains.com/teamcity/" target="_blank">TeamCity</a> continuous integration server, there will be a Continuous Integration build that feeds from develop branch and immediately does a build when something is pushed to this branch in the central repository (GitHub). Similarly there is a build that feeds from the master branch and uses our automated deployment system to deploy to production.<br />
<br style="clear: both;" />
We do feature branching. Each feature under development gets its own branch, work is done there, and finally the feature branch is merged back into the develop branch using GitHub pull requests.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGzZl0GFysnM76B7PuHF_d_vxMMlMqfWsErP-Z9V_CtZEF1IsFcl1xihxgOWBnaLaB4GI32A_HW_DLD-wcTWy6TRXShhm5ZTiUV6pNG5G3tDr5Ahsc22cuFgrnhJHEZy-hR0a5M4c4ECY/s1600-h/featurebranch12.png"><img align="left" alt="featurebranch" border="0" height="174" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmxN2qgA4GVntbUSu3rhItOv4_qtTbH0-uMbJa6pgwWFV2U5vAh7WTnjL5MiNr3OTqNYs-8bgp7121ah5kGRZpM_knjhLpCV9-EHJoBOgDILCjl3llboOyK1mN_At4Rp_lKr4mwm7sLls/?imgmax=800" style="background-image: none; border-width: 0px; display: inline; float: left; margin: 5px 15px 5px 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="featurebranch" width="360" /></a><br />
A feature branch is usually pushed to the central repository (GitHub), which enables more than one developer to work together on a feature, and also enables ad hoc peer-to-peer code sharing. Because switching branches is easy in Git, a developer can easily stash her current work, switch to the feature branch of her colleague, assist on any issue, and later switch back.<br />
<br />
One of the challenges of feature branching is keeping in sync with the develop branch, because develop typically moves forward while any feature branch is based on the state of develop at the time the feature branch was created. We use our pull request-based review workflow to ensure that the developer in charge of a feature is also responsible for resolving any merge issues:<br />
<ul>
<li>The developer finishes the feature and makes a pull request. </li>
<li>The reviewer uses GitHub to do the review. </li>
<li>GitHub will tell whether the pull request can be merged automatically (i.e. merged back into the develop branch without conflicts). If it cannot, the reviewer asks the developer to do a “reverse merge”. </li>
<li>A reverse merge means: from the feature branch, merge the latest develop into it, resolve any merge conflicts, and finally push the reverse-merged feature branch.</li>
</ul>
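The reverse merge can be sketched with plain Git commands. This is a throwaway-repo demo: the committer identity, file names and commit messages are placeholders, and the feature-2316 branch name follows our issue-id naming convention.

```shell
# Throwaway repo; identity, files and messages are placeholders.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git symbolic-ref HEAD refs/heads/develop        # first commit lands on develop
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "init"

git checkout -q -b feature-2316                 # feature branch off develop
echo "feature work" > feature.txt && git add feature.txt
git -c user.name=dev -c user.email=dev@example.com commit -q -m "Refs #2316: feature work"

git checkout -q develop                         # meanwhile, develop moves on
echo "other work" > other.txt && git add other.txt
git -c user.name=dev -c user.email=dev@example.com commit -q -m "Another feature lands on develop"

git checkout -q feature-2316                    # the reverse merge: develop -> feature
git -c user.name=dev -c user.email=dev@example.com merge -q --no-edit develop
# resolve any conflicts here, then push the feature branch for the pull request
```

After the reverse merge, the pull request merges cleanly, and any conflict resolution was done by the developer who knows the feature best.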
<div style="text-align: left;">
<strong><br /></strong></div>
<h4 style="text-align: left;">
Release Management</h4>
Our release management process includes a release candidate branch (rc). When we are preparing the next release, we merge the develop branch into rc.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUE3qk1E-Am4ITKZWNSrO6kF9o56qbgUudpj4Rogkj8nbLiaz3wDdq0YfrI6XL6NtlxxbtoJzNmX6dbhBCX-SSdwzQEF8xWe3ChrlKQ5Xr_MZUlDN0mtxPOMCzj8FK8aiuh71CPSSMHKg/s1600-h/release5.png"><img align="left" alt="release" border="0" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsF5wE539q5K2VfzhEz38828wsPw5lcIsdY2HJ6uVjJzTzMu0_Py2qu71iwbzTDuY41MdcLtWrbXffem7YfKBxVI6suzeNJn9bgMXWMbP_uqk6khpZ6jVMbc36S1bVg_k92jmZPig53k4/?imgmax=800" style="background-image: none; border-bottom-width: 0px; border-left-width: 0px; border-right-width: 0px; border-top-width: 0px; display: inline; float: left; margin: 5px 14px 5px 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="release" width="409" /></a><br />
Now any stabilizing of the upcoming release is done on rc. We typically let our build server feed from the rc branch to a build that also makes automated deployment to an rc area, where the project stakeholders can preview, test and verify the upcoming release.
<br />
Branching to rc frees the develop branch, so work on the next version can start immediately, without depending on the release being finished.<br />
Once rc is considered stable, we merge to master, which by definition is a release (again, our build system has a build that feeds from master and deploys to production). We also merge back any changes done on rc to the develop branch.<br />
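The rc flow can be sketched the same way (throwaway repo, placeholder identity; the fast-forward `git branch master rc` stands in for the merge of rc into master):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git symbolic-ref HEAD refs/heads/develop
g() { git -c user.name=dev -c user.email=dev@example.com "$@"; }  # placeholder identity
g commit -q --allow-empty -m "sprint work"

git checkout -q -b rc                   # cut the release candidate from develop
echo "stabilized" > stabilize.txt && git add stabilize.txt
g commit -q -m "Stabilize rc"           # stabilization happens only on rc

git checkout -q develop
g commit -q --allow-empty -m "start next version"   # develop is free to move on

git branch master rc                    # rc is stable: master now holds the release
g merge -q --no-edit rc                 # flow the rc fixes back into develop
```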
<br />
<h4 style="text-align: left;">
Hotfixing</h4>
Whenever a bug is discovered in production (yes it happens even to us), we do hotfixing directly on the master branch.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUWAuRvUYLMaqZD5qfjs-1CyOrWEF29t2yr9RGf9xffgdl7Qgvstpi1N06eSoiVZ1zmm7qJjxFBK0A4X6b6CsAz-muFw29bYbdRe8DFIH0NjZYhKybqk5WwvHAEZsmWpt5KVhZ9BikcK0/s1600-h/hotfix4.png"><img align="left" alt="hotfix" border="0" height="157" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghsRI-r6pxoKt0y5OKM0ROwghBhrByjOleq0_X0zfpo41trq1OBsSa-yRWms1JTWvXoHgqDwthlW_hlRl-Ljt59BvpDLQQP7f61nvXWDXPSRfMDU__BBdzXGmna9Oqf394RhCpcm3DbL8/?imgmax=800" style="background-image: none; border-bottom-width: 0px; border-left-width: 0px; border-right-width: 0px; border-top-width: 0px; display: inline; float: left; margin: 5px 15px 5px 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="hotfix" width="197" /></a><br />
A hotfix is done just like a feature branch, except it branches directly off the master branch. Once done and merged back into master, we also merge it into develop to keep things in sync.<br />
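The hotfix flow can be sketched likewise (throwaway repo, placeholder identity; hotfix-3784 is a hypothetical issue id):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git symbolic-ref HEAD refs/heads/master          # start from the released master
g() { git -c user.name=dev -c user.email=dev@example.com "$@"; }  # placeholder identity
g commit -q --allow-empty -m "release"
git branch develop                               # develop exists alongside master

git checkout -q -b hotfix-3784 master            # hotfix branches directly off master
echo "fixed" > fix.txt && git add fix.txt
g commit -q -m "Refs #3784: fix production bug"

git checkout -q master
g merge -q --no-edit hotfix-3784                 # into master = release the fix
git checkout -q develop
g merge -q --no-edit hotfix-3784                 # and into develop to keep in sync
```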
<h5>
<strong><br clear="all" /></strong></h5>
<h4 style="text-align: left;">
Traceability</h4>
We couple our branching model with our issue tracking system, so feature branches and hotfixes are named after their issue tracker ids. We have branches with names like feature-2316, bug-4323 or hotfix-3784. Additionally, we always reference issue tracker ids in our commit messages, which are automatically picked up by the issue tracker (setting issues to “ready to test”, “done”, etc.), but that’s a story for another blog post…<br />
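As a small sketch of the naming convention (the "Refs #2316" message format is only an example; the exact syntax an issue tracker picks up varies):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git symbolic-ref HEAD refs/heads/develop
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "init"

git checkout -q -b feature-2316       # branch named after the issue tracker id
echo "work" > work.txt && git add work.txt
git -c user.name=dev -c user.email=dev@example.com commit -q -m "Refs #2316: implement the feature"
git rev-parse --abbrev-ref HEAD       # prints feature-2316
```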
<br />
<h4>
Conclusion</h4>
<span style="font-weight: normal;">Our branching model supports our software development cycle well, and gives us a consistent way of ensuring a frictionless workflow with good traceability and integration between our tools. If you have any comments, questions or experiences from your workflow to share, please don’t hesitate to use the comments.</span>
</div>
<h3>How we do versioning (2013-08-30)</h3>
<div dir="ltr" style="text-align: left;" trbidi="on">
Keeping track of version numbers as a part of the release management process seems to be one of these things that should be simple, but is done in so many different ways. Here is the approach we use at <a href="http://www.headfitted.dk/en/home.aspx" target="_blank">Headfitted</a> that works very well for us:<br />
<br />
First, let's take a look at <a href="http://semver.org/" rel="nofollow" target="_blank">semver</a>. This is an initiative to keep a common standard for version numbers, and it should be followed whenever possible. In short, a version number should look like this:<br />
<br />
MAJOR.MINOR.PATCH<br />
<br />
This is what we use.<br />
In both our node/Javascript projects and in our .NET projects, we keep the version number at one central place in the source code. The tooling and build scripts do the rest: Making sure this version number is stamped into all compiled binaries and static script files.<br />
<br />
<b>Javascript/node</b><br />
In node/Javascript we have a <span style="font-family: Courier New, Courier, monospace;">version.json</span> file simply like this:<br />
<br />
<pre class="brush: javascript; gutter: false; auto-links: false;">{ "version" : "1.2.0" }
</pre>
<br />
<b>.NET</b><br />
In .NET we need to do a few tricks to keep things simple. First, in all <span style="font-family: Courier New, Courier, monospace;">AssemblyInfo.cs</span> files for our Visual Studio solution, we strip out all version number related stuff, and instead we create one <span style="font-family: Courier New, Courier, monospace;">SolutionInfo.cs</span> file that looks like this:<br />
<br />
<pre class="brush: csharp;">#if DEBUG
[assembly: AssemblyConfiguration("Debug")]
#else
[assembly: AssemblyConfiguration("Release")]
#endif
[assembly: AssemblyVersion("1.2.0.0")]
</pre>
<br />
We don't provide an <span style="font-family: Courier New, Courier, monospace;">AssemblyFileVersion</span>, because by omitting this, it will take the same version as stated in <span style="font-family: Courier New, Courier, monospace;">AssemblyVersion</span>. Why make it more complicated?<br />
<br />
Finally we add a kind of symbolic link to <span style="font-family: Courier New, Courier, monospace;">SolutionInfo.cs</span> from each of the projects of our solution: Add -> Existing Item, and then click the arrow to be able to select <b>Add As Link</b>:<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKg8MfZAEAzmrkyIJUAIs6CT7uJX50CSjHLWXHtSpXuatVWX2NVZmMHUUcL6rg5WKySoKytaz4BvNjnTrGMVyxJjCd6WlGPV8M4SpedjEJmK7l1phy3oplRLNCFGJ4w3TBDVfh5-1FOmM/s1600/AsLink.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="133" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKg8MfZAEAzmrkyIJUAIs6CT7uJX50CSjHLWXHtSpXuatVWX2NVZmMHUUcL6rg5WKySoKytaz4BvNjnTrGMVyxJjCd6WlGPV8M4SpedjEJmK7l1phy3oplRLNCFGJ4w3TBDVfh5-1FOmM/s320/AsLink.png" width="320" /></a></div>
<br />
This way we only need to maintain the version number at one place for the entire Visual Studio Solution.<br />
<br />
Another obstacle is that in .NET the <span style="font-family: Courier New, Courier, monospace;">AssemblyVersion</span> is structured like this: MAJOR.MINOR.BUILD.REVISION. So to make it easy we let Microsoft have its will and let it name it "BUILD", when really it is Semver's "PATCH". Peace. We simply use only the first 3 parts, and always keep the REVISION at 0.<br />
<br />
<b>Incrementing</b><br />
So when should the different parts of the version number be incremented? How and by whom? Semver gives us the answer to the first question:<br />
<br />
MAJOR version when you make incompatible API changes,<br />
MINOR version when you add functionality in a backwards-compatible manner, and<br />
PATCH version when you make backwards-compatible bug fixes.<br />
<br />
How and by whom? This is where you need to make some decisions on the practical aspects. In our case we made these:<br />
<ul style="text-align: left;">
<li>MAJOR and MINOR are maintained in the source code.</li>
<li>PATCH is maintained by the build server.</li>
</ul>
This works because we run all projects through a build server. In the code, we typically bump the MINOR (or even the MAJOR) at the beginning of a sprint/development cycle. Our commit comments look like this:<br />
<br />
<pre class="brush: text; gutter: false; auto-links: false;">Bumped minor version. We are now working on version 1.2.
</pre>
<br />
Now each time our build server makes a build, it will extract the MAJOR.MINOR from the source code, and call a very simple homemade counter service. This service will take any key and maintain a counter for this key. So when we pass the project name and MAJOR.MINOR to it (e.g. "projectX-1.2"), we might get "1" back, and "2", "3", ... on each subsequent call. This is our PATCH, and now the build server has the complete version number to stamp into the source code before building.<br />
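The stamping step might look roughly like this in a build script. This is a sketch under assumptions: the sed pattern matches the version.json shown earlier, and the counter service URL and its plain-number response are entirely hypothetical.

```shell
# Hypothetical build-script sketch; the counter service URL is made up.
printf '{ "version" : "1.2.0" }\n' > version.json      # as kept in source control

MAJMIN=$(sed -n 's/.*"version"[^"]*"\([0-9]*\.[0-9]*\)\..*/\1/p' version.json)
KEY="projectX-$MAJMIN"                                 # project name + MAJOR.MINOR
# PATCH=$(curl -fs "http://counter.internal/next/$KEY")  # hypothetical service call
PATCH=7                                                # pretend the service returned 7

VERSION="$MAJMIN.$PATCH"
echo "$VERSION"                                        # prints 1.2.7; stamp this into the build
```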
<br />
This works beautifully because if we change MAJOR or MINOR in the source code, the key will be new, and PATCH numbering will automatically start over. It also makes sure that each build gets a unique version number, even if we have multiple build configurations (e.g. test, staging, release). In our build server (TeamCity), we echo the version number back to the server, so we get a nice overview like this:<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGwMlDyt_Z32eyiEL41DoYWw4WhhYLgfC81OAZjYiPbKzU5944WSdK9jSbchw-g3nbAfiN2h8IsvxBb0tne46cbeh6-56nOtWsS0cAI2nNPejh00fdwDhD0vfkCVsiHMm4qoQ6Pzldg0s/s1600/Builds.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGwMlDyt_Z32eyiEL41DoYWw4WhhYLgfC81OAZjYiPbKzU5944WSdK9jSbchw-g3nbAfiN2h8IsvxBb0tne46cbeh6-56nOtWsS0cAI2nNPejh00fdwDhD0vfkCVsiHMm4qoQ6Pzldg0s/s1600/Builds.png" /></a></div>
<br />
We don't check the final version number back into version control. In our source code, version numbers will always have PATCH set to 0. The build server replaces this with the correct PATCH number, builds, and then discards the changed source files. When building locally on a developer machine, PATCH will always be 0. This way we never have version number merge conflicts, and we avoid polluting the commit history with patch increments.<br />
<br />
Our approach works well with our branching model, as the MAJOR and MINOR always go with the source code (and branch). In the above screenshot, you can see how our Release Candidate and master branches are on version 2.16, while the develop branch has progressed and is on 2.17.<br />
<br />
<b>The result</b><br />
We have unique version numbers, we follow common Semver conventions, the version number is always present in our releases, and it works well with our branching model.<br />
Whenever possible, we discreetly display the version number on the screen, typically in the footer. This is a great help for our issue tracking, as we can track exactly at what version a given bug started appearing, and say from which version the bug was fixed. We also use the version number for update detection, especially in single-page Javascript apps, so we can notify the user if a new version was deployed and the page should be refreshed. By appending the version number to Javascript file names, we can also avoid browser-caching-old-version issues.<br />
<br /></div>
<h3>Structuring tests using contexts and scenarios (2010-05-12)</h3>
<div dir="ltr" style="text-align: left;" trbidi="on">
One of the greatest challenges of writing good tests is to keep the tests short and readable. This can be achieved by composition - here is how I structure AAA-style tests:<br />
<br />
<pre class="brush: csharp;">[Test]
public void ProductValidator_IllegalProductCode_Fails()
{
    // Arrange
    var context = new TestContextBuilder<ProductValidatorContext>()
        .WithScenario(AnonymousProduct)
        .WithScenario(ProductHasIllegalProductCode)
        .BuildContext();
    ProductValidator sut = context.CreateSubjectUnderTest();

    // Act
    bool res = sut.Validate();

    // Assert
    Assert.IsFalse(res);
}
</pre>
<br />
The goal is to provide more readable tests and to encapsulate (and reuse) test setup code. As a bonus, it becomes easy to move between these context/scenario-style tests and a framework like <a href="http://storyq.codeplex.com/" target="_blank">StoryQ</a>.<br />
<br />
Using AAA syntax, the Arrange part of each test uses a specific test context, and additionally applies one or more scenarios to this context. In the Act part the Subject Under Test is exercised in the context, and in the Assert part the assertions are made, typically on the Subject Under Test or on the context.<br />
<br />
The test context:<br />
<br />
The test context is the context needed to run a test on the subject under test. In other words, it is the fixed part that stays the same for each test in your test class. You want to keep the context at a minimal level, meaning that the context establishes just enough stuff to be able to create the subject under test. Also the context exposes the things you might want to modify from test to test or make assertions against. We often place the test context class as a private, nested class inside the test class itself.<br />
<br />
<pre class="brush: csharp; auto-links: false;">public class ProductValidatorContext : TestContextBase
{
    public Product TheProduct { get; set; }

    public ProductValidator CreateSubjectUnderTest()
    {
        return new ProductValidator(TheProduct);
    }
}
</pre>
<br />
The scenarios:<br />
<br />
Each specific test will use a context builder to build the context. The context builder initializes the context and allows you to specify one or more scenarios to apply to the context. Scenarios can be expressed using simple lambda expressions, methods or classes:<br />
<br />
Using lambda:<br />
<pre class="brush: csharp; gutter: false; auto-links: false;">var context = new TestContextBuilder<ProductValidatorContext>()
    .WithScenario(AnonymousProduct)
    .WithScenario(x => x.TheProduct.Code = "Illegal product code")
    .BuildContext();
</pre>
<br />
Using a method:<br />
<pre class="brush: csharp; gutter: false; auto-links: false;">var context = new TestContextBuilder<ProductValidatorContext>()
    .WithScenario(AnonymousProduct)
    .WithScenario(ProductHasIllegalProductCode)
    .BuildContext();
...

private void ProductHasIllegalProductCode(ProductValidatorContext context)
{
    context.TheProduct.Code = "Some illegal product code";
}
</pre>
<br />
Using a class:<br />
<pre class="brush: csharp; gutter: false; auto-links: false;">var context = new TestContextBuilder<ProductValidatorContext>()
    .WithScenario(AnonymousProduct)
    .WithScenario<ProductHasIllegalProductCode>()
    .BuildContext();
...

private class ProductHasIllegalProductCode : TestScenarioBase<ProductValidatorContext>
{
    public override void Apply(ProductValidatorContext context)
    {
        context.TheProduct.Code = "Some illegal product code";
    }
}
</pre>
<br />
Scenario classes can expose properties that are specific to the scenario in case you need to assert against these:<br />
<br />
<pre class="brush: csharp; gutter: false; auto-links: false;">Assert.That(context.Scenario<SomeProductInEditing>().TheProduct.Name, Is.Not.Null);
</pre>
<br />
So typically the context establishes mocks or test instances of all the dependencies of the subject under test. The scenario(s) set the behavior of these mocks or build the test instances. Here I typically use frameworks like <a href="http://code.google.com/p/moq/" target="_blank">Moq</a> and <a href="http://autofixture.codeplex.com/" target="_blank">Autofixture</a>.<br />
<br />
I have put the context builder as well as base classes for the contexts and scenarios on <a href="http://code.msdn.microsoft.com/cstest" target="_blank">Code Gallery</a>.<br />
<br /></div>
<h3>Best practices for Assert statements in unit tests (2010-03-19)</h3>
Some may find this trivial, but I find myself going over this in almost every code review: you’ve got to pay attention to your unit test assert statements:<br />
<br />
<strong>Don’t hide the comparison in your assert</strong><br />
Consider this assert statement:<br />
<br />
<pre class="brush: csharp;">
Assert.IsTrue(myObj.Name == "Expected name");
</pre>
<br />
Why is this bad? Take a look at the test runner output when the test fails:<br />
<br />
<pre class="brush: plain;">
NUnit.Framework.AssertionException: Expected: True
But was: False
</pre>
<br />
The comparison is hidden, so the outcome of the test is always a non-informative true/false. Change the statement to<br />
<br />
<pre class="brush: csharp;">
Assert.AreEqual("Expected name", myObj.Name);
</pre>
<br />
This will give the following output when the test fails:<br />
<br />
<pre class="brush: plain;">
NUnit.Framework.AssertionException: Expected string length 13 but was 12. Strings differ at index 0.
Expected: "Expected name"
But was: "Another name"
-----------^
</pre>
<br />
Much more informative, right?<br />
<br />
I personally like the alternate fluent assert constructs you can do with NUnit, because it makes you write the assert statement right almost without thinking about it:<br />
<br />
<pre class="brush: csharp;">
Assert.That(myObj.Name, Is.EqualTo("Expected name"));
</pre>
<br />
<strong>The assert message should add information or be omitted</strong><br />
<br />
How about:<br />
<br />
<pre class="brush: csharp;">
Assert.AreEqual("Peter", myObj.Name, "Expected name to be Peter.");
</pre>
<br />
The assert message is a waste of typing efforts, because the output of the test runner will tell you anyway. Use the assertion messages only to provide extra information, or omit them:<br />
<br />
<pre class="brush: csharp;">
myObj.DoSomething();
Assert.That(myObj.HasError, Is.True, "Expected HasError to be set because DoSomething produces an error.");
</pre>
<br />
<strong>Multiple assert statements</strong><br />
<br />
When completing a complex arrange part of a test, it is tempting to make a lot of assert statements (after all this trouble, it’s time to assert the heck out of it, right?) Well, not really…<br />
<br />
The assert statement is <em>the test</em>. When you look at a test that has multiple assert statements, the test will most likely violate the <a href="http://en.wikipedia.org/wiki/Single_responsibility_principle" target="_blank">SRP principle</a> (yes, it also applies to tests), meaning that the test is testing too many things at once. Why is this bad? Because the test will most likely be big and difficult to understand. What will you name the test if it tests 5 different things? Have you seen each of the assert statements fail? If you haven’t, how do you know that the test is working?<br />
<br />
Resolution is simply to split it up. You can extract any complex arrange/setup part to a private method and reuse this from your split up tests.<br />
<br />
The only case where I see myself diverting from this practice is when doing <em>progressive</em> assertions against the same object. Again it is about making the test informative. Here my expected outcome is that a person with the expected name is added to Persons:<br />
<br />
<pre class="brush: csharp;">
Assert.That(myObj.Persons[0].Name, Is.EqualTo("Expected name"));
</pre>
<br />
This can potentially throw a NullReferenceException or an IndexOutOfRangeException, which doesn’t give a clear clue to what the problem is. Here I would use:<br />
<br />
<pre class="brush: csharp;">
Assert.That(myObj.Persons, Is.Not.Null);
Assert.That(myObj.Persons.Count, Is.EqualTo(1));
Assert.That(myObj.Persons[0].Name, Is.EqualTo("Expected name"));
</pre>
<br />
This will be more informative in case the Persons list is not instantiated or populated.<br />
<br />
<strong>Conclusion</strong><br />
<br />
The general issue here is that when you are constructing your test, you know exactly what is going on at that precise moment. It is very easy to be blinded by focusing only on making this test go green. But try to look ahead – in 3 months, this test may fail. Imagine your trusted coworker having to find out what went wrong – fast. That’s why your tests should be easy to understand and informative when failing. And that, my friend, is achieved by paying attention to the assert statement.<br />
<h3>Code Coverage analysis for .NET projects in TeamCity 5 (2010-03-11)</h3>
I personally don't use test code coverage as a goal in itself, but it is a great tool to find the blind spots - finding classes and methods that don't have coverage, but really should.<br />
Speaking in numbers, I don't set a certain code coverage percentage as a strict required minimum; instead I look for classes or namespaces that fall below a certain threshold, like 60%.<br />
<br />
My build server of choice now makes it easier to keep track of test code coverage. TeamCity version 5.0+ has a nice little new part (Build Configuration –> Runner):<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhg7MHwrHqM8_KHpMLDeweYh3q_sSlSvQeM0mXgZaUBJgZerm72l3J4HEMi2ggEka5u3ygG7Cdt1X9SGecHSoOcQQSDE1j7r7ARCl3MEDGehPvA7Gs5ctD7g72R6VdEGCy_UZ6Y5v_KTes/s1600/tcncover1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhg7MHwrHqM8_KHpMLDeweYh3q_sSlSvQeM0mXgZaUBJgZerm72l3J4HEMi2ggEka5u3ygG7Cdt1X9SGecHSoOcQQSDE1j7r7ARCl3MEDGehPvA7Gs5ctD7g72R6VdEGCy_UZ6Y5v_KTes/s1600/tcncover1.png" /></a></div>
<br />
Out of the box TeamCity supports <a href="http://www.ncover.com/" target="_blank">NCover</a> (commercial and community versions) and <a href="http://sourceforge.net/projects/partcover/" target="_blank">PartCover</a>. Let’s look at NCover – there is a free (however old) community edition. NCover provides more advanced commercial versions if you want to do advanced coverage analysis and Visual Studio integration, but let’s start small with the community edition.<br />
<br />
On the build server:<br />
Download and install <a href="http://www.ncover.com/download/community" target="_blank">NCover Community 1.5.8</a> (you need to register, which is free).<br />
Download and install <a href="http://www.kiwidude.com/dotnet/DownloadPage.html" target="_blank">NCover Explorer 1.4.0.7</a> (this is just a zip; unzip it to an “Explorer” subfolder under the NCover install location).<br />
Now you just need to set it up from the TeamCity Build Configuration –> Runner page:<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCxl849TBCHWL0XTM0mP9oodoU5NzkPoMEZ5Wz94uC8wyT32x_Zp4SrlmmoRJIzxGyMLQWMZ-k5hEQs6K0z-I5bXVkJl9a7nLPRtXmRcfhkKH0nk_iF3LmjIcUYGaQ42-swJBES6Msjdk/s1600/tcncover2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCxl849TBCHWL0XTM0mP9oodoU5NzkPoMEZ5Wz94uC8wyT32x_Zp4SrlmmoRJIzxGyMLQWMZ-k5hEQs6K0z-I5bXVkJl9a7nLPRtXmRcfhkKH0nk_iF3LmjIcUYGaQ42-swJBES6Msjdk/s1600/tcncover2.png" /></a></div>
<br />
A few things to note:<br />
Don’t count on the autodetection of the NCover install files, especially if you are on a 64-bit server. Put those paths in manually.<br />
Enter the assemblies to analyze – without extension, and one on each line.<br />
<br />
Now you are ready to go. Run a build and check out the code coverage report:<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTzngwu88NgxyRxCP_ZKA_GS3jk0oI7pJS9eitmPdgw6fOj8lLW1FLz-4Vgpg2LUZ4HdErocblWNMeYLyza8pNzIAS7WZACAhCQtXTl3o5X4nXiKUVPmkJNUs7L1vIoBHAiur4KTIKSeY/s1600/tcncover3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTzngwu88NgxyRxCP_ZKA_GS3jk0oI7pJS9eitmPdgw6fOj8lLW1FLz-4Vgpg2LUZ4HdErocblWNMeYLyza8pNzIAS7WZACAhCQtXTl3o5X4nXiKUVPmkJNUs7L1vIoBHAiur4KTIKSeY/s1600/tcncover3.png" /></a></div>
<br />
You can choose between different reports to view assemblies, namespaces, methods and classes to find potential blind spots. I find this a valuable extension to your build server, one that is hard to be without once you have started using it.<br />
<br />
<h3>Folder structure for a .NET application (2010-02-23)</h3>
I usually stick to the same base folder structure for my .NET applications. This one works well for both my open source projects and enterprise projects at work:<br />
<br />
<style>
table.padded-table td { padding:3px; border: solid 1px grey; }table.padded-table { border-collapse: collapse; border: solid 1px grey; }
</style>
<table border="0" class="padded-table" style="width: 708px;"><tbody>
<tr> <td valign="top" width="202">
<ul><li>Branch name (usually ‘trunk’) </li></ul>
</td> <td valign="top" width="504">This root folder contains the solution file.</td> </tr>
<tr> <td valign="top" width="203"><ul>
<li>Application </li>
</ul>
</td> <td valign="top" width="503">All the projects of the application go here. Each project in its own subfolder.
</td> </tr>
<tr> <td valign="top" width="204"><ul>
<li>Build </li>
</ul>
</td> <td valign="top" width="502">I keep all build related stuff here, like:<br/>
<ul>
<li>The signing key for the application </li>
<li>A C# project holding the main MSBuild script file and any custom MSBuild tasks <br /> </li>
</ul>
</td> </tr>
<tr> <td valign="top" width="205"><ul>
<li>Common </li>
</ul>
</td> <td valign="top" width="501">Usually not much here, except always a SolutionInfo.cs file. <br />
<br /></td> </tr>
<tr> <td valign="top" width="206"><ul>
<li>Documentation </li>
</ul>
</td> <td valign="top" width="501">All developer-related documents of the project are kept here. This way a developer can easily get the latest version of a document without leaving Visual Studio. <br />
If the project contains auto-generated documentation (for instance using <a href="http://www.codeplex.com/Sandcastle" target="_blank">Sandcastle</a>), these generator-projects are kept here as well. <br />
<br /></td> </tr>
<tr> <td valign="top" width="207"><ul>
<li>Installers </li>
</ul>
</td> <td valign="top" width="500">All installer projects are kept here (Setup projects, <a href="http://wix.sourceforge.net/" target="_blank">WIX</a> projects). Each project in its own subfolder. <br />
<br /></td> </tr>
<tr> <td valign="top" width="207"><ul>
<li>Lib </li>
</ul>
</td> <td valign="top" width="500">This folder contains all the external assemblies used by the project. <br />
All Visual Studio projects reference the needed DLLs directly from this folder. <br />
<br /></td> </tr>
<tr> <td valign="top" width="207"><ul>
<li>Tests </li>
</ul>
</td> <td valign="top" width="500">Contains all test projects, usually divided into unit tests, integration tests and test helpers. Each project in its own subfolder. <br />
<br /></td> </tr>
</tbody></table>
<br />
Note that the above is the physical folder structure. I always make sure that version control has the exact same folder structure. Never try to build this folder structure from within Visual Studio. Instead, build it by hand and add it to source control by hand.<br />
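As an illustration, building the structure by hand comes down to a handful of commands. A Unix-style shell is shown here for brevity; on Windows the equivalent md commands apply, and "trunk" is just a placeholder branch name:

```shell
# Create the base solution layout by hand (folder names from the table above).
# "trunk" is a placeholder for whatever the branch folder is called.
mkdir -p trunk/Application \
         trunk/Build \
         trunk/Common \
         trunk/Documentation \
         trunk/Installers \
         trunk/Lib \
         trunk/Tests
```

After this, add the empty structure to source control before creating any Visual Studio projects inside it.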
<br />
In the main Visual Studio solution file, however, I alter the structure slightly by putting the contents of the Application folder directly in the solution root, and mimic the rest of the folders using solution folders:<br />
<ul>
<li>Solution root <ul>
<li>All projects of the Application folder </li>
<li>Build </li>
<li>Common </li>
<li>Documentation </li>
<li>Installers </li>
<li>Lib </li>
<li>Tests </li>
</ul>
</li>
</ul>
<br />
Now, with everything being nicely structured, it’s time to do some coding.<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-3501966555139463712010-02-12T23:16:00.000+01:002011-10-25T10:44:53.624+02:00Creating a .NET project’s development environment<p>This week a co-worker and I set up the environment for a new development project. It's starting to look like a pattern. Here is our list:</p><p><strong>The development server</strong></p> <p>We use virtual servers running on some huge hardware farm somewhere. This is nice. We order a new Windows Server instance from our IT services, and within 24 hours it is up and running.</p> <ul> <li>IIS - used for staging our builds (if web) </li> <li>Visual Studio Database Edition <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=bb3ad767-5f69-4db9-b1c9-8f55759846ed&displaylang=en#filelist" target="_blank">GDR2</a> </li> <li>SQL Server with: <ul> <li>CI database, used as the database for CI build integration tests </li> <li>Staging Database (typically backing a nightly staging build, so the PM can track the progress) </li> <li>Pre-Production Database – should always reflect the schema of the current in-production build. We use this for schema comparisons. </li> </ul> </li> <li>Install <a href="http://www.visualsvn.com/" target="_blank">Visual SVN</a> (server)</li> <li>Install <a href="http://www.jetbrains.com/teamcity/" target="_blank">Team City 5.0</a>  with: <ul> <li>CI Build </li> <li>Staging Build (usually nightly) </li> <li>Release Build (for the releases) </li> </ul> </li> </ul> <p>All the installations are basically "next, next, finish" installations, using default settings.</p> <p>This setup means that every project has its own Subversion server and build server running on the dev server. 
This works well for the small to medium-sized projects we have, and we have had no performance issues with our typical 1-10 team member projects.</p> <p><strong>The development environment</strong></p> <p>To keep the development environment consistent for all team members we prepare a complete development environment in a virtual machine, ready for distribution. The install list we're currently using is:</p> <ul> <li><a href="http://msdn.microsoft.com/en-us/vstudio/default.aspx" target="_blank">Visual Studio 2008</a> (our company issue license is the Team Developer edition) </li> <li>Visual Studio Database Edition GDR2 </li> <li><a href="http://www.visualsvn.com/" target="_blank">Visual SVN</a> + <a href="http://tortoisesvn.tigris.org/" target="_blank">Tortoise SVN</a> </li> <li><a href="http://www.jetbrains.com/resharper/" target="_blank">ReSharper</a> </li> <li><a href="http://code.msdn.microsoft.com/sourceanalysis" target="_blank">StyleCop</a> </li> <li><a href="http://www.codeplex.com/StyleCopForReSharper" target="_blank">StyleCop for ReSharper</a> </li> <li><a href="http://submain.com/products/ghostdoc.aspx" target="_blank">GhostDoc</a> </li> <li><a href="http://www.nunit.org/index.php" target="_blank">NUnit</a> </li> <li><a href="http://www.microsoft.com/sqlserver/2008/en/us/" target="_blank">SQL Server Developer Edition</a> </li> <li><a href="http://www.jetbrains.com/teamcity/" target="_blank">Team City</a> Visual Studio add-in </li> <li>Team City tray notifier </li> <li><a href="http://wix.sourceforge.net/" target="_blank">WIX</a> (Windows Installer Xml toolset) </li> </ul> <p>After setting up the development environment, we run a <a href="http://support.microsoft.com/kb/302577" target="_blank">sysprep</a> and close it down, compress and distribute. The way sysprep works is that when the developer starts up this virtual machine for the first time, she is presented with a setup wizard resembling the last install steps in a clean Windows install. 
She can then give the virtual machine a name, set the administrator password, etc. The virtual machine will now be unique.</p> <p>I intentionally left out the version numbers for Windows Server, IIS and SQL Server. We used 2003/6.0/2005 for the latest projects, and are hoping that the next customer's hosting provider gives the green light for the 2008 versions.</p> <p>Now we can start coding…</p><br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-46360811332288745092010-01-25T15:23:00.000+01:002011-10-25T12:13:55.029+02:00Speeding up your Build & Run (F5)Have you ever been in the situation where you need to run your application, check something, edit some code, build, run again, repeatedly? And each time you press F5 to run, it's just taking too long to build.<br />
There are many ways to improve the build time (including buying cool and speedy hardware like an SSD drive), but this one is free:<br />
In Visual Studio, select Tools –> Options –> Projects and Solutions –> Build and Run<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3wC5vHSy4wzCbW5pxo6EHmuaKYeOma2u8f5K6Ugb_NyCBBYJX926GlRvvUWIGgovC5u2CDsR-pbjlH0HU1cVVBGNWi0U93xbmXZUg7ostcLFHnSL_eGOBfCW_mYypDT36N-mJgjEUVcE/s1600/speedupbuild.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3wC5vHSy4wzCbW5pxo6EHmuaKYeOma2u8f5K6Ugb_NyCBBYJX926GlRvvUWIGgovC5u2CDsR-pbjlH0HU1cVVBGNWi0U93xbmXZUg7ostcLFHnSL_eGOBfCW_mYypDT36N-mJgjEUVcE/s1600/speedupbuild.png" /></a></div>
<br />
Check the “Only build startup project and dependencies on Run” option.<br />
I can’t believe that this is not a default setting.<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-47892376125834953272009-08-21T14:07:00.000+02:002011-10-25T12:24:50.272+02:00New release of FormStateKeeperI just made a new release of the <a href="http://fsk.codeplex.com/" target="_blank">FormStateKeeper</a> - earlier known as FormStateSaver, but hey, a naming change is also a way of bringing in something new ;-). The most important news for this version is support for ASP.NET MVC. As the source code has grown from a small hack to a nice little project, I also moved it from MSDN Code Gallery to <a href="http://fsk.codeplex.com/" target="_blank">Codeplex</a>.<br />
<br />
For those who don’t know FormStateKeeper: It is a small HttpModule that will assist in the scenario where the user loses all the contents of the web page form fields when redirected to the login page. This will typically happen with forms authentication if an authentication timeout happens before the user submits the form. FormStateKeeper prevents this by intercepting the http post the user makes just before being redirected to the login page. It then stores the posted field values, and recreates them just before the user is redirected back to the original page.<br />
<br />
FormStateKeeper has a small footprint of one .dll file and one line in the web.config file.<br />
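To give an idea of what that one line looks like, an HttpModule is registered in web.config roughly as follows. Note that the module and type names below are illustrative guesses, not copied from the FormStateKeeper documentation:

```xml
<configuration>
  <system.web>
    <httpModules>
      <!-- Hypothetical names; check the FormStateKeeper docs for the actual type. -->
      <add name="FormStateKeeper"
           type="FormStateKeeper.FormStateKeeperModule, FormStateKeeper" />
    </httpModules>
  </system.web>
</configuration>
```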
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-50435547243161716902009-08-06T13:37:00.000+02:002011-10-25T12:25:17.722+02:00Introducing FirstBricksI like application blocks. In almost every project you need basic, application-block stuff like a logger, a cache, some structured exception handling and a dependency container.<br />
And how nice, all of these things are out there ready to use – <a href="http://msdn.microsoft.com/en-us/library/dd203099.aspx">Microsoft Enterprise Library</a>, <a href="http://www.castleproject.org/">Castle Windsor</a>, <a href="http://logging.apache.org/log4net/">log4net</a>, <a href="http://www.springframework.net/">Spring.NET</a>, …<br />
Except that I see too many examples (at work and other places) of developers not utilizing these goodies. And I hear some typical reasons:<br />
<br />
<ul>
<li>Don’t have time to look through the offerings and choose the right application block / application block framework </li>
<li>Don’t have time to master the complex XML configuration needed to start using an application block </li>
<li>Disagreements over choice of quite similar alternatives (especially with loggers and IoC containers) </li>
<li>NIH! (Not Invented Here, so we roll our own)</li>
</ul>
<br />
I’m trying to meet these challenges by creating a framework called <a href="http://firstbricks.codeplex.com/">FirstBricks</a>. FirstBricks is a thin wrapper-framework around existing application blocks. It has 4 core goals:<br />
<ol>
<li>Ease of use. You should be able to start using an application block with 2 lines of code and no config file mess. In other words it hides/encapsulates complex configuration until you really need it (if ever). </li>
<li>Abstractions. You don’t work against a specific application block implementation, you work against an abstraction. So for example, you log using an ILog, but you don’t need to care whether it is an EntLib logger, log4net or a test logger. And you can change the implementation any time using the dependency container. </li>
<li>Extensibility. You should not need to modify the source of this framework. But you can extend it with your own application block implementations, configurators and even new abstractions. For example, this first version of FirstBricks has its abstractions flavoured by the MS Enterprise Library as I know this best. If you don’t like that, you can create a new abstraction and still use the structure and consistency of the framework. </li>
<li>Consistency. Every application block wrapping is done the same way, configuration is accessed the same way and so on. </li>
</ol>
Currently FirstBricks provides dependency injection, logging, caching and exception handling, and ships with implementations built on top of the Microsoft Enterprise Library 4.1. We use it at my work in a few projects, and now it has gone open source to hopefully get some feedback and momentum.<br />
Here is an example of FirstBricks in use. In this case we want to use just logging. To set things up:<br />
<br />
<pre class="brush:csharp;">// Tell FirstBricks to use the Enterprise Library implementations of its application blocks.
Bricks.Initialize&lt;EntLibMappings&gt;();
// Set up logging to a file
Log.Configurator.ConfigureSimpleFileLog("AppLog.txt");
</pre>
<br />
And to log, simply use a line like this anywhere in your application:<br />
<br />
<div>
<pre class="brush: csharp;">Log.Write("Something to log");
</pre>
</div>
<br />
<a href="http://firstbricks.codeplex.com/">Take a look, try it out</a> and don’t hesitate to give me feedback.<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-79579166108287111952008-12-02T11:36:00.000+01:002011-10-25T13:10:54.176+02:00Social Networks, IMs and tying things together<br />I’m using a variety of social networks and instant messengers. But it took me some time before I found a way to make it work in practice.<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdcn7cNtKu31ncOeDOMCpSByR7jXEPa5HJjZk1P9Or0oEs16yWcQcpO__zLA-bEQrGJG1syOZM_NWwGQQ7RInrSdNT1GmqCp1xynR23aZA3PIQ7RyHJ7FVanUg1oklCUXidTun_FWLqkI/s1600/digsby.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdcn7cNtKu31ncOeDOMCpSByR7jXEPa5HJjZk1P9Or0oEs16yWcQcpO__zLA-bEQrGJG1syOZM_NWwGQQ7RInrSdNT1GmqCp1xynR23aZA3PIQ7RyHJ7FVanUg1oklCUXidTun_FWLqkI/s1600/digsby.png" /></a><br />Take <a href="http://www.twitter.com/">Twitter</a> as an example. I don’t think many people use twitter from the Twitter web page. They use one of the many available twitter clients. But now things get complicated – some social networks work best with a client, others work best from their web page (but it can be difficult to remember to check the web page). So you can easily end up with a line of clients installed that take up memory, and a few social network sites that you forget to check (or maybe these sites are spamming your inbox with notifications instead).<br />
<br />Of course it’s all about the tools used. In my case things started to fit together when I discovered <a href="http://www.digsby.com/">Digsby</a>. In short Digsby is the one IM and social network client to rule them all. In my case I was able to handle all this in one client application:<br />
<ul>
<li>MSN Messenger</li>
<li>Google Talk</li>
<li>Gmail notifier</li>
<li>LinkedIn feeds</li>
<li>Facebook feeds</li>
<li>Twitter feeds</li>
</ul>
And I am just using some of the many mail/IM/social networks supported. If you don’t know Digsby, <a href="http://www.digsby.com/">go check it out now</a>.<br />
<br />
But the problem is only half solved. Now I have a single point to receive feeds and notifications. I can also use Digsby to post a twitter message, set my Facebook status or set my LinkedIn status. But what if I want to post to multiple services at once? We need a multiplier, so it’s time to introduce <a href="http://ping.fm/">ping.fm</a>.<br />
<br />
Ping.fm is a service that allows you to send status updates, micro blogging and ordinary blog posts to ping.fm, and it will automatically distribute these to a group of your social networks or other services.<br />
<br />
You can have different distribution groups (ping.fm calls them triggers). You set up all of this at the ping.fm web site. But now for the important part: I don’t want to pop open a new browser each time I need to do a micro blog or status update. So, use ping.fm’s IM service. It works this way: You set up ping.fm as another IM buddy of yours. Each time you need to post/update something, you just write an IM message to ping.fm:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9OZXPJpSflcgFEdfle83ypI65_d5IfPVeqlPNYkdL1s_vOe_J_y3BkiuqmeTUKSWuk6P3gbVzwUUGy0yhyphenhyphen12HbWe6I4QubEGtrPvAhcIvNFacohIW0c7nxhyphenhyphenR3Guv4MEbbBWaNJUFO38/s1600/pingfm.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9OZXPJpSflcgFEdfle83ypI65_d5IfPVeqlPNYkdL1s_vOe_J_y3BkiuqmeTUKSWuk6P3gbVzwUUGy0yhyphenhyphen12HbWe6I4QubEGtrPvAhcIvNFacohIW0c7nxhyphenhyphenR3Guv4MEbbBWaNJUFO38/s1600/pingfm.png" /></a></div>
In this example I first set my status using a custom trigger (which sets status on LinkedIn, Plaxo and Facebook). Then I use another trigger to post a micro blog entry to Twitter, Yammer and my private log.<br />
<br />
This way I can still control everything from Digsby. Creative? Definitely. But it works.<br />
<br />
Now recently <a href="http://www.yammer.com/">Yammer</a> emerged. Ping.fm supports Yammer (so I can distribute posts to Yammer as well), but Digsby doesn’t support Yammer (yet?), and I would hate to install another client and break my principles just to receive Yammer feeds. We need a workaround – this time a solution can be found in Yammer’s IM feature: You can set up Yammer to post all tweets to your IM account (supported are AIM, Google Talk and Jabber, hopefully more to come), and you can reply and make new posts by writing an IM message to Yammer.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXKm9XWX-KOsEH9INxNGxY_trLBTu8aWukkEWknNzVVu9eTJ_Gh0wU2SyfK6AiD_V0ASO8vUuJhvo67gKWsWQhxxfYp3gng3jES59ZSHw1nw7BjS_dRM5ONAJaIam7hmpf9vWHr0YSVHI/s1600/yammer.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXKm9XWX-KOsEH9INxNGxY_trLBTu8aWukkEWknNzVVu9eTJ_Gh0wU2SyfK6AiD_V0ASO8vUuJhvo67gKWsWQhxxfYp3gng3jES59ZSHw1nw7BjS_dRM5ONAJaIam7hmpf9vWHr0YSVHI/s1600/yammer.png" /></a></div>
So, it all works for me using only one client. I had to be a little creative in tying everything together – hopefully this will be easier the day Digsby implements Yammer support and merges with ping.fm.<br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-57845683502684719162008-12-01T09:55:00.000+01:002011-10-25T13:11:58.052+02:00TechEd EMEA notes part 1<br />This year I had the privilege of attending TechEd Developers in Barcelona. My session selections were pretty wide, and I’m not going to do a full summary of the sessions I attended; I'll just put up some of my notes and key points. Hopefully they are of some use.<br />
<br />
<strong>Visual Studio 2010</strong>Definitely a huge step in the right direction(s). The step from VSNET 2005 to VSNET 2008 was small compared to the next step. Not to mention what is to come in Team Foundation Server 2010. It would take multiple blog posts to describe it (and the first 100 posts about it are probably already out there). I’m just going to check it out by downloading the <a href="http://www.microsoft.com/downloads/details.aspx?familyid=922b4655-93d0-4476-bda4-94cf5f8d4814&displaylang=en">virtual image with a Visual Studio 2010</a> install that Microsoft offers.<br />
<br />
<strong>Data Dude (aka Visual Studio 2008 Database Edition):</strong>I think this is one of the most overlooked Microsoft developer products, mainly because almost no one has a license for it. Well <a href="http://msdn.microsoft.com/en-us/vsts2008/products/cc990295.aspx">now there is no excuse</a>, as this product is now a part of the most used Visual Studio Team Developer license. I got an intro to the features, and it looks like a combination of many features known from Red Gate’s <a href="http://www.red-gate.com/products/sql_data_compare/index.htm">SQL Data compare</a> and <a href="http://www.red-gate.com/products/SQL_Compare/index.htm">SQL Schema compare</a> (it will generate diff scripts for schema and data at any time between any combination of running database instances or static scripts), plus a nice way of putting all this into TFS versioning and TFS build. Oh, and if you do get your hands on Data Dude, don't forget to apply <a href="http://blogs.msdn.com/gertd/archive/2008/11/25/visual-studio-team-system-2008-database-edition-gdr-rtm.aspx">the new service release (GDR)</a>.<br />
<br />
<strong>Jon Flanders on REST and WCF</strong>This is a great talk if you need to see some code behind all this <a href="http://en.wikipedia.org/wiki/REST">REST</a> hype, and as a bonus you will see how this can be implemented in a simple and elegant way using <a href="http://msdn.microsoft.com/en-us/netframework/aa663324.aspx">WCF 3.5</a>.<br /><br /><a href="http://www.microsoft.com/emea/teched2008/developer/tv/default.aspx?vid=74">Go watch it</a> (free for everyone).<br />
<br />
<strong>Learned a few tips about strong naming, the GAC and NGen</strong>Strong name signed assemblies actually have a performance impact when loaded, if they are not in the GAC. This is because of security checks, hashing, etc. that take place every time the assembly is loaded, whereas this only takes place once if the assembly is placed in the GAC. .NET 3.5 SP1 has a strong name bypass feature to avoid this issue. This will be enabled by default in most cases – see <a href="http://blogs.msdn.com/shawnfa/archive/2008/05/14/strong-name-bypass.aspx">this blog post from the .NET security blog</a> for more info.<br />
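For completeness, the linked post also describes how to turn the bypass back off, forcing full strong name verification again. As a sketch (check the linked post for the exact details), the switch is a runtime config element along these lines:

```xml
<configuration>
  <runtime>
    <!-- Re-enables full strong name verification for fully trusted applications. -->
    <bypassTrustedAppStrongNames enabled="false" />
  </runtime>
</configuration>
```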
<br />
Do you know <a href="http://msdn.microsoft.com/en-us/library/6t9t5wcf(VS.80).aspx">Ngen</a>? You should. Basically Ngen will pre-generate the native code that any assembly will end up as. This is an alternative to the default JIT compilation, and it could potentially give you a free performance gain, just by running:<br />
<br />
<div>
<pre style="background-color: #f4f4f4; border-style: none; color: black; font-family: consolas, 'Courier New', courier, monospace; font-size: 8pt; line-height: 12pt; margin: 0em; overflow: visible; padding: 0px; width: 100%;">ngen install yourassembly.dll</pre>
<br /></div>
Try reading <a href="http://msdn.microsoft.com/en-us/magazine/cc163808.aspx">this article</a> if you want in-depth info on ngen.<br />
<br />
<strong>Source Code Outliner, how did I miss that</strong><br />I went to a talk about tips & tricks for the VS.NET C# IDE - unfortunately not much new stuff for me, except this one: The <a href="http://www.codeplex.com/SourceCodeOutliner">source code outliner</a>. How did I miss that? It's quite simple, all it does is show a tree view of your current source file. Like the little upper-right drop-down you use to locate a specific method, just in an expanded view. It will provide both overview and navigation of your code. Once you start using it, you will know that you missed it. And it is free, of course.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-82803383267791282192008-11-24T06:27:00.000+01:002011-10-25T13:16:59.462+02:00Home network setup (or how a multi-function NAS saved my day)I finally managed to set up a reasonable home network. We had a few issues that needed to be solved:<br />
<ol>
<li>Running out of free network ports in the router - more, please...</li>
<li>Both of the two printers in the house needed to be on the network.</li>
<li>I needed a common network drive for music, photos (with some sharing capabilities) and as a place to put backups from our PCs.</li>
<li>A backup solution for this common network drive (in case the drive fails).</li>
<li>Even more backup (in case something really bad happens to the entire house).</li>
</ol>
The old setup consisted of an ADSL modem connected to a wireless router with a 4-port switch and an old print server with only one usb printer port. Very standard, I guess. So after some consideration I went for two new purchases:<br />
<br />
A network switch to handle issue #1<br />
I chose a <a href="http://trendnet.com/products/proddetail.asp?prod=280_TEG-S80TXE&cat=115">TrendNet 8-port gigabit switch</a>.<br />
<br />
A multi-function <a href="http://en.wikipedia.org/wiki/Network-attached_storage">NAS</a> to handle the rest of the issues:<br />
<a href="http://www.qnap.com/pro_detail_feature.asp?p_id=78">A Qnap TS-109</a> with a 500gb disk (I threw in a <a href="http://www.samsung.com/global/business/hdd/productmodel.do?group=72&type=61&subtype=63&model_cd=257&ppmi=1155">Samsung F1</a>) .<br />
<br />
So what is this thing? It's a small server with room for a hard drive of your choice, 3 USB ports, an eSata port and of course a network connection. The network now looks like this:<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEil7MXxnU9jCVhzWef03bcVp2zmCUlIoxnej63ZkzG6s1qi2kaOA4eO86g8kb8gXsnX2kVfn7xyUDUT52kXV8Vy9RasmIDJOgZmrn5li0eWPJrZ8TWGV58dtEsl2x1k-8YezxFFmyCZBKw/s1600/homenetwork.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEil7MXxnU9jCVhzWef03bcVp2zmCUlIoxnej63ZkzG6s1qi2kaOA4eO86g8kb8gXsnX2kVfn7xyUDUT52kXV8Vy9RasmIDJOgZmrn5li0eWPJrZ8TWGV58dtEsl2x1k-8YezxFFmyCZBKw/s1600/homenetwork.png" /></a></div>
<br />
<br />
Many multi-function NAS products exist - <a href="http://www.qnap.com/">Qnap</a> and <a href="http://www.synology.com/enu/index.php">Synology</a> seem to be the leading product lines. Out of the box (and after a few clicks in the web-based administration tool) I was able to set up:<br />
<ul>
<li>Print server for the 2 connected USB printers. </li>
<li>A network share for our family pictures. We use <a href="http://picasa.google.com/">Picasa</a> and it works fine to map a network drive and point its picture folder to this drive. </li>
<li>A network share for our music. I put all our mp3's there and set up <a href="http://www.apple.com/itunes/download/">iTunes</a> on my machine to have its iTunes Music folder pointed to this share. All other PCs use iTunes + the iTunes server capability of the NAS for playback. This way I can rip new CDs from my machine and every machine on the network can play them.</li>
</ul>
Backup (you will notice I'm caring a lot about backup):<br />
The different models in the Qnap series come with room for one or more disks, so you could choose a model with 2 disks and set it up to use a <a href="http://en.wikipedia.org/wiki/RAID">Raid 1</a> for data redundancy. I chose the TS-109 with only one disk, and I then use Qnap's scheduled backup feature instead. This way I can make use of a spare 500 GB USB disk I had. Each night it will automatically run the backup.<br />
<br />
In case of total disaster (let's hope it never gets to that, but <a href="http://en.wikipedia.org/wiki/Seest_fireworks_disaster">accidents do happen</a>) I want some kind of remote backup. You can pay for internet backup solutions, but I found it too expensive when you need large amounts of data backed up. Instead I use the remote backup feature of the NAS - the plan is to set up a similar NAS for a family member at another location, and by knocking a few holes in our firewalls we should be able to sync each other's data over the net.<br />
<br />
So, to draw a few conclusions: For about $430 (2,500 DKK) I was able to build a nice solution for home networking, sharing and backup. I could have chosen a Windows Home Server based solution like the HP MediaSmart Server, which would be a fine solution too (and about $500 more). Anyway, I like the small footprint of the Qnap.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-13633731446589223072008-11-02T23:53:00.000+01:002011-10-25T13:22:40.872+02:00Installing IIS on a Windows XP SP3 issueI ran into a strange issue today when I realized that I needed a real IIS 6 on my local Windows XP SP3 development environment.<br />
<br />
I went through the usual Add Windows Component install as this dialog came up:<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgU8E57bQ2Y31ecUO682RRGQBQfAthyAhFiypb2_Ubj1YhE8fi3HGZmA8JxzbNC5v3Y1goCxCsvd2sFIFWqrwo14X3Du0InSkdxp8dYLi58qCLmlqlj6Q-K09_E2NCrDFRkdxmoTYPULEY/s1600/iiswinxp.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgU8E57bQ2Y31ecUO682RRGQBQfAthyAhFiypb2_Ubj1YhE8fi3HGZmA8JxzbNC5v3Y1goCxCsvd2sFIFWqrwo14X3Du0InSkdxp8dYLi58qCLmlqlj6Q-K09_E2NCrDFRkdxmoTYPULEY/s1600/iiswinxp.png" /></a></div>
<br />
Wait a minute... Windows XP Professional Service Pack 3 CD? I never had that CD, I just ran the <a href="http://www.microsoft.com/downloads/details.aspx?FamilyId=68C48DAD-BC34-40BE-8D85-6BB4F56F5110&displaylang=en">SP3 install bootstrapper</a> as Microsoft recommends. Anyway I managed to find it on <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=2fcde6ce-b5fb-4488-8c50-fe22559d164e&displaylang=en">Microsoft Download as an iso-file</a>, mounted it and... Not accepted. Actually there is no i386 folder on this CD.<br />
<br />
So, in my case the solution was simple: Just click <strong>browse</strong> and point it to the <strong>C:\WINDOWS\ServicePackFiles\i386</strong> folder.<br />
<br />
After a while the installer will ask for the original Windows XP CD. Feed it with that and there you go, IIS installed.<br />
<br />
It seems I was lucky not to have <a href="http://www.eggheadcafe.com/software/aspnet/32183398/cant-install-iis-in-win-x.aspx">more problems</a> than this and didn't have to try resolutions like <a href="http://support.microsoft.com/?id=894351">this KB article</a>. Just had to figure out that Service Pack 3 CD = C:\WINDOWS\ServicePackFiles\i386.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-5151962190898355461.post-71615333151068839392008-09-10T17:08:00.000+02:002018-06-25T13:30:39.463+02:00My toolbox<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
After finishing my new developer PC, it was time for the software part. Here are my must-have applications that I installed right away:<br />
<br />
<a href="http://office.microsoft.com/en-gb/default.aspx">Office 2007</a><br />
... and a billion service packs and hotfixes ;-)<br />
<br />
<a href="http://msdn.microsoft.com/en-US/vstudio/products/bb933731(en-us).aspx">Visual Studio .NET 2008</a><br />
Of course.<br />
<br />
<a href="http://www.microsoft.com/downloads/details.aspx?familyid=22e69ae4-7e40-4807-8a86-b3d36fab68d3">Consolas font</a><br />
I just have to read all code with this font.<br />
<br />
<a href="http://www.roland-weigelt.de/ghostdoc/">Ghost Doc</a><br />
This is a controversial product, as it can be used to auto-generate code comments with absolutely no information in them, except what can be gathered from the method or property name. I use it like I use the code snippet feature in VS.NET - to provide a quick template that I then fill out with relevant information.<br />
<br />
<a href="http://www.testdriven.net/">Testdriven.NET</a><br />
Great in many ways when doing TDD or unit tests in general. The ability to right-click on a test method and choosing Run Test is just brilliant, useful and simple.<br />
<br />
<a href="http://www.adobe.com/products/reader/">Acrobat Reader</a><br />
Can't read without it.<br />
<br />
<a href="http://get.live.com/writer/overview">Windows Live Writer</a><br />
Can't blog without it.<br />
<br />
<a href="http://www.google.com/talk/">Google Talk</a><br />
Mostly as an email notifier, and I have friends who prefer this IM channel.<br />
<br />
<a href="http://notepad-plus.sourceforge.net/">Notepad++</a><br />
My text editor of choice.<br />
<br />
<a href="http://delicious.com/help/tools">Delicious buttons</a><br />
This is just a great way of organizing and sharing your bookmarks.<br />
<br />
<a href="http://www.timesnapper.com/">Timesnapper</a><br />
Keeps track of what I did on my computer yesterday, last week or whenever. Because I can never remember. And I have timesheets to do.<br />
<br />
<a href="http://www.vmware.com/products/ws/">VMware workstation</a><br />
My virtualization product of choice. Even though Hyper-V looks interesting...<br />
<br />
<a href="http://www.7-zip.org/">7-Zip</a><br />
This is a sleek, fast and free archiver.<br />
<br />
<a href="http://www.getpaint.net/">Paint.NET</a><br />
Superb image editor. Amazing that a quality product like this is free.<br />
<br />
<a href="https://docs.microsoft.com/en-us/powershell/">Powershell</a><br />
The scripting platform of the future. Get it today.<br />
<br />
<a href="http://www.microsoft.com/downloads/details.aspx?familyid=c26efa36-98e0-4ee9-a7c5-98d0592d8c52&displaylang=en">SyncToy 2.0</a><br />
I use it to synchronize my central documents with my USB pen drive.</div>
Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-5151962190898355461.post-68587893172629456342008-09-05T20:10:00.000+02:002012-02-22T15:31:10.873+01:00New developer rig<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
A long time ago I read Jeff Atwood's posts where he was <a href="http://www.codinghorror.com/blog/archives/000905.html">building the ultimate developer PC</a> for <a href="http://www.hanselman.com/blog/">Scott Hanselman</a>. These posts touched my inner hardware geek, even though I hadn't given the hardware side of things much attention for years.<br />
<br />
After reading the post (including <a href="http://www.hanselman.com/blog/GoneQuadDay0WithTheUltimateDeveloperPC.aspx">Scott's post about his 12 second post-to-login time</a>), I knew that this was not gonna let me go before I had built a rig like that myself.<br />
<br />
So after a lot of research, fundraising ;-) and online shopping at hardware stores, waiting for parts forever, and making adjustments, here is the PC I came up with:<br />
<br />
Case: <a href="http://www.antec.com/us/productDetails.php?ProdID=81820">Antec P182</a><br />
Atwood described the craftsmanship behind this case, and it is indeed a great case. The best things about it: it is built with low noise in mind and to a high standard of quality. A few minor issues: it's big - a few inches taller than my previous case. The top fan gave a noticeable hum (and I'm hysterical about noise), so I replaced it with a <a href="http://www.scythe-usa.com/product/acc/002/sflex_detail.html">Scythe S-FLEX 1200</a>. The side panels have small plastic hinges that can break off if you are not careful (ask me how I know).<br />
<br />
Noise damping: <a href="http://www.nexustechnologyusa.com/c/ntusa/damptek_1.html">Nexus DampTEK</a><br />
I fitted this material inside the case to absorb any hiss and hum. Good material: Compact, fire-resistant and easy to remove and refit (the glue is not like tar, as it is on other similar products). The difference: Actually not much (I would say a few dB at most), as the machine was quiet enough before.<br />
<br />
PSU: <a href="http://www.corsairmemory.com/products/hx.aspx">Corsair CMPSU-520HXEU, 520 Watt</a><br />
I like the modular cables, and 520W should be more than enough.<br />
<br />
Mobo: <a href="http://global.msi.com.tw/index.php?func=proddesc&prod_no=1373&maincat_no=1&cat2_no=170">MSI P7N SLI Platinum</a><br />
Just went for the upgraded version of the one in Hanselman's rig. Liked the idea of using nVidia chipset for both motherboard and GPU.<br />
<br />
Processor: <a href="http://processorfinder.intel.com/Details.aspx?sSpec=SLAWE">Intel Core 2 Quad Q9300</a><br />
Following the <a href="http://www.codinghorror.com/blog/archives/000942.html">debate on Coding Horror</a> about Dual Core vs. Quad Core I settled for a quad. I found that Q9300 had the right balance between cost and features for my needs.<br />
<br />
Processor cooler: <a href="http://www.scythe-usa.com/product/cpu/023/scmn1000_detail.html">Scythe Mine</a><br />
I was a little disappointed with the noise level, but adding a <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16835118217">Zalman Fanmate</a> lowered the RPM enough for it to be silent, but still cool.<br />
<br />
RAM: <a href="http://www.corsairmemory.com/products/xms2_dhx.aspx">Corsair TWIN2X4096-6400C4DHX</a> x 2 (that's 8 GB)<br />
You can't get enough RAM, and it is cheap these days. I did spend a little extra for better latency settings on these modules, which actually gave me a 0.1 better memory score in Vista.<br />
<br />
Harddrive: <a href="http://www.wdc.com/en/products/products.asp?DriveID=459">WD Velociraptor 300 GB</a><br />
I waited more than a month for this drive to hit the streets. Expensive, but <a href="http://www.tomshardware.com/reviews/HDD-SATA-VelociRaptor,1914.html">very, very fast</a>.<br />
<br />
Harddrive 2: <a href="http://www.wdc.com/en/products/products.asp?driveid=306&language=en">WD Caviar SE 400AAJS</a><br />
This is the only part that survived from my previous machine. I use it as a data drive.<br />
<br />
GPU: 2 x <a href="http://www.xfxforce.com/en-us/products/graphiccards/8series/8600GT.aspx">XFX GeForce 8600GT</a><br />
I'm not a gamer, so these mid-range graphics cards should do. I dream of a 3-monitor setup, so I got myself 2 cards.<br />
These cards turned out to have quite noisy fans, so I had to replace them with silent <a href="http://www.northq.com/products/gfx/nq3850.html">NorthQ NQ 3850A</a> coolers.<br />
<br />
Monitor: <a href="http://www.samsung.com/us/consumer/detail/detail.do?group=computersperipherals&type=monitors&subtype=lcd&model_cd=LS22PEBSFV/XAA">Samsung 22" 2232BW</a><br />
Stylish. Crystal clear. I just love Samsung monitors. I should get one more of these.<br />
<br />
OS: <a href="http://www.microsoft.com/windows/windows-vista/compare-editions/ultimate.aspx">Vista Ultimate 64-bit with SP1</a><br />
Actually my 64-bit experience has been painless so far. Really. And it runs like a dream.<br />
<br />
So there you go. Niiice machine, runs fast, smooth and quiet. I could have bought myself a Dell and saved many hours, but the process of building this machine was just as enjoyable as the end result.<br />
<br />
I know this post should have been out sooner, as hardware-related stuff gets outdated quickly. Anyway, use it as inspiration.</div>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-5151962190898355461.post-89884442685209258432008-07-29T01:34:00.000+02:002011-10-25T13:28:46.871+02:00Spoolsv.exe causing massive CPU load<br />My Lenovo T60 is running hot. Very hot. This seems to be a known problem with this model as several colleagues are experiencing the same. If the CPU is loaded constantly above 50%, the temperature rises to dangerous levels.<br />
<br />
Standard development work like I do should not give a constant CPU load like this, so I <a href="http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx">Process Explored</a> a bit... In my case the guilty one was the spooler service (spoolsv.exe). It just went mad with a constant 40-50% CPU load for no apparent reason (I was not printing!). It seems that if a print job went wrong a long time ago, remains of it can stay in the spooler, even after restarts.<br />
<br />
So, my simple workaround was a batch file that stops the spooler service, clears it and starts it again:<br />
<br />
<pre>net stop spooler
del c:\windows\system32\spool\printers\. /F /Q
net start spooler</pre>
<br />
Maybe I should put it in Autorun...<br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-1615069808966231642008-02-15T17:44:00.000+01:002011-10-25T13:33:27.805+02:00Save web form contents on authentication timeout and restore itIn ASP.NET solutions using forms based security, there is a problem if the forms authentication ticket times out while the user is filling out a large web form. This is a simple solution to that problem using a custom HttpModule and a single .aspx page:<br />
<br />A typical scenario is where:<br />
<ol>
<li><div>
The solution is using forms authentication (using cookies, and with a timeout set to 30 minutes in web.config).</div>
</li>
<br />
<li><div>
The user starts filling out a large web form. </div>
</li>
<br />
<li><div>
The user takes a long phone call or goes to lunch. </div>
</li>
<br />
<li><div>
The user returns, resumes filling out the form and submits. </div>
</li>
<br />
<li><div>
Bang - the user is redirected to the login page because the authentication ticket timed out.<br />
After logging in again the form will be empty - all work filling out the form is lost.</div>
</li>
</ol>
The easiest solution would be to make the forms authentication ticket live very long (e.g. 24 hours). But in my experience, many customers require for security reasons that the login times out after typically 30 minutes.<br />
<br />
I had no luck googling a solution, so here is what I came up with:<br />
<br />
Solving this requires a simple HttpModule and a transit page. The HttpModule will capture the posted form and save it to application state, just before the forms authentication redirects the user to the login page. Application state is used because Session state is not accessible at the time of interception. A few tricks are used here, see the code for details.<br />
<br />
After login, the same HttpModule will redirect the request to a transit page. This transit page will restore the form contents as hidden fields and do a submit (an HTTP POST triggered on load) to the original page. There you go, form restored. The flow looks like this:<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgP5qWS-H58WvVyZprRmYPMK6VPQtwzpj3f9Yv7oGDnzZBPPLy5FRMVQ75uONqSsMSH9n1yXDeVV9dT9JJ762rRkL24qdWA_NZLBRz98Vh_LAG83D152xz5Nya1HP8DxyRDBchLWdfCuFk/s1600/formstatekeeperdiag.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgP5qWS-H58WvVyZprRmYPMK6VPQtwzpj3f9Yv7oGDnzZBPPLy5FRMVQ75uONqSsMSH9n1yXDeVV9dT9JJ762rRkL24qdWA_NZLBRz98Vh_LAG83D152xz5Nya1HP8DxyRDBchLWdfCuFk/s1600/formstatekeeperdiag.png" /></a></div>
<br />
This way everything from the original form post will be restored, including ViewState. The original page will happily receive the postback and do normal page processing. The restoring of the form is transparent to the end user; usually the transit page is so fast that the user doesn't see it.<br />
<br />
This will also work with Ajax to some extent. Form fields inside an UpdatePanel will be re-populated by the form saver, but any callback from controls inside the UpdatePanel will be "replaced" by a full page postback from the transit page.<br />
<br />
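Conceptually, the transit page just re-emits the saved fields as hidden inputs in a form that posts itself back to the original URL on load. Here is a minimal console sketch of that markup generation - the names and markup are illustrative, not taken from the actual demo:

```csharp
using System;
using System.Collections.Specialized;
using System.Text;
using System.Web; // HttpUtility (System.Web.HttpUtility on modern .NET)

// Sketch: render saved form fields as hidden inputs in a
// self-submitting form that posts back to the original page.
public class TransitPageSketch
{
    public static string BuildTransitHtml(string originalUrl, NameValueCollection fields)
    {
        var sb = new StringBuilder();
        sb.AppendFormat("<form id=\"restore\" method=\"post\" action=\"{0}\">\n",
            HttpUtility.HtmlAttributeEncode(originalUrl));
        foreach (string key in fields)
        {
            // Every saved field (including __VIEWSTATE) becomes a hidden input.
            sb.AppendFormat("<input type=\"hidden\" name=\"{0}\" value=\"{1}\" />\n",
                HttpUtility.HtmlAttributeEncode(key),
                HttpUtility.HtmlAttributeEncode(fields[key]));
        }
        sb.Append("</form>\n");
        // Submit immediately on load; the user normally never sees this page.
        sb.Append("<script>document.getElementById('restore').submit();</script>");
        return sb.ToString();
    }

    static void Main()
    {
        var saved = new NameValueCollection();
        saved["txtName"] = "Jane";
        Console.WriteLine(BuildTransitHtml("/BigForm.aspx", saved));
    }
}
```

The original page receives this as an ordinary postback, so no changes are needed there.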
I have put together a small demo with the HttpModule and the transit page at MSDN Code Gallery: <a href="http://code.msdn.microsoft.com/formsaver">http://code.msdn.microsoft.com/formsaver</a>.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-5151962190898355461.post-19629058921227086592008-02-03T01:58:00.000+01:002011-10-25T13:35:36.480+02:00Backing up your virtual machines<br />After spending quite a few hours installing virtual machines and setting up snapshot chains, I backed up my work simply by copying all virtual machine files to another disk.<br />
<br />
When working on a project using a virtual machine, I'm using another backup strategy, as I need automated, daily backups with at least 7 days history. Here <a href="http://www.symantec.com/norton/products/overview.jsp?pcid=br&pvid=ghost12">Norton Ghost 12</a> really shines. Among its features, Ghost can do complete disk backups and restore these directly as VMware disk files (.vmdk).<br />
<br />
On each "vital" virtual machine, I install Ghost and set up a daily backup of the entire virtual disk to another physical disk, or (preferably) to a network drive. I set up a complete backup every 7 days, and differential backups each day in between. The backups run at lunch time, when my virtual machine is typically fired up and I won't be bothered.<br />
<br />
Ghost is quite efficient in both backup speed and compression. For instance, I have a 20 GB virtual disk, and Ghost only uses 11 GB to store a complete disk backup. Each differential backup typically takes up another 100-1000 MB. The time it takes to do the backup is less than the length of my lunch break (< 30 mins) ;-)<br />
<br />
So, if my external HD (where I keep all my virtual machines) dies, or if the .vmdk file somehow gets corrupted, or if I mess up something inside the virtual machine - I can be back up and running in no time. Even if my host PC is lost, I can be working again on another PC quickly.<br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-60683210404812046162008-02-03T01:14:00.000+01:002011-10-25T13:37:56.319+02:00VMware snapshot and cloningOnce I got running on VMware, I used the snapshot and cloning features to set up a base framework for quickly producing new virtual machines with just the right customization.<br />
<br />The snapshot feature of VMware simply takes a snapshot of the current state of the virtual machine. Then you can chain these snapshots - I use a pattern like this:<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifCDaM2It9cu4Xp6mUeOM0ozenoyuSN93hJZIJ4EMil7EVdBSYDtQDFlHiczto2M3uZuCR3SG2sj8AEmrNzdSmfTi6yqhVVw_DkYv6fCv4MpfvbS3LG2O1_e7aSkjT6vuGn-tX9kz6Eso/s1600/vmware1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifCDaM2It9cu4Xp6mUeOM0ozenoyuSN93hJZIJ4EMil7EVdBSYDtQDFlHiczto2M3uZuCR3SG2sj8AEmrNzdSmfTi6yqhVVw_DkYv6fCv4MpfvbS3LG2O1_e7aSkjT6vuGn-tX9kz6Eso/s1600/vmware1.png" /></a></div>
<br />So, for example my Windows XP base snapshot chain looks like this:<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW7ffUb8x8AIwFUttdnnz7niqHsKrITJkKXoXWW7lvdUS4TuNLwK-vPnfwKzBNO9t41U_104AUiS6ZvH3sKDRcv-_7NUz1Iq0ERc8dDhyvbSPBQtnabxPGqnH_brhOjMxo-vi7DSXoMl0/s1600/vmware2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW7ffUb8x8AIwFUttdnnz7niqHsKrITJkKXoXWW7lvdUS4TuNLwK-vPnfwKzBNO9t41U_104AUiS6ZvH3sKDRcv-_7NUz1Iq0ERc8dDhyvbSPBQtnabxPGqnH_brhOjMxo-vi7DSXoMl0/s1600/vmware2.png" /></a></div>
<br />
So let's say I need to check out the new ASP.NET 3.5 extensions... I then create a full clone of the VS2008 snapshot. If I need to test something on a clean OS, I just clone the clean OS snapshot, and so on. I always use full clones - linked clones are only useful for quick testing (as an undo feature), or to separate a snapshot chain into two.<br />
<br />
After cloning, it is a good idea to run <a href="http://technet.microsoft.com/en-us/sysinternals/bb897418.aspx">newsid</a> to give the new clone a unique SID. VMware takes care of generating a new virtual MAC address when cloning. If your virtual machine needs to run in a corporate Active Directory, adding your new clone to the AD will be the final step.<br />
<br />
When creating new virtual machines from scratch, I would recommend always setting the disk size quite large and using a dynamically expanding disk. I use 64 GB as my default size. This way you probably won't have to spend time handling disk-full problems as your project grows bigger. I also use SCSI as the virtual disk type - as far as I can tell, it should be faster.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-25573628237663806732008-01-16T14:48:00.001+01:002011-10-25T13:39:03.927+02:00I'm a VMware convert...My work involves doing a lot of proof of concept work, as well as moving between various projects. Desktop virtualization is the key for me here. I use one virtual machine per project, and several for testing.<br />
<br />
Until recently, I used Microsoft Virtual PC, but VMware won me over for a couple of reasons:<br />
<ul>
<li>The snapshot and cloning features.</li>
<li>If your host PC has a multi-core CPU, your virtual machines can be multi-core too.</li>
<li>Multi-monitor support (yes, your virtual machines can run multi-monitor).</li>
<li>Better network features.</li>
<li>Easy to convert my old VPC images. </li>
<li>It just seems to run better and to be a more mature product (no hard evidence here, just a gut feeling).</li>
</ul>
Converting my existing VPCs was a little tricky at first - you need to start them up in MS VPC, uninstall the Virtual Machine Additions and then shut down the guest OS. Secondly, <a href="http://www.vmware.com/products/converter/">VMware Converter</a> (free) does a much better conversion job than the converters built into <a href="http://www.vmware.com/products/player/">VMware Player</a> and <a href="http://www.vmware.com/products/ws/">VMware Workstation</a>.<br />
<br />
After converting, I fired up each virtual machine in VMware Workstation and installed VMware Tools on the guest OS.<br />
<br />
Now my virtual machines are running perfectly.<br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-3150363814182357172008-01-16T14:48:00.000+01:002011-10-25T13:40:26.651+02:00The process cannot access the file because it is being used by another processSometimes working on large projects with many developers using Visual Studio .NET can be frustrating, simply because you have a lot of hard-to-control factors. You need to know every best practice of the tool, as well as every problem and workaround. At the same time, this must be communicated to the rest of the team effectively. If you don't master this, you will spend an awful lot of time solving tool-related problems for you or your co-developers. And still, new things come up. Like this one:<br />
<br />
At random our developers encountered a "Cannot create/shadow copy 'XXX' when that file already exists." on their machines when starting up a web application. A few reloads and mumbling secret magic words usually made the error go away. Annoying anyway. It happened both in Visual Studio .NET 2005 and 2008.<br />
<br />
So, googling for a solution I found suggestions like <a href="http://blog.devstone.com/Aaron/archive/2007/02/22/2207.aspx">this</a> and <a href="http://bloggingabout.net/blogs/rick/archive/2007/02/14/cannot-create-shadow-copy-your-assembly-info-here-when-that-file-already-exists.aspx">this</a>. A single line in the web.config (under system.web) should cure the problem:<br />
<br />
<pre class="brush: xml; gutter: false;"><hostingEnvironment shadowCopyBinAssemblies="false" /></pre>
<br />
Now don't stop reading! This cures the problem mentioned; however, it produced another, even worse problem: as the developers on the team got the latest version from our version control, they started having trouble building. Typically:<br />
<ol>
<li><div>
Build the web project. No problems</div>
</li>
<li><div>
View the web project in the browser</div>
</li>
<li><div>
Make some changes in code and build again - build error:</div>
</li>
</ol>
<strong>Unable to copy file [Some random dll] to bin\debug\[Some random dll]. The process cannot access the file because it is being used by another process.</strong><br />
<br />
It only happened for Visual Studio .NET 2005 users (but that is still the majority in our team). Workarounds like restarting the local web server or deleting the DLL manually worked for some, but not all. Rebuilding the entire solution worked, but in our 48-project solution, this took way too long on each build.<br />
<br />
It took me some time to figure out that the web.config change was the cause - and still I would probably not be able to prove it in court. Maybe other characteristics in our solution played a part too (the error is a confirmed bug, see the <a href="http://support.microsoft.com/kb/313512">Microsoft knowledge base</a>). But removing the line from web.config cured the problem for everyone. Now we happily live with our occasional "Cannot create/shadow copy 'XXX' " errors ;-)<br />
<br />
The final solution seems to be upgrading to Visual Studio .NET 2008 (where the bug seems to be fixed) and re-applying the web.config change!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-38484558270505342062007-12-04T17:51:00.000+01:002011-10-25T13:42:53.212+02:00When a VPC disk hits the ceilingI use VPCs for nearly everything... Probably more posts to come about that. Today I realized that a dynamically expanding hard disk in a VPC actually <u>does</u> have a fixed maximum size (purely my own ignorance to blame - it is obvious in the Virtual Disk Wizard). So when my current project VPC reached its default 16 GB disk limit, no magic resizing occurred, simply "disk full".<br />
<br />
Turns out help is not far away. The <a href="http://vmtoolkit.com/" target="_blank">vmToolkit site</a> has a <a href="http://vmtoolkit.com/files/folders/converters/entry87.aspx" target="_blank">nice free utility</a> that lets you resize your .vhd files. You will have to sign up at the site, but it's free. It took me about an hour to expand my .vhd from 16 GB to 32 GB, and it works perfectly:<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1j7gN-7IPLi6TYcfTIgy9HqMKlPGzDkUtESGNYmyqruStvlL1R3zJ4LMreQqUZf0vFFzBCpxZsUTI0DEJ-Zj3vcJfqhSQ8puas07UO2CIspcHyegQ5dnXn_FmfFGeMjluNmrLDb311IE/s1600/vhdresizer.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1j7gN-7IPLi6TYcfTIgy9HqMKlPGzDkUtESGNYmyqruStvlL1R3zJ4LMreQqUZf0vFFzBCpxZsUTI0DEJ-Zj3vcJfqhSQ8puas07UO2CIspcHyegQ5dnXn_FmfFGeMjluNmrLDb311IE/s1600/vhdresizer.png" /></a></div>
<br />
<strong>However, please note:</strong> The virtual OS will see the disk expansion as a new raw, unformatted partition. So, for instance in Windows XP's Disk Management, it will look like this:<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdyxskFb8u6NitNblS3rTiLLazDYbYGnlgs416I-rOD0C8KLFwP7536zeckIVVwJg19c5cG5bTqF1JLMqojN8lvEsSn678OyH9SKr7DGmgnGga-C4WTZ4EMSnYvTuybFE__fQuko9MMIA/s1600/diskmanager.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdyxskFb8u6NitNblS3rTiLLazDYbYGnlgs416I-rOD0C8KLFwP7536zeckIVVwJg19c5cG5bTqF1JLMqojN8lvEsSn678OyH9SKr7DGmgnGga-C4WTZ4EMSnYvTuybFE__fQuko9MMIA/s1600/diskmanager.png" /></a></div>
<br />
The only way to expand the primary partition into the new space was in my case (Windows XP) to use <a href="http://www.symantec.com/norton/products/overview.jsp?pcid=sp&pvid=pm80" target="_blank">Partition Magic</a>. It took me $69 and about 12 seconds ;-)<br />
<br />
Btw - check out this list of <a href="http://vpc.visualwin.com/" target="_blank">What Works and What Doesn't in Microsoft Virtual PC</a>, an impressive list, especially if you look at how many OSes actually will run in a VPC.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5151962190898355461.post-85705344905200516132007-11-25T00:02:00.000+01:002011-10-25T13:51:10.041+02:00ASP.NET 2.0 Forms authentication - Keeping it customized yet simpleIn my continuous quest to migrate a large ASP site to ASP.NET, one central step was to implement forms authentication that conformed to all the existing schemas and business logic around users and user rights. <br />
I think a scenario like this with a lot of predefined conditions is not uncommon. And here the Membership and RoleProvider features of 2.0 usually don't fit (these are great features, but not always applicable). It usually (like in our case) comes down to something like this: <br />
<br />
Membership: Methods X, Y and Z of the Membership model are not needed by the preconditions, and methods Q and R need to be modified. Special methods A and B need to be added. <br />RoleProvider: The preconditions require a custom RoleProvider class with specialization requirements like those of the Membership system. <br />
<br />
One solution would be to write your own implementations, but you will probably end up doing a lot of (unnecessary) code work, and the intended productivity benefits will be lost. <br />
My suggested solution is to specialize at a lower level of .NET authentication and authorization - IPrincipal and IIdentity. The steps would be: <br />
<br />
<ul>
<li> <div>
Make your own implementation of IIdentity if needed (usually GenericIdentity or FormsIdentity are sufficient) </div>
</li>
<li> <div>
Make your own implementation of IPrincipal. Example:<br />
<br /></div>
</li>
</ul>
<pre class="brush: csharp; gutter: false;">public class MyPrincipal : IPrincipal
{
public MyPrincipal(IIdentity ident, List<string> roles, int someCustomProperty1, string someCustomProperty2)
{
this.identity = ident;
this.roles = roles;
this.someCustomProperty1 = someCustomProperty1;
this.someCustomProperty2 = someCustomProperty2;
}
IIdentity identity;
public IIdentity Identity
{
get { return identity; }
}
private List<string> roles;
public bool IsInRole(string role)
{
return roles.Contains(role);
}
private int someCustomProperty1;
public int SomeCustomProperty1
{
get { return someCustomProperty1; }
}
private string someCustomProperty2;
public string SomeCustomProperty2
{
get { return someCustomProperty2; }
}
}</pre>
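As a quick sanity check, a principal like this can be exercised entirely outside ASP.NET. The sketch below uses a condensed copy of the class (identity and roles only, no custom properties), with a GenericIdentity standing in for the FormsIdentity a real request would carry:

```csharp
using System;
using System.Collections.Generic;
using System.Security.Principal;

// Condensed version of the MyPrincipal class from the post:
// just the identity and the role check.
public class MyPrincipal : IPrincipal
{
    private readonly IIdentity identity;
    private readonly List<string> roles;

    public MyPrincipal(IIdentity ident, List<string> roles)
    {
        this.identity = ident;
        this.roles = roles;
    }

    public IIdentity Identity { get { return identity; } }

    public bool IsInRole(string role) { return roles.Contains(role); }
}

class Demo
{
    static void Main()
    {
        // GenericIdentity stands in for the FormsIdentity of a real request.
        IPrincipal principal = new MyPrincipal(
            new GenericIdentity("jdoe"),
            new List<string> { "Editor", "Approver" });

        Console.WriteLine(principal.Identity.Name);      // jdoe
        Console.WriteLine(principal.IsInRole("Editor")); // True
        Console.WriteLine(principal.IsInRole("Admin"));  // False
    }
}
```

This independence from the ASP.NET pipeline is exactly what makes the principal easy to unit test.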
<br />
Set up web.config to forms based authentication. Typically like:<br />
<br />
<pre class="brush: xml; gutter: false;"><system.web>
<authentication mode="Forms">
<forms loginUrl="Logon.aspx">
</forms>
</authentication>
<authorization>
<deny users="?" />
</authorization>
</system.web> </pre>
<pre class="brush: xml; gutter: false;">
</pre>
<ul>
<li><div>
Your code: A successful login should establish the encrypted cookie:</div>
</li>
</ul>
<span class="Apple-style-span" style="font-family: monospace; white-space: pre;">FormsAuthentication.SetAuthCookie(userId, false); </span><br />
<span class="Apple-style-span" style="font-family: monospace; white-space: pre;"><br /></span><br />
<ul>
<li><div>
Your code: Global.asax should enrich each request with the needed extra data and cache it:</div>
</li>
</ul>
<pre class="brush: csharp; gutter: false;">protected void Application_AuthenticateRequest(object sender, EventArgs e)
{
if (HttpContext.Current.User != null)
{
if (HttpContext.Current.User.Identity.IsAuthenticated)
{
if (HttpContext.Current.User.Identity is FormsIdentity)
{
// Get Forms Identity From Current User
FormsIdentity id = (FormsIdentity)HttpContext.Current.User.Identity;
// Create a custom Principal Instance and assign to Current User (with caching)
MyPrincipal principal = (MyPrincipal)HttpContext.Current.Cache.Get(id.Name);
if (principal == null)
{
// Create and populate your Principal object with the needed data and Roles.
principal = MyBusinessLayerSecurityClass.CreatePrincipal(id, id.Name);
HttpContext.Current.Cache.Add(
id.Name,
principal,
null,
System.Web.Caching.Cache.NoAbsoluteExpiration,
new TimeSpan(0, 30, 0),
System.Web.Caching.CacheItemPriority.Default,
null);
}
HttpContext.Current.User = principal;
}
}
}
}</pre>
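The caching above is essentially a sliding-expiration lookup keyed by user name: each hit resets the 30-minute clock, and an expired entry triggers a rebuild from the business layer. The same idea in miniature, independent of ASP.NET's Cache class (a hand-rolled sketch to illustrate the pattern, not how the Cache is actually implemented):

```csharp
using System;
using System.Collections.Generic;

// Minimal sliding-expiration cache: reuse the cached value if it was
// touched within the window, rebuild it otherwise.
public class SlidingCache<TValue>
{
    private class Entry { public TValue Value; public DateTime LastTouched; }

    private readonly Dictionary<string, Entry> entries = new Dictionary<string, Entry>();
    private readonly TimeSpan window;

    public SlidingCache(TimeSpan window) { this.window = window; }

    public TValue GetOrAdd(string key, Func<TValue> factory, DateTime now)
    {
        Entry entry;
        if (entries.TryGetValue(key, out entry) && now - entry.LastTouched < window)
        {
            entry.LastTouched = now; // sliding: every hit resets the clock
            return entry.Value;
        }
        // Missing or expired: rebuild (in the post, this is CreatePrincipal).
        entry = new Entry { Value = factory(), LastTouched = now };
        entries[key] = entry;
        return entry.Value;
    }
}

class CacheDemo
{
    static void Main()
    {
        int builds = 0;
        var cache = new SlidingCache<string>(TimeSpan.FromMinutes(30));
        Func<string> make = () => { builds++; return "principal for jdoe"; };
        var t0 = new DateTime(2024, 1, 1);

        cache.GetOrAdd("jdoe", make, t0);                 // miss: build
        cache.GetOrAdd("jdoe", make, t0.AddMinutes(20));  // hit, clock reset
        cache.GetOrAdd("jdoe", make, t0.AddMinutes(45));  // hit (25 min since touch)
        cache.GetOrAdd("jdoe", make, t0.AddMinutes(90));  // expired: rebuild
        Console.WriteLine(builds); // 2
    }
}
```

The sliding behaviour means an active user never hits the database for roles, while an idle one is refreshed after 30 minutes.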
<br />
From an architectural view, our specializations will reside in three places:<br />
<ul>
<li><div>
IIdentity: Your implementation must contain some sort of unique user identity - nothing else. </div>
</li>
<li><div>
IPrincipal: Your implementation can contain extra user information, and role checking logic must be present (IsInRole as a minimum). </div>
</li>
<li><div>
Your business logic: Here you should place the code that handles your specific security - like checking a login, getting the user's roles, etc., as well as all the very-special-method-x methods that your preconditions require.</div>
</li>
</ul>
This way you will end up with a clean, easy-to-test security implementation that satisfies the preconditions of your solution - and nothing else. As a bonus you have avoided dependencies between your security code and ASP.NET - this can make testing easier and makes your security code reusable with other types of GUI.Unknownnoreply@blogger.com8tag:blogger.com,1999:blog-5151962190898355461.post-73774499571946141542007-11-21T14:33:00.000+01:002011-10-25T13:56:37.000+02:00Server.UrlEncode in ASP and ASP.NETIn our process of migrating parts of a large site from traditional ASP to ASP.NET, we encountered a problem with Server.UrlEncode and special characters:<br />
<br />
If the ASP page does something like:<br />
<br />
<pre class="brush:csharp;">
param = "These characters are special: Æ Ø Å"
param = Server.URLEncode(param)
Response.Redirect("MyAspNetPage.aspx?p=" & param)
</pre>
<br />
And the receiving ASP.NET page does a<br />
<br />
<pre class="brush:csharp;">
string param = Request.QueryString["p"];
</pre>
<br />
The result is not the original text, but some garbage characters. The problem is that the default encoding differs between the two platforms.<br />
<br />
We need to do two workarounds:<br />
<br />
First, double-encode on the ASP page:<br />
<br />
<pre class="brush:csharp;">
param = "These characters are special: Æ Ø Å"
param = Server.URLEncode(Server.URLEncode(param))
Response.Redirect("MyAspNetPage.aspx?p=" & param)
</pre>
<br />
On the ASP.NET side, Request.QueryString implicitly URL-decodes using the default encoding, which is not what we want. By double-encoding on the ASP page, we get a chance to tell the framework to use a specific encoding when URL-decoding the second level. The trick is not to use the Server object, but the HttpUtility class instead:<br />
<br />
<pre class="brush:csharp;">
string param = HttpUtility.UrlDecode(Request.QueryString["p"], System.Text.Encoding.GetEncoding("ISO-8859-1"));
</pre>
<br />
ISO-8859-1 is the default encoding for traditional ASP in our case. Now the received and parsed query string is the same as originally intended.<br />
<br />
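The whole round trip can be reproduced in a small console program (a sketch, assuming HttpUtility is available via System.Web; UTF-8 stands in for ASP.NET's default request encoding):

```csharp
using System;
using System.Text;
using System.Web; // HttpUtility (System.Web.HttpUtility on modern .NET)

// Reproduces the round trip: the ASP side double-encodes with Latin-1;
// the ASP.NET side decodes once with its default encoding and once
// explicitly with ISO-8859-1.
class EncodingRoundTrip
{
    static void Main()
    {
        Encoding latin1 = Encoding.GetEncoding("ISO-8859-1");
        string original = "These characters are special: Æ Ø Å";

        // ASP side: Server.URLEncode applied twice (Latin-1 percent escapes).
        string wire = HttpUtility.UrlEncode(HttpUtility.UrlEncode(original, latin1), latin1);

        // ASP.NET side: Request.QueryString has already decoded one level
        // with the default encoding; we decode the remaining level as Latin-1.
        string afterQueryString = HttpUtility.UrlDecode(wire, Encoding.UTF8);
        string restored = HttpUtility.UrlDecode(afterQueryString, latin1);

        Console.WriteLine(restored == original); // True
    }
}
```

The outer decode is harmless with any encoding because the double-encoded string is pure ASCII; only the inner decode needs the correct Latin-1 encoding.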
Please note that the UrlDecode method of the HttpUtility class is different from the one exposed by the Server object (which actually maps to the HttpServerUtility class). I would recommend using the HttpUtility methods, as these provide more functionality - like specifying the encoding.<br />
<br />
Going from ASP.NET to ASP is easier - just use the HttpUtility.UrlEncode on the ASP.NET page to encode a query string for an ASP page.Unknownnoreply@blogger.com0