bling.github.io

This blog has relocated to bling.github.io.

Saturday, December 11, 2010

Auto Mocking NSubstitute with Castle Windsor

I was debating whether to write this blog post because it’s so damn simple to implement, but hey, if it saves someone else some time, I did some good.

First, register an ILazyComponentLoader with Windsor (the kernel consults any registered lazy component loader whenever it’s asked for a component it doesn’t know about):

var c = new WindsorContainer();
c.Register(Component.For<LazyComponentAutoMocker>());

Then, the implementation of LazyComponentAutoMocker is simply this:

public class LazyComponentAutoMocker : ILazyComponentLoader
{
  public IRegistration Load(string key, Type service, IDictionary arguments)
  {
    // for any unknown service, hand Windsor an NSubstitute mock of that type
    return Component.For(service).Instance(Substitute.For(new[] { service }, null));
  }
}

And you’re done!  Here’s a simple unit test example using only the code from above:

[Test]
public void IDictionary_Add_Invoked()
{
  var dict = c.Resolve<IDictionary>();
  dict.Add(1, 1);
  dict.Received().Add(1, 1);
}
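
This same trick auto-mocks constructor dependencies, which is where it really pays off.  Here’s a sketch with a hypothetical AlertService and IEmailSender (neither is part of the code above; only the class under test needs explicit registration):

public interface IEmailSender { void Send(string to); }

public class AlertService
{
  private readonly IEmailSender _sender;
  public AlertService(IEmailSender sender) { _sender = sender; }
  public void Alert(string to) { _sender.Send(to); }
}

[Test]
public void Dependencies_Are_AutoMocked()
{
  c.Register(Component.For<AlertService>()); // register only the class under test
  var svc = c.Resolve<AlertService>();       // IEmailSender is substituted on the fly
  svc.Alert("me@example.com");
  c.Resolve<IEmailSender>().Received().Send("me@example.com");
}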

That was almost too easy.

Sunday, December 5, 2010

Working in Git to Working in Mercurial

I took the dive a couple of weeks ago, learned how to use Git, and fell in love with its simplicity.  I don’t know what it is, but after using Git every day it actually started to make sense that git checkout is used for so many things, which is ironic, because before I used Git, every introductory tutorial I read left me thinking, “checkout does what and what and what now??”.

So why did I switch to Mercurial?  I need to push/pull from a Subversion repository, and I work on Windows.  General day-to-day work was great, but when I needed to git svn rebase or git svn dcommit it would take so long that I simply left to get coffee.  What’s worse, no matter what I set for core.autocrlf I would always get some weird whitespace merging error when nothing was actually wrong.  It became a regular part of my workflow to just git rebase --skip everything, because that’s what fixed things.  Scary.

The crappy Subversion and whitespace support (at least on Windows) led me to try Mercurial.  After getting accustomed to it, I found that the two are actually a lot more similar than they are different.

First things first: add the following to the mercurial.ini file in your home directory, which makes the bookmark you’re currently on advance with each new commit:

[bookmarks]
track.current = True

Here’s a comparison of my typical Git workflow translated to Mercurial:

                            Git                               Mercurial
Work on a new feature/bug   git checkout -b new_feature       hg book new_feature
Hack hack hack              git add .
Commit                      git commit -m "done!"             hg commit -m "done!"
Hack more and commit        git commit -a -m "more hacking"   hg commit -m "more hacking"
Sync with upstream          git pull                          hg pull
Merge                       git checkout master               hg merge new_feature
                            git merge new_feature             hg commit -m "merged"
Push                        git push                          hg push

Notice that they’re practically the same?  There are some minor differences.  The obvious one is that you need to add files to Git’s index before committing.  The other is that I didn’t have to switch branches in Mercurial (more on that later).  But aside from that, it’s pretty much the same from a user’s point of view.  Check this fantastic thread on StackOverflow for one of the best comparisons on the net if you want to dive into the technical details.

So far, there are only two things that bother me having switched:
  a) No fast-forward merge.  You need to manually rebase and then commit.
  b) No automatic commit after a conflict free merge.

These are fairly minor annoyances, and they can be scripted/aliased away.  Or, if you don’t like those defaults in Git, you can override them with command arguments or set them in gitconfig.  Mercurial can similarly change default arguments in its hgrc/mercurial.ini.
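
For example, something like this in mercurial.ini would streamline things (a sketch, not from the original post; the rebase extension ships with Mercurial but must be enabled):

[extensions]
; the bundled rebase extension adds 'hg rebase' and 'hg pull --rebase'
rebase =

[alias]
; one-step "sync with upstream": pull and rebase local work on top
sync = pull --rebase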

One of the biggest advantages of Git is easily switching between all its local lightweight branches at lightning speed.  Tangled source code is no more!  Mercurial’s bookmarks confused me when I first used them, because all a bookmark does is label a head.  That is exactly what git checkout -b does too, but for some reason I had always visualized Git “branching” the latest commit into two paths.  How can it branch if both labels point at the same commit?  It can only branch after a commit introduces changes, which is exactly what Mercurial shows.  In this scenario, Mercurial is more explicit, whereas Git is implicit.

Using bookmarks, you can mimic Git’s workflow like this:

                                    Git                           Mercurial
Create a bunch of 'working areas'   git branch feature1           hg book feature1
                                    git branch feature2           hg book feature2
Switch/commit/hack/commit           git checkout feature1         hg up feature1
                                    git commit -a -m "feature1"   hg commit -m "feature1"
                                    git checkout feature2         hg up feature2
                                    git commit -a -m "feature2"   hg commit -m "feature2"
Sync upstream                       git pull                      hg pull
                                    git checkout master           hg up default
Merge in 1 feature                  git merge feature1            hg merge feature1
                                                                  hg commit -m "merged"
Delete branch/bookmark              git branch -d feature1        hg book -d feature1
Push                                git push                      hg push -r tip
Switch back and hack again          git checkout feature2         hg up feature2

The -r tip switch on hg push might have raised an eyebrow.  It tells Mercurial to push only the changes that lead to the tip.  This will include the changes in feature1 that we just merged in, but exclude all the ones in feature2.  If you issue a bare hg push, it will complain and warn you that you’re going to create multiple heads on the remote repository.  This is not what you want, since there may be unfinished features, experimental branches, etc.  Of course, you can force it anyway, but that’s not a good idea.

At first, tip was the most confusing thing to me because I tried to associate it with Git’s master, which is simply not what it is.  The tip is the newest changeset that you know about, and it can be on any branch or bookmark.  Once I understood that, and stopped trying to create a Git master equivalent, everything was straightforward.

So let’s start with an example.  After syncing, I create a bookmark feature1, and then making a commit looks like this:

[image: repository graph with one new commit at the head, labeled with the feature1 bookmark]

And then if you switch to feature2 and make a commit, it becomes like this:

[image: repository graph after committing on the feature2 bookmark]

Here’s where you start thinking, “I don’t have a master that tracks upstream changes, so how do I separate my local changes?”  And here’s a situation where Mercurial does more black magic than Git.  If there are upstream changes, then after issuing hg pull, this happens:

[image: repository graph after hg pull, with the pulled upstream changes forming a new head]

Mercurial automatically split my local changes into their own separate branches (actually, heads is the accurate term).  After this operation, tip now points at the changes I just pulled in from upstream, instead of at my bookmarks.

From here, it’s simply hg up tip to switch to the “master” branch.  Then hg merge feature1; hg commit, and it looks like this:

[image: repository graph after merging feature1 into tip and committing]

Then it’s simply hg push -r tip, and you’re good to go.  Basically, if you bookmark every feature/bug you work on, then you should only have one head that doesn’t have a bookmark, and that plays the role of the ‘master’ branch from Git.

What about Subversion?

Whoops, I almost forgot why I switched in the first place.  First, install hgsubversion; after that’s set up, simply:

hg clone http://path/to/subversion/repository

And then it’s just hg pull or hg push.  Is it really that simple?  Yes, yes it is.

Saturday, December 4, 2010

CQRS: Building a “Transactional” Event Store with MongoDB

As you already know if you’re familiar with MongoDB, it does not support transactions.  The closest thing we have is atomic modification of a single document.

The Event Store in a CQRS architecture has the important responsibility of detecting concurrency violations, where two different sources try to update the same version of the aggregate.  The one that arrives late should be denied its changes, with an exception thrown.  This ensures the integrity of the data.

Here is a very simple, typical implementation of appending events to the event store:

public void Append(Guid id, long expectedVersion, IEnumerable<IEvent> events)
{
  try
  {
    _events.Insert(events.Select(x => ...)); // convert to storage type
  }
  catch (MongoException ex) // driver-specific exception type
  {
    // E11000 = duplicate key: the unique (aggregate_id, version) index was violated
    if (ex.Message.Contains("E11000"))
      throw new ConcurrencyException(...);
    throw;
  }
}

The syntax is a mix of C# and pseudocode, but the basic concepts are the same.  This assumes that you’ve set up a unique compound index on the collection over the aggregate ID and the version.  Thus, when you insert something that already has a matching ID/version pair, Mongo reports a duplicate key violation, and all is good.
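
Creating that index from the mongo shell would look something like this (assuming the collection is named events, with the field names from the example documents below):

db.events.ensureIndex({ aggregate_id: 1, version: 1 }, { unique: true })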

But wait!  Operations are atomic per document!  So what happens if you append 100 events, and it fails on the 43rd one?  Events 1 through 42 will continue to exist in the data store, which is bad news.

Obviously, this solution is not going to work.  The next step was to do something like this:

catch (MongoException ex)
{
  if (ex.Message.Contains("E11000"))
  {
    // compensate: delete any events from this batch that did get inserted
    foreach (var e in events)
      _events.Delete(new { _id = e._id });

    throw new ConcurrencyException(...);
  }
  throw;
}

So, before inserting into the collection, each event gets a generated ObjectId, so that if the insert fails, the catch block can simply tell the data store to delete everything.

At first glance this seems to fix everything, except for one glaring problem: what happens if you lose the connection to the database before, or midway through, sending the deletes?  Now you have to guarantee that those deletes eventually happen, which raises the question of where to store them.  A local file?  Another database?  And in the meantime, if another process in the system queries all events for the same aggregate, it will get back invalid data.

So, we’re back to square one.  We need to simulate a transaction through a single insert.

The secret is in the schema design.  Initially, we started out with a straightforward row-per-event schema.  But since we’re operating with documents, we can model it as a batch of events.

Thus, instead of versioning every event individually, we version a batch of events.  For example, originally we would insert 3 events, and the data saved would look like this:

{ _id = 1, aggregate_id = 1, version = 1, event = { … } }
{ _id = 2, aggregate_id = 1, version = 2, event = { … } }
{ _id = 3, aggregate_id = 1, version = 3, event = { … } }

In the new schema, it would look like this:

{ _id = 1, aggregate_id = 1, version = 1, events = [ { … }, { … }, { … }, { … } ] }
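
With this schema, appending a batch becomes a single document insert, so the duplicate key trick from before is all the “transaction” we need.  Here’s a hedged sketch (EventBatch and the _batches collection are illustrative names, not from the original code):

public class EventBatch
{
  public ObjectId Id { get; set; }
  public Guid AggregateId { get; set; }
  public long Version { get; set; }
  public List<IEvent> Events { get; set; }
}

public void Append(Guid id, long expectedVersion, IEnumerable<IEvent> events)
{
  var batch = new EventBatch
  {
    AggregateId = id,
    Version = expectedVersion + 1, // one version per batch, not per event
    Events = events.ToList(),
  };

  try
  {
    _batches.Insert(batch); // one document, one atomic write
  }
  catch (MongoException ex)
  {
    // same unique (AggregateId, Version) index trick as before
    if (ex.Message.Contains("E11000"))
      throw new ConcurrencyException("Aggregate " + id + " was updated concurrently.");
    throw;
  }
}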

Now, a downside to this approach is that you lose a bit of granularity of stored events, since you are grouping multiple events under a single version.  However, I don’t see this as a huge loss, since the main reason you want to use event sourcing in the first place is to be able to restore an aggregate to any state in its history, and we still retain that functionality.

In our case, this is working very well for us.  When a command gets handled, it generates a bunch of events that get applied and then saved to MongoDB.  I can’t think of any scenario where I’d want to replay to the middle of a half-processed command (though of course it’s possible anyway: just replay half of a batch of events).  But that’s just asking for trouble; it’s most likely easier to just re-process the command.

Now, you may be asking why go through the trouble of batching events when you can just store one document per aggregate, and then put all events in one document?  Yes, that would solve the problem very effectively…until you hit the 4MB per document limit ;-)

Tuesday, November 23, 2010

CQRS: Auto register event handlers

I’m not going to go into detail about the role of event handlers in a CQRS architecture, since a quick Google/Bing search will turn up plenty of very good information.  What this post is about is a solution to the “how do I quickly register something to handle a bunch of events” problem without copy-pasting all over the place.  There are other solutions out there, like this one.  Here’s something I came up with (it borrows some concepts from my post on Weak Events).

public class Aggregate
{
  private delegate void OpenEventHandler<in TTarget, in TEvt>(TTarget target, TEvt @event);
  private static readonly IDictionary<Type, OpenEventHandler<Aggregate, IEvent>> _evtHandlers = new Dictionary<Type, OpenEventHandler<Aggregate, IEvent>>();

  private object _something; // example state mutated by an event handler

  static Aggregate()
  {
    // find every non-public instance method named ApplyEvent taking a single IEvent-derived parameter
    var methods = from m in typeof(Aggregate).GetMethods(BindingFlags.NonPublic | BindingFlags.Instance)
                  let p = m.GetParameters()
                  where m.Name == "ApplyEvent" && p.Length == 1 && typeof(IEvent).IsAssignableFrom(p[0].ParameterType)
                  select m;

    var registerForwarder = typeof(Aggregate).GetMethod("RegisterForwarder", BindingFlags.NonPublic | BindingFlags.Static);
    foreach (var m in methods)
    {
      Type eventType = m.GetParameters()[0].ParameterType;
      var forwarder = registerForwarder.MakeGenericMethod(eventType).Invoke(null, new object[] { m });
      _evtHandlers[eventType] = (OpenEventHandler<Aggregate, IEvent>)forwarder;
    }
  }

  private static OpenEventHandler<Aggregate, IEvent> RegisterForwarder<TEvt>(MethodInfo method)
  {
    // create an "open delegate": the target instance becomes the first parameter
    var invoker = typeof(OpenEventHandler<,>).MakeGenericType(typeof(Aggregate), typeof(TEvt));
    var forwarder = (OpenEventHandler<Aggregate, TEvt>)Delegate.CreateDelegate(invoker, null, method);
    return (g, e) => forwarder(g, (TEvt)e);
  }

  private void ApplyEvent(EventHappened e)
  {
    _something = e.Something;
  }

  public void ApplyChanges(IEnumerable<IEvent> events)
  {
    foreach (var e in events)
    {
      _evtHandlers[e.GetType()](this, e);
    }
  }
}

A couple things:

  • The registration happens in the static constructor.  This is important, because the relatively heavy cost of the reflection happens only once per aggregate type.
  • The filtering of methods is arbitrary.  I chose “ApplyEvent” here as the convention, but of course you can choose whatever you like.
  • ApplyChanges simply invokes the handler from the dictionary directly.  Assuming you’re being a good citizen with the code, accessing _evtHandlers doesn’t need a lock because, once built, it is never modified.

So in summary, it finds all methods named ApplyEvent in the current class and generates an “open delegate” for each, which takes an extra parameter: the instance itself.  In this case, the instance is the aggregate, as shown in the ApplyChanges method.
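
For completeness, usage might look something like this (EventHappened is the hypothetical event type from the snippet above):

public class EventHappened : IEvent
{
  public string Something { get; set; }
}

// replaying history onto a fresh aggregate dispatches to the matching ApplyEvent overloads
var aggregate = new Aggregate();
aggregate.ApplyChanges(new IEvent[] { new EventHappened { Something = "hello" } });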

So there you have it!  Excluding the lengthy LINQ query, roughly 10 lines of code to find and register all event handlers in the aggregate.  And if you’re wondering, the performance cost is negligible because there’s no reflection involved in the invocation of the handlers.  Awesome!

Monday, November 8, 2010

That Immutable Thing

Do you have some sort of ImmutableAttribute in your domain that you use to mark classes as immutable?  Have you ever needed to enforce that contract?  Checking for readonly fields isn’t enough?  Well, this weekend I had a code spike that helped solve this problem in my current project.

For this project, I’m using the NoRM driver for MongoDB, and one of the limitations of the serializer is that all types must be classes, must have a default constructor, and must have a public setter on every property.  So now the domain has a bunch of classes like this:

public class UserCreatedEvent : IEvent
{
  public string Name { get; set; }
  public UserCreatedEvent() { }
  public UserCreatedEvent(string name) { Name = name; }
}

Thank God for code snippets (or Resharper templates).  With so many classes like this that need to get serialized, I wanted to be extra sure that no code ever calls the setter for the Name property.  Thankfully, with some help from Mono.Cecil, it’s possible.

First off, you need to define ImmutableAttribute and add it to classes; in my case, those are historical domain events that get serialized to an event store.
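
The attribute itself is just a marker.  The post doesn’t show its definition, but something like this is all it takes:

[AttributeUsage(AttributeTargets.Class)]
public class ImmutableAttribute : Attribute { }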

Then, you just write a unit test which leverages the power of Mono.Cecil.  It turned out to be pretty simple.  Here’s the code:

using System.Collections.Generic;
using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;
using NUnit.Framework;

namespace blingcode
{
    [TestFixture]
    public class ImmutabilityTests
    {
        private static readonly MethodDefinition[] _setterMethods;
        private static readonly AssemblyDefinition[] _assemblies;

        static ImmutabilityTests()
        {
            _assemblies = new[]
            {
                // Something is any type from the assembly you want scanned
                AssemblyDefinition.ReadAssembly(typeof(Something).Assembly.Location),
            };

            // collect every property setter declared on a type marked [Immutable]
            _setterMethods = _assemblies
                .SelectMany(a => a.Modules)
                .SelectMany(m => m.Types)
                .Where(t => t.CustomAttributes.Any(attr => attr.AttributeType.Name.Contains("ImmutableAttribute")))
                .SelectMany(t => t.Properties)
                .Where(p => p.SetMethod != null)
                .Select(p => p.SetMethod)
                .ToArray();
        }

        [Test]
        public void ClassesWith_ImmutableAttribute_ShouldNotUse_PropertySetters()
        {
            AssertForViolations(_assemblies
                                    .SelectMany(a => a.Modules)
                                    .SelectMany(m => m.Types)
                                    .Where(t => t.IsClass)
                                    .SelectMany(t => t.Methods));
        }

        [Test]
        public void ThisFixtureActuallyWorks()
        {
            // scan this fixture itself; InvokeImmutableSetters below violates on purpose,
            // so AssertForViolations must throw or the whole approach is broken
            var assembly = AssemblyDefinition.ReadAssembly(typeof(ImmutabilityTests).Assembly.Location);
            var type = assembly.Modules.SelectMany(m => m.Types)
                .First(t => t.IsClass && t.FullName.Contains(GetType().FullName));
            Assert.Throws<AssertionException>(() => AssertForViolations(type.Methods));
        }

        private static void AssertForViolations(IEnumerable<MethodDefinition> potentialMethods)
        {
            foreach (var method in potentialMethods.Where(m => m.HasBody))
            {
                // property setters on classes are invoked via callvirt
                foreach (Instruction ins in method.Body.Instructions.Where(ins => ins.OpCode == OpCodes.Callvirt))
                {
                    var mr = ins.Operand as MemberReference;
                    if (mr != null)
                    {
                        var result = _setterMethods.FirstOrDefault(m => m.FullName == mr.FullName);
                        if (result != null)
                        {
                            throw new AssertionException(result + " was invoked by " + method + ", even though the type has the Immutable attribute.");
                        }
                    }
                }
            }
        }

        private void InvokeImmutableSetters()
        {
            // this only exists to prove that the test does indeed work;
            // SomeImmutableClass is assumed to be marked with [Immutable]
            var c = new SomeImmutableClass();
            c.SomeImmutableValue = 123;
        }
    }
}

Nothing too complicated.  The main thing to look for is the callvirt instruction, which the C# compiler always generates for instance calls on classes.  Then you match the operand to the setter’s method definition and voilà!

Tuesday, October 5, 2010

Memory Leak with WPF’s RichTextBox

Apparently, setting IsUndoEnabled to false isn’t enough.  You must also set UndoLimit to 0, otherwise it’ll still keep track of undo history.  Doh!
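
In code, that’s just this (a minimal sketch):

var rtb = new RichTextBox();
rtb.UndoLimit = 0;         // without this, undo history is still retained
rtb.IsUndoEnabled = false; // not enough on its own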

Monday, October 4, 2010

Yet Another Weak Event Implementation

I gotta say that WPF is a complete pain in the butt when it comes to memory leaks.  A simple Google/Bing search turns up more than enough results for weak events, so why am I writing yet another blog post about yet another implementation of a weak event?  Well, if someone else out there happens to have the same requirements that I do right now, maybe this will save them a little time.

In my particular case, I needed to accomplish a couple of goals:

  • Minimal performance hit, a.k.a. minimal use of reflection.
  • Generic and easy to use.
  • Thread safe.
  • Support for explicit registration and unregistration.

One of the better implementations of weak events I found was from Dustin Campbell’s blog post.  Unfortunately, it has one major problem: you cannot explicitly unregister.  In fact, the only way an attached listener stops receiving messages is if it gets garbage collected.  Our application happened to use this for a particularly important event on our base entity: the infamous INotifyPropertyChanged event.  Needless to say, all the static WPF dependency objects left a whole bunch of leaked WeakReferences around (ironic?).

But wait, doesn’t WPF already have a WeakEventManager that solves this problem?  The problem with it is a couple of things: it’s slow, it’s painful to use, and it’s annoying to implement.  Long story short, you need a WeakEventManager subclass for every unique delegate type.  Yes, sure, you can override a lot of the methods to make it perform faster, but if you need to go that far you might as well write your own that does just what you need, and no more.

If you haven’t found it already, Daniel Grunwald’s Code Project article is yet another great resource.  My implementation is a mix of this and the earlier-mentioned link.  Anywho, here it is:

    public interface IWeakEventEntry<TSender, TArgs>
    {
        bool IsAlive { get; }
        bool Matches(Delegate handler);
        bool Invoke(TSender sender, TArgs args);
    }
    public class WeakEventEntry<TTarget, TSource, TArgs> : IWeakEventEntry<TSource, TArgs> where TTarget : class
    {
        private delegate void OpenForwardingEventHandler(TTarget target, TSource sender, TArgs args);
        private readonly WeakReference m_TargetRef;
        private readonly OpenForwardingEventHandler m_OpenHandler;
        private readonly MethodInfo m_Method;
        public WeakEventEntry(Delegate handler)
        {
            m_OpenHandler = (OpenForwardingEventHandler)Delegate.CreateDelegate(typeof(OpenForwardingEventHandler), null, handler.Method);
            m_TargetRef = new WeakReference(handler.Target);
            m_Method = handler.Method;
        }
        public bool IsAlive { get { return m_TargetRef.IsAlive; } }
        public bool Matches(Delegate handler)
        {
            return handler.Method == m_Method && handler.Target == m_TargetRef.Target;
        }
        public bool Invoke(TSource sender, TArgs args)
        {
            TTarget target = m_TargetRef.Target as TTarget;
            if (target != null)
            {
                m_OpenHandler(target, sender, args);
                return true;
            }
            return false;
        }
    }
    public class WeakEvent<TSender, TArgs>
    {
        private static readonly IWeakEventEntry<TSender, TArgs>[] EMPTY_LIST = new IWeakEventEntry<TSender, TArgs>[0];
        private List<IWeakEventEntry<TSender, TArgs>> m_Events;
        private IWeakEventEntry<TSender, TArgs>[] m_InvokeList = EMPTY_LIST;
        public int Count { get { return m_InvokeList.Length; } }
        public void Add(Delegate handler)
        {
            if (handler.Target == null) // static method
            {
                Add(new StrongEventEntry<TSender, TArgs>(handler));
            }
            else
            {
                Type type = typeof(WeakEventEntry<,,>).MakeGenericType(handler.Target.GetType(), typeof(TSender), typeof(TArgs));
                Add((IWeakEventEntry<TSender, TArgs>)type.GetConstructors()[0].Invoke(new object[] { handler }));
            }
        }
        public void Add(IWeakEventEntry<TSender, TArgs> entry)
        {
            lock (this)
            {
                if (m_Events == null)
                    m_Events = new List<IWeakEventEntry<TSender, TArgs>>(8);
                m_Events.Add(entry);
                m_InvokeList = m_Events.ToArray();
            }
        }
        public void Remove(Delegate handler)
        {
            lock (this)
            {
                if (m_Events != null)
                {
                    for (int i = 0; i < m_Events.Count; i++)
                    {
                        if (m_Events[i].Matches(handler))
                        {
                            m_Events.RemoveAt(i);
                            break;
                        }
                    }
                    m_InvokeList = m_Events.ToArray();
                }
            }
        }
        public void Clear()
        {
            lock (this)
            {
                m_Events = null;
                m_InvokeList = EMPTY_LIST;
            }
        }
        public void Raise(TSender sender, TArgs args)
        {
            IWeakEventEntry<TSender, TArgs>[] events = m_InvokeList;
            bool removeDead = false;
            for (int i = events.Length - 1; i >= 0; i--)
                removeDead |= !events[i].Invoke(sender, args);
            if (removeDead)
                RemoveDeadReferences();
        }
        private void RemoveDeadReferences()
        {
            lock (this)
            {
                if (m_Events != null)
                {
                    for (int i = m_Events.Count - 1; i >= 0; i--)
                    {
                        if (!m_Events[i].IsAlive)
                            m_Events.RemoveAt(i);
                    }
                    m_InvokeList = m_Events.ToArray();
                }
            }
        }
    }

Nothing too special here.  For performance, a copy of all entries is stored in an array so that raising events doesn’t need to happen inside a lock.  And while locking on “this” is usually bad practice, in this case I didn’t have any other suitable field to lock on, and creating an object just for the sake of locking was, in my eyes, not worth it in this scenario.

I tried generating the OpenForwardingEventHandler using DynamicMethod and IL, but the gain was negligible because it doesn’t actually affect invocation speed, which is what we’re most concerned with.

Also, in case you didn’t notice, this implementation supports attaching static methods as well, which is why there is an IWeakEventEntry interface.  Here’s the StrongEventEntry implementation:

    public class StrongEventEntry<TSource, TArgs> : IWeakEventEntry<TSource, TArgs>
    {
        private delegate void ClosedForwardingEventHandler(TSource sender, TArgs args);
        private readonly ClosedForwardingEventHandler m_ClosedHandler;
        private readonly MethodInfo m_Method;
        public bool IsAlive { get { return true; } }
        public StrongEventEntry(Delegate handler)
        {
            m_Method = handler.Method;
            m_ClosedHandler = (ClosedForwardingEventHandler)Delegate.CreateDelegate(typeof(ClosedForwardingEventHandler), null, handler.Method);
        }
        public bool Matches(Delegate handler)
        {
            return m_Method == handler.Method;
        }
        public bool Invoke(TSource sender, TArgs args)
        {
            m_ClosedHandler(sender, args);
            return true;
        }
    }

Last but not least, usage:

        private WeakEvent<object, PropertyChangedEventArgs> _propertyChanged = new WeakEvent<object, PropertyChangedEventArgs>();
        public event PropertyChangedEventHandler PropertyChanged
        {
            add { _propertyChanged.Add(value); }
            remove { _propertyChanged.Remove(value); }
        }

Performance is quite good.  On my machine, for 1,000,000,000 invocations, the WeakEvent takes ~17.3 seconds vs. 4.2 seconds for a standard delegate, i.e. roughly a 4x invocation cost (about 17 nanoseconds per call).

And there you have it!  A simple, generic, and fast weak event in ~200 lines of code.

Friday, May 14, 2010

Contextual Lifestyle with Castle Windsor

EDIT: As of version 3, scoped lifestyles are now a first class citizen supported out of the box (http://docs.castleproject.org/Windsor.Whats-New-In-Windsor-3.ashx)
EDIT: A much better implementation can be found at https://github.com/castleprojectcontrib/Castle.Windsor.Lifestyles

IMO, one of the big missing features of Castle Windsor is that it doesn’t come with a built-in way of dealing with contextual lifestyles.  It handles transients and singletons fairly well, but other lifestyles depend pretty heavily on having some “state manager” handle the instances.  For example, PerWebRequest uses the HttpContext, PerThread uses thread-static variables, etc.

Contextual lifestyles are one of those things that don’t seem all that useful at first, and then when you see the possibilities it’s like getting hit with a huge truck.

A question was posted recently to the Castle Google Group (which I follow) that illustrates a relatively common example of why someone would want a contextual lifestyle.  Basically, you have a whole bunch of components you want to resolve, but only within a context.

Here’s some boilerplate code for the domain model:
public interface IRepository { ISession Session { get; } }
public interface ISession : IDisposable { bool IsDisposed { get; } }
public class Session : ISession
{
    public bool IsDisposed { get; set; }
    public void Dispose() { IsDisposed = true; }
}
public class Repository1 : IRepository
{
    public ISession Session { get; private set; }
    public Repository1(ISession session){ Session = session; }
}
public class Repository2 : IRepository
{
    public ISession Session { get; private set; }
    public Repository2(ISession session){ Session = session; }
}
public class Model1
{
    public IRepository First { get; private set; }
    public IRepository Second { get; private set; }
    public Model1(IRepository first, IRepository second) { First = first; Second = second; }
}
public class Model2
{
    public IRepository Second { get; private set; }
    public Model2(IRepository second) { Second = second; }
}
And here’s the unit test I want to pass:
        [Test]
        public void ResolutionsByContext()
        {
            IWindsorContainer root = new WindsorContainer();
            root.Register(Component.For<Model1>().LifeStyle.Transient,
                          Component.For<Model2>().LifeStyle.Transient,
                          Component.For<IRepository>().ImplementedBy<Repository1>().LifeStyle.Transient,
                          Component.For<IRepository>().ImplementedBy<Repository2>().LifeStyle.Transient,
                          Component.For<ISession>().ImplementedBy<Session>().LifeStyle.PerContextScope());

            Model1 model1;
            Model2 model2;
            ISession session1, session2;
            using (var context1 = root.BeginLifetimeScope())
            {
                model1 = context1.Resolve<Model1>();
                session1 = model1.First.Session;
                Assert.AreSame(model1.First.Session, model1.Second.Session);
                Assert.AreSame(context1.Resolve<ISession>(), context1.Resolve<ISession>());

                using (var context2 = root.BeginLifetimeScope())
                {
                    model2 = context2.Resolve<Model2>();
                    session2 = model2.Second.Session;
                    Assert.AreNotSame(model1.First.Session, model2.Second.Session);

                    var anotherModel2 = context2.Resolve<Model2>();
                    Assert.AreSame(anotherModel2.Second.Session, model2.Second.Session);

                    Assert.AreSame(session2, context2.Resolve<ISession>());
                    Assert.AreNotSame(context1.Resolve<ISession>(), context2.Resolve<ISession>());
                }
                Assert.IsTrue(session2.IsDisposed);
                Assert.IsFalse(session1.IsDisposed);
            }
            Assert.IsTrue(session1.IsDisposed);
        }

I copied the name BeginLifetimeScope from Autofac, which supports contextual scopes as a first-class citizen (and the test above passes with it).  The question now is: how do we get Castle Windsor to do the same?
Initially, I took a look at ISubDependencyResolver and caching variables.  Unfortunately, this didn’t work too well, because sub-resolvers never get hit when a component is resolved from the container directly.
The next step I tried was with lifestyle managers, but alas, the CreationContext was always transient and I was unable to store any state that distinguished between different context resolutions.
After digging deeper into the Windsor codebase and getting into the subsystems and handlers, I found a solution that seems to work.  It passes the test above, but that’s about it.  Test well if you’re gonna use this in production code!!!
Here goes!
First, you need a lifestyle manager to distinguish contextual components from the other lifestyles.
public class ContextualLifestyleManager : AbstractLifestyleManager
{
    private object instance;

    public override object Resolve(CreationContext context)
    {
        // cache the first resolution; every subsequent resolve from this container returns it
        return instance ?? (instance = base.Resolve(context));
    }

    public override void Dispose()
    {
    }
}
And finally, the magic happens with this:
public static class ContextualExtensions
{
    public static ComponentRegistration<T> PerContextScope<T>(this LifestyleGroup<T> group)
    {
        return group.Custom<ContextualLifestyleManager>();
    }

    public static IWindsorContainer BeginLifetimeScope(this IWindsorContainer parent)
    {
        var child = new WindsorContainer();
        var ss = (INamingSubSystem)parent.Kernel.GetSubSystem(SubSystemConstants.NamingKey);
        foreach (var handler in ss.GetHandlers())
        {
            // copy every contextual component down into the child container, so each
            // child gets its own ContextualLifestyleManager (and thus its own instance)
            if (handler.ComponentModel.CustomLifestyle == typeof(ContextualLifestyleManager))
            {
                child.Kernel.AddCustomComponent(handler.ComponentModel);
            }
        }
        parent.AddChildContainer(child);
        return child;
    }
}
The first method is just a helper to make registration a little more fluent when you want many things to have the contextual lifestyle.  The second method is the guts.  Long story short, we create a child container and duplicate every component model with the contextual lifestyle into it.  Thus, whenever those components are resolved, the “override” is found in the child and resolved there; anything else is found in the parent.
I was initially pretty happy with this, until I profiled the performance.  With Autofac, creating and disposing 100,000 contexts took 5ms on my computer.  Doing the same with Windsor took 3.8 seconds.  Out of curiosity, I profiled again, but this time just creating child containers without copying handlers down: 1.9 seconds.  So while this implementation works, it’s not as performant as I’d like it to be…
Maybe I’ll come up with another solution, but for now, if the performance is acceptable to you, maybe this will be useful!