
Tuesday 16 December 2008

Ora Stories

I've written an add-in for Visual Studio called Ora. Here's some background to it.

It's designed to replace a common use of regions - though I'd call it a common abuse of regions - so it's called Ora, which means a bunch of things in various languages, but can mean region in Latin.

Regions are a feature of C# that generates some controversy, inevitably known as the region wars.

I think regions mostly suck, at least in the way they are commonly used. They're a form of comment, so they should come with the same warning as comments:

Do not redundantly replicate in a comment any information that is clearly stated in the code itself.

With this advice in mind, and applying it consistently to regions as well, it would make little sense to put a region called Private Static Methods around all your private static methods - it's already perfectly obvious what they are. It says so completely unambiguously in the code. But of course, one day someone will decide that one of the methods should be public - at which point they either have to remember to physically move the method out of that region, or else the region is no longer correct (which is the ultimate fate of all redundant comments).
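
To make the redundancy concrete, here is a minimal illustration of the kind of region in question (a hypothetical example):

#region Private Static Methods

// the region name merely repeats what the declaration already says
private static string Normalize(string s)
{
    return s.Trim().ToLowerInvariant();
}

#endregion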

So it's almost distressing to see this kind of advice being handed out.

I've seen class templates, intended to be used by anyone who is starting a new class, which come ready stocked with over a dozen regions called Private Fields, Static Public Properties, and so on. Suddenly, instead of writing elegant self-descriptive code, you're filling in a tax form. And when you've captured a simple concept in a class with only a few members, it will contain a dozen empty regions, just in case someone one day wants to put things in it that don't belong in it.

The only justification I know of for this is that it helps anyone reading the code to navigate it. But as a solution, regions are utterly unworkable, so let's state the actual problem clearly:

A reader of the code needs to see a simple overview of a class, in which the members have been grouped in various helpful ways, so they can navigate to a member in the source code by clicking on its name in the overview.

So what they need is something that automatically builds such an overview on the fly, directly from the code under the cursor. This is the purpose of Ora, my add-in for Visual Studio.

Thursday 11 December 2008

The Maybe Monad in C#

The Maybe Monad is extremely simple. It represents a value that might be there, but might not, and also provides a neat way of working with such values.

This Haskell-related page makes it pretty clear:

The Maybe monad embodies the strategy of combining a chain of computations that may each return Nothing by ending the chain early if any step produces Nothing as output. It is useful when a computation entails a sequence of steps that depend on one another, and in which some steps may fail to return a value.

Change Nothing to null and we're talking in C#. Furthermore, it advises:

If you ever find yourself writing code like this:

case ... of
  Nothing -> Nothing
  Just x  -> case ... of
               Nothing -> Nothing
               Just y  -> ...

you should consider using the monadic properties of Maybe to improve the code.

Again, translating into C#, we're talking about code like this:

public static Role GetRole()
{
    RecordCompany company = Music.GetCompany("4ad.com");
    if (company != null)
    {
        Band band = company.GetBand("Pixies");
        if (band != null)
        {
            Member member = band.GetMember("David");
            if (member != null)
                return member.Role;
        }
    }
 
    return null;
}

As we navigate our way through a graph of objects, we repeatedly have to check whether the road is blocked by a null. When a situation like this occurs inside a nice tidy Linq expression, it makes the whole thing look really ugly.

But how can we improve on this in C#?

Firstly, we should be clear that a reference variable (such as company in the above code) is ideally suited for representing the Maybe monad. Any class type can be "stored" in a reference, otherwise that reference has the special value null. So don't be misled by the example on this page; it's correct (it's a great article) but it might make you think that we need to define a special new class to make a monad. In this case, as long as we're referring to class types (rather than unboxed value types), we don't need to.

What we're observing is that reference types already have one of the operations required by a monad: a Unit function. By simply assigning a new object to a reference variable, you are storing that object in a location that might have been null before you assigned to it, and may become null again later on. So assignment to the reference variable is the Unit function for our monad; it's built into the language.
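
As a minimal sketch (assuming a Band class with a parameterless constructor, which is hypothetical here):

Band band = null;   // the "Nothing" state
band = new Band();  // Unit: a plain assignment "wraps" the value
                    // in a location that may hold null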

That's all very well, but what are we actually trying to achieve? Going by the Haskell description, it's as if we'd like to be able to casually write a chunk of code like this:

RecordCompany company = Music.GetCompany("4ad.com");
Band band = company.GetBand("Pixies");
Member member = band.GetMember("David");
return member.Role;

If any of those operations returned null, then the return value of the whole thing would be null. But of course we don't always want C# to apply that rule, and how would the compiler figure out when to stop treating a chain of operations in this special way?

What we're missing from our monad is a Bind function:

public static TOut IfNotNull<TIn, TOut>(
    this TIn v, Func<TIn, TOut> f) where TIn : class
                                   where TOut : class
{
    if (v == null)
        return null;
 
    return f(v);
}

The type parameters in this generic function constrain us to dealing with class types, so what we have here is an extension method that applies to any reference variable. (It's a weird distinction, but it makes more sense to think of an extension method as applying to the reference variable than to the object stored in the variable, because it's perfectly okay to call an extension method on a null reference.)
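
A quick demonstration of that distinction:

string s = null;
string t = s.IfNotNull(x => x.ToUpper()); // fine: t is simply null
// string u = s.ToUpper();                // instance call: NullReferenceException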

This extension method takes a function that "transitions" from one object to another. They don't have to be the same type. To see the point of all this, let's see how our ugly code looks if we rewrite it:

return Music.GetCompany("4ad.com")
            .IfNotNull(company => company.GetBand("Pixies"))
            .IfNotNull(band => band.GetMember("David"))
            .IfNotNull(member => member.Role);

Now it's all just one expression! Cool!

(If you're wondering why this is important, there are lots of reasons, but here's one to start you off. The original version of the code was a sequence of statements, so it couldn't be represented by an expression tree, whereas the new version can be.)

So why is IfNotNull a good Bind function? It's not immediately obvious that it is, because a Bind function talks to its caller in terms of values wrapped in the monad, but deals in "naked" values with the function passed to it. But IfNotNull uses ordinary reference variables in both situations.

This is because there is a feature missing from C#. It ought to be possible to somehow tag a reference variable to say that it is definitely not null.
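
To spell out the mismatch, here is the general shape of a Bind function next to IfNotNull, with the "wrapped" types written as comments, since C# has no way to express them:

// General Bind for a monad M:
//     M<B> Bind<A, B>(M<A> m, Func<A, M<B>> f)
//
// IfNotNull, where "A" and "M<A>" are forced to share one static type:
//     TOut IfNotNull<TIn, TOut>(TIn v, Func<TIn, TOut> f)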

Response to comment from Marcel:

The : (colon) operator sounds good, but I'd argue that it's a little inflexible. It's a member access operator, an alternative to the . (dot), so it works for the example I've given here. But what if the way I'd like to transition to the next object in the chain is by passing the current object as a parameter to some function? For example, looking up the next object in a Dictionary with TryGetValue, using the current object as a key. With IfNotNull, the non-null object is given a named reference, so I can do whatever I want with it.
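
For instance (a hypothetical sketch, assuming a Dictionary<Band, Member> called leadSingers is in scope):

return Music.GetCompany("4ad.com")
            .IfNotNull(company => company.GetBand("Pixies"))
            .IfNotNull(band =>
            {
                Member member;
                return leadSingers.TryGetValue(band, out member) ? member : null;
            })
            .IfNotNull(member => member.Role);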

As for verbosity, what I'd really like here is the ability to write what might be called extension operators, something Microsoft has considered but unfortunately isn't planning to implement anytime soon. These are like a cross between extension methods and operator overloads. This SO question pretty much covers the idea.

If that was possible, we could change IfNotNull to a binary operator such as | (pipe), allowing us to pipe values between lambdas like this:

return Music.GetCompany("4ad.com")
        | company => company.GetBand("Pixies")
        | band => band.GetMember("David")
        | member => member.Role;

I think that would be just about perfect.

Wednesday 10 December 2008

Optimizing Aggregate for String Concatenation

Linq lets you think in a very generally applicable way and solve a very wide variety of problems with a few key concepts. That’s a great thing. But it’s irritating when the elegant solution doesn’t perform as well as an ugly special case.

Using a combination of well-known Linq features I’m going to demonstrate that we already have the power to get the best of both worlds: speed and elegance.

One example that has always irked me (and which is simple enough to demonstrate the idea with) is this:

Enumerable.Range(0, size) 
          .Select(n => n.ToString())
          .Aggregate((a, b) => a + ", " + b);

It’s got all the attributes of a beautiful Linq-style solution – a single expression that produces the thing we want, using very general operations that are parameterized by self-contained functions.

But if size gets large, it’s dreadfully slow. The reason is that Aggregate takes the first two items, uses the function to combine them, then you can kind of imagine it putting the result back on the start of the list to replace the original two items. So each time it does that, the list shrinks, until eventually there’s only one item left, which it returns. All very logical and beautiful – the items on the list are from a set and we have defined a closed binary operation on them, so I guess it’s a magma.
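
Concretely, for size = 4 the fold builds up like this:

// (("0" + ", " + "1") + ", " + "2") + ", " + "3"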

But that first item is an ever-growing string, and each time around it has to be copied into a new string. This is a recipe for disastrous performance. To speed it up, we need a different function, which I’ll call Oggregate:

public static string Oggregate(this IEnumerable<string> source, string delimiter)
{
    StringBuilder builder = new StringBuilder();
 
    builder.Append(source.First());
    foreach (string s in source.Skip(1))
    {
        builder.Append(delimiter);
        builder.Append(s);
    }
 
    return builder.ToString();
}
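
The call site then becomes (for comparison with the original expression):

Enumerable.Range(0, size)
          .Select(n => n.ToString())
          .Oggregate(", ");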

But this is irksome, because you have to know when to use this optimized version, and then you have to know how to modify your original code. You can’t just flick the “go faster” switch.

A good programming language defines general concepts that compose (go together) well. A good compiler then takes programs written in that language and does terrifying things to them, without telling you, so that they still do what you asked, but they do it fast. The compiler does this by spotting patterns and saying “Ah, I see what you’re trying to do. I know a faster way to do that.” You don’t have to explicitly tell it to do it the fast way. It just notices the pattern and makes the smart decision. Such capabilities are called compiler optimizations. They’re “turnkey” solutions, in the sense that all you have to do is turn the key and they start up. You don’t typically have to think too hard about it. Somebody already did that for you.

So my ideal solution for the above problem would be for the compiler (or a library) to notice the pattern I’m using and use the StringBuilder approach instead. If I’m not using that pattern, it should fall back to doing what it usually does.

I can’t change the compiler, so can I write a library? The problems facing us are threefold:

  • We want to replace the standard library’s version of an algorithm that can operate on any sequence.
  • We only want to do that for sequences of strings.
  • We only want to do it if the function parameter has a very narrowly-defined shape.

Skipping the first one for now, the solution to the second problem is to write a version of Aggregate for the special case of string sequences. The solution to the third is clearly going to involve Linq expressions, as they give us a way to examine the structure of simple expression-like functions, and also to apply such a function to some parameters if necessary:

public static string Aggregate(this IEnumerable<string> source, 
                 Expression<Func<string, string, string>> func)
{
    // ... look at the func to decide what to do
}

Having defined such a function in some namespace, what happens if we add a using directive at the top of the source file where we want to use it? That’s no good, because the compiler has two choices for which function to use. In C++, the thing we’ve written is a kind of “template specialization”, and the rules for template resolution in C++ usually mean that the most specialized choice is the one that the compiler picks. But this isn’t the case in C#. The compiler just gives up and says that we’re being ambiguous.
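
Here is a hypothetical sketch of that failure mode, with both using directives at file level:

using System.Collections.Generic;
using System.Linq;
using Optimizations.AggregateStringBuilder; // same scope as System.Linq

class Demo
{
    static string Join(IEnumerable<string> items)
    {
        // error CS0121: the call is ambiguous between our overload
        // and System.Linq.Enumerable.Aggregate
        return items.Aggregate((a, b) => a + ", " + b);
    }
}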

But if we put our using directive inside a namespace block, then the C# compiler is happy to assume that it should select our version of Aggregate:

using System;
using System.Linq;
using System.Diagnostics;
 
namespace ConsoleApplication5
{
    using Optimizations.AggregateStringBuilder;
 
    class Program
    {
        static void Main(string[] args)
        {
            // within this namespace block, Aggregate on a sequence of
            // strings resolves to our overload rather than Enumerable's
        }
    }
}

The location of the using directive matters because the compiler searches for extension methods starting from the innermost enclosing namespace block and working outwards, so methods imported inside a namespace block take priority over those imported at file level. This means we can switch on our optimization at the level of a namespace block. I’m satisfied with that – it constitutes an on/off switch, for my purposes.

Here’s what the Aggregate function looks like:

public static string Aggregate(this IEnumerable<string> source, 
                        Expression<Func<string, string, string>> func)
{
    BinaryExpression root = func.Body as BinaryExpression;
    if (root != null)
    {
        if (root.NodeType == ExpressionType.Add)
        {
            BinaryExpression left = root.Left as BinaryExpression;
            if (left != null)
            {
                if (left.NodeType == ExpressionType.Add)
                {
                    ParameterExpression leftLeft = 
                        left.Left as ParameterExpression;
 
                    if (leftLeft != null)
                    {
                        ConstantExpression leftRight = 
                            left.Right as ConstantExpression;
 
                        if (leftRight != null)
                        {
                            ParameterExpression right = 
                                root.Right as ParameterExpression;
 
                            if (right != null)
                            {
                                if ((leftLeft.Name == func.Parameters[0].Name) 
                                  && (right.Name == func.Parameters[1].Name))
                                    return source.Oggregate(
                                        (string)leftRight.Value);
                            }
                        }
                    }
                }
            }
        }
    }
 
    return source.Aggregate(func.Compile());
}

In other words, it looks pretty ugly. But that’s optimizations for you. It really just looks at the lambda to see if it fits a very rigid pattern: (a + c) + b, where a and b are the parameters to the lambda and c is a constant. If so, it calls Oggregate. Otherwise, it compiles the lambda and falls back to running it in the usual way. A compiled delegate doesn’t match our Expression<T> parameter, so the normal version of Aggregate is called.
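
To be clear about what the pattern-match accepts, here are two hypothetical call sites:

// matched – rewritten to a single Oggregate(", ") call:
strings.Aggregate((a, b) => a + ", " + b);

// not matched (parameters reversed) – compiled and run normally:
strings.Aggregate((a, b) => b + ", " + a);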

This means that, where it’s an appropriate optimization, all it does is examine a few enum properties, perform a few casts, compare a couple of strings and then call Oggregate. So all the extra work (which is very minor) happens outside the loop.

The remaining question is, how badly does it hurt performance when it isn’t an appropriate optimization? As usual, it depends greatly on how you use it. If you’re using Aggregate to concatenate a small number of strings, the compilation step is wasteful to be sure. But again, it happens outside the loop. And in any case, if you find that your program runs slower, it’s just as easy to switch off the optimization as it is to switch it on.

So in conclusion, although this example is pretty simple and so not exactly earth-shattering in itself, it serves to demonstrate how C# gives us the tools to introduce our own turnkey optimizations for special cases of the very general algorithms available in Linq, which was a pleasant surprise for me.

I posted this as a quiz question on Stack Overflow. The reaction to the idea of someone posting a question as a quiz was quite mixed – the votes for the question went negative a few times before averaging out at zero. I think this is a good use of the site, because the end result is a question with an answer and some associated discussion to give more detailed background, alternative approaches, etc. I think it annoyed people who didn’t know the answer, because (as I would freely admit) part of the reward of using Stack Overflow is the ego-boost of handing out knowledge to those in need. If you meet someone who already knows the answer to their question, and then – even worse – you don’t know the answer, then it kind of spoils the fun for you (one guy in particular seemed quite upset). But the fact remains that such an exercise produces the same kind of valuable addition to the site.

Tuesday 9 December 2008

Other Examples of Iterators and Async IO

The idea of returning functions via yield return to simplify asynchronous IO programming has a precedent in Microsoft's Concurrency and Coordination Runtime:

http://msdn.microsoft.com/en-us/library/bb648753.aspx

Although they are yielding interfaces, really they are yielding functions:

http://msdn.microsoft.com/en-us/library/microsoft.ccr.core.itask_members.aspx

The ITask interface has only one really important method: Execute, which means that really it's a function (a delegate in C# terms). I think the functional style makes the whole thing cleaner, but the point is that the idea was already in use in Microsoft's CCR, perhaps as long ago as 2005 (although that's difficult to verify).

Saturday 6 December 2008

More on Jeffrey Richter’s AsyncEnumerator and Functional Programming

If you do asynchronous programming (or have been put off it in the past by the complexity) and you haven’t already looked at this, then you really should:

http://msdn.microsoft.com/en-us/magazine/cc546608.aspx

I blogged yesterday about an idea for doing the same kind of thing but using lambdas to encapsulate the last remaining bit of complexity. Today I’ve applied the same idea, but with the goal of providing a thin layer that works along with Jeffrey Richter’s AsyncEnumerator class, essentially providing an optional new way of working with it. As I am not especially familiar with the asynchronous APIs, it would be especially stupid for me to try and reinvent the wheel instead of building on the work Jeffrey has already done.

I should point out that everything in this article is provided “as is”, but even more so than people usually mean by that phrase, because I haven’t even tried running any of this code (although it does compile, at least). I’ve merely attempted to analytically prove (by substitution and expansion) that it produces the equivalent of something that works.

These are just ideas. I don’t currently have a need for this facility in my own work, so I can’t afford to invest in testing it. I just got carried away with the implications of an idea today, and here are the results of that.

I’ll start by explaining the nuts and bolts. Using pure functional programming techniques, I’ve defined a handful of static functions, mostly extension methods on pre-existing types, making them very easy to discover and use. To begin with, I’ll leave out a detail to do with exception handling, and then add it in as an afterthought, as it makes things a little messier.

Jeffrey’s AsyncEnumerator class is effectively a consumer of a sequence of integers. The sequence is generated by a function that the user writes, taking advantage of the yield return keyword. They yield a count of 1 for each asynchronous operation that they start:

stream.BeginWrite(outputData, 0, outputData.Length, ae.End(), null);
yield return 1;
stream.EndWrite(ae.DequeueAsyncResult());

In the above snippet (taken from Jeffrey’s TcpClient example), a single write is made. Characteristically, there are three steps:

  • Call a BeginXXX API to launch the asynchronous operation. Such operations always require a callback function, which is provided by calling AsyncEnumerator.End.
  • Yield the value 1 (because only one operation has been launched in this case).
  • Call an EndXXX API to finish the operation. AsyncEnumerator.DequeueAsyncResult is used to obtain the IAsyncResult associated with the completed operation.

The integers so yielded are interpreted as requests to wait for that number of asynchronous operations to complete. So to do three operations in parallel, you would begin three asynchronous calls, and then yield return 3. When your function got back control, the three calls would all have completed. However, they may of course complete in any order, so when the results are dequeued some care may need to be taken in matching the results up with their corresponding EndXXX functions, because they may be of different types (e.g. a stream operation and a WebRequest).
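
For instance, here is a sketch of three parallel writes (assuming three streams and buffers are in scope; passing each stream as the asynchronous state is one way to match results back up):

streamA.BeginWrite(bufA, 0, bufA.Length, ae.End(), streamA);
streamB.BeginWrite(bufB, 0, bufB.Length, ae.End(), streamB);
streamC.BeginWrite(bufC, 0, bufC.Length, ae.End(), streamC);
yield return 3; // resumes only when all three writes have completed

for (int i = 0; i < 3; i++)
{
    IAsyncResult result = ae.DequeueAsyncResult();
    ((Stream)result.AsyncState).EndWrite(result); // results arrive in completion order
}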

Using Linq’s Select function, we can easily convert between sequence types. This means that we can provide a way for the user to write a function that yields a sequence of some other kind, such that it can still be plugged into AsyncEnumerator.

The elements of the new kind of sequence are called activities. Although most users need not be aware of this, they are in fact functions, defined by this delegate:

public delegate int Activity(AsyncEnumerator asyncEnum);

An activity launches one or more asynchronous operations, and returns how many it has launched, making it perfect to fit AsyncEnumerator’s requirements. So we can easily convert a sequence of these activities into the kind of sequence that AsyncEnumerator likes to get:

public static void Execute(this AsyncEnumerator asyncEnum, 
                                   IEnumerable<Activity> activities)
{
    asyncEnum.Execute(activities.Select(activity => activity(asyncEnum))
                                .GetEnumerator());
}

In other words, we simply produce a sequence of integers by calling each activity in the input sequence. Then as a final step, get the IEnumerator<int> of the sequence and pass it to the intrinsic AsyncEnumerator.Execute function. By providing this as an extension method, it’s as convenient to use as its intrinsic cousin.

How can we compose a set of activities to make a single activity that causes all the activities in the set to execute in parallel? By defining a higher-order function that does it for us:

public static Activity Parallel(params Activity[] list)
{
    return asyncEnum => list.Sum(activity => activity(asyncEnum));
}

Again, the implementation is trivial thanks to Linq. Our composite activity has to return the sum of the return values of the activities on the input list, so it’s the perfect fit for the Sum function, which adds together the values produced by a function executed for each item in a sequence.

Finally, we need a higher-order function to allow us to easily define an activity. This one’s a little more mind-bending, but still pretty short. Crucially, it is much easier to use than it is to fully understand, and even then most users do not need to use it directly.

public static Activity Pair(Action<AsyncCallback> begin, AsyncCallback end)
{
    return asyncEnum => // action accepts an AsyncEnumerator 
    {
        begin(result => // callback accepts a result
           {
               end(result); // pass on to the user's handler
               asyncEnum.End()(result); // and to AsyncEnumerator
               asyncEnum.DequeueAsyncResult(); // we don't need this
           });
 
        return 1;
    };
}

It pairs together two functions. The first one, begin, is responsible for launching the asynchronous call. To do this, it needs a callback function that it can pass to whatever asynchronous API it calls. The second one, end, will run when the call completes, and (as you will see if you look up the definition of the standard AsyncCallback delegate) accepts an IAsyncResult object.

So the Activity function returned by Pair will call the begin function, and pass it a function to serve as the callback for the asynchronous API. That function is defined by the inner lambda. It calls on to three things:

  • Firstly, the end function (whatever that may be) so that the result of the call can be interpreted.
  • Secondly, the callback supplied by AsyncEnumerator. This is crucial, as it ensures that the thread pool is asked to execute another step through of the sequence of integers.
  • Finally, the AsyncEnumerator.DequeueAsyncResult function, although the result is discarded. This is because we have already passed the result to the end function. But we should still call this function once for each call that is made, so that the queue doesn’t needlessly grow in length.

That’s all we need as a basis. But for maximum convenience for most users, we can add extension methods for the most commonly used asynchronous call types. For example, writing to a stream:

public static Activity RequestWrite(this Stream stream, byte[] buffer, 
                                    int offset, int size)
{
    return Pair(
        callback => stream.BeginWrite(buffer, offset, size, callback, null),
        result => stream.EndWrite(result));
}

This is a higher-order function – it makes an Activity function, using the handy Pair function to tie together the two halves of the operation. But it can be called directly on a Stream object. If so called, it does not actually do anything; the return value must be passed on via yield return.

So how does code look using this whole technique? Let’s look back at that real example again. The function starts with this signature:

private static IEnumerator<Int32> Process(AsyncEnumerator ae, 
                                          String server, 
                                          String message) 
{

Then in the body of the function there are triplets of lines like this:

stream.BeginWrite(outputData, 0, outputData.Length, ae.End(), null);
yield return 1;
stream.EndWrite(ae.DequeueAsyncResult());

By contrast, the alternative way of working enabled by these new functions begins by declaring the function like so:

private static IEnumerable<Activity> Process(String server, 
                                             String message)
{

The first thing to note (aside from the obvious change to the return type) is that there is no need to pass in an AsyncEnumerator object, even though it interoperates with one automatically. The code to write to the stream looks like this:

yield return stream.RequestWrite(outputData, 0, outputData.Length);

The triplet of lines has been boiled down to a single line, which appears just like a non-asynchronous call but with yield return in front of it.

Because the helper extensions are higher-order functions, the user doesn’t even need to be aware that they’re using functional programming, or that an Activity is a function. They just make a method call that appears to do what they want, although they prefix it with yield return.

What about parallel activities running at the same time? Loosely inspired by another of Jeffrey’s examples, this function obtains results from two websites simultaneously:

private static IEnumerable<Activity> ProcessAllAndEachOps()
{
   yield return Async.Parallel(
 
            WebRequest.Create("http://www.google.com").RequestResponse(true, response =>
                Console.WriteLine("Bytes from Google: " + response.ContentLength)),
 
            WebRequest.Create("http://www.microsoft.com").RequestResponse(true, response =>
                Console.WriteLine("Bytes from Microsoft" + response.ContentLength))
       );
 
   Console.WriteLine("All the operations completed.");
}

The Parallel function we saw defined above takes care of composing a set of activities into a new activity that will launch all of them to run in parallel.

The end result appears so simple that it is worth going through an exercise of “expansion” to see how this alternative way of working must produce the same results as the original way. Hold onto your hats…

Here’s the nice and easy way it looks to the typical user:

yield return stream.RequestWrite(outputData, 0, outputData.Length);

Let’s substitute in the definition of RequestWrite:

yield return Async.Pair(
    callback => stream.BeginWrite(outputData, 0, outputData.Length, callback, null),
    result => stream.EndWrite(result));

Then the definition of Pair, in two stages to make it easy to follow. Firstly, keeping the definitions of begin and end separated out:

yield return asyncEnum => // action accepts an AsyncEnumerator 
{
    Action<AsyncCallback> begin =
              callback => stream.BeginWrite(outputData, 0, 
                            outputData.Length, callback, null);
 
    AsyncCallback end = result => stream.EndWrite(result);
    
    begin(result => // callback accepts a result
    {
        end(result); // pass on to the user's handler
        asyncEnum.End()(result); // and to AsyncEnumerator
        asyncEnum.DequeueAsyncResult(); // we don't need this
    });
    
    return 1;
};

Then finally substituting those definitions into their places:

yield return asyncEnum => // action accepts an AsyncEnumerator 
{
    stream.BeginWrite(outputData, 0, outputData.Length, 
        result => // callback accepts a result
        {
          stream.EndWrite(result); // pass on to the user's handler
          asyncEnum.End()(result); // and to AsyncEnumerator
          asyncEnum.DequeueAsyncResult(); // we don't need this
        }, null);
 
    return 1;
};

Now we can see what’s really happening. The statement yields a function that calls BeginWrite on the stream and then returns 1. BeginWrite requires a callback, and we build one for it. In the normal way of using AsyncEnumerator as seen in Jeffrey’s example, the callback is provided by calling AsyncEnumerator.End, but here we effectively wrap that in a lambda so we can do other things as well. First we call EndWrite on the stream (which may throw an exception, but I’ll deal with that in a moment), then we call on to the callback returned by AsyncEnumerator.End, and finally we do the housekeeping of discarding one IAsyncResult instance from AsyncEnumerator’s internal inbox.

Then this whole function will be executed by the Execute extension method. Some more substitution will make this clearer. Here’s the expression we use to generate the kind of IEnumerable<int> that the intrinsic Execute method requires:

activities.Select(activity => activity(asyncEnum))

So each activity is called in turn to produce integers, and the result is the same as if the return value of the activity was being yielded directly. In other words, continuing our expansion (now supposing we are in a function that is yielding integers having been passed the parameter asyncEnum):

stream.BeginWrite(outputData, 0, outputData.Length,
    result => // callback accepts a result
    {
        stream.EndWrite(result); // pass on to the user's handler
        asyncEnum.End()(result); // and to AsyncEnumerator
        asyncEnum.DequeueAsyncResult(); // we don't need this
    }, null);
 
yield return 1;

It is now clear what the only actual differences are. Compare with Jeffrey’s original code (I’ll repeat it here one more time to save you scrolling up and down the page).

stream.BeginWrite(outputData, 0, outputData.Length, ae.End(), null);
yield return 1;
stream.EndWrite(ae.DequeueAsyncResult());

There, the End() callback causes the function to resume executing after the operation finishes, and the next line of code is the call to stream.EndWrite. Working the new way, stream.EndWrite is called from within the callback itself. It also has the correct IAsyncResult ready to use.

Also, immediately after calling the function returned by End(), the result is removed from the inbox by calling DequeueAsyncResult. This places an additional thread-safety constraint on AsyncEnumerator, because previously the inbox was only ever accessed by one thread at a time. But by looking at the code of AsyncEnumerator in Reflector, I can see that it takes out a lock before accessing the queue, so this should be fine.

So what are the limitations of this new approach? The main one is the lack of flexibility in exception handling. With the original raw approach, you are free to place a try/catch around either the BeginXXX or EndXXX calls (although you can’t place one such handler around both calls, due to a limitation of the yield return implementation.)

In the new approach, the best we can do is to allow any exceptions to propagate out of the IEnumerator<int> sequence generator. In other words, if the BeginXXX or EndXXX calls throw an exception, then it is just as if they were to throw uncaught exceptions in the original approach.

To achieve this, we need to make a further change. The reason is that the EndXXX call is made in the context of a thread that is running due to the fact that an asynchronous call has completed. We do not control that thread. Rather than bothering it with an exception, we need to transfer the exception into the context of whichever thread is executing our function.

To achieve this, first we have to catch the exception. The user’s callback is executed in the inner lambda in Pair, so that’s where we need a try/catch block:

public static Activity Pair(Action<AsyncCallback> begin, AsyncCallback end)
{
    return asyncEnum => // action accepts an AsyncEnumerator 
    {
        begin(result => // callback accepts a result
        {
            try
            {
                end(result); // pass on to the user's handler
            }
            catch (Exception e)
            {
                asyncEnum.Cancel(e);
            }
            asyncEnum.End()(result); // and to AsyncEnumerator
            asyncEnum.DequeueAsyncResult(); // we don't need this
        });
 
        return 1;
    };
}

Note that we make use of a handy feature of AsyncEnumerator where we stash the exception inside it by calling Cancel. We then continue as normal, calling on to AsyncEnumerator’s callback, which means that the iteration will resume. What does this mean?

It means that when iteration resumes, the first thing we need to do is check for cancellation, retrieve the exception and throw it. Unfortunately, this means we need to mess up our nice simple Execute function. If you refer to where I defined it above, you’ll see that I originally used Select to do all the looping and yielding. But this doesn’t allow us to perform the exception check immediately after resuming from the yield return. We need to write out the equivalent of Select in “long hand” so we can insert the extra code:

public static void Execute(this AsyncEnumerator asyncEnum, 
                           IEnumerable<Activity> activities)
{
    asyncEnum.Execute(AdaptEnumerator(asyncEnum, activities));
}
 
private static IEnumerator<int> AdaptEnumerator(
       AsyncEnumerator asyncEnum, IEnumerable<Activity> activities)
{
    foreach (Activity activity in activities)
    {
        yield return activity(asyncEnum);
 
        object x;
        if (asyncEnum.IsCanceled(out x) && (x is Exception))
            throw (Exception)x;
    }
}

The private AdaptEnumerator function serves as the equivalent of Select, except that it checks for an exception to throw immediately following resumption after the yield return. This means that after the asynchronous action completes with an error, no further code executes in the underlying function that yields the Activities.

Like I say, I have only written this and done the above analysis on it, not really tested much. If you want to give it a try, be my guest:

http://www.earwicker.com/downloads/async.zip

Please let me know if you find any ridiculous bugs or design flaws, or even if it works. It’s just a source file containing all the functions (just 150 or so lines of code), so it could be added to any project already using AsyncEnumerator, or built into a separate class library.