Ben Biddington

Whatever it is, it's not about "coding"

Posts Tagged ‘.net’

Why can’t I hang an extension method on a type?


My brother asked me this. And while I don’t know, I did discover some interesting things along the way.

An extension method is nothing more than a compiler trick. It is simply a static method that takes an instance of the type being extended as an argument. That’s it.

The sugar part is that to you as a programmer, it appears to read more naturally in some cases.

They have no special privileges on private or protected members, and they are not analogous to Ruby module mixins (because the extended class cannot itself invoke extension methods).

[TBD: It is interesting that instance methods are supplied “this” as their first argument, see CIL]

[TBD: It is interesting that the compiler emits a callvirt instruction even in cases where call seems more appropriate just because callvirt has a null reference check. See: Why does C# always use callvirt?]

[TBD: Extensions are really a higher level abstraction because they operate only against public interface. An extension method is a client of the object it “extends”]


namespace Examples {
    public class ExampleClass { }

    public static class Extensions {
        public static void ExtensionMethod(this ExampleClass instance) {
            instance.ToString();
        }
    }

    public class ThatUsesExampleClass {
        public void RunExample() {
            new ExampleClass().ExtensionMethod();
        }
    }
}

The interesting part is RunExample (because it invokes the extension method):

public void RunExample() {
    new ExampleClass().ExtensionMethod();
}
which compiles to:

.method public hidebysig instance void
    RunExample() cil managed
{
    // Code size       13 (0xd)
    .maxstack  8
    IL_0000:  nop
    IL_0001:  newobj     instance void Examples.ExampleClass::.ctor()
    IL_0006:  call       void Examples.Extensions::ExtensionMethod(class Examples.ExampleClass)
    IL_000b:  nop
    IL_000c:  ret
} // end of method ThatUsesExampleClass::RunExample

It is clear that the compiler has done nothing more than redirect to a static method on a static class:

IL_0006:  call       void Examples.Extensions::ExtensionMethod(class Examples.ExampleClass)


The usual static method usage rules apply:

[Clean code chapter 6]
Procedural code (code using data structures) makes it easy to add new functions without changing the existing data structures. OO code, on the other hand, makes it easy to add new classes without changing existing functions.

The complement is also true:
Procedural code makes it hard to add new data structures because all the functions must change. OO code makes it hard to add new functions because all the classes must change. So, the things that are hard for OO are easy for procedures, and the things that are hard for procedures are easy for OO!

In any complex system there are going to be times when we want to add new data types rather than new functions. For these cases objects and OO are most appropriate. On the other hand, there will also be times when we’ll want to add new functions as opposed to data types. In that case procedural code and data structures will be more appropriate.

Mature programmers know that the idea that everything is an object is a myth. Sometimes
you really do want simple data structures with procedures operating on them.

[TBD: Usage — how does it fit with OO design?]

Back to the question

Still no answer.

But I can’t see any reason why the C# compiler couldn’t do the same for static constructs, though I wonder how you would express that on the extension method itself. Perhaps that’s where the ExtensionAttribute comes in. Note: it is currently illegal to apply the ExtensionAttribute directly.

But if you examine the IL for an extension method itself, you’ll see it has been applied:

.custom instance void [System.Core]System.Runtime.CompilerServices.ExtensionAttribute::.ctor() =
    ( 01 00 00 00 )
.method public hidebysig static void
    ExtensionMethod(class Examples.ExampleClass 'instance') cil managed {

    .custom instance void [System.Core]System.Runtime.CompilerServices.ExtensionAttribute::.ctor() =
        ( 01 00 00 00 ) 

    // Code size       9 (0x9)
    .maxstack  8
    IL_0000:  nop
    IL_0001:  ldarg.0
    IL_0002:  callvirt   instance string [mscorlib]System.Object::ToString()
    IL_0007:  pop
    IL_0008:  ret
} // end of method Extensions::ExtensionMethod

Written by benbiddington

4 August, 2010 at 09:15

Posted in development


Windows services and net use


We have some Windows services that need to access network shares, and even though we have run net use, those resources are still unavailable. It appears this is because our services are running as LocalSystem.

How to check the connections available to LocalSystem

1. Open command prompt as LocalSystem

Follow these instructions to get a LocalSystem cmd prompt using at.exe.

Note: You can use at.exe only when the Schedule service is running, to find out:

sc query schedule

2. List connections

net use

You will see the set of connections available.

Note this set is different to the list generated by ordinary command prompt (your account).

How to add connection for LocalSystem

I don’t know yet; the at.exe method is not very automatable anyway.


Written by benbiddington

27 April, 2010 at 13:37

Posted in development


Async operations and exceptions


We have had the case where we’re creating a class that allows clients to block while internally it reads an entire stream asynchronously. This class encapsulates the state required to perform such a task.

While attempting to write unit tests for exceptions, we found that an exception thrown during the asynchronous operation would not be propagated to the client. Debugging showed that the exception was being thrown, but no notification was reaching the parent thread.

No such thing as unhandled exceptions on managed threads

[MSDN] [since .NET Framework v2.0] There is no such thing as an unhandled exception on a thread pool [or finalizer] thread. When a task throws an exception that it does not handle, the runtime prints the exception stack trace to the console and then returns the thread to the thread pool.

Errors raised on a child thread are essentially lost when the thread exits. This means there is some work required to propagate these exceptions.

This requires a blocking wait on the part of the client, and a mechanism for storing the exception so the parent thread can read it.

As an example, we have implemented an AsyncStreamReader which contains a blocking ReadAll method. If an asynchronous read fails with an exception, that exception is exposed internally as a field, and the waiting thread is then signalled. Once the waiting thread wakes up it checks the exception field and throws it if required.

The blocking read operation waits for the async read to complete; the notification mechanism is a ManualResetEvent (a WaitHandle).

  1. T1: Invoke ReadAll.
  2. T1: Start async operation (spawns T2).
  3. T1: Wait.
    1. T2: Async operation encounters exception.
    2. T2: Store exception in _error field.
    3. T2: Signals T1.
    4. T2: Returns without triggering any subsequent reads.
    5. T2: Thread exits
  4. T1: Parent thread resumes (still inside ReadAll).
  5. T1: Checks _error field. If it is not null, throw it, otherwise return.
  6. T1: Exception is now propagated
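
The steps above can be sketched in C#. This is a minimal illustration of the pattern, not the real AsyncStreamReader; the class and field names follow the post, everything else is assumed:

```csharp
using System;
using System.IO;
using System.Threading;

// Sketch: the async callback stores any exception in _error and signals
// the waiting thread, which wakes up, checks the field and rethrows.
public class AsyncStreamReader {
    private readonly Stream _stream;
    private readonly byte[] _buffer = new byte[4096];
    private readonly MemoryStream _result = new MemoryStream();
    private readonly ManualResetEvent _done = new ManualResetEvent(false);
    private Exception _error;

    public AsyncStreamReader(Stream stream) { _stream = stream; }

    public byte[] ReadAll() {
        _stream.BeginRead(_buffer, 0, _buffer.Length, OnRead, null); // spawns T2
        _done.WaitOne();                  // T1 blocks here
        if (_error != null) throw _error; // propagate T2's exception
        return _result.ToArray();
    }

    private void OnRead(IAsyncResult ar) {
        try {
            int read = _stream.EndRead(ar);         // rethrows any async failure
            if (read == 0) { _done.Set(); return; } // end of stream
            _result.Write(_buffer, 0, read);
            _stream.BeginRead(_buffer, 0, _buffer.Length, OnRead, null);
        } catch (Exception e) {
            _error = e;  // store for the parent thread
            _done.Set(); // signal T1; no further reads are triggered
        }
    }
}
```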


Written by benbiddington

27 April, 2010 at 13:37

Raking .NET projects in TeamCity


Faced with the unpleasant prospect of assembling yet another stack of xml files for an automated build, I thought I’d try rake instead. A couple of people here at 7digital have used Albacore before, so I started there.

1. Build

Use Albacore‘s msbuild task:

require 'albacore'

desc "Clean and build"
msbuild 'clean_and_build' do |msb|
    msb.properties :configuration => :Release
    msb.targets :Clean, :Build
    msb.verbosity = "quiet"
    msb.solution  = "path/to/ProjectName.sln"
end

2. Run tests

This is also very straight forward with Albacore, but slightly more useful is applying the usual TeamCity test result formatting and reporting.

2.1 Tell your build where the NUnit test launcher is

TeamCity already has an NUnit runner, and the recommended way to reference it is with an environment variable.

Note: The runners are in the <TEAM CITY INSTALLATION DIR>/buildAgent/plugins/dotnetPlugin/bin directory.

2.2 Write the task

Once you have the path to the executable, you’re free to apply any of the available runner options.

Assuming you have added the TEAMCITY_NUNIT_LAUNCHER environment variable, the actual execution is something like:

nunit_launcher = ENV['TEAMCITY_NUNIT_LAUNCHER']
asm = 'ProjectName.Unit.Tests.dll'
sh("#{nunit_launcher} v2.0 x86 NUnit-2.5.0 #{asm}")

Beats hundreds of lines of xml I reckon.


Written by benbiddington

18 February, 2010 at 13:37

Posted in development


.NET Process — avoid deadlock with async reads


If you are working with a child process that writes large amounts of data to its redirected stdout (or stderr), it is advisable to read from it asynchronously.

Why read stdout asynchronously?

A pipe is a connection between two processes in which one process writes data to the pipe and the other reads from the pipe. System.Diagnostics.Process.StandardOutput is an example of a pipe.

A child process may block while it waits for the client end to read from its stdout (or stderr).

When redirected, a process’s stdout buffer may fill; the process will then wait for its parent to read some data before it continues. If the parent process is waiting for all the bytes to be written before it reads anything (a synchronous read), both will wait indefinitely.

The point is: redirected streams have a limited buffer; keep it clear to allow the process to complete.

So you may encounter deadlock:

[Deadlock] Pipes have a fixed size (often 4096 bytes) and if a process tries to write to a pipe which is full, the write will block until a process reads some data from the pipe.

If your child process is going to write more data than its buffer can contain, you’ll need to read it asynchronously. This stops a process blocking by ensuring there is space to emit data.
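
The asynchronous approach can be sketched like this for text output (the executable name is a placeholder; note this line-based API is only suitable for text, not binary data):

```csharp
using System.Diagnostics;
using System.Text;

class Example {
    // Sketch: drain redirected stdout asynchronously so the child never
    // blocks on a full pipe while the parent waits for it to exit.
    static string RunAndCapture(string fileName, string arguments) {
        var output = new StringBuilder();
        var process = new Process {
            StartInfo = new ProcessStartInfo {
                FileName = fileName,     // hypothetical executable
                Arguments = arguments,
                UseShellExecute = false,
                RedirectStandardOutput = true
            }
        };
        process.OutputDataReceived += (sender, e) => {
            if (e.Data != null) output.AppendLine(e.Data);
        };
        process.Start();
        process.BeginOutputReadLine(); // keeps the pipe drained
        process.WaitForExit();         // safe: the buffer can no longer fill
        return output.ToString();
    }
}
```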


Example: piping a file to lame stdin (Windows)

Use the type command:

$ type file.mp3 | lame --mp3output 64 - "path/to/output.mp3"

Type reads the source file and emits it to its stdout; we’re then piping that directly to lame. In the preceding example, lame has been instructed to read from stdin and write directly to a file.

To pipe stdout to another process, use something like:

$ type file.mp3 | lame --mp3output 64 - - | another_process

Or redirect to a file:

$ type file.mp3 | lame --mp3output 64 - - > "path/to/output.mp3"

Get a list of running processes (Windows)

Use the query process command.


Written by benbiddington

8 September, 2009 at 09:56

.NET Process — working with binary output


Lately we discovered an issue while encoding Mp3 files with Lame. Our client reported that encoded files were garbled: playable, but watery and full of pops and clicks.

We found this was due to interpreting the binary output from Lame as text — we had mistakenly employed Process.BeginOutputReadLine and its companion event OutputDataReceived.


By observing a Process using its OutputDataReceived event, clients can make asynchronous reads on a process’s StandardOutput.

Process.StandardOutput is a TextReader: it represents a reader that can read a sequential series of characters, i.e., it interprets its underlying stream as text.

When StandardOutput is being read asynchronously, the Process class monitors it, collecting characters into a string. Once it encounters a line ending, it notifies observers (handlers of its OutputDataReceived event), with the line of text it’s been collecting.

In short, the Process‘s underlying byte stream is converted to lines of text, and clients are notified one line at a time.

In doing so, some bytes are discarded: any bytes that (in the current encoding) represent line endings.

As a result of these missing bytes, our output Mp3s were playable, but sounded terrible.


Bypass StandardOutput. Use its underlying Stream instead.
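
A sketch of that fix: copy raw bytes from the underlying stream, never through the TextReader (the helper name is illustrative):

```csharp
using System.IO;

class BinaryCapture {
    // Sketch: copy raw bytes, bypassing the TextReader so no
    // line-ending bytes are interpreted or dropped.
    public static void Copy(Stream source, Stream destination) {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0) {
            destination.Write(buffer, 0, read);
        }
    }
}
```

Usage would then be something like `BinaryCapture.Copy(process.StandardOutput.BaseStream, outputFileStream);`.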

Written by benbiddington

7 September, 2009 at 08:00

IDisposable and unmanaged memory


My pair and I had to implement IDisposable the other day, and I had almost forgotten how and why it is done the way it is, so I thought I’d make some notes. An exceptionally clear summary can be found in section 9.3 of Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries, which I have used as the basis.

Objects that:

  1. contain references to unmanaged resources (which the garbage collector knows nothing about; such types should also define a finalizer), or
  2. contain references to other disposable objects

should always implement IDisposable. Disposable objects offer clients a way to free resources deterministically, rather than whenever the CLR deems it necessary.

Here is a class that contains a simple implementation. It includes a finalizer because it contains a reference to an unmanaged object that doesn’t have its own.

public class UnmanagedResourceHolder : IDisposable {
    IntPtr buffer; // An unmanaged resource
    SafeHandle managedResource;

    public UnmanagedResourceHolder() {
        this.buffer = ... // init buffer
        this.managedResource = ...
    }

    public void Dispose() {
        Dispose(true);

        // Only suppress if Dispose(true) has completed successfully
        // to ensure finalizer gets a chance
        GC.SuppressFinalize(this);
    }

    ~UnmanagedResourceHolder() {
        Dispose(false);
    }

    protected virtual void Dispose(Boolean disposing) {
        // Can't find reference for the following, assume it's self-explanatory...

        if (disposing) {
            // Run deterministic cleanup of managed resources;
            // never touch these from the finalizer path
            if (managedResource != null) {
                managedResource.Dispose();
            }
        }

        // Release the unmanaged resource on both paths,
        // e.g. Marshal.FreeHGlobal(buffer)
    }
}
Points to note:

  • Unmanaged resources released on both paths. This ensures deterministic cleanup is available as well as finalizer cleanup.
  • Managed resources are not released during finalizer. This is because managedResource is managed — it will handle its own finalization, plus the next reason.
  • During finalization, (normally valid) assumptions about the internal state of an object are no longer reliable. Finalization occurs in an unpredictable order — for example, the managedResource field may have already been finalized.
  • Provided Dispose() is called, finalization is skipped (though there is still overhead, see below).
  • It is a good idea to provide a protected virtual Dispose to allow derived types to perform their own cleanup.
  • Always invoke super type’s Dispose (if there is one) — for obvious reasons — when overriding in derived type.

A connection pool example

Why is it important to close database connections? Here’s what happens when a connection is not explicitly closed:

Audit Login		-- network protocol: TCP/IP
SQL:BatchStarting	SELECT count(1) from User
SQL:BatchCompleted	SELECT count(1) from User
Audit Logout

Here’s what happens when a connection is closed (or finalized):

Audit Login		-- network protocol: TCP/IP...
SQL:BatchStarting	SELECT count(1) from User
SQL:BatchCompleted	SELECT count(1) from User
Audit Logout
RPC:Completed		exec sp_reset_connection

Identical, except that sp_reset_connection is invoked at the end.

In both cases, the connection remains sleeping (process is waiting for a lock or user input):

login_time last_batch hostname cmd status
2009-06-15 09:17:29.590 BENB AWAITING COMMAND sleeping

This behaviour is part of ADO.NET connection pooling. Connections remain ready like this until they are considered surplus (and removed from the pool), or the application exits. You can prove this easily enough yourself, quit your test fixture and then requery your connection state.

It is therefore important to close connections from an ADO.NET pooling standpoint, in order to make the pooled connection available again.
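
The easiest way to guarantee a connection is closed is a using block; a minimal sketch (the connection string and query are placeholders):

```csharp
using System.Data.SqlClient;

class ConnectionExample {
    // Sketch: the using blocks guarantee Close/Dispose runs, returning
    // the connection to the ADO.NET pool immediately rather than
    // whenever the finalizer eventually gets around to it.
    static int CountUsers(string connectionString) {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT count(1) FROM [User]", connection)) {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
        // connection.Dispose() has run here, even if an exception was thrown
    }
}
```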

If Open is invoked on a database connection, and there are no free connections available, an InvalidOperationException results with an error message like:

Timeout expired.  The timeout period elapsed prior to obtaining
a connection from the pool. This may have occurred because all pooled
connections were in use and max pool size was reached.

Querying connection states

Examine connections in SqlServer using master.dbo.sysprocesses:

select login_time, last_batch, hostname, cmd, status
from master.dbo.sysprocesses with(nolock)
where dbid = DB_ID('PersonalWind')


Finalizers are only for unmanaged resources. A finalizer provides a mechanism for releasing unmanaged resources when clients omit explicit disposal. Finalization occurs before the garbage collector reclaims managed memory, and is the last chance for objects to release unmanaged resources.

[MSDN, Object Lifetime: How Objects Are Created and Destroyed] The garbage collector in the CLR does not (and cannot) dispose of unmanaged objects, objects that the operating system executes directly, outside the CLR environment. This is because different unmanaged objects must be disposed of in different ways. That information is not directly associated with the unmanaged object; it must be found in the documentation for the object. A class that uses unmanaged objects must dispose of them in its Finalize method.

Though useful in certain circumstances, finalizers are notoriously difficult to implement, and incur real overhead:

  • [MSDN] When allocated, finalizable objects are added to a finalization list. When these instances are no longer reachable and the GC runs, they’re moved to the “FReachable” queue, which is processed by the finalizer thread. Suppressing finalization with GC.SuppressFinalize sets a “do not run my finalizer” flag in the object’s header, such that the object will not get moved to the FReachable queue by the GC. As a result, while minimal, there is still overhead to giving an object a finalizer even if the finalizer does nothing or is suppressed.
  • When the CLR needs to call a finalizer, it postpones reclamation of managed memory until the next round. This means finalizable objects are longer-lived — they use memory for longer.


There is no way to predict when a finalizer will be called, because the CLR decides dynamically at runtime when to reclaim memory. Garbage collection is an expensive exercise, and is minimized by design, so memory can persist long after the variables that reference it have dropped out of scope. This may be unacceptable for some systems. Database connection pooling is a prime example: failure to release connections by closing them when they’re no longer required quickly cripples a system.


Written by benbiddington

15 June, 2009 at 21:01