How Asynchronous Operations Can Reduce Performance

It’s well known that asynchronous programming can improve application performance. Developers writing Windows applications try to keep the UI thread responsive while work is performed in the background, and website and web API developers attempt to maximize thread availability for processing new requests. Like any technique, however, asynchronous programming can produce results contrary to expectations. Without some forethought and care in how work is offloaded onto threads, developers can unwittingly put application performance at risk. In this post I want to talk about the different types of asynchronous work, how the operating system schedules that work, and how to determine when your application should create new threads to service asynchronous operations.

Types of Asynchronous Operations

Most of the functionality we code into our applications falls into one of two main categories. Compute-bound operations are those in which time-consuming calculations are performed. Examples include sorting a large array or calculating the first 1,000 digits of pi. In these cases the CPU is responsible for completing the work, but with nearly every modern machine supporting multiple cores, the application benefits from continuing to process on the main thread while these calculations take place on a second core. For these scenarios, it’s worthwhile to offload the work to a thread pool thread to improve the application’s performance. We can do this by using the Task class to schedule the work on the thread pool, as in the example below:

private static Task<double> DoHeavyCalculationsAsync(int length)
{
    return Task.Run(() =>
    {
        // do heavy calculations on another thread, e.g. a long-running sum
        return Enumerable.Range(1, length).Sum(i => Math.Sqrt(i));
    });
}
Most of the other operations we think of as time consuming, such as reading and writing files, database operations and calls made over the network, are classified as I/O-bound operations. At first blush, I/O operations seem like perfect candidates for offloading to a new thread pool thread: the time they take to complete is unpredictable, so they are likely to cost the application responsiveness. By moving this logic to another thread, the main thread can continue its work and all is right with the world, or so it seems. In reality, because of the way the thread pool schedules work, performance can actually be worsened rather than improved. Let’s look at a simple example:

private static Task<byte[]> ReadFileAsync(string path)
{
    return Task.Run(() =>
    {
        byte[] bytes;
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, 4096, true))
        {
            bytes = new byte[stream.Length];
            stream.Read(bytes, 0, (int)stream.Length);
        }
        return bytes;
    });
}

Thread Management under the Hood

While it’s true that this code will not block the calling thread, it’s not doing the application any favors in terms of overall performance. Every .NET application is allocated two sets of threads onto which it can queue work; as of .NET 4, the number of threads in each set varies based on a number of factors, including the resources available on the machine running the application. The first set contains worker threads, which execute work in parallel with the main thread of the application. This is the only group of threads, outside of the main application thread, onto which the Task class in .NET 4 can queue work. The second set contains I/O completion threads. These are threads requested by the CLR when the OS has completed asynchronous I/O operations such as reading file contents, making driver calls or making calls over a network. When an I/O operation completes, the completion port bound to that operation notifies the CLR, which obtains a thread from this set on which to execute the callback of the asynchronous operation.

The problem with offloading I/O operations to another thread using Task.Run is that it swaps one thread for another when the OS could perform the asynchronous I/O without blocking a thread at all. In the example above, if a loop were reading the contents of every file within a fairly large folder tree, you could easily have 100 or more threads waiting for their respective files to finish being read. This taxes the application because the thread pool has to create and destroy a large number of worker threads to handle the volume of files being processed, and it can eventually cause resource contention on the machine, since each thread reserves a minimum of one megabyte of RAM. About the only thing going to plan is that the UI stays responsive, at least until the system starts to slide into resource starvation.
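You can observe the two sets from managed code. The sketch below (class and variable names are mine, not from the post) uses ThreadPool.GetMaxThreads and GetAvailableThreads, which report the worker and I/O completion sets separately:

```csharp
using System;
using System.Threading;

class ThreadPoolSets
{
    static void Main()
    {
        // The runtime tracks the two sets independently: worker threads
        // (used by Task.Run) and I/O completion threads (used to run
        // callbacks when overlapped I/O finishes).
        ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);
        ThreadPool.GetAvailableThreads(out int freeWorker, out int freeIo);

        Console.WriteLine($"Worker threads:         {freeWorker}/{maxWorker} available");
        Console.WriteLine($"I/O completion threads: {freeIo}/{maxIo} available");
    }
}
```

The exact numbers vary by machine, since the pool sizes itself from the available cores and memory.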
So how can we make use of the I/O completion threads in our applications if the .NET thread pool is exclusively tied to queuing operations on worker threads?

Looking Inward

While it is possible for .NET developers to queue work onto these completion ports using some rather obscure methods on the ThreadPool class and native overlapped structures, it’s quite unlikely you’ll ever really need to. Luckily, the framework is already chock full of methods that implement this I/O pattern for you: any method that implements the Asynchronous Programming Model (APM), characterized by Begin and End method pairs, or the more recent Task-based Asynchronous Pattern (TAP). These methods use the native calls that schedule work to run on the I/O completion threads. The result is that no thread is held up waiting for the I/O to complete, and overall application performance is better for it. Here is our example from above, rewritten to use built-in TAP methods that use the I/O completion threads:

private static async Task<byte[]> ReadFileAsync(string path)
{
    byte[] fileBytes;

    using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, 4096, true))
    {
        fileBytes = new byte[stream.Length];
        var bytesRead = await stream.ReadAsync(fileBytes, 0, (int)stream.Length).ConfigureAwait(false);
    }
    return fileBytes;
}

In the above example, we aren’t creating a new thread to offload the work onto, but simply calling the ReadAsync method of the Stream class. When the await is hit, control returns to the caller and the read is handed to the OS, freeing the current thread to perform other work. When the file read is complete, a thread from the pool is assigned to handle the remaining work after the await.
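To see why this matters at scale, revisit the folder-tree scenario from earlier. The sketch below (names are mine; the helper repeats the article’s async read so the snippet stands on its own) starts every read up front, and because each await hands the I/O to the OS, hundreds of in-flight reads don’t need hundreds of threads:

```csharp
using System.IO;
using System.Linq;
using System.Threading.Tasks;

static class BulkReader
{
    // Kick off every read, then await them all; threads are only used
    // briefly to run each continuation after its read completes.
    public static async Task<long> ReadAllFilesAsync(string folder)
    {
        var tasks = Directory.EnumerateFiles(folder, "*", SearchOption.AllDirectories)
                             .Select(ReadFileAsync)
                             .ToList();

        byte[][] contents = await Task.WhenAll(tasks);
        return contents.Sum(bytes => bytes.LongLength);
    }

    // Same pattern as the ReadFileAsync above, repeated so this compiles alone.
    private static async Task<byte[]> ReadFileAsync(string path)
    {
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, 4096, true))
        {
            var bytes = new byte[stream.Length];
            await stream.ReadAsync(bytes, 0, bytes.Length).ConfigureAwait(false);
            return bytes;
        }
    }
}
```

Contrast this with the Task.Run version, where the same loop would park one worker thread per file for the duration of each read.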


By differentiating between CPU-bound and I/O-bound operations in your application, and with a little digging into the classes surrounding your I/O operations, you can determine when it is truly beneficial to offload work to the thread pool and when to let the framework’s asynchronous methods do the heavy lifting instead.
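Combining both rules often looks like the hypothetical sketch below: await a TAP method for the I/O, then reserve Task.Run for the genuinely compute-bound step (CountLines here is a stand-in for real parsing work, and all names are mine):

```csharp
using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

static class Pipeline
{
    public static async Task<int> LoadAndCountAsync(string path)
    {
        // I/O-bound: no thread is blocked while the OS performs the read.
        byte[] bytes;
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, 4096, true))
        {
            bytes = new byte[stream.Length];
            await stream.ReadAsync(bytes, 0, bytes.Length).ConfigureAwait(false);
        }

        // Compute-bound: offload the heavy processing to a worker thread.
        return await Task.Run(() => CountLines(bytes)).ConfigureAwait(false);
    }

    // Stand-in for a genuinely expensive computation over the file contents.
    private static int CountLines(byte[] bytes) => bytes.Count(b => b == (byte)'\n') + 1;
}
```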
