Ways of creating multi-threaded applications in .NET (Part 2). ThreadPool Class

In Part 1 of this article, we talked about what threads are in .NET. Now we want to focus on the methods of background and asynchronous execution of threads in .NET apps.

These methods have advantages and disadvantages, and they are not always convenient to use; but in general, background and asynchronous execution of threads offers ample opportunities for running both small and long tasks in separate background threads.

Thread pool and its difference from the Thread class

Creation and destruction of threads are resource-intensive operations, so performing them too often is not recommended. However, there are various small tasks that require asynchronous execution or maximum utilization of all CPU cores. For such tasks, it is best to create a set of threads in advance and then distribute the tasks among those threads.

It would be quite good if threads that had already completed their tasks could be reused, without wasting computational resources on destroying them and creating new ones. It would also be nice if the program itself determined how many threads it needs to solve a problem efficiently.

Such a set of threads exists in .NET and is called a thread pool. It is implemented in the static ThreadPool class of the System.Threading namespace. You do not need to create a ThreadPool object yourself, nor can you: the pool is created automatically when the application starts, provided the System.Threading namespace is referenced.


ThreadPool can automatically increase or reduce the number of active threads to maximize task execution efficiency. In the .NET Framework, the default maximum is 1,023 worker (processing) threads and 1,000 I/O threads; the exact limits depend on the framework version and process bitness.

To get the maximum number of threads, use the GetMaxThreads method of the static ThreadPool class. The first out parameter of this method returns the number of processing threads; the second returns the number of I/O threads.

int nWorkers; // number of processing threads
int nCompletions; // number of I/O threads
ThreadPool.GetMaxThreads(out nWorkers, out nCompletions);

You can also specify the maximum and minimum number of threads in a pool. To set the maximum number of threads, you need to invoke the SetMaxThreads method.

ThreadPool.SetMaxThreads(int nWorkers, int nCompletions);

where nWorkers is the number of processing threads and nCompletions is the number of I/O threads. To set the minimum number of threads in a pool, use the SetMinThreads method.

ThreadPool.SetMinThreads(int nWorkers, int nCompletions);

The parameters here are exactly the same as in the SetMaxThreads method.
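As a sketch of how these calls fit together (the chosen values are illustrative; note that both setters return false if the requested values are rejected):

```csharp
using System;
using System.Threading;

class PoolLimits
{
    static void Main()
    {
        int workers, ios;

        // Read the current minimum before changing anything.
        ThreadPool.GetMinThreads(out workers, out ios);
        Console.WriteLine("Min: {0} workers, {1} I/O", workers, ios);

        // Raising the minimum can reduce ramp-up latency for a burst of
        // queued tasks; the call returns false if the values are rejected.
        bool ok = ThreadPool.SetMinThreads(workers + 1, ios);
        Console.WriteLine("SetMinThreads succeeded: " + ok);
    }
}
```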

If, for some reason, there are not enough threads to perform the user’s tasks, the tasks are automatically placed in a queue. As soon as at least one of the pool threads finishes its work, it is redirected to execute tasks from the queue. If any thread completes its work before the rest, it is returned to the pool rather than destroyed, so it can be reused at the first opportunity.

You can add a task to a thread pool’s queue in one of the following four ways:

  • Calling the QueueUserWorkItem method;
  • Calling asynchronous delegates via BeginInvoke() and EndInvoke();
  • Using the BackgroundWorker class methods;
  • Using the Task Parallel Library (TPL) methods.

QueueUserWorkItem method

This method adds a task to the thread pool’s queue for execution and requests the required number of threads from the pool to perform it. The executable function, wrapped in a WaitCallback delegate, is passed to the method as a parameter. An object carrying task state data can be passed as an optional second parameter.

ThreadPool.QueueUserWorkItem(Job);

If the ThreadPool object does not exist at the time the method is invoked, it will be created. If the pool is already created and there is at least one free thread in it, then the task is passed to this thread. If several pool threads are free, then the pool will allocate these threads such that the task is executed as quickly as possible.
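The optional state parameter mentioned above lets each queued task carry its own data. A minimal sketch (the task names are purely illustrative):

```csharp
using System;
using System.Threading;

class StateDemo
{
    static void Main()
    {
        // The second argument arrives in the WaitCallback's single
        // object parameter, so each task can carry its own data.
        ThreadPool.QueueUserWorkItem(Job, "task A");
        ThreadPool.QueueUserWorkItem(Job, "task B");
        Thread.Sleep(500); // give the pool threads time to finish
    }

    static void Job(object state)
    {
        Console.WriteLine("Processing {0} on thread {1}",
            state, Thread.CurrentThread.ManagedThreadId);
    }
}
```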

The following example uses all the basic methods of working with a thread pool – accessing a pool to display the maximum number of threads and sending a task to a pool for execution.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
namespace ThreadPoolTest
{
    class Program
    {
        static void Main()
        {
            int nWorkers; // number of processing threads
            int nIOs; // number of I/O threads
            ThreadPool.GetMaxThreads(out nWorkers, out nIOs);
            Console.WriteLine("Maximum threads: " + nWorkers
                + "\nMaximum I/O Threads: " + nIOs);
            for (int i = 0; i < 10; i++)
                ThreadPool.QueueUserWorkItem(Job);
            Thread.Sleep(3000);
            Console.ReadLine();
        }
        static void Job(object state)
        {
            for (int i = 0; i < 3; i++)
            {
                Console.WriteLine("cycle {0} is processed by thread {1}",
                    i, Thread.CurrentThread.ManagedThreadId);
                Thread.Sleep(100);
            }
        }
    }
}

The result of the example is shown in Figure 1. The program was executed on an Intel Core i7 4770K processor, which contains 4 physical and 8 logical processor cores.


Fig. 1 Result of program execution in a thread pool.

As can be seen from Figure 1, eight threads were allocated from the pool to the program, exactly as many as there are logical processor cores.

Features of a thread pool

Using a thread pool enhances the performance of a multithreaded application: it significantly reduces the cost of starting and stopping threads, because threads that have completed their tasks are reused instead of being destroyed.

However, a thread pool has a number of features that in certain situations can be considered as shortcomings:

  • All threads from a pool are background threads.
  • When all the foreground threads of an application finish, all threads from the pool are aborted as well, regardless of whether they have completed their tasks.
  • It is impossible to make a thread from a pool a foreground thread.
  • Threads in a pool have no name, and a name cannot be assigned to them. The only identifier you can get for a pool thread is its ID (via the ManagedThreadId property):

Thread.CurrentThread.ManagedThreadId

  • The priority of a thread in a pool can be changed, but once it finishes executing its task and is returned to the pool, its priority will be reset to the default value (normal).
  • Processing COM objects in a thread pool is problematic because such objects require single-threaded apartment (STA) threads, while all thread-pool threads are multi-threaded apartment (MTA) threads.
  • Threads in a pool are suitable for executing small tasks, but not for permanent work (such threads need to be created using the Thread class).
  • Blocking a thread from a pool causes the pool to start additional threads; the tasks will still be executed, but performance suffers.
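The background-thread behavior described above is easy to check from code; a small sketch inspecting the IsBackground and IsThreadPoolThread properties inside a queued task:

```csharp
using System;
using System.Threading;

class PoolFeatures
{
    static void Main()
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            Thread t = Thread.CurrentThread;
            // Both properties are true for every thread taken from the pool.
            Console.WriteLine("IsThreadPoolThread: " + t.IsThreadPoolThread);
            Console.WriteLine("IsBackground: " + t.IsBackground);
        });
        Thread.Sleep(500); // keep the foreground thread alive briefly
    }
}
```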

A thread pool is implicitly used in the following .NET constructs:

  • Windows Communication Foundation (WCF);
  • Interprocess communication component – .NET Remoting;
  • ASP.NET;
  • ASMX Web Services;
  • Event-based Asynchronous Pattern (EAP);
  • Timers: System.Threading.Timer and System.Timers.Timer;
  • Parallel LINQ (PLINQ).

It should be remembered that all the features of a thread pool apply to the above constructs.


Asynchronous delegates

A C# function can be invoked for either synchronous or asynchronous execution. When the function is invoked synchronously, it is executed in the same thread as the calling code. The synchronous invocation itself is made in the usual way: by writing the function name with its arguments in parentheses.

When a function is invoked asynchronously, the CLR allocates a separate thread from the thread pool for it and executes the function in that thread, while the calling program continues in its own thread. To execute a function asynchronously, it must be wrapped in a delegate, which is then invoked by calling its BeginInvoke method; an AsyncCallback delegate can be passed to BeginInvoke to specify a completion callback. The EndInvoke method is used to get the value returned by the function and complete the call. Note that asynchronous delegate invocation via BeginInvoke is supported only in the .NET Framework; on .NET Core and later it throws PlatformNotSupportedException.

The following example illustrates how to work with asynchronous delegates.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Runtime.Remoting.Messaging;
class Program
{
    public delegate int MyDelegate(int x, int y);
    static AutoResetEvent are = new AutoResetEvent(false);
    static int WriteSum(int x, int y)
    {
        Console.WriteLine("Thread {0}: Sum = {1}",
            Thread.CurrentThread.ManagedThreadId, x + y);
        return x + y;
    }
    static void Summ(IAsyncResult async)
    {
        Thread.Sleep(3000);
        // AsyncResult type from the System.Runtime.Remoting.Messaging namespace
        MyDelegate func = ((AsyncResult)async).AsyncDelegate as MyDelegate;
        int sum = func.EndInvoke(async);
        are.Set(); // The Set method is used in thread synchronization and gives a signal to a thread to continue working
    }
    static void Main()
    {
        MyDelegate func = WriteSum;
        // An AsyncCallback delegate pointing at the Summ() method is passed to BeginInvoke
        IAsyncResult async = func.BeginInvoke(10, 10, Summ, null);
        Console.WriteLine("Thread {0}: called through BeginInvoke(), waiting for Summ() to complete",
                Thread.CurrentThread.ManagedThreadId);
        are.WaitOne(); // The WaitOne method waits for a Set signal from at least one thread
        Console.WriteLine("Thread {0}: finished its work",
                Thread.CurrentThread.ManagedThreadId);
        Console.ReadKey();
    }
}

To run a function for asynchronous execution, you first need to declare a delegate type.

public delegate int MyDelegate(int x, int y);

where int, written after the delegate keyword, is the type of the value returned by the function, and the function’s arguments are listed in parentheses.

The AutoResetEvent class lets the thread running the asynchronous delegate notify a waiting thread that an event has occurred by calling the Set method. The value false is passed to the event constructor so that the AutoResetEvent starts in the non-signaled state.

static AutoResetEvent are = new AutoResetEvent(false);

Next, an instance of the previously declared MyDelegate delegate is created in Main and pointed at the WriteSum method.

MyDelegate func = WriteSum;

Inside the Summ callback, the original delegate is recovered from the IAsyncResult argument via the AsyncResult class, and its EndInvoke method returns the result of the asynchronous execution.

The Set method of the AutoResetEvent class signals a waiting thread that it can resume its work. The Set method affects only threads that are waiting; threads in any other state are not affected. The WaitOne method puts a thread into the waiting state: it blocks the current thread until the signal generated by the Set method is received.

The result of the example is shown in Figure 2.


Fig. 2 – Result of execution of an asynchronous delegate.

As can be seen from Figure 2, the asynchronous delegate is actually executed in a separate thread.
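The Set/WaitOne handshake used in the example can also be illustrated in isolation; a minimal sketch with a worker queued to the thread pool:

```csharp
using System;
using System.Threading;

class SignalDemo
{
    static AutoResetEvent gate = new AutoResetEvent(false);

    static void Main()
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            Console.WriteLine("Worker: finished, signaling");
            gate.Set(); // wakes exactly one waiting thread
        });
        gate.WaitOne(); // blocks until the Set signal arrives
        Console.WriteLine("Main: signal received, resuming");
    }
}
```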

BackgroundWorker Class

The BackgroundWorker class is designed to start long-running tasks in a separate thread. The class is essentially a wrapper over ThreadPool and uses the thread pool in its implementation. BackgroundWorker is appropriate when there is a single task that needs to run in the background in a separate thread.

BackgroundWorker provides the following capabilities:

  • Implementation of a protocol for sending and receiving messages on task progress, completion, or early termination;
  • A flag for canceling an operation without using the Abort method of the Thread class;
  • Can be placed as a component on a form in the Visual Studio form designer (implements the IComponent interface);
  • Can handle exceptions in the main thread of a .NET app (without the need to write a try/catch block in the body of the thread delegate);
  • Can change the state of window controls without using InvokeRequired and Dispatcher.

How to use BackgroundWorker

You can use the features of the BackgroundWorker class in two ways:

  1. Create an instance of the BackgroundWorker class.
  2. Create a class inherited from BackgroundWorker.

When creating an instance of the BackgroundWorker class, the following needs to be performed:

  1. Create this instance by invoking the constructor.
  2. Add a DoWork event handler.
  3. Invoke the RunWorkerAsync method and pass an instance of any class inherited from object to it as an argument.

Once the work is completed, BackgroundWorker will generate a RunWorkerCompleted event.

BackgroundWorker allows you to display the progress of an operation. To do this you need to:

  1. Set the value true for the WorkerReportsProgress property.
  2. In the DoWork event handler, periodically invoke ReportProgress, passing the percentage of work completed.
  3. Process the ProgressChanged event by requesting the ProgressPercentage property of its argument.

The ProgressChanged and RunWorkerCompleted event handlers can freely access user interface elements, since they are raised via the synchronization context of the thread that started the operation.

If there is a need to cancel an operation being performed by BackgroundWorker, you need to:

  1. Set the WorkerSupportsCancellation property to true.
  2. Request cancellation of the operation using the CancelAsync method of the BackgroundWorker class.
  3. In the DoWork handler, check the CancellationPending property; if it is set, assign true to the Cancel property of the DoWorkEventArgs argument and return.

The example below illustrates all the common operations with BackgroundWorker:

using System;
using System.Threading;
using System.ComponentModel;
class Program
{
    static BackgroundWorker bw;
    static void Main()
    {
        bw = new BackgroundWorker(); // we create a new instance of the BackgroundWorker class
        bw.WorkerReportsProgress = true; // we set support for progress of operations
        bw.WorkerSupportsCancellation = true; // we set support for operation canceling
        bw.DoWork += workfunc; // we add DoWork event handler
        bw.ProgressChanged += Progress; // we add state change event handlers
        bw.RunWorkerCompleted += Completed; // we add a shutdown event handler
        bw.RunWorkerAsync(null); // We run BackgroundWorker
        Console.WriteLine(
            "Press Enter during five seconds to abort the process");
        Console.ReadLine();
        if (bw.IsBusy) // if the worker is still running when Enter is pressed
        {
            bw.CancelAsync(); //cancel operation
            Console.ReadLine(); //read Enter key pressing
        }
    }
    static void workfunc(object sender, DoWorkEventArgs e)
    { // function executed by BackgroundWorker
        for (int i = 0; i <= 100; i += 20)
        {
            if (bw.CancellationPending)
            { // here we process operation cancellation request
                e.Cancel = true; // here we cancel the operation
                return;
            }
            bw.ReportProgress(i); // here we declare the status of the operation
            Thread.Sleep(1000); //and put the thread to sleep for a second
        }
        e.Result = 1989; // will be passed to RunWorkerCompleted
    }
    static void Completed(object sender, RunWorkerCompletedEventArgs e)
    { // BackgroundWorker completion event handler function
        if (e.Cancelled) // if user aborted work
            Console.WriteLine(
                "Task processing by BackgroundWorker was aborted by user!");
        else if (e.Error != null)
            Console.WriteLine("Worker exception: " + e.Error); // if work was aborted due to exception
        else // if work was executed completely
            Console.WriteLine("Work is complete. Result is " + e.Result + ". ");
        Console.WriteLine("Press Enter to exit...");
    }
    static void Progress(object sender, ProgressChangedEventArgs e)
    { // function that displays the status of work being performed
        Console.WriteLine("Proceed " + e.ProgressPercentage + "%");
    } // ProgressPercentage is a property of the ProgressChangedEventArgs argument
}

Display of the application when the Enter key is pressed (BackgroundWorker was aborted by the user) is shown in Figure 3.


Fig. 3 – Display of application when BackgroundWorker was interrupted.

Figure 4 shows the display of the application if the task that BackgroundWorker was performing was not interrupted.


Fig. 4 – Display of the application in the case when BackgroundWorker operation was not aborted.

BackgroundWorker inheritance

The BackgroundWorker class allows you to inherit your own classes from it. The class provides the virtual OnDoWork method, which the developer can override as needed.

using System.Collections.Generic;
using System.Threading;
using System.ComponentModel;
namespace BgWorkerInherit
{
    public class Client
    {
        public Jamshut Tile (int foo, int bar)
        {
            return new Jamshut(foo, bar);
        }
    }
    public class Jamshut : BackgroundWorker
    {
         //You can add typed fields.
        public Dictionary<string, int> Result;
        public volatile int Foo;
        public volatile int Bar;
        public Jamshut()
        {
            WorkerReportsProgress = true; //Jamshut will show the progress of its work
            WorkerSupportsCancellation = true; //Jamshut can interrupt work
        }
        public Jamshut(int foo, int bar) : this()
        {
            Foo = foo;
            Bar = bar;
        }
        protected override void OnDoWork(DoWorkEventArgs e)
        {
            ReportProgress(0, "Bossy, Jamshut begins to put tiles");
            bool finished = false;
            int percentage = 0;
             //Jamshut begins to work
            Thread.Sleep(1000);
            while (!finished)
            {
                if (CancellationPending)
                { //If a request to cancel the operation is received, Jamshut will stop working
                    e.Cancel = true;
                    return;
                }
                Thread.Sleep(1000);
                if (percentage < 100) percentage += 10;
                else finished = true; //without this, the loop would never end
                // Jamshut reports on the progress of work
                ReportProgress(percentage, "Proceed " + percentage + " %");
            }
            ReportProgress(100, "Bossy, come to see. Jamshut finished his work...");
            e.Result = Result;
        }
    }
    class Program
    {
        static void Main(string[] args)
        {
            // Example usage: the client creates a Jamshut worker and starts it
            Jamshut worker = new Client().Tile(1, 2);
            worker.ProgressChanged += (s, e) => System.Console.WriteLine(e.UserState);
            worker.RunWorkerAsync();
            System.Console.ReadLine(); // keep the app alive while the worker runs
        }
    }
}

The code that created the Jamshut class object will have an already configured background operation handler that will report on the progress of its work and support its cancellation. In addition, the Jamshut class can update all the elements of the application’s graphical user interface without using Control.Invoke (in WinForms) and Dispatcher.Invoke (in WPF) methods.

Conclusion

In this part of the article, we have looked at the methods of background and asynchronous execution of threads in .NET apps. These methods have a number of advantages and disadvantages, which is why they are not always convenient to use. But in general, background and asynchronous execution provides ample opportunities for running both short- and long-running tasks in separate background threads.

In Part 3 of this article, we’ll look at .NET’s Task Parallel Library (TPL) and Parallel Language Integrated Query (PLINQ), which enable you to parallelize separate code snippets or database queries.