
Ways Of Creating Multithreaded Applications In .NET (Part 1). What .NET Threads Are

With the advent of multi-core processors, multithreading has become almost indispensable in application development. It is multithreading that delivers a significant performance gain when multiple processor cores are used.

However, multithreading comes with a lot of hidden pitfalls that are very unpleasant for inexperienced developers. That is why we have decided to write a series of articles devoted to multithreading techniques in .NET applications, using the C# language as an example.

In the first part, we’ll talk about multitasking and multithreading, consider the architecture of multi-core processors and how the operating system sees processor cores, review the operating system tools for creating multithreaded applications, and take a closer look at the Thread class.

What is multitasking and multithreading?

Multitasking has become quite a natural phenomenon in modern operating systems. When several applications are running at the same time, the operating system quickly switches between them, giving each of them CPU time in turn. This creates the illusion that several programs are running simultaneously. The time slicing goes unnoticed, since neither a person nor even the fastest Internet connection can keep up with the speed at which modern processors handle information.

Multithreading is the same thing as multitasking, but within a single application: the operating system quickly switches between different parts of the same application, creating the illusion that they are executed simultaneously.


Architecture of modern computers

In the 2000s, when CPU clock speeds grew rapidly, it seemed that nothing could stop this growth. Some experts predicted that the 10 GHz mark would be exceeded by 2010. The growth went hand in hand with the shrinking of processor transistors, and the increase in processor power at the time was well ahead of Gordon Moore’s predictions (see Moore’s law).

However, before long, engineers ran into problems: a substantial increase in heat dissipation and fundamental limits on transistor size. As a result, further performance gains through higher clock speeds became practically impossible, and clock speeds have remained at the 3-5 GHz mark to this day.

Engineers had to look for other ways to improve CPU performance, and they found an effective solution in multiprocessor computing. If you cannot make a processor faster, why not add another one? There is no need to build that extra processor as a separate device: the easiest way is to place several processors in a single package so that they all have equal access to shared memory. Such processors came to be called physical processor cores.

Logical processor cores and hyper-threading

Operating systems work with logical processor cores, dividing the time of each core between processes and threads. A logical processor core does not always correspond to a physical one.

In 2002, the Intel Pentium 4 processor introduced the Hyper-Threading technology. Hyper-Threading allows one physical processor core to execute multiple instruction streams, and the operating system sees each such stream as a separate logical core. Two-way hyper-threading works by adding another set of registers, an instruction pointer, and an interrupt controller to the physical core, while the number and set of execution units in the core remain unchanged.

Hyper-threading appeared as a solution to the problem of frequent stalls in the execution pipeline of the Intel Pentium 4 processor, caused by the excessive number of processing stages in that pipeline.

The reasons for such stalls were:

  • A branch instruction was mispredicted when executing conditional or unconditional jumps.
  • A cache miss occurred and the required data had to be loaded into the cache from RAM.
  • The result of a previous instruction, still being executed, was needed to execute the next one.

It should be understood that hyper-threading threads are not full-fledged physical processor cores, so they do not multiply performance. On average, the performance gain from hyper-threading is 1-30%, depending on the task being solved; in some tasks there may be no gain at all. Nevertheless, hyper-threading is still used in processors today, for example in Intel Core i3, Core i7, Atom, Pentium, AMD Ryzen, and others.


Processes and threads in operating systems

The operating system works with logical processor cores and knows nothing about their physical implementation: it treats physical cores and hyper-threading threads the same way. A clear example is the Windows 7 Task Manager screenshot for a quad-core Intel Core i7-4770K processor (4 physical cores + 4 hyper-threading threads) shown in Fig. 1.


Fig. 1 – Screenshot of the Windows Task Manager for Core i7-4770K.
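From managed code, the number of logical cores that the operating system exposes can also be read via Environment.ProcessorCount; a minimal sketch (on the Core i7-4770K from Fig. 1 it reports 8):

using System;

class CpuInfo
{
    static void Main()
    {
        // Number of logical cores visible to the OS
        // (physical cores + hyper-threading threads)
        Console.WriteLine("Logical processors: " + Environment.ProcessorCount);
    }
}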

The main program object of an operating system is the process. A process is a running instance of an application that owns system resources (for example, RAM or I/O streams).

Each process can have one or more threads. Each thread executes part of the process code and has its own stack and registers. Threads can only access process resources and share them among themselves. The structure of a multithreaded program is shown in Fig. 2.


Fig. 2 – Structure of a single-threaded and multithreaded program.
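This structure can also be inspected from code: the System.Diagnostics namespace exposes the current process and the operating-system threads it owns. A minimal sketch:

using System;
using System.Diagnostics;

class ProcessThreads
{
    static void Main()
    {
        // The current process and the OS threads that belong to it
        Process current = Process.GetCurrentProcess();
        Console.WriteLine("Process: {0} (Id {1})", current.ProcessName, current.Id);
        foreach (ProcessThread t in current.Threads)
            Console.WriteLine("  Thread Id {0}, state {1}", t.Id, t.ThreadState);
    }
}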

At the same time, switching between threads during execution is much faster than switching between processes. Therefore, in terms of computing resources, it is cheaper to work with threads than with processes. In addition, threads are supported by most operating systems and software platforms, for example:

  • WIN32 API Threads (Windows)
  • Cocoa Threads (iOS)
  • Multiprocessing Services (iOS)
  • Java Threads (Android)
  • POSIX Threads (GNU/Linux)
  • C Runtime Library (C)
  • OpenMP (C++, Fortran)
  • Intel Threading Building Blocks (C++, Fortran)

We will consider the technology for working with threads in .NET using C# as an example.

Thread class

The System.Threading namespace contains all the tools for low-level thread creation and management. First, we add this namespace to the project.

using System.Threading;

Even if we have not created any threads yet, at least one thread, the main thread, is already executing in the application. Let’s call it Main. To create another thread, we need to create a new Thread object.

Thread t1 = new Thread(GetThreadId);

We pass to the constructor of this object the function whose code will be executed in the thread; here, it is the GetThreadId function. The function is passed as a delegate in the constructor’s argument. In its simplest form, such a function has neither parameters nor a return value, but there is a way around this: we can pass a parameter to the function at the moment we start the thread.

t1.Start("1");

Until the Start command is given, the thread will not begin execution. A function started this way can accept only a single parameter of type object. GetThreadId itself is declared as:

static void GetThreadId(object data)
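As a side note, the same thread can also be created with a lambda expression, which avoids the untyped object parameter; a small sketch, assuming we want to pass the string "1":

// The lambda matches the ThreadStart delegate; the argument "1" is captured
// with its real type instead of being passed as an object
Thread t1 = new Thread(() => GetThreadId("1"));
t1.Start();   // no argument needed here, "1" is already captured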

A thread can be assigned a priority – both before its start and during execution.

t1.Priority = ThreadPriority.Lowest;
t2.Priority = ThreadPriority.BelowNormal;
t3.Priority = ThreadPriority.Normal;
t4.Priority = ThreadPriority.Highest;

ThreadPriority here is an enumerated type (enum).

After all the threads have started, the parent thread should call the Join method on each of them. Join blocks the calling thread until the corresponding thread has finished its work. Once Join has returned for all the threads, they have completed, and .NET automatically frees any resources they occupied.

//waiting for all threads to finish executing
t1.Join();
t2.Join();
t3.Join();
t4.Join();

After the threads have finished executing, the main thread takes over code execution again.
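Join also has overloads that accept a timeout, which is useful when the calling thread should not block indefinitely; a small sketch:

// Wait at most two seconds for t1 to finish; Join returns false on timeout
if (!t1.Join(TimeSpan.FromSeconds(2)))
    Console.WriteLine("t1 is still running");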

Example of how the Thread class works

The following example displays messages from four threads with different priorities: each thread writes its number to the console 1000 times.

using System;
using System.Threading;

namespace ConsoleApplication1
{
    class Program
    {
        static void GetThreadId(object data)
        {
            // the thread displays the received string (its number) one thousand times
            for (int i = 0; i < 1000; i++)
                Console.Write(data);
        }

        static void Main(string[] args)
        {
            // we create 4 threads, passing the function to be executed by each thread
            Thread t1 = new Thread(GetThreadId);
            Thread t2 = new Thread(GetThreadId);
            Thread t3 = new Thread(GetThreadId);
            Thread t4 = new Thread(GetThreadId);

            // we assign priorities to the threads
            t1.Priority = ThreadPriority.Lowest;       // lowest
            t2.Priority = ThreadPriority.BelowNormal;  // below normal
            t3.Priority = ThreadPriority.Normal;       // normal
            t4.Priority = ThreadPriority.Highest;      // highest

            // we start each thread and pass the thread number as a parameter
            t1.Start("1");
            t2.Start("2");
            t3.Start("3");
            t4.Start("4");

            Console.WriteLine(" all threads have started ");

            // waiting for all threads to finish executing
            t1.Join();
            t2.Join();
            t3.Join();
            t4.Join();

            Console.ReadKey();  // the program will not end until the user presses a key (so that you have time to view the result)
        }
    }
}

The program execution result is shown in Fig. 3.


Fig. 3 – Visual illustration of the work of threads with different priorities on a quad-core processor.

The example in Fig. 3 shows three facts:

  1. Creating threads is a fairly time-consuming operation. Thread 1, with the lowest priority, was created first and immediately started; the other threads were then created and started in turn.
  2. Threads finish according to their priority: the thread with the highest priority (4) finished earlier than the others.
  3. On a multi-core processor, thread priority matters less than on a single-core one, since the threads are distributed among all the processor cores.

Background and foreground threads

Threads can be foreground or background. The program does not end until all of its foreground threads have finished. Background threads do not prevent the program from ending: they are terminated together with it, even if the work they were performing has not yet completed.

To find out whether a thread is a background or foreground thread, use the IsBackground property.

bool bg = Thread.CurrentThread.IsBackground;

where CurrentThread is a static property of the Thread class that returns a reference to the currently executing thread.

By default, all threads are created as foreground threads, but anywhere in the program code you can make a thread background and vice versa.

t2.IsBackground = true;
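The difference is easy to observe in a small console sketch: with IsBackground set to true, the process exits as soon as Main returns and the worker never gets to print its message.

using System;
using System.Threading;

class BackgroundDemo
{
    static void Main()
    {
        Thread worker = new Thread(() =>
        {
            Thread.Sleep(5000);
            // This line is never reached: the process exits when Main returns,
            // because a background thread does not keep the application alive
            Console.WriteLine("background thread finished");
        });
        worker.IsBackground = true;
        worker.Start();
        Console.WriteLine("Main is done");
        // If IsBackground were false (the default), the process would stay
        // alive for another five seconds until the worker finished
    }
}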

Possible errors when working with the Thread class

Despite the simplicity of working with the Thread class, many novice developers make gross errors when creating multi-threaded applications.

Error 1

The most common mistake of inexperienced developers is trying to catch exceptions that occur in child threads by wrapping the thread start in a try { } catch block in the parent thread. In this case, only exceptions in the parent thread will be handled, while exceptions in the child threads will remain unhandled and lead to immediate termination of the application. The listing below shows how not to catch exceptions in multithreaded applications.

try
{   // This code is incorrect
    t1.Start("1");
    t2.Start("2");
    t3.Start("3");
    t4.Start("4");
    t2.IsBackground = true;
    t3.IsBackground = true;
    t4.IsBackground = true;
    Console.WriteLine(" all threads have started ");
    // waiting for all threads to finish executing
    t1.Join();
    t2.Join();
    t3.Join();
    t4.Join();
}
catch (Exception e)
{   // only exceptions in the parent thread will be processed here
    Console.WriteLine(e.ToString());
}   // exceptions in child threads will not be processed and they will stop the application

To catch all exceptions in child threads, the try {} catch block must be located inside the function that will be passed to the child thread for execution, as in the listing below:

static void GetThreadId(object data)
{
    try
    {
        // the thread displays the received string (its number) one thousand times
        for (int i = 0; i < 1000; i++)
            Console.Write(data);
    }
    catch (Exception e)
    {
        Console.WriteLine(e.ToString());
    }
}

Error 2

The second error is attempting to access the application’s user interface from a child thread. In applications with a graphical user interface (for example, WinForms or WPF applications), there is always a main thread that monitors the state of the GUI elements, and only this thread may change their state. Any other thread that accesses these elements will immediately cause an exception.

In WinForms applications, the runtime flags code that accesses controls from another thread as erroneous (the check is performed while a debugger is attached) and throws an exception. The additional information for this error reads:

“Additional information: Cross-thread operation not valid: Control ‘textBox1’ accessed from a thread other than the thread it was created on”.

In WPF applications, the situation is similar: an InvalidOperationException is thrown during execution with the description “The calling thread cannot access this object because its owner is another thread” (Fig. 4).


Fig. 4 – An example of an exception when accessing the elements of the WPF application interface from another thread.

If you still need to change the interface elements from another thread, there are fairly simple solutions. In a WinForms application, first check the InvokeRequired property of the interface element. If InvokeRequired is true, call the Invoke method (see the listing below); if it is false, the code is already running on the UI thread and the element can be accessed directly.

void ControlAccess()
{
    // If we are not on the UI thread, marshal the call back onto it
    if (textBox1.InvokeRequired)
        textBox1.Invoke(new Action(ControlAccess));
    else
        textBox1.Text = "test";
}

Accessing the control directly from another thread, without this check and the Invoke call, would throw the exception described above.

In WPF applications, a special Dispatcher object is used to access interface objects from other threads. To use it, include the System.Windows.Threading namespace.

using System.Windows.Threading;

Next, we wrap the code that accesses the interface from another thread in a call to the Invoke method of the window’s Dispatcher.

this.Dispatcher.Invoke(DispatcherPriority.Normal, (ThreadStart)delegate() { Cons.Text = "Industrial";});

Here, this is a reference to the current window whose interface elements are being accessed. DispatcherPriority is an enumerated type responsible for the priority of interface access; its gradation is the same as for thread priorities.

With the delegate() keyword, we declare an anonymous method and put the code that directly accesses the interface elements inside it. The anonymous method is then cast to the ThreadStart delegate type (the cast is indicated in parentheses before the delegate). With this approach, accessing the interface no longer throws an exception in the WPF application.
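As a side note, starting with .NET Framework 4.5, Dispatcher.Invoke also accepts an Action directly, so the same call can be written without the explicit delegate cast; a sketch using the same Cons element as above:

// Equivalent call on .NET 4.5+: the lambda is an Action, no cast required
this.Dispatcher.Invoke(() => Cons.Text = "Industrial");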

Error 3

A fairly common mistake is the lack of control over thread completion. The .NET Common Language Runtime (CLR) does not know whether a thread will continue to perform any actions after it has done its useful work, so all responsibility for finishing threads correctly and releasing them lies with the developer.

The developer must ensure that all application threads finish correctly when the application is closed (including abnormal termination). Otherwise, the application’s parent thread will exit while the child threads keep running (even when the application window is already closed), consuming system resources; after the window is closed, they can only be stopped through the task manager.

This can lead to amusing situations. For example, while studying the multimedia capabilities of WPF, the author of this article worked with the MediaPlayer class, which can open and play *.mp3 files, and did so in a separate thread. If you do not take care of stopping the thread that plays the music, the music will keep playing even after the application window is closed.
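One simple way to guarantee that such a worker thread stops together with the application is to signal it from the window’s Closing handler; the sketch below assumes a WPF window with a playback loop running in a separate thread (the _stopRequested flag, _playbackThread field, and PlaybackLoop method are hypothetical names used only for illustration).

// Hypothetical illustration: a flag the playback thread checks regularly
private volatile bool _stopRequested;
private Thread _playbackThread;

private void PlaybackLoop()
{
    while (!_stopRequested)
    {
        // ... play the next portion of the file ...
        Thread.Sleep(100);
    }
}

private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
    // Ask the worker to stop and wait for it before the window goes away
    _stopRequested = true;
    if (_playbackThread != null)
        _playbackThread.Join();
}

Alternatively, marking the thread as background (IsBackground = true) lets the runtime terminate it automatically when all foreground threads have finished.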

Conclusion

Modern multi-core processors are designed to execute a large number of processes and threads simultaneously. Nevertheless, this is a rather resource-intensive operation.

Despite the simplicity and efficiency of working with the Thread class, multithreaded programming is fraught with pitfalls, and a developer who is unaware of them risks running into unexpected program behavior.

In the following parts of this article, we will look at the thread pool, which significantly reduces the computational cost of creating threads, as well as thread synchronization methods and the parallelization of loops and database queries.
