Parallel LINQ (PLINQ) is a set of extension methods created for LINQ to Objects. If you want to learn about standard LINQ, you can read my other post here. The main difference between them is that Parallel LINQ executes the code in parallel, which can improve the performance and execution speed of the application, while LINQ does it sequentially. Of course, using PLINQ gains significant performance improvements only if you or your users have a multi-core machine.

PLINQ was added in .NET Framework 4.0 and was designed to work with IEnumerable and IEnumerable<T> collections. The biggest improvement can be seen when the logic for the collection items is well separated and there is no shared state between them. Take into account that the data source needs to be partitioned and handed to the individual threads/tasks before execution, and the results have to be merged back into the result collection. All these operations have overhead and affect the execution time of a PLINQ query. For more details about general performance considerations when using PLINQ, please read the MSDN page.

As an example, let's take a look at an algorithm for finding prime numbers. This is a computationally intensive task that parallelizes well. Prime numbers are very important in computer science – many cryptographic algorithms are based on them.

I implemented the IsPrime() method to decide whether the number given as a parameter is prime or not.

        public static bool IsPrime(int number)
        {
            if (number == 2 || number == 3)
            {
                return true;
            }

            if (number < 2 || number % 2 == 0)
            {
                return false;
            }

            var sqrt = Math.Ceiling(Math.Sqrt(number));

            for (int i = 3; i <= sqrt; i += 2)
            {
                if (number % i == 0)
                {
                    return false;
                }
            }

            return true;
        }

I cover some edge cases at the beginning. If the number is 2 or 3, I return true, since these are prime numbers. (This check has to come first; if the divisible-by-2 test ran first, it would wrongly reject 2.) If the number is smaller than 2 or is divisible by 2, then the number is not prime. After that, the method looks for odd divisors between 3 and the square root of the original number; if it finds a divisor, the number is not prime. In any other case the number is prime.

Below is a normal LINQ query for selecting the primes between 1 and 5,000,000. I added some performance measurements using the Stopwatch class from the System.Diagnostics namespace.

var items = Enumerable.Range(1, 5000000);
var watch = new Stopwatch();

watch.Restart();
var primes = items.Where(p => IsPrime(p)).ToList();
watch.Stop();

Console.WriteLine("{0} prime numbers found in {1} milliseconds.", primes.Count, watch.ElapsedMilliseconds);

The output, when compiled in Release mode, is (executed three times):

348513 prime numbers found in 1570 milliseconds.
348513 prime numbers found in 1562 milliseconds.
348513 prime numbers found in 1583 milliseconds.

Running on a single thread, the calculation takes around 1.5 seconds.

I make a simple change to execute the code in parallel, using the AsParallel() extension method on the IEnumerable interface:

var items = Enumerable.Range(1, 5000000);
var watch = new Stopwatch();

watch.Restart();
var primes = items.AsParallel().Where(p => IsPrime(p)).ToList();
watch.Stop();

Console.WriteLine("{0} prime numbers found in {1} milliseconds.", primes.Count, watch.ElapsedMilliseconds);

When executing the code in Release mode the result is the following:

348513 prime numbers found in 830 milliseconds.
348513 prime numbers found in 860 milliseconds.
348513 prime numbers found in 851 milliseconds.

The execution time has decreased almost by half, from 1.5 seconds to 0.8 seconds, just by adding AsParallel() to the code. This performance increase is realistic, since my machine has two CPU cores. Even if I increase the number of items to scan to 15 million, the execution time is around 3.7 seconds, while the non-parallel query for 15 million items takes around 7.2 seconds (note that the ratio is similar: an almost 50% improvement when executing in parallel).
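If you want to relate the speed-up to your own hardware, you can check how many logical processors the runtime sees; the roughly 2x figure above lines up with the two cores on my machine. A minimal sketch:

```csharp
using System;

class CoreCountDemo
{
    static void Main()
    {
        // The number of logical processors visible to the process.
        // By default, PLINQ uses this value as its degree of parallelism.
        Console.WriteLine("Logical processors: {0}", Environment.ProcessorCount);
    }
}
```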

Using the WithDegreeOfParallelism() method, you can cap how many concurrent Tasks (see related article on Tasks here) the framework creates and, indirectly, how many chunks the data is split into.

var primes = items
                .AsParallel()
                .WithDegreeOfParallelism(8)
                .Where(p => IsPrime(p))
                .ToList();

In this code I allow PLINQ to use up to eight concurrent threads/tasks for filtering the items with the Where clause.

In some cases PLINQ can decide to fall back to sequential execution. It decides based on the type and structure of the query: for example, when the query contains the indexed Select and Where overloads, which depend on the original item positions, or when it contains the Take(), Skip(), TakeWhile() or SkipWhile() methods after the order of the items in the collection has been changed.
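Ordering deserves a note of its own: by default, PLINQ does not guarantee that results come back in source order. If your logic depends on the original order, you can request it with the AsOrdered() operator, at some cost in performance. A minimal sketch (the even-number filter is just a stand-in workload):

```csharp
using System;
using System.Linq;

class OrderingDemo
{
    static void Main()
    {
        var items = Enumerable.Range(1, 20);

        // Without AsOrdered(), the order of the parallel results is
        // undefined; with it, PLINQ reassembles the source order.
        var evens = items
            .AsParallel()
            .AsOrdered()
            .Where(n => n % 2 == 0)
            .ToList();

        Console.WriteLine(string.Join(", ", evens));
        // 2, 4, 6, 8, 10, 12, 14, 16, 18, 20
    }
}
```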

There are cases when your measurements show that executing a LINQ query in parallel performs better, but the framework still falls back to sequential execution. In this case you can force the .NET Framework to always execute your query in parallel, using the WithExecutionMode() method:

var primes = items
               .AsParallel()
               .WithDegreeOfParallelism(8)
               .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
               .Where(p => IsPrime(p))
               .ToList();

As you can see, gaining extra performance from the hardware is easy using PLINQ. But please be aware that not all algorithms can be executed in parallel, and in many cases executing code in parallel can lead to strange errors in the results if the implemented logic is complex and has shared state.
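To make the shared-state danger concrete, here is a sketch of my own (the even-number count is a stand-in workload, not from the prime example): incrementing a plain int from inside the parallel lambda is a data race, while a side-effect-free query lets PLINQ merge the per-thread results safely.

```csharp
using System;
using System.Linq;

class SharedStateDemo
{
    static void Main()
    {
        var items = Enumerable.Range(1, 1000000);

        // WRONG: counter++ is a read-modify-write on shared state, so
        // parallel threads can overwrite each other's increments.
        int counter = 0;
        items.AsParallel().ForAll(n => { if (n % 2 == 0) counter++; });
        Console.WriteLine("Shared counter: {0}", counter); // may be < 500000

        // RIGHT: keep the lambda side-effect free and let PLINQ merge
        // the per-thread partial counts itself.
        int count = items.AsParallel().Count(n => n % 2 == 0);
        Console.WriteLine("PLINQ count:    {0}", count); // always 500000
    }
}
```

If you really need a shared counter, Interlocked.Increment() makes the update atomic, but letting the query do the aggregation is usually both simpler and faster.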

Please take measurements before and after using PLINQ. This way, you can be sure that you really made a difference with your code change. You should also test your code, compiled in Release mode, on machines that have more CPU cores and also fewer CPU cores than yours. By doing this, you ensure that the code is optimized for different hardware, and not just for the machine you used to develop the software.
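A simple way to do this is to time the sequential and the parallel versions of the same query side by side with Stopwatch, as in the examples above. The sketch below uses an artificial CPU-bound predicate as a stand-in for your real workload:

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class MeasureDemo
{
    // Stand-in for a CPU-bound check such as IsPrime(); deterministic,
    // so both versions of the query select the same items.
    public static bool SlowPredicate(int n)
    {
        double x = n;
        for (int i = 0; i < 200; i++)
        {
            x = Math.Sqrt(x + i);
        }
        return ((int)x & 1) == 0;
    }

    static void Main()
    {
        var items = Enumerable.Range(1, 200000);
        var watch = new Stopwatch();

        watch.Restart();
        var sequential = items.Where(SlowPredicate).ToList();
        watch.Stop();
        Console.WriteLine("Sequential: {0} ms", watch.ElapsedMilliseconds);

        watch.Restart();
        var parallel = items.AsParallel().Where(SlowPredicate).ToList();
        watch.Stop();
        Console.WriteLine("Parallel:   {0} ms", watch.ElapsedMilliseconds);

        // The result counts must match; only the timings should differ.
        Console.WriteLine("Same results: {0}", sequential.Count == parallel.Count);
    }
}
```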

Greg Bogdan