Tuesday, October 4, 2016

Running TensorFlow natively on Windows 10

TensorFlow is a library for evaluating numerical expressions of high-rank arrays (a.k.a. “multi-dimensional arrays” or “tensors”, which sometimes may actually represent tensors in the mathematical sense), a capability that is crucial for many scientific computing tasks. TensorFlow, however, specifically targets machine learning tasks, in particular “deep learning”, whose practical viability critically depends on highly efficient multi-dimensional algebra routines and equally efficient gradient computation in high-dimensional spaces. While TensorFlow’s core is implemented in C++, it comes with a Python API that enables interactive experimentation with the library.

Unfortunately TensorFlow—or rather: its build system—hasn’t yet been ported to Windows (but the guys are working on it). Until then, one can get by using Docker containers or running full-blown Linux VMs. With the introduction of the WSL (Windows Subsystem for Linux) as a part of Windows 10 Anniversary Update, however, it has become possible to run the Linux-version of TensorFlow on Windows in its Ubuntu user space (CPU only, sadly). WSL is still in beta, so there are some quirks to be expected. Follow the instructions below to set up a working IPython-TensorFlow environment on Windows 10 Pro:

Step 1: Activate WSL and “Bash on Ubuntu on Windows”

First you need to activate the Linux subsystem and install the Ubuntu user land. Open Windows Settings (Windows + I) and click on “Windows Update, recovery, backup” (Fig. 1).

Figure 1: Click on “Update” ...
Click on “Use developer features” and enable “developer mode.” (Fig. 2). This might take a while and you may have to restart your machine afterwards.

Figure 2: ... to enable developer mode.
Now we need to activate the Linux subsystem. Open the classic Control Panel and navigate to "Programs" → "Turn Windows features on or off" (Fig. 3 & 4).

Figure 3

Figure 4
Select "Windows Subsystem for Linux (Beta)" in the dialog and click OK. You'll probably have to restart your machine again (Fig. 5).

Figure 5
Now you should be able to run "bash" (either from the start menu or from a cmd prompt), which guides you through the rest of the installation process (cf. the WSL Installation Guide). After this procedure, there should be a new start menu entry "Bash on Ubuntu on Windows" (Fig. 6).

Figure 6

Step 2 (optional): Install mintty for WSL

When you use the aforementioned shortcut, a bash shell starts in a conventional cmd.exe console host. While that is perfectly usable, I personally much prefer mintty (Cygwin's default terminal emulator). Luckily, there's already a version for WSL available: Just download the installer, run it, et voilà, ready to go. The installer also configures the Explorer context menu to contain a handy "WSL in Mintty Here" shortcut, which opens a bash session in the current path.
Figure 7: mintty hosting a bash shell

Step 3: Install Anaconda

Anaconda from Continuum Analytics is the computational science Python distribution. While we could also simply use the default Python distribution from the Ubuntu repositories, Anaconda comes with Intel's MKL and thus a substantial performance boost (not to mention its potent conda package manager). Start a bash shell and download and run the Anaconda installer with the following commands:

    $ cd ~
    $ wget https://repo.continuum.io/archive/Anaconda3-4.2.0-Linux-x86_64.sh
    $ chmod +x Anaconda3-4.2.0-Linux-x86_64.sh
    $ ./Anaconda3-4.2.0-Linux-x86_64.sh

Note that this installs Anaconda into your WSL home directory. You could install it "system-wide" using sudo, but as WSL environments are per-Windows-user anyway, there isn't much point in doing so. At some point the installer will ask you whether it should add Anaconda to your Linux PATH, effectively making it the default Python. Confirm by entering "yes" (Fig. 8).

Figure 8: YES!!!
You may have to start a new bash session in order to make the PATH change effective. Alternatively you can "source" (reload/re-execute) .bashrc via

    $ . ~/.bashrc

Now you should be able to run

    $ ipython

And see a message like this:

        Python 3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul  2 2016, 17:53:06)
        Type "copyright", "credits" or "license" for more information.

        IPython 5.1.0 -- An enhanced Interactive Python.
        ?         -> Introduction and overview of IPython's features.
        %quickref -> Quick reference.
        help      -> Python's own help system.
        object?   -> Details about 'object', use 'object??' for extra details.
   
        In [1]:

Yet, when you enter the command (IPython magic)

        In [1]: %pylab

Python will throw some PyQt4 error at us:

      --->   31 from .qt_compat import QtCore, QtGui, QtWidgets, _getSaveFileName, __version__
             32 from matplotlib.backends.qt_editor.formsubplottool import UiSubplotTool
             33

        /home/niemeyer/anaconda3/lib/python3.5/site-packages/matplotlib/backends/qt_compat.py in ()
            135     # have been changed in the above if block
            136     if QT_API in [QT_API_PYQT, QT_API_PYQTv2]:  # PyQt4 API
        --> 137         from PyQt4 import QtCore, QtGui
            138
            139         try:

        ImportError: No module named 'PyQt4'

Step 4: Fix Matplotlib PyQt4 Error

The above error is already known to the Anaconda developers. Sadly, the proposed solutions like explicitly selecting the Qt5Agg backend or downgrading to Qt4 didn't work for me. What did work was switching to the TkAgg backend. For that you need to create a new text file

    $ vi ~/.config/matplotlib/matplotlibrc

(use nano if you can't handle vi...) and add the following line

    backend : TkAgg

When you now start IPython again, executing %pylab should work fine ...

        In [1]: %pylab
        Using matplotlib backend: TkAgg
        Populating the interactive namespace from numpy and matplotlib

... only to run into the next error when trying to create a little test plot:

        In [2]: x = linspace(0, 10, 1000)
        In [3]: plot(x, x**2)
        OMP: Error #100: Fatal system error detected.
        OMP: System error #22: Invalid argument

Step 5: Work Around OpenMP Error

The previous error is also already known, though not yet fixed. To work around this bug(?), edit your .bashrc

    $ vi ~/.bashrc

and add the line

    export KMP_AFFINITY=disabled

to the end of the file. Run

    $ . ~/.bashrc

again and re-try %pylab and plotting. This time, IPython will reward us with a new error message:

        -> 1868         self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
           1869         if useTk:
           1870             self._loadtk()

        TclError: no display name and no $DISPLAY environment variable

Step 6: Install X11 Server and set $DISPLAY

Matplotlib needs an X server to draw its plot windows. Nowadays I recommend VcXsrv, which is easy to install and just works out of the box. You could use Cygwin/X or Xming, but at least the former requires some fiddling with its settings for it to work with WSL.

After having installed and started your X server of choice, edit your .bashrc again to add the following line

    export DISPLAY=:0.0

Again,

        $ . ~/.bashrc

Now, IPython/matplotlib should finally work.

Step 7: Install TensorFlow

The TensorFlow installation itself is pretty straightforward: Execute

    $ conda install -c conda-forge tensorflow

Alongside raw TensorFlow, you may also want to install a deep learning library like Keras, which is easily installed via pip:

    $ pip install keras

When you now start IPython again and enter

    import keras

you should get a little "using TensorFlow backend" message, indicating your successful installation of TensorFlow on Windows!

Figure 9: When installed correctly, Keras defaults to TensorFlow as its backend
Figure 10: Training a (small) CNN using TensorFlow on WSL

Friday, August 28, 2015

SIMD Fundamentals. Part III: Implementation & Benchmarks

This is the third part of my series on SIMD Fundamentals, targeted at .NET developers. You may read the other two parts here:
While the previous two parts were more focused on theory and concepts, we are now going to actually get our hands dirty and write some code to see how different approaches to SIMD processing compare in practice, both from a performance and an ease-of-implementation point of view.

Prelude

In Part II I used F# (pseudo-)code to illustrate the difference between the AoS (array of structures) and SoA (structure of array) approaches. That's why I thought using F# for implementing the benchmarks might be a good idea. So I installed Visual Studio 2015, created a new F# console application project and installed System.Numerics.Vectors in its most recent 4.1.0 incarnation via NuGet. Yet, when I tried to use System.Numerics.Vector<T> IntelliSense wanted to convince me there was no such thing:
Maybe just a problem with the F# language service? I tried to run this little hello world sample

but that didn't work either, because it references the wrong version of System.Numerics.Vectors:

I didn't have any luck with manually replacing the "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.6\System.Numerics.Vectors.dll" reference with the one delivered by the System.Numerics.Vectors NuGet package either.

For now, I thus resorted to using C# as an implementation language instead.

Managed Implementations

For the upcoming benchmarks we will use the same problem we stated in the previous post: Compute the squared L2 norm of a set of 3-vectors. According to the previous post, we want to compare five possible approaches:
  • Scalar AoS
  • AoS vectorized across each 3-vector
  • On-the-fly vectorization (converting AoS to SoA)
  • Scalar SoA
  • Vectorized SoA
To avoid any memory bottlenecks and keep everything cached, we will use a comparatively small set of vectors. I found that an array of 1024 to 2048 Vec3fs yielded peak performance. Note that this has the nice side effect of being a multiple of the AVX vector lane width (32 bytes, or 8 single-precision floating-point values). To get a reliable estimate of the average performance, we repeat the procedure a large number of times: For the benchmarks, all procedures had to compute a total of 32 · 2^30 (approximately 32 billion) dot products.

For further details, you can consult the full benchmark code here. In case you want to run it yourself and change some parameters, make sure that vectorLen is a multiple of laneWidth, as the remaining code assumes.

All of the following benchmark code was compiled and run on an Intel Core i5-4570 (Haswell) with 16 GB of DDR3 SDRAM on Windows 10 Pro. The managed implementation was developed using Visual Studio 2015 on .NET 4.6 and targeted x64 (release build, "RyuJIT").

Scalar AoS

This version is probably the easiest to understand and, as long as you don't intend to vectorize, a pretty reasonable one: We have an array of Vector3 structures (here we simply use the one provided by System.Numerics.Vectors) and compute the resulting dot products vector by vector:
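In essence, the kernel looks like this (a minimal sketch, not the actual benchmark code; the names vectors, results and repetitions are mine):

// Sketch of the scalar AoS kernel (using System.Numerics;).
// One dot product per inner iteration; names are illustrative.
static void DotScalarAoS(Vector3[] vectors, float[] results, int repetitions) {
    for (var j = 0; j < repetitions; ++j) {
        for (var i = 0; i < vectors.Length; ++i) {
            var v = vectors[i];
            results[i] = v.X * v.X + v.Y * v.Y + v.Z * v.Z;
        }
    }
}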

The outer loop over j is there to ensure the total number of computed dot products is 32 billion. For the inner loop, the JIT compiler generates the following machine code:

As expected, it uses scalar AVX instructions to compute the dot products (VMULSS, VADDSS).

AoS vectorized across each 3-vector

In this version, we still compute a single dot product per iteration, but we compute the squares of the components at once and then determine the (horizontal) sum of those squared components:
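A sketch of that variant (again with made-up names; the outer repetition loop is omitted):

// Sketch: let Vector3.Dot square the components and sum them horizontally.
static void DotAoSVector3(Vector3[] vectors, float[] results) {
    for (var i = 0; i < vectors.Length; ++i) {
        var v = vectors[i];
        results[i] = Vector3.Dot(v, v);
    }
}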

When using Vector3.Dot, the compiler emits code that uses the DPPS (SSE 4.1) or VDPPS (AVX) instructions:

For some reason, it's loading the data twice, first into XMM0 and then into XMM1 (VMOVSS, VMOVSD, VSHUFPS).

On-the-fly vectorization

Because AoS isn't a data layout well suited for vectorization, last time we came up with the idea of reordering the data on the fly. Vector gather instructions would help with that, but System.Numerics.Vector<T> only supports loading consecutive elements for now. The only managed solution I could come up with is to first manually gather the required data into temporary arrays and then create the vector instances from these temporary data structures:
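Roughly like this (a sketch; laneWidth stands for Vector<float>.Count, i.e. 8 on AVX, and the names are again made up):

// Sketch: manually "gather" the x, y and z components of laneWidth
// consecutive 3-vectors into temporary arrays, then compute laneWidth
// dot products with Vector<float>. Assumes vectors.Length is a multiple
// of laneWidth.
static void DotAoSOnTheFly(Vector3[] vectors, float[] results) {
    var laneWidth = Vector<float>.Count;
    var xs = new float[laneWidth];
    var ys = new float[laneWidth];
    var zs = new float[laneWidth];
    for (var i = 0; i < vectors.Length; i += laneWidth) {
        for (var k = 0; k < laneWidth; ++k) {
            xs[k] = vectors[i + k].X;
            ys[k] = vectors[i + k].Y;
            zs[k] = vectors[i + k].Z;
        }
        var x = new Vector<float>(xs);
        var y = new Vector<float>(ys);
        var z = new Vector<float>(zs);
        (x * x + y * y + z * z).CopyTo(results, i);
    }
}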

That works, in principle, meaning that the compiler can now emit VMULPS and VADDPS instructions to compute 8 dot products at once. Yet, because the JIT compiler doesn't employ VGATHERDPS all this gathering becomes quite cumbersome:

As you can probably imagine, this code isn't exactly a candidate for the world's fastest dot3 product implementation...

Scalar SoA

Let's move on to a proper SoA data layout. The scalar version is similar to the scalar AoS version, only that we now index the components instead of the array of vectors:
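A sketch of that kernel (xs, ys and zs are assumed to hold the x, y and z components of all vectors):

// Sketch: scalar SoA - the components live in three separate float arrays.
static void DotScalarSoA(float[] xs, float[] ys, float[] zs, float[] results) {
    for (var i = 0; i < results.Length; ++i) {
        results[i] = xs[i] * xs[i] + ys[i] * ys[i] + zs[i] * zs[i];
    }
}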

The resulting machine code is likewise similar to scalar AoS:

Vectorized SoA

The SoA layout makes it easy to use vector arithmetic to compute 8 dot products at once:
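In sketch form (assuming, as in the benchmark, that the array length is a multiple of the lane width):

// Sketch: vectorized SoA - load 8 consecutive x, y and z values each
// and compute 8 dot products per iteration (on AVX hardware).
static void DotVectorizedSoA(float[] xs, float[] ys, float[] zs, float[] results) {
    var laneWidth = Vector<float>.Count;
    for (var i = 0; i < results.Length; i += laneWidth) {
        var x = new Vector<float>(xs, i);
        var y = new Vector<float>(ys, i);
        var z = new Vector<float>(zs, i);
        (x * x + y * y + z * z).CopyTo(results, i);
    }
}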

This is what the compiler makes of it:

Nice, but sprinkled with range (?) checks. I also wonder why it emits VMOVUPD instead of VMOVUPS instructions.

Unmanaged Implementations

After implementing and running the above variants in C#, I figured it would be useful to have something to compare the results to. Thus, I ported the benchmark code to C++ to see what the Visual C++ optimizer, its auto-vectorizer and SIMD intrinsics can do and how close we can get to the theoretical peak performance of the Haswell CPU. For the "native" implementation I used the Visual C++ 2015 compiler with the following flags:
/GS- /GL /W3 /Gy /Zi /Gm- /Ox /Ob2 /Zc:inline /fp:fast /WX- /Zc:forScope /arch:AVX2 /Gd /Oy /Oi /MD /Ot

Scalar AoS

Again, the code for this version is pretty straightforward:

In case you wonder about the inner loop construction: while (i--) turned out to result in slightly faster code than a more traditional for loop.

No surprises regarding the machine code, either:

AoS vectorized across each 3-vector

Let's use the DPPS instruction via intrinsics:

Notice the little trick of directly loading four consecutive floats instead of scalar loads and shuffling. Strictly speaking, this might go wrong for the last element of the vector, if you try to access unallocated memory... In reality you'd handle that special case separately (or simply allocate a few more bytes). The corresponding machine code is really compact:

On-the-fly vectorization

In contrast to the managed version, we can now employ AVX's vector gather instructions to load eight values of each of the three components into YMM registers:

_mm256_fmadd_ps results in FMA3 (fused multiply add) instructions, combining multiplication and addition/accumulation in one instruction:

Scalar SoA

Auto-vectorized SoA

In order for the auto-vectorizer to kick in, we need to use a for-loop for the inner iteration:

Now the compiler even generates FMA instructions:

Vectorized SoA

Of course we can also vectorize manually by using intrinsics:

Vectorized SoA using FMA

This is the same as above, but it additionally makes use of FMA:

Results

The following figure displays the performance in GFLOP/s of the different versions. The dashed line is at 51.2 GFLOP/s, the theoretical peak performance of a single Haswell core (single precision):
First of all, neither the AoS variants nor the scalar SoA version comes anywhere close to the vectorized SoA versions. Second, any attempts at accelerating the original AoS version failed (C#) or only provide insignificant performance gains (C++). Even vector gather can't save the day and in fact further impairs performance. In any event, the gains don't justify the more complicated code.

If you really need the performance SIMD can provide, you have to switch to an SoA layout: While Visual C++'s auto-vectorizer may relieve you of writing SIMD intrinsics directly, it still requires SIMD-friendly—that is: SoA—code. As long as it works, it provides the most accessible way of writing high-performance code. The second-best way, from a usability standpoint, is probably C# and System.Numerics.Vectors, which enables (explicit) SIMD programming via a comparably easy-to-use interface.

Yet, the plot above also shows that none of the managed solutions is really able to keep up with any of the vectorized C++ versions. One reason for that is the inferior code generation of the JIT compiler compared to the C++ optimizer. Others are more intrinsic to the managed programming model (null-pointer checks, range checks). System.Numerics.Vectors is also far from complete: For instance, there is no support for FMA or scatter/gather operations. A "Vector8" type could help with treating a float[] as AVX-sized chunks.

Conclusions

Want speed? Use a vectorized SoA approach. Want more speed? Use C++. That's what I learned from this little investigation. That, and the realization that writing high-performance code will remain both an art and hard work for the time being, at least until we have an AI-driven optimizing compiler that is smart enough to understand what we are trying to achieve and to automatically transform our code into its most efficient form.

Saturday, June 13, 2015

SIMD Fundamentals. Part II: AoS, SoA, Gather/Scatter - Oh my!

Last time we looked at a very basic example of a data parallel problem: adding two arrays. Unfortunately, data parallelism is not always so easy to extract from existing code bases and often requires considerable effort. Today we will therefore address a slightly more complicated problem and move step by step from a pure scalar to a fully vectorized version.

The problem and its SISD solution

Imagine we wanted to compute the squared ℓ²-norm of a set u of m 3-vectors in R³, which is really just the dot product of each vector with itself: ‖(x, y, z)‖² = x² + y² + z²

In good OO fashion, you'd probably first define a Vec3f data type, in its simplest form like so (F# syntax)

Our vector field u can then simply be represented by an array of these structs and the code that performs the computation of the dot products is

Load the three components, square them, add up the squares and store the result in dp:
 
For SSE2, that's a total of three MULSS and two ADDSS instructions, so the one dot product we get out is worth 5 FLOPs. Now we would of course like to make use of our shiny SIMD hardware to accelerate the computation. There are a few options of how we can approach this:

Pseudo-SIMD: Bolting SIMD on top of AoS

Our current data layout is an "array of structures" (AoS). As we shall see further below, AoS isn't a particularly good choice for vectorization and SIMDifying such code won't yield peak performance. Yet it can be done and depending on the used SIMD instruction set it may even provide some speed-up.

Here's the idea: Instead of loading the three scalar components into separate scalar registers, we store all of them in a single SIMD register. In the case of SSE2, n = 4 for single precision floats, so one of the elements in the register is a dummy value that we can ignore. Now we can multiply the SIMD register with itself to get the squared components (plus the ignored fourth one). Then we need to horizontally add up the three squares that we are interested in; in pseudo-F#:


Graphically:

Although we were able to replace three MULSS with a single MULPS and therefore express 5 FLOPs through only 3 arithmetic instructions, there are multiple issues with this idea: 
  1. We need three scalar loads plus shuffling operations to place the three components in a single SIMD register.
  2. We effectively only use 3 of the 4 vector lanes, as we operate on Vec3f, not Vec4f structs. This problem becomes even more severe with wider vector paths.
  3. We use vector operations only for multiplication, but not for addition.
  4. Horizontal operations like adding the components are comparatively slow, require shuffling the components around and extracting a single scalar (the result).
  5. We still only compute one dot product per iteration.
Another example of this kind of pseudo-vectorization can be found in Microsoft's SIMD sample pack (ray tracer sample). Yet, for all the reasons mentioned above, you should avoid this approach whenever possible.

On-the-fly vectorization

If you recall our "Hello, world!" example from Part I, then you may also remember that vectorizing our loop meant that we computed n results per iteration instead of one. And that is what we have to achieve for our current dot-product example as well: We need to compute n dot products in each iteration. How can we accomplish this?

Imagine our data (array of Vec3f structs) to form a 3×m matrix (three rows, m columns). Each column represents a Vec3f, each row the x, y or z components of all Vec3fs. In the previous section, we tried to vectorize along the columns by parallelizing the computation of a single dot product—and largely failed.

The reason is that the data parallelism in this example can really be found along the rows of our imagined matrix, as each dot product can be computed independently of, and thus in parallel with, the others. Furthermore m, the number of Vec3fs, is typically much larger than n, so we don't face problems in utilizing the full SIMD width. The wider the SIMD vector is, the more dot products we can compute per iteration.

As with our example from last time, the vectorized version of the algorithm is actually very similar to the plain scalar one. That's why vectorizing a data-parallel problem isn't really hard once you get the idea. The only difference is that we don't handle individual floats, but chunks of n floats:
  1. Load n x values and square them
  2. Load n y values and square them
  3. Load n z values and square them
  4. Add the n x, y and z values
  5. Store the n resulting dot products
In pseudo-F# the algorithm is now

Graphically:

The good part about this version is that it uses vector arithmetic instructions throughout, doing n times the work of its scalar counterpart for a total of 20 FLOPs per iteration.

The bad part is the one labeled "scalar loads & shuffling" in the picture above: Before we can compute the n results, we have to gather n x, y and z values, but our data is still laid out in memory as an AoS, i.e. ordered [xyzxyzxyzxyzxyz...]. Loading logically successive [x0 x1 x2 x3 ...] values thus requires indexed/indirect load instructions (vector gather). SIMD instruction sets without gather/scatter support, like SSE2, have to load the data using n conventional scalar loads and appropriately place them in the vector registers, hurting performance considerably.

Approaching nirvana: Structure of arrays (SoA)

To avoid this drawback, maybe we should just store the data in a SIMD-friendly way, as a Structure of Arrays (SoA): Instead of modeling the vector field u as a number of Vec3f structs, we consider it to be an object of three float arrays:
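In C# terms (the post's own snippets are F#), such a layout boils down to something like this sketch with made-up member names:

// Rough sketch of the SoA layout: one array per component.
class Vec3fField {
    public float[] X;   // all x components
    public float[] Y;   // all y components
    public float[] Z;   // all z components
}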

This is indeed very similar to switching from a column-major to a row-major matrix layout, because it changes our data layout from [xyzxyzxyz...] to [xxxxx...yyyyy...zzzzz...].

Observe how this implementation differs from the previous one only in that we index the components instead of the vectors:

And yet it saves us a lot of loads and shuffling, resulting in pure, 100% vectorized SIMD code:
As this version really only replaces the scalar instructions with their vector equivalents and thus executes 20 FLOPs in each iteration, we should now indeed get about 4x the performance of the scalar version and a much more favourable arithmetic-to-branch/load/store ratio.

In fact, once the hard part of the work is done (switching from AoS to an SoA data layout), many compilers can even automatically vectorize loops operating on that kind of data.

Conclusions

Vectorizing OO code needs some getting used to, the SoA view of the world may be a bit alien to non-APL programmers. If you can foresee the need for vectorizing your code at some point in the future, it may be wise to use an SoA layout from the get-go instead of having to rewrite half of your code later on. Experience certainly helps in identifying data-parallel problems; but as a rule of thumb good candidates for extracting data parallelism are those problems where thinking in terms of operations that apply to a whole array/field of items comes naturally.

In part III we are going to discuss concrete implementations of the above concepts using F#, RyuJIT and System.Numerics.Vectors and compare the performance of the different versions—once they sorted out this issue.

Sunday, June 7, 2015

SIMD Fundamentals. Part I: From SISD to SIMD

For a long time, .NET developers didn't have to care about SIMD programming, as it was simply not available to them (except maybe for Mono's Mono.Simd). Only recently has Microsoft introduced a new JIT compiler for .NET, RyuJIT, which, in conjunction with a special library, System.Numerics.Vectors, offers access to the SIMD hardware of modern CPUs. The goal of this three-part series of articles is therefore to introduce the fundamental ideas of SIMD to .NET developers who want to make use of all the power today's CPUs provide.

Parallelism in modern CPUs

In recent years CPUs gained performance mainly by increasing parallelism on all levels of execution. The advent of x86 CPUs with multiple cores established true thread-level parallelism in the PC world. Solving the "multi-core problem", i.e. the question of how to distribute workloads between different threads in order to use all that parallel hardware goodness, ideally (semi-)automatically, suddenly became and continues to be one of the predominant goals of current hardware and software research.

Years before the whole multi-core issue started, CPUs already utilized another kind of parallelism to increase performance: Instruction level parallelism (ILP) was exploited via pipelining, super-scalar and out-of-order execution and other advanced techniques. The nice thing about this kind of parallelism is that your average Joe Developer doesn't have to think about it; it's all handled by smart compilers and smart processors.

But then in 1997 Intel introduced the P55C and with it a new 64-bit-wide SIMD instruction set called MMX ("multimedia extension"; after all, "multimedia" was "the cloud" of the 90s). MMX made available a level of parallelism new to most PC developers: data level parallelism (DLP). Contrary to ILP however, DLP requires specifically designed code to do any good. This was true for MMX and it remains true for its successors like Intel's 256-bit-wide AVX2. Just as with multithreading, programmers need to understand how to use those capabilities properly in order to exploit the tremendous amounts of floating point horsepower.

From SISD to SIMD

Whenever I learn new concepts, I like to first think of examples and generalize afterwards (inductive reasoning). In my experience that's true for most people, so let us therefore start with the "Hello, world!" of array programming: Suppose you have two floating point arrays, say xs and ys of length m and, for some reason, you want to add those arrays component-wise and store the result in an array zs. The conventional "scalar" or SISD (Single Instruction Single Data, see Flynn's taxonomy) way of doing this looks like this (C# syntax)

for (var i = 0; i < m; ++i) {
    zs[i] = xs[i] + ys[i];
}

Fetch two floats, add them up and store the result in zs[i]. Easy:
For this specific example, adding SIMD (Single Instruction Multiple Data) is almost trivial: For simplicity, let us further assume that m is evenly divisible by n, the vector lane width. Now instead of performing one addition per iteration, we add up xs and ys in chunks of n items, in pseudo-C# with an imagined array range expression syntax

for (var i = 0; i < m; i += n) {
    zs[i:(i+n-1)] = xs[i:(i+n-1)] + ys[i:(i+n-1)];
}

We fetch n items from xs and n items from ys using vector loads, add them up using a single vector add and store the n results in zs. Compared to the scalar version, we need to decode fewer instructions and perform n times the work in each iteration. Think of each SIMD register as a window, n values wide, into an arbitrarily long stream (array) of scalar values. Each SIMD instruction modifies n scalar values of that stream at once (that's where the speed-up comes from) and moves on to the next chunk of n values:
So SIMD really is just a form of array processing, where you think in terms of applying one operation (single instruction) to a lot of different elements (multiple data), loop-unrolling on steroids. And that's already really all you need to know about SIMD, basically.
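With System.Numerics.Vectors, the pseudo-code above can be written as real C# along these lines (a sketch; as before, m is assumed to be a multiple of the lane width n):

// Sketch: Vector<float>.Count plays the role of n (e.g. 8 on AVX hardware).
var n = Vector<float>.Count;
for (var i = 0; i < m; i += n) {
    var vx = new Vector<float>(xs, i);   // vector load of n floats from xs
    var vy = new Vector<float>(ys, i);   // vector load of n floats from ys
    (vx + vy).CopyTo(zs, i);             // vector add + store of n results
}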

Yet, this example is perhaps the most obvious data parallel case one could think of. It gets a whole lot more interesting once you add OOP and the data layout this paradigm promotes to the mix. We will look into the issue of vectorizing typical OOP code in the next part of this series.

Thursday, June 4, 2015

NativeInterop 2.4.0

NativeInterop v2.4.0 is ready and awaiting your download from NuGet. This version brings a revised version of Buffer.Copy tuned for .NET 4.6/RyuJIT: by using a 16-byte block copy, the JITter can generate movq (x86) or movdqu (x64) instructions for further improved performance. Enjoy!
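The idea behind the 16-byte blocks is easy to sketch (this is not the actual F# implementation in NativeInterop, just a rough C# illustration):

// Sketch: copying two longs per step moves 16 bytes at a time, which the
// JIT can turn into wide moves as described above.
struct Block16 { public long A, B; }

static unsafe void BlockCopy16(byte* src, byte* dst, int byteCount) {
    var blocks = byteCount / sizeof(Block16);
    var s = (Block16*)src;
    var d = (Block16*)dst;
    for (var i = 0; i < blocks; ++i)
        d[i] = s[i];                          // 16 bytes per iteration
    for (var i = blocks * sizeof(Block16); i < byteCount; ++i)
        dst[i] = src[i];                      // remaining tail bytes
}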


Now I just have to figure out where the larger performance drop for large data sizes comes from compared to the other methods. memcpy somehow reaches approx. 20 GB/s beyond the L3 cache.

Btw., I have a further post on SIMD programming with .NET almost ready, but I can't publish it yet due to problems with the current version of System.Numerics.Vectors in combination with .NET 4.6/VS 2015 RC. Hope that'll get fixed soon!

Friday, November 7, 2014

A quick look at RyuJIT CTP 5

Microsoft's JIT team just released CTP 5 of "RyuJIT", its next generation JIT compiler for .NET. I just got around to testing it with the SIMD version of my X-ray simulator. The updated benchmark results look like this:


Yes, performance actually dropped with the latest release. Again, the AABB ray intersection test is the bottleneck and this time the generated machine code is even worse compared to CTP3:


Let's just hope this is a temporary regression.

Wednesday, July 16, 2014

Methods for Reading Structured Binary Data: Benchmarks & Comparisons

In my previous post I demonstrated how one can use the NativeInterop library to easily implement file I/O for structured binary data formats, STL in this particular case. In that example, we used NativeInterop.Stream.ReadUnmanagedStructRange to read a sequence of bytes from a file as an array of STL triangle structs.

When implementing ReadUnmanagedStructRange I had these design goals in mind:
  • Ease-of-use (no user-side tinkering with unsafe code)
  • Portability (after all, NativeInterop is a portable class library)
  • Genericity (should work with any unmanaged data type)
  • Efficiency
Today's post concerns how I tried to fulfill the first three bullet points without sacrificing too much of the last one ("efficiency"/"performance").

Choices

Implementing ReadUnmanagedStructRange (and its dual, WriteUnmanagedStructRange) essentially requires a method to convert a stream of bytes to a (managed) array of the target type (which is supposed to be unmanaged/blittable). In the context of reading binary STL, we would therefore need to somehow convert a byte[] to an STLTriangle[].

In a language like C that guarantees neither type nor memory safety we would simply treat the raw byte array (represented as a byte*) as an STLTriangle* and be done. And while that approach is possible in C# as well, it requires unsafe code at every point we want to access the STL data. Put differently: As there is no way to change the "runtime type" of a .NET array (certainly for good reasons!), the solution to our problem boils down to either...
  1. "Manually" construct objects of the target type from the byte[] data and store them in the result array, or...
  2. Unsafely copy the contents of the raw byte[] to an STLTriangle[] ("memcpy"), without explicitly or implicitly creating intermediate/temporary objects
Both options can be implemented in .NET in multiple ways; for option (1) that is
  • BinaryReader.ReadXXX: read individual fields directly from the stream
  • BitConverter.ToXXX: convert small chunks of the byte[] to individual fields of the target data type
  • Buffer.BlockCopy: read multiple fields of homogeneous type at once
  • Marshal.PtrToStructure: interpret larger chunks of the byte[] as complete STLTriangle structs
For option (2) ("memcpy" approach) we will consider
  • Simple, unsafe C# code (*dst++ = *src++)
  • Marshal.Copy
  • memcpy (via P/Invoke)
  • cpblk CIL instruction
  • NativeInterop.Buffer.Copy using a custom blocked memcpy implementation (ReadUnmanagedStructRange uses this function under the hood)
For our sample scenario, we assume that the STL data is already available in-memory as a byte[]; otherwise we would only be measuring I/O transfer speed. We also don't want to measure GC performance; therefore the following implementations will use a pre-allocated result array ("tris") to store the re-interpreted STLTriangle data.

BinaryReader.ReadXXX

Probably the most straight-forward approach for parsing a binary file is using a BinaryReader to get the data for individual fields of the target data structure and then create the data structure from that data. An implementation for reading an STL file might look like the following bit of code:

public static void BinaryReaderRead(byte[] triBytes, STLTriangle[] tris) {
    using (var br = new BinaryReader(new MemoryStream(triBytes))) {
        for (int i = 0; i < tris.Length; ++i) {
            var normX = br.ReadSingle();
            var normY = br.ReadSingle();
            var normZ = br.ReadSingle();
            var aX = br.ReadSingle();
            var aY = br.ReadSingle();
            var aZ = br.ReadSingle();
            var bX = br.ReadSingle();
            var bY = br.ReadSingle();
            var bZ = br.ReadSingle();
            var cX = br.ReadSingle();
            var cY = br.ReadSingle();
            var cZ = br.ReadSingle();
            var abc = br.ReadUInt16();
            tris[i] = new STLTriangle(
                new STLVector(normX, normY, normZ),
                new STLVector(aX, aY, aZ),
                new STLVector(bX, bY, bZ),
                new STLVector(cX, cY, cZ),
                abc);
        }
    }
}

Note that I'm using a MemoryStream here to simulate reading from a Stream object while avoiding disk I/O.

BitConverter.ToXXX

I guess revealing upfront that the BinaryReader approach isn't exactly the most efficient one will hardly surprise my dear readers. The next optimization step could therefore be to read the whole file into a flat byte[] at once and extract the data using BitConverter:

public static void BitConverterTo(byte[] triBytes, STLTriangle[] tris) {
    for (int i = 0; i < tris.Length; ++i) {
        var offset = i * STLTriangle.Size;
        var normX = BitConverter.ToSingle(triBytes, offset);
        var normY = BitConverter.ToSingle(triBytes, offset + 4);
        var normZ = BitConverter.ToSingle(triBytes, offset + 8);
        var aX = BitConverter.ToSingle(triBytes, offset + 12);
        var aY = BitConverter.ToSingle(triBytes, offset + 16);
        var aZ = BitConverter.ToSingle(triBytes, offset + 20);
        var bX = BitConverter.ToSingle(triBytes, offset + 24);
        var bY = BitConverter.ToSingle(triBytes, offset + 28);
        var bZ = BitConverter.ToSingle(triBytes, offset + 32);
        var cX = BitConverter.ToSingle(triBytes, offset + 36);
        var cY = BitConverter.ToSingle(triBytes, offset + 40);
        var cZ = BitConverter.ToSingle(triBytes, offset + 44);
        var abc = BitConverter.ToUInt16(triBytes, offset + 48);
        tris[i] = new STLTriangle(
            new STLVector(normX, normY, normZ),
            new STLVector(aX, aY, aZ),
            new STLVector(bX, bY, bZ),
            new STLVector(cX, cY, cZ),
            abc);
    }
}

Here we convert chunks of four bytes to the single precision floating point fields of an STLTriangle, representing the components of the vertices and the normal vector. The last component is the 2-byte attribute count field.

Buffer.BlockCopy

We can further refine the previous approach by extracting not single floats from the byte[], but all vertex/normal data for each single triangle at once using Buffer.BlockCopy:

public static void BufferBlockCopy(byte[] triBytes, STLTriangle[] tris) {
    var coords = new float[12];
    for (int i = 0; i < tris.Length; ++i) {
        var offset = i * STLTriangle.Size;
        System.Buffer.BlockCopy(triBytes, offset, coords, 0, 48);
        var abc = BitConverter.ToUInt16(triBytes, offset + 48);
        tris[i] = new STLTriangle(
            new STLVector(coords[0], coords[1], coords[2]),
            new STLVector(coords[3], coords[4], coords[5]),
            new STLVector(coords[6], coords[7], coords[8]),
            new STLVector(coords[9], coords[10], coords[11]),
            abc);
    }
}
 
Unfortunately Buffer.BlockCopy can only handle arrays whose elements are of primitive type (in this case, we copy from a byte[] to a float[]). We can therefore not use this approach to copy all bytes at once from the byte[] to an STLTriangle[].

Marshal.PtrToStructure

The Marshal class provides a method PtrToStructure (or alternatively in generic form), which essentially reads an (unmanaged) struct from an arbitrary memory address:

public static unsafe void MarshalPtrToStructureGeneric(byte[] triBytes, STLTriangle[] tris) {
    var triType = typeof(STLTriangle);
    fixed (byte* pBytes = &triBytes[0])
    {
        for (int i = 0; i < tris.Length; ++i) {
            var offset = i * STLTriangle.Size;                    
            tris[i] = Marshal.PtrToStructure<STLTriangle>(new IntPtr(pBytes + offset));
        }
    }
}

*dst++ = *src++

Essentially the same effect as with Marshal.PtrToStructure can be achieved with a little bit of unsafe C#:

public static unsafe void UnsafePointerArith(byte[] triBytes, STLTriangle[] tris) {
    fixed (byte* pbuffer = triBytes)
    fixed (STLTriangle* ptris = tris)
    {
        var pSrc = (STLTriangle*)pbuffer;
        var pDst = ptris;
        for (int i = 0; i < tris.Length; ++i) {
            *pDst++ = *pSrc++;
        }
    }
}

Here we treat the (fixed) byte input array as an STLTriangle*, which we can then dereference to store each triangle's data in the result array of type STLTriangle[]. Note that you cannot implement this kind of operation in a generic way in C#, as C# does not allow dereferencing pointers to objects of arbitrary type.

Marshal.Copy

The aforementioned unsafe C# approach already is a form of memcpy, albeit an unoptimized one (we copy blocks of 50 bytes in each iteration); furthermore it's specialized for copying from byte[] to STLTriangle[]. Marshal.Copy on the other hand is a little more generic as it can copy any array of primitive elements to an arbitrary memory location (and vice versa):

public static unsafe void ConvertBufferMarshalCopy(byte[] triBytes, STLTriangle[] tris) {
    fixed (STLTriangle* pTris = &tris[0]) {
        Marshal.Copy(triBytes, 0, new IntPtr(pTris), triBytes.Length);
    }
}

memcpy (P/Invoke)

As we already concluded that we essentially need an efficient way to copy one block of memory to some other location in memory, why not simply use the system-provided, highly optimized, memcpy implementation?

[DllImport("msvcrt.dll", EntryPoint = "memcpy", CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
static extern IntPtr memcpy(IntPtr dest, IntPtr src, UIntPtr count);
 
public static unsafe void ConvertBufferMemcpy(byte[] triBytes, STLTriangle[] tris) {
    fixed (byte* src = triBytes)
    fixed (STLTriangle* dst = tris)
    {
        memcpy(new IntPtr(dst), new IntPtr(src), new UIntPtr((uint)triBytes.Length));
    }
}

Easy. Of course, this introduces a dependency on Microsoft's C runtime library msvcrt.dll and is thus not platform independent.

cpblk

Interestingly, the CIL provides a specialized instruction just for copying blocks of memory, namely cpblk. A few years back, Alexander Mutel used cpblk in his nice performance comparison of different memcpy methods for .NET. To my knowledge, this instruction is currently not directly accessible from either C# or F# (not sure about C++/CLI, though). Yet, we can generate the necessary instructions using a bit of runtime code generation; here I'm using Kevin Montrose's excellent Sigil library:

// parameters: src, dst, length (bytes)
static Action<IntPtr, IntPtr, uint> CpBlk = GenerateCpBlk();
 
static Action<IntPtr, IntPtr, uint> GenerateCpBlk() {
    var emitter = Sigil.Emit<Action<IntPtr, IntPtr, uint>>.NewDynamicMethod("CopyBlockIL");
    // emit IL
    emitter.LoadArgument(1); // dest
    emitter.LoadArgument(0); // src
    emitter.LoadArgument(2); // len
    emitter.CopyBlock();
    emitter.Return();
    // compile to delegate
    return emitter.CreateDelegate();
}
 
public static unsafe void ConvertBufferCpBlk(byte[] triBytes, STLTriangle[] tris) {
    fixed (byte* src = triBytes)
    fixed (STLTriangle* dst = tris)
    {
        CpBlk(new IntPtr(src), new IntPtr(dst), (uint)triBytes.Length);
    }
}

While it's a bit clunky that code generation is required to get access to cpblk, the nice thing about it is that it's platform-independent: any CLI-compliant implementation must provide it. There is even hope that the next version of F# will come with an updated version of its pointer intrinsics library that will also feature access to cpblk (cf. Additional intrinsics for the NativePtr module and the corresponding pull request).

NativeInterop.Buffer.Copy

My own little NativeInterop helper library also has tools for moving around stuff in memory. For instance, the Buffer module contains a method Copy that copies an input array of type T to an array of type U. Both T and U must be unmanaged types (Note: Neither the C# compiler nor the CLR can enforce this constraint!). Under the covers, Buffer.Copy(T[], U[]) creates an empty result array and then copies the input bytes using a custom block copy implementation to the result array. Using Buffer.Copy is simple:

public static void ConvertBufferBufferConvert(byte[] triBytes, STLTriangle[] tris) {
    NativeInterop.Buffer.Copy(triBytes, tris);
}

Unfortunately, I couldn't use any of the other mem-copy methods to implement Buffer.Copy: they are either not generic enough (Marshal.Copy, Buffer.BlockCopy), introduce dependencies on native libraries (memcpy via P/Invoke), or require runtime code generation or unsafe C# code, neither of which is available in Portable Class Libraries like NativeInterop (cpblk, *dst++=*src++).

I therefore experimented with different memcpy algorithms (different block sizes, different degrees of loop unrolling, with or without considering alignment...) and—for the time being—settled on a comparatively simple, aligned qword copying mechanism that offers decent performance on x64 with the current x64 JIT.
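Stripped of the generics, such a qword copy looks roughly like this (a C# sketch on raw byte pointers; the real implementation is F# and operates on whole arrays):

// Sketch: align the destination to 8 bytes, copy qwords, then the tail bytes.
static unsafe void AlignedQwordCopy(byte* src, byte* dst, int byteCount) {
    var i = 0;
    // copy single bytes until dst is 8-byte aligned
    while (i < byteCount && ((long)(dst + i) & 7) != 0) {
        dst[i] = src[i];
        ++i;
    }
    // main loop: one qword per iteration
    for (; i + sizeof(ulong) <= byteCount; i += sizeof(ulong)) {
        *(ulong*)(dst + i) = *(ulong*)(src + i);
    }
    // remaining tail bytes
    for (; i < byteCount; ++i) {
        dst[i] = src[i];
    }
}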

Multi-Threading

For some of the methods we will also look at a multi-threaded variant. Essentially, all of those versions look similar to this (here: memcpy):

public static unsafe void ConvertBufferMemcpyMultiThreaded(byte[] triBytes, STLTriangle[] tris) {
    var threadcount = Environment.ProcessorCount;
    var chunksize = triBytes.Length / threadcount;
    var remainder = triBytes.Length % threadcount;
 
    fixed (byte* pBytes = &triBytes[0])
    fixed (STLTriangle* pTris = &tris[0])
    {
        var tasks = new Action[threadcount];
        for (var i = 0; i < threadcount - 1; ++i) {
            var offset = i * chunksize;
            var newSrc = new IntPtr(pBytes + offset);
            var newDst = new IntPtr((byte*)pTris + offset);
            tasks[i] = () => memcpy(newDst, newSrc, new UIntPtr((uint)chunksize));                    
        }
 
        var finalOffset = (threadcount - 1) * chunksize;
        var finalSrc = new IntPtr(pBytes + finalOffset);
        var finalDst = new IntPtr((byte*)pTris + finalOffset);
        tasks[threadcount - 1] = () => memcpy(finalDst, finalSrc, new UIntPtr((uint)(chunksize + remainder)));
 
        Parallel.Invoke(tasks);
    }
}

Given that copying (large) chunks of memory should be a bandwidth-limited problem, multi-threading shouldn't help much, at least for efficient implementations. Methods with a lot of overhead might profit a little more.

Benchmark

As a benchmark problem, I chose to create fake STL data from 2^6 up to 2^20 triangles. As each triangle struct is 50 bytes, the total amount of data transferred (read + write) varies between approx. 6 kB up to approx. 100 MB and should thus stress all cache levels.

Each method described above ran up to 10 000 times with a time-out of 5 s for each of the 15 problem sizes. To minimize interference from other processes, I set the process priority to "high" and a garbage collection is kicked off before each round.

System.Diagnostics.Stopwatch, which I used to measure transfer times, has a resolution of 300 ns on my system. For very small working sets and the fastest methods, that's already too coarse. Of course I could have measured the total time for, say, 10 000 runs and just computed the average. Yet, it turns out that even with sufficient warm-up, there are still enough outliers in the measured timings to skew the result. After some experimenting, I decided to use the mean of the values below the 10th percentile instead. That seems to be more stable than relying on the timings of individual runs and also excludes extreme outliers. Still, I wouldn't trust the measurements for fewer than 2^8 triangles (25 kB) too much.
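For reference, that statistic boils down to a couple of lines (a sketch; timings is assumed to hold the measured per-run durations):

// Sketch: mean of the fastest 10 % of all measured runs (requires System.Linq).
static double RobustMean(long[] timings) {
    var sorted = timings.OrderBy(t => t).ToArray();   // fastest runs first
    var count = Math.Max(1, sorted.Length / 10);      // the lowest 10 % of the runs
    return sorted.Take(count).Average();              // their mean is the reported timing
}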

I ran the benchmarks on an Intel Core i7-2600K @ 3.4 - 4.2 GHz and 32 GiB of DDR3-1600 RAM (PC3-12800, dual channel; peak theoretical transfer rate: 25.6 GB/s) under Windows 8.1 Pro. Both the CLR's current x64 JIT compiler "JIT64" and RyuJIT (CTP 4) received the opportunity to show off their performance in the benchmarks.

Results & Analysis

First let's have a look at the results from JIT64 (click for full-size):


Ignoring the multithreaded versions for now (dashed lines), it is clear immediately that memcpy (P/Invoke) offers great overall performance for both the smallest and the largest data sets. Marshal.Copy and cpblk come as a close second. Unsafe C# (*dst++ = *src++) offers stable, but comparatively poor performance.

NativeInterop's Buffer.Copy doesn't even come close to the fastest methods for small data sets, but offers comparable performance for larger sets. Something in its implementation is generating way more overhead than necessary... That "something" turns out to be the allocation of GCHandles for fixing the managed arrays: F# 3.1 doesn't support a "fixed" statement as C# does. To check whether that truly could be the source of the overhead, I implemented a version of the cpblk method that uses GCHandle.Alloc instead of fixed and—lo and behold—this slowed down cpblk to approximately the same speed as Buffer.Copy (light blue).

Unsurprisingly, the multithreaded versions come with quite some overhead, so they can't compete for < 500 kB. Only for problem sizes that fit into Sandy Bridge's L3 cache do we see some nice scaling. Problem sizes beyond the L3 cache size are again limited by the system's DRAM bandwidth and don't profit from multi-threading.

The performance of all other (non-memcpy-like) methods is abysmal at best. We best just skip them...

How does this picture change once we switch to an (early!) preview of the upcoming RyuJIT compiler? As it turns out, not by much; yet something interesting is still happening:


Suddenly, "*dst++ = *src++" has become one of the fastest implementations. What's going on here? Let's have a look at the generated assembly for *dst++ = *src++; first let's see what JIT64 produces:

lea rdx,[r9+r8]
lea rax,[r9+32h]
mov rcx, r9
mov r9, rax
mov rax, qword ptr [rcx]
mov qword ptr [rsp+20h], rax
mov rax, qword ptr [rcx+8]
mov qword ptr [rsp+28h], rax
mov rax, qword ptr [rcx+10h]
mov qword ptr [rsp+30h], rax
mov rax, qword ptr [rcx+18h]
mov qword ptr [rsp+38h], rax
mov rax, qword ptr [rcx+20h]
mov qword ptr [rsp+40h], rax
mov rax, qword ptr [rcx+28h]
mov qword ptr [rsp+48h], rax
mov ax, word ptr [rcx+30h]
mov word ptr [rsp+50h], ax
lea rcx, [rsp+20h]
mov rax, qword ptr [rcx]
mov qword ptr [rdx], rax
mov rax, qword ptr [rcx+8]
mov qword ptr [rdx+8], rax
mov rax, qword ptr [rcx+10h]
mov qword ptr [rdx+10h], rax
mov rax, qword ptr [rcx+18h]
mov qword ptr [rdx+18h], rax
mov rax, qword ptr [rcx+20h]
mov qword ptr [rdx+20h], rax
mov rax, qword ptr [rcx+28h]
mov qword ptr [rdx+28h], rax
mov ax, word ptr [rcx+30h]
mov word ptr [rdx+30h], ax
inc r11d
cmp r11d, r10d
jl  00007FFE57865410


Hm, so it first moves data from [rcx] (offset into the original byte[]) to some temporary at [rsp+20h] and then copies that to the destination [rdx] (offset into the result STLTriangle[]). That's obviously one step that's not necessary and which RyuJIT could improve upon (there are 28 mov instructions, but only 12 qword movs plus 2 word movs are required to move 50 bytes from one memory location to another). So, what does RyuJIT really do?

lea r9, [rcx+32h]
lea r10, [rax+32h]
movdqu xmm0, xmmword ptr [rax]
movdqu xmmword ptr [rcx], xmm0
movdqu xmm0, xmmword ptr [rax+10h]
movdqu xmmword ptr [rcx+10h], xmm0
movdqu xmm0, xmmword ptr [rax+20h]
movdqu xmmword ptr [rcx+20h], xmm0
mov r11w, word ptr [rax+30h]
mov word ptr [rcx+30h], r11w
inc r8d
cmp edx, r8d
mov rax, r10
mov rcx, r9
jg 00007FFE57884CE8

We see that RyuJIT is able to not only remove the unnecessary intermediate load/store ops, but in addition it also issues (unaligned) SIMD mov instructions to copy 16 bytes at a time. This optimization also works for NativeInterop.Buffer.Copy: Once I increased the block size to 16 bytes (or larger), performance became identical to that of *dst++=*src++ in the RyuJIT case (aside from the overhead for small problem sizes due to the GCHandle allocations).

Conclusion

When your only concern is reading from/writing to a file on a comparatively slow disk, all of the above shouldn't bother you much. Most of the methods will simply be "fast enough". As soon as you start to move around data that's already in memory (e.g. cached) or that resides on a very fast SSD, however, choosing the right method becomes critical.

While NativeInterop.Buffer.Copy isn't the fastest of the bunch (yet), its performance is competitive for medium to large problem sizes and in any case orders of magnitude faster than the usual BinaryReader approach. If you want convenience, portability and genericity while maintaining reasonable performance, NativeInterop provides a good all-round solution, I believe. If you want raw speed, though, use memcpy (or whatever may be available on your platform).

Method                                                Convenience  Portability  Safety  Efficiency  Genericity
BinaryReader.ReadXXX                                       +            +          +         -           -
BitConverter.ToXXX                                         -            +          o         -           -
Buffer.BlockCopy                                           -            +          o         -           -
Marshal.PtrToStructure                                     -            +          o         -           +
*dst++ = *src++                                            o            +          -         o           o
Marshal.Copy                                               +            +          -         +           o
memcpy                                                     o            -          -         +           +
cpblk                                                      -            +          -         +           +
Buffer.Convert/Copy, Stream.ReadUnmanagedStructRange       +            +          -        o/+          +