Pillow Performance

Benchmarks

On this page, you'll find estimates of Pillow's performance and comparisons of Pillow with Pillow-SIMD (a performance-optimized fork of Pillow) and other graphics libraries. Mainly Python libraries are tested.

Unfortunately, there is no single performance metric for these libraries. Different libraries perform operations with different efficiency and sometimes in different ways. It is only possible to measure the performance of a single operation or group of operations. For a fair comparison, you need to be sure that the tested libraries give the same outcome for an operation and that the same resources are used to perform it.

Same outcome

An operation can be performed using different algorithms, and the chosen algorithm can significantly affect the result. For example, if you want to resize an image, you can use the very fast nearest neighbor algorithm and get very poor quality, or you can use convolution resampling and get reasonable quality in most cases. Depending on your needs, the nearest neighbor method may or may not be acceptable.
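
With Pillow, for instance, the difference is just the resample argument. A minimal sketch for recent Pillow versions (the file name is a placeholder):

    from PIL import Image

    # "photo.jpg" stands in for any reasonably large source image.
    img = Image.open("photo.jpg")
    size = (210, 140)

    # Nearest neighbor: very fast, but each destination pixel copies a
    # single source pixel, so most of the source data is simply ignored.
    fast = img.resize(size, Image.Resampling.NEAREST)

    # Convolution resampling with the Lanczos filter: slower, but every
    # source pixel contributes to the result through the filter window.
    good = img.resize(size, Image.Resampling.LANCZOS)

    fast.save("out_nearest.png")
    good.save("out_lanczos.png")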

In these benchmarks, we always assume that you expect results as close to each other as possible. This allows execution times to be compared directly.

Same resources

Some libraries can use more processing resources than others, for example a graphics processing unit or multiple CPU cores. In some cases this is an advantage because it shortens the execution time. For example, if the execution time using one core is 2 seconds, a library might reduce it to 0.5 seconds using 4 cores. Such a library has an advantage over a library which performs the same operation in 1 second but can't use multiple cores.

However, sometimes overall throughput is more important than execution time. If you need to process a group of images, you can do it in parallel. In this case the first library's throughput remains the same: 2 operations per second (four images on four cores still take 2 seconds), while the second library's throughput rises to 4 operations per second. The second library is preferable.

In these benchmarks, we measure the throughput on a single CPU core, not the minimum achievable execution time.

Test suites

Libraries with Python bindings are tested using pillow-perf test suites. Each test is run 11 times and the mean execution time is calculated.
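
A rough sketch of such a harness (not the actual pillow-perf code; the operation and sizes below are arbitrary examples):

    import time
    from PIL import Image

    def bench(func, runs=11):
        """Run func several times and return the mean wall-clock time."""
        times = []
        for _ in range(runs):
            start = time.perf_counter()
            func()
            times.append(time.perf_counter() - start)
        return sum(times) / len(times)

    img = Image.new("RGB", (2560, 1600))
    mean = bench(lambda: img.resize((320, 200), Image.Resampling.LANCZOS))
    print(f"{mean:.4f} s per operation, {1 / mean:.1f} operations/s")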

Libraries

PIL
Python Imaging Library. Initially released for Python 1.2 in 1995. The last version, 1.1.7, was released on November 15, 2009. Includes image codecs and image manipulation routines.
Pillow
Python Imaging Library (fork). Originally a packaging fork, designed to facilitate more reliable installation from the Python Package Index. Starting with version 2.0 (2013), Pillow supports Python 3 and is actively maintained and developed. docs
Pillow-SIMD
Highly optimized downstream fork of Pillow with performance improvements for common operations, implemented using SIMD instructions. readme
ImageMagick
Very popular image manipulation library with bindings for many languages. Extremely flexible. In these benchmarks the Wand wrapper is used. homepage
OpenCV
OpenCV aims to support real-time computer vision. Originally developed by Intel's research center, it was later supported by Willow Garage and is now maintained by Itseez. homepage
VIPS
libvips is a demand-driven, horizontally threaded image processing library. Compared to similar libraries, libvips runs quickly and uses little memory. In these benchmarks the pyvips wrapper is used. homepage

Results Browser

Convolution Resampling

Image resampling is one of the most common processing operations. Good and predictable quality can be achieved using convolution-based resampling.

"Convolution" means that each pixel of the final image is computed as a weighted sum over some area ("window") of the source image. The size of the window and the weights depend on the chosen filter. There are many filters; the most common are bilinear, bicubic, and Lanczos.

For downscaling, the window size should be increased in proportion to the downscaling factor, so that all pixels of the original image are taken into account. Unfortunately, not all libraries and applications do this. As a result, the output image can look overly sharp and crisp, and in some cases it only vaguely resembles the original image.
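
The key detail is that the filter window is stretched by the scale factor. A deliberately simplified 1-D sketch of the idea (not Pillow's actual implementation):

    def triangle(x):
        """Bilinear (triangle) filter: support 1, weight falls off linearly."""
        x = abs(x)
        return 1.0 - x if x < 1.0 else 0.0

    def downscale_1d(row, out_size):
        """Downscale a 1-D list of pixel values with a bilinear filter."""
        scale = len(row) / out_size        # > 1 when downscaling
        support = 1.0 * scale              # the window grows with the scale
        out = []
        for i in range(out_size):
            center = (i + 0.5) * scale
            lo = max(int(center - support), 0)
            hi = min(int(center + support) + 1, len(row))
            acc = total = 0.0
            for j in range(lo, hi):
                w = triangle((j + 0.5 - center) / scale)
                acc += w * row[j]
                total += w
            out.append(acc / total)
        return out

    # Because the window is widened, every source pixel contributes
    # to some output pixel instead of being skipped.
    print(downscale_1d(list(range(16)), 4))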

These are the results of resizing a 4928 × 3280 px image to 210 × 140 px using different software and different filters. Google Chrome's <canvas> implementation doesn't increase the window size on downscaling. OpenCV doesn't either with its bicubic filter, but it offers a supersampling-based resampling method.

[Comparison images: Google Chrome canvas, OpenCV Cubic, OpenCV Area (supersampling)]

Obviously, it's meaningless to compare OpenCV's Bicubic performance with the correct convolution implementations. Also, supersampling is a different resampling method, not based on convolutions. It is correct for downscaling but unacceptable for upscaling.
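
For reference, the OpenCV modes discussed above are selected with the interpolation flag of cv2.resize. A rough sketch of the three variants being compared (the file name is a placeholder):

    import cv2                      # opencv-python
    from PIL import Image

    src = cv2.imread("photo.jpg")   # placeholder file name
    dst_size = (210, 140)           # cv2.resize takes (width, height)

    # INTER_CUBIC keeps a fixed 4x4 window, so heavy downscaling
    # effectively ignores most of the source pixels.
    cubic = cv2.resize(src, dst_size, interpolation=cv2.INTER_CUBIC)

    # INTER_AREA is OpenCV's supersampling mode: fine for downscaling,
    # not a convolution, and unsuitable for upscaling.
    area = cv2.resize(src, dst_size, interpolation=cv2.INTER_AREA)

    # Pillow's bicubic filter enlarges its window when downscaling.
    pil_cubic = Image.open("photo.jpg").resize(dst_size, Image.Resampling.BICUBIC)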

From the very beginning, PIL also didn't increase the window size for the Bilinear and Bicubic filters, but it did for Antialias (the real name of the filter is Lanczos). In Pillow 2.7.0 this was finally fixed.

[Comparison images: PIL Bicubic, PIL Antialias (Lanczos), Pillow Bicubic]

On the other hand, some image libraries, such as ImageMagick, do convolutions right. The problem with ImageMagick is that its resampling is very slow: about 20 times slower than Skia, which can also do high-quality convolution resampling.

From the very beginning, PIL and Pillow resampling performance was quite low, similar to ImageMagick's. Pillow 2.7 reversed the trend by introducing several common optimizations such as loop rearrangement and cache-aware transposition.

Charts show median performance in megapixels/s (the higher the better) for resizing a 2560 × 1600 RGB source image to one of four destination sizes using one of the filters. For significant downscaling, Pillow 2.7 is 2 to 4 times faster than PIL. Pillow 3.3 introduced fixed-point arithmetic for resampling, which is even faster. Pillow 3.4 and 4.3 added further optimizations for large target dimensions. In the end, the current Pillow version is up to 7 times faster than the original PIL.
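
A rough way to reproduce one cell of such a chart (not the pillow-perf suite itself; the destination size here is an arbitrary example):

    import time
    from PIL import Image

    src = Image.new("RGB", (2560, 1600))     # synthetic source image
    runs = 11

    for name in ("BILINEAR", "BICUBIC", "LANCZOS"):
        resample = getattr(Image.Resampling, name)
        start = time.perf_counter()
        for _ in range(runs):
            src.resize((320, 200), resample)
        mean = (time.perf_counter() - start) / runs
        mpx = src.width * src.height / 1e6
        print(f"{name:8s} {mean * 1000:7.1f} ms   {mpx / mean:7.1f} MPx/s")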

But this is not the end of the story. Starting with Pillow 3.2 you can use Pillow-SIMD, the SIMD-enabled fork. Its resampling performance is significantly higher than Pillow's: the latest Pillow-SIMD compiled with AVX2 is 4 to 6 times faster than Pillow. In sum, Pillow-SIMD resampling is 12 to 35 times faster than the original PIL.

Gaussian Blur

High-quality Gaussian blur can be used to reduce image noise and detail. It is also used as a pre-processing stage in computer vision algorithms.

Mathematically, applying a Gaussian blur to an image is the same as convolving the image with a Gaussian function. The amount of blur depends on the standard deviation (sigma). In theory the Gaussian function is infinite, which would mean every pixel of the source image contributes to every pixel of the destination image. In practice the Gaussian is a rapidly decreasing function, and a window larger than 3 × sigma adds nothing meaningful.
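
The effect of this cutoff is easy to see by building a truncated Gaussian kernel directly; a small illustrative sketch:

    import math

    def gaussian_kernel(sigma):
        """1-D Gaussian weights truncated at 3 * sigma."""
        radius = int(math.ceil(3 * sigma))      # window grows with sigma
        weights = [math.exp(-(x * x) / (2 * sigma * sigma))
                   for x in range(-radius, radius + 1)]
        total = sum(weights)
        return [w / total for w in weights]

    print(len(gaussian_kernel(1.0)))   # 7 taps
    print(len(gaussian_kernel(5.0)))   # 31 taps: the cost grows with sigma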

As a result, Gaussian blur performance should depend on the window size, and therefore on sigma. For most implementations this is true. But in 2011 the Mathematical Image Analysis Group at Saarland University proved that a Gaussian blur can be very closely approximated by a series of extended box filters. Unlike a true Gaussian filter, a box filter can be applied in constant time relative to the blur radius.

This approach was implemented in Pillow 2.7.0. According to the tests, although the box-filter approximation can be a little slower for small sigma values, the gap disappears for larger sigma and the approximation becomes much faster. SIMD gives an additional 2x boost.
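
In Pillow itself this is simply ImageFilter.GaussianBlur; the same idea can also be sketched by hand with stacked box blurs. The radius formula below matches the filter variances and is one common choice, not necessarily the exact one Pillow uses (the file name is a placeholder):

    from PIL import Image, ImageFilter

    img = Image.open("photo.jpg")        # placeholder file name
    sigma = 5.0

    # Pillow's Gaussian blur; since 2.7.0 its cost barely depends on sigma.
    blurred = img.filter(ImageFilter.GaussianBlur(radius=sigma))

    # Approximation by hand: three box blurs of equal radius.
    box_radius = (sigma * sigma + 0.25) ** 0.5 - 0.5
    approx = img
    for _ in range(3):
        approx = approx.filter(ImageFilter.BoxBlur(box_radius))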

Transposition

This is a group of operations that is most relevant when an image has an orientation flag stored in EXIF. The group includes rotation by right angles, mirroring, and transposition itself. It differs from arbitrary-angle rotation in that these operations are non-destructive (i.e. pixel values are not modified) and can be performed very fast.

Mirroring and rotating by 180° are reasonably fast in most implementations. Pillow used to have an issue with rotating by 90° and 270°, because those operations can use the CPU cache very inefficiently. In Pillow 2.7 a cache-aware algorithm was implemented, and transposition was added.
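
In Pillow these operations are exposed through Image.transpose, and ImageOps.exif_transpose applies whatever the EXIF orientation flag requests. A short sketch (the file name is a placeholder):

    from PIL import Image, ImageOps

    img = Image.open("photo.jpg")    # placeholder file name

    # Lossless 90-degree rotation: pixel values are only rearranged.
    rotated = img.transpose(Image.Transpose.ROTATE_90)

    # Mirroring and pure transposition belong to the same group.
    mirrored = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
    swapped = img.transpose(Image.Transpose.TRANSPOSE)

    # Reorient according to the EXIF orientation flag in one call.
    upright = ImageOps.exif_transpose(img)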

Color Conversion

Conversion from RGBA to RGBa (premultiplied alpha) can be used to normalize pixel values before resizing and other transformations.
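
A minimal sketch of how this looks with Pillow's mode conversion (the file name and sizes are placeholders):

    from PIL import Image

    rgba = Image.open("logo.png").convert("RGBA")   # placeholder file name

    # "RGBa" is Pillow's premultiplied-alpha mode: each color channel is
    # multiplied by alpha, so fully transparent pixels can't bleed color
    # into their neighbours during resampling.
    premultiplied = rgba.convert("RGBa")

    resized = premultiplied.resize((210, 140), Image.Resampling.LANCZOS)

    # Convert back to straight (non-premultiplied) alpha afterwards.
    result = resized.convert("RGBA")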

Compositing

Alpha compositing is used to overlay two semitransparent images. Unlike alpha blending, alpha compositing mixes the color and alpha channels with different coefficients.
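
Pillow exposes this as Image.alpha_composite. A small sketch with synthetic layers, with the usual "over" formula spelled out in the comments:

    from PIL import Image

    # Two semitransparent layers of the same size (synthetic placeholders).
    bottom = Image.new("RGBA", (400, 300), (255, 0, 0, 128))
    top = Image.new("RGBA", (400, 300), (0, 0, 255, 64))

    # Per pixel (alpha taken in the 0..1 range) the "over" operator is:
    #   out_a   = top_a + bottom_a * (1 - top_a)
    #   out_rgb = (top_rgb * top_a + bottom_rgb * bottom_a * (1 - top_a)) / out_a
    result = Image.alpha_composite(bottom, top)
    result.save("composited.png")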