On this page, you'll find estimates of Pillow's performance and a comparison with Pillow-SIMD (a special performance-optimized version of Pillow) and other graphics libraries. Mainly Python libraries are tested.
Unfortunately, there is no single performance metric for these libraries. Different libraries perform operations with different efficiency and sometimes in different ways. It is only possible to measure the performance of a single operation or group of operations. For a good comparison, you need to be sure that the tested libraries give the same outcome for an operation and the same resources are used to perform the operation.
An operation can be performed using different algorithms, and the chosen algorithm can significantly affect the result. For example, if you want to perform a resize, you can use the very fast nearest neighbor algorithm and get very poor quality, or you can use convolution resampling and get reasonable quality for most cases. Depending on your needs, you may or may not consider the nearest neighbor method acceptable.
In these benchmarks, we always assume that you expect results as close as possible to each other. This allows the execution times to be compared directly.
Some libraries can use more processing power than others, for example a graphics processing unit or multiple CPU cores. In some cases this is an advantage because it shrinks execution time: if the execution time using one core is 2 seconds, a library that can use 4 cores may reduce it to 0.5 seconds. Such a library has an advantage over a library that performs the same operation in 1 second but can't use multiple cores.
However, sometimes overall throughput is more important than execution time. If you need to perform the operation on a group of images, you can do it in parallel. In this case, the first library's throughput remains the same: 2 operations per second. The second library's throughput is 4 operations per second. The second library is preferred.
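The arithmetic in the two paragraphs above can be written out explicitly (the timings are the hypothetical ones from the text, not measured values):

```python
# Library A: 2 s per operation on one core, but can spread the work
# across 4 cores.  Library B: 1 s per operation, single-threaded only.
CORES = 4
a_single_core_time = 2.0  # seconds per operation on one core
b_single_core_time = 1.0

# Latency for a single image: A parallelizes internally, B cannot.
a_latency = a_single_core_time / CORES   # 0.5 s
b_latency = b_single_core_time           # 1.0 s

# Throughput for a batch of images, one image per core:
a_throughput = CORES / a_single_core_time   # 2 operations per second
b_throughput = CORES / b_single_core_time   # 4 operations per second
```

So A wins on latency while B wins on throughput, which is why the benchmarks pin the work to a single core.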
In these benchmarks, we measure throughput on a single CPU core, not the minimum achievable execution time.
Libraries with Python bindings are tested using the pillow-perf test suites. Skia is tested using test files. Each test is run 11 times and the mean execution time is calculated.
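The methodology can be sketched with a small timing harness (an illustration only; the actual pillow-perf suites are more elaborate):

```python
import statistics
import time

def benchmark(op, runs=11):
    """Run ``op`` several times and return the mean wall-clock time.

    A minimal sketch of the methodology described above: each test is
    executed ``runs`` times and the mean execution time is reported.
    """
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        op()
        times.append(time.perf_counter() - start)
    return statistics.mean(times)

# Example: time a trivial stand-in operation.
mean_time = benchmark(lambda: sum(range(10_000)))
```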
Image resampling is one of the most common processing operations. Good and predictable quality can be achieved using the convolution-based method.
"Convolution" means that, for each pixel of the final image, we compute a weighted sum over some area ("window") of the source image. The size of the window and the weights depend on the chosen filter. There are many filters; the most common are:
Bilinear — a high-efficiency filter with a small window and a blurry result;
Bicubic — a high-quality filter with a medium window;
Lanczos — a high-quality filter with a large window.
For image downscaling, the window size should be increased proportionally so that all pixels of the original image are taken into account. Unfortunately, not all libraries and software do this. As a result, the output image can look overly sharp and crisp, and in some cases the result only vaguely resembles the original image.
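The window-scaling rule above can be sketched in pure Python for one dimension (an illustration of the idea, not Pillow's actual C implementation; the triangle filter used here corresponds to bilinear):

```python
def resample_1d(src, dst_size, support=1.0):
    """Convolution-based 1-D resize with a triangle (bilinear) filter.

    On downscaling, the filter window is stretched by the scale factor
    so that every source pixel contributes to some output pixel.
    """
    scale = len(src) / dst_size
    filterscale = max(scale, 1.0)  # widen the window when downscaling
    out = []
    for i in range(dst_size):
        center = (i + 0.5) * scale
        left = max(int(center - support * filterscale), 0)
        right = min(int(center + support * filterscale + 1), len(src))
        # evaluate the triangle filter in filter space
        weights = [max(1.0 - abs((j + 0.5 - center) / filterscale), 0.0)
                   for j in range(left, right)]
        total = sum(weights)
        out.append(sum(v * w for v, w in zip(src[left:right], weights)) / total)
    return out
```

With `filterscale` fixed at 1.0 (what the "incorrect" libraries do), downscaling would simply skip most source pixels.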
These are the results of resizing a 4928 × 3280 px image to 210 × 140 px using different software and different filters.
OpenCV doesn't increase the window size on downscaling, although it offers a supersampling resampling method. Obviously, it's meaningless to compare OpenCV's convolution results with the correct convolution implementations. Also, supersampling is a different resampling method, not based on convolutions: it is correct for downscaling but unacceptable for upscaling.
From the very beginning, PIL also didn't increase the window size, except for the ANTIALIAS filter (the real name of this filter is Lanczos). In Pillow 2.7.0 this was finally fixed.
On the other hand, some image libraries do convolutions right, for example ImageMagick. The problem with ImageMagick is that its resampling is very slow: about 20 times slower than Skia, which can also do high-quality convolution resampling.
From the very beginning, PIL and Pillow resampling performance was quite low, similar to ImageMagick's. Pillow 2.7 reversed this trend by introducing several common optimizations such as loop rearrangement and cache-aware transposition.
Charts show median performance in megapixels/s (the higher the better) for resizing the source 2560 × 1600 RGB image to one of four destination sizes using one of the filters. For significant downsampling, Pillow 2.7 is 2–4 times faster than PIL. Pillow 3.3 introduces fixed-point arithmetic for resampling, which is even faster. Pillow 3.4 and 4.3 add optimizations for large target dimensions. In the end, the current Pillow version is up to 7 times faster than the original PIL.
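The fixed-point idea can be illustrated with a tiny sketch: filter weights are scaled to integers once, so the per-pixel work becomes integer multiplications and a final shift (the precision constant here is hypothetical, not Pillow's actual value):

```python
PRECISION_BITS = 8  # hypothetical precision; Pillow's constant differs

def to_fixed(weights):
    """Convert float filter weights to fixed-point integers."""
    return [round(w * (1 << PRECISION_BITS)) for w in weights]

def apply_fixed(pixels, int_weights):
    """Weighted sum using only integer multiplies and one shift."""
    acc = sum(p * w for p, w in zip(pixels, int_weights))
    # add half a unit for round-to-nearest before shifting back down
    return (acc + (1 << (PRECISION_BITS - 1))) >> PRECISION_BITS
```

For the weights (0.25, 0.5, 0.75 scaled appropriately), the fixed-point result matches the floating-point weighted sum to within rounding error, while avoiding float math in the inner loop.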
But this is not the end of the story. Starting with Pillow 3.2, you can use a SIMD-enabled version, Pillow-SIMD, whose resampling performance is significantly higher than Pillow's. The latest Pillow-SIMD compiled with AVX2 is 4–6 times faster than Pillow. In total, Pillow-SIMD resampling performance is 12–35 times higher than the original PIL.
High-quality Gaussian blur can be used to reduce image noise and details. It is also used as a pre-processing stage in computer vision algorithms.
Mathematically, applying a Gaussian blur to an image is the same as convolving the image with a Gaussian function. The amount of blur depends on the standard deviation (sigma). In theory, the Gaussian function is infinite, meaning every pixel of the source image would have to contribute to every pixel of the destination image. In practice, the Gaussian is a rapidly decreasing function, and a window larger than 3 × sigma makes no visible difference.
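The 3 × sigma claim is easy to check numerically (a small illustration, not Pillow code):

```python
import math

def gaussian_weights(sigma, radius):
    """Sampled, normalized 1-D Gaussian kernel of the given radius."""
    w = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    total = sum(w)
    return [v / total for v in w]

# Use a very wide kernel (10 sigma) as a stand-in for the "infinite"
# Gaussian, then measure how much weight lies inside the 3-sigma window.
sigma = 2.0
radius = 20
wide = gaussian_weights(sigma, radius)
inside = sum(w for x, w in zip(range(-radius, radius + 1), wide)
             if abs(x) <= 3 * sigma)
```

The weight inside 3 × sigma comes out above 99%, so truncating the window there loses almost nothing.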
As a result, Gaussian blur performance should depend on the window size and therefore on sigma. For most implementations this is true. But in 2011 the Mathematical Image Analysis Group at Saarland University showed that a Gaussian blur can be very closely approximated by a series of extended box filters. Unlike a true Gaussian filter, a box filter can be applied in constant time relative to the blur radius.
This approach was implemented in Pillow. According to the tests, even if the approximation with box filters can be a little slower with a small sigma, with a larger sigma this gap disappears and the approximation becomes much faster. SIMD gives an additional 2x boost.
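The constant-time property of the box filter can be sketched in pure Python for one dimension (an illustration of the technique; Pillow's real implementation is in C and works on 2-D images):

```python
def box_blur_1d(src, radius):
    """One box-blur pass over a 1-D signal using a running sum.

    The cost per pixel is constant regardless of ``radius``: each step
    adds one entering sample and removes one leaving sample.  Edges
    are handled by clamping to the nearest pixel.
    """
    n = len(src)
    size = 2 * radius + 1
    # running sum of the initial window, clamped at the left edge
    window = sum(src[min(max(i, 0), n - 1)] for i in range(-radius, radius + 1))
    out = []
    for i in range(n):
        out.append(window / size)
        window += src[min(i + radius + 1, n - 1)] - src[max(i - radius, 0)]
    return out

def approx_gaussian_blur_1d(src, radius, passes=3):
    """Approximate a Gaussian blur as repeated box blurs (sketch)."""
    for _ in range(passes):
        src = box_blur_1d(src, radius)
    return src
```

A true Gaussian convolution touches O(radius) samples per pixel, so for large sigma the repeated box blur wins by a wide margin.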
This is a group of operations that is most relevant when an image has an orientation flag stored in EXIF. The group includes rotation by right angles, mirroring, and transposition itself. It differs from regular rotation because these operations are non-destructive (i.e. pixel values are not modified) and can be performed very fast.
Mirroring and rotating by 180° are reasonably fast in most implementations. Pillow had an issue with rotating by 90° and 270° because those operations can use the CPU cache very inefficiently. In Pillow 2.7 a cache-aware algorithm was implemented, and transposition was also added.
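The cache-aware idea is to process the image in small tiles so that both the rows being read and the columns being written stay in cache. A pure-Python sketch of the general technique (the function name and tile size are illustrative, and in Python itself the cache effect is not measurable):

```python
def transpose_blocked(matrix, block=16):
    """Transpose a row-major 2-D list tile by tile.

    A naive transpose reads rows sequentially but writes columns with
    a large stride, evicting cache lines on almost every write.
    Working in ``block`` x ``block`` tiles keeps the working set small.
    """
    rows, cols = len(matrix), len(matrix[0])
    out = [[None] * rows for _ in range(cols)]
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            for r in range(r0, min(r0 + block, rows)):
                for c in range(c0, min(c0 + block, cols)):
                    out[c][r] = matrix[r][c]
    return out
```

Rotating by 90° or 270° is a transpose plus a mirror, so the same tiling applies.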
RGB — general red, green, and blue values;
RGBA — RGB with an alpha channel;
RGBa — RGB values premultiplied by the alpha channel;
L — luminance (grayscale) values, 1 byte per pixel;
LA — luminance values with an alpha channel.
Alpha premultiplication can be used to normalize pixel values before resizing and other transformations.
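Premultiplication itself is a simple per-channel operation. A sketch for 8-bit channels (illustrative only; Pillow performs the actual "RGBA" to "RGBa" conversion in C):

```python
def premultiply(pixel):
    """RGBA -> RGBa for 8-bit channels: scale color by alpha."""
    r, g, b, a = pixel

    def scale(v):
        # integer approximation of round(v * a / 255)
        return (v * a + 127) // 255

    return (scale(r), scale(g), scale(b), a)
```

The point of doing this before resizing is that the color values of fully transparent pixels become zero, so they cannot bleed into visible pixels during interpolation.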
Alpha compositing is used for overlaying two semitransparent images. Unlike alpha blending, alpha compositing mixes color and alpha channels with different coefficients.
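For two pixels, the "over" operator that alpha compositing implements can be sketched as follows (float channels in the 0..1 range; an illustration of the math, not Pillow's implementation):

```python
def composite_over(src, dst):
    """Porter-Duff "over": composite src onto dst.

    Pixels are (r, g, b, a) tuples with float channels in 0..1.  Note
    that the alpha channel and the color channels are combined with
    different coefficients, which is what distinguishes compositing
    from plain alpha blending.
    """
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    out_a = sa + da * (1.0 - sa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)

    def channel(s, d):
        # color is weighted by the alphas, then un-premultiplied
        return (s * sa + d * da * (1.0 - sa)) / out_a

    return (channel(sr, dr), channel(sg, dg), channel(sb, db), out_a)
```

For example, a half-transparent red pixel over a half-transparent blue one yields alpha 0.75 with the red contribution dominating, because the top layer's alpha weights it more heavily.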