
Tuesday, April 10, 2012

Low (Resolution) and Order (of applying transformations)

I'm doing a project that involves taking huge images of 400,000,000 pixels (20,000 × 20,000, 700 MB on disk), scaling them down and cutting them into reasonably sized tiles (512 × 512 pixels).


In this post I want to show how a simple code refactoring, merely changing the order in which the same transformations are applied, was rewarded with an enormous 20× performance gain.


As a first step, I had to order an extra 8 GB of RAM for my workstation. Once it arrived (eBay, $100, one week) I could finally click the input file and display it in the Windows image preview. Before that, my workstation, which initially had only 2 GB, hung for 20 minutes every time I accidentally hovered over the file.

The next step was to realize that the .NET 2.0 bitmap classes won't do the trick. But then I was pleasantly surprised to discover that WPF's BitmapSource and its sub-classes (System.Windows.Media.Imaging) are capable of handling these very large images, so I built the code around them and avoided a native C++ byte-cruncher completely. This was a very good start. It made me very happy.
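
Loading the image is nothing fancy, by the way; one way to do it looks roughly like this (a sketch, the file path is just a placeholder):

    // Decode the huge source image into a WPF BitmapSource (sketch; path is a placeholder).
    // BitmapCacheOption.OnLoad reads the whole file up front - hence the extra RAM.
    BitmapDecoder decoder = BitmapDecoder.Create(
        new Uri(@"D:\slides\huge_20000x20000.tif"),
        BitmapCreateOptions.PreservePixelFormat,
        BitmapCacheOption.OnLoad);
    _img = decoder.Frames[0]; // the BitmapSource member used in the code below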

The third step was to wrap the cropped images in DICOM. This is a subject for a dedicated, yet-to-come post about the new multi-frame objects and concatenations. Then came the last two steps, which turned out to be challenging.

My initial code looked like this:

  1. Use ScaleTransform to down-scale the original image to the required resolution
  2. Loop over the down-scaled image on X and Y to do the tiling (The Cut function)
  3. Use CroppedBitmap to do the cropping (in the loop)

Here it is:


  1. Use ScaleTransform to scale the image to the required resolution

    double scale = CompScale(res); // res is the power-of-two resolution level (see the CompScale sketch after this listing)
    TransformedBitmap bmp = new TransformedBitmap();
    bmp.BeginInit();
    bmp.Source = _img; // _img is a BitmapSource holding the original image
    bmp.Transform = new ScaleTransform(scale, scale);
    bmp.EndInit();
    // Now, call Cut with the down-scaled image ...

  2. Loop over the image on X and Y to do the tiling (The Cut function)

    public void Cut(System.Drawing.Size TileSize)
    {
        int numTiles = CalcNumTiles(TileSize.Width, TileSize.Height);
        int rows = _src.PixelHeight;
        int cols = _src.PixelWidth;
        // Loop on Y
        for (int y = 0, Ny = 0, i = 1; y < rows; y += TileSize.Height, Ny++, i++)
        {
            // Take care of last row height
            int h = (TileSize.Height < rows - y ? TileSize.Height : rows - y);
            // Loop on X
            for (int x = 0, Nx = 0; x < cols; x += TileSize.Width, Nx++) 
            {
                // Take care of last column width
                int w = (TileSize.Width < cols - x ? TileSize.Width : cols - x);

  3. Use CroppedBitmap to do the cropping
                CroppedBitmap subImg = new CroppedBitmap(_src, new Int32Rect(x, y, w, h));
               // … do the DICOM stuff with the cropped image
            }
        }
    }
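
By the way, CompScale from step 1 isn't shown above. All it does is map the power-of-two resolution level to a scale factor, something along these lines (a sketch, assuming level 6 is the full-scale 20,000-pixel level, so level 5 is 10K×10K, level 4 is 5K×5K and so on):

    // Sketch of CompScale (the real helper isn't shown in this post).
    // Assumes level 6 is full scale and every lower level halves both dimensions,
    // so level 0 (scale 1/64) fits the whole image into a single 512x512 tile.
    double CompScale(int res)
    {
        const int fullScaleLevel = 6;               // assumed for the 20,000-pixel image
        return 1.0 / (1 << (fullScaleLevel - res)); // level 5 -> 0.5, level 4 -> 0.25, ...
    }
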
This worked but was far from perfect. The first problem I noticed was performance. It took almost an hour to crop a 20,000×20,000 image into 512×512 tiles at all resolutions. After digging deeper into Google and Stack Overflow I found some tips like Freeze(). It didn't do much by itself, but it did lead me to move the whole processing to a separate thread, which turned out to have an amazing effect on performance. I think it might be related to the worker thread not being a UI thread, so many locks and checks are eliminated. The processing time went down to a couple of minutes! Much better, but still not good enough. Maybe it would have been acceptable with the aid of a fancy progress bar?
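
Roughly, the threading change boils down to something like this (a simplified sketch; it assumes _src already holds the down-scaled bitmap from step 1):

    // Simplified sketch: freeze the bitmap the worker will read (frozen Freezables
    // are free-threaded), then run the tiling off the UI thread.
    _src.Freeze();
    var worker = new System.Threading.Thread(() =>
        Cut(new System.Drawing.Size(512, 512)));
    worker.IsBackground = true;
    worker.Start();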

The real showstopper, however, was still buried deep inside the WPF imaging code, code that I don't have access to. Down the testing path, when running all the resolutions one by one, from thumbnail up to full scale, resolutions 4 (5,000×5,000 pixels) and 5 (10K×10K pixels) didn't scale down properly. Resolutions 0 (a single-tile thumbnail), 1 (4 tiles), 2 and 3 were fine. The problem appeared after step 1 of the above process: the down-scaled image (the TransformedBitmap bmp) came out corrupted. It went dark and had an even darker vertical band across its center line. It was obvious that the down-scaling was faulty and that some shift in the pixel bits had messed up the whole image (a bug? Microsoft people?).

When hitting a wall, I prefer taking a break, and that's exactly what I did. I worked on other projects, did some maintenance work, enjoyed the Passover holiday with my family and did some sport. Then, while strolling through the crowded blocks of down-town Tel-Aviv, digesting a great, fun Indian lunch (Mango Lassi, Samosa, Thali, Gulab Jamun, Chai Masala), the coder's muses answered my prayers and whispered in my ears.

At home, after some rest and a shot of espresso, I sat down to refactor the code, which now looked like this:

  1. Loop over the full scale image on X and Y to do the tiling
  2. Use CroppedBitmap to cut a tile from the original image 
  3. Use ScaleTransform to down-scale the tile to the required resolution (in the loop)

Here it is:
public void CutAndScale(System.Drawing.Size FinalTileSize, double scale)
{
    if (scale == 1)
    {
        // If full scale, do the straight cut like before
        Cut(FinalTileSize);
    }
    else
    {
  1. Loop over the full scale image on X and Y to do the tiling (note the up-scaled tile size)

            System.Drawing.Size TileSize = new System.Drawing.Size(
                (int)(FinalTileSize.Width / scale), (int)(FinalTileSize.Height / scale));
            int rows = _src.PixelHeight;
            int cols = _src.PixelWidth;
            // Loop on Y
            for (int y = 0, Ny = 0; y < rows; y += TileSize.Height, Ny++) 
            {
                // Take care of last row height
                int h = (TileSize.Height < rows - y ? TileSize.Height : rows - y);
                // Loop on X
                for (int x = 0, Nx = 0; x < cols; x += TileSize.Width, Nx++) 
                {
                    // Take care of last column width
                    int w = (TileSize.Width < cols - x ? TileSize.Width : cols - x);

  2. Use CroppedBitmap to cut a tile from the original image 

                     CroppedBitmap subImg = new CroppedBitmap(_src, new Int32Rect(x, y, w, h));

  3. Use ScaleTransform to scale the tile to the required resolution
                    TransformedBitmap bmp = new TransformedBitmap();
                    bmp.BeginInit();
                    bmp.Source = subImg;
                    bmp.Transform = new ScaleTransform(scale, scale);
                    bmp.EndInit();
                    // … do the DICOM stuff with the down-scaled tile
                }
            }
        }
    }
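
The calling code then just loops over the resolution levels, roughly like this (again a sketch, not the exact driver):

    // Illustrative driver: produce the tile pyramid for all resolution levels.
    System.Drawing.Size tileSize = new System.Drawing.Size(512, 512);
    for (int res = 0; res <= 6; res++)       // assuming level 6 is full scale
    {
        double scale = CompScale(res);       // see the CompScale sketch above
        CutAndScale(tileSize, scale);
    }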

This is classic refactoring that made a lot of sense. The down-scaling from step 1 of the old flow is now performed inside the tiling loop, on a proportionally bigger full-resolution tile cut from the original image. My goal was to work around the scale-transform 'bug': I assumed that since the transform worked well on smaller images it would work just as well on the cropped tiles. It worked.

But I also assumed that this change in the order of applying the transformations would hurt performance, because I was now doing many more scaling operations, and this is where I got it all wrong! You can imagine how surprised I was when, the first time I hit the "process" button, I immediately got the "Done!" message box. I thought it was a bug and looked into the output folder, which was full of the correct down-scaled cropped tiles?! The new code worked 20 times FASTER!!! Instead of taking two minutes, it finished in a couple of seconds.

Looking back, maybe I can find some reasons for it: every down-scaling is now done on a much smaller source, a single full-resolution tile (1,024×1,024 at half scale) instead of the whole 400-megapixel image, so perhaps WPF is simply much happier never having to deal with a huge intermediate down-scaled bitmap. Still, the total number of pixels being crunched is roughly the same, and I never expected such a dramatic difference.

So, here's what I've learned from all this:
  1. The WPF BitmapSource family of classes is cool. It can efficiently process 400-megapixel images if you use it correctly, which means: do the byte crunching on a non-UI worker thread.
  2. Sometimes solving one problem also solves another one. It doesn't always have to be a 'short blanket' story where solving one problem uncovers a different one.
  3. Shuffling the processing steps is always worth a try, even if you don't know exactly what to expect. In this case, the non-intuitive decision to move the scaling step into the loop made the difference. You can only guess what is going on inside the loops of Microsoft's WPF ScaleTransform when the image is larger than 5,000 × 5,000 pixels.
  4. Avoiding C++ is good too. Not that I'm afraid of C++, I love C++, but from a contractor's point of view, delivering code in my customer's comfort zone is a big plus. Just think of all the project configurations (32/64 bit, static/dynamic libraries, pre-compiled headers) that were avoided, and the C++ developer skills that are not as common as they used to be.
That's it for today. No DICOM this time. As always, questions and comments are most welcome.


