Posted on July 17, 2013 in Image Processing and Analysis

This post is the sixth in a series about the Ancestry.com Image Processing Pipeline (IPP). The IPP is the part of the content pipeline that is responsible for digitizing and processing the millions of images we publish to our site.  The core functionality of the IPP is illustrated in the following diagram.

Figure 1. Sequence of image processing operations performed in the IPP

In this post I continue with the material from my previous post (part five) in which I described some of the core image processing operations in the IPP, shown in Figure 1 in the box with the red outline. A source image, shown at the top of the diagram, is processed by the Image Processor, which creates a “recipe” file of the operations to be applied to the source image, such as auto-normalization and auto-sharpening. This step is followed by a manual step, Image Quality Editor, in which an operator manually inspects, and if necessary, corrects the image for things like brightness and contrast. This step is followed by the Image Converter, which applies the “recipe” instructions from the two previous steps to the source image and then compresses the image to the desired encoding and file container (such as JPG or J2K).
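To make the "recipe" idea a bit more concrete, here is a purely hypothetical sketch, in Python, of the kind of information such a file might carry. The field names, the example path, and the structure are all my own illustration; the post does not describe the actual recipe format used by the IPP.

```python
# Hypothetical sketch of a processing "recipe"; the real IPP format is not
# described in this post, so every field name here is illustrative only.
recipe = {
    "source_image": "roll_0042/frame_0117.tif",   # hypothetical source path
    "operations": [
        {"name": "auto_normalize"},               # contrast enhancement (see part five)
        {"name": "auto_sharpen", "level": 2},     # default sharpening level (this post)
    ],
    "output": {"format": "JPG"},                  # target encoding/container
}
```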

The objective of the IPP, you might recall, is to enhance the images in a way that generally improves legibility without inadvertently introducing damaging artifacts. In part five, I focused on image contrast, describing histograms as a way to (qualitatively) measure contrast and a technique called auto-normalization as a way to enhance it. This blog post presents an image processing technique called auto-sharpening that also attempts to enhance the image, but instead of changing the contrast, it does so by removing some of the blur in the content.

Sharpening, in general, refers to a process that attempts to invert the (usually slight) blurring effects introduced into the image by the camera sensor and lens. Our goal in sharpening an image is to reveal some of the fine details in the text that might not be clearly or easily discernible in the original image. The "auto" part of the name emphasizes that the technique is applied automatically, based on an algorithmic analysis of the image, rather than through manual inspection and correction by a human operator.

Auto-sharpening, in the most general sense, works by amplifying the high-frequency components of the image. The Wikipedia article on unsharp masking describes the basis of the algorithm we have developed and fine-tuned for the kinds of historical records we process. The edges of text are high-frequency components, and by amplifying these edges we can make the text appear more pronounced, or sharp. However, for everything else in the image that is not text, this technique can introduce unwanted and conspicuous effects that make the image appear noisy, which is why our algorithm exposes a parameter that controls how aggressively it sharpens the image. Level-1 sharpening is the least aggressive and its effect is barely noticeable, while Level-4 sharpening is the most aggressive and, for most images, introduces too many artifacts. Although we do occasionally use Level-4 sharpening on very faded and blurred images, Level-2 is our default sharpening level.
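As a rough illustration of the underlying idea (not our production algorithm, which is tuned specifically for historical records), here is a minimal unsharp-mask sketch in Python using NumPy and SciPy. The `amount` parameter plays a role loosely analogous to our sharpening levels: the larger it is, the more aggressively edges are amplified.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=2.0, amount=1.0):
    """Classic unsharp-mask sharpening (illustrative sketch only).

    image  : 2-D grayscale array with values in [0, 255]
    radius : Gaussian blur sigma; controls which spatial frequencies count as "detail"
    amount : how strongly the high-frequency detail is amplified
             (loosely analogous to the Level-1..Level-4 settings described above)
    """
    img = image.astype(np.float64)
    blurred = gaussian_filter(img, sigma=radius)   # low-frequency version of the image
    detail = img - blurred                         # high-frequency components (edges)
    sharpened = img + amount * detail              # amplify the edges
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```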

The following diagram shows a snippet of an image that has been processed with these four different sharpening levels.

Figure 2. A snippet of an image that is auto-sharpened at the four levels

Auto-sharpening, as mentioned above, can come with some negative side-effects. It works by exaggerating the brightness difference along edges, which creates the appearance of making these edges more pronounced or sharp. However, applying too much sharpening to an image can damage it by introducing an artifact called a “sharpening halo”. This can be seen in the following two figures in which the dark pixels from the text appear to have a glowing halo.
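To see where the halo comes from, consider a synthetic step edge run through the `unsharp_mask` sketch above (again, an illustration rather than our actual algorithm). With a modest amount the edge simply steepens; with an aggressive amount the paper side of the edge overshoots toward white and the ink side undershoots toward black, producing exactly the bright ring visible in Figures 3 and 4.

```python
import numpy as np

# Synthetic step edge: lighter paper (value 200) next to dark ink (value 60).
edge = np.array([[200] * 6 + [60] * 6] * 12, dtype=np.uint8)

gentle = unsharp_mask(edge, radius=1.5, amount=0.7)  # edge steepens slightly
heavy = unsharp_mask(edge, radius=1.5, amount=4.0)   # strong overshoot on both sides

# Near the transition, heavy sharpening pushes paper pixels well above 200
# (clipped at 255) and ink pixels well below 60 -- the "sharpening halo".
print(gentle[6, 3:9])
print(heavy[6, 3:9])
```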

Figure 3. Side-by-side comparison of a snippet of an image that has been overly-sharpened.


Figure 4. A part of a character from a level-4 sharpened image that illustrates the sharpening halo effect.

It’s clear from Figures 3 and 4 that Level-4 sharpening is too much for this image, since you can see the “halo” around the ink strokes. Our default sharpening setting is Level-2, which usually produces excellent results. Level-2 sharpened images appear a bit crisper, which almost always means the text is more legible.

In the last figure in my part five blog post I showed a zoomed-in snippet of an image before and after it was auto-normalized. The following figure shows this snippet after Level-2 sharpening has been applied on top of the auto-normalization operation.

Figure 5. Image snippet comparison showing the benefits of auto-normalization followed by Level-2 sharpening.

This comparison demonstrates the benefit of first auto-normalizing the image to enhance its contrast and then following that operation with cautious (Level-2) sharpening. Although the noise is slightly amplified in this image, it seems a reasonable trade for the improved legibility.
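For readers who want to experiment with this ordering, here is a hedged sketch that chains a simple percentile-based contrast stretch (a stand-in for the auto-normalization described in part five, not the production algorithm) with the `unsharp_mask` sketch from earlier, using a cautious amount to approximate Level-2 behavior.

```python
import numpy as np

def auto_normalize(image, low_pct=1, high_pct=99):
    """Simple percentile-based contrast stretch; a stand-in for the
    auto-normalization step from part five, not the IPP's actual algorithm."""
    img = image.astype(np.float64)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img - lo) / max(hi - lo, 1e-6) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

def enhance(image):
    # Order matters: enhance contrast first, then sharpen cautiously.
    normalized = auto_normalize(image)
    return unsharp_mask(normalized, radius=2.0, amount=0.7)  # roughly "Level-2" aggressiveness
```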

In my next blog post I will continue with the core functionality of the Image Processing Pipeline by presenting our approach to noise removal and image binarization.
