Xuemei Zhang and Brian A. Wandell
Department of Psychology, Stanford University
Stanford, CA 94305
We describe a spatial extension to the CIELAB color metric that is useful for measuring color reproduction errors of digital images. To compute the error, digital color images are spatially filtered using a pattern-color separable method and then converted into the CIELAB representation. Over patterned regions of the image, the reproduction errors measured using the spatial extension of CIELAB correspond to perceived color errors better than errors computed without the spatial extension. Over uniform spatial regions of the image, errors computed with the extension are equal to errors computed using the standard CIELAB formulae.
The CIELAB system (CIE 1978) is an important international standard for measuring color reproduction errors. This system was created in a period when most color reproduction applications were concerned with matching large uniform colored areas. Hence, the CIELAB system was tested against data from color appearance judgments of large uniform fields.
With the growth of digital color imaging, many applications have been developed to process real images. Most real images, however, are not made up of large uniform fields. Many psychophysical studies show that the discrimination and appearance of small-field or fine-patterned colors differ from similar measurements made using large uniform fields (Noorlander & Koenderink, 1983; Poirson & Wandell, 1993, 1995; Bäuml & Wandell, 1996). Therefore, applying CIELAB to predict local color reproduction errors in patterned images does not give satisfactory results. For example, when we compare a continuous-tone color image with a halftone version of the image, a point-by-point computation of the CIELAB error produces large errors at most image points. Because the halftone patterns vary rapidly, these differences are blurred by the eye, and the reproduction may still preserve the appearance of the original.
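The halftone argument can be made concrete with a small numerical sketch (our own illustration, not part of the original analysis). A uniform mid-gray patch and a black/white checkerboard "halftone" with the same mean luminance differ enormously point by point, yet differ hardly at all after a crude stand-in for the eye's blurring (averaging over the patch). Only the CIE L* lightness formula is used; the patch values are arbitrary.

```python
import numpy as np

def lightness(Y):
    """CIE L* as a function of relative luminance Y in [0, 1] (white Y = 1)."""
    Y = np.asarray(Y, dtype=float)
    return np.where(Y > (6 / 29) ** 3, 116 * np.cbrt(Y) - 16, 903.3 * Y)

# A uniform patch at 18.4% luminance (L* close to 50) ...
original = np.full((8, 8), 0.184)
# ... and a checkerboard "halftone" with the same mean luminance.
halftone = np.indices((8, 8)).sum(axis=0) % 2 * 0.368

# Point-by-point lightness error is huge ...
point_error = np.abs(lightness(original) - lightness(halftone))
print(point_error.mean())

# ... but after blurring (here: averaging over the patch) it vanishes.
blurred_error = abs(lightness(original.mean()) - lightness(halftone.mean()))
print(blurred_error)
```

The per-pixel error averages tens of L* units, while the blurred error is essentially zero, which is exactly the mismatch between point-by-point CIELAB and perceived difference described above.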
In this paper, we present an extension of the CIELAB color metric that can be applied to measuring color reproduction errors in images. We refer to the extension as Spatial-CIELAB (S-CIELAB).
We have two goals in designing the S-CIELAB error measure. First, we would like to apply a spatial filtering operation to the color image data in order to simulate the spatial blurring by the human visual system. Second, when the inputs are large uniform areas, we would like the extension to be consistent with the basic CIELAB calculation.
Figure 1 shows how to calculate the S-CIELAB representation. The image data are transformed into an opponent-colors space. Each opponent-colors image is convolved with a kernel whose shape is determined by the visual spatial sensitivity to that color dimension; the area under each of these kernels integrates to one. The calculation is pattern-color separable because the color transformation does not depend on the image's spatial pattern, and the spatial convolution does not depend on the image's color.
Finally, the filtered representation is transformed to a CIE-XYZ representation, and this representation is transformed using the CIELAB formulae. The resulting S-CIELAB representation includes both the spatial filtering and the CIELAB processing.
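The pipeline can be sketched in a few lines of numpy. Assumptions in this sketch: the 3x3 opponent-colors matrix is the one published with S-CIELAB (Zhang & Wandell, 1996); a single Gaussian per plane with illustrative pixel-unit widths stands in for the sum-of-Gaussians kernels specified later in the paper; the final XYZ-to-CIELAB step is omitted.

```python
import numpy as np

# Opponent-colors matrix as published with S-CIELAB (Zhang & Wandell, 1996);
# rows give the luminance, red-green, and blue-yellow planes. Treat the
# exact coefficients as an assumption of this sketch.
M = np.array([[ 0.279,  0.720, -0.107],
              [-0.449,  0.290, -0.077],
              [ 0.086, -0.590,  0.501]])

def blur(plane, sigma):
    """Separable Gaussian blur with edge padding; a single-Gaussian
    stand-in for the sum-of-Gaussians kernels of the paper."""
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()                          # kernel integrates to one
    padded = np.pad(plane, r, mode='edge')
    rows = np.apply_along_axis(np.convolve, 1, padded, g, mode='valid')
    return np.apply_along_axis(np.convolve, 0, rows, g, mode='valid')

def scielab_preprocess(xyz, sigmas=(1.0, 2.0, 3.0)):
    """Spatial pre-processing stage of S-CIELAB.

    xyz    : (H, W, 3) array of CIE XYZ tristimulus values
    sigmas : per-plane blur widths in pixels (illustrative; the real
             widths depend on the viewing geometry)
    Returns the filtered image in XYZ, ready for the CIELAB formulae.
    """
    opp = xyz @ M.T                       # color step: pattern-independent
    for i, s in enumerate(sigmas):        # spatial step: color-independent
        opp[..., i] = blur(opp[..., i], s)
    return opp @ np.linalg.inv(M).T       # back to XYZ

# Over a uniform field the blurring changes nothing, so the errors
# reduce to conventional CIELAB errors, as required.
uniform = np.ones((24, 24, 3)) * np.array([0.3, 0.4, 0.2])
out = scielab_preprocess(uniform)
```

Note how pattern-color separability shows up in the code: the matrix multiply touches only the color coordinates of each pixel, and the convolution touches only the spatial layout of each plane.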
We use a pattern-color separable transformation for two reasons. First, separable transformations are efficient to compute. Second, psychophysical experiments suggest that the human visual representation of simple colored patterns is pattern-color separable (Poirson & Wandell, 1993, 1996; Bäuml and Wandell, in press). The parameters in the S-CIELAB calculation, including the color transformation and the spatial filters, were estimated from these psychophysical measurements.
Differences between the S-CIELAB representation of an original image and its reproduction measure the reproduction error. We summarize these differences by a quantity $\Delta E$, which is computed precisely as in conventional CIELAB. The S-CIELAB difference measure reflects both spatial and color sensitivity, and it equals the conventional CIELAB $\Delta E$ over uniform regions of the image.
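Because the difference quantity is computed exactly as in conventional CIELAB, it is the ordinary Euclidean distance in (L*, a*, b*), applied pixel by pixel to the two filtered representations. A minimal sketch (the function name is our own):

```python
import numpy as np

def delta_e(lab1, lab2):
    """Pixel-wise CIE 1976 Delta E between two (H, W, 3) Lab images."""
    return np.sqrt(((lab1 - lab2) ** 2).sum(axis=-1))

# One-pixel example: sqrt(2^2 + 3^2 + 6^2) = 7.
a = np.array([[[50.0, 10.0, -10.0]]])
b = np.array([[[52.0, 13.0, -16.0]]])
print(delta_e(a, b))  # [[7.]]
```

The same function serves both metrics: fed raw CIELAB images it gives the conventional error map, and fed S-CIELAB representations it gives the spatially weighted error map.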
To specify the S-CIELAB transformation, we must choose a color transformation and three spatial filters. In the calculations below, we used the transformation and filters estimated from human psychophysical measurements of color appearance. The color transformation converts the input image, specified in terms of the CIE 1931 XYZ tristimulus values, into three opponent-colors planes that represent the luminance, red-green, and blue-yellow images. The linear transformation from XYZ to opponent-colors is

\begin{displaymath}
\left[ \begin{array}{c} O_1 \\ O_2 \\ O_3 \end{array} \right] =
\left[ \begin{array}{rrr}
 0.279 &  0.720 & -0.107 \\
-0.449 &  0.290 & -0.077 \\
 0.086 & -0.590 &  0.501
\end{array} \right]
\left[ \begin{array}{c} X \\ Y \\ Z \end{array} \right].
\end{displaymath}
The data in each plane are filtered by a two-dimensional separable spatial kernel of the form

\begin{displaymath}
f = k \sum_i w_i E_i,
\end{displaymath}

where

\begin{displaymath}
E_i = k_i \exp[- (x^2 + y^2) / \sigma_i^2].
\end{displaymath}

In the discrete implementation, each scale factor $k_i$ is chosen so that $E_i$ sums to one, and the scale factor $k$ is chosen so that the two-dimensional kernel $f$ for each color plane sums to one.
The parameters $w_i$ and $\sigma_i$ for the three color planes are:

| Plane       | Weights $w_i$ | Spreads $\sigma_i$ |
|-------------|---------------|--------------------|
| Luminance   | 0.921         | 0.0283             |
|             | 0.105         | 0.133              |
|             | -0.108        | 4.336              |
| Red-green   | 0.531         | 0.0392             |
|             | 0.330         | 0.494              |
| Blue-yellow | 0.488         | 0.0536             |
|             | 0.371         | 0.386              |

where the spreads $\sigma_i$ are specified in degrees of visual angle.
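The tabulated weights and spreads define each plane's kernel directly, once the spreads are converted from degrees of visual angle to pixels using the viewing geometry. A sketch of the discrete construction follows; the pixels-per-degree value and kernel size are illustrative assumptions, and the normalization mirrors the $k_i$ and $k$ factors above.

```python
import numpy as np

# Weights w_i and spreads sigma_i (degrees of visual angle) from the
# table above.
PARAMS = {
    'luminance':   ([0.921, 0.105, -0.108], [0.0283, 0.133, 4.336]),
    'red-green':   ([0.531, 0.330],         [0.0392, 0.494]),
    'blue-yellow': ([0.488, 0.371],         [0.0536, 0.386]),
}

def kernel(plane, ppd=23, size=51):
    """Discrete sum-of-Gaussians kernel for one color plane.

    ppd  : pixels per degree of visual angle (depends on viewing
           distance; 23 is only an illustrative choice)
    size : kernel width in pixels (odd)
    """
    weights, spreads = PARAMS[plane]
    x = np.arange(size) - size // 2
    X, Y = np.meshgrid(x, x)
    f = np.zeros((size, size))
    for w, s in zip(weights, spreads):
        E = np.exp(-(X ** 2 + Y ** 2) / (s * ppd) ** 2)
        E /= E.sum()          # k_i: each Gaussian sums to one
        f += w * E
    return f / f.sum()        # k: the whole kernel sums to one

f = kernel('red-green')
print(round(f.sum(), 6))      # 1.0
```

Note that the luminance plane has a negative third weight, so its kernel is not a plain Gaussian; the final division by `f.sum()` still normalizes it to unit volume.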
Because the spatial processing stage is separate from the CIELAB calculation, we can implement S-CIELAB as a pre-processor to existing CIELAB-related software or hardware. The separability of the pattern and color stages makes it straightforward to apply the spatial extension to other color difference calculations.
We tested S-CIELAB on JPEG-DCT compressed images, halftoned images, and some simple test patterns (e.g., sweep frequency color images). Some of these methods, such as the JPEG-DCT and halftoning, are designed to take advantage of the spatial insensitivity of the eye to certain colored patterns. Figure 2 shows how the JPEG-DCT transforms three opponent-colors planes of an image. This image was created using the standard JPEG-DCT compression algorithm with a quality factor set to 75, so that the compressed image appears only slightly different from the original. At this compression level, the luminance plane is blurred slightly (A), while the red-green and blue-yellow planes are blurred strongly (B,C). The chromatic blurring is barely visible in the compressed image, much as the loss of spatial resolution in the chromatic channels of the NTSC signal is barely visible in the television picture (Wandell, 1995, p. 326 et seq.).
[Figure 2: Effect of JPEG-DCT compression on the three opponent-colors planes of an image.]
The CIELAB and S-CIELAB reproduction errors for the compressed image are compared in Figure 3. The solid line shows the distribution of $\Delta E$ values computed point-by-point using CIELAB on the original and compressed images. In this distribution, 36 percent of the image points exceed 5 $\Delta E$ units, and 10 percent exceed 10 units. When comparing the color reproduction of large uniform fields, color differences of this size are easily visible; in this reproduction, however, very few errors are visible. Hence, these $\Delta E$ values are larger than the perceived difference.
[Figure 3: Distributions of CIELAB and S-CIELAB reproduction errors for the compressed image.]
Because CIELAB was not designed with digital image applications in mind, it should not be applied to evaluate the reproduction error for JPEG-DCT images (or halftones). By adding a spatial pre-processing stage, S-CIELAB extends CIELAB to reproduction errors of digital images.
Bäuml, H., & Wandell, B. A. (1995). The color appearance of mixture gratings. Vision Research, in press.
C.I.E. (1978) Recommendations on uniform color spaces, color difference equations, psychometric color terms. Supplement No.2 to CIE publication No.15 (E.-1.3.1) 1971/(TC-1.3.).
Noorlander, C. & Koenderink, J. J. (1983). Spatial and temporal discrimination ellipsoids in color space. Journal of the Optical Society of America, 73, 1533-1543.
Poirson, A. B. & Wandell, B. A. (1993). Appearance of colored patterns: pattern-color separability. Journal of the Optical Society of America, 10(12), 2458-2470.
Poirson, A. B. & Wandell, B. A. (1996). Pattern-color separable pathways predict sensitivity to simple colored patterns. Vision Research 35(2), 239-254.
Wandell, B. A. (1995). Foundations of Vision. Sinauer Press, Sunderland MA.