Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic)

DIGITAL IMAGE PROCESSING
PIKS Inside Third Edition

WILLIAM K. PRATT
PixelSoft, Inc. Los Altos, California

A Wiley-Interscience Publication JOHN WILEY & SONS, INC.

New York • Chichester • Weinheim • Brisbane • Singapore • Toronto

Designations used by companies to distinguish their products are often claimed as trademarks. In all instances where John Wiley & Sons, Inc., is aware of a claim, the product names appear in initial capital or all capital letters. Readers, however, should contact the appropriate companies for more complete information regarding trademarks and registration. Copyright  2001 by John Wiley and Sons, Inc., New York. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic or mechanical, including uploading, downloading, printing, decompiling, recording or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the Publisher. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New York, NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail: PERMREQ @ WILEY.COM. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold with the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional person should be sought. ISBN 0-471-22132-5 This title is also available in print as ISBN 0-471-37407-5. For more information about Wiley products, visit our web site at www.Wiley.com.

To my wife, Shelly whose image needs no enhancement

CONTENTS

Preface

Acknowledgments

PART 1  CONTINUOUS IMAGE CHARACTERIZATION

1  Continuous Image Mathematical Characterization
   1.1  Image Representation
   1.2  Two-Dimensional Systems
   1.3  Two-Dimensional Fourier Transform
   1.4  Image Stochastic Characterization

2  Psychophysical Vision Properties
   2.1  Light Perception
   2.2  Eye Physiology
   2.3  Visual Phenomena
   2.4  Monochrome Vision Model
   2.5  Color Vision Model

3  Photometry and Colorimetry
   3.1  Photometry
   3.2  Color Matching
   3.3  Colorimetry Concepts
   3.4  Tristimulus Value Transformation
   3.5  Color Spaces

PART 2  DIGITAL IMAGE CHARACTERIZATION

4  Image Sampling and Reconstruction
   4.1  Image Sampling and Reconstruction Concepts
   4.2  Image Sampling Systems
   4.3  Image Reconstruction Systems

5  Discrete Image Mathematical Representation
   5.1  Vector-Space Image Representation
   5.2  Generalized Two-Dimensional Linear Operator
   5.3  Image Statistical Characterization
   5.4  Image Probability Density Models
   5.5  Linear Operator Statistical Representation

6  Image Quantization
   6.1  Scalar Quantization
   6.2  Processing Quantized Variables
   6.3  Monochrome and Color Image Quantization

PART 3  DISCRETE TWO-DIMENSIONAL LINEAR PROCESSING

7  Superposition and Convolution
   7.1  Finite-Area Superposition and Convolution
   7.2  Sampled Image Superposition and Convolution
   7.3  Circulant Superposition and Convolution
   7.4  Superposition and Convolution Operator Relationships

8  Unitary Transforms
   8.1  General Unitary Transforms
   8.2  Fourier Transform
   8.3  Cosine, Sine, and Hartley Transforms
   8.4  Hadamard, Haar, and Daubechies Transforms
   8.5  Karhunen–Loeve Transform

9  Linear Processing Techniques
   9.1  Transform Domain Processing
   9.2  Transform Domain Superposition
   9.3  Fast Fourier Transform Convolution
   9.4  Fourier Transform Filtering
   9.5  Small Generating Kernel Convolution

PART 4  IMAGE IMPROVEMENT

10  Image Enhancement
   10.1  Contrast Manipulation
   10.2  Histogram Modification
   10.3  Noise Cleaning
   10.4  Edge Crispening
   10.5  Color Image Enhancement
   10.6  Multispectral Image Enhancement

11  Image Restoration Models
   11.1  General Image Restoration Models
   11.2  Optical Systems Models
   11.3  Photographic Process Models
   11.4  Discrete Image Restoration Models

12  Point and Spatial Image Restoration Techniques
   12.1  Sensor and Display Point Nonlinearity Correction
   12.2  Continuous Image Spatial Filtering Restoration
   12.3  Pseudoinverse Spatial Image Restoration
   12.4  SVD Pseudoinverse Spatial Image Restoration
   12.5  Statistical Estimation Spatial Image Restoration
   12.6  Constrained Image Restoration
   12.7  Blind Image Restoration

13  Geometrical Image Modification
   13.1  Translation, Minification, Magnification, and Rotation
   13.2  Spatial Warping
   13.3  Perspective Transformation
   13.4  Camera Imaging Model
   13.5  Geometrical Image Resampling

PART 5  IMAGE ANALYSIS

14  Morphological Image Processing
   14.1  Binary Image Connectivity
   14.2  Binary Image Hit or Miss Transformations
   14.3  Binary Image Shrinking, Thinning, Skeletonizing, and Thickening
   14.4  Binary Image Generalized Dilation and Erosion
   14.5  Binary Image Close and Open Operations
   14.6  Gray Scale Image Morphological Operations

15  Edge Detection
   15.1  Edge, Line, and Spot Models
   15.2  First-Order Derivative Edge Detection
   15.3  Second-Order Derivative Edge Detection
   15.4  Edge-Fitting Edge Detection
   15.5  Luminance Edge Detector Performance
   15.6  Color Edge Detection
   15.7  Line and Spot Detection

16  Image Feature Extraction
   16.1  Image Feature Evaluation
   16.2  Amplitude Features
   16.3  Transform Coefficient Features
   16.4  Texture Definition
   16.5  Visual Texture Discrimination
   16.6  Texture Features

17  Image Segmentation
   17.1  Amplitude Segmentation Methods
   17.2  Clustering Segmentation Methods
   17.3  Region Segmentation Methods
   17.4  Boundary Detection
   17.5  Texture Segmentation
   17.6  Segment Labeling

18  Shape Analysis
   18.1  Topological Attributes
   18.2  Distance, Perimeter, and Area Measurements
   18.3  Spatial Moments
   18.4  Shape Orientation Descriptors
   18.5  Fourier Descriptors

19  Image Detection and Registration
   19.1  Template Matching
   19.2  Matched Filtering of Continuous Images
   19.3  Matched Filtering of Discrete Images
   19.4  Image Registration

PART 6  IMAGE PROCESSING SOFTWARE

20  PIKS Image Processing Software
   20.1  PIKS Functional Overview
   20.2  PIKS Core Overview

21  PIKS Image Processing Programming Exercises
   21.1  Program Generation Exercises
   21.2  Image Manipulation Exercises
   21.3  Colour Space Exercises
   21.4  Region-of-Interest Exercises
   21.5  Image Measurement Exercises
   21.6  Quantization Exercises
   21.7  Convolution Exercises
   21.8  Unitary Transform Exercises
   21.9  Linear Processing Exercises
   21.10  Image Enhancement Exercises
   21.11  Image Restoration Models Exercises
   21.12  Image Restoration Exercises
   21.13  Geometrical Image Modification Exercises
   21.14  Morphological Image Processing Exercises
   21.15  Edge Detection Exercises
   21.16  Image Feature Extraction Exercises
   21.17  Image Segmentation Exercises
   21.18  Shape Analysis Exercises
   21.19  Image Detection and Registration Exercises

Appendix 1  Vector-Space Algebra Concepts
Appendix 2  Color Coordinate Conversion
Appendix 3  Image Error Measures

Bibliography

Index

PREFACE

In January 1978, I began the preface to the first edition of Digital Image Processing with the following statement:

The field of image processing has grown considerably during the past decade with the increased utilization of imagery in myriad applications coupled with improvements in the size, speed, and cost effectiveness of digital computers and related signal processing technologies. Image processing has found a significant role in scientific, industrial, space, and government applications.

In January 1991, in the preface to the second edition, I stated:

Thirteen years later as I write this preface to the second edition, I find the quoted statement still to be valid. The 1980s have been a decade of significant growth and maturity in this field. At the beginning of that decade, many image processing techniques were of academic interest only; their execution was too slow and too costly. Today, thanks to algorithmic and implementation advances, image processing has become a vital cost-effective technology in a host of applications.

Now, in this beginning of the twenty-first century, image processing has become a mature engineering discipline. But advances in the theoretical basis of image processing continue. Some of the reasons for this third edition of the book are to correct defects in the second edition, delete content of marginal interest, and add discussion of new, important topics. Another motivating factor is the inclusion of interactive, computer display imaging examples to illustrate image processing concepts. Finally, this third edition includes computer programming exercises to bolster its theoretical content. These exercises can be implemented using the Programmer's Imaging Kernel System (PIKS) application program interface (API). PIKS is an International
Standards Organization (ISO) standard library of image processing operators and associated utilities. The PIKS Core version is included on a CD affixed to the back cover of this book. The book is intended to be an “industrial strength” introduction to digital image processing to be used as a text for an electrical engineering or computer science course in the subject. Also, it can be used as a reference manual for scientists who are engaged in image processing research, developers of image processing hardware and software systems, and practicing engineers and scientists who use image processing as a tool in their applications. Mathematical derivations are provided for most algorithms. The reader is assumed to have a basic background in linear system theory, vector space algebra, and random processes. Proficiency in C language programming is necessary for execution of the image processing programming exercises using PIKS. The book is divided into six parts. The first three parts cover the basic technologies that are needed to support image processing applications. Part 1 contains three chapters concerned with the characterization of continuous images. Topics include the mathematical representation of continuous images, the psychophysical properties of human vision, and photometry and colorimetry. In Part 2, image sampling and quantization techniques are explored along with the mathematical representation of discrete images. Part 3 discusses two-dimensional signal processing techniques, including general linear operators and unitary transforms such as the Fourier, Hadamard, and Karhunen–Loeve transforms. The final chapter in Part 3 analyzes and compares linear processing techniques implemented by direct convolution and Fourier domain filtering. The next two parts of the book cover the two principal application areas of image processing. Part 4 presents a discussion of image enhancement and restoration techniques, including restoration models, point and spatial restoration, and geometrical image modification. Part 5, entitled “Image Analysis,” concentrates on the extraction of information from an image. Specific topics include morphological image processing, edge detection, image feature extraction, image segmentation, object shape analysis, and object detection. Part 6 discusses the software implementation of image processing applications. This part describes the PIKS API and explains its use as a means of implementing image processing algorithms. Image processing programming exercises are included in Part 6. This third edition represents a major revision of the second edition. In addition to Part 6, new topics include an expanded description of color spaces, the Hartley and Daubechies transforms, wavelet filtering, watershed and snake image segmentation, and Mellin transform matched filtering. Many of the photographic examples in the book are supplemented by executable programs for which readers can adjust algorithm parameters and even substitute their own source images. Although readers should find this book reasonably comprehensive, many important topics allied to the field of digital image processing have been omitted to limit the size and cost of the book. Among the most prominent omissions are the topics of pattern recognition, image reconstruction from projections, image understanding,
image coding, scientific visualization, and computer graphics. References to some of these topics are provided in the bibliography.

WILLIAM K. PRATT
Los Altos, California
August 2000

ACKNOWLEDGMENTS

The first edition of this book was written while I was a professor of electrical engineering at the University of Southern California (USC). Image processing research at USC began in 1962 on a very modest scale, but the program increased in size and scope with the attendant international interest in the field. In 1971, Dr. Zohrab Kaprielian, then dean of engineering and vice president of academic research and administration, announced the establishment of the USC Image Processing Institute. This environment contributed significantly to the preparation of the first edition. I am deeply grateful to Professor Kaprielian for his role in providing university support of image processing and for his personal interest in my career.

Also, I wish to thank the following past and present members of the Institute's scientific staff who rendered invaluable assistance in the preparation of the first-edition manuscript: Jean-François Abramatic, Harry C. Andrews, Lee D. Davisson, Olivier Faugeras, Werner Frei, Ali Habibi, Anil K. Jain, Richard P. Kruger, Nasser E. Nahi, Ramakant Nevatia, Keith Price, Guner S. Robinson, Alexander A. Sawchuk, and Lloyd R. Welsh.

In addition, I sincerely acknowledge the technical help of my graduate students at USC during preparation of the first edition: Ikram Abdou, Behnam Ashjari, Wen-Hsiung Chen, Faramarz Davarian, Michael N. Huhns, Kenneth I. Laws, Sang Uk Lee, Clanton Mancill, Nelson Mascarenhas, Clifford Reader, John Roese, and Robert H. Wallis.

The first edition was the outgrowth of notes developed for the USC course "Image Processing." I wish to thank the many students who suffered through the
early versions of the notes for their valuable comments. Also, I appreciate the reviews of the notes provided by Harry C. Andrews, Werner Frei, Ali Habibi, and Ernest L. Hall, who taught the course.

With regard to the first edition, I wish to offer words of appreciation to the Information Processing Techniques Office of the Advanced Research Projects Agency, directed by Larry G. Roberts, which provided partial financial support of my research at USC. During the academic year 1977–1978, I performed sabbatical research at the Institut de Recherche d'Informatique et Automatique in LeChesney, France and at the Université de Paris. My research was partially supported by these institutions, USC, and a Guggenheim Foundation fellowship. For this support, I am indebted.

I left USC in 1979 with the intention of forming a company that would put some of my research ideas into practice. Toward that end, I joined a startup company, Compression Labs, Inc., of San Jose, California. There I worked on the development of facsimile and video coding products with Dr. Wen-Hsiung Chen and Dr. Robert H. Wallis. Concurrently, I directed a design team that developed a digital image processor called VICOM. The early contributors to its hardware and software design were William Bryant, Howard Halverson, Stephen K. Howell, Jeffrey Shaw, and William Zech. In 1981, I formed Vicom Systems, Inc., of San Jose, California, to manufacture and market the VICOM image processor. Many of the photographic examples in this book were processed on a VICOM.

Work on the second edition began in 1986. In 1988, I joined Sun Microsystems, of Mountain View, California. At Sun, I collaborated with Stephen A. Howell and Ihtisham Kabir on the development of image processing software. During my time at Sun, I participated in the specification of the Programmer's Imaging Kernel application program interface, which was made an International Standards Organization standard in 1994. Much of the PIKS content is present in this book. Some of the principal contributors to PIKS include Timothy Butler, Adrian Clark, Patrick Krolak, and Gerard A. Paquette.

In 1993, I formed PixelSoft, Inc., of Los Altos, California, to commercialize the PIKS standard. The PIKS Core version of the PixelSoft implementation is affixed to the back cover of this edition. Contributors to its development include Timothy Butler, Larry R. Hubble, and Gerard A. Paquette.

In 1996, I joined Photon Dynamics, Inc., of San Jose, California, a manufacturer of machine vision equipment for the inspection of electronics displays and printed circuit boards. There, I collaborated with Larry R. Hubble, Sunil S. Sawkar, and Gerard A. Paquette on the development of several hardware and software products based on PIKS. I wish to thank all those previously cited, and many others too numerous to mention, for their assistance in this industrial phase of my career. Having participated in the design of hardware and software products has been an arduous but intellectually rewarding task. This industrial experience, I believe, has significantly enriched this third edition.

I offer my appreciation to Ray Schmidt, who was responsible for many photographic reproductions in the book, and to Kris Pendelton, who created much of the line art. Also, thanks are given to readers of the first two editions who reported errors both typographical and mental. Most of all, I wish to thank my wife, Shelly, for her support in the writing of the third edition.

W. K. P.


PART 1
CONTINUOUS IMAGE CHARACTERIZATION
Although this book is concerned primarily with digital, as opposed to analog, image processing techniques, it should be remembered that most digital images represent continuous natural images. Exceptions are artificial digital images such as test patterns that are numerically created in the computer and images constructed by tomographic systems. Thus, it is important to understand the "physics" of image formation by sensors and optical systems, including human visual perception. Another important consideration is the measurement of light in order to describe images quantitatively. Finally, it is useful to establish the spatial and temporal characteristics of continuous image fields, which provide the basis for the interrelationship of digital image samples. These topics are covered in the following chapters.


1
CONTINUOUS IMAGE MATHEMATICAL CHARACTERIZATION

In the design and analysis of image processing systems, it is convenient and often necessary mathematically to characterize the image to be processed. There are two basic mathematical characterizations of interest: deterministic and statistical. In deterministic image representation, a mathematical image function is defined and point properties of the image are considered. For a statistical image representation, the image is specified by average properties. The following sections develop the deterministic and statistical characterization of continuous images. Although the analysis is presented in the context of visual images, many of the results can be extended to general two-dimensional time-varying signals and fields.

1.1. IMAGE REPRESENTATION

Let C(x, y, t, λ) represent the spatial energy distribution of an image source of radiant energy at spatial coordinates (x, y), at time t and wavelength λ. Because light intensity is a real positive quantity, that is, because intensity is proportional to the modulus squared of the electric field, the image light function is real and nonnegative. Furthermore, in all practical imaging systems, a small amount of background light is always present. The physical imaging system also imposes some restriction on the maximum intensity of an image, for example, film saturation and cathode ray tube (CRT) phosphor heating. Hence it is assumed that
0 < C(x, y, t, \lambda) \leq A     (1.1-1)

where A is the maximum image intensity. A physical image is necessarily limited in extent by the imaging system and image recording media. For mathematical simplicity, all images are assumed to be nonzero only over a rectangular region for which
-L_x \leq x \leq L_x     (1.1-2a)

-L_y \leq y \leq L_y     (1.1-2b)

The physical image is, of course, observable only over some finite time interval. Thus let
-T \leq t \leq T     (1.1-2c)

The image light function C ( x, y, t, λ ) is, therefore, a bounded four-dimensional function with bounded independent variables. As a final restriction, it is assumed that the image function is continuous over its domain of definition. The intensity response of a standard human observer to an image light function is commonly measured in terms of the instantaneous luminance of the light field as defined by
Y(x, y, t) = \int_0^\infty C(x, y, t, \lambda) \, V(\lambda) \, d\lambda     (1.1-3)

where V ( λ ) represents the relative luminous efficiency function, that is, the spectral response of human vision. Similarly, the color response of a standard observer is commonly measured in terms of a set of tristimulus values that are linearly proportional to the amounts of red, green, and blue light needed to match a colored light. For an arbitrary red–green–blue coordinate system, the instantaneous tristimulus values are
R(x, y, t) = \int_0^\infty C(x, y, t, \lambda) \, R_S(\lambda) \, d\lambda     (1.1-4a)

G(x, y, t) = \int_0^\infty C(x, y, t, \lambda) \, G_S(\lambda) \, d\lambda     (1.1-4b)

B(x, y, t) = \int_0^\infty C(x, y, t, \lambda) \, B_S(\lambda) \, d\lambda     (1.1-4c)
where R_S(λ), G_S(λ), B_S(λ) are spectral tristimulus values for the set of red, green, and blue primaries. The spectral tristimulus values are, in effect, the tristimulus
values required to match a unit amount of narrowband light at wavelength λ . In a multispectral imaging system, the image field observed is modeled as a spectrally weighted integral of the image light function. The ith spectral image field is then given as
F_i(x, y, t) = \int_0^\infty C(x, y, t, \lambda) \, S_i(\lambda) \, d\lambda     (1.1-5)

where S i ( λ ) is the spectral response of the ith sensor. For notational simplicity, a single image function F ( x, y, t ) is selected to represent an image field in a physical imaging system. For a monochrome imaging system, the image function F ( x, y, t ) nominally denotes the image luminance, or some converted or corrupted physical representation of the luminance, whereas in a color imaging system, F ( x, y, t ) signifies one of the tristimulus values, or some function of the tristimulus value. The image function F ( x, y, t ) is also used to denote general three-dimensional fields, such as the time-varying noise of an image scanner. In correspondence with the standard definition for one-dimensional time signals, the time average of an image function at a given point (x, y) is defined as
\langle F(x, y, t) \rangle_T = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} F(x, y, t) \, L(t) \, dt     (1.1-6)

where L(t) is a time-weighting function. Similarly, the average image brightness at a given time is given by the spatial average,
\langle F(x, y, t) \rangle_S = \lim_{\substack{L_x \to \infty \\ L_y \to \infty}} \frac{1}{4 L_x L_y} \int_{-L_x}^{L_x} \int_{-L_y}^{L_y} F(x, y, t) \, dx \, dy     (1.1-7)

In many imaging systems, such as image projection devices, the image does not change with time, and the time variable may be dropped from the image function. For other types of systems, such as movie pictures, the image function is time sampled. It is also possible to convert the spatial variation into time variation, as in television, by an image scanning process. In the subsequent discussion, the time variable is dropped from the image field notation unless specifically required.
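
In practice, the spectral weighting integrals of Eqs. 1.1-3 to 1.1-5 are evaluated numerically from sampled spectral data. The short standalone C sketch below (it does not use the PIKS library discussed in Part 6, and its Gaussian-shaped light spectrum and sensor response are illustrative assumptions rather than measured curves) approximates Eq. 1.1-5 by a trapezoidal sum over the visible wavelength band.

/* Numerical evaluation of the spectral weighting integral of Eq. 1.1-5:
   F_i = integral over wavelength of C(lambda) * S_i(lambda) d(lambda).
   The spectral shapes below are illustrative Gaussians, not measured data. */
#include <stdio.h>
#include <math.h>

/* hypothetical image light spectrum at one point (x, y, t) */
static double light_spectrum(double lambda_nm)
{
    /* broad distribution peaking near 600 nm */
    double d = (lambda_nm - 600.0) / 120.0;
    return exp(-0.5 * d * d);
}

/* hypothetical spectral response of the i-th sensor, peaking at 550 nm */
static double sensor_response(double lambda_nm)
{
    double d = (lambda_nm - 550.0) / 40.0;
    return exp(-0.5 * d * d);
}

int main(void)
{
    const double lo = 380.0, hi = 780.0;   /* visible band, nm */
    const int n = 400;                     /* integration steps */
    const double h = (hi - lo) / n;
    double sum = 0.0;
    int k;

    /* trapezoidal approximation of the integral in Eq. 1.1-5 */
    for (k = 0; k <= n; k++) {
        double lambda = lo + k * h;
        double w = (k == 0 || k == n) ? 0.5 : 1.0;
        sum += w * light_spectrum(lambda) * sensor_response(lambda);
    }
    sum *= h;

    printf("F_i(x, y, t) = %.4f (arbitrary units)\n", sum);
    return 0;
}

Substituting a tabulated relative luminous efficiency function V(λ) for the assumed sensor response would evaluate the luminance integral of Eq. 1.1-3 in the same way.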

1.2. TWO-DIMENSIONAL SYSTEMS

A two-dimensional system, in its most general form, is simply a mapping of some input set of two-dimensional functions F_1(x, y), F_2(x, y), ..., F_N(x, y) to a set of output two-dimensional functions G_1(x, y), G_2(x, y), ..., G_M(x, y), where (−∞ < x, y < ∞) denotes the independent, continuous spatial variables of the functions. This mapping may be represented by the operators O_m{·} for m = 1, 2, ..., M, which relate the input to output set of functions by the set of equations

G_1(x, y) = O_1\{ F_1(x, y), F_2(x, y), \ldots, F_N(x, y) \}
  \vdots
G_m(x, y) = O_m\{ F_1(x, y), F_2(x, y), \ldots, F_N(x, y) \}     (1.2-1)
  \vdots
G_M(x, y) = O_M\{ F_1(x, y), F_2(x, y), \ldots, F_N(x, y) \}

In specific cases, the mapping may be many-to-few, few-to-many, or one-to-one. The one-to-one mapping is defined as
G(x, y) = O\{ F(x, y) \}     (1.2-2)

To proceed further with a discussion of the properties of two-dimensional systems, it is necessary to direct the discourse toward specific types of operators.

1.2.1. Singularity Operators

Singularity operators are widely employed in the analysis of two-dimensional systems, especially systems that involve sampling of continuous functions. The two-dimensional Dirac delta function is a singularity operator that possesses the following properties:

\int_{-\varepsilon}^{\varepsilon} \int_{-\varepsilon}^{\varepsilon} \delta(x, y) \, dx \, dy = 1 \quad \text{for } \varepsilon > 0     (1.2-3a)

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\xi, \eta) \, \delta(x - \xi, y - \eta) \, d\xi \, d\eta = F(x, y)     (1.2-3b)

In Eq. 1.2-3a, ε is an infinitesimally small limit of integration; Eq. 1.2-3b is called the sifting property of the Dirac delta function. The two-dimensional delta function can be decomposed into the product of two one-dimensional delta functions defined along orthonormal coordinates. Thus

\delta(x, y) = \delta(x) \, \delta(y)     (1.2-4)

where the one-dimensional delta function satisfies one-dimensional versions of Eq. 1.2-3. The delta function also can be defined as a limit on a family of functions. General examples are given in References 1 and 2.

1.2.2. Additive Linear Operators

A two-dimensional system is said to be an additive linear system if the system obeys the law of additive superposition. In the special case of one-to-one mappings, the additive superposition property requires that

O\{ a_1 F_1(x, y) + a_2 F_2(x, y) \} = a_1 O\{ F_1(x, y) \} + a_2 O\{ F_2(x, y) \}     (1.2-5)

where a_1 and a_2 are constants that are possibly complex numbers. This additive superposition property can easily be extended to the general mapping of Eq. 1.2-1. A system input function F(x, y) can be represented as a sum of amplitude-weighted Dirac delta functions by the sifting integral,
F(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\xi, \eta) \, \delta(x - \xi, y - \eta) \, d\xi \, d\eta     (1.2-6)

where F ( ξ, η ) is the weighting factor of the impulse located at coordinates ( ξ, η ) in the x–y plane, as shown in Figure 1.2-1. If the output of a general linear one-to-one system is defined to be
G(x, y) = O\{ F(x, y) \}     (1.2-7)

then

G(x, y) = O\left\{ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\xi, \eta) \, \delta(x - \xi, y - \eta) \, d\xi \, d\eta \right\}     (1.2-8a)

or

G(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\xi, \eta) \, O\{ \delta(x - \xi, y - \eta) \} \, d\xi \, d\eta     (1.2-8b)

In moving from Eq. 1.2-8a to Eq. 1.2-8b, the application order of the general linear operator O{·} and the integral operator has been reversed. Also, the linear operator has been applied only to the term in the integrand that is dependent on the

FIGURE 1.2-1. Decomposition of image function.

spatial variables (x, y). The second term in the integrand of Eq. 1.2-8b, which is redefined as

H(x, y; \xi, \eta) \equiv O\{ \delta(x - \xi, y - \eta) \}     (1.2-9)
is called the impulse response of the two-dimensional system. In optical systems, the impulse response is often called the point spread function of the system. Substitution of the impulse response function into Eq. 1.2-8b yields the additive superposition integral
G(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\xi, \eta) \, H(x, y; \xi, \eta) \, d\xi \, d\eta     (1.2-10)

An additive linear two-dimensional system is called space invariant (isoplanatic) if its impulse response depends only on the factors x – ξ and y – η . In an optical system, as shown in Figure 1.2-2, this implies that the image of a point source in the focal plane will change only in location, not in functional form, as the placement of the point source moves in the object plane. For a space-invariant system
H(x, y; \xi, \eta) = H(x - \xi, y - \eta)     (1.2-11)

and the superposition integral reduces to the special case called the convolution integral, given by
G(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\xi, \eta) \, H(x - \xi, y - \eta) \, d\xi \, d\eta     (1.2-12a)

Symbolically,
G(x, y) = F(x, y) * H(x, y)     (1.2-12b)

FIGURE 1.2-2. Point-source imaging system.


FIGURE 1.2-3. Graphical example of two-dimensional convolution.

where * denotes the convolution operation. The convolution integral is symmetric in the sense that
G(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(x - \xi, y - \eta) \, H(\xi, \eta) \, d\xi \, d\eta     (1.2-13)

Figure 1.2-3 provides a visualization of the convolution process. In Figure 1.2-3a and b, the input function F(x, y) and impulse response are plotted in the dummy coordinate system (ξ, η). Next, in Figures 1.2-3c and d, the coordinates of the impulse response are reversed, and the impulse response is offset by the spatial values (x, y). In Figure 1.2-3e, the integrand product of the convolution integral of Eq. 1.2-12 is shown as a crosshatched region. The integral over this region is the value of G(x, y) at the offset coordinate (x, y). The complete function G(x, y) could, in effect, be computed by sequentially scanning the reversed, offset impulse response across the input function and simultaneously integrating the overlapped region. A discrete version of this scanning procedure is sketched in the short code example at the end of this section.

1.2.3. Differential Operators

Edge detection in images is commonly accomplished by performing a spatial differentiation of the image field followed by a thresholding operation to determine points of steep amplitude change. Horizontal and vertical spatial derivatives are defined as

d_x = \frac{\partial F(x, y)}{\partial x}     (1.2-14a)

d_y = \frac{\partial F(x, y)}{\partial y}     (1.2-14b)

The directional derivative of the image field along a vector direction z subtending an angle φ with respect to the horizontal axis is given by (3, p. 106)
\nabla\{ F(x, y) \} = \frac{\partial F(x, y)}{\partial z} = d_x \cos\phi + d_y \sin\phi     (1.2-15)

The gradient magnitude is then
\nabla\{ F(x, y) \} = \left[ d_x^2 + d_y^2 \right]^{1/2}     (1.2-16)

Spatial second derivatives in the horizontal and vertical directions are defined as
d_{xx} = \frac{\partial^2 F(x, y)}{\partial x^2}     (1.2-17a)

d_{yy} = \frac{\partial^2 F(x, y)}{\partial y^2}     (1.2-17b)

The sum of these two spatial derivatives is called the Laplacian operator:
\nabla^2\{ F(x, y) \} = \frac{\partial^2 F(x, y)}{\partial x^2} + \frac{\partial^2 F(x, y)}{\partial y^2}     (1.2-18)
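
For sampled images, the convolution integral of Eq. 1.2-12 becomes a finite double sum, and the derivative operators of Eqs. 1.2-14 to 1.2-18 are approximated by differencing. The standalone C sketch below illustrates both on a small hard-coded 5 x 5 image; the image values, the 3 x 3 impulse responses, and the choice to skip border samples are illustrative assumptions, not prescriptions from the text.

/* Discrete counterparts of the convolution integral (Eq. 1.2-12) and the
   Laplacian operator (Eq. 1.2-18) on a small sampled image.  The 5x5 image
   and 3x3 impulse responses below are illustrative values only; border
   samples are left unprocessed for simplicity. */
#include <stdio.h>

#define N 5

static void convolve3x3(const double in[N][N], const double h[3][3],
                        double out[N][N])
{
    int x, y, j, k;
    for (y = 1; y < N - 1; y++) {
        for (x = 1; x < N - 1; x++) {
            double sum = 0.0;
            /* G(x,y) = sum over (j,k) of F(x-j, y-k) H(j,k), j,k = -1..1 */
            for (j = -1; j <= 1; j++)
                for (k = -1; k <= 1; k++)
                    sum += in[y - j][x - k] * h[j + 1][k + 1];
            out[y][x] = sum;
        }
    }
}

int main(void)
{
    double image[N][N] = {
        {1, 1, 1, 1, 1},
        {1, 2, 2, 2, 1},
        {1, 2, 5, 2, 1},
        {1, 2, 2, 2, 1},
        {1, 1, 1, 1, 1}
    };
    /* smoothing impulse response (all weights 1/9) */
    double smooth[3][3] = {
        {1.0/9, 1.0/9, 1.0/9},
        {1.0/9, 1.0/9, 1.0/9},
        {1.0/9, 1.0/9, 1.0/9}
    };
    /* five-point discrete approximation of the Laplacian of Eq. 1.2-18 */
    double laplacian[3][3] = {
        {0,  1, 0},
        {1, -4, 1},
        {0,  1, 0}
    };
    double g[N][N] = {{0}}, lap[N][N] = {{0}};
    int x, y;

    convolve3x3(image, smooth, g);
    convolve3x3(image, laplacian, lap);

    printf("smoothed (interior):        Laplacian (interior):\n");
    for (y = 1; y < N - 1; y++) {
        for (x = 1; x < N - 1; x++) printf("%6.2f ", g[y][x]);
        printf("   ");
        for (x = 1; x < N - 1; x++) printf("%6.2f ", lap[y][x]);
        printf("\n");
    }
    return 0;
}

The inner double loop realizes, in discrete form, the scanning of the reversed, offset impulse response described for Figure 1.2-3.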

1.3. TWO-DIMENSIONAL FOURIER TRANSFORM

The two-dimensional Fourier transform of the image function F(x, y) is defined as (1,2)

F(\omega_x, \omega_y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(x, y) \exp\{ -i (\omega_x x + \omega_y y) \} \, dx \, dy     (1.3-1)

where ω_x and ω_y are spatial frequencies and i = \sqrt{-1}. Notationally, the Fourier transform is written as

F(\omega_x, \omega_y) = O_F\{ F(x, y) \}     (1.3-2)

In general, the Fourier coefficient F ( ω x, ω y ) is a complex number that may be represented in real and imaginary form,
F(\omega_x, \omega_y) = R(\omega_x, \omega_y) + i I(\omega_x, \omega_y)     (1.3-3a)

or in magnitude and phase-angle form,

F(\omega_x, \omega_y) = M(\omega_x, \omega_y) \exp\{ i \phi(\omega_x, \omega_y) \}     (1.3-3b)

where

M(\omega_x, \omega_y) = \left[ R^2(\omega_x, \omega_y) + I^2(\omega_x, \omega_y) \right]^{1/2}     (1.3-4a)

\phi(\omega_x, \omega_y) = \arctan\left\{ \frac{I(\omega_x, \omega_y)}{R(\omega_x, \omega_y)} \right\}     (1.3-4b)
A sufficient condition for the existence of the Fourier transform of F(x, y) is that the function be absolutely integrable. That is,

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left| F(x, y) \right| \, dx \, dy < \infty     (1.3-5)

The input function F(x, y) can be recovered from its Fourier transform by the inversion formula
F(x, y) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\omega_x, \omega_y) \exp\{ i (\omega_x x + \omega_y y) \} \, d\omega_x \, d\omega_y     (1.3-6a)

or in operator form
F(x, y) = O_F^{-1}\{ F(\omega_x, \omega_y) \}     (1.3-6b)

The functions F(x, y) and F ( ω x, ω y ) are called Fourier transform pairs.


The two-dimensional Fourier transform can be computed in two steps as a result of the separability of the kernel. Thus, let
F_y(\omega_x, y) = \int_{-\infty}^{\infty} F(x, y) \exp\{ -i \omega_x x \} \, dx     (1.3-7)

then
F(\omega_x, \omega_y) = \int_{-\infty}^{\infty} F_y(\omega_x, y) \exp\{ -i \omega_y y \} \, dy     (1.3-8)
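
The separability exploited in Eqs. 1.3-7 and 1.3-8 carries over directly to sampled data: a two-dimensional discrete Fourier transform can be computed as one-dimensional transforms of the rows followed by one-dimensional transforms of the resulting columns. The standalone C sketch below checks this on a small 4 x 4 array of illustrative values; it uses naive O(N^2) sums rather than an FFT, purely for clarity.

/* Two-step (row/column) evaluation of a discrete 2-D Fourier transform,
   the sampled analog of Eqs. 1.3-7 and 1.3-8, checked against direct
   evaluation of the double sum.  The 4x4 test image holds illustrative
   values only. */
#include <stdio.h>
#include <math.h>
#include <complex.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 4

int main(void)
{
    double f[N][N] = {
        {1, 2, 3, 4},
        {4, 3, 2, 1},
        {0, 1, 0, 1},
        {2, 2, 2, 2}
    };
    double complex direct[N][N], rows[N][N], twostep[N][N];
    int u, v, x, y;
    double maxdiff = 0.0;

    /* direct double-sum transform */
    for (v = 0; v < N; v++)
        for (u = 0; u < N; u++) {
            double complex s = 0.0;
            for (y = 0; y < N; y++)
                for (x = 0; x < N; x++)
                    s += f[y][x] * cexp(-I * 2.0 * M_PI * (u * x + v * y) / N);
            direct[v][u] = s;
        }

    /* step 1: 1-D transform of each row (analog of Eq. 1.3-7) */
    for (y = 0; y < N; y++)
        for (u = 0; u < N; u++) {
            double complex s = 0.0;
            for (x = 0; x < N; x++)
                s += f[y][x] * cexp(-I * 2.0 * M_PI * u * x / N);
            rows[y][u] = s;
        }

    /* step 2: 1-D transform of each resulting column (analog of Eq. 1.3-8) */
    for (u = 0; u < N; u++)
        for (v = 0; v < N; v++) {
            double complex s = 0.0;
            for (y = 0; y < N; y++)
                s += rows[y][u] * cexp(-I * 2.0 * M_PI * v * y / N);
            twostep[v][u] = s;
        }

    for (v = 0; v < N; v++)
        for (u = 0; u < N; u++) {
            double d = cabs(direct[v][u] - twostep[v][u]);
            if (d > maxdiff) maxdiff = d;
        }
    printf("maximum |direct - two-step| = %.2e\n", maxdiff);
    return 0;
}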

Several useful properties of the two-dimensional Fourier transform are stated below. Proofs are given in References 1 and 2.

Separability. If the image function is spatially separable such that
F(x, y) = f_x(x) \, f_y(y)     (1.3-9)

then
F(\omega_x, \omega_y) = f_x(\omega_x) \, f_y(\omega_y)     (1.3-10)

where f_x(ω_x) and f_y(ω_y) are one-dimensional Fourier transforms of f_x(x) and f_y(y), respectively. Also, if F(x, y) and F(ω_x, ω_y) are two-dimensional Fourier transform pairs, the Fourier transform of F*(x, y) is F*(−ω_x, −ω_y). An asterisk (*) used as a superscript denotes complex conjugation of a variable (i.e., if F = A + iB, then F* = A − iB). Finally, if F(x, y) is symmetric such that F(x, y) = F(−x, −y), then F(ω_x, ω_y) = F(−ω_x, −ω_y).

Linearity. The Fourier transform is a linear operator. Thus
O_F\{ a F_1(x, y) + b F_2(x, y) \} = a F_1(\omega_x, \omega_y) + b F_2(\omega_x, \omega_y)     (1.3-11)

where a and b are constants.

Scaling. A linear scaling of the spatial variables results in an inverse scaling of the spatial frequencies as given by

O_F\{ F(ax, by) \} = \frac{1}{|ab|} \, F\left( \frac{\omega_x}{a}, \frac{\omega_y}{b} \right)     (1.3-12)

Hence, stretching of an axis in one domain results in a contraction of the corresponding axis in the other domain plus an amplitude change.

Shift. A positional shift in the input plane results in a phase shift in the output plane:

O_F\{ F(x - a, y - b) \} = F(\omega_x, \omega_y) \exp\{ -i (\omega_x a + \omega_y b) \}     (1.3-13a)

Alternatively, a frequency shift in the Fourier plane results in the equivalence
O_F^{-1}\{ F(\omega_x - a, \omega_y - b) \} = F(x, y) \exp\{ i (a x + b y) \}     (1.3-13b)

Convolution. The two-dimensional Fourier transform of two convolved functions is equal to the product of the transforms of the functions. Thus
O_F\{ F(x, y) * H(x, y) \} = F(\omega_x, \omega_y) \, H(\omega_x, \omega_y)     (1.3-14)

The inverse theorem states that
O_F\{ F(x, y) \, H(x, y) \} = \frac{1}{4\pi^2} \, F(\omega_x, \omega_y) * H(\omega_x, \omega_y)     (1.3-15)

Parseval's Theorem. The energy in the spatial and Fourier transform domains is related by

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left| F(x, y) \right|^2 dx \, dy = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left| F(\omega_x, \omega_y) \right|^2 d\omega_x \, d\omega_y     (1.3-16)

Autocorrelation Theorem. The Fourier transform of the spatial autocorrelation of a function is equal to the magnitude squared of its Fourier transform. Hence

O_F\left\{ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\alpha, \beta) \, F^{*}(\alpha - x, \beta - y) \, d\alpha \, d\beta \right\} = \left| F(\omega_x, \omega_y) \right|^2     (1.3-17)

Spatial Differentials. The Fourier transform of the directional derivative of an image function is related to the Fourier transform by

O_F\left\{ \frac{\partial F(x, y)}{\partial x} \right\} = -i \omega_x F(\omega_x, \omega_y)     (1.3-18a)

O_F\left\{ \frac{\partial F(x, y)}{\partial y} \right\} = -i \omega_y F(\omega_x, \omega_y)     (1.3-18b)

Consequently, the Fourier transform of the Laplacian of an image function is equal to
O_F\left\{ \frac{\partial^2 F(x, y)}{\partial x^2} + \frac{\partial^2 F(x, y)}{\partial y^2} \right\} = -\left( \omega_x^2 + \omega_y^2 \right) F(\omega_x, \omega_y)     (1.3-19)

The Fourier transform convolution theorem stated by Eq. 1.3-14 is an extremely useful tool for the analysis of additive linear systems. Consider an image function F ( x, y ) that is the input to an additive linear system with an impulse response H ( x, y ) . The output image function is given by the convolution integral
G(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\alpha, \beta) \, H(x - \alpha, y - \beta) \, d\alpha \, d\beta     (1.3-20)

Taking the Fourier transform of both sides of Eq. 1.3-20 and reversing the order of integration on the right-hand side results in
G(\omega_x, \omega_y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\alpha, \beta) \left[ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} H(x - \alpha, y - \beta) \exp\{ -i (\omega_x x + \omega_y y) \} \, dx \, dy \right] d\alpha \, d\beta     (1.3-21)

By the Fourier transform shift theorem of Eq. 1.3-13, the inner integral is equal to the Fourier transform of H(x, y) multiplied by an exponential phase-shift factor. Thus
G(\omega_x, \omega_y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\alpha, \beta) \, H(\omega_x, \omega_y) \exp\{ -i (\omega_x \alpha + \omega_y \beta) \} \, d\alpha \, d\beta     (1.3-22)

Performing the indicated Fourier transformation gives
G(\omega_x, \omega_y) = H(\omega_x, \omega_y) \, F(\omega_x, \omega_y)     (1.3-23)

Then an inverse transformation of Eq. 1.3-23 provides the output image function
G(x, y) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} H(\omega_x, \omega_y) \, F(\omega_x, \omega_y) \exp\{ i (\omega_x x + \omega_y y) \} \, d\omega_x \, d\omega_y     (1.3-24)


Equations 1.3-20 and 1.3-24 represent two alternative means of determining the output image response of an additive, linear, space-invariant system. The analytic or operational choice between the two approaches, convolution or Fourier processing, is usually problem dependent.
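
The convolution theorem of Eqs. 1.3-14 and 1.3-23 is easy to verify numerically on sampled data. The standalone C sketch below does so for a length-8 circular convolution (one-dimensional for brevity; the two-dimensional case is identical in structure). The sequence and filter values are illustrative assumptions, and the naive discrete Fourier transform merely stands in for the fast algorithms discussed in Chapter 9.

/* Discrete check of the convolution theorem (Eqs. 1.3-14 and 1.3-23):
   the DFT of a (circular) convolution equals the product of the DFTs.
   A one-dimensional sequence of length 8 is used for brevity; the
   sample values are illustrative only. */
#include <stdio.h>
#include <math.h>
#include <complex.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 8

static void dft(const double complex *in, double complex *out)
{
    int u, x;
    for (u = 0; u < N; u++) {
        out[u] = 0.0;
        for (x = 0; x < N; x++)
            out[u] += in[x] * cexp(-I * 2.0 * M_PI * u * x / N);
    }
}

int main(void)
{
    double complex f[N] = {1, 3, 2, 0, 1, 4, 2, 1};
    double complex h[N] = {0.25, 0.5, 0.25, 0, 0, 0, 0, 0};
    double complex g[N], F[N], H[N], G[N];
    int x, k, u;
    double maxdiff = 0.0;

    /* circular (cyclic) convolution g = f * h in the spatial domain */
    for (x = 0; x < N; x++) {
        g[x] = 0.0;
        for (k = 0; k < N; k++)
            g[x] += f[k] * h[(x - k + N) % N];
    }

    dft(f, F);
    dft(h, H);
    dft(g, G);

    /* G(u) should equal F(u) * H(u), the discrete form of Eq. 1.3-23 */
    for (u = 0; u < N; u++) {
        double d = cabs(G[u] - F[u] * H[u]);
        if (d > maxdiff) maxdiff = d;
    }
    printf("maximum |DFT{f*h} - DFT{f} DFT{h}| = %.2e\n", maxdiff);
    return 0;
}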

1.4. IMAGE STOCHASTIC CHARACTERIZATION

The following presentation on the statistical characterization of images assumes general familiarity with probability theory, random variables, and stochastic processes. References 2 and 4 to 7 can provide suitable background. The primary purpose of the discussion here is to introduce notation and develop stochastic image models. It is often convenient to regard an image as a sample of a stochastic process. For continuous images, the image function F(x, y, t) is assumed to be a member of a continuous three-dimensional stochastic process with space variables (x, y) and time variable (t). The stochastic process F(x, y, t) can be described completely by knowledge of its joint probability density

p\{ F_1, F_2, \ldots, F_J ;\ x_1, y_1, t_1, x_2, y_2, t_2, \ldots, x_J, y_J, t_J \}

for all sample points J, where (x_j, y_j, t_j) represent space and time samples of the image function F_j(x_j, y_j, t_j). In general, high-order joint probability densities of images are usually not known, nor are they easily modeled. The first-order probability density p(F; x, y, t) can sometimes be modeled successfully on the basis of the physics of the process or histogram measurements. For example, the first-order probability density of random noise from an electronic sensor is usually well modeled by a Gaussian density of the form

p\{ F; x, y, t \} = \left[ 2\pi \sigma_F^2(x, y, t) \right]^{-1/2} \exp\left\{ -\frac{\left[ F(x, y, t) - \eta_F(x, y, t) \right]^2}{2 \sigma_F^2(x, y, t)} \right\}     (1.4-1)

where the parameters η_F(x, y, t) and σ_F^2(x, y, t) denote the mean and variance of the process. The Gaussian density is also a reasonably accurate model for the probability density of the amplitude of unitary transform coefficients of an image. The probability density of the luminance function must be a one-sided density because the luminance measure is positive. Models that have found application include the Rayleigh density,

p\{ F; x, y, t \} = \frac{F(x, y, t)}{\alpha^2} \exp\left\{ -\frac{\left[ F(x, y, t) \right]^2}{2 \alpha^2} \right\}     (1.4-2a)

the log-normal density,

p\{ F; x, y, t \} = \left[ 2\pi F^2(x, y, t) \, \sigma_F^2(x, y, t) \right]^{-1/2} \exp\left\{ -\frac{\left[ \log\{ F(x, y, t) \} - \eta_F(x, y, t) \right]^2}{2 \sigma_F^2(x, y, t)} \right\}     (1.4-2b)

and the exponential density,

p\{ F; x, y, t \} = \alpha \exp\{ -\alpha F(x, y, t) \}     (1.4-2c)

all defined for F ≥ 0, where α is a constant. The two-sided, or Laplacian density,

p\{ F; x, y, t \} = \frac{\alpha}{2} \exp\{ -\alpha |F(x, y, t)| \}     (1.4-3)

where α is a constant, is often selected as a model for the probability density of the difference of image samples. Finally, the uniform density
p\{ F; x, y, t \} = \frac{1}{2\pi}     (1.4-4)

for −π ≤ F ≤ π is a common model for phase fluctuations of a random process. Conditional probability densities are also useful in characterizing a stochastic process. The conditional density of an image function evaluated at (x_1, y_1, t_1) given knowledge of the image function at (x_2, y_2, t_2) is defined as

p\{ F_1; x_1, y_1, t_1 \mid F_2; x_2, y_2, t_2 \} = \frac{p\{ F_1, F_2; x_1, y_1, t_1, x_2, y_2, t_2 \}}{p\{ F_2; x_2, y_2, t_2 \}}     (1.4-5)

Higher-order conditional densities are defined in a similar manner. Another means of describing a stochastic process is through computation of its ensemble averages. The first moment or mean of the image function is defined as

\eta_F(x, y, t) = E\{ F(x, y, t) \} = \int_{-\infty}^{\infty} F(x, y, t) \, p\{ F; x, y, t \} \, dF     (1.4-6)

where E { · } is the expectation operator, as defined by the right-hand side of Eq. 1.4-6. The second moment or autocorrelation function is given by
R(x_1, y_1, t_1; x_2, y_2, t_2) = E\{ F(x_1, y_1, t_1) \, F^{*}(x_2, y_2, t_2) \}     (1.4-7a)

or in explicit form


R(x_1, y_1, t_1; x_2, y_2, t_2) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(x_1, y_1, t_1) \, F^{*}(x_2, y_2, t_2) \, p\{ F_1, F_2; x_1, y_1, t_1, x_2, y_2, t_2 \} \, dF_1 \, dF_2     (1.4-7b)

The autocovariance of the image process is the autocorrelation about the mean, defined as

K(x_1, y_1, t_1; x_2, y_2, t_2) = E\{ [ F(x_1, y_1, t_1) - \eta_F(x_1, y_1, t_1) ] \, [ F^{*}(x_2, y_2, t_2) - \eta_F^{*}(x_2, y_2, t_2) ] \}     (1.4-8a)

or

K(x_1, y_1, t_1; x_2, y_2, t_2) = R(x_1, y_1, t_1; x_2, y_2, t_2) - \eta_F(x_1, y_1, t_1) \, \eta_F^{*}(x_2, y_2, t_2)     (1.4-8b)

Finally, the variance of an image process is
\sigma_F^2(x, y, t) = K(x, y, t; x, y, t)     (1.4-9)
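
For a concrete sense of Eqs. 1.4-6 through 1.4-9, the ensemble moments can be estimated by sample averages over many realizations. The standalone C sketch below does this for a synthetic stationary process consisting of a constant mean plus Gaussian noise; the Box-Muller generator, the use of rand(), and the chosen parameter values are illustrative assumptions adequate only for a demonstration.

/* Sample-average estimates of the ensemble moments of Eqs. 1.4-6, 1.4-8
   and 1.4-9 for a synthetic stationary process:
   F = eta + sigma * (unit-variance Gaussian noise). */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static double gauss(void)   /* zero-mean, unit-variance sample (Box-Muller) */
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
}

int main(void)
{
    const double eta = 10.0, sigma = 2.0;   /* model mean and std. deviation */
    const int n = 200000;                   /* number of sample realizations */
    double sum = 0.0, sumsq = 0.0;
    int k;

    srand(12345);
    for (k = 0; k < n; k++) {
        double f = eta + sigma * gauss();
        sum += f;
        sumsq += f * f;
    }

    {
        double mean = sum / n;            /* estimate of the mean, Eq. 1.4-6  */
        double r0 = sumsq / n;            /* autocorrelation at zero shift     */
        double var = r0 - mean * mean;    /* Eqs. 1.4-8b and 1.4-9             */
        printf("estimated mean     = %.4f (model %.4f)\n", mean, eta);
        printf("estimated variance = %.4f (model %.4f)\n", var, sigma * sigma);
    }
    return 0;
}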

An image process is called stationary in the strict sense if its moments are unaffected by shifts in the space and time origins. The image process is said to be stationary in the wide sense if its mean is constant and its autocorrelation is dependent on the differences in the image coordinates, x1 – x2, y1 – y2, t1 – t2, and not on their individual values. In other words, the image autocorrelation is not a function of position or time. For stationary image processes,
E\{ F(x, y, t) \} = \eta_F     (1.4-10a)

R(x_1, y_1, t_1; x_2, y_2, t_2) = R(x_1 - x_2, y_1 - y_2, t_1 - t_2)     (1.4-10b)

The autocorrelation expression may then be written as
R(\tau_x, \tau_y, \tau_t) = E\{ F(x + \tau_x, y + \tau_y, t + \tau_t) \, F^{*}(x, y, t) \}     (1.4-11)


Because
R(-\tau_x, -\tau_y, -\tau_t) = R^{*}(\tau_x, \tau_y, \tau_t)     (1.4-12)

then for an image function with F real, the autocorrelation is real and an even function of τ x, τ y, τ t . The power spectral density, also called the power spectrum, of a stationary image process is defined as the three-dimensional Fourier transform of its autocorrelation function as given by
W(\omega_x, \omega_y, \omega_t) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} R(\tau_x, \tau_y, \tau_t) \exp\{ -i (\omega_x \tau_x + \omega_y \tau_y + \omega_t \tau_t) \} \, d\tau_x \, d\tau_y \, d\tau_t     (1.4-13)
In many imaging systems, the spatial and time image processes are separable so that the stationary correlation function may be written as
R(\tau_x, \tau_y, \tau_t) = R_{xy}(\tau_x, \tau_y) \, R_t(\tau_t)     (1.4-14)

Furthermore, the spatial autocorrelation function is often considered as the product of x and y axis autocorrelation functions,
R_{xy}(\tau_x, \tau_y) = R_x(\tau_x) \, R_y(\tau_y)     (1.4-15)

for computational simplicity. For scenes of manufactured objects, there is often a large amount of horizontal and vertical image structure, and the spatial separation approximation may be quite good. In natural scenes, there usually is no preferential direction of correlation; the spatial autocorrelation function tends to be rotationally symmetric and not separable. An image field is often modeled as a sample of a first-order Markov process for which the correlation between points on the image field is proportional to their geometric separation. The autocovariance function for the two-dimensional Markov process is

R_{xy}(\tau_x, \tau_y) = C \exp\left\{ -\sqrt{ \alpha_x^2 \tau_x^2 + \alpha_y^2 \tau_y^2 } \right\}     (1.4-16)

where C is an energy scaling constant and α x and α y are spatial scaling constants. The corresponding power spectrum is
W(\omega_x, \omega_y) = \frac{2C}{\alpha_x \alpha_y} \cdot \frac{1}{1 + \left[ \omega_x^2 / \alpha_x^2 + \omega_y^2 / \alpha_y^2 \right]}     (1.4-17)


As a simplifying assumption, the Markov process is often assumed to be of separable form with an autocovariance function
K_{xy}(\tau_x, \tau_y) = C \exp\{ -\alpha_x |\tau_x| - \alpha_y |\tau_y| \}     (1.4-18)

The power spectrum of this process is
W(\omega_x, \omega_y) = \frac{4 \alpha_x \alpha_y C}{\left( \alpha_x^2 + \omega_x^2 \right) \left( \alpha_y^2 + \omega_y^2 \right)}     (1.4-19)
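
Because the covariance of Eq. 1.4-18 is separable, the Fourier pair relating it to Eq. 1.4-19 can be checked one axis at a time: the transform of exp(−α|τ|) should equal 2α/(α² + ω²). The standalone C sketch below performs this check by direct numerical integration; the value of α and the integration grid are illustrative assumptions.

/* Numerical check of the one-dimensional Fourier pair behind Eqs. 1.4-18
   and 1.4-19.  The scaling constant and grid parameters are illustrative. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double a = 0.8;        /* spatial scaling constant alpha_x */
    const double tmax = 40.0;    /* integration range, effectively infinite */
    const double dt = 0.001;     /* integration step */
    double w;

    for (w = 0.0; w <= 4.0; w += 1.0) {
        double numeric = 0.0, tau;
        for (tau = -tmax; tau <= tmax; tau += dt)
            numeric += exp(-a * fabs(tau)) * cos(w * tau) * dt;
        printf("w = %.1f   numeric = %.5f   analytic 2a/(a^2+w^2) = %.5f\n",
               w, numeric, 2.0 * a / (a * a + w * w));
    }
    return 0;
}

The full two-dimensional spectrum of Eq. 1.4-19 is simply C times the product of two such one-dimensional factors.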

In the discussion of the deterministic characteristics of an image, both time and space averages of the image function have been defined. An ensemble average has also been defined for the statistical image characterization. A question of interest is: What is the relationship between the spatial-time averages and the ensemble averages? The answer is that for certain stochastic processes, which are called ergodic processes, the spatial-time averages and the ensemble averages are equal. Proof of the ergodicity of a process in the general case is often difficult; it usually suffices to determine second-order ergodicity in which the first- and second-order space-time averages are equal to the first- and second-order ensemble averages. Often, the probability density or moments of a stochastic image field are known at the input to a system, and it is desired to determine the corresponding information at the system output. If the system transfer function is algebraic in nature, the output probability density can be determined in terms of the input probability density by a probability density transformation. For example, let the system output be related to the system input by
G(x, y, t) = O_F\{ F(x, y, t) \}     (1.4-20)

where O_F{·} is a monotonic operator on F(x, y). The probability density of the output field is then

p\{ G; x, y, t \} = \frac{p\{ F; x, y, t \}}{dO_F\{ F(x, y, t) \} / dF}     (1.4-21)

The extension to higher-order probability densities is straightforward, but often cumbersome. The moments of the output of a system can be obtained directly from knowledge of the output probability density, or in certain cases, indirectly in terms of the system operator. For example, if the system operator is additive linear, the mean of the system output is

E\{ G(x, y, t) \} = E\{ O_F\{ F(x, y, t) \} \} = O_F\{ E\{ F(x, y, t) \} \}     (1.4-22)

It can be shown that if a system operator is additive linear, and if the system input image field is stationary in the strict sense, the system output is also stationary in the strict sense. Furthermore, if the input is stationary in the wide sense, the output is also wide-sense stationary. Consider an additive linear space-invariant system whose output is described by the three-dimensional convolution integral
G(x, y, t) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(x - \alpha, y - \beta, t - \gamma) \, H(\alpha, \beta, \gamma) \, d\alpha \, d\beta \, d\gamma     (1.4-23)

where H(x, y, t) is the system impulse response. The mean of the output is then
E\{ G(x, y, t) \} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} E\{ F(x - \alpha, y - \beta, t - \gamma) \} \, H(\alpha, \beta, \gamma) \, d\alpha \, d\beta \, d\gamma     (1.4-24)

If the input image field is stationary, its mean η F is a constant that may be brought outside the integral. As a result,
E\{ G(x, y, t) \} = \eta_F \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} H(\alpha, \beta, \gamma) \, d\alpha \, d\beta \, d\gamma = \eta_F \, H(0, 0, 0)     (1.4-25)

where H ( 0, 0, 0 ) is the transfer function of the linear system evaluated at the origin in the spatial-time frequency domain. Following the same techniques, it can easily be shown that the autocorrelation functions of the system input and output are related by
R_G(\tau_x, \tau_y, \tau_t) = R_F(\tau_x, \tau_y, \tau_t) * H(\tau_x, \tau_y, \tau_t) * H^{*}(-\tau_x, -\tau_y, -\tau_t)     (1.4-26)

Taking Fourier transforms on both sides of Eq. 1.4-26 and invoking the Fourier transform convolution theorem, one obtains the relationship between the power spectra of the input and output image,
W_G(\omega_x, \omega_y, \omega_t) = W_F(\omega_x, \omega_y, \omega_t) \, H(\omega_x, \omega_y, \omega_t) \, H^{*}(\omega_x, \omega_y, \omega_t)     (1.4-27a)

or
W_G(\omega_x, \omega_y, \omega_t) = W_F(\omega_x, \omega_y, \omega_t) \left| H(\omega_x, \omega_y, \omega_t) \right|^2     (1.4-27b)

This result is found useful in analyzing the effect of noise in imaging systems.
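
Equation 1.4-25 states that a linear space-invariant system scales the mean of a stationary input by the zero-frequency gain of its impulse response. The standalone C sketch below illustrates this for a sampled one-dimensional signal (chosen for brevity); the impulse response values, the uniform noise model, and the skipped border samples are illustrative assumptions.

/* Illustration of Eq. 1.4-25 for a sampled signal: the output mean of a
   linear space-invariant filter equals the input mean times the sum of
   the impulse response samples (the zero-frequency gain). */
#include <stdio.h>
#include <stdlib.h>

#define N 100000
#define K 5

int main(void)
{
    static double f[N], g[N];
    const double h[K] = {0.5, 1.0, 0.5, 0.25, 0.25};   /* impulse response */
    const double eta = 4.0;                            /* input mean */
    double dcgain = 0.0, inmean = 0.0, outmean = 0.0;
    int n, k;

    srand(7);
    for (k = 0; k < K; k++) dcgain += h[k];

    /* stationary input: constant mean plus zero-mean uniform noise */
    for (n = 0; n < N; n++)
        f[n] = eta + ((double)rand() / RAND_MAX - 0.5);

    /* filter the interior samples (borders skipped for simplicity) */
    for (n = K; n < N; n++) {
        g[n] = 0.0;
        for (k = 0; k < K; k++)
            g[n] += h[k] * f[n - k];
    }

    for (n = K; n < N; n++) { inmean += f[n]; outmean += g[n]; }
    inmean /= (N - K);
    outmean /= (N - K);

    printf("input mean            = %.4f\n", inmean);
    printf("output mean           = %.4f\n", outmean);
    printf("input mean x H(0,0,0) = %.4f\n", inmean * dcgain);
    return 0;
}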


REFERENCES
1. J. W. Goodman, Introduction to Fourier Optics, 2nd ed., McGraw-Hill, New York, 1996.
2. A. Papoulis, Systems and Transforms with Applications in Optics, McGraw-Hill, New York, 1968.
3. J. M. S. Prewitt, "Object Enhancement and Extraction," in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970.
4. A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd ed., McGraw-Hill, New York, 1991.
5. J. B. Thomas, An Introduction to Applied Probability Theory and Random Processes, Wiley, New York, 1971.
6. J. W. Goodman, Statistical Optics, Wiley, New York, 1985.
7. E. R. Dougherty, Random Processes for Image and Signal Processing, Vol. PM44, SPIE Press, Bellingham, Wash., 1998.


2
PSYCHOPHYSICAL VISION PROPERTIES

For efficient design of imaging systems for which the output is a photograph or display to be viewed by a human observer, it is obviously beneficial to have an understanding of the mechanism of human vision. Such knowledge can be utilized to develop conceptual models of the human visual process. These models are vital in the design of image processing systems and in the construction of measures of image fidelity and intelligibility.

2.1. LIGHT PERCEPTION

Light, according to Webster's Dictionary (1), is "radiant energy which, by its action on the organs of vision, enables them to perform their function of sight." Much is known about the physical properties of light, but the mechanisms by which light interacts with the organs of vision are not as well understood. Light is known to be a form of electromagnetic radiation lying in a relatively narrow region of the electromagnetic spectrum over a wavelength band of about 350 to 780 nanometers (nm). A physical light source may be characterized by the rate of radiant energy (radiant intensity) that it emits at a particular spectral wavelength. Light entering the human visual system originates either from a self-luminous source or from light reflected from some object or from light transmitted through some translucent object. Let E(λ) represent the spectral energy distribution of light emitted from some primary light source, and also let t(λ) and r(λ) denote the wavelength-dependent transmissivity and reflectivity, respectively, of an object. Then, for a transmissive object, the observed light spectral energy distribution is
C(\lambda) = t(\lambda) \, E(\lambda)     (2.1-1)

FIGURE 2.1-1. Spectral energy distributions of common physical light sources.

and for a reflective object
C(\lambda) = r(\lambda) \, E(\lambda)     (2.1-2)

Figure 2.1-1 shows plots of the spectral energy distribution of several common sources of light encountered in imaging systems: sunlight, a tungsten lamp, a
light-emitting diode, a mercury arc lamp, and a helium–neon laser (2). A human being viewing each of the light sources will perceive the sources differently. Sunlight appears as an extremely bright yellowish-white light, while the tungsten light bulb appears less bright and somewhat yellowish. The light-emitting diode appears to be a dim green; the mercury arc light is a highly bright bluish-white light; and the laser produces an extremely bright and pure red beam. These observations provoke many questions. What are the attributes of the light sources that cause them to be perceived differently? Is the spectral energy distribution sufficient to explain the differences in perception? If not, what are adequate descriptors of visual perception? As will be seen, answers to these questions are only partially available. There are three common perceptual descriptors of a light sensation: brightness, hue, and saturation. The characteristics of these descriptors are considered below. If two light sources with the same spectral shape are observed, the source of greater physical intensity will generally appear to be perceptually brighter. However, there are numerous examples in which an object of uniform intensity appears not to be of uniform brightness. Therefore, intensity is not an adequate quantitative measure of brightness. The attribute of light that distinguishes a red light from a green light or a yellow light, for example, is called the hue of the light. A prism and slit arrangement (Figure 2.1-2) can produce narrowband wavelength light of varying color. However, it is clear that the light wavelength is not an adequate measure of color because some colored lights encountered in nature are not contained in the rainbow of light produced by a prism. For example, purple light is absent. Purple light can be produced by combining equal amounts of red and blue narrowband lights. Other counterexamples exist. If two light sources with the same spectral energy distribution are observed under identical conditions, they will appear to possess the same hue. However, it is possible to have two light sources with different spectral energy distributions that are perceived identically. Such lights are called metameric pairs. The third perceptual descriptor of a colored light is its saturation, the attribute that distinguishes a spectral light from a pastel light of the same hue. In effect, saturation describes the whiteness of a light source. Although it is possible to speak of the percentage saturation of a color referenced to a spectral color on a chromaticity diagram of the type shown in Figure 3.3-3, saturation is not usually considered to be a quantitative measure.

FIGURE 2.1-2. Refraction of light from a prism.


FIGURE 2.1-3. Perceptual representation of light.

As an aid to classifying colors, it is convenient to regard colors as being points in some color solid, as shown in Figure 2.1-3. The Munsell system of color classification actually has a form similar in shape to this figure (3). However, to be quantitatively useful, a color solid should possess metric significance. That is, a unit distance within the color solid should represent a constant perceptual color difference regardless of the particular pair of colors considered. The subject of perceptually significant color solids is considered later.

2.2. EYE PHYSIOLOGY

A conceptual technique for the establishment of a model of the human visual system would be to perform a physiological analysis of the eye, the nerve paths to the brain, and those parts of the brain involved in visual perception. Such a task, of course, is presently beyond human abilities because of the large number of infinitesimally small elements in the visual chain. However, much has been learned from physiological studies of the eye that is helpful in the development of visual models (4–7).


FIGURE 2.2-1. Eye cross section.

Figure 2.2-1 shows the horizontal cross section of a human eyeball. The front of the eye is covered by a transparent surface called the cornea. The remaining outer cover, called the sclera, is composed of a fibrous coat that surrounds the choroid, a layer containing blood capillaries. Inside the choroid is the retina, which is composed of two types of receptors: rods and cones. Nerves connecting to the retina leave the eyeball through the optic nerve bundle. Light entering the cornea is focused on the retina surface by a lens that changes shape under muscular control to perform proper focusing of near and distant objects. An iris acts as a diaphragm to control the amount of light entering the eye.

The rods in the retina are long slender receptors; the cones are generally shorter and thicker in structure. There are also important operational distinctions. The rods are more sensitive than the cones to light. At low levels of illumination, the rods provide a visual response called scotopic vision. Cones respond to higher levels of illumination; their response is called photopic vision. Figure 2.2-2 illustrates the relative sensitivities of rods and cones as a function of illumination wavelength (7,8).

FIGURE 2.2-2. Sensitivity of rods and cones based on measurements by Wald.

An eye contains about 6.5 million cones and about 100 million rods distributed over the retina (4). Figure 2.2-3 shows the distribution of rods and cones over a horizontal line on the retina (4). At a point near the optic nerve called the fovea, the density of cones is greatest. This is the region of sharpest photopic vision. There are no rods or cones in the vicinity of the optic nerve, and hence the eye has a blind spot in this region.

FIGURE 2.2-3. Distribution of rods and cones on the retina.


FIGURE 2.2-4. Typical spectral absorption curves of pigments of the retina.

In recent years, it has been determined experimentally that there are three basic types of cones in the retina (9, 10). These cones have different absorption characteristics as a function of wavelength with peak absorptions in the red, green, and blue regions of the optical spectrum. Figure 2.2-4 shows curves of the measured spectral absorption of pigments in the retina for a particular subject (10). Two major points of note regarding the curves are that the α cones, which are primarily responsible for blue light perception, have relatively low sensitivity, and the absorption curves overlap considerably. The existence of the three types of cones provides a physiological basis for the trichromatic theory of color vision.

When a light stimulus activates a rod or cone, a photochemical transition occurs, producing a nerve impulse. The manner in which nerve impulses propagate through the visual system is presently not well established. It is known that the optic nerve bundle contains on the order of 800,000 nerve fibers. Because there are over 100,000,000 receptors in the retina, it is obvious that in many regions of the retina, the rods and cones must be interconnected to nerve fibers on a many-to-one basis. Because neither the photochemistry of the retina nor the propagation of nerve impulses within the eye is well understood, a deterministic characterization of the visual process is unavailable. One must be satisfied with the establishment of models that characterize, and hopefully predict, human visual response. The following section describes several visual phenomena that should be considered in the modeling of the human visual process.

2.3. VISUAL PHENOMENA

The visual phenomena described below are interrelated, in some cases only minimally, but in others, to a very large extent. For simplicity of presentation and, in some instances, for lack of knowledge, the phenomena are considered disjoint.


FIGURE 2.3-1. Contrast sensitivity measurements: (a) no background; (b) with background.

Contrast Sensitivity. The response of the eye to changes in the intensity of illumination is known to be nonlinear. Consider a patch of light of intensity I + ∆I surrounded by a background of intensity I (Figure 2.3-1a). The just noticeable difference ∆I is to be determined as a function of I. Over a wide range of intensities, it is found that the ratio ∆I/I, called the Weber fraction, is nearly constant at a value of about 0.02 (11; 12, p. 62). This result does not hold at very low or very high intensities, as illustrated by Figure 2.3-1a (13). Furthermore, contrast sensitivity is dependent on the intensity of the surround. Consider the experiment of Figure 2.3-1b, in which two patches of light, one of intensity I and the other of intensity I + ∆I, are surrounded by light of intensity I_o. The Weber fraction ∆I/I for this experiment is plotted in Figure 2.3-1b as a function of the intensity of the background. In this situation it is found that the range over which the Weber fraction remains constant is reduced considerably compared to the experiment of Figure 2.3-1a. The envelope of the lower limits of the curves of Figure 2.3-1b is equivalent to the curve of Figure 2.3-1a. However, the range over which ∆I/I is approximately constant for a fixed background intensity I_o is still comparable to the dynamic range of most electronic imaging systems.


FIGURE 2.3-2. Mach band effect: (a) step chart photo; (b) step chart intensity distribution; (c) ramp chart photo; (d) ramp chart intensity distribution.


Because the differential of the logarithm of intensity is

d(log I) = dI/I        (2.3-1)

equal changes in the logarithm of the intensity of a light can be related to equal just noticeable changes in its intensity over the region of intensities for which the Weber fraction is constant. For this reason, in many image processing systems, operations are performed on the logarithm of the intensity of an image point rather than on the intensity itself (see the numerical sketch later in this section).

Mach Band. Consider the set of gray scale strips shown in Figure 2.3-2a. The reflected light intensity from each strip is uniform over its width and differs from its neighbors by a constant amount; nevertheless, the visual appearance is that each strip is darker at its right side than at its left. This is called the Mach band effect (14). Figure 2.3-2c is a photograph of the Mach band pattern of Figure 2.3-2d. In the photograph, a bright bar appears at position B and a dark bar appears at D. Neither bar would be predicted purely on the basis of the intensity distribution. The apparent Mach band overshoot in brightness is a consequence of the spatial frequency response of the eye. As will be seen shortly, the eye possesses a lower sensitivity to high and low spatial frequencies than to midfrequencies. The implication for the designer of image processing systems is that perfect fidelity of edge contours can be sacrificed to some extent because the eye has imperfect response to high-spatial-frequency brightness transitions.

Simultaneous Contrast. The simultaneous contrast phenomenon (7) is illustrated in Figure 2.3-3. Each small square is actually the same intensity, but because of the different intensities of the surrounds, the small squares do not appear equally bright. The hue of a patch of light is also dependent on the wavelength composition of surrounding light. A white patch on a black background will appear to be yellowish if the surround is a blue light.

Chromatic Adaption. The hue of a perceived color depends on the adaption of a viewer (15). For example, the American flag will not immediately appear red, white, and blue if the viewer has been subjected to high-intensity red light before viewing the flag. The colors of the flag will appear to shift in hue toward the red complement, cyan.
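Returning to the contrast sensitivity discussion, the following minimal Python sketch builds a ladder of intensities in which each step is one just noticeable difference. The 0.02 Weber fraction is the nominal value quoted above; the starting intensity and the number of steps are arbitrary illustrative choices.

import numpy as np

# Nominal Weber fraction quoted in the text; starting level and step count are arbitrary.
weber = 0.02
levels = [1.0]
for _ in range(10):
    levels.append(levels[-1] * (1.0 + weber))   # each step is one just noticeable difference
levels = np.array(levels)

print(np.diff(levels))           # unequal steps in linear intensity
print(np.diff(np.log(levels)))   # nearly equal steps in log intensity, log(1 + 0.02) = 0.0198...

The equal spacing of the logarithmic differences is the property exploited when image processing operations are applied to the logarithm of intensity.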

FIGURE 2.3-3. Simultaneous contrast.


Color Blindness. Approximately 8% of the males and 1% of the females in the world population are subject to some form of color blindness (16, p. 405). There are various degrees of color blindness. Some people, called monochromats, possess only rods or rods plus one type of cone, and therefore are only capable of monochromatic vision. Dichromats are people who possess two of the three types of cones. Both monochromats and dichromats can distinguish colors insofar as they have learned to associate particular colors with particular objects. For example, dark roses are assumed to be red, and light roses are assumed to be yellow. But if a red rose were painted yellow such that its reflectivity was maintained at the same value, a monochromat might still call the rose red. Similar examples illustrate the inability of dichromats to distinguish hue accurately.

2.4. MONOCHROME VISION MODEL

One of the modern techniques of optical system design entails the treatment of an optical system as a two-dimensional linear system that is linear in intensity and can be characterized by a two-dimensional transfer function (17). Consider the linear optical system of Figure 2.4-1. The system input is a spatial light distribution obtained by passing a constant-intensity light beam through a transparency with a spatial sine-wave transmittance. Because the system is linear, the spatial output intensity distribution will also exhibit sine-wave intensity variations, with possible changes in the amplitude and phase of the output intensity compared to the input intensity. By varying the spatial frequency (number of intensity cycles per linear dimension) of the input transparency and recording the output intensity level and phase, it is possible, in principle, to obtain the optical transfer function (OTF) of the optical system.

FIGURE 2.4-1. Linear systems analysis of an optical system.

Let H(ω_x, ω_y) represent the optical transfer function of a two-dimensional linear system, where ω_x = 2π/T_x and ω_y = 2π/T_y are angular spatial frequencies with spatial periods T_x and T_y in the x and y coordinate directions, respectively. Then, with I_I(x, y) denoting the input intensity distribution of the object and I_O(x, y) representing the output intensity distribution of the image, the frequency spectra of the input and output signals are defined as
I_I(ω_x, ω_y) = ∫∫ I_I(x, y) exp{ –i(ω_x x + ω_y y) } dx dy        (2.4-1)

I_O(ω_x, ω_y) = ∫∫ I_O(x, y) exp{ –i(ω_x x + ω_y y) } dx dy        (2.4-2)

where the integrals extend over –∞ < x, y < ∞.

The input and output intensity spectra are related by
I_O(ω_x, ω_y) = H(ω_x, ω_y) I_I(ω_x, ω_y)        (2.4-3)

The spatial distribution of the image intensity can be obtained by an inverse Fourier transformation of Eq. 2.4-2, yielding
I_O(x, y) = (1/4π²) ∫∫ I_O(ω_x, ω_y) exp{ i(ω_x x + ω_y y) } dω_x dω_y        (2.4-4)

In many systems, the designer is interested only in the magnitude variations of the output intensity with respect to the magnitude variations of the input intensity, not the phase variations. The ratio of the magnitudes of the Fourier transforms of the input and output signals,
|I_O(ω_x, ω_y)| / |I_I(ω_x, ω_y)| = |H(ω_x, ω_y)|        (2.4-5)

is called the modulation transfer function (MTF) of the optical system.

Much effort has been given to application of the linear systems concept to the human visual system (18–24). A typical experiment to test the validity of the linear systems model is as follows. An observer is shown two sine-wave grating transparencies, a reference grating of constant contrast and spatial frequency and a variable-contrast test grating whose spatial frequency is set at a value different from that of the reference. Contrast is defined as the ratio

(max – min)/(max + min)

where max and min are the maximum and minimum of the grating intensity, respectively. The contrast of the test grating is varied until the brightnesses of the bright and dark regions of the two transparencies appear identical. In this manner it is possible to develop a plot of the MTF of the human visual system.
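The ratio definition of Eq. 2.4-5 and the grating contrast defined above can be exercised numerically. The sketch below is a rough illustration rather than a model of the eye: it passes sine-wave gratings of several spatial frequencies through an assumed Gaussian low-pass system and estimates the MTF as the ratio of output to input contrast. The grating amplitude and the Gaussian width are arbitrary choices.

import numpy as np

def modulation(s):
    # Contrast (max - min) / (max + min) of a grating, as defined in the text.
    return (s.max() - s.min()) / (s.max() + s.min())

x = np.linspace(0.0, 1.0, 2048, endpoint=False)
sigma = 0.01                                   # width of an assumed Gaussian blur
h = np.exp(-0.5 * ((x - 0.5) / sigma) ** 2)
h /= h.sum()                                   # unit-area impulse response
H = np.fft.fft(np.fft.ifftshift(h))            # transfer function of the blur

for cycles in (4, 16, 64):                     # spatial frequencies to probe
    grating = 1.0 + 0.5 * np.sin(2.0 * np.pi * cycles * x)   # nonnegative intensity
    output = np.fft.ifft(np.fft.fft(grating) * H).real       # periodic filtering
    print(cycles, modulation(output) / modulation(grating))  # estimated |H| at this frequency

The printed ratios fall off with increasing spatial frequency, which is the behavior a low-pass MTF measurement is meant to reveal.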


FIGURE 2.4-2. Hypothetical measurements of the spatial frequency response of the human visual system.

FIGURE 2.4-3. MTF measurements of the human visual system by modulated sine-wave grating (spatial frequency along one axis, contrast along the other).


FIGURE 2.4-4. Logarithmic model of monochrome vision.

Figure 2.4-2a is a hypothetical plot of the MTF as a function of the input signal contrast. Another indication of the form of the MTF can be obtained by observation of the composite sine-wave grating of Figure 2.4-3, in which spatial frequency increases in one coordinate direction and contrast increases in the other direction. The envelope of the visible bars generally follows the MTF curves of Figure 2.4-2a (23).

Referring to Figure 2.4-2a, it is observed that the MTF measurement depends on the input contrast level. Furthermore, if the input sine-wave grating is rotated relative to the optic axis of the eye, the shape of the MTF is altered somewhat. Thus, it can be concluded that the human visual system, as measured by this experiment, is nonlinear and anisotropic (rotationally variant).

It has been postulated that the nonlinear response of the eye to intensity variations is logarithmic in nature and occurs near the beginning of the visual information processing system, that is, near the rods and cones, before spatial interaction occurs between visual signals from individual rods and cones. Figure 2.4-4 shows a simple logarithmic eye model for monochromatic vision. If the eye exhibits a logarithmic response to input intensity, then if a signal grating contains a recording of an exponential sine wave, that is, exp{ sin{ I_I(x, y) } }, the human visual system can be linearized. A hypothetical MTF obtained by measuring an observer's response to an exponential sine-wave grating (Figure 2.4-2b) can be fitted reasonably well by a single curve for low and mid spatial frequencies. Figure 2.4-5 is a plot of the measured MTF of the human visual system obtained by Davidson (25) for an exponential sine-wave test signal.

FIGURE 2.4-5. MTF measurements with exponential sine-wave grating.


The high-spatial-frequency portion of the curve has been extrapolated for an average input contrast.

The logarithmic/linear system eye model of Figure 2.4-4 has proved to provide a reasonable prediction of visual response over a wide range of intensities. However, at high spatial frequencies and at very low or very high intensities, observed responses depart from responses predicted by the model. To establish a more accurate model, it is necessary to consider the physical mechanisms of the human visual system.

The nonlinear response of rods and cones to intensity variations is still a subject of active research. Hypotheses have been introduced suggesting that the nonlinearity is based on chemical activity, electrical effects, and neural feedback. The basic logarithmic model assumes the form
I_O(x, y) = K_1 log{ K_2 + K_3 I_I(x, y) }        (2.4-6)

where the K_i are constants, I_I(x, y) denotes the input field to the nonlinearity, and I_O(x, y) is its output. Another model that has been suggested (7, p. 253) follows the fractional response

I_O(x, y) = K_1 I_I(x, y) / [ K_2 + I_I(x, y) ]        (2.4-7)

where K_1 and K_2 are constants. Mannos and Sakrison (26) have studied the effect of various nonlinearities employed in an analytical visual fidelity measure. Their results, which are discussed in greater detail in Chapter 3, establish that a power law nonlinearity of the form

I_O(x, y) = [ I_I(x, y) ]^s        (2.4-8)

where s is a constant, typically 1/3 or 1/2, provides good agreement between the visual fidelity measure and subjective assessment. The three models for the nonlinear response of rods and cones defined by Eqs. 2.4-6 to 2.4-8 can be forced to a reasonably close agreement over some mid-intensity range by an appropriate choice of scaling constants.

The physical mechanisms accounting for the spatial frequency response of the eye are partially optical and partially neural. As an optical instrument, the eye has limited resolution because of the finite size of the lens aperture, optical aberrations, and the finite dimensions of the rods and cones. These effects can be modeled by a low-pass transfer function inserted between the receptor and the nonlinear response element. The most significant contributor to the frequency response of the eye is the lateral inhibition process (27).
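For a concrete comparison of the three receptor nonlinearities, the short Python sketch below evaluates Eqs. 2.4-6 to 2.4-8 over a mid-intensity range. The constants K_1, K_2, K_3 and the exponent s = 1/3 are illustrative assumptions only, chosen merely to place the three curves on comparable scales.

import numpy as np

# Receptor nonlinearities of Eqs. 2.4-6 to 2.4-8 over a mid-intensity range.
# All constants below are illustrative assumptions, not values from the text.
I = np.linspace(0.1, 1.0, 10)

log_model   = 1.0 * np.log(0.1 + 1.0 * I)      # Eq. 2.4-6: K1 log{K2 + K3 I}
frac_model  = 1.0 * I / (0.2 + I)              # Eq. 2.4-7: K1 I / (K2 + I)
power_model = I ** (1.0 / 3.0)                 # Eq. 2.4-8: I^s with s = 1/3

for row in zip(I, log_model, frac_model, power_model):
    print("%.2f  %7.3f  %7.3f  %7.3f" % row)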


FIGURE 2.4-6. Lateral inhibition effect.

The basic mechanism of lateral inhibition is illustrated in Figure 2.4-6. A neural signal is assumed to be generated by a weighted contribution of many spatially adjacent rods and cones. Some receptors actually exert an inhibitory influence on the neural response. The weighting values are, in effect, the impulse response of the human visual system beyond the retina. The two-dimensional Fourier transform of this impulse response is the postretina transfer function.

When a light pulse is presented to a human viewer, there is a measurable delay in its perception. Also, perception continues beyond the termination of the pulse for a short period of time. This delay and lag effect arising from neural temporal response limitations in the human visual system can be modeled by a linear temporal transfer function.

Figure 2.4-7 shows a model for monochromatic vision based on the results of the preceding discussion. In the model, the output of the wavelength-sensitive receptor is fed to a low-pass type of linear system that represents the optics of the eye. Next follows a general monotonic nonlinearity that represents the nonlinear intensity response of rods or cones. Then the lateral inhibition process is characterized by a linear system with a bandpass response. Temporal filtering effects are modeled by the following linear system. Hall and Hall (28) have investigated this model extensively and have found transfer functions for the various elements that accurately model the total system response. The monochromatic vision model of Figure 2.4-7, with appropriately scaled parameters, seems to be sufficiently detailed for most image processing applications. In fact, the simpler logarithmic model of Figure 2.4-4 is probably adequate for the bulk of applications.


FIGURE 2.4-7. Extended model of monochrome vision.

2.5. COLOR VISION MODEL

There have been many theories postulated to explain human color vision, beginning with the experiments of Newton and Maxwell (29–32). The classical model of human color vision, postulated by Thomas Young in 1802 (31), is the trichromatic model, in which it is assumed that the eye possesses three types of sensors, each sensitive over a different wavelength band. It is interesting to note that there was no direct physiological evidence of the existence of three distinct types of sensors until about 1960 (9,10).

Figure 2.5-1 shows a color vision model proposed by Frei (33). In this model, three receptors with spectral sensitivities s_1(λ), s_2(λ), s_3(λ), which represent the absorption pigments of the retina, produce signals

e_1 = ∫ C(λ) s_1(λ) dλ        (2.5-1a)
e_2 = ∫ C(λ) s_2(λ) dλ        (2.5-1b)
e_3 = ∫ C(λ) s_3(λ) dλ        (2.5-1c)

where C(λ) is the spectral energy distribution of the incident light source. The three signals e_1, e_2, e_3 are then subjected to a logarithmic transfer function and combined to produce the outputs

d_1 = log e_1                                   (2.5-2a)
d_2 = log e_2 – log e_1 = log(e_2/e_1)          (2.5-2b)
d_3 = log e_3 – log e_1 = log(e_3/e_1)          (2.5-2c)


FIGURE 2.5-1. Color vision model.

Finally, the signals d_1, d_2, d_3 pass through linear systems with transfer functions H_1(ω_x, ω_y), H_2(ω_x, ω_y), H_3(ω_x, ω_y) to produce output signals g_1, g_2, g_3 that provide the basis for perception of color by the brain. In the model of Figure 2.5-1, the signals d_2 and d_3 are related to the chromaticity of a colored light, while signal d_1 is proportional to its luminance. This model has been found to predict many color vision phenomena quite accurately, and also to satisfy the basic laws of colorimetry. For example, it is known that if the spectral energy of a colored light changes by a constant multiplicative factor, the hue and saturation of the light, as described quantitatively by its chromaticity coordinates, remain invariant over a wide dynamic range. Examination of Eqs. 2.5-1 and 2.5-2 indicates that the chrominance signals d_2 and d_3 are unchanged in this case, and that the luminance signal d_1 increases in a logarithmic manner. Other, more subtle evaluations of the model are described by Frei (33).

As shown in Figure 2.2-4, some indication of the spectral sensitivities s_i(λ) of the three types of retinal cones has been obtained by spectral absorption measurements of cone pigments. However, direct physiological measurements are difficult to perform accurately. Indirect estimates of cone spectral sensitivities have been obtained from measurements of the color response of color-blind people by Konig and Brodhun (34). Judd (35) has used these data to produce a linear transformation relating the spectral sensitivity functions s_i(λ) to spectral tristimulus values obtained by colorimetric testing. The resulting sensitivity curves, shown in Figure 2.5-2, are unimodal and strictly positive, as expected from physiological considerations (34).

The logarithmic color vision model of Figure 2.5-1 may easily be extended, in analogy with the monochromatic vision model of Figure 2.4-7, by inserting a linear transfer function after each cone receptor to account for the optical response of the eye. Also, a general nonlinearity may be substituted for the logarithmic transfer function. It should be noted that the order of the receptor summation and the transfer function operations can be reversed without affecting the output, because both are linear operations.
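The receptor and opponent stages of the Frei model (Eqs. 2.5-1 and 2.5-2) are simple to simulate numerically. In the sketch below the three spectral sensitivities are crude Gaussian placeholders rather than the measured curves of Figure 2.5-2, and the test light is an equal-energy spectrum; only the structure of the computation follows the model.

import numpy as np

# Sketch of Eqs. 2.5-1 and 2.5-2: cone signals by numerical integration of C(lambda) s_i(lambda),
# then the log/opponent signals d1, d2, d3.  The Gaussian sensitivities and the flat spectrum
# below are assumed placeholders.
lam = np.arange(400.0, 701.0, 1.0)                       # wavelength in nm
def gaussian(center, width):
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

s = [gaussian(580.0, 40.0), gaussian(540.0, 40.0), gaussian(445.0, 25.0)]  # s1, s2, s3
C = np.ones_like(lam)                                     # an equal-energy test light

e = [np.trapz(C * si, lam) for si in s]                   # Eq. 2.5-1: e_i = integral of C s_i
d1 = np.log(e[0])                                         # Eq. 2.5-2a: luminance-related signal
d2 = np.log(e[1] / e[0])                                  # Eq. 2.5-2b: chrominance signal
d3 = np.log(e[2] / e[0])                                  # Eq. 2.5-2c: chrominance signal
print(d1, d2, d3)

# Scaling C by a constant factor changes d1 but leaves d2 and d3 unchanged,
# consistent with the chromaticity invariance noted in the text.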


FIGURE 2.5-2. Spectral sensitivity functions of retinal cones based on Konig’s data.

Figure 2.5-3 shows the extended model for color vision. It is expected that the spatial frequency response of the g_1 neural signal through the color vision model should be similar to the luminance spatial frequency response discussed in Section 2.4. Sine-wave response measurements for colored lights obtained by van der Horst et al. (36), shown in Figure 2.5-4, indicate that the chromatic response is shifted toward low spatial frequencies relative to the luminance response. Lateral inhibition effects should produce a low spatial frequency rolloff below the measured response.

Color perception is relative; the perceived color of a given spectral energy distribution is dependent on the viewing surround and state of adaption of the viewer. A human viewer can adapt remarkably well to the surround or viewing illuminant of a scene and essentially normalize perception to some reference white or overall color balance of the scene. This property is known as chromatic adaption.

FIGURE 2.5-3. Extended model of color vision.


FIGURE 2.5-4. Spatial frequency response measurements of the human visual system.

The simplest visual model for chromatic adaption, proposed by von Kries (37; 16, p. 435), involves the insertion of automatic gain controls between the cones and the first linear system of Figure 2.5-3. These gains

a_i = [ ∫ W(λ) s_i(λ) dλ ]^(–1)        (2.5-3)

for i = 1, 2, 3 are adjusted such that the modified cone response is unity when viewing a reference white with spectral energy distribution W(λ). Von Kries's model is attractive because of its qualitative reasonableness and simplicity, but chromatic testing (16, p. 438) has shown that the model does not completely predict the chromatic adaptation effect. Wallis (38) has suggested that chromatic adaption may, in part, result from a post-retinal neural inhibition mechanism that linearly attenuates slowly varying visual field components. The mechanism could be modeled by the low-spatial-frequency attenuation associated with the post-retinal transfer functions H_Li(ω_x, ω_y) of Figure 2.5-3. Undoubtedly, both retinal and post-retinal mechanisms are responsible for the chromatic adaption effect. Further analysis and testing are required to model the effect adequately.
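A minimal sketch of the von Kries gain computation of Eq. 2.5-3 follows. The cone sensitivities and the equal-energy reference white are assumed placeholders; the only point demonstrated is that the adapted responses to the reference white become unity.

import numpy as np

# Von Kries gains of Eq. 2.5-3: a_i = 1 / (integral of W(lambda) s_i(lambda) dlambda).
# The sensitivity curves and the reference white below are assumed placeholder spectra.
lam = np.arange(400.0, 701.0, 1.0)
def gaussian(center, width):
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

s = [gaussian(580.0, 40.0), gaussian(540.0, 40.0), gaussian(445.0, 25.0)]
W = np.ones_like(lam)                           # equal-energy reference white (an assumption)

a = [1.0 / np.trapz(W * si, lam) for si in s]   # Eq. 2.5-3
# With these gains, the adapted cone responses to the reference white are unity:
print([ai * np.trapz(W * si, lam) for ai, si in zip(a, s)])   # close to [1.0, 1.0, 1.0]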

REFERENCES
1. Webster's New Collegiate Dictionary, G. & C. Merriam Co. (The Riverside Press), Springfield, MA, 1960.
2. H. H. Malitson, “The Solar Energy Spectrum,” Sky and Telescope, 29, 4, March 1965, 162–165.
3. Munsell Book of Color, Munsell Color Co., Baltimore.
4. M. H. Pirenne, Vision and the Eye, 2nd ed., Associated Book Publishers, London, 1967.
5. S. L. Polyak, The Retina, University of Chicago Press, Chicago, 1941.


6. L. H. Davson, The Physiology of the Eye, McGraw-Hill (Blakiston), New York, 1949.
7. T. N. Cornsweet, Visual Perception, Academic Press, New York, 1970.
8. G. Wald, “Human Vision and the Spectrum,” Science, 101, 2635, June 29, 1945, 653–658.
9. P. K. Brown and G. Wald, “Visual Pigment in Single Rods and Cones of the Human Retina,” Science, 144, 3614, April 3, 1964, 45–52.
10. G. Wald, “The Receptors for Human Color Vision,” Science, 145, 3636, September 4, 1964, 1007–1017.
11. S. Hecht, “The Visual Discrimination of Intensity and the Weber–Fechner Law,” J. General Physiology, 7, 1924, 241.
12. W. F. Schreiber, Fundamentals of Electronic Imaging Systems, Springer-Verlag, Berlin, 1986.
13. S. S. Stevens, Handbook of Experimental Psychology, Wiley, New York, 1951.
14. F. Ratliff, Mach Bands: Quantitative Studies on Neural Networks in the Retina, Holden-Day, San Francisco, 1965.
15. G. S. Brindley, “Afterimages,” Scientific American, 209, 4, October 1963, 84–93.
16. G. Wyszecki and W. S. Stiles, Color Science, 2nd ed., Wiley, New York, 1982.
17. J. W. Goodman, Introduction to Fourier Optics, 2nd ed., McGraw-Hill, New York, 1996.
18. F. W. Campbell, “The Human Eye as an Optical Filter,” Proc. IEEE, 56, 6, June 1968, 1009–1014.
19. O. Bryngdahl, “Characteristics of the Visual System: Psychophysical Measurement of the Response to Spatial Sine-Wave Stimuli in the Mesopic Region,” J. Optical Society of America, 54, 9, September 1964, 1152–1160.
20. E. M. Lowry and J. J. DePalma, “Sine Wave Response of the Visual System, I. The Mach Phenomenon,” J. Optical Society of America, 51, 7, July 1961, 740–746.
21. E. M. Lowry and J. J. DePalma, “Sine Wave Response of the Visual System, II. Sine Wave and Square Wave Contrast Sensitivity,” J. Optical Society of America, 52, 3, March 1962, 328–335.
22. M. B. Sachs, J. Nachmias, and J. G. Robson, “Spatial Frequency Channels in Human Vision,” J. Optical Society of America, 61, 9, September 1971, 1176–1186.
23. T. G. Stockham, Jr., “Image Processing in the Context of a Visual Model,” Proc. IEEE, 60, 7, July 1972, 828–842.
24. D. E. Pearson, “A Realistic Model for Visual Communication Systems,” Proc. IEEE, 55, 3, March 1967, 380–389.
25. M. L. Davidson, “Perturbation Approach to Spatial Brightness Interaction in Human Vision,” J. Optical Society of America, 58, 9, September 1968, 1300–1308.
26. J. L. Mannos and D. J. Sakrison, “The Effects of a Visual Fidelity Criterion on the Encoding of Images,” IEEE Trans. Information Theory, IT-20, 4, July 1974, 525–536.
27. F. Ratliff, H. K. Hartline, and W. H. Miller, “Spatial and Temporal Aspects of Retinal Inhibitory Interaction,” J. Optical Society of America, 53, 1, January 1963, 110–120.
28. C. F. Hall and E. L. Hall, “A Nonlinear Model for the Spatial Characteristics of the Human Visual System,” IEEE Trans. Systems, Man and Cybernetics, SMC-7, 3, March 1977, 161–170.
29. J. J. McCann, “Human Color Perception,” in Color: Theory and Imaging Systems, R. A. Enyard, Ed., Society of Photographic Scientists and Engineers, Washington, DC, 1973, 1–23.


30. I. Newton, Optiks, 4th ed., 1730; Dover Publications, New York, 1952.
31. T. Young, Philosophical Trans., 92, 1802, 12–48.
32. J. C. Maxwell, Scientific Papers of James Clerk Maxwell, W. D. Nevern, Ed., Dover Publications, New York, 1965.
33. W. Frei, “A New Model of Color Vision and Some Practical Limitations,” USCEE Report 530, University of Southern California, Image Processing Institute, Los Angeles, March 1974, 128–143.
34. A. Konig and E. Brodhun, “Experimentell Untersuchungen über die Psycho-physische Fundamental in Bezug auf den Gesichtssinn,” Zweite Mittlg. S.B. Preuss Akademie der Wissenschaften, 1889, 641.
35. D. B. Judd, “Standard Response Functions for Protanopic and Deuteranopic Vision,” J. Optical Society of America, 35, 3, March 1945, 199–221.
36. C. J. C. van der Horst, C. M. de Weert, and M. A. Bouman, “Transfer of Spatial Chromaticity Contrast at Threshold in the Human Eye,” J. Optical Society of America, 57, 10, October 1967, 1260–1266.
37. J. von Kries, “Die Gesichtsempfindungen,” Nagel's Handbuch der Physiologie der Menschen, Vol. 3, 1904, 211.
38. R. H. Wallis, “Film Recording of Digital Color Images,” USCEE Report 570, University of Southern California, Image Processing Institute, Los Angeles, June 1975.


3 PHOTOMETRY AND COLORIMETRY

Chapter 2 dealt with human vision from a qualitative viewpoint in an attempt to establish models for monochrome and color vision. These models may be made quantitative by specifying measures of human light perception. Luminance measures are the subject of the science of photometry, while color measures are treated by the science of colorimetry.

3.1. PHOTOMETRY

A source of radiative energy may be characterized by its spectral energy distribution C(λ), which specifies the time rate of energy the source emits per unit wavelength interval. The total power emitted by a radiant source, given by the integral of the spectral energy distribution,

P = ∫_0^∞ C(λ) dλ        (3.1-1)

is called the radiant flux of the source and is normally expressed in watts (W). A body that exists at an elevated temperature radiates electromagnetic energy proportional in amount to its temperature. A blackbody is an idealized type of heat radiator whose radiant flux is the maximum obtainable at any wavelength for a body at a fixed temperature. The spectral energy distribution of a blackbody is given by Planck's law (1):
C(λ) = C_1 / ( λ^5 [ exp{ C_2/λT } – 1 ] )        (3.1-2)


FIGURE 3.1-1. Blackbody radiation functions.

where λ is the radiation wavelength, T is the temperature of the body, and C_1 and C_2 are constants. Figure 3.1-1a is a plot of the spectral energy of a blackbody as a function of temperature and wavelength. In the visible region of the electromagnetic spectrum, the blackbody spectral energy distribution function of Eq. 3.1-2 can be approximated by Wien's radiation law (1):

C(λ) = C_1 / ( λ^5 exp{ C_2/λT } )        (3.1-3)

Wien's radiation function is plotted in Figure 3.1-1b over the visible spectrum. The most basic physical light source, of course, is the sun. Figure 2.1-1a shows a plot of the measured spectral energy distribution of sunlight (2).
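The blackbody relations of Eqs. 3.1-2 and 3.1-3 can be evaluated directly. The sketch below uses the physical radiation constants for C_1 and C_2 (the text leaves the constants unspecified, so treat this choice as an assumption) and a 6000 K temperature to show that the Wien approximation stays close to Planck's law across the visible band.

import numpy as np

# Blackbody spectral energy distribution (Eq. 3.1-2) and the Wien approximation (Eq. 3.1-3).
h = 6.626e-34        # Planck constant, J s   (assumed interpretation of C1, C2)
c = 2.998e8          # speed of light, m/s
k = 1.381e-23        # Boltzmann constant, J/K
C1 = 2.0 * np.pi * h * c ** 2
C2 = h * c / k

T = 6000.0                                   # kelvin, matching the sunlight approximation above
lam = np.linspace(400e-9, 700e-9, 7)         # visible wavelengths in meters
planck = C1 / (lam ** 5 * (np.exp(C2 / (lam * T)) - 1.0))   # Eq. 3.1-2
wien   = C1 / (lam ** 5 * np.exp(C2 / (lam * T)))           # Eq. 3.1-3
print(wien / planck)    # close to 1 over the visible region, as the text states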

FIGURE 3.1-2. CIE standard illumination sources.


FIGURE 3.1-3. Spectral energy distribution of CRT phosphors.

The dashed line in Figure 2.1-1a, approximating the measured data, is a 6000 kelvin (K) blackbody curve. Incandescent lamps are often approximated as blackbody radiators of a given temperature in the range 1500 to 3500 K (3).

The Commission Internationale de l'Eclairage (CIE), which is an international body concerned with standards for light and color, has established several standard sources of light, as illustrated in Figure 3.1-2 (4). Source SA is a tungsten filament lamp. Over the wavelength band 400 to 700 nm, source SB approximates direct sunlight, and source SC approximates light from an overcast sky. A hypothetical source, called Illuminant E, is often employed in colorimetric calculations. Illuminant E is assumed to emit constant radiant energy at all wavelengths.

Cathode ray tube (CRT) phosphors are often utilized as light sources in image processing systems. Figure 3.1-3 describes the spectral energy distributions of common phosphors (5). Monochrome television receivers generally use a P4 phosphor, which provides a relatively bright blue-white display. Color television displays utilize cathode ray tubes with red, green, and blue emitting phosphors arranged in triad dots or strips. The P22 phosphor is typical of the spectral energy distribution of commercial phosphor mixtures. Liquid crystal displays (LCDs) typically project a white light through red, green, and blue vertical strip pixels. Figure 3.1-4 is a plot of typical color filter transmissivities (6).

Photometric measurements seek to describe quantitatively the perceptual brightness of visible electromagnetic energy (7,8). The link between photometric measurements and radiometric measurements (physical intensity measurements) is the photopic luminosity function, as shown in Figure 3.1-5a (9). This curve, which is a CIE standard, specifies the spectral sensitivity of the human visual system to optical radiation as a function of wavelength for a typical person referred to as the standard observer.


FIGURE 3.1-4. Transmissivities of LCD color filters.

In essence, the curve is a standardized version of the measurement of cone sensitivity given in Figure 2.2-2 for photopic vision at relatively high levels of illumination. The standard luminosity function for scotopic vision at relatively low levels of illumination is illustrated in Figure 3.1-5b. Most imaging system designs are based on the photopic luminosity function, commonly called the relative luminous efficiency. The perceptual brightness sensation evoked by a light source with spectral energy distribution C(λ) is specified by its luminous flux, as defined by

F = K_m ∫_0^∞ C(λ) V(λ) dλ        (3.1-4)

where V(λ) represents the relative luminous efficiency and K_m is a scaling constant. The modern unit of luminous flux is the lumen (lm), and the corresponding value for the scaling constant is K_m = 685 lm/W. An infinitesimally narrowband source of 1 W of light at the peak wavelength of 555 nm of the relative luminous efficiency curve therefore results in a luminous flux of 685 lm.
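Equation 3.1-4 is a single weighted integral and is easy to approximate numerically. In the sketch below, V(λ) is a crude Gaussian stand-in for the CIE photopic luminosity curve rather than the standard tabulation, so the numbers are only illustrative; the 685 lm result for a 1 W narrowband source at 555 nm reproduces the statement above.

import numpy as np

# Luminous flux of Eq. 3.1-4: F = Km * integral of C(lambda) V(lambda) dlambda, Km = 685 lm/W.
Km = 685.0                                       # lm/W, from the text
lam = np.arange(380.0, 781.0, 1.0)               # wavelength samples in nm
V = np.exp(-0.5 * ((lam - 555.0) / 40.0) ** 2)   # assumed rough approximation of V(lambda)

# A 1 W narrowband source at 555 nm: all of the power placed in the 1 nm bin at the peak.
C = np.zeros_like(lam)
C[lam == 555.0] = 1.0                            # spectral energy density in W/nm

F = Km * np.sum(C * V)                           # rectangle-rule integral with 1 nm bins
print(F)                                         # approximately 685 lm, as stated in the text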


FIGURE 3.1-5. Relative luminous efficiency functions.

3.2. COLOR MATCHING

The basis of the trichromatic theory of color vision is that it is possible to match an arbitrary color by superimposing appropriate amounts of three primary colors (10–14). In an additive color reproduction system such as color television, the three primaries are individual red, green, and blue light sources that are projected onto a common region of space to reproduce a colored light. In a subtractive color system, which is the basis of most color photography and color printing, a white light sequentially passes through cyan, magenta, and yellow filters to reproduce a colored light.

3.2.1. Additive Color Matching

An additive color-matching experiment is illustrated in Figure 3.2-1. In Figure 3.2-1a, a patch of light (C) of arbitrary spectral energy distribution C(λ), as shown in Figure 3.2-2a, is assumed to be imaged onto the surface of an ideal diffuse reflector (a surface that reflects uniformly over all directions and all wavelengths). A reference white light (W) with an energy distribution, as in Figure 3.2-2b, is imaged onto the surface along with three primary lights (P1), (P2), (P3) whose spectral energy distributions are sketched in Figure 3.2-2c to e. The three primary lights are first overlapped and their intensities are adjusted until the overlapping region of the three primary lights perceptually matches the reference white in terms of brightness, hue, and saturation. The amounts of the three primaries A_1(W), A_2(W), A_3(W) are then recorded in some physical units, such as watts. These are the matching values of the reference white. Next, the intensities of the primaries are adjusted until a match is achieved with the colored light (C), if a match is possible. The procedure to be followed if a match cannot be achieved is considered later.


FIGURE 3.2-1. Color matching.

The intensities of the primaries A_1(C), A_2(C), A_3(C) when a match is obtained are recorded, and normalized matching values T_1(C), T_2(C), T_3(C), called tristimulus values, are computed as

T_1(C) = A_1(C)/A_1(W)    T_2(C) = A_2(C)/A_2(W)    T_3(C) = A_3(C)/A_3(W)        (3.2-1)


FIGURE 3.2-2. Spectral energy distributions.

If a match cannot be achieved by the procedure illustrated in Figure 3.2-1a, it is often possible to perform the color matching outlined in Figure 3.2-1b. One of the primaries, say (P3), is superimposed with the light (C), and the intensities of all three primaries are adjusted until a match is achieved between the overlapping region of primaries (P1) and (P2) with the overlapping region of (P3) and (C). If such a match is obtained, the tristimulus values are
T_1(C) = A_1(C)/A_1(W)    T_2(C) = A_2(C)/A_2(W)    T_3(C) = –A_3(C)/A_3(W)        (3.2-2)

In this case, the tristimulus value T_3(C) is negative. If a match cannot be achieved with this geometry, a match is attempted between (P1) plus (P3) and (P2) plus (C). If a match is achieved by this configuration, tristimulus value T_2(C) will be negative. If this configuration fails, a match is attempted between (P2) plus (P3) and (P1) plus (C). A correct match is denoted with a negative value for T_1(C).


Finally, in the rare instance in which a match cannot be achieved by either of the configurations of Figure 3.2-1a or b, two of the primaries are superimposed with (C) and an attempt is made to match the overlapped region with the remaining primary. In the case illustrated in Figure 3.2-1c, if a match is achieved, the tristimulus values become
T_1(C) = A_1(C)/A_1(W)    T_2(C) = –A_2(C)/A_2(W)    T_3(C) = –A_3(C)/A_3(W)        (3.2-3)

If a match is not obtained by this configuration, one of the other two possibilities will yield a match. The process described above is a direct method for specifying a color quantitatively. It has two drawbacks: the method is cumbersome and it depends on the perceptual variations of a single observer. In Section 3.3 we consider standardized quantitative color measurement in detail.

3.2.2. Subtractive Color Matching

A subtractive color-matching experiment is shown in Figure 3.2-3. An illumination source with spectral energy distribution E(λ) passes sequentially through three dye filters that are nominally cyan, magenta, and yellow. The spectral absorption of the dye filters is a function of the dye concentration. It should be noted that the spectral transmissivities of practical dyes change shape in a nonlinear manner with dye concentration. In the first step of the subtractive color-matching process, the dye concentrations of the three spectral filters are varied until a perceptual match is obtained with a reference white (W). The dye concentrations are the matching values of the color match A_1(W), A_2(W), A_3(W). Next, the three dye concentrations are varied until a match is obtained with a desired color (C). These matching values A_1(C), A_2(C), A_3(C) are then used to compute the tristimulus values T_1(C), T_2(C), T_3(C), as in Eq. 3.2-1.

FIGURE 3.2-3. Subtractive color matching.


It should be apparent that there is no fundamental theoretical difference between color matching by an additive or a subtractive system. In a subtractive system, the yellow dye acts as a variable absorber of blue light, and with ideal dyes, the yellow dye effectively forms a blue primary light. In a similar manner, the magenta filter ideally forms the green primary, and the cyan filter ideally forms the red primary. Subtractive color systems ordinarily utilize cyan, magenta, and yellow dye spectral filters rather than red, green, and blue dye filters because the cyan, magenta, and yellow filters are notch filters which permit a greater transmission of light energy than do narrowband red, green, and blue bandpass filters. In color printing, a fourth filter layer of variable gray level density is often introduced to achieve a higher contrast in reproduction because common dyes do not possess a wide density range.

3.2.3. Axioms of Color Matching

The color-matching experiments described for additive and subtractive color matching have been performed quite accurately by a number of researchers. It has been found that perfect color matches sometimes cannot be obtained at either very high or very low levels of illumination. Also, the color matching results do depend to some extent on the spectral composition of the surrounding light. Nevertheless, the simple color matching experiments have been found to hold over a wide range of conditions.

Grassman (15) has developed a set of eight axioms that define trichromatic color matching and that serve as a basis for quantitative color measurements. In the following presentation of these axioms, the symbol ◊ indicates a color match; the symbol ⊕ indicates an additive color mixture; the symbol • indicates units of a color. These axioms are:

1. Any color can be matched by a mixture of no more than three colored lights.
2. A color match at one radiance level holds over a wide range of levels.
3. Components of a mixture of colored lights cannot be resolved by the human eye.
4. The luminance of a color mixture is equal to the sum of the luminance of its components.
5. Law of addition. If color (M) matches color (N) and color (P) matches color (Q), then color (M) mixed with color (P) matches color (N) mixed with color (Q):
(M) ◊ (N) ∩ (P) ◊ (Q) ⇒ [(M) ⊕ (P)] ◊ [(N) ⊕ (Q)]        (3.2-4)

6. Law of subtraction. If the mixture of (M) plus (P) matches the mixture of (N) plus (Q) and if (P) matches (Q), then (M) matches (N):
[(M) ⊕ (P)] ◊ [(N) ⊕ (Q)] ∩ [(P) ◊ (Q)] ⇒ (M) ◊ (N)        (3.2-5)

7. Transitive law. If (M) matches (N) and if (N) matches (P), then (M) matches (P):


[(M) ◊ (N)] ∩ [(N) ◊ (P)] ⇒ (M) ◊ (P)        (3.2-6)

8. Color matching. (a) c units of (C) matches the mixture of m units of (M) plus n units of (N) plus p units of (P):

[c • (C)] ◊ [m • (M)] ⊕ [n • (N)] ⊕ [p • (P)]        (3.2-7)

or (b) a mixture of c units of (C) plus m units of (M) matches the mixture of n units of (N) plus p units of (P):

[c • (C)] ⊕ [m • (M)] ◊ [n • (N)] ⊕ [p • (P)]        (3.2-8)

or (c) a mixture of c units of (C) plus m units of (M) plus n units of (N) matches p units of (P):

[c • (C)] ⊕ [m • (M)] ⊕ [n • (N)] ◊ [p • (P)]        (3.2-9)

With Grassman's laws now specified, consideration is given to the development of a quantitative theory for color matching.

3.3. COLORIMETRY CONCEPTS

Colorimetry is the science of quantitatively measuring color. In the trichromatic color system, color measurements are in terms of the tristimulus values of a color or a mathematical function of the tristimulus values. Referring to Section 3.2.3, the axioms of color matching state that a color (C) can be matched by three primary colors (P1), (P2), (P3). The qualitative match is expressed as

(C) ◊ [A_1(C) • (P1)] ⊕ [A_2(C) • (P2)] ⊕ [A_3(C) • (P3)]        (3.3-1)

where A_1(C), A_2(C), A_3(C) are the matching values of the color (C). Because the intensities of incoherent light sources add linearly, the spectral energy distribution of a color mixture is equal to the sum of the spectral energy distributions of its components. As a consequence of this fact and Eq. 3.3-1, the spectral energy distribution C(λ) can be replaced by its color-matching equivalent according to the relation

C(λ) ◊ A_1(C)P_1(λ) + A_2(C)P_2(λ) + A_3(C)P_3(λ) = Σ_{j=1}^{3} A_j(C)P_j(λ)        (3.3-2)


Equation 3.3-2 simply means that the spectral energy distributions on both sides of the equivalence operator ◊ evoke the same color sensation. Color matching is usually specified in terms of tristimulus values, which are normalized matching values, as defined by
T_j(C) = A_j(C)/A_j(W)        (3.3-3)

where A_j(W) represents the matching value of the reference white. By this substitution, Eq. 3.3-2 assumes the form

C(λ) ◊ Σ_{j=1}^{3} T_j(C) A_j(W) P_j(λ)        (3.3-4)

From Grassman's fourth law, the luminance of a color mixture Y(C) is equal to the sum of the luminances of its primary components. Hence

Y(C) = ∫ C(λ) V(λ) dλ = Σ_{j=1}^{3} ∫ A_j(C) P_j(λ) V(λ) dλ        (3.3-5a)

or

Y(C) = Σ_{j=1}^{3} ∫ T_j(C) A_j(W) P_j(λ) V(λ) dλ        (3.3-5b)

where V(λ) is the relative luminous efficiency and P_j(λ) represents the spectral energy distribution of a primary. Equations 3.3-4 and 3.3-5 represent the quantitative foundation for colorimetry.

3.3.1. Color Vision Model Verification

Before proceeding further with quantitative descriptions of the color-matching process, it is instructive to determine whether the matching experiments and the axioms of color matching are satisfied by the color vision model presented in Section 2.5. In that model, the responses of the three types of receptors with sensitivities s_1(λ), s_2(λ), s_3(λ) are modeled as

e_1(C) = ∫ C(λ) s_1(λ) dλ        (3.3-6a)
e_2(C) = ∫ C(λ) s_2(λ) dλ        (3.3-6b)
e_3(C) = ∫ C(λ) s_3(λ) dλ        (3.3-6c)


If a viewer observes the primary mixture instead of (C), then from Eq. 3.3-4, substitution for C(λ) should result in the same cone signals e_i(C). Thus

e_1(C) = Σ_{j=1}^{3} T_j(C) A_j(W) ∫ P_j(λ) s_1(λ) dλ        (3.3-7a)

e_2(C) = Σ_{j=1}^{3} T_j(C) A_j(W) ∫ P_j(λ) s_2(λ) dλ        (3.3-7b)

e_3(C) = Σ_{j=1}^{3} T_j(C) A_j(W) ∫ P_j(λ) s_3(λ) dλ        (3.3-7c)

Equation 3.3-7 can be written more compactly in matrix form by defining

k_ij = ∫ P_j(λ) s_i(λ) dλ        (3.3-8)

Then

    | e_1(C) |   | k_11  k_12  k_13 | | A_1(W)    0        0     | | T_1(C) |
    | e_2(C) | = | k_21  k_22  k_23 | |   0     A_2(W)     0     | | T_2(C) |        (3.3-9)
    | e_3(C) |   | k_31  k_32  k_33 | |   0       0      A_3(W)  | | T_3(C) |

or in yet more abbreviated form,

e(C) = KAt(C)        (3.3-10)

where the vectors and matrices of Eq. 3.3-10 are defined in correspondence with Eqs. 3.3-7 to 3.3-9. The vector space notation used in this section is consistent with the notation formally introduced in Appendix 1. Matrices are denoted as boldface uppercase symbols, and vectors are denoted as boldface lowercase symbols. It should be noted that for a given set of primaries, the matrix K is constant valued, and for a given reference white, the white matching values of the matrix A are constant. Hence, if a set of cone signals e_i(C) were known for a color (C), the corresponding tristimulus values T_j(C) could in theory be obtained from

t(C) = [KA]^(–1) e(C)        (3.3-11)


provided that the matrix inverse of [KA] exists. Thus, it has been shown that with proper selection of the tristimulus signals T_j(C), any color can be matched in the sense that the cone signals will be the same for the primary mixture as for the actual color (C). Unfortunately, the cone signals e_i(C) are not easily measured physical quantities, and therefore, Eq. 3.3-11 cannot be used directly to compute the tristimulus values of a color. However, this has not been the intention of the derivation. Rather, Eq. 3.3-11 has been developed to show the consistency of the color-matching experiment with the color vision model.

3.3.2. Tristimulus Value Calculation

It is possible indirectly to compute the tristimulus values of an arbitrary color for a particular set of primaries if the tristimulus values of the spectral colors (narrowband light) are known for that set of primaries. Figure 3.3-1 is a typical sketch of the tristimulus values required to match a unit energy spectral color with three arbitrary primaries. These tristimulus values, which are fundamental to the definition of a primary system, are denoted as T_s1(λ), T_s2(λ), T_s3(λ), where λ is a particular wavelength in the visible region. A unit energy spectral light (C_ψ) at wavelength ψ with energy distribution δ(λ – ψ) is matched according to the equation

e_i(C_ψ) = ∫ δ(λ – ψ) s_i(λ) dλ = Σ_{j=1}^{3} ∫ A_j(W) P_j(λ) T_sj(ψ) s_i(λ) dλ        (3.3-12)

Now, consider an arbitrary color (C) with spectral energy distribution C(λ). At wavelength ψ, C(ψ) units of the color are matched by C(ψ)T_s1(ψ), C(ψ)T_s2(ψ), C(ψ)T_s3(ψ) tristimulus units of the primaries as governed by

∫ C(ψ) δ(λ – ψ) s_i(λ) dλ = Σ_{j=1}^{3} ∫ A_j(W) P_j(λ) C(ψ) T_sj(ψ) s_i(λ) dλ        (3.3-13)

Integrating each side of Eq. 3.3-13 over ψ and invoking the sifting integral gives the cone signal for the color (C). Thus

e_i(C) = ∫∫ C(ψ) δ(λ – ψ) s_i(λ) dλ dψ = Σ_{j=1}^{3} ∫∫ A_j(W) P_j(λ) C(ψ) T_sj(ψ) s_i(λ) dψ dλ        (3.3-14)

By correspondence with Eq. 3.3-7, the tristimulus values of (C) must be equivalent to the second integral on the right of Eq. 3.3-14. Hence
T_j(C) = ∫ C(ψ) T_sj(ψ) dψ        (3.3-15)


FIGURE 3.3-1. Tristimulus values of typical red, green, and blue primaries required to match unit energy throughout the spectrum.

From Figure 3.3-1 it is seen that the tristimulus values obtained from solution of Eq. 3.3-11 may be negative at some wavelengths. Because the tristimulus values represent units of energy, the physical interpretation of this mathematical result is that a color match can be obtained by adding the primary with negative tristimulus value to the original color and then matching this resultant color with the remaining primary. In this sense, any color can be matched by any set of primaries. However, from a practical viewpoint, negative tristimulus values are not physically realizable, and hence there are certain colors that cannot be matched in a practical color display (e.g., a color television receiver) with fixed primaries. Fortunately, it is possible to choose primaries so that most commonly occurring natural colors can be matched.

The three tristimulus values T_1, T_2, T_3 can be considered to form the three axes of a color space as illustrated in Figure 3.3-2. A particular color may be described as a vector in the color space, but it must be remembered that it is the coordinates of the vector (tristimulus values), rather than the vector length, that specify the color. In Figure 3.3-2, a triangle, called a Maxwell triangle, has been drawn between the three primaries. The intersection point of a color vector with the triangle gives an indication of the hue and saturation of the color in terms of the distances of the point from the vertices of the triangle.

FIGURE 3.3-2. Color space for typical red, green, and blue primaries.


FIGURE 3.3-3. Chromaticity diagram for typical red, green, and blue primaries.

Often the luminance of a color is not of interest in a color match. In such situations, the hue and saturation of color (C) can be described in terms of chromaticity coordinates, which are normalized tristimulus values, as defined by
t_1 ≡ T_1 / (T_1 + T_2 + T_3)        (3.3-16a)

t_2 ≡ T_2 / (T_1 + T_2 + T_3)        (3.3-16b)

t_3 ≡ T_3 / (T_1 + T_2 + T_3)        (3.3-16c)

Clearly, t_3 = 1 – t_1 – t_2, and hence only two coordinates are necessary to describe a color match. Figure 3.3-3 is a plot of the chromaticity coordinates of the spectral colors for typical primaries. Only those colors within the triangle defined by the three primaries are realizable by physical primary light sources.

3.3.3. Luminance Calculation

The tristimulus values of a color specify the amounts of the three primaries required to match a color where the units are measured relative to a match of a reference white. Often, it is necessary to determine the absolute rather than the relative amount of light from each primary needed to reproduce a color match. This information is found from luminance measurements or calculations of a color match.


From Eq. 3.3-5 it is noted that the luminance of a matched color Y(C) is equal to the sum of the luminances of its primary components according to the relation

Y(C) = Σ_{j=1}^{3} T_j(C) ∫ A_j(W) P_j(λ) V(λ) dλ        (3.3-17)

The integrals of Eq. 3.3-17,
Y(P_j) = ∫ A_j(W) P_j(λ) V(λ) dλ        (3.3-18)

are called luminosity coefficients of the primaries. These coefficients represent the luminances of unit amounts of the three primaries for a match to a specific reference white. Hence the luminance of a matched color can be written as
Y(C) = T_1(C)Y(P_1) + T_2(C)Y(P_2) + T_3(C)Y(P_3)        (3.3-19)

Multiplying the right and left sides of Eq. 3.3-19 by the right and left sides, respectively, of the definition of the chromaticity coordinate
t_1(C) = T_1(C) / [ T_1(C) + T_2(C) + T_3(C) ]        (3.3-20)

and rearranging gives

T_1(C) = t_1(C)Y(C) / [ t_1(C)Y(P_1) + t_2(C)Y(P_2) + t_3(C)Y(P_3) ]        (3.3-21a)

Similarly,

T_2(C) = t_2(C)Y(C) / [ t_1(C)Y(P_1) + t_2(C)Y(P_2) + t_3(C)Y(P_3) ]        (3.3-21b)

T_3(C) = t_3(C)Y(C) / [ t_1(C)Y(P_1) + t_2(C)Y(P_2) + t_3(C)Y(P_3) ]        (3.3-21c)

Thus the tristimulus values of a color can be expressed in terms of the luminance and chromaticity coordinates of the color.
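The conversions of Eqs. 3.3-16 and 3.3-19 to 3.3-21 amount to a few arithmetic steps, illustrated by the round-trip sketch below. The tristimulus values and luminosity coefficients are arbitrary illustrative numbers, not values for any particular primary set.

import numpy as np

# Round trip between tristimulus values and (chromaticity, luminance) using
# Eqs. 3.3-16, 3.3-19, and 3.3-21.  All numeric values are illustrative assumptions.
T = np.array([0.6, 0.3, 0.1])            # T1, T2, T3 of some color
Yp = np.array([0.30, 0.59, 0.11])        # luminosity coefficients Y(P1), Y(P2), Y(P3)

t = T / T.sum()                          # chromaticity coordinates, Eq. 3.3-16
Y = np.dot(T, Yp)                        # luminance of the match, Eq. 3.3-19

# Recover the tristimulus values from t and Y, Eq. 3.3-21.
T_back = t * Y / np.dot(t, Yp)
print(t, Y, T_back)                      # T_back equals T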


3.4. TRISTIMULUS VALUE TRANSFORMATION

From Eq. 3.3-7 it is clear that there is no unique set of primaries for matching colors. If the tristimulus values of a color are known for one set of primaries, a simple coordinate conversion can be performed to determine the tristimulus values for another set of primaries (16). Let (P1), (P2), (P3) be the original set of primaries with spectral energy distributions P_1(λ), P_2(λ), P_3(λ), with the units of a match determined by a white reference (W) with matching values A_1(W), A_2(W), A_3(W). Now, consider a new set of primaries (P̃1), (P̃2), (P̃3) with spectral energy distributions P̃_1(λ), P̃_2(λ), P̃_3(λ). Matches are made to a reference white (W̃), which may be different from the reference white of the original set of primaries, by matching values Ã_1(W̃), Ã_2(W̃), Ã_3(W̃). Referring to Eq. 3.3-10, an arbitrary color (C) can be matched by the tristimulus values T_1(C), T_2(C), T_3(C) with the original set of primaries or by the tristimulus values T̃_1(C), T̃_2(C), T̃_3(C) with the new set of primaries, according to the matching matrix relations
e(C) = KA(W)t(C) = K̃Ã(W̃)t̃(C)        (3.4-1)

The tristimulus value units of the new set of primaries, with respect to the original set of primaries, must now be found. This can be accomplished by determining the color signals of the reference white for the second set of primaries in terms of both sets of primaries. The color signal equations for the reference white (W̃) become
e(W̃) = KA(W)t(W̃) = K̃Ã(W̃)t̃(W̃)        (3.4-2)

where T̃_1(W̃) = T̃_2(W̃) = T̃_3(W̃) = 1. Finally, it is necessary to relate the two sets of primaries by determining the color signals of each of the new primary colors (P̃1), (P̃2), (P̃3) in terms of both primary systems. These color signal equations are

e(P̃1) = KA(W)t(P̃1) = K̃Ã(W̃)t̃(P̃1)        (3.4-3a)

e(P̃2) = KA(W)t(P̃2) = K̃Ã(W̃)t̃(P̃2)        (3.4-3b)

e(P̃3) = KA(W)t(P̃3) = K̃Ã(W̃)t̃(P̃3)        (3.4-3c)

where

t̃(P̃1) = [ 1/Ã_1(W̃)   0   0 ]ᵀ    t̃(P̃2) = [ 0   1/Ã_2(W̃)   0 ]ᵀ    t̃(P̃3) = [ 0   0   1/Ã_3(W̃) ]ᵀ


Matrix equations 3.4-1 to 3.4-3 may be solved jointly to obtain a relationship between the tristimulus values of the original and new primary system:

           | T_1(C)   T_1(P̃2)   T_1(P̃3) |
           | T_2(C)   T_2(P̃2)   T_2(P̃3) |
           | T_3(C)   T_3(P̃2)   T_3(P̃3) |
T̃_1(C) = -------------------------------------        (3.4-4a)
           | T_1(W̃)   T_1(P̃2)   T_1(P̃3) |
           | T_2(W̃)   T_2(P̃2)   T_2(P̃3) |
           | T_3(W̃)   T_3(P̃2)   T_3(P̃3) |

           | T_1(P̃1)   T_1(C)   T_1(P̃3) |
           | T_2(P̃1)   T_2(C)   T_2(P̃3) |
           | T_3(P̃1)   T_3(C)   T_3(P̃3) |
T̃_2(C) = -------------------------------------        (3.4-4b)
           | T_1(P̃1)   T_1(W̃)   T_1(P̃3) |
           | T_2(P̃1)   T_2(W̃)   T_2(P̃3) |
           | T_3(P̃1)   T_3(W̃)   T_3(P̃3) |

           | T_1(P̃1)   T_1(P̃2)   T_1(C) |
           | T_2(P̃1)   T_2(P̃2)   T_2(C) |
           | T_3(P̃1)   T_3(P̃2)   T_3(C) |
T̃_3(C) = -------------------------------------        (3.4-4c)
           | T_1(P̃1)   T_1(P̃2)   T_1(W̃) |
           | T_2(P̃1)   T_2(P̃2)   T_2(W̃) |
           | T_3(P̃1)   T_3(P̃2)   T_3(W̃) |
where the vertical bars denote the determinant of the enclosed matrix. Equations 3.4-4 then may be written in terms of the chromaticity coordinates $t_i(\tilde{P}_1)$, $t_i(\tilde{P}_2)$, $t_i(\tilde{P}_3)$ of the new set of primaries referenced to the original primary coordinate system. With this revision,
\[ \begin{bmatrix} \tilde{T}_1(C) \\ \tilde{T}_2(C) \\ \tilde{T}_3(C) \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} T_1(C) \\ T_2(C) \\ T_3(C) \end{bmatrix} \tag{3.4-5} \]


where
\[ m_{ij} = \frac{\Delta_{ij}}{\Delta_i} \]

and
\[ \Delta_1 = T_1(\tilde{W})\,\Delta_{11} + T_2(\tilde{W})\,\Delta_{12} + T_3(\tilde{W})\,\Delta_{13} \]
\[ \Delta_2 = T_1(\tilde{W})\,\Delta_{21} + T_2(\tilde{W})\,\Delta_{22} + T_3(\tilde{W})\,\Delta_{23} \]
\[ \Delta_3 = T_1(\tilde{W})\,\Delta_{31} + T_2(\tilde{W})\,\Delta_{32} + T_3(\tilde{W})\,\Delta_{33} \]
\[ \Delta_{11} = t_2(\tilde{P}_2)\,t_3(\tilde{P}_3) - t_3(\tilde{P}_2)\,t_2(\tilde{P}_3) \qquad \Delta_{12} = t_3(\tilde{P}_2)\,t_1(\tilde{P}_3) - t_1(\tilde{P}_2)\,t_3(\tilde{P}_3) \qquad \Delta_{13} = t_1(\tilde{P}_2)\,t_2(\tilde{P}_3) - t_2(\tilde{P}_2)\,t_1(\tilde{P}_3) \]
\[ \Delta_{21} = t_3(\tilde{P}_1)\,t_2(\tilde{P}_3) - t_2(\tilde{P}_1)\,t_3(\tilde{P}_3) \qquad \Delta_{22} = t_1(\tilde{P}_1)\,t_3(\tilde{P}_3) - t_3(\tilde{P}_1)\,t_1(\tilde{P}_3) \qquad \Delta_{23} = t_2(\tilde{P}_1)\,t_1(\tilde{P}_3) - t_1(\tilde{P}_1)\,t_2(\tilde{P}_3) \]
\[ \Delta_{31} = t_2(\tilde{P}_1)\,t_3(\tilde{P}_2) - t_3(\tilde{P}_1)\,t_2(\tilde{P}_2) \qquad \Delta_{32} = t_3(\tilde{P}_1)\,t_1(\tilde{P}_2) - t_1(\tilde{P}_1)\,t_3(\tilde{P}_2) \qquad \Delta_{33} = t_1(\tilde{P}_1)\,t_2(\tilde{P}_2) - t_2(\tilde{P}_1)\,t_1(\tilde{P}_2) \]

Thus, if the tristimulus values are known for a given set of primaries, conversion to another set of primaries merely entails a simple linear transformation of coordinates.
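As an illustration of this linear transformation, the following sketch (Python with NumPy; not part of the original text) builds the conversion matrix of Eq. 3.4-5 in an algebraically equivalent way: the array of cofactor terms Δij is the adjugate of the matrix of new-primary chromaticities, so m can be obtained by inverting that matrix and scaling its rows so that the new reference white maps to unit tristimulus values. The numerical primaries, white point, and test color are hypothetical.

import numpy as np

def primary_conversion_matrix(t_new_primaries, T_new_white):
    # t_new_primaries: 3x3 array; column j holds the chromaticity coordinates
    # (t1, t2, t3) of new primary P~j expressed in the original primary system.
    # T_new_white: tristimulus values of the new reference white, also in the
    # original system.
    A = np.asarray(t_new_primaries, dtype=float)
    A_inv = np.linalg.inv(A)
    # Row scaling so that the new reference white maps to (1, 1, 1),
    # equivalent to dividing each Delta_ij by Delta_i in Eq. 3.4-5.
    scale = A_inv @ np.asarray(T_new_white, dtype=float)
    return A_inv / scale[:, np.newaxis]

# Hypothetical new primaries and white point (illustrative numbers only).
t_new = np.array([[0.64, 0.30, 0.15],
                  [0.33, 0.60, 0.06],
                  [0.03, 0.10, 0.79]])
T_white = np.array([0.9505, 1.0000, 1.0891])
m = primary_conversion_matrix(t_new, T_white)
assert np.allclose(m @ T_white, 1.0)     # new white has unit tristimulus values
T_old = np.array([0.4, 0.5, 0.2])        # a color in the original system
T_new_color = m @ T_old                  # Eq. 3.4-5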

3.5. COLOR SPACES

It has been shown that a color (C) can be matched by its tristimulus values $T_1(C)$, $T_2(C)$, $T_3(C)$ for a given set of primaries. Alternatively, the color may be specified by its chromaticity values $t_1(C)$, $t_2(C)$ and its luminance Y(C). Appendix 2 presents formulas for color coordinate conversion between tristimulus values and chromaticity coordinates for various representational combinations. A third approach in specifying a color is to represent the color by a linear or nonlinear invertible function of its tristimulus or chromaticity values. In this section we describe several standard and nonstandard color spaces for the representation of color images. They are categorized as colorimetric, subtractive, video, or nonstandard. Figure 3.5-1 illustrates the relationship between these color spaces. The figure also lists several example color spaces.


FIGURE 3.5-1. Relationship of color spaces. (The figure groups color spaces as colorimetric linear, colorimetric nonlinear, subtractive CMY/CMYK, video gamma RGB, video gamma luma/chroma YCC, and nonstandard, linked by linear and nonlinear point and intercomponent transformations.)

Natural color images, as opposed to computer-generated images, usually originate from a color scanner or a color video camera. These devices incorporate three sensors that are spectrally sensitive to the red, green, and blue portions of the light spectrum. The color sensors typically generate red, green, and blue color signals that are linearly proportional to the amount of red, green, and blue light detected by each sensor. These signals are linearly proportional to the tristimulus values of a color at each pixel. As indicated in Figure 3.5-1, linear RGB images are the basis for the generation of the various color space image representations.

3.5.1. Colorimetric Color Spaces

The class of colorimetric color spaces includes all linear RGB images and the standard colorimetric images derived from them by linear and nonlinear intercomponent transformations.


FIGURE 3.5-2. Tristimulus values of CIE spectral primaries required to match unit energy throughout the spectrum. Red = 700 nm, green = 546.1 nm, and blue = 435.8 nm.

RCGCBC Spectral Primary Color Coordinate System. In 1931, the CIE developed a standard primary reference system with three monochromatic primaries at wavelengths: red = 700 nm; green = 546.1 nm; blue = 435.8 nm (11). The units of the tristimulus values are such that the tristimulus values RC, GC, BC are equal when matching an equal-energy white, called Illuminant E, throughout the visible spectrum. The primary system is defined by tristimulus curves of the spectral colors, as shown in Figure 3.5-2. These curves have been obtained indirectly by experimental color-matching experiments performed by a number of observers. The collective color-matching response of these observers has been called the CIE Standard Observer. Figure 3.5-3 is a chromaticity diagram for the CIE spectral coordinate system.

FIGURE 3.5-3. Chromaticity diagram for CIE spectral primary system.


RNGNBN NTSC Receiver Primary Color Coordinate System. Commercial television receivers employ a cathode ray tube with three phosphors that glow in the red, green, and blue regions of the visible spectrum. Although the phosphors of commercial television receivers differ from manufacturer to manufacturer, it is common practice to reference them to the National Television Systems Committee (NTSC) receiver phosphor standard (14). The standard observer data for the CIE spectral primary system is related to the NTSC primary system by a pair of linear coordinate conversions. Figure 3.5-4 is a chromaticity diagram for the NTSC primary system. In this system, the units of the tristimulus values are normalized so that the tristimulus values are equal when matching the Illuminant C white reference. The NTSC phosphors are not pure monochromatic sources of radiation, and hence the gamut of colors producible by the NTSC phosphors is smaller than that available from the spectral primaries. This fact is clearly illustrated by Figure 3.5-3, in which the gamut of NTSC reproducible colors is plotted in the spectral primary chromaticity diagram (11). In modern practice, the NTSC chromaticities are combined with Illuminant D65. REGEBE EBU Receiver Primary Color Coordinate System. The European Broadcast Union (EBU) has established a receiver primary system whose chromaticities are close in value to the CIE chromaticity coordinates, and the reference white is Illuminant C (17). The EBU chromaticities are also combined with the D65 illuminant. RRGRBR CCIR Receiver Primary Color Coordinate Systems. In 1990, the International Telecommunications Union (ITU) issued its Recommendation 601, which

FIGURE 3.5-4. Chromaticity diagram for NTSC receiver phosphor primary system.


specified the receiver primaries for standard resolution digital television (18). Also, in 1990 the ITU published its Recommendation 709 for digital high-definition television systems (19). Both standards are popularly referenced as CCIR Rec. 601 and CCIR Rec. 709, abbreviations of the former name of the standards committee, Comité Consultatif International des Radiocommunications.

RSGSBS SMPTE Receiver Primary Color Coordinate System. The Society of Motion Picture and Television Engineers (SMPTE) has established a standard receiver primary color coordinate system with primaries that match modern receiver phosphors better than did the older NTSC primary system (20). In this coordinate system, the reference white is Illuminant D65.

XYZ Color Coordinate System. In the CIE spectral primary system, the tristimulus values required to achieve a color match are sometimes negative. The CIE has developed a standard artificial primary coordinate system in which all tristimulus values required to match colors are positive (4). These artificial primaries are shown in the CIE primary chromaticity diagram of Figure 3.5-3 (11). The XYZ system primaries have been chosen so that the Y tristimulus value is equivalent to the luminance of the color to be matched. Figure 3.5-5 is the chromaticity diagram for the CIE XYZ primary system referenced to equal-energy white (4). The linear transformations between RCGCBC and XYZ are given by

FIGURE 3.5-5. Chromaticity diagram for CIE XYZ primary system.


\[ \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.49018626 & 0.30987954 & 0.19993420 \\ 0.17701522 & 0.81232418 & 0.01066060 \\ 0.00000000 & 0.01007720 & 0.98992280 \end{bmatrix} \begin{bmatrix} R_C \\ G_C \\ B_C \end{bmatrix} \tag{3.5-1a} \]
\[ \begin{bmatrix} R_C \\ G_C \\ B_C \end{bmatrix} = \begin{bmatrix} 2.36353918 & -0.89582361 & -0.46771557 \\ -0.51511248 & 1.42643694 & 0.08867553 \\ 0.00524373 & -0.01452082 & 1.00927709 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \tag{3.5-1b} \]

The color conversion matrices of Eq. 3.5-1 and those color conversion matrices defined later are quoted to eight decimal places (21,22). In many instances, this quotation is to a greater number of places than the original specification. The number of places has been increased to reduce computational errors when concatenating transformations between color representations. The color conversion matrix between XYZ and any other linear RGB color space can be computed by the following algorithm. 1. Compute the colorimetric weighting coefficients a(1), a(2), a(3) from
\[ \begin{bmatrix} a(1) \\ a(2) \\ a(3) \end{bmatrix} = \begin{bmatrix} x_R & x_G & x_B \\ y_R & y_G & y_B \\ z_R & z_G & z_B \end{bmatrix}^{-1} \begin{bmatrix} x_W / y_W \\ 1 \\ z_W / y_W \end{bmatrix} \tag{3.5-2a} \]

where xk, yk, zk are the chromaticity coordinates of the RGB primary set. 2. Compute the RGB-to-XYZ conversion matrix.
\[ \begin{bmatrix} M(1,1) & M(1,2) & M(1,3) \\ M(2,1) & M(2,2) & M(2,3) \\ M(3,1) & M(3,2) & M(3,3) \end{bmatrix} = \begin{bmatrix} x_R & x_G & x_B \\ y_R & y_G & y_B \\ z_R & z_G & z_B \end{bmatrix} \begin{bmatrix} a(1) & 0 & 0 \\ 0 & a(2) & 0 \\ 0 & 0 & a(3) \end{bmatrix} \tag{3.5-2b} \]

The XYZ-to-RGB conversion matrix is, of course, the matrix inverse of M . Table 3.5-1 lists the XYZ tristimulus values of several standard illuminants. The XYZ chromaticity coordinates of the standard linear RGB color systems are presented in Table 3.5-2. From Eqs. 3.5-1 and 3.5-2 it is possible to derive a matrix transformation between RCGCBC and any linear colorimetric RGB color space. The book CD contains a file that lists the transformation matrices (22) between the standard RGB color coordinate systems and XYZ and UVW, defined below.
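A minimal sketch of the two-step algorithm of Eq. 3.5-2, in Python with NumPy, is given below; it is illustrative and not part of the text. It assumes the white reference is supplied as XYZ tristimulus values with Y = 1, which is equivalent to the vector [xW/yW, 1, zW/yW] of Eq. 3.5-2a; the CCIR Rec. 709 primaries and Illuminant D65 values are taken from Tables 3.5-2 and 3.5-1.

import numpy as np

def rgb_to_xyz_matrix(prim_xyz_chroma, white_xyz):
    # prim_xyz_chroma: 3x3 array, columns are the (x, y, z) chromaticities
    # of the R, G, B primaries.  white_xyz: XYZ of the reference white (Y = 1).
    C = np.asarray(prim_xyz_chroma, dtype=float)
    a = np.linalg.solve(C, np.asarray(white_xyz, dtype=float))  # Eq. 3.5-2a
    return C * a                           # Eq. 3.5-2b: column k scaled by a(k)

# CCIR Rec. 709 primaries (Table 3.5-2) and Illuminant D65 (Table 3.5-1).
C709 = np.array([[0.640, 0.300, 0.150],
                 [0.330, 0.600, 0.060],
                 [0.030, 0.100, 0.790]])
white_D65 = np.array([0.950456, 1.000000, 1.089058])
M = rgb_to_xyz_matrix(C709, white_D65)     # RGB-to-XYZ conversion matrix
M_inv = np.linalg.inv(M)                   # XYZ-to-RGB conversion matrix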


TABLE 3.5-1. XYZ Tristimulus Values of Standard Illuminants

Illuminant     X0          Y0          Z0
A              1.098700    1.000000    0.355900
C              0.980708    1.000000    1.182163
D50            0.964296    1.000000    0.825105
D65            0.950456    1.000000    1.089058
E              1.000000    1.000000    1.000000

TABLE 3.5-2. XYZ Chromaticity Coordinates of Standard Primaries

Standard   Primary   x          y          z
CIE        RC        0.640000   0.330000   0.030000
           GC        0.300000   0.600000   0.100000
           BC        0.150000   0.060000   0.790000
NTSC       RN        0.670000   0.330000   0.000000
           GN        0.210000   0.710000   0.080000
           BN        0.140000   0.080000   0.780000
SMPTE      RS        0.630000   0.340000   0.030000
           GS        0.310000   0.595000   0.095000
           BS        0.155000   0.070000   0.775000
EBU        RE        0.640000   0.330000   0.030000
           GE        0.290000   0.600000   0.110000
           BE        0.150000   0.060000   0.790000
CCIR       RR        0.640000   0.330000   0.030000
           GR        0.300000   0.600000   0.100000
           BR        0.150000   0.060000   0.790000

UVW Uniform Chromaticity Scale Color Coordinate System. In 1960, the CIE adopted a coordinate system, called the Uniform Chromaticity Scale (UCS), in which, to a good approximation, equal changes in the chromaticity coordinates result in equal, just noticeable changes in the perceived hue and saturation of a color. The V component of the UCS coordinate system represents luminance. The u, v chromaticity coordinates are related to the x, y chromaticity coordinates by the relations (23)


\[ u = \frac{4x}{-2x + 12y + 3} \tag{3.5-3a} \]
\[ v = \frac{6y}{-2x + 12y + 3} \tag{3.5-3b} \]
\[ x = \frac{3u}{2u - 8v + 4} \tag{3.5-3c} \]
\[ y = \frac{2v}{2u - 8v + 4} \tag{3.5-3d} \]

Figure 3.5-6 is a UCS chromaticity diagram. The tristimulus values of the uniform chromaticity scale coordinate system UVW are related to the tristimulus values of the spectral coordinate primary system by

\[ \begin{bmatrix} U \\ V \\ W \end{bmatrix} = \begin{bmatrix} 0.32679084 & 0.20658636 & 0.13328947 \\ 0.17701522 & 0.81232418 & 0.01066060 \\ 0.02042971 & 1.06858510 & 0.41098519 \end{bmatrix} \begin{bmatrix} R_C \\ G_C \\ B_C \end{bmatrix} \tag{3.5-4a} \]
\[ \begin{bmatrix} R_C \\ G_C \\ B_C \end{bmatrix} = \begin{bmatrix} 2.84373542 & 0.50732308 & -0.93543113 \\ -0.63965541 & 1.16041034 & 0.17735107 \\ 1.52178123 & -3.04235208 & 2.01855417 \end{bmatrix} \begin{bmatrix} U \\ V \\ W \end{bmatrix} \tag{3.5-4b} \]

FIGURE 3.5-6. Chromaticity diagram for CIE uniform chromaticity scale primary system.


U*V*W* Color Coordinate System. The U*V*W* color coordinate system, adopted by the CIE in 1964, is an extension of the UVW coordinate system in an attempt to obtain a color solid for which unit shifts in luminance and chrominance are uniformly perceptible. The U*V*W* coordinates are defined as (24)
\[ U^* = 13\,W^*(u - u_o) \tag{3.5-5a} \]
\[ V^* = 13\,W^*(v - v_o) \tag{3.5-5b} \]
\[ W^* = 25\,(100\,Y)^{1/3} - 17 \tag{3.5-5c} \]

where the luminance Y is measured over a scale of 0.0 to 1.0 and $u_o$ and $v_o$ are the chromaticity coordinates of the reference illuminant. The UVW and U*V*W* coordinate systems were rendered obsolete in 1976 by the introduction by the CIE of the more accurate L*a*b* and L*u*v* color coordinate systems. Although deprecated by the CIE, much valuable data has been collected in the UVW and U*V*W* color systems.

L*a*b* Color Coordinate System. The L*a*b* cube root color coordinate system was developed to provide a computationally simple measure of color in agreement with the Munsell color system (25). The color coordinates are
\[ L^* = 116\left(\frac{Y}{Y_o}\right)^{1/3} - 16 \qquad \text{for } \frac{Y}{Y_o} > 0.008856 \tag{3.5-6a} \]
\[ L^* = 903.3\,\frac{Y}{Y_o} \qquad \text{for } 0.0 \le \frac{Y}{Y_o} \le 0.008856 \tag{3.5-6b} \]
\[ a^* = 500\left[ f\!\left(\frac{X}{X_o}\right) - f\!\left(\frac{Y}{Y_o}\right) \right] \tag{3.5-6c} \]
\[ b^* = 200\left[ f\!\left(\frac{Y}{Y_o}\right) - f\!\left(\frac{Z}{Z_o}\right) \right] \tag{3.5-6d} \]
where
\[ f(w) = w^{1/3} \qquad \text{for } w > 0.008856 \tag{3.5-6e} \]
\[ f(w) = 7.787\,w + 0.1379 \qquad \text{for } 0.0 \le w \le 0.008856 \tag{3.5-6f} \]


The terms Xo, Yo, Zo are the tristimulus values for the reference white. Basically, L* is correlated with brightness, a* with redness-greenness, and b* with yellowness-blueness. The inverse relationship between L*a*b* and XYZ is
\[ X = X_o\, g\!\left[ f\!\left(\frac{Y}{Y_o}\right) + \frac{a^*}{500} \right] \tag{3.5-7a} \]
\[ Y = Y_o\, g\!\left[ \frac{L^* + 16}{116} \right] \tag{3.5-7b} \]
\[ Z = Z_o\, g\!\left[ f\!\left(\frac{Y}{Y_o}\right) - \frac{b^*}{200} \right] \tag{3.5-7c} \]
where
\[ g(w) = w^3 \qquad \text{for } w > 0.20689 \tag{3.5-7d} \]
\[ g(w) = 0.1284\,(w - 0.1379) \qquad \text{for } 0.0 \le w \le 0.20689 \tag{3.5-7e} \]

L*u*v* Color Coordinate System. The L*u*v* coordinate system (26), which has evolved from the L*a*b* and the U*V*W* coordinate systems, became a CIE standard in 1976. It is defined as
\[ L^* = 25\left(\frac{100\,Y}{Y_o}\right)^{1/3} - 16 \qquad \text{for } \frac{Y}{Y_o} \ge 0.008856 \tag{3.5-8a} \]
\[ L^* = 903.3\,\frac{Y}{Y_o} \qquad \text{for } \frac{Y}{Y_o} < 0.008856 \tag{3.5-8b} \]
\[ u^* = 13\,L^*(u' - u'_o) \tag{3.5-8c} \]
\[ v^* = 13\,L^*(v' - v'_o) \tag{3.5-8d} \]
where
\[ u' = \frac{4X}{X + 15Y + 3Z} \tag{3.5-8e} \]
\[ v' = \frac{9Y}{X + 15Y + 3Z} \tag{3.5-8f} \]


and $u'_o$ and $v'_o$ are obtained by substitution of the tristimulus values Xo, Yo, Zo for the reference white. The inverse relationship is given by
\[ X = \frac{9u'\,Y}{4v'} \tag{3.5-9a} \]
\[ Y = Y_o\left(\frac{L^* + 16}{25}\right)^3 \tag{3.5-9b} \]
\[ Z = Y\,\frac{12 - 3u' - 20v'}{4v'} \tag{3.5-9c} \]
where
\[ u' = \frac{u^*}{13L^*} + u'_o \tag{3.5-9d} \]
\[ v' = \frac{v^*}{13L^*} + v'_o \tag{3.5-9e} \]

Figure 3.5-7 shows the linear RGB components of an NTSC receiver primary color image. This color image is printed in the color insert. If printed properly, the color image and its monochromatic component images will appear to be of “normal” brightness. When displayed electronically, the linear images will appear too dark. Section 3.5.3 discusses the proper display of electronic images. Figures 3.5-8 to 3.5-10 show the XYZ, Yxy, and L*a*b* components of Figure 3.5-7. Section 10.1.1 describes amplitude-scaling methods for the display of image components outside the unit amplitude range. The amplitude range of each component is printed below each photograph.

3.5.2. Subtractive Color Spaces

The color printing and color photographic processes (see Section 11-3) are based on a subtractive color representation. In color printing, the linear RGB color components are transformed to cyan (C), magenta (M), and yellow (Y) inks, which are overlaid at each pixel on paper that is usually white. The simplest transformation relationship is
\[ C = 1.0 - R \tag{3.5-10a} \]
\[ M = 1.0 - G \tag{3.5-10b} \]
\[ Y = 1.0 - B \tag{3.5-10c} \]


(a) Linear R, 0.000 to 0.965

(b) Linear G, 0.000 to 1.000

(c) Linear B, 0.000 to 0.965

FIGURE 3.5-7. Linear RGB components of the dolls_linear color image. See insert for a color representation of this figure.

where the linear RGB components are tristimulus values over [0.0, 1.0]. The inverse relations are
\[ R = 1.0 - C \tag{3.5-11a} \]
\[ G = 1.0 - M \tag{3.5-11b} \]
\[ B = 1.0 - Y \tag{3.5-11c} \]

In high-quality printing systems, the RGB-to-CMY transformations, which are usually proprietary, involve color component cross-coupling and point nonlinearities.


(a) X, 0.000 to 0.952

(b) Y, 0.000 to 0.985

(c) Z, 0.000 to 1.143

FIGURE 3.5-8. XYZ components of the dolls_linear color image.

To achieve dark black printing without using excessive amounts of CMY inks, it is common to add a fourth component, a black ink, called the key (K) or black component. The black component is set proportional to the smallest of the CMY components as computed by Eq. 3.5-10. The common RGB-to-CMYK transformation, which is based on the undercolor removal algorithm (27), is
\[ C = 1.0 - R - uK_b \tag{3.5-12a} \]
\[ M = 1.0 - G - uK_b \tag{3.5-12b} \]
\[ Y = 1.0 - B - uK_b \tag{3.5-12c} \]
\[ K = bK_b \tag{3.5-12d} \]


(a) Y, 0.000 to 0.965

(b) x, 0.140 to 0.670

(c) y, 0.080 to 0.710

FIGURE 3.5-9. Yxy components of the dolls_linear color image.

where
K b = MIN { 1.0 – R, 1.0 – G, 1.0 – B }

(3.5-12e)

and $0.0 \le u \le 1.0$ is the undercolor removal factor and $0.0 \le b \le 1.0$ is the blackness factor. Figure 3.5-11 presents the CMY components of the color image of Figure 3.5-7.

3.5.3. Video Color Spaces

The red, green, and blue signals from video camera sensors typically are linearly proportional to the light striking each sensor. However, the light generated by cathode ray tube displays is approximately proportional to the display amplitude drive signals


(a) L*, −16.000 to 99.434

(b) a*, −55.928 to 69.291

(c) b*, −65.224 to 90.171

FIGURE 3.5-10. L*a*b* components of the dolls_linear color image.

raised to a power in the range 2.0 to 3.0 (28). To obtain a good-quality display, it is necessary to compensate for this point nonlinearity. The compensation process, called gamma correction, involves passing the camera sensor signals through a point nonlinearity with a power, typically, of about 0.45. In television systems, to reduce receiver cost, gamma correction is performed at the television camera rather than at the receiver. A linear RGB image that has been gamma corrected is called a gamma RGB image. Liquid crystal displays are reasonably linear in the sense that the light generated is approximately proportional to the display amplitude drive signal. But because LCDs are used in lieu of CRTs in many applications, they usually employ circuitry to compensate for the gamma correction at the sensor.


(a) C, 0.0035 to 1.000

(b) M, 0.000 to 1.000

(c) Y, 0.0035 to 1.000

FIGURE 3.5-11. CMY components of the dolls_linear color image.

In high-precision applications, gamma correction follows a linear law for low-amplitude components and a power law for high-amplitude components according to the relations (22)

\[ \tilde{K} = c_1 K^{c_2} + c_3 \qquad \text{for } K \ge b \tag{3.5-13a} \]
\[ \tilde{K} = c_4 K \qquad \text{for } 0.0 \le K < b \tag{3.5-13b} \]


where K denotes a linear RGB component and $\tilde{K}$ is the gamma-corrected component. The constants $c_k$ and the breakpoint b are specified in Table 3.5-3 for the general case and for conversion to the SMPTE, CCIR and CIE lightness components. Figure 3.5-12 is a plot of the gamma correction curve for the CCIR Rec. 709 primaries.

TABLE 3.5-3. Gamma Correction Constants

Constant   General   SMPTE     CCIR     CIE L*
c1         1.00      1.1115    1.099    116.0
c2         0.45      0.45      0.45     0.3333
c3         0.00      -0.1115   -0.099   -16.0
c4         0.00      4.0       4.5      903.3
b          0.00      0.0228    0.018    0.008856

The inverse gamma correction relation is
\[ K = \left( \frac{\tilde{K} - c_3}{c_1} \right)^{1/c_2} \qquad \text{for } \tilde{K} \ge c_4 b \tag{3.5-14a} \]
\[ K = \frac{\tilde{K}}{c_4} \qquad \text{for } 0.0 \le \tilde{K} < c_4 b \tag{3.5-14b} \]

FIGURE 3.5-12. Gamma correction curve for the CCIR Rec. 709 primaries.
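The following sketch (Python, not from the text) applies the forward and inverse gamma correction relations of Eqs. 3.5-13 and 3.5-14 to a single linear component, using the CCIR constants of Table 3.5-3; the round-trip check simply illustrates that Eq. 3.5-14 inverts Eq. 3.5-13.

def gamma_correct(K, c1=1.099, c2=0.45, c3=-0.099, c4=4.5, b=0.018):
    # Eq. 3.5-13 with the CCIR constants of Table 3.5-3.
    return c1 * K ** c2 + c3 if K >= b else c4 * K

def inverse_gamma_correct(Kt, c1=1.099, c2=0.45, c3=-0.099, c4=4.5, b=0.018):
    # Eq. 3.5-14, the inverse gamma correction relation.
    return ((Kt - c3) / c1) ** (1.0 / c2) if Kt >= c4 * b else Kt / c4

# Round trip for a mid-level linear component.
K = 0.18
assert abs(inverse_gamma_correct(gamma_correct(K)) - K) < 1e-12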


(a) Gamma R, 0.000 to 0.984

(b) Gamma G, 0.000 to 1.000

(c) Gamma B, 0.000 to 0.984

FIGURE 3.5-13. Gamma RGB components of the dolls_gamma color image. See insert for a color representation of this figure.

Figure 3.5-13 shows the gamma RGB components of the color image of Figure 3.5-7. The gamma color image is printed in the color insert. The gamma components have been printed as if they were linear components to illustrate the effects of the point transformation. When viewed on an electronic display, the gamma RGB color image will appear to be of “normal” brightness. YIQ NTSC Transmission Color Coordinate System. In the development of the color television system in the United States, NTSC formulated a color coordinate system for transmission composed of three values, Y, I, Q (14). The Y value, called luma, is proportional to the gamma-corrected luminance of a color. The other two components, I and Q, called chroma, jointly describe the hue and saturation


attributes of an image. The reasons for transmitting the YIQ components rather than the gamma-corrected $\tilde{R}_N \tilde{G}_N \tilde{B}_N$ components directly from a color camera were twofold: the Y signal alone could be used with existing monochrome receivers to display monochrome images, and it was found possible to limit the spatial bandwidth of the I and Q signals without noticeable image degradation. As a result of the latter property, a clever analog modulation scheme was developed such that the bandwidth of a color television carrier could be restricted to the same bandwidth as a monochrome carrier. The YIQ transformations for an Illuminant C reference white are given by

\[ \begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.29889531 & 0.58662247 & 0.11448223 \\ 0.59597799 & -0.27417610 & -0.32180189 \\ 0.21147017 & -0.52261711 & 0.31114694 \end{bmatrix} \begin{bmatrix} \tilde{R}_N \\ \tilde{G}_N \\ \tilde{B}_N \end{bmatrix} \tag{3.5-15a} \]
\[ \begin{bmatrix} \tilde{R}_N \\ \tilde{G}_N \\ \tilde{B}_N \end{bmatrix} = \begin{bmatrix} 1.00000000 & 0.95608445 & 0.62088850 \\ 1.00000000 & -0.27137664 & -0.64860590 \\ 1.00000000 & -1.10561724 & 1.70250126 \end{bmatrix} \begin{bmatrix} Y \\ I \\ Q \end{bmatrix} \tag{3.5-15b} \]

where the tilde denotes that the component has been gamma corrected. Figure 3.5-14 presents the YIQ components of the gamma color image of Figure 3.5-13. YUV EBU Transmission Color Coordinate System. In the PAL and SECAM color television systems (29) used in many countries, the luma Y and two color differences,
\[ U = \frac{\tilde{B}_E - Y}{2.03} \tag{3.5-16a} \]
\[ V = \frac{\tilde{R}_E - Y}{1.14} \tag{3.5-16b} \]

are used as transmission coordinates, where $\tilde{R}_E$ and $\tilde{B}_E$ are the gamma-corrected EBU red and blue components, respectively. The YUV coordinate system was initially proposed as the NTSC transmission standard but was later replaced by the YIQ system because it was found (4) that the I and Q signals could be reduced in bandwidth to a greater degree than the U and V signals for an equal level of visual quality. The I and Q signals are related to the U and V signals by a simple rotation of coordinates in color space:


(a) Y, 0.000 to 0.994

(b) I, −0.276 to 0.347

(c) Q, −0.147 to 0.169

FIGURE 3.5-14. YIQ components of the gamma corrected dolls_gamma color image.

\[ I = -U \sin 33^\circ + V \cos 33^\circ \tag{3.5-17a} \]
\[ Q = U \cos 33^\circ + V \sin 33^\circ \tag{3.5-17b} \]
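A small numerical sketch (Python with NumPy, not from the text) of Eqs. 3.5-15a, 3.5-16 and 3.5-17 follows: Y, I, Q are computed directly from a hypothetical gamma-corrected pixel, and the same I and Q are then reproduced by the 33° rotation of U and V. Here U and V are formed from the same red and blue components purely to exhibit the rotation identity.

import numpy as np

# Eq. 3.5-15a: gamma-corrected NTSC RGB to YIQ.
M_YIQ = np.array([[ 0.29889531,  0.58662247,  0.11448223],
                  [ 0.59597799, -0.27417610, -0.32180189],
                  [ 0.21147017, -0.52261711,  0.31114694]])

rgb = np.array([0.6, 0.4, 0.2])            # hypothetical gamma-corrected pixel
Y, I, Q = M_YIQ @ rgb

# Eq. 3.5-16 color differences formed from the same components, then the
# Eq. 3.5-17 rotation; I_rot and Q_rot agree with I and Q to rounding error.
theta = np.radians(33.0)
U = (rgb[2] - Y) / 2.03
V = (rgb[0] - Y) / 1.14
I_rot = -U * np.sin(theta) + V * np.cos(theta)
Q_rot =  U * np.cos(theta) + V * np.sin(theta)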

It should be noted that the U and V components of the YUV video color space are not equivalent to the U and V components of the UVW uniform chromaticity system. YCbCr CCIR Rec. 601 Transmission Color Coordinate System. The CCIR Rec. 601 color coordinate system YCbCr is defined for the transmission of luma and chroma components coded in the integer range 0 to 255. The YCbCr transformations for unit range components are defined as (28)


\[ \begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 0.29900000 & 0.58700000 & 0.11400000 \\ -0.16873600 & -0.33126400 & 0.50000000 \\ 0.50000000 & -0.41866800 & -0.08131200 \end{bmatrix} \begin{bmatrix} \tilde{R}_S \\ \tilde{G}_S \\ \tilde{B}_S \end{bmatrix} \tag{3.5-18a} \]
\[ \begin{bmatrix} \tilde{R}_S \\ \tilde{G}_S \\ \tilde{B}_S \end{bmatrix} = \begin{bmatrix} 1.00000000 & -0.00092640 & 1.40168676 \\ 1.00000000 & -0.34369538 & -0.71416904 \\ 1.00000000 & 1.77216042 & 0.00099022 \end{bmatrix} \begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} \tag{3.5-18b} \]

where the tilde denotes that the component has been gamma corrected.

Photo YCC Color Coordinate System. The Eastman Kodak Company has developed an image storage system, called PhotoCD, in which a photographic negative is scanned, converted to a luma/chroma format similar to Rec. 601 YCbCr, and recorded in a proprietary compressed form on a compact disk. The PhotoYCC format and its associated RGB display format have become de facto standards. PhotoYCC employs the CCIR Rec. 709 primaries for scanning. The conversion to YCC is defined as (27,28,30)
\[ \begin{bmatrix} Y \\ C_1 \\ C_2 \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.299 & -0.587 & 0.886 \\ 0.701 & -0.587 & -0.114 \end{bmatrix} \begin{bmatrix} \tilde{R}_{709} \\ \tilde{G}_{709} \\ \tilde{B}_{709} \end{bmatrix} \tag{3.5-19a} \]

Transformation from PhotoCD components for display is not an exact inverse of Eq. 3.5-19a, in order to preserve the extended dynamic range of film images. The YC1C2-to-RDGDBD display conversion is given by

\[ \begin{bmatrix} R_D \\ G_D \\ B_D \end{bmatrix} = \begin{bmatrix} 0.969 & 0.000 & 1.000 \\ 0.969 & -0.194 & -0.509 \\ 0.969 & 1.000 & 0.000 \end{bmatrix} \begin{bmatrix} Y \\ C_1 \\ C_2 \end{bmatrix} \tag{3.5-19b} \]

3.5.4. Nonstandard Color Spaces

Several nonstandard color spaces used for image processing applications are described in this section.


IHS Color Coordinate System. The IHS coordinate system (31) has been used within the image processing community as a quantitative means of specifying the intensity, hue, and saturation of a color. It is defined by the relations

\[ \begin{bmatrix} I \\ V_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} \dfrac{1}{3} & \dfrac{1}{3} & \dfrac{1}{3} \\ \dfrac{-1}{\sqrt{6}} & \dfrac{-1}{\sqrt{6}} & \dfrac{2}{\sqrt{6}} \\ \dfrac{1}{\sqrt{6}} & \dfrac{-1}{\sqrt{6}} & 0 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{3.5-20a} \]

 V2 H = arc tan  -----  V1 S = ( V1 + V 2 )
2 2 1⁄2

(3.5-20b)

(3.5-20c)

By this definition, the color blue is the zero reference for hue. The inverse relationship is
\[ V_1 = S \cos\{H\} \tag{3.5-21a} \]
\[ V_2 = S \sin\{H\} \tag{3.5-21b} \]
\[ \begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1 & -\dfrac{\sqrt{6}}{6} & \dfrac{\sqrt{6}}{2} \\ 1 & \dfrac{\sqrt{6}}{6} & -\dfrac{\sqrt{6}}{2} \\ 1 & \dfrac{\sqrt{6}}{3} & 0 \end{bmatrix} \begin{bmatrix} I \\ V_1 \\ V_2 \end{bmatrix} \tag{3.5-21c} \]
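A compact Python/NumPy sketch of the forward and inverse IHS relations of Eqs. 3.5-20 and 3.5-21 follows; it is illustrative only, and arctan2 is used in place of the arctangent of Eq. 3.5-20b so that the hue quadrant is resolved.

import numpy as np

SQRT6 = np.sqrt(6.0)
M_IHS = np.array([[1/3, 1/3, 1/3],
                  [-1/SQRT6, -1/SQRT6, 2/SQRT6],
                  [ 1/SQRT6, -1/SQRT6, 0.0]])

def rgb_to_ihs(rgb):
    I, V1, V2 = M_IHS @ np.asarray(rgb, dtype=float)   # Eq. 3.5-20a
    H = np.arctan2(V2, V1)                              # Eq. 3.5-20b (quadrant-aware)
    S = np.hypot(V1, V2)                                # Eq. 3.5-20c
    return I, H, S

def ihs_to_rgb(I, H, S):
    V1, V2 = S * np.cos(H), S * np.sin(H)               # Eqs. 3.5-21a,b
    return np.linalg.solve(M_IHS, np.array([I, V1, V2]))  # Eq. 3.5-21c

rgb = np.array([0.5, 0.3, 0.8])
assert np.allclose(ihs_to_rgb(*rgb_to_ihs(rgb)), rgb)    # round trip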

Figure 3.5-15 shows the IHS components of the gamma RGB image of Figure 3.5-13.

Karhunen–Loeve Color Coordinate System. Typically, the R, G, and B tristimulus values of a color image are highly correlated with one another (32). In the development of efficient quantization, coding, and processing techniques for color images, it is often desirable to work with components that are uncorrelated. If the second-order moments of the RGB tristimulus values are known, or at least estimable, it is


(a) I, 0.000 to 0.989

(b) H, −3.136 to 3.142

(c) S, 0.000 to 0.476

FIGURE 3.5-15. IHS components of the dolls_gamma color image.

possible to derive an orthogonal coordinate system, in which the components are uncorrelated, by a Karhunen–Loeve (K–L) transformation of the RGB tristimulus values. The K-L color transform is defined as

\[ \begin{bmatrix} K_1 \\ K_2 \\ K_3 \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{3.5-22a} \]


\[ \begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} m_{11} & m_{21} & m_{31} \\ m_{12} & m_{22} & m_{32} \\ m_{13} & m_{23} & m_{33} \end{bmatrix} \begin{bmatrix} K_1 \\ K_2 \\ K_3 \end{bmatrix} \tag{3.5-22b} \]

where the transformation matrix with general term $m_{ij}$ is composed of the eigenvectors of the RGB covariance matrix with general term $u_{ij}$. The transformation matrix satisfies the relation

\[ \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ u_{12} & u_{22} & u_{23} \\ u_{13} & u_{23} & u_{33} \end{bmatrix} \begin{bmatrix} m_{11} & m_{21} & m_{31} \\ m_{12} & m_{22} & m_{32} \\ m_{13} & m_{23} & m_{33} \end{bmatrix} = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix} \tag{3.5-23} \]
where $\lambda_1$, $\lambda_2$, $\lambda_3$ are the eigenvalues of the covariance matrix and
\[ u_{11} = E\{ (R - \bar{R})^2 \} \tag{3.5-24a} \]
\[ u_{22} = E\{ (G - \bar{G})^2 \} \tag{3.5-24b} \]
\[ u_{33} = E\{ (B - \bar{B})^2 \} \tag{3.5-24c} \]
\[ u_{12} = E\{ (R - \bar{R})(G - \bar{G}) \} \tag{3.5-24d} \]
\[ u_{13} = E\{ (R - \bar{R})(B - \bar{B}) \} \tag{3.5-24e} \]
\[ u_{23} = E\{ (G - \bar{G})(B - \bar{B}) \} \tag{3.5-24f} \]
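The sketch below (Python with NumPy, not part of the text) carries out the Karhunen–Loeve decorrelation of Eqs. 3.5-22 to 3.5-24 on a hypothetical set of correlated RGB pixels: the covariance matrix u_ij is estimated from the data, its eigenvectors form the rows of the transformation matrix m, and the transformed components are verified to be uncorrelated, as required by Eq. 3.5-23.

import numpy as np

def kl_color_transform(rgb_pixels):
    # Decorrelate the R, G, B components of an N x 3 array of pixels.
    X = np.asarray(rgb_pixels, dtype=float)
    U = np.cov(X, rowvar=False)          # covariance matrix u_ij (Eq. 3.5-24)
    lam, vecs = np.linalg.eigh(U)        # eigenvalues and eigenvectors
    M = vecs.T                           # rows are eigenvectors (Eq. 3.5-22a)
    K = X @ M.T                          # K-L components K1, K2, K3 per pixel
    return K, M, lam

# Hypothetical correlated RGB data.
rng = np.random.default_rng(0)
base = rng.random((1000, 1))
rgb = np.hstack([base + 0.05 * rng.random((1000, 1)) for _ in range(3)])
K, M, lam = kl_color_transform(rgb)
# M U M^T is diagonal with the eigenvalues, so the K components are uncorrelated.
assert np.allclose(np.cov(K, rowvar=False), np.diag(lam), atol=1e-10)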

In Eq. 3.5-23, E { · } is the expectation operator and the overbar denotes the mean value of a random variable. Retinal Cone Color Coordinate System. As indicated in Chapter 2, in the discussion of models of the human visual system for color vision, indirect measurements of the spectral sensitivities s 1 ( λ ) , s 2 ( λ ) , s 3 ( λ ) have been made for the three types of retinal cones. It has been found that these spectral sensitivity functions can be linearly related to spectral tristimulus values established by colorimetric experimentation. Hence a set of cone signals T1, T2, T3 may be regarded as tristimulus values in a retinal cone color coordinate system. The tristimulus values of the retinal cone color coordinate system are related to the XYZ system by the coordinate conversion matrix (33)


\[ \begin{bmatrix} T_1 \\ T_2 \\ T_3 \end{bmatrix} = \begin{bmatrix} 0.000000 & 1.000000 & 0.000000 \\ -0.460000 & 1.359000 & 0.101000 \\ 0.000000 & 0.000000 & 1.000000 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \tag{3.5-25} \]

REFERENCES
1. T. P. Merrit and F. F. Hall, Jr., “Blackbody Radiation,” Proc. IRE, 47, 9, September 1959, 1435–1442. 2. H. H. Malitson, “The Solar Energy Spectrum,” Sky and Telescope, 29, 4, March 1965, 162–165. 3. R. D. Larabee, “Spectral Emissivity of Tungsten,” J. Optical of Society America, 49, 6, June 1959, 619–625. 4. The Science of Color, Crowell, New York, 1973. 5. D. G. Fink, Ed., Television Engineering Handbook, McGraw-Hill, New York, 1957. 6. Toray Industries, Inc. LCD Color Filter Specification. 7. J. W. T. Walsh, Photometry, Constable, London, 1953. 8. M. Born and E. Wolf, Principles of Optics, 6th ed., Pergamon Press, New York, 1981. 9. K. S. Weaver, “The Visibility of Radiation at Low Intensities,” J. Optical Society of America, 27, 1, January 1937, 39–43. 10. G. Wyszecki and W. S. Stiles, Color Science, 2nd ed., Wiley, New York, 1982. 11. R. W. G. Hunt, The Reproduction of Colour, 5th ed., Wiley, New York, 1957. 12. W. D. Wright, The Measurement of Color, Adam Hilger, London, 1944, 204–205. 13. R. A. Enyord, Ed., Color: Theory and Imaging Systems, Society of Photographic Scientists and Engineers, Washington, DC, 1973. 14. F. J. Bingley, “Color Vision and Colorimetry,” in Television Engineering Handbook, D. G. Fink, ed., McGraw–Hill, New York, 1957. 15. H. Grassman, “On the Theory of Compound Colours,” Philosophical Magazine, Ser. 4, 7, April 1854, 254–264. 16. W. T. Wintringham, “Color Television and Colorimetry,” Proc. IRE, 39, 10, October 1951, 1135–1172. 17. “EBU Standard for Chromaticity Tolerances for Studio Monitors,” Technical Report 3213-E, European Broadcast Union, Brussels, 1975. 18. “Encoding Parameters of Digital Television for Studios”, Recommendation ITU-R BT.601-4, (International Telecommunications Union, Geneva; 1990). 19 “Basic Parameter Values for the HDTV Standard for the Studio and for International Programme Exchange,” Recommendation ITU-R BT 709, International Telecommunications Unions, Geneva; 1990. 20. L. E. DeMarsh, “Colorimetric Standards in U.S. Color Television. A Report to the Subcommittee on Systems Colorimetry of the SMPTE Television Committee,” J. Society of Motion Picture and Television Engineers, 83, 1974.


21. “Information Technology, Computer Graphics and Image Processing, Image Processing and Interchange, Part 1: Common Architecture for Imaging,” ISO/IEC 12087-1:1995(E). 22. “Information Technology, Computer Graphics and Image Processing, Image Processing and Interchange, Part 2: Programmer’s Imaging Kernel System Application Program Interface,” ISO/IEC 12087-2:1995(E). 23. D. L. MacAdam, “Projective Transformations of ICI Color Specifications,” J. Optical Society of America, 27, 8, August 1937, 294–299. 24. G. Wyszecki, “Proposal for a New Color-Difference Formula,” J. Optical Society of America, 53, 11, November 1963, 1318–1319. 25. “CIE Colorimetry Committee Proposal for Study of Color Spaces,” Technical, Note, J. Optical Society of America, 64, 6, June 1974, 896–897. 26. Colorimetry, 2nd ed., Publication 15.2, Central Bureau, Commission Internationale de l'Eclairage, Vienna, 1986. 27. W. K. Pratt, Developing Visual Applications, XIL: An Imaging Foundation Library, Sun Microsystems Press, Mountain View, CA, 1997. 28. C. A. Poynton, A Technical Introduction to Digital Video, Wiley, New York, 1996. 29. P. S. Carnt and G. B. Townsend, Color Television Vol. 2; PAL, SECAM, and Other Systems, Iliffe, London, 1969. 30. I. Kabir, High Performance Computer Imaging, Manning Publications, Greenwich, CT, 1996. 31. W. Niblack, An Introduction to Digital Image Processing, Prentice Hall, Englewood Cliffs, NJ, 1985. 32. W. K. Pratt, “Spatial Transform Coding of Color Images,” IEEE Trans. Communication Technology, COM-19, 12, December 1971, 980–992. 33. D. B. Judd, “Standard Response Functions for Protanopic and Deuteranopic Vision,” J. Optical Society of America, 35, 3, March 1945, 199–221.


PART 2
DIGITAL IMAGE CHARACTERIZATION
Digital image processing is based on the conversion of a continuous image field to equivalent digital form. This part of the book considers the image sampling and quantization processes that perform the analog image to digital image conversion. The inverse operation of producing continuous image displays from digital image arrays is also analyzed. Vector-space methods of image representation are developed for deterministic and stochastic image arrays.



4
IMAGE SAMPLING AND RECONSTRUCTION

In digital image processing systems, one usually deals with arrays of numbers obtained by spatially sampling points of a physical image. After processing, another array of numbers is produced, and these numbers are then used to reconstruct a continuous image for viewing. Image samples nominally represent some physical measurements of a continuous image field, for example, measurements of the image intensity or photographic density. Measurement uncertainties exist in any physical measurement apparatus. It is important to be able to model these measurement errors in order to specify the validity of the measurements and to design processes for compensation of the measurement errors. Also, it is often not possible to measure an image field directly. Instead, measurements are made of some function related to the desired image field, and this function is then inverted to obtain the desired image field. Inversion operations of this nature are discussed in the sections on image restoration. In this chapter the image sampling and reconstruction process is considered for both theoretically exact and practical systems.

4.1. IMAGE SAMPLING AND RECONSTRUCTION CONCEPTS In the design and analysis of image sampling and reconstruction systems, input images are usually regarded as deterministic fields (1–5). However, in some situations it is advantageous to consider the input to an image processing system, especially a noise input, as a sample of a two-dimensional random process (5–7). Both viewpoints are developed here for the analysis of image sampling and reconstruction methods.

FIGURE 4.1-1. Dirac delta function sampling array.

4.1.1. Sampling Deterministic Fields

Let $F_I(x, y)$ denote a continuous, infinite-extent, ideal image field representing the luminance, photographic density, or some desired parameter of a physical image. In a perfect image sampling system, spatial samples of the ideal image would, in effect, be obtained by multiplying the ideal image by a spatial sampling function
\[ S(x, y) = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \delta(x - j\,\Delta x,\; y - k\,\Delta y) \tag{4.1-1} \]

composed of an infinite array of Dirac delta functions arranged in a grid of spacing ( ∆x, ∆y ) as shown in Figure 4.1-1. The sampled image is then represented as
\[ F_P(x, y) = F_I(x, y)\,S(x, y) = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(j\,\Delta x,\, k\,\Delta y)\,\delta(x - j\,\Delta x,\; y - k\,\Delta y) \tag{4.1-2} \]

where it is observed that F I ( x, y ) may be brought inside the summation and evaluated only at the sample points ( j ∆x, k ∆y) . It is convenient, for purposes of analysis, to consider the spatial frequency domain representation F P ( ω x, ω y ) of the sampled image obtained by taking the continuous two-dimensional Fourier transform of the sampled image. Thus

\[ F_P(\omega_x, \omega_y) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} F_P(x, y)\, \exp\{ -i(\omega_x x + \omega_y y) \}\, dx\, dy \tag{4.1-3} \]


By the Fourier transform convolution theorem, the Fourier transform of the sampled image can be expressed as the convolution of the Fourier transforms of the ideal image F I ( ω x, ω y ) and the sampling function S ( ω x, ω y ) as expressed by
\[ F_P(\omega_x, \omega_y) = \frac{1}{4\pi^2}\, F_I(\omega_x, \omega_y) * S(\omega_x, \omega_y) \tag{4.1-4} \]

The two-dimensional Fourier transform of the spatial sampling function is an infinite array of Dirac delta functions in the spatial frequency domain as given by (4, p. 22)
\[ S(\omega_x, \omega_y) = \frac{4\pi^2}{\Delta x\, \Delta y} \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \delta(\omega_x - j\,\omega_{xs},\; \omega_y - k\,\omega_{ys}) \tag{4.1-5} \]

where ω xs = 2π ⁄ ∆x and ω ys = 2π ⁄ ∆y represent the Fourier domain sampling frequencies. It will be assumed that the spectrum of the ideal image is bandlimited to some bounds such that F I ( ω x, ω y ) = 0 for ω x > ω xc and ω y > ω yc . Performing the convolution of Eq. 4.1-4 yields
\[ F_P(\omega_x, \omega_y) = \frac{1}{\Delta x\, \Delta y} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} F_I(\omega_x - \alpha,\, \omega_y - \beta) \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \delta(\alpha - j\,\omega_{xs},\; \beta - k\,\omega_{ys})\, d\alpha\, d\beta \tag{4.1-6} \]

Upon changing the order of summation and integration and invoking the sifting property of the delta function, the sampled image spectrum becomes
\[ F_P(\omega_x, \omega_y) = \frac{1}{\Delta x\, \Delta y} \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(\omega_x - j\,\omega_{xs},\; \omega_y - k\,\omega_{ys}) \tag{4.1-7} \]

As can be seen from Figure 4.1-2, the spectrum of the sampled image consists of the spectrum of the ideal image infinitely repeated over the frequency plane in a grid of resolution ( 2π ⁄ ∆x, 2π ⁄ ∆y ) . It should be noted that if ∆x and ∆y are chosen too large with respect to the spatial frequency limits of F I ( ω x, ω y ) , the individual spectra will overlap. A continuous image field may be obtained from the image samples of FP ( x, y ) by linear spatial interpolation or by linear spatial filtering of the sampled image. Let R ( x, y ) denote the continuous domain impulse response of an interpolation filter and R ( ω x, ω y ) represent its transfer function. Then the reconstructed image is obtained


FIGURE 4.1-2. Typical sampled image spectra: (a) original image; (b) sampled image, with spectral replicas spaced 2π/∆x and 2π/∆y apart in the (ωx, ωy) plane.

by a convolution of the samples with the reconstruction filter impulse response. The reconstructed image then becomes
FR ( x, y ) = F P ( x, y ) * R ( x, y )

(4.1-8)

Upon substituting for FP ( x, y ) from Eq. 4.1-2 and performing the convolution, one obtains
\[ F_R(x, y) = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(j\,\Delta x,\, k\,\Delta y)\, R(x - j\,\Delta x,\; y - k\,\Delta y) \tag{4.1-9} \]

Thus it is seen that the impulse response function R ( x, y ) acts as a two-dimensional interpolation waveform for the image samples. The spatial frequency spectrum of the reconstructed image obtained from Eq. 4.1-8 is equal to the product of the reconstruction filter transform and the spectrum of the sampled image,
F R ( ω x, ω y ) = F P ( ω x, ω y )R ( ω x, ω y )

(4.1-10)

or, from Eq. 4.1-7,

\[ F_R(\omega_x, \omega_y) = \frac{1}{\Delta x\, \Delta y}\, R(\omega_x, \omega_y) \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(\omega_x - j\,\omega_{xs},\; \omega_y - k\,\omega_{ys}) \tag{4.1-11} \]


It is clear from Eq. 4.1-11 that if there is no spectrum overlap and if $R(\omega_x, \omega_y)$ filters out all spectra for $j, k \ne 0$, the spectrum of the reconstructed image can be made equal to the spectrum of the ideal image, and therefore the images themselves can be made identical. The first condition is met for a bandlimited image if the sampling period is chosen such that the rectangular region bounded by the image cutoff frequencies $(\omega_{xc}, \omega_{yc})$ lies within a rectangular region defined by one-half the sampling frequency. Hence
\[ \omega_{xc} \le \frac{\omega_{xs}}{2} \qquad \omega_{yc} \le \frac{\omega_{ys}}{2} \tag{4.1-12a} \]
or, equivalently,
\[ \Delta x \le \frac{\pi}{\omega_{xc}} \qquad \Delta y \le \frac{\pi}{\omega_{yc}} \tag{4.1-12b} \]

In physical terms, the sampling period must be equal to or smaller than one-half the period of the finest detail within the image. This sampling condition is equivalent to the one-dimensional sampling theorem constraint for time-varying signals that requires a time-varying signal to be sampled at a rate of at least twice its highest-frequency component. If equality holds in Eq. 4.1-12, the image is said to be sampled at its Nyquist rate; if ∆x and ∆y are smaller than required by the Nyquist criterion, the image is called oversampled; and if the opposite case holds, the image is undersampled. If the original image is sampled at a spatial rate sufficient to prevent spectral overlap in the sampled image, exact reconstruction of the ideal image can be achieved by spatial filtering the samples with an appropriate filter. For example, as shown in Figure 4.1-3, a filter with a transfer function of the form
\[ R(\omega_x, \omega_y) = K \qquad \text{for } |\omega_x| \le \omega_{xL} \text{ and } |\omega_y| \le \omega_{yL} \tag{4.1-13a} \]
\[ R(\omega_x, \omega_y) = 0 \qquad \text{otherwise} \tag{4.1-13b} \]

where K is a scaling constant, satisfies the condition of exact reconstruction if ω xL > ω xc and ω yL > ω yc . The point-spread function or impulse response of this reconstruction filter is
\[ R(x, y) = \frac{K\,\omega_{xL}\,\omega_{yL}}{\pi^2}\; \frac{\sin\{\omega_{xL} x\}}{\omega_{xL} x}\; \frac{\sin\{\omega_{yL} y\}}{\omega_{yL} y} \tag{4.1-14} \]
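As a one-dimensional illustration of Eqs. 4.1-9, 4.1-12 and 4.1-14 (a sketch in Python with NumPy, not from the text), a bandlimited signal sampled at a period satisfying Eq. 4.1-12 is reconstructed by sinc interpolation; because the sample array is truncated, the reconstruction is slightly in error near its ends, as discussed later for truncated sampling arrays. The signal and rates are hypothetical.

import numpy as np

wc = 2 * np.pi * 2.5                      # assumed signal bandlimit (rad per unit)
dx = np.pi / wc                           # sampling period satisfying Eq. 4.1-12b
xs = np.arange(-50, 51) * dx              # finite array of sample locations
f = lambda x: np.sin(2 * np.pi * 1.0 * x) + 0.5 * np.cos(2 * np.pi * 2.3 * x)

x = np.linspace(-1.0, 1.0, 201)           # reconstruction grid, far from the ends
# np.sinc((x - xk)/dx) is the ideal low-pass interpolation waveform of Eq. 4.1-14.
recon = np.sum(f(xs)[:, None] * np.sinc((x[None, :] - xs[:, None]) / dx), axis=0)
max_err = np.max(np.abs(recon - f(x)))    # small away from the truncation boundary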


FIGURE 4.1-3. Sampled image reconstruction filters.

With this filter, an image is reconstructed with an infinite sum of ( sin θ ) ⁄ θ functions, called sinc functions. Another type of reconstruction filter that could be employed is the cylindrical filter with a transfer function

K  R ( ω x, ω y ) =   0
2 2 2

for ω x + ω y ≤ ω 0 otherwise

2

2

(4.1-15a) (4.1-15b)

provided that ω 0 > ω xc + ω yc . The impulse response for this filter is


\[ R(x, y) = 2\pi\,\omega_0\,K\; \frac{J_1\!\left\{ \omega_0 \sqrt{x^2 + y^2} \right\}}{\sqrt{x^2 + y^2}} \tag{4.1-16} \]

where $J_1\{\cdot\}$ is a first-order Bessel function. There are a number of reconstruction filters, or equivalently, interpolation waveforms, that could be employed to provide perfect image reconstruction. In practice, however, it is often difficult to implement optimum reconstruction filters for imaging systems.

4.1.2. Sampling Random Image Fields

In the previous discussion of image sampling and reconstruction, the ideal input image field has been considered to be a deterministic function. It has been shown that if the Fourier transform of the ideal image is bandlimited, then discrete image samples taken at the Nyquist rate are sufficient to reconstruct an exact replica of the ideal image with proper sample interpolation. It will now be shown that similar results hold for sampling two-dimensional random fields. Let $F_I(x, y)$ denote a continuous two-dimensional stationary random process with known mean $\eta_{F_I}$ and autocorrelation function
\[ R_{F_I}(\tau_x, \tau_y) = E\{ F_I(x_1, y_1)\, F_I^*(x_2, y_2) \} \tag{4.1-17} \]

where τ x = x 1 – x 2 and τ y = y 1 – y 2 . This process is spatially sampled by a Dirac sampling array yielding

\[ F_P(x, y) = F_I(x, y)\,S(x, y) = F_I(x, y) \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \delta(x - j\,\Delta x,\; y - k\,\Delta y) \tag{4.1-18} \]

The autocorrelation of the sampled process is then

* RF ( τ x, τ y ) = E { F P ( x 1, y 1 ) F P ( x 2, y 2 ) }
P

(4.1-19)

= E { F I ( x 1, y 1 ) F *( x 2, y 2 ) }S ( x 1, y 1 )S ( x 2, y 2 ) I

The first term on the right-hand side of Eq. 4.1-19 is the autocorrelation of the stationary ideal image field. It should be observed that the product of the two Dirac sampling functions on the right-hand side of Eq. 4.1-19 is itself a Dirac sampling function of the form


S ( x 1, y 1 )S ( x 2, y 2 ) = S ( x 1 – x 2, y 1 – y 2 ) = S ( τ x, τ y )

(4.1-20)

Hence the sampled random field is also stationary with an autocorrelation function
R F ( τ x, τ y ) = R F ( τ x, τ y )S ( τ x, τ y )
P I

(4.1-21)

Taking the two-dimensional Fourier transform of Eq. 4.1-21 yields the power spectrum of the sampled random field. By the Fourier transform convolution theorem
\[ W_{F_P}(\omega_x, \omega_y) = \frac{1}{4\pi^2}\, W_{F_I}(\omega_x, \omega_y) * S(\omega_x, \omega_y) \tag{4.1-22} \]

where W F I ( ω x, ω y ) and W F P ( ω x, ω y ) represent the power spectral densities of the ideal image and sampled ideal image, respectively, and S ( ω x, ω y ) is the Fourier transform of the Dirac sampling array. Then, by the derivation leading to Eq. 4.1-7, it is found that the spectrum of the sampled field can be written as
\[ W_{F_P}(\omega_x, \omega_y) = \frac{1}{\Delta x\, \Delta y} \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} W_{F_I}(\omega_x - j\,\omega_{xs},\; \omega_y - k\,\omega_{ys}) \tag{4.1-23} \]

Thus the sampled image power spectrum is composed of the power spectrum of the continuous ideal image field replicated over the spatial frequency domain at integer multiples of the sampling spatial frequency $(2\pi/\Delta x,\, 2\pi/\Delta y)$. If the power spectrum of the continuous ideal image field is bandlimited such that $W_{F_I}(\omega_x, \omega_y) = 0$ for $\omega_x > \omega_{xc}$ and $\omega_y > \omega_{yc}$, where $\omega_{xc}$ and $\omega_{yc}$ are cutoff frequencies, the individual spectra of Eq. 4.1-23 will not overlap if the spatial sampling periods are chosen such that $\Delta x < \pi/\omega_{xc}$ and $\Delta y < \pi/\omega_{yc}$. A continuous random field $F_R(x, y)$ may be reconstructed from samples of the random ideal image field by the interpolation formula

\[ F_R(x, y) = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(j\,\Delta x,\, k\,\Delta y)\, R(x - j\,\Delta x,\; y - k\,\Delta y) \tag{4.1-24} \]

where R ( x, y ) is the deterministic interpolation function. The reconstructed field and the ideal image field can be made equivalent in the mean-square sense (5, p. 284), that is,
\[ E\left\{ \left[ F_I(x, y) - F_R(x, y) \right]^2 \right\} = 0 \tag{4.1-25} \]

if the Nyquist sampling criteria are met and if suitable interpolation functions, such as the sinc function or Bessel function of Eqs. 4.1-14 and 4.1-16, are utilized.


FIGURE 4.1-4. Spectra of a sampled noisy image.

The preceding results are directly applicable to the practical problem of sampling a deterministic image field plus additive noise, which is modeled as a random field. Figure 4.1-4 shows the spectrum of a sampled noisy image. This sketch indicates a significant potential problem. The spectrum of the noise may be wider than the ideal image spectrum, and if the noise process is undersampled, its tails will overlap into the passband of the image reconstruction filter, leading to additional noise artifacts. A solution to this problem is to prefilter the noisy image before sampling to reduce the noise bandwidth.
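The effect can be demonstrated with a short one-dimensional sketch (Python with NumPy; the signal content and rates are hypothetical): a component above one-half the sampling frequency aliases into the baseband when the signal is sampled directly, but is largely suppressed when an ideal low-pass presampling filter is applied first.

import numpy as np

fs = 8.0                                   # sampling rate (samples per unit)
t_fine = np.arange(0, 8, 1 / 512.0)        # dense grid standing in for the continuum
signal = np.sin(2 * np.pi * 1.0 * t_fine) + 0.8 * np.sin(2 * np.pi * 6.5 * t_fine)

# Presampling low-pass filter: remove spectral content above fs/2 = 4.
spec = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1 / 512.0)
prefiltered = np.fft.irfft(spec * (freqs <= fs / 2), n=signal.size)

step = int(512 / fs)
raw_samples = signal[::step]               # undersampled: 6.5 aliases to 1.5
filtered_samples = prefiltered[::step]     # aliasing component largely removed

# The alias at 1.5 appears only in the spectrum of the unfiltered samples.
sample_freqs = np.fft.rfftfreq(raw_samples.size, d=1 / fs)
alias_bin = np.argmin(np.abs(sample_freqs - 1.5))
alias_raw = np.abs(np.fft.rfft(raw_samples))[alias_bin]
alias_filtered = np.abs(np.fft.rfft(filtered_samples))[alias_bin]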

4.2. IMAGE SAMPLING SYSTEMS

In a physical image sampling system, the sampling array will be of finite extent, the sampling pulses will be of finite width, and the image may be undersampled. The consequences of nonideal sampling are explored next. As a basis for the discussion, Figure 4.2-1 illustrates a common image scanning system. In operation, a narrow light beam is scanned directly across a positive photographic transparency of an ideal image. The light passing through the transparency is collected by a condenser lens and is directed toward the surface of a photodetector. The electrical output of the photodetector is integrated over the time period during which the light beam strikes a resolution cell. In the analysis it will be assumed that the sampling is noise-free. The results developed in Section 4.1 for


FIGURE 4.2-1. Image scanning system.

sampling noisy images can be combined with the results developed in this section quite readily. Also, it should be noted that the analysis is easily extended to a wide class of physical image sampling systems.

4.2.1. Sampling Pulse Effects

Under the assumptions stated above, the sampled image function is given by
F P ( x, y ) = FI ( x, y )S ( x, y )

(4.2-1)

where the sampling array
\[ S(x, y) = \sum_{j=-J}^{J} \sum_{k=-K}^{K} P(x - j\,\Delta x,\; y - k\,\Delta y) \tag{4.2-2} \]

is composed of (2J + 1)(2K + 1) identical pulses P ( x, y ) arranged in a grid of spacing ∆x, ∆y . The symmetrical limits on the summation are chosen for notational simplicity. The sampling pulses are assumed scaled such that
\[ \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} P(x, y)\, dx\, dy = 1 \tag{4.2-3} \]

For purposes of analysis, the sampling function may be assumed to be generated by a finite array of Dirac delta functions DT ( x, y ) passing through a linear filter with impulse response P ( x, y ). Thus


S ( x, y ) = D T ( x, y ) * P ( x, y )

(4.2-4)

where
\[ D_T(x, y) = \sum_{j=-J}^{J} \sum_{k=-K}^{K} \delta(x - j\,\Delta x,\; y - k\,\Delta y) \tag{4.2-5} \]

Combining Eqs. 4.2-1 and 4.2-2 results in an expression for the sampled image function,
\[ F_P(x, y) = \sum_{j=-J}^{J} \sum_{k=-K}^{K} F_I(j\,\Delta x,\, k\,\Delta y)\, P(x - j\,\Delta x,\; y - k\,\Delta y) \tag{4.2-6} \]

The spectrum of the sampled image function is given by
\[ F_P(\omega_x, \omega_y) = \frac{1}{4\pi^2}\, F_I(\omega_x, \omega_y) * \left[ D_T(\omega_x, \omega_y)\, P(\omega_x, \omega_y) \right] \tag{4.2-7} \]

where P ( ω x, ω y ) is the Fourier transform of P ( x, y ) . The Fourier transform of the truncated sampling array is found to be (5, p. 105)
\[ D_T(\omega_x, \omega_y) = \frac{\sin\{ \omega_x (J + \tfrac{1}{2})\,\Delta x \}}{\sin\{ \omega_x\,\Delta x / 2 \}}\; \frac{\sin\{ \omega_y (K + \tfrac{1}{2})\,\Delta y \}}{\sin\{ \omega_y\,\Delta y / 2 \}} \tag{4.2-8} \]

Figure 4.2-2 depicts D T ( ω x, ω y ) . In the limit as J and K become large, the right-hand side of Eq. 4.2-7 becomes an array of Dirac delta functions.

FIGURE 4.2-2. Truncated sampling train and its Fourier spectrum.


In an image reconstruction system, an image is reconstructed by interpolation of its samples. Ideal interpolation waveforms such as the sinc function of Eq. 4.1-14 or the Bessel function of Eq. 4.1-16 generally extend over the entire image field. If the sampling array is truncated, the reconstructed image will be in error near its boundary because the tails of the interpolation waveforms will be truncated in the vicinity of the boundary (8,9). However, the error is usually negligibly small at distances of about 8 to 10 Nyquist samples or greater from the boundary. The actual numerical samples of an image are obtained by a spatial integration of FS ( x, y ) over some finite resolution cell. In the scanning system of Figure 4.2-1, the integration is inherently performed on the photodetector surface. The image sample value of the resolution cell (j, k) may then be expressed as
\[ F_S(j\,\Delta x,\, k\,\Delta y) = \int_{j\Delta x - A_x}^{j\Delta x + A_x} \int_{k\Delta y - A_y}^{k\Delta y + A_y} F_I(x, y)\, P(x - j\,\Delta x,\; y - k\,\Delta y)\, dx\, dy \tag{4.2-9} \]

where Ax and Ay denote the maximum dimensions of the resolution cell. It is assumed that only one sample pulse exists during the integration time of the detector. If this assumption is not valid, consideration must be given to the difficult problem of sample crosstalk. In the sampling system under discussion, the width of the resolution cell may be larger than the sample spacing. Thus the model provides for sequentially overlapped samples in time. By a simple change of variables, Eq. 4.2-9 may be rewritten as
\[ F_S(j\,\Delta x,\, k\,\Delta y) = \int_{-A_x}^{A_x} \int_{-A_y}^{A_y} F_I(j\,\Delta x - \alpha,\; k\,\Delta y - \beta)\, P(-\alpha, -\beta)\, d\alpha\, d\beta \tag{4.2-10} \]

Because only a single sampling pulse is assumed to occur during the integration period, the limits of Eq. 4.2-10 can be extended infinitely . In this formulation, Eq. 4.2-10 is recognized to be equivalent to a convolution of the ideal continuous image FI ( x, y ) with an impulse response function P ( – x, – y ) with reversed coordinates, followed by sampling over a finite area with Dirac delta functions. Thus, neglecting the effects of the finite size of the sampling array, the model for finite extent pulse sampling becomes
F S ( j ∆x, k ∆y) = [ FI ( x, y ) * P ( – x, – y ) ]δ ( x – j ∆x, y – k ∆y)

(4.2-11)

In most sampling systems, the sampling pulse is symmetric, so that P ( – x, – y ) = P ( x, y ). Equation 4.2-11 provides a simple relation that is useful in assessing the effect of finite extent pulse sampling. If the ideal image is bandlimited and Ax and Ay satisfy the Nyquist criterion, the finite extent of the sample pulse represents an equivalent linear spatial degradation (an image blur) that occurs before ideal sampling. Part 4 considers methods of compensating for this degradation. A finite-extent sampling pulse is not always a detriment, however. Consider the situation in which


the ideal image is insufficiently bandlimited so that it is undersampled. The finite-extent pulse, in effect, provides a low-pass filtering of the ideal image, which, in turn, serves to limit its spatial frequency content, and hence to minimize aliasing error.

4.2.2. Aliasing Effects

To achieve perfect image reconstruction in a sampled imaging system, it is necessary to bandlimit the image to be sampled, spatially sample the image at the Nyquist or higher rate, and properly interpolate the image samples. Sample interpolation is considered in the next section; an analysis is presented here of the effect of undersampling an image. If there is spectral overlap resulting from undersampling, as indicated by the shaded regions in Figure 4.2-3, spurious spatial frequency components will be introduced into the reconstruction. The effect is called an aliasing error (10,11). Aliasing effects in an actual image are shown in Figure 4.2-4. Spatial undersampling of the image creates artificial low-spatial-frequency components in the reconstruction. In the field of optics, aliasing errors are called moiré patterns. From Eq. 4.1-7 the spectrum of a sampled image can be written in the form
\[ F_P(\omega_x, \omega_y) = \frac{1}{\Delta x\, \Delta y} \left[ F_I(\omega_x, \omega_y) + F_Q(\omega_x, \omega_y) \right] \tag{4.2-12} \]



FIGURE 4.2-3. Spectra of undersampled two-dimensional function.


(a) Original image

(b) Sampled image

FIGURE 4.2-4. Example of aliasing error in a sampled image.


where F I ( ω x, ω y ) represents the spectrum of the original image sampled at period ( ∆x, ∆y ) . The term
\[ F_Q(\omega_x, \omega_y) = \frac{1}{\Delta x\, \Delta y} \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(\omega_x - j\,\omega_{xs},\; \omega_y - k\,\omega_{ys}) \tag{4.2-13} \]

for j ≠ 0 and k ≠ 0 describes the spectrum of the higher-order components of the sampled image repeated over spatial frequencies ω xs = 2π ⁄ ∆x and ω ys = 2π ⁄ ∆y. If there were no spectral foldover, optimal interpolation of the sampled image components could be obtained by passing the sampled image through a zonal lowpass filter defined by
K  R ( ω x, ω y ) =   0

for ω x ≤ ω xs ⁄ 2 and ω y ≤ ω ys ⁄ 2 otherwise

(4.2-14a) (4.2-14b)

where K is a scaling constant. Applying this interpolation strategy to an undersampled image yields a reconstructed image field
FR ( x, y ) = FI ( x, y ) + A ( x, y )

(4.2-15)

where
\[ A(x, y) = \frac{1}{4\pi^2} \int_{-\omega_{xs}/2}^{\omega_{xs}/2} \int_{-\omega_{ys}/2}^{\omega_{ys}/2} F_Q(\omega_x, \omega_y)\, \exp\{ i(\omega_x x + \omega_y y) \}\, d\omega_x\, d\omega_y \tag{4.2-16} \]
represents the aliasing error artifact in the reconstructed image. The factor K has absorbed the amplitude scaling factors. Figure 4.2-5 shows the reconstructed image

FIGURE 4.2-5. Reconstructed image spectrum.


FIGURE 4.2-6. Model for analysis of aliasing effect.

spectrum that illustrates the spectral foldover in the zonal low-pass filter passband. The aliasing error component of Eq. 4.2-16 can be reduced substantially by lowpass filtering before sampling to attenuate the spectral foldover. Figure 4.2-6 shows a model for the quantitative analysis of aliasing effects. In this model, the ideal image FI ( x, y ) is assumed to be a sample of a two-dimensional random process with known power-spectral density W FI ( ω x, ω y ) . The ideal image is linearly filtered by a presampling spatial filter with a transfer function H ( ω x, ω y ) . This filter is assumed to be a low-pass type of filter with a smooth attenuation of high spatial frequencies (i.e., not a zonal low-pass filter with a sharp cutoff). The filtered image is then spatially sampled by an ideal Dirac delta function sampler at a resolution ∆x, ∆y. Next, a reconstruction filter interpolates the image samples to produce a replica of the ideal image. From Eq. 1.4-27, the power spectral density at the presampling filter output is found to be
\[ W_{F_O}(\omega_x, \omega_y) = \left| H(\omega_x, \omega_y) \right|^2 W_{F_I}(\omega_x, \omega_y) \tag{4.2-17} \]

and the Fourier spectrum of the sampled image field is
\[ W_{F_P}(\omega_x, \omega_y) = \frac{1}{\Delta x\, \Delta y} \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} W_{F_O}(\omega_x - j\,\omega_{xs},\; \omega_y - k\,\omega_{ys}) \tag{4.2-18} \]

Figure 4.2-7 shows the sampled image power spectral density and the foldover aliasing spectral density from the first sideband with and without presampling low-pass filtering. It is desirable to isolate the undersampling effect from the effect of improper reconstruction. Therefore, assume for this analysis that the reconstruction filter $R(\omega_x, \omega_y)$ is an optimal filter of the form given in Eq. 4.2-14. The energy passing through the reconstruction filter for $j = k = 0$ is then
\[ E_R = \int_{-\omega_{xs}/2}^{\omega_{xs}/2} \int_{-\omega_{ys}/2}^{\omega_{ys}/2} W_{F_I}(\omega_x, \omega_y)\, \left| H(\omega_x, \omega_y) \right|^2 d\omega_x\, d\omega_y \tag{4.2-19} \]


FIGURE 4.2-7. Effect of presampling filtering on a sampled image.

Ideally, the presampling filter should be a low-pass zonal filter with a transfer function identical to that of the reconstruction filter as given by Eq. 4.2-14. In this case, the sampled image energy would assume the maximum value

E_RM = ∫_{-ω_xs/2}^{ω_xs/2} ∫_{-ω_ys/2}^{ω_ys/2} W_FI(ω_x, ω_y) dω_x dω_y        (4.2-20)

Image resolution degradation resulting from the presampling filter may then be measured by the ratio
ℰ_R = (E_RM - E_R) / E_RM        (4.2-21)

The aliasing error in a sampled image system is generally measured in terms of the energy, from higher-order sidebands, that folds over into the passband of the reconstruction filter. Assume, for simplicity, that the sampling rate is sufficient so that the spectral foldover from spectra centered at ( ± j ω xs ⁄ 2, ± k ω ys ⁄ 2 ) is negligible for j ≥ 2 and k ≥ 2 . The total aliasing error energy, as indicated by the doubly crosshatched region of Figure 4.2-7, is then
EA = E O – ER

(4.2-22)

where
E_O = ∫_{-∞}^{∞} ∫_{-∞}^{∞} W_FI(ω_x, ω_y) |H(ω_x, ω_y)|² dω_x dω_y        (4.2-23)


denotes the energy of the output of the presampling filter. The aliasing error is defined as (10)
ℰ_A = E_A / E_O        (4.2-24)

Aliasing error can be reduced by attenuating high spatial frequencies of F_I(x, y) with the presampling filter. However, any attenuation within the passband of the reconstruction filter represents a loss of resolution of the sampled image. As a result, there is a trade-off between sampled image resolution and aliasing error.

Consideration is now given to the aliasing error versus resolution performance of several practical types of presampling filters. Perhaps the simplest means of spatially filtering an image formed by incoherent light is to pass the image through a lens with a restricted aperture. Spatial filtering can then be achieved by controlling the degree of lens misfocus. Figure 11.2-2 is a plot of the optical transfer function of a circular lens as a function of the degree of lens misfocus. Even a perfectly focused lens produces some blurring because of the diffraction limit of its aperture. The transfer function of a diffraction-limited circular lens of diameter d is given by (12, p. 83)

H(ω) = (2/π) [ arccos(ω/ω_0) - (ω/ω_0) (1 - (ω/ω_0)²)^{1/2} ]        for 0 ≤ ω ≤ ω_0        (4.2-25a)
     = 0                                                              for ω > ω_0            (4.2-25b)

where ω 0 = πd ⁄ R and R is the distance from the lens to the focal plane. In Section 4.2.1, it was noted that sampling with a finite-extent sampling pulse is equivalent to ideal sampling of an image that has been passed through a spatial filter whose impulse response is equal to the pulse shape of the sampling pulse with reversed coordinates. Thus the sampling pulse may be utilized to perform presampling filtering. A common pulse shape is the rectangular pulse
P(x, y) = 1/T²        for |x|, |y| ≤ T/2        (4.2-26a)
        = 0           for |x|, |y| > T/2        (4.2-26b)

obtained with an incoherent light imaging system of a scanning microdensitometer. The transfer function for a square scanning spot is


P(ω_x, ω_y) = [ sin(ω_x T/2) / (ω_x T/2) ] [ sin(ω_y T/2) / (ω_y T/2) ]        (4.2-27)

Cathode ray tube displays produce display spots with a two-dimensional Gaussian shape of the form

P(x, y) = (1/(2πσ_w²)) exp{ -(x² + y²)/(2σ_w²) }        (4.2-28)

where σ_w is a measure of the spot spread. The equivalent transfer function of the Gaussian-shaped scanning spot is

P(ω_x, ω_y) = exp{ -(ω_x² + ω_y²) σ_w² / 2 }        (4.2-29)

Examples of the aliasing error-resolution trade-offs for a diffraction-limited aperture, a square sampling spot, and a Gaussian-shaped spot are presented in Figure 4.2-8 as a function of the parameter ω 0. The square pulse width is set at T = 2π ⁄ ω 0, so that the first zero of the sinc function coincides with the lens cutoff frequency. The spread of the Gaussian spot is set at σ w = 2 ⁄ ω 0, corresponding to two standard deviation units in crosssection. In this example, the input image spectrum is modeled as

FIGURE 4.2-8. Aliasing error and resolution error obtained with different types of prefiltering.


W_FI(ω_x, ω_y) = A / [ 1 + (ω/ω_c)^{2m} ]        (4.2-30)

where A is an amplitude constant, m is an integer governing the rate of falloff of the Fourier spectrum, and ω c is the spatial frequency at the half-amplitude point. The curves of Figure 4.2-8 indicate that the Gaussian spot and square spot scanning prefilters provide about the same results, while the diffraction-limited lens yields a somewhat greater loss in resolution for the same aliasing error level. A defocused lens would give even poorer results.
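The trade-off curves of Figure 4.2-8 can be approximated by direct numerical integration of Eqs. 4.2-19 to 4.2-24. The sketch below is a hypothetical illustration, not the author's computation: it evaluates the resolution error and aliasing error of a Gaussian scanning-spot prefilter acting on the spectrum model of Eq. 4.2-30, with grid size, cutoff frequency, and parameter values chosen only for demonstration.

```python
import numpy as np

# Spectrum model of Eq. 4.2-30 (A = 1; m and wc are assumed values).
def W_FI(wx, wy, wc=1.0, m=3):
    w = np.hypot(wx, wy)
    return 1.0 / (1.0 + (w / wc) ** (2 * m))

# Gaussian spot transfer function, Eq. 4.2-29, with sigma_w = 2 / w0 as in the text.
def H_gauss(wx, wy, w0):
    sw = 2.0 / w0
    return np.exp(-(wx**2 + wy**2) * sw**2 / 2.0)

def errors(w0, wc=1.0, span=8.0, n=513):
    # Assume the reconstruction passband is |w| <= w0, i.e., w_xs = w_ys = 2 * w0.
    ws = 2.0 * w0
    w = np.linspace(-span * wc, span * wc, n)
    wx, wy = np.meshgrid(w, w)
    dW = (w[1] - w[0]) ** 2

    WF = W_FI(wx, wy, wc)
    H2 = H_gauss(wx, wy, w0) ** 2
    inband = (np.abs(wx) <= ws / 2) & (np.abs(wy) <= ws / 2)

    E_R = np.sum(WF * H2 * inband) * dW     # Eq. 4.2-19
    E_RM = np.sum(WF * inband) * dW         # Eq. 4.2-20
    E_O = np.sum(WF * H2) * dW              # Eq. 4.2-23 (truncated to the finite grid)
    return (E_RM - E_R) / E_RM, (E_O - E_R) / E_O   # Eqs. 4.2-21 and 4.2-24

for w0 in (1.0, 2.0, 4.0):
    r, a = errors(w0)
    print(f"w0 = {w0:4.1f}   resolution error = {100*r:5.1f}%   aliasing error = {100*a:5.1f}%")
```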

4.3. IMAGE RECONSTRUCTION SYSTEMS

In Section 4.1 the conditions for exact image reconstruction were stated: the original image must be spatially sampled at a rate of at least twice its highest spatial frequency, and the reconstruction filter, or equivalent interpolator, must be designed to pass the spectral component at j = 0, k = 0 without distortion and reject all spectra for which j, k ≠ 0. With physical image reconstruction systems, these conditions are impossible to achieve exactly. Consideration is now given to the effects of using imperfect reconstruction functions.

4.3.1. Implementation Techniques

In most digital image processing systems, electrical image samples are sequentially output from the processor in a normal raster scan fashion. A continuous image is generated from these electrical samples by driving an optical display such as a cathode ray tube (CRT) with the intensity of each point set proportional to the image sample amplitude. The light array on the CRT can then be imaged onto a ground-glass screen for viewing or onto photographic film for recording with a light projection system incorporating an incoherent spatial filter possessing a desired optical transfer function. Optimal transfer functions with a perfectly flat passband over the image spectrum and a sharp cutoff to zero outside the spectrum cannot be physically implemented.

The most common means of image reconstruction is by use of electro-optical techniques. For example, image reconstruction can be performed quite simply by electrically defocusing the writing spot of a CRT display. The drawback of this technique is the difficulty of accurately controlling the spot shape over the image field. In a scanning microdensitometer, image reconstruction is usually accomplished by projecting a rectangularly shaped spot of light onto photographic film. Generally, the spot size is set at the same size as the sample spacing to fill the image field completely. The resulting interpolation is simple to perform, but not optimal. If a small writing spot can be achieved with a CRT display or a projected light display, it is possible approximately to synthesize any desired interpolation by subscanning a resolution cell, as shown in Figure 4.3-1.


FIGURE 4.3-1. Image reconstruction by subscanning.

The following subsections introduce several one- and two-dimensional interpolation functions and discuss their theoretical performance. Chapter 13 presents methods of digitally implementing image reconstruction systems.

FIGURE 4.3-2. One-dimensional interpolation waveforms.


4.3.2. Interpolation Functions

Figure 4.3-2 illustrates several one-dimensional interpolation functions. As stated previously, the sinc function provides an exact reconstruction, but it cannot be physically generated by an incoherent optical filtering system. It is possible to approximate the sinc function by truncating it and then performing subscanning (Figure 4.3-1). The simplest interpolation waveform is the square pulse function, which results in a zero-order interpolation of the samples. It is defined mathematically as
R_0(x) = 1        for -1/2 ≤ x ≤ 1/2        (4.3-1)

and zero otherwise, where for notational simplicity, the sample spacing is assumed to be of unit dimension. A triangle function, defined as
R_1(x) = x + 1        for -1 ≤ x ≤ 0        (4.3-2a)
       = 1 - x        for 0 < x ≤ 1         (4.3-2b)

FIGURE 4.3-3. One-dimensional interpolation.


provides the first-order linear sample interpolation with triangular interpolation waveforms. Figure 4.3-3 illustrates one-dimensional interpolation using sinc, square, and triangle functions. The triangle function may be considered to be the result of convolving a square function with itself. Convolution of the triangle function with the square function yields a bell-shaped interpolation waveform (Figure 4.3-2d). It is defined as

R_2(x) = (1/2)(x + 3/2)²        for -3/2 ≤ x ≤ -1/2        (4.3-3a)
       = 3/4 - x²               for -1/2 < x ≤ 1/2         (4.3-3b)
       = (1/2)(x - 3/2)²        for 1/2 < x ≤ 3/2          (4.3-3c)
This process quickly converges to the Gaussian-shaped waveform of Figure 4.3-2f. Convolving the bell-shaped waveform with the square function results in a thirdorder polynomial function called a cubic B-spline (13,14). It is defined mathematically as
R_3(x) = 2/3 + (1/2)|x|³ - |x|²        for 0 ≤ |x| ≤ 1        (4.3-4a)
       = (1/6)(2 - |x|)³               for 1 < |x| ≤ 2        (4.3-4b)

The cubic B-spline is a particularly attractive candidate for image interpolation because of its properties of continuity and smoothness at the sample points. It can be shown by direct differentiation of Eq. 4.3-4, that R3(x) is continuous in its first and second derivatives at the sample points. As mentioned earlier, the sinc function can be approximated by truncating its tails. Typically, this is done over a four-sample interval. The problem with this approach is that the slope discontinuity at the ends of the waveform leads to amplitude ripples in a reconstructed function. This problem can be eliminated by generating a cubic convolution function (15,16), which forces the slope of the ends of the interpolation to be zero. The cubic convolution interpolation function can be expressed in the following general form:
R_c(x) = A_1|x|³ + B_1|x|² + C_1|x| + D_1        for 0 ≤ |x| ≤ 1        (4.3-5a)
       = A_2|x|³ + B_2|x|² + C_2|x| + D_2        for 1 < |x| ≤ 2        (4.3-5b)


where Ai, Bi, Ci, Di are weighting factors. The weighting factors are determined by satisfying two sets of extraneous conditions:

1. R_c(x) = 1 at x = 0, and R_c(x) = 0 at x = 1, 2.

2. The first-order derivative R'_c(x) is continuous at x = 1, and R'_c(x) = 0 at x = 0 and x = 2.

These conditions result in seven equations for the eight unknowns and lead to the parametric expression

R_c(x) = (a + 2)|x|³ - (a + 3)|x|² + 1        for 0 ≤ |x| ≤ 1        (4.3-6a)
       = a|x|³ - 5a|x|² + 8a|x| - 4a          for 1 < |x| ≤ 2        (4.3-6b)

where a ≡ A2 of Eq. 4.3-5 is the remaining unknown weighting factor. Rifman (15) and Bernstein (16) have set a = – 1, which causes R c ( x ) to have the same slope, - 1, at x = 1 as the sinc function. Keys (17) has proposed setting a = – 1 ⁄ 2 , which provides an interpolation function that approximates the original unsampled image to as high a degree as possible in the sense of a power series expansion. The factor a in Eq. 4.3-6 can be used as a tuning parameter to obtain a best visual interpolation (18,19). Table 4.3-1 defines several orthogonally separable two-dimensional interpolation functions for which R ( x, y ) = R ( x )R ( y ). The separable square function has a square peg shape. The separable triangle function has the shape of a pyramid. Using a triangle interpolation function for one-dimensional interpolation is equivalent to linearly connecting adjacent sample peaks as shown in Figure 4.3-3c. The extension to two dimensions does not hold because, in general, it is not possible to fit a plane to four adjacent samples. One approach, illustrated in Figure 4.3-4a, is to perform a planar fit in a piecewise fashion. In region I of Figure 4.3-4a, points are linearly interpolated in the plane defined by pixels A, B, C, while in region II, interpolation is performed in the plane defined by pixels B, C, D. A computationally simpler method, called bilinear interpolation, is described in Figure 4.3-4b. Bilinear interpolation is performed by linearly interpolating points along separable orthogonal coordinates of the continuous image field. The resultant interpolated surface of Figure 4.3-4b, connecting pixels A, B, C, D, is generally nonplanar. Chapter 13 shows that bilinear interpolation is equivalent to interpolation with a pyramid function.
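A one-dimensional version of the parametric cubic convolution kernel of Eq. 4.3-6 is simple to exercise in code. The sketch below is an illustration rather than an implementation taken from the text; the choice a = -1/2 follows Keys (17), unit sample spacing is assumed, and the two-dimensional separable kernel would simply be R_c(x)R_c(y).

```python
import numpy as np

def cubic_conv_kernel(x, a=-0.5):
    """Parametric cubic convolution function R_c(x) of Eq. 4.3-6."""
    x = np.abs(np.asarray(x, dtype=float))
    r = np.zeros_like(x)
    inner = x <= 1.0
    outer = (x > 1.0) & (x <= 2.0)
    r[inner] = (a + 2.0) * x[inner]**3 - (a + 3.0) * x[inner]**2 + 1.0
    r[outer] = a * x[outer]**3 - 5.0 * a * x[outer]**2 + 8.0 * a * x[outer] - 4.0 * a
    return r

def interpolate_1d(samples, t, a=-0.5):
    """Interpolate unit-spaced samples at fractional positions t (1-D form of Eq. 4.3-7)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    for j, f in enumerate(samples):
        out += f * cubic_conv_kernel(t - j, a)
    return out

samples = np.sin(2 * np.pi * 0.08 * np.arange(16))   # slowly varying test signal
t = np.linspace(2, 13, 45)                            # stay away from the boundaries
approx = interpolate_1d(samples, t)
print(np.max(np.abs(approx - np.sin(2 * np.pi * 0.08 * t))))   # small interpolation error
```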


TABLE 4.3-1. Two-Dimensional Interpolation Functions

Separable sinc:
  R(x, y) = [4/(T_x T_y)] [sin(2πx/T_x)/(2πx/T_x)] [sin(2πy/T_y)/(2πy/T_y)],  with T_x = 2π/ω_xs, T_y = 2π/ω_ys
  ℛ(ω_x, ω_y) = 1 for |ω_x| ≤ ω_xs, |ω_y| ≤ ω_ys;  0 otherwise

Separable square:
  R_0(x, y) = 1/(T_x T_y) for |x| ≤ T_x/2, |y| ≤ T_y/2;  0 otherwise
  ℛ_0(ω_x, ω_y) = [sin(ω_x T_x/2)/(ω_x T_x/2)] [sin(ω_y T_y/2)/(ω_y T_y/2)]

Separable triangle:
  R_1(x, y) = R_0(x, y) * R_0(x, y);   ℛ_1(ω_x, ω_y) = ℛ_0²(ω_x, ω_y)

Separable bell:
  R_2(x, y) = R_0(x, y) * R_1(x, y);   ℛ_2(ω_x, ω_y) = ℛ_0³(ω_x, ω_y)

Separable cubic B-spline:
  R_3(x, y) = R_0(x, y) * R_2(x, y);   ℛ_3(ω_x, ω_y) = ℛ_0⁴(ω_x, ω_y)

Gaussian:
  R(x, y) = [2πσ_w²]⁻¹ exp{ -(x² + y²)/(2σ_w²) }
  ℛ(ω_x, ω_y) = exp{ -(ω_x² + ω_y²)σ_w²/2 }

4.3.3. Effect of Imperfect Reconstruction Filters

The performance of practical image reconstruction systems will now be analyzed. It will be assumed that the input to the image reconstruction system is composed of samples of an ideal image obtained by sampling with a finite array of Dirac samples at the Nyquist rate. From Eq. 4.1-9 the reconstructed image is found to be
F_R(x, y) = Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} F_I(j∆x, k∆y) R(x - j∆x, y - k∆y)        (4.3-7)


FIGURE 4.3-4. Two-dimensional linear interpolation.

where R(x, y) is the two-dimensional interpolation function of the image reconstruction system. Ideally, the reconstructed image would be the exact replica of the ideal image as obtained from Eq. 4.1-9. That is,
F̂_R(x, y) = Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} F_I(j∆x, k∆y) R_I(x - j∆x, y - k∆y)        (4.3-8)

where R I ( x, y ) represents an optimum interpolation function such as given by Eq. 4.1-14 or 4.1-16. The reconstruction error over the bounds of the sampled image is then
E_D(x, y) = Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} F_I(j∆x, k∆y) [ R(x - j∆x, y - k∆y) - R_I(x - j∆x, y - k∆y) ]        (4.3-9)

There are two contributors to the reconstruction error: (1) the physical system interpolation function R(x, y) may differ from the ideal interpolation function RI ( x, y ) , and (2) the finite bounds of the reconstruction, which cause truncation of the interpolation functions at the boundary. In most sampled imaging systems, the boundary reconstruction error is ignored because the error generally becomes negligible at distances of a few samples from the boundary. The utilization of nonideal interpolation functions leads to a potential loss of image resolution and to the introduction of high-spatial-frequency artifacts. The effect of an imperfect reconstruction filter may be analyzed conveniently by examination of the frequency spectrum of a reconstructed image, as derived in Eq. 4.1-11:
F_R(ω_x, ω_y) = (1/(∆x ∆y)) R(ω_x, ω_y) Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} F_I(ω_x - jω_xs, ω_y - kω_ys)        (4.3-10)


Ideally, R ( ω x, ω y ) should select the spectral component for j = 0, k = 0 with uniform attenuation at all spatial frequencies and should reject all other spectral components. An imperfect filter may attenuate the frequency components of the zero-order spectra, causing a loss of image resolution, and may also permit higher-order spectral modes to contribute to the restoration, and therefore introduce distortion in the restoration. Figure 4.3-5 provides a graphic example of the effect of an imperfect image reconstruction filter. A typical cross section of a sampled image is shown in Figure 4.3-5a. With an ideal reconstruction filter employing sinc functions for interpolation, the central image spectrum is extracted and all sidebands are rejected, as shown in Figure 4.3-5c. Figure 4.3-5d is a plot of the transfer function for a zero-order interpolation reconstruction filter in which the reconstructed pixel amplitudes over the pixel sample area are set at the sample value. The resulting spectrum shown in Figure 4.3-5e exhibits distortion from attenuation of the central spectral mode and spurious high-frequency signal components. Following the analysis leading to Eq. 4.2-21, the resolution loss resulting from the use of a nonideal reconstruction function R(x, y) may be specified quantitatively as
ℰ_R = (E_RM - E_R) / E_RM        (4.3-11)

FIGURE 4.3-5. Power spectra for perfect and imperfect reconstruction: (a) sampled image input spectrum; (b) sinc function reconstruction filter transfer function; (c) sinc function interpolator output; (d) zero-order interpolation reconstruction filter transfer function; (e) zero-order interpolator output.


where
E_R = ∫_{-ω_xs/2}^{ω_xs/2} ∫_{-ω_ys/2}^{ω_ys/2} W_FI(ω_x, ω_y) |H(ω_x, ω_y)|² dω_x dω_y        (4.3-12)

represents the actual interpolated image energy in the Nyquist sampling band limits, and
E_RM = ∫_{-ω_xs/2}^{ω_xs/2} ∫_{-ω_ys/2}^{ω_ys/2} W_FI(ω_x, ω_y) dω_x dω_y        (4.3-13)

is the ideal interpolated image energy. The interpolation error attributable to high-spatial-frequency artifacts may be defined as
ℰ_H = E_H / E_T        (4.3-14)

where
E_T = ∫_{-∞}^{∞} ∫_{-∞}^{∞} W_FI(ω_x, ω_y) |H(ω_x, ω_y)|² dω_x dω_y        (4.3-15)

denotes the total energy of the interpolated image and

EH = ET – ER

(4.3-16)

is that portion of the interpolated image energy lying outside the Nyquist band limits. Table 4.3-2 lists the resolution error and interpolation error obtained with several separable two-dimensional interpolation functions. In this example, the power spectral density of the ideal image is assumed to be of the form

W_FI(ω_x, ω_y) = (ω_s/2 - ω)²        for ω ≤ ω_s/2        (4.3-17)

and zero elsewhere. The interpolation error contribution of highest-order components, j 1, j2 > 2 , is assumed negligible. The table indicates that zero-order


TABLE 4.3-2. Interpolation Error and Resolution Error for Various Separable Two-Dimensional Interpolation Functions

Function                    Percent Resolution Error (ℰ_R)    Percent Interpolation Error (ℰ_H)
Sinc                        0.0                               0.0
Square                      26.9                              15.7
Triangle                    44.0                              3.7
Bell                        55.4                              1.1
Cubic B-spline              63.2                              0.3
Gaussian (σ_w = 3T/8)       38.6                              10.3
Gaussian (σ_w = T/2)        54.6                              2.0
Gaussian (σ_w = 5T/8)       66.7                              0.3

interpolation with a square interpolation function results in a significant amount of resolution error. Interpolation error reduces significantly for higher-order convolutional interpolation functions, but at the expense of resolution error.
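The qualitative behavior summarized in Table 4.3-2 can be observed with a small one-dimensional experiment. The sketch below is an illustration under assumed parameters, not a reproduction of the table: it upsamples a low-pass test signal with zero-order (square) and first-order (triangle) interpolation and compares the fraction of energy falling outside the original Nyquist band, the analog of the interpolation error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bandlimited test signal: a sum of low-frequency sinusoids sampled at unit rate.
n = np.arange(256)
signal = sum(rng.normal() * np.cos(2 * np.pi * f * n + rng.uniform(0, 2 * np.pi))
             for f in (0.02, 0.05, 0.09))

def upsample(x, factor, order):
    """Zero-order (order=0) or first-order (order=1) interpolation by an integer factor."""
    grid = np.arange(len(x) * factor) / factor
    if order == 0:
        return x[np.floor(grid).astype(int).clip(0, len(x) - 1)]
    return np.interp(grid, np.arange(len(x)), x)

def out_of_band_fraction(y, factor):
    """Fraction of spectral energy above the original Nyquist frequency (analog of E_H / E_T)."""
    Y = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(y), d=1.0 / factor)   # cycles per original sample
    energy = np.abs(Y) ** 2
    return energy[freqs > 0.5].sum() / energy.sum()

factor = 8
for order, name in ((0, "square (zero-order)"), (1, "triangle (first-order)")):
    y = upsample(signal, factor, order)
    print(f"{name:25s} out-of-band energy fraction = {out_of_band_fraction(y, factor):.4f}")
```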

REFERENCES
1. E. T. Whittaker, "On the Functions Which Are Represented by the Expansions of the Interpolation Theory," Proc. Royal Society of Edinburgh, A35, 1915, 181–194.
2. C. E. Shannon, "Communication in the Presence of Noise," Proc. IRE, 37, 1, January 1949, 10–21.
3. H. J. Landau, "Sampling, Data Transmission, and the Nyquist Rate," Proc. IEEE, 55, 10, October 1967, 1701–1706.
4. J. W. Goodman, Introduction to Fourier Optics, 2nd ed., McGraw-Hill, New York, 1996.
5. A. Papoulis, Systems and Transforms with Applications in Optics, McGraw-Hill, New York, 1966.
6. S. P. Lloyd, "A Sampling Theorem for Stationary (Wide Sense) Stochastic Processes," Trans. American Mathematical Society, 92, 1, July 1959, 1–12.
7. H. S. Shapiro and R. A. Silverman, "Alias-Free Sampling of Random Noise," J. SIAM, 8, 2, June 1960, 225–248.
8. J. L. Brown, Jr., "Bounds for Truncation Error in Sampling Expansions of Band-Limited Signals," IEEE Trans. Information Theory, IT-15, 4, July 1969, 440–444.
9. H. D. Helms and J. B. Thomas, "Truncation Error of Sampling Theory Expansions," Proc. IRE, 50, 2, February 1962, 179–184.
10. J. J. Downing, "Data Sampling and Pulse Amplitude Modulation," in Aerospace Telemetry, H. L. Stiltz, Ed., Prentice Hall, Englewood Cliffs, NJ, 1961.
11. D. G. Childers, "Study and Experimental Investigation on Sampling Rate and Aliasing in Time Division Telemetry Systems," IRE Trans. Space Electronics and Telemetry, SET-8, December 1962, 267–283.
12. E. L. O'Neill, Introduction to Statistical Optics, Addison-Wesley, Reading, MA, 1963.
13. H. S. Hou and H. C. Andrews, "Cubic Splines for Image Interpolation and Digital Filtering," IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-26, 6, December 1978, 508–517.
14. T. N. E. Greville, "Introduction to Spline Functions," in Theory and Applications of Spline Functions, T. N. E. Greville, Ed., Academic Press, New York, 1969.
15. S. S. Rifman, "Digital Rectification of ERTS Multispectral Imagery," Proc. Symposium on Significant Results Obtained from ERTS-1 (NASA SP-327), I, Sec. B, 1973, 1131–1142.
16. R. Bernstein, "Digital Image Processing of Earth Observation Sensor Data," IBM J. Research and Development, 20, 1976, 40–57.
17. R. G. Keys, "Cubic Convolution Interpolation for Digital Image Processing," IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-29, 6, December 1981, 1153–1160.
18. K. W. Simon, "Digital Image Reconstruction and Resampling of Landsat Imagery," Proc. Symposium on Machine Processing of Remotely Sensed Data, Purdue University, Lafayette, IN, IEEE 75, CH 1009-0-C, June 1975, 3A-1–3A-11.
19. S. K. Park and R. A. Schowengerdt, "Image Reconstruction by Parametric Cubic Convolution," Computer Vision, Graphics, and Image Processing, 23, 3, September 1983, 258–272.


5
DISCRETE IMAGE MATHEMATICAL CHARACTERIZATION

Chapter 1 presented a mathematical characterization of continuous image fields. This chapter develops a vector-space algebra formalism for representing discrete image fields from a deterministic and statistical viewpoint. Appendix 1 presents a summary of vector-space algebra concepts.

5.1. VECTOR-SPACE IMAGE REPRESENTATION

In Chapter 1 a generalized continuous image function F(x, y, t) was selected to represent the luminance, tristimulus value, or some other appropriate measure of a physical imaging system. Image sampling techniques, discussed in Chapter 4, indicated means by which a discrete array F(j, k) could be extracted from the continuous image field at some time instant over some rectangular area -J ≤ j ≤ J, -K ≤ k ≤ K. It is often helpful to regard this sampled image array as an N_1 × N_2 element matrix

F = [ F ( n 1, n 2 ) ]

(5.1-1)

for 1 ≤ n i ≤ Ni where the indices of the sampled array are reindexed for consistency with standard vector-space notation. Figure 5.1-1 illustrates the geometric relationship between the Cartesian coordinate system of a continuous image and its array of samples. Each image sample is called a pixel.

FIGURE 5.1-1. Geometric relationship between a continuous image and its array of samples.

For purposes of analysis, it is often convenient to convert the image matrix to vector form by column (or row) scanning F, and then stringing the elements together in a long vector (1). An equivalent scanning operation can be expressed in quantitative form by the use of a N2 × 1 operational vector v n and a N 1 ⋅ N 2 × N 2 matrix N n defined as
v_n = [0 … 0 1 0 … 0]^T        N_n = [0 … 0 I 0 … 0]^T        (5.1-2)

where the unit entry of v_n and the identity submatrix of N_n occupy the nth position (n = 1, 2, …, N_2).

Then the vector representation of the image matrix F is given by the stacking operation

f = Σ_{n=1}^{N_2} N_n F v_n        (5.1-3)

In essence, the vector v n extracts the nth column from F and the matrix N n places this column into the nth segment of the vector f. Thus, f contains the column-


scanned elements of F. The inverse relation of casting the vector f into matrix form is obtained from
F = Σ_{n=1}^{N_2} N_n^T f v_n^T        (5.1-4)

With the matrix-to-vector operator of Eq. 5.1-3 and the vector-to-matrix operator of Eq. 5.1-4, it is now possible easily to convert between vector and matrix representations of a two-dimensional array. The advantages of dealing with images in vector form are a more compact notation and the ability to apply results derived previously for one-dimensional signal processing applications. It should be recognized that Eqs 5.1-3 and 5.1-4 represent more than a lexicographic ordering between an array and a vector; these equations define mathematical operators that may be manipulated analytically. Numerous examples of the applications of the stacking operators are given in subsequent sections.
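In practice the stacking operations of Eqs. 5.1-3 and 5.1-4 amount to column scanning and reshaping; the operator form is used chiefly for analysis. The following sketch, offered only as an illustration, builds the selection vectors v_n and placement matrices N_n explicitly (sized N_1 N_2 × N_1 so that the products are conformable) and checks that they reproduce ordinary column-major reshaping.

```python
import numpy as np

N1, N2 = 3, 4
F = np.arange(N1 * N2).reshape(N1, N2)

def v(n, N2):
    """Operational vector v_n: selects the nth column of F (n is 1-based as in the text)."""
    e = np.zeros((N2, 1)); e[n - 1, 0] = 1.0
    return e

def Nmat(n, N1, N2):
    """Placement matrix N_n: places an N1-vector into the nth segment of the stacked vector."""
    M = np.zeros((N1 * N2, N1))
    M[(n - 1) * N1 : n * N1, :] = np.eye(N1)
    return M

# Eq. 5.1-3: f = sum_n N_n F v_n  (column scanning).
f = sum(Nmat(n, N1, N2) @ F @ v(n, N2) for n in range(1, N2 + 1))

# Eq. 5.1-4: F = sum_n N_n^T f v_n^T  (restacking into matrix form).
F_back = sum(Nmat(n, N1, N2).T @ f @ v(n, N2).T for n in range(1, N2 + 1))

assert np.array_equal(f.ravel(), F.flatten(order="F"))   # same as column-major vec
assert np.array_equal(F_back, F)
```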

5.2. GENERALIZED TWO-DIMENSIONAL LINEAR OPERATOR

A large class of image processing operations are linear in nature; an output image field is formed from linear combinations of pixels of an input image field. Such operations include superposition, convolution, unitary transformation, and discrete linear filtering. Consider the N_1 × N_2 element input image array F(n_1, n_2). A generalized linear operation on this image field results in an M_1 × M_2 output image array P(m_1, m_2) as defined by

P(m_1, m_2) = Σ_{n_1=1}^{N_1} Σ_{n_2=1}^{N_2} F(n_1, n_2) O(n_1, n_2 ; m_1, m_2)        (5.2-1)

where the operator kernel O(n_1, n_2 ; m_1, m_2) represents a weighting constant, which, in general, is a function of both input and output image coordinates (1). For the analysis of linear image processing operations, it is convenient to adopt the vector-space formulation developed in Section 5.1. Thus, let the input image array F(n_1, n_2) be represented as matrix F or alternatively, as a vector f obtained by column scanning F. Similarly, let the output image array P(m_1, m_2) be represented by the matrix P or the column-scanned vector p. For notational simplicity, in the subsequent discussions, the input and output image arrays are assumed to be square and of dimensions N_1 = N_2 = N and M_1 = M_2 = M, respectively. Now, let T denote the M² × N² matrix performing a linear transformation on the N² × 1 input image vector f yielding the M² × 1 output image vector

p = T f        (5.2-2)


The matrix T may be partitioned into M × N submatrices T mn and written as
T = [ T_11  T_12  …  T_1N
      T_21  T_22  …  T_2N
      …     …        …
      T_M1  T_M2  …  T_MN ]        (5.2-3)

From Eq. 5.1-3, it is possible to relate the output image vector p to the input image matrix F by the equation

p = Σ_{n=1}^{N} T N_n F v_n        (5.2-4)

Furthermore, from Eq. 5.1-4, the output image matrix P is related to the input image vector p by

P = Σ_{m=1}^{M} M_m^T p u_m^T        (5.2-5)

Combining the above yields the relation between the input and output image matrices,

P = Σ_{m=1}^{M} Σ_{n=1}^{N} (M_m^T T N_n) F (v_n u_m^T)        (5.2-6)

where it is observed that the operators Mm and N n simply extract the partition T mn from T. Hence

P = Σ_{m=1}^{M} Σ_{n=1}^{N} T_mn F (v_n u_m^T)        (5.2-7)

If the linear transformation is separable such that T may be expressed in the direct product form

T = TC ⊗ TR

(5.2-8)

GENERALIZED TWO-DIMENSIONAL LINEAR OPERATOR

125

FIGURE 5.2-1. Structure of linear operator matrices.

where T R and T C are row and column operators on F, then
T mn = T R ( m, n )T C

(5.2-9)

As a consequence,
P = T_C F Σ_{m=1}^{M} Σ_{n=1}^{N} T_R(m, n) v_n u_m^T = T_C F T_R^T        (5.2-10)

Hence the output image matrix P can be produced by sequential row and column operations. In many image processing applications, the linear transformations operator T is highly structured, and computational simplifications are possible. Special cases of interest are listed below and illustrated in Figure 5.2-1 for the case in which the input and output images are of the same dimension, M = N .


1. Column processing of F:
T = diag [ T C1, T C2, …, T CN ]

(5.2-11)

where T_Cj is the transformation matrix for the jth column.

2. Identical column processing of F:
T = diag [ T C, T C, …, T C ] = T C ⊗ I N

(5.2-12)

3. Row processing of F:
T mn = diag [ T R1 ( m, n ), T R2 ( m, n ), …, T RN ( m, n ) ]

(5.2-13)

where T_Rj is the transformation matrix for the jth row.

4. Identical row processing of F:
T mn = diag [ T R ( m, n ), T R ( m, n ), …, T R ( m, n ) ]

(5.2-14a)

and
T = IN ⊗ TR

(5.2-14b)

5. Identical row and identical column processing of F:
T = TC ⊗ I N + I N ⊗ T R

(5.2-15)

The number of computational operations for each of these cases is tabulated in Table 5.2-1. Equation 5.2-10 indicates that separable two-dimensional linear transforms can be computed by sequential one-dimensional row and column operations on a data array. As indicated by Table 5.2-1, a considerable savings in computation is possible for such transforms: computation by Eq. 5.2-2 in the general case requires M²N² operations; computation by Eq. 5.2-10, when it applies, requires only MN² + M²N operations. Furthermore, F may be stored in a serial memory and fetched line by line. With this technique, however, it is necessary to transpose the result of the column transforms in order to perform the row transforms. References 2 and 3 describe algorithms for line storage matrix transposition.
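The equivalence underlying this savings is easy to verify numerically. The sketch below, a hypothetical illustration, applies a separable operator both as a full Kronecker-product matrix acting on the column-scanned vector and as the sequential row and column operations of Eq. 5.2-10; the factor ordering in the comment reflects NumPy's Kronecker convention with column-major stacking, which may differ from the ordering used in Eq. 5.2-8.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6                                   # small size keeps the N^2 x N^2 matrix manageable
F = rng.normal(size=(N, N))
TC = rng.normal(size=(N, N))            # column operator
TR = rng.normal(size=(N, N))            # row operator

# Separable form, Eq. 5.2-10: P = TC F TR^T (sequential column and row operations).
P = TC @ F @ TR.T

# Equivalent action on the column-scanned vector f.  With NumPy's kron and
# column-major stacking, vec(TC F TR^T) = kron(TR, TC) vec(F).
f = F.flatten(order="F")                # column scanning, Eq. 5.1-3
T = np.kron(TR, TC)
p = T @ f

print(np.allclose(P.flatten(order="F"), p))        # True: the two computations agree
print("general:", N**4, "operations;  separable:", 2 * N**3, "operations")
```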


TABLE 5.2-1. Computational Requirements for Linear Transform Operator

Case                                                  Operations (Multiply and Add)
General                                               N⁴
Column processing                                     N³
Row processing                                        N³
Row and column processing                             2N³ - N²
Separable row and column processing, matrix form      2N³

5.3. IMAGE STATISTICAL CHARACTERIZATION

The statistical descriptors of continuous images presented in Chapter 1 can be applied directly to characterize discrete images. In this section, expressions are developed for the statistical moments of discrete image arrays. Joint probability density models for discrete image fields are described in the following section. Reference 4 provides background information for this subject.

The moments of a discrete image process may be expressed conveniently in vector-space form. The mean value of the discrete image function is a matrix of the form
E { F } = [ E { F ( n 1, n 2 ) } ]

(5.3-1)

If the image array is written as a column-scanned vector, the mean of the image vector is

η_f = E{f} = Σ_{n=1}^{N_2} N_n E{F} v_n        (5.3-2)

The correlation function of the image array is given by
R ( n 1, n 2 ; n 3 , n 4 ) = E { F ( n 1, n 2 )F∗ ( n 3, n 4 ) }

(5.3-3)

where the n i represent points of the image array. Similarly, the covariance function of the image array is
K ( n 1, n 2 ; n 3 , n 4) = E { [ F ( n 1, n 2 ) – E { F ( n 1, n 2 ) } ] [ F∗ ( n 3, n 4 ) – E { F∗ ( n 3, n 4 ) } ] }

(5.3-4)


Finally, the variance function of the image array is obtained directly from the covariance function as

σ²(n_1, n_2) = K(n_1, n_2 ; n_1, n_2)        (5.3-5)

If the image array is represented in vector form, the correlation matrix of f can be written in terms of the correlation of elements of F as

R_f = E{f f*^T} = E{ [ Σ_{m=1}^{N_2} N_m F v_m ] [ Σ_{n=1}^{N_2} v_n^T F*^T N_n^T ] }        (5.3-6a)

or
R_f = Σ_{m=1}^{N_2} Σ_{n=1}^{N_2} N_m E{ F v_m v_n^T F*^T } N_n^T        (5.3-6b)

The term
 T T E  Fv m v n F∗  = R mn  

(5.3-7)

is the N 1 × N1 correlation matrix of the mth and nth columns of F. Hence it is possible to express R f in partitioned form as
R_f = [ R_11      R_12      …  R_1N_2
        R_21      R_22      …  R_2N_2
        …         …            …
        R_N_2,1   R_N_2,2   …  R_N_2,N_2 ]        (5.3-8)

The covariance matrix of f can be found from its correlation matrix and mean vector by the relation
K_f = R_f - η_f η_f*^T        (5.3-9)

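For an ensemble of sample images these moment definitions translate directly into averages over column-scanned vectors. The following sketch is a hypothetical illustration with synthetic data; it estimates the mean vector, correlation matrix, and covariance matrix of Eqs. 5.3-2, 5.3-6, and 5.3-9 and reads the pixel variances off the diagonal of K_f.

```python
import numpy as np

rng = np.random.default_rng(2)
N1, N2, K = 4, 3, 5000                 # image size and ensemble size (assumed values)

# Synthetic ensemble: correlated rows produced by mixing adjacent rows of white noise.
images = rng.normal(size=(K, N1, N2))
images = 0.5 * images + 0.5 * np.roll(images, 1, axis=1)

f = np.stack([im.flatten(order="F") for im in images])    # column-scanned vectors, (K, N1*N2)

eta = f.mean(axis=0)                   # mean vector, Eq. 5.3-2
R = f.T @ f / K                        # correlation matrix R_f = E{f f^T}, Eq. 5.3-6
K_f = R - np.outer(eta, eta)           # covariance matrix, Eq. 5.3-9

# Pixel variances from the diagonal of K_f (cf. Eq. 5.3-10), rearranged into image layout.
V = K_f.diagonal().reshape(N1, N2, order="F")
print(np.allclose(V, f.var(axis=0).reshape(N1, N2, order="F")))   # True
```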
A variance matrix V F of the array F ( n1, n 2 ) is defined as a matrix whose elements represent the variances of the corresponding elements of the array. The elements of this matrix may be extracted directly from the covariance matrix partitions of K f . That is,


V_F(n_1, n_2) = K_{n_2, n_2}(n_1, n_1)        (5.3-10)

If the image matrix F is wide-sense stationary, the correlation function can be expressed as
R ( n 1, n 2 ; n3, n 4 ) = R ( n 1 – n3 , n 2 – n 4 ) = R ( j, k )

(5.3-11)

where j = n 1 – n 3 and k = n 2 – n 4. Correspondingly, the covariance matrix partitions of Eq. 5.3-9 are related by
K_mn = K_k        for m ≥ n          K_mn = K*_k^T        for m < n

… x > T and y > T. Then,
G̃(j_1 ∆S, j_2 ∆S) = ∫_{j_1∆S-T}^{j_1∆S+T} ∫_{j_2∆S-T}^{j_2∆S+T} F̃(α, β) J̃(j_1 ∆S, j_2 ∆S ; α, β) dα dβ        (7.2-5)

Truncation of the impulse response is equivalent to multiplying the impulse response by a window function V(x, y), which is unity for x < T and y < T and zero elsewhere. By the Fourier convolution theorem, the Fourier spectrum of G(x, y) is equivalently convolved with the Fourier transform of V(x, y), which is a two-dimensional sinc function. This distortion of the Fourier spectrum of G(x, y) results in the introduction of high-spatial-frequency artifacts (a Gibbs phenomenon) at spatial frequency multiples of 2π/T. Truncation distortion can be reduced by using a shaped window, such as the Bartlett, Blackman, Hamming, or Hanning windows (3), which smooth the sharp cutoff effects of a rectangular window. This step is especially important for image restoration modeling because ill-conditioning of the superposition operator may lead to severe amplification of the truncation artifacts.

In the next step of the discrete representation, the continuous ideal image array F̃(α, β) is represented by mesh points on a rectangular grid of resolution ∆I and dimension (2K + 1) × (2K + 1). This is not a physical sampling process, but merely an abstract numerical representation whose general term is described by
F̃(k_1 ∆I, k_2 ∆I) = F̃(α, β) δ(α - k_1 ∆I, β - k_2 ∆I)        (7.2-6)

where K iL ≤ k i ≤ K iU , with K iU and K iL denoting the upper and lower index limits. If the ultimate objective is to estimate the continuous ideal image field by processing the physical observation samples, the mesh spacing ∆I should be fine enough to satisfy the Nyquist criterion for the ideal image. That is, if the spectrum of the ideal image is bandlimited and the limits are known, the mesh spacing should be set at the corresponding Nyquist spacing. Ideally, this will permit perfect interpola˜ ˜ tion of the estimated points F ( k 1 ∆I, k 2 ∆I) to reconstruct F ( x, y ). The continuous integration of Eq. 7.2-5 can now be approximated by a discrete summation by employing a quadrature integration formula (4). The physical image samples may then be expressed as
G̃(j_1 ∆S, j_2 ∆S) = Σ_{k_1=K_1L}^{K_1U} Σ_{k_2=K_2L}^{K_2U} F̃(k_1 ∆I, k_2 ∆I) W̃(k_1, k_2) J̃(j_1 ∆S, j_2 ∆S ; k_1 ∆I, k_2 ∆I)        (7.2-7)


where W̃(k_1, k_2) is a weighting coefficient for the particular quadrature formula employed. Usually, a rectangular quadrature formula is used, and the weighting coefficients are unity. In any case, it is notationally convenient to lump the weighting coefficient and the impulse response function together so that

H̃(j_1 ∆S, j_2 ∆S ; k_1 ∆I, k_2 ∆I) = W̃(k_1, k_2) J̃(j_1 ∆S, j_2 ∆S ; k_1 ∆I, k_2 ∆I)        (7.2-8)

Then,
G̃(j_1 ∆S, j_2 ∆S) = Σ_{k_1=K_1L}^{K_1U} Σ_{k_2=K_2L}^{K_2U} F̃(k_1 ∆I, k_2 ∆I) H̃(j_1 ∆S, j_2 ∆S ; k_1 ∆I, k_2 ∆I)        (7.2-9)

Again, it should be noted that H̃ is not spatially discretized; the function is simply evaluated at its appropriate spatial argument. The limits of summation of Eq. 7.2-9 are

K_iL = [ j_i ∆S/∆I - T/∆I ]_N        K_iU = [ j_i ∆S/∆I + T/∆I ]_N        (7.2-10)

where [ · ] N denotes the nearest integer value of the argument. Figure 7.2-1 provides an example relating actual physical sample values ˜ ˜ G ( j1 ∆ S, j2 ∆S) to mesh points F ( k 1 ∆I, k 2 ∆I ) on the ideal image field. In this example, the mesh spacing is twice as large as the physical sample spacing. In the figure,

FIGURE 7.2-1. Relationship of physical image samples to mesh points on an ideal image field for numerical representation of a superposition integral.


FIGURE 7.2-2. Relationship between regions of physical samples and mesh points for numerical representation of a superposition integral.

the values of the impulse response function that are utilized in the summation of Eq. 7.2-9 are represented as dots. An important observation should be made about the discrete model of Eq. 7.2-9 for a sampled superposition integral; the physical area of the ideal image field ˜ F ( x, y ) containing mesh points contributing to physical image samples is larger ˜ than the sample image G ( j 1 ∆S, j2 ∆S) regardless of the relative number of physical samples and mesh points. The dimensions of the two image fields, as shown in Figure 7.2-2, are related by
J ∆S + T = K ∆I

(7.2-11)

to within an accuracy of one sample spacing.

At this point in the discussion, a discrete and finite model for the sampled superposition integral has been obtained in which the physical samples G̃(j_1 ∆S, j_2 ∆S) are related to points F̃(k_1 ∆I, k_2 ∆I) on an ideal image field by a discrete mathematical superposition operation. This discrete superposition is an approximation to continuous superposition because of the truncation of the impulse response function J̃(x, y ; α, β) and quadrature integration. The truncation approximation can, of course, be made arbitrarily small by extending the bounds of definition of the impulse response, but at the expense of large dimensionality. Also, the quadrature integration approximation can be improved by use of complicated formulas of quadrature, but again the price paid is computational complexity. It should be noted, however, that discrete superposition is a perfect approximation to continuous superposition if the spatial functions of Eq. 7.2-1 are all bandlimited and the physical


sampling and numerical representation periods are selected to be the corresponding Nyquist period (5). It is often convenient to reformulate Eq. 7.2-9 into vector-space form. Toward ˜ ˜ this end, the arrays G and F are reindexed to M × M and N × N arrays, respectively, such that all indices are positive. Let
F(n_1 ∆I, n_2 ∆I) = F̃(k_1 ∆I, k_2 ∆I)        (7.2-12a)

where n i = k i + K + 1 and let
G(m_1 ∆S, m_2 ∆S) = G̃(j_1 ∆S, j_2 ∆S)        (7.2-12b)

where m i = j i + J + 1 . Also, let the impulse response be redefined such that
H(m_1 ∆S, m_2 ∆S ; n_1 ∆I, n_2 ∆I) = H̃(j_1 ∆S, j_2 ∆S ; k_1 ∆I, k_2 ∆I)        (7.2-12c)

Figure 7.2-3 illustrates the geometrical relationship between these functions. The discrete superposition relationship of Eq. 7.2-9 for the shifted arrays becomes
G(m_1 ∆S, m_2 ∆S) = Σ_{n_1=N_1L}^{N_1U} Σ_{n_2=N_2L}^{N_2U} F(n_1 ∆I, n_2 ∆I) H(m_1 ∆S, m_2 ∆S ; n_1 ∆I, n_2 ∆I)        (7.2-13)

for 1 ≤ m_i ≤ M, where

N_iL = [ m_i ∆S/∆I ]_N        N_iU = [ m_i ∆S/∆I + 2T/∆I ]_N

Following the techniques outlined in Chapter 5, the vectors g and f may be formed by column scanning the matrices G and F to obtain

g = B f        (7.2-14)

where B is an M² × N² matrix of the form

B = [ B_{1,1}  B_{1,2}  …  B_{1,L}  0  …  0
      0        B_{2,2}  …           …     0
      …                  …
      0        …        0  B_{M,N-L+1}  …  B_{M,N} ]        (7.2-15)


FIGURE 7.2-3. Sampled image arrays.

The general term of B is defined as
B_{m_2, n_2}(m_1, n_1) = H(m_1 ∆S, m_2 ∆S ; n_1 ∆I, n_2 ∆I)        (7.2-16)

for 1 ≤ m i ≤ M and m i ≤ n i ≤ m i + L – 1 where L = [ 2T ⁄ ∆I ] N represents the nearest odd integer dimension of the impulse response in resolution units ∆I . For descriptional simplicity, B is called the blur matrix of the superposition integral. If the impulse response function is translation invariant such that
H ( m 1 ∆S, m 2 ∆S ; n 1 ∆I, n 2 ∆I ) = H ( m 1 ∆S – n 1 ∆I, m2 ∆S – n 2 ∆I )

(7.2-17)

then the discrete superposition operation of Eq. 7.2-13 becomes a discrete convolution operation of the form
G(m_1 ∆S, m_2 ∆S) = Σ_{n_1=N_1L}^{N_1U} Σ_{n_2=N_2L}^{N_2U} F(n_1 ∆I, n_2 ∆I) H(m_1 ∆S - n_1 ∆I, m_2 ∆S - n_2 ∆I)        (7.2-18)

If the physical sample and quadrature mesh spacings are equal, the general term of the blur matrix assumes the form

B_{m_2, n_2}(m_1, n_1) = H(m_1 - n_1 + L, m_2 - n_2 + L)        (7.2-19)


FIGURE 7.2-4. Sampled infinite area convolution operators: (a) general impulse array, M = 2, N = 4, L = 3; (b) Gaussian-shaped impulse array, M = 8, N = 16, L = 9.

In Eq. 7.2-19, the mesh spacing variable ∆I is understood. In addition,
B_{m_2, n_2} = B_{m_2+1, n_2+1}        (7.2-20)

Consequently, the rows of B are shifted versions of the first row. The operator B then becomes a sampled infinite area convolution operator, and the series form representation of Eq. 7.2-19 reduces to

G(m_1 ∆S, m_2 ∆S) = Σ_{n_1=m_1}^{m_1+L-1} Σ_{n_2=m_2}^{m_2+L-1} F(n_1, n_2) H(m_1 - n_1 + L, m_2 - n_2 + L)        (7.2-21)

where the sampling spacing is understood. Figure 7.2-4a is a notational example of the sampled image convolution operator for a 4 × 4 (N = 4) data array, a 2 × 2 (M = 2) filtered data array, and a 3 × 3 (L = 3) impulse response array. An extension to larger dimension is shown in Figure 7.2-4b for M = 8, N = 16, L = 9 and a Gaussian-shaped impulse response. When the impulse response is spatially invariant and orthogonally separable,
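For the spatially invariant, equal-spacing case of Eq. 7.2-19 the blur matrix can be assembled directly and checked against a straightforward evaluation of the series of Eq. 7.2-21. The sketch below is an illustrative construction with small, arbitrary array sizes; it does not reproduce the figure.

```python
import numpy as np

L, M = 3, 4
N = M + L - 1                                # ideal image dimension (cf. Fig. 7.2-4 sizing)
rng = np.random.default_rng(3)
H = rng.normal(size=(L, L))                  # impulse response array
F = rng.normal(size=(N, N))                  # ideal image mesh values

# Direct evaluation of Eq. 7.2-21 (indices shifted to 0-based).
G = np.zeros((M, M))
for m1 in range(M):
    for m2 in range(M):
        for n1 in range(m1, m1 + L):
            for n2 in range(m2, m2 + L):
                G[m1, m2] += F[n1, n2] * H[m1 - n1 + L - 1, m2 - n2 + L - 1]

# Blur matrix form g = B f, Eq. 7.2-14, with the general term of Eq. 7.2-19.
B = np.zeros((M * M, N * N))
for m1 in range(M):
    for m2 in range(M):
        for n1 in range(m1, m1 + L):
            for n2 in range(m2, m2 + L):
                B[m2 * M + m1, n2 * N + n1] = H[m1 - n1 + L - 1, m2 - n2 + L - 1]

f = F.flatten(order="F")                     # column scanning
g = B @ f
print(np.allclose(g.reshape(M, M, order="F"), G))    # True
```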
B = BC ⊗ BR

(7.2-22)

where B R and B C are M × N matrices of the form

B_R = [ h_R(L)  h_R(L-1)  …  h_R(1)  0       …  0
        0       h_R(L)    …          h_R(1)  …  0
        …                 …                     …
        0       …         0  h_R(L)  …          h_R(1) ]        (7.2-23)

The two-dimensional convolution operation then reduces to sequential row and column convolutions of the matrix form of the image array. Thus
G = B_C F B_R^T        (7.2-24)

The superposition or convolution operator expressed in vector form requires M²L² operations if the zero multiplications of B are avoided. A separable convolution operator can be computed in matrix form with only ML(M + N) operations.
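The separable computation of Eq. 7.2-24 amounts to one-dimensional convolutions applied first to the columns and then to the rows. The fragment below is an illustrative sketch under assumed sizes; it checks the separable computation against a direct two-dimensional evaluation for a separable, Gaussian-like impulse response.

```python
import numpy as np

L, M = 5, 8
N = M + L - 1
h = np.exp(-0.5 * ((np.arange(L) - (L - 1) / 2) / 1.2) ** 2)   # 1-D impulse response
rng = np.random.default_rng(4)
F = rng.normal(size=(N, N))

# Direct two-dimensional form of Eq. 7.2-21 with H(j, k) = h(j) h(k).
H = np.outer(h, h)
G_direct = np.zeros((M, M))
for m1 in range(M):
    for m2 in range(M):
        G_direct[m1, m2] = np.sum(F[m1:m1 + L, m2:m2 + L] * H[::-1, ::-1])

# Separable form, Eq. 7.2-24: column convolutions followed by row convolutions.
cols = np.array([np.convolve(F[:, j], h, mode="valid") for j in range(N)]).T    # (M, N)
G_sep = np.array([np.convolve(cols[i, :], h, mode="valid") for i in range(M)])  # (M, M)

print(np.allclose(G_direct, G_sep))   # True: M*L*(M+N) multiplies instead of M^2 L^2
```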

7.3. CIRCULANT SUPERPOSITION AND CONVOLUTION

In circulant superposition (2), the input data, the processed output, and the impulse response arrays are all assumed spatially periodic over a common period. To unify the presentation, these arrays will be defined in terms of the spatially limited arrays considered previously. First, let the N × N data array F(n_1, n_2) be embedded in the upper left corner of a J × J array (J > N) of zeros, giving

F_E(n_1, n_2) = F(n_1, n_2)        for 1 ≤ n_i ≤ N            (7.3-1a)
             = 0                   for N + 1 ≤ n_i ≤ J        (7.3-1b)

In a similar manner, an extended impulse response array is created by embedding the spatially limited impulse array in a J × J matrix of zeros. Thus, let
H_E(l_1, l_2 ; m_1, m_2) = H(l_1, l_2 ; m_1, m_2)        for 1 ≤ l_i ≤ L            (7.3-2a)
                         = 0                             for L + 1 ≤ l_i ≤ J        (7.3-2b)


Periodic arrays F E ( n 1, n 2 ) and H E ( l 1, l 2 ; m 1, m 2 ) are now formed by replicating the extended arrays over the spatial period J. Then, the circulant superposition of these functions is defined as
K_E(m_1, m_2) = Σ_{n_1=1}^{J} Σ_{n_2=1}^{J} F_E(n_1, n_2) H_E(m_1 - n_1 + 1, m_2 - n_2 + 1 ; m_1, m_2)        (7.3-3)

Similarity of this equation with Eq. 7.1-6 describing finite-area superposition is evident. In fact, if J is chosen such that J = N + L - 1, the terms F_E(n_1, n_2) = F(n_1, n_2) for 1 ≤ n_i ≤ N. The similarity of the circulant superposition operation and the sampled image superposition operation should also be noted. These relations become clearer in the vector-space representation of the circulant superposition operation.

Let the arrays F_E and K_E be expressed in vector form as the J² × 1 vectors f_E and k_E, respectively. Then, the circulant superposition operator can be written as

k_E = C f_E        (7.3-4)

where C is a J² × J² matrix containing elements of the array H_E. The circulant superposition operator can then be conveniently expressed in terms of J × J submatrices C_mn as given by

C = [ C_{1,1}    0           …  0    C_{1,J-L+2}   …  C_{1,J}
      C_{2,1}    C_{2,2}     0  …    0             …  0
      …          …
      C_{L,1}    C_{L,2}
      0          C_{L+1,2}                            C_{L-1,J}
      …          …                                    …
      0          0           …  0    C_{J,J-L+1}   …  C_{J,J} ]        (7.3-5)

where
C_{m_2, n_2}(m_1, n_1) = H_E(k_1, k_2 ; m_1, m_2)        (7.3-6)


FIGURE 7.3-1. Circulant convolution operators: (a) general impulse array, J = 4, L = 3; (b) Gaussian-shaped impulse array, J = 16, L = 9.


for 1 ≤ n i ≤ J and 1 ≤ m i ≤ J with k i = ( m i – n i + 1 ) modulo J and HE(0, 0) = 0. It should be noted that each row and column of C contains L nonzero submatrices. If the impulse response array is spatially invariant, then
C_{m_2, n_2} = C_{m_2+1, n_2+1}        (7.3-7)

and the submatrices of the rows (columns) can be obtained by a circular shift of the first row (column). Figure 7.3-la illustrates the circulant convolution operator for 16 × 16 (J = 4) data and filtered data arrays and for a 3 × 3 (L = 3) impulse response array. In Figure 7.3-lb, the operator is shown for J = 16 and L = 9 with a Gaussianshaped impulse response. Finally, when the impulse response is spatially invariant and orthogonally separable,
C = CC ⊗ CR

(7.3-8)

where C_R and C_C are J × J circulant matrices of the form

C_R = [ h_R(1)   0         …  0       h_R(L)    …  h_R(2)
        h_R(2)   h_R(1)    0  …       0         …  h_R(3)
        …        …                                  …
        h_R(L)   h_R(L-1)                           0
        0        h_R(L)                             …
        …        …
        0        0         …  h_R(L)  h_R(L-1)  …  h_R(1) ]        (7.3-9)

Two-dimensional circulant convolution may then be computed as

K_E = C_C F_E C_R^T        (7.3-10)

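A separable circulant convolution is easy to exercise numerically. The sketch below is an illustration with assumed sizes: it builds the circulant operator C_R of Eq. 7.3-9 from a one-dimensional impulse response and confirms that K_E = C_C F_E C_R^T reproduces a periodic (wrap-around) convolution of the zero-embedded data.

```python
import numpy as np

J, L = 8, 3
rng = np.random.default_rng(5)
h = rng.normal(size=L)                        # 1-D impulse response (h_R = h_C here)
N = J - L + 1
F = rng.normal(size=(N, N))

# Extended (zero-embedded) data array, Eq. 7.3-1.
FE = np.zeros((J, J))
FE[:N, :N] = F

# Circulant operator of Eq. 7.3-9: every column is a circular shift of
# the first column [h(1), ..., h(L), 0, ..., 0]^T.
c = np.zeros(J)
c[:L] = h
CR = np.array([[c[(i - j) % J] for j in range(J)] for i in range(J)])
CC = CR.copy()

KE = CC @ FE @ CR.T                           # Eq. 7.3-10

# Direct periodic (circulant) convolution for comparison, cf. Eq. 7.3-3.
KE_direct = np.zeros((J, J))
for m1 in range(J):
    for m2 in range(J):
        for n1 in range(J):
            for n2 in range(J):
                KE_direct[m1, m2] += FE[n1, n2] * c[(m1 - n1) % J] * c[(m2 - n2) % J]

print(np.allclose(KE, KE_direct))             # True
```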
7.4. SUPERPOSITION AND CONVOLUTION OPERATOR RELATIONSHIPS

The elements of the finite-area superposition operator D and the elements of the sampled image superposition operator B can be extracted from the circulant superposition operator C by use of selection matrices defined as (2)

S1_J^(K) = [ I_K  0 ]            (7.4-1a)

S2_J^(K) = [ 0_A  I_K  0 ]        (7.4-1b)

where S1_J^(K) and S2_J^(K) are K × J matrices, I_K is a K × K identity matrix, and 0_A is a K × (L - 1) zero matrix. For future reference, it should be noted that the generalized inverses of S1 and S2 and their transposes are
[ S1_J^(K) ]^- = [ S1_J^(K) ]^T        (7.4-2a)

[ [ S1_J^(K) ]^T ]^- = S1_J^(K)        (7.4-2b)

[ S2_J^(K) ]^- = [ S2_J^(K) ]^T        (7.4-2c)

[ [ S2_J^(K) ]^T ]^- = S2_J^(K)        (7.4-2d)

Examination of the structure of the various superposition operators indicates that
D = [ S1_J^(M) ⊗ S1_J^(M) ] C [ S1_J^(N) ⊗ S1_J^(N) ]^T        (7.4-3a)

B = [ S2_J^(M) ⊗ S2_J^(M) ] C [ S1_J^(N) ⊗ S1_J^(N) ]^T        (7.4-3b)

That is, the matrix D is obtained by extracting the first M rows and N columns of submatrices C_mn of C. The first M rows and N columns of each submatrix are also extracted. A similar explanation holds for the extraction of B from C. In Figure 7.3-1, the elements of C to be extracted to form D and B are indicated by boxes. From the definition of the extended input data array of Eq. 7.3-1, it is obvious that the spatially limited input data vector f can be obtained from the extended data vector f_E by the selection operation

f = [ S1_J^(N) ⊗ S1_J^(N) ] f_E        (7.4-4a)

and furthermore,
( N) ( N) T

fE = [ S1 J

⊗ S1 J ] f

(7.4-4b)


It can also be shown that the output vector for finite-area superposition can be obtained from the output vector for circulant superposition by the selection operation

q = [ S1_J^(M) ⊗ S1_J^(M) ] k_E        (7.4-5a)

The inverse relationship also exists in the form

k_E = [ S1_J^(M) ⊗ S1_J^(M) ]^T q        (7.4-5b)

For sampled image superposition
(M) (M)

g = [ S2 J

⊗ S2 J

]k E

(7.4-6)

but it is not possible to obtain kE from g because of the underdeterminacy of the sampled image superposition operator. Expressing both q and kE of Eq. 7.4-5a in matrix form leads to

Q = Σ_{m=1}^{M} Σ_{n=1}^{J} M_m^T [ S1_J^(M) ⊗ S1_J^(M) ] N_n K_E v_n u_m^T        (7.4-7)

As a result of the separability of the selection operator, Eq. 7.4-7 reduces to
Q = [ S1_J^(M) ] K_E [ S1_J^(M) ]^T        (7.4-8)

Similarly, for Eq. 7.4-6 describing sampled infinite-area superposition,

FIGURE 7.4-1. Location of elements of processed data Q and G from KE.

G = [ S2_J^(M) ] K_E [ S2_J^(M) ]^T        (7.4-9)

Figure 7.4-1 illustrates the locations of the elements of G and Q extracted from KE for finite-area and sampled infinite-area superposition. In summary, it has been shown that the output data vectors for either finite-area or sampled image superposition can be obtained by a simple selection operation on the output data vector of circulant superposition. Computational advantages that can be realized from this result are considered in Chapter 9.
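In practical terms, the selection relationships mean that a finite-area or sampled-image result can be computed by zero-embedding the data to the period J, performing the circulant (periodic) convolution, and keeping the appropriate block of K_E. The sketch below is an illustration under assumed sizes; the circulant convolution is computed through the DFT convolution property as a shortcut, and the selected regions are compared against scipy's 'full' and 'valid' linear convolutions.

```python
import numpy as np
from scipy.signal import convolve2d

N, L = 6, 3
M = N - L + 1                 # sampled-image output size (N = M + L - 1, cf. Fig. 7.2-4)
J = N + L - 1                 # circulant period chosen so that no wrap-around error occurs
rng = np.random.default_rng(6)
F = rng.normal(size=(N, N))
H = rng.normal(size=(L, L))

# Zero-embedded arrays of Eqs. 7.3-1 and 7.3-2.
FE = np.zeros((J, J)); FE[:N, :N] = F
HE = np.zeros((J, J)); HE[:L, :L] = H

# Circulant (periodic) convolution over the period J.
KE = np.real(np.fft.ifft2(np.fft.fft2(FE) * np.fft.fft2(HE)))

# Finite-area result Q: with J = N + L - 1 the selection of Eq. 7.4-5a keeps all of KE.
Q = KE
# Sampled-image result G: the selection of Eq. 7.4-6 / Fig. 7.4-1 skips the first L - 1 rows/columns.
G = KE[L - 1:L - 1 + M, L - 1:L - 1 + M]

print(np.allclose(Q, convolve2d(F, H, mode="full")))    # True
print(np.allclose(G, convolve2d(F, H, mode="valid")))   # True
```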

REFERENCES
1. J. F. Abramatic and O. D. Faugeras, "Design of Two-Dimensional FIR Filters from Small Generating Kernels," Proc. IEEE Conference on Pattern Recognition and Image Processing, Chicago, May 1978.
2. W. K. Pratt, "Vector Formulation of Two Dimensional Signal Processing Operations," Computer Graphics and Image Processing, 4, 1, March 1975, 1–24.
3. A. V. Oppenheim and R. W. Schafer, Digital Signal Processing, Prentice Hall, Englewood Cliffs, NJ, 1975.
4. T. R. McCalla, Introduction to Numerical Methods and FORTRAN Programming, Wiley, New York, 1967.
5. A. Papoulis, Systems and Transforms with Applications in Optics, 2nd ed., McGraw-Hill, New York, 1981.


8
UNITARY TRANSFORMS

Two-dimensional unitary transforms have found two major applications in image processing. Transforms have been utilized to extract features from images. For example, with the Fourier transform, the average value or dc term is proportional to the average image amplitude, and the high-frequency terms (ac term) give an indication of the amplitude and orientation of edges within an image. Dimensionality reduction in computation is a second image processing application. Stated simply, those transform coefficients that are small may be excluded from processing operations, such as filtering, without much loss in processing accuracy. Another application in the field of image coding is transform image coding, in which a bandwidth reduction is achieved by discarding or grossly quantizing low-magnitude transform coefficients. In this chapter we consider the properties of unitary transforms commonly used in image processing.

8.1. GENERAL UNITARY TRANSFORMS

A unitary transform is a specific type of linear transformation in which the basic linear operation of Eq. 5.4-1 is exactly invertible and the operator kernel satisfies certain orthogonality conditions (1,2). The forward unitary transform of the N_1 × N_2 image array F(n_1, n_2) results in an N_1 × N_2 transformed image array as defined by

F(m_1, m_2) = Σ_{n_1=1}^{N_1} Σ_{n_2=1}^{N_2} F(n_1, n_2) A(n_1, n_2 ; m_1, m_2)        (8.1-1)


where A ( n1, n 2 ; m1 , m 2 ) represents the forward transform kernel. A reverse or inverse transformation provides a mapping from the transform domain to the image space as given by
F(n_1, n_2) = Σ_{m_1=1}^{N_1} Σ_{m_2=1}^{N_2} F(m_1, m_2) B(n_1, n_2 ; m_1, m_2)        (8.1-2)

where B ( n1, n 2 ; m 1, m2 ) denotes the inverse transform kernel. The transformation is unitary if the following orthonormality conditions are met:

∑ ∑ A ( n1, n2 ; m1, m2 )A∗ ( j1, j2 ; m1, m2 ) = δ ( n1 – j1, n 2 – j2 ) m1 m2

(8.1-3a) (8.1-3b) (8.1-3c) (8.1-3c)

∑ ∑ B ( n1, n 2 ; m1, m2 )B∗ ( j1, j2 ; m1, m2 ) m1 m 2

= δ ( n 1 – j 1, n 2 – j2 )

∑ ∑ A ( n1, n2 ; n1 n2 n1 n2

m 1, m 2 )A∗ ( n 1, n 2 ; k 1, k 2 ) = δ ( m 1 – k 1, m 2 – k 2 ) δ ( m 1 – k 1, m 2 – k 2 )

∑ ∑ B ( n1, n 2 ; m1, m2 )B∗ ( n1, n2 ; k1, k2 ) =

The transformation is said to be separable if its kernels can be written in the form
A(n_1, n_2 ; m_1, m_2) = A_C(n_1, m_1) A_R(n_2, m_2)        (8.1-4a)

B(n_1, n_2 ; m_1, m_2) = B_C(n_1, m_1) B_R(n_2, m_2)        (8.1-4b)

where the kernel subscripts indicate row and column one-dimensional transform operations. A separable two-dimensional unitary transform can be computed in two steps. First, a one-dimensional transform is taken along each column of the image, yielding
P(m_1, n_2) = Σ_{n_1=1}^{N_1} F(n_1, n_2) A_C(n_1, m_1)        (8.1-5)
Next, a second one-dimensional unitary transform is taken along each row of P ( m1, n 2 ), giving
F(m_1, m_2) = Σ_{n_2=1}^{N_2} P(m_1, n_2) A_R(n_2, m_2)        (8.1-6)

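This two-pass computation is the standard way separable transforms are implemented. The sketch below is an illustration only, using the unitary DFT as the one-dimensional transform (the 1/√N scaling is an assumption stated in the comments); it performs the column pass followed by the row pass and then inverts with the conjugate-transpose kernels to check unitarity.

```python
import numpy as np

N = 8
# One-dimensional unitary DFT matrix: A[u, j] = exp(-2*pi*i*u*j/N) / sqrt(N).
idx = np.arange(N)
A = np.exp(-2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)

rng = np.random.default_rng(7)
F = rng.normal(size=(N, N))

# Column pass (Eq. 8.1-5) followed by row pass (Eq. 8.1-6).
P = A @ F                        # transform along each column
Ft = P @ A.T                     # transform along each row

# Unitarity: the inverse transform uses the conjugate-transpose kernels.
F_back = np.conj(A).T @ Ft @ np.conj(A)
print(np.allclose(F_back, F))    # True
```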

Unitary transforms can conveniently be expressed in vector-space form (3). Let F and f denote the matrix and vector representations of an image array, and let F̃ and f̃ denote the matrix and vector forms of the transformed image. Then, the two-dimensional unitary transform written in vector form is given by

f̃ = A f        (8.1-7)

where A is the forward transformation matrix. The reverse transform is

f = B f̃        (8.1-8)

where B represents the inverse transformation matrix. It is obvious then that
B = A⁻¹        (8.1-9)

For a unitary transformation, the matrix inverse is given by
A⁻¹ = A*^T        (8.1-10)

and A is said to be a unitary matrix. A real unitary matrix is called an orthogonal matrix. For such a matrix,
A⁻¹ = A^T        (8.1-11)

If the transform kernels are separable such that
A = AC ⊗ AR

(8.1-12)

where A R and A C are row and column unitary transform matrices, then the transformed image matrix can be obtained from the image matrix by
F̃ = A_C F A_R^T        (8.1-13a)

The inverse transformation is given by
F = B_C F̃ B_R^T        (8.1-13b)

where B_C = A_C⁻¹ and B_R = A_R⁻¹. Separable unitary transforms can also be expressed in a hybrid series–vector-space form as a sum of vector outer products. Let a_C(n_1) and a_R(n_2) represent rows n_1 and n_2 of the unitary matrices A_C and A_R, respectively. Then, it is easily verified that
F̃ = Σ_{n_1=1}^{N_1} Σ_{n_2=1}^{N_2} F(n_1, n_2) a_C(n_1) a_R^T(n_2)        (8.1-14a)

Similarly,

F = Σ_{m_1=1}^{N_1} Σ_{m_2=1}^{N_2} F̃(m_1, m_2) b_C(m_1) b_R^T(m_2)        (8.1-14b)

where b C ( m 1 ) and b R ( m 2 ) denote rows m1 and m2 of the unitary matrices BC and BR, respectively. The vector outer products of Eq. 8.1-14 form a series of matrices, called basis matrices, that provide matrix decompositions of the image matrix F or its unitary transformation F. There are several ways in which a unitary transformation may be viewed. An image transformation can be interpreted as a decomposition of the image data into a generalized two-dimensional spectrum (4). Each spectral component in the transform domain corresponds to the amount of energy of the spectral function within the original image. In this context, the concept of frequency may now be generalized to include transformations by functions other than sine and cosine waveforms. This type of generalized spectral analysis is useful in the investigation of specific decompositions that are best suited for particular classes of images. Another way to visualize an image transformation is to consider the transformation as a multidimensional rotation of coordinates. One of the major properties of a unitary transformation is that measure is preserved. For example, the mean-square difference between two images is equal to the mean-square difference between the unitary transforms of the images. A third approach to the visualization of image transformation is to consider Eq. 8.1-2 as a means of synthesizing an image with a set of two-dimensional mathematical functions B ( n1, n 2 ; m 1, m 2 ) for a fixed transform domain coordinate ( m 1, m2 ) . In this interpretation, the kernel B ( n 1, n 2 ; m 1, m 2 ) is called a two-dimensional basis function and the transform coefficient F ( m1, m 2 ) is the amplitude of the basis function required in the synthesis of the image. In the remainder of this chapter, to simplify the analysis of two-dimensional unitary transforms, all image arrays are considered square of dimension N. Furthermore, when expressing transformation operations in series form, as in Eqs. 8.1-1 and 8.1-2, the indices are renumbered and renamed. Thus the input image array is denoted by F(j, k) for j, k = 0, 1, 2,..., N - 1, and the transformed image array is represented by F(u, v) for u, v = 0, 1, 2,..., N - 1. With these definitions, the forward unitary transform becomes

FOURIER TRANSFORM
N–1 N–1

189

F ( u, v ) =

j=0 k=0

∑ ∑ F ( j, k )A ( j, k ; u, v )

(8.1-15a)

and the inverse transform is
N–1 N–1

F ( j, k ) =

u=0 v=0

∑ ∑ F ( u, v )B ( j, k ; u, v )

(8.1-15b)

8.2. FOURIER TRANSFORM The discrete two-dimensional Fourier transform of an image array is defined in series form as (5–10)
1 F ( u, v ) = --N
N–1 N–1 j=0

∑ ∑ F ( j, k ) exp  ----------- ( uj + vk )   N  k=0  – 2πi



(8.2-1a)

where i =

– 1 , and the discrete inverse transform is given by 1 F ( j, k ) = --N
N–1 N–1

u=0 v=0

∑ ∑ F ( u, v ) exp  -------- ( uj + vk )   N 

 2πi



(8.2-1b)

The indices (u, v) are called the spatial frequencies of the transformation in analogy with the continuous Fourier transform. It should be noted that Eq. 8.2-1 is not universally accepted by all authors; some prefer to place all scaling constants in the inverse transform equation, while still others employ a reversal in the sign of the kernels. Because the transform kernels are separable and symmetric, the two dimensional transforms can be computed as sequential row and column one-dimensional transforms. The basis functions of the transform are complex exponentials that may be decomposed into sine and cosine components. The resulting Fourier transform pairs then become
 – 2πi   2π   2π  A ( j, k ; u, v ) = exp  ----------- ( uj + vk )  = cos  ----- ( uj + vk )  – i sin  ----- ( uj + vk )   N  N  N   2πi   2π   2π  B ( j, k ; u, v ) = exp  ------- ( uj + vk )  = cos  ----- ( uj + vk )  + i sin  ----- ( uj + vk )  N N N      

(8.2-2a)

(8.2-2b)

Figure 8.2-1 shows plots of the sine and cosine components of the one-dimensional Fourier basis functions for N = 16. It should be observed that the basis functions are a rough approximation to continuous sinusoids only for low frequencies; in fact, the

190

UNITARY TRANSFORMS

FIGURE 8.2-1 Fourier transform basis functions, N = 16.

highest-frequency basis function is a square wave. Also, there are obvious redundancies between the sine and cosine components. The Fourier transform plane possesses many interesting structural properties. The spectral component at the origin of the Fourier domain
1 F ( 0, 0 ) = --N
N–1 N–1

j=0 k=0

∑ ∑ F ( j, k )

(8.2-3)

is equal to N times the spatial average of the image plane. Making the substitutions u = u + mN , v = v + nN in Eq. 8.2-1, where m and n are constants, results in

FOURIER TRANSFORM

191

FIGURE 8.2-2. Periodic image and Fourier transform arrays.

1 F ( u + mN, v + nN ) = --N

N–1 N–1 j=0 k=0

∑ ∑ F ( j, k ) exp  ----------- ( uj + vk )  exp { –2πi ( mj + nk ) }  N 
(8.2-4)

 – 2πi



For all integer values of m and n, the second exponential term of Eq. 8.2-5 assumes a value of unity, and the transform domain is found to be periodic. Thus, as shown in Figure 8.2-2a,
F ( u + mN, v + nN ) = F ( u, v )

(8.2-5)

for m, n = 0, ± 1, ± 2, … . The two-dimensional Fourier transform of an image is essentially a Fourier series representation of a two-dimensional field. For the Fourier series representation to be valid, the field must be periodic. Thus, as shown in Figure 8.2-2b, the original image must be considered to be periodic horizontally and vertically. The right side of the image therefore abuts the left side, and the top and bottom of the image are adjacent. Spatial frequencies along the coordinate axes of the transform plane arise from these transitions. If the image array represents a luminance field, F ( j, k ) will be a real positive function. However, its Fourier transform will, in general, be complex. Because the 2 transform domain contains 2N components, the real and imaginary, or phase and magnitude components, of each coefficient, it might be thought that the Fourier transformation causes an increase in dimensionality. This, however, is not the case because F ( u, v ) exhibits a property of conjugate symmetry. From Eq. 8.2-4, with m and n set to integer values, conjugation yields

192

UNITARY TRANSFORMS

FIGURE 8.2-3. Fourier transform frequency domain.

1 F * ( u + mN, v + nN ) = --N

N–1 N–1 j=0 k=0

∑ ∑ F ( j, k ) exp  ----------- ( uj + vk )   N 

 – 2πi



(8.2-6)

By the substitution u = – u and v = – v it can be shown that
F ( u, v ) = F * ( – u + mN, – v + nN )

(8.2-7)

for n = 0, ± 1, ±2, … . As a result of the conjugate symmetry property, almost onehalf of the transform domain samples are redundant; that is, they can be generated from other transform samples. Figure 8.2-3 shows the transform plane with a set of redundant components crosshatched. It is possible, of course, to choose the left halfplane samples rather than the upper plane samples as the nonredundant set. Figure 8.2-4 shows a monochrome test image and various versions of its Fourier transform, as computed by Eq. 8.2-1a, where the test image has been scaled over unit range 0.0 ≤ F ( j, k ) ≤ 1.0. Because the dynamic range of transform components is much larger than the exposure range of photographic film, it is necessary to compress the coefficient values to produce a useful display. Amplitude compression to a unit range display array D ( u, v ) can be obtained by clipping large-magnitude values according to the relation

FOURIER TRANSFORM

193

(a) Original

(b) Clipped magnitude, nonordered

(c) Log magnitude, nonordered

(d) Log magnitude, ordered

FIGURE 8.2-4. Fourier transform of the smpte_girl_luma image.

D ( u, v )

 1.0  =  F ( u, v )  -------------------c F max 

if F ( u, v ) ≥ c F max if F ( u, v ) < c F max

(8.2-8a) (8.2-8b)

where 0.0 < c ≤ 1.0 is the clipping factor and F max is the maximum coefficient magnitude. Another form of amplitude compression is to take the logarithm of each component as given by log { a + b F ( u, v ) } D ( u, v ) = -----------------------------------------------log { a + b F max }

(8.2-9)

194

UNITARY TRANSFORMS

where a and b are scaling constants. Figure 8.2-4b is a clipped magnitude display of the magnitude of the Fourier transform coefficients. Figure 8.2-4c is a logarithmic display for a = 1.0 and b = 100.0. In mathematical operations with continuous signals, the origin of the transform domain is usually at its geometric center. Similarly, the Fraunhofer diffraction pattern of a photographic transparency of transmittance F ( x, y ) produced by a coherent optical system has its zero-frequency term at the center of its display. A computer-generated two-dimensional discrete Fourier transform with its origin at its center can be produced by a simple reordering of its transform coefficients. Alternatively, the quadrants of the Fourier transform, as computed by Eq. 8.2-la, can be j+k reordered automatically by multiplying the image function by the factor ( – 1 ) prior to the Fourier transformation. The proof of this assertion follows from Eq. -8.2-4 with the substitution m = n = 1 . Then, by the identity 2 exp { iπ ( j + k ) } = ( – 1 ) j+k (8.2-10)

Eq. 8.2-5 can be expressed as
N–1 N–1 j=0 k=0

1 F ( u + N ⁄ 2, v + N ⁄ 2 ) = --N

∑ ∑ F ( j, k ) ( – 1 )

j+k

 – 2πi  exp  ----------- ( uj + vk )  N  

(8.2-11)

Figure 8.2-4d contains a log magnitude display of the reordered Fourier components. The conjugate symmetry in the Fourier domain is readily apparent from the photograph. The Fourier transform written in series form in Eq. 8.2-1 may be redefined in vector-space form as f = Af

(8.2-12a)

T f = A∗ f

(8.2-12b)

where f and f are vectors obtained by column scanning the matrices F and F, respectively. The transformation matrix A can be written in direct product form as
A = AC ⊗ A R

(8.2-13)

COSINE, SINE, AND HARTLEY TRANSFORMS

195

where
W AR = AC = W W W
0 0 0

W W W ·

0 1 2

W W W ·

0 2 4

… … … W … W

W W

0

N–1

2(N – 1)

(8.2-14)

with W = exp { – 2πi ⁄ N }. As a result of the direct product decomposition of A, the image matrix and transformed image matrix are related by
F = A C FA R F = A C∗ F A R∗

The properties of the Fourier transform previously proved in series form obviously hold in the matrix formulation. One of the major contributions to the field of image processing was the discovery (5) of an efficient computational algorithm for the discrete Fourier transform (DFT). Brute-force computation of the discrete Fourier transform of a one-dimensional 2 sequence of N values requires on the order of N complex multiply and add operations. A fast Fourier transform (FFT) requires on the order of N log N operations. For large images the computational savings are substantial. The original FFT algorithms were limited to images whose dimensions are a power of 2 (e.g., 9 N = 2 = 512 ). Modern algorithms exist for less restrictive image dimensions. Although the Fourier transform possesses many desirable analytic properties, it has a major drawback: Complex, rather than real number computations are necessary. Also, for image coding it does not provide as efficient image energy compaction as other transforms.

8.3. COSINE, SINE, AND HARTLEY TRANSFORMS The cosine, sine, and Hartley transforms are unitary transforms that utilize sinusoidal basis functions, as does the Fourier transform. The cosine and sine transforms are not simply the cosine and sine parts of the Fourier transform. In fact, the cosine and sine parts of the Fourier transform, individually, are not orthogonal functions. The Hartley transform jointly utilizes sine and cosine basis functions, but its coefficients are real numbers, as contrasted with the Fourier transform whose coefficients are, in general, complex numbers.


0

(N – 1)


2

(8.2-15a) (8.2-15b)

196

UNITARY TRANSFORMS

8.3.1. Cosine Transform The cosine transform, discovered by Ahmed et al. (12), has found wide application in transform image coding. In fact, it is the foundation of the JPEG standard (13) for still image coding and the MPEG standard for the coding of moving images (14). The forward cosine transform is defined as (12)
2 F ( u, v ) = --- C ( u )C ( v ) N
N–1 N–1 j=0 k=0

∑ ∑ F ( j, k ) cos  --- [ u ( j + --- ) ]  cos  --- [ v ( k + --- ) ]  2 2 N  N 

π

1



π

1



(8.3-1a)
2 F ( j, k ) = --N
N–1 N–1

j=0 k=0

∑ ∑ C ( u )C ( v )F ( u, v ) cos  --- [ u ( j + --- ) ]  cos  --- [ v ( k + --- ) ]  2 2 N  N 

π

1



π

1



(8.3-1b) where C ( 0 ) = ( 2 ) and C ( w ) = 1 for w = 1, 2,..., N – 1. It has been observed that the basis functions of the cosine transform are actually a class of discrete Chebyshev polynomials (12). Figure 8.3-1 is a plot of the cosine transform basis functions for N = 16. A photograph of the cosine transform of the test image of Figure 8.2-4a is shown in Figure 8.3-2a. The origin is placed in the upper left corner of the picture, consistent with matrix notation. It should be observed that as with the Fourier transform, the image energy tends to concentrate toward the lower spatial frequencies. The cosine transform of a N × N image can be computed by reflecting the image about its edges to obtain a 2N × 2N array, taking the FFT of the array and then extracting the real parts of the Fourier transform (15). Algorithms also exist for the direct computation of each row or column of Eq. 8.3-1 with on the order of N log N real arithmetic operations (12,16). 8.3.2. Sine Transform The sine transform, introduced by Jain (17), as a fast algorithmic substitute for the Karhunen–Loeve transform of a Markov process is defined in one-dimensional form by the basis functions
–1 ⁄ 2

A ( u, j ) =

 ( j + 1 ) ( u + 1 )π  2 ------------ sin  -------------------------------------  N+1 N+1  

(8.3-2)

for u, j = 0, 1, 2,..., N – 1. Consider the tridiagonal matrix

COSINE, SINE, AND HARTLEY TRANSFORMS

197

FIGURE 8.3-1. Cosine transform basis functions, N = 16.

1 –α 0 … –α 1 –α · · · T = · · · 0 · ·

· ·

0 · · · ·

(8.3-3)

–α 1 –α … 0 –α 1

where α = ρ ⁄ ( 1 + ρ ) and 0.0 ≤ ρ ≤ 1.0 is the adjacent element correlation of a Markov process covariance matrix. It can be shown (18) that the basis functions of

2

198

UNITARY TRANSFORMS

(a) Cosine

(b) Sine

(c) Hartley

FIGURE 8.3-2. Cosine, sine, and Hartley transforms of the smpte_girl_luma image, log magnitude displays

Eq. 8.3-2, inserted as the elements of a unitary matrix A, diagonalize the matrix T in the sense that
ATA = D
T

(8.3-4)

Matrix D is a diagonal matrix composed of the terms
1–ρ D ( k, k ) = -----------------------------------------------------------------------2 1 – 2ρ cos { kπ ⁄ ( N + 1 ) } + ρ
2

(8.3-5)

for k = 1, 2,..., N. Jain (17) has shown that the cosine and sine transforms are interrelated in that they diagonalize a family of tridiagonal matrices.

COSINE, SINE, AND HARTLEY TRANSFORMS

199

FIGURE 8.3-3. Sine transform basis functions, N = 15.

The two-dimensional sine transform is defined as
N–1 N–1 j=0 k=0

2 F ( u, v ) = -----------N+1

∑ ∑ F ( j, k ) sin  -------------------------------------  N+1  

 ( j + 1 ) ( u + 1 )π 

 ( k + 1 ) ( v + 1 )π  sin  -------------------------------------  (8.3-6) N+1  

Its inverse is of identical form. Sine transform basis functions are plotted in Figure 8.3-3 for N = 15. Figure 8.3-2b is a photograph of the sine transform of the test image. The sine transform can also be computed directly from Eq. 8.3-10, or efficiently with a Fourier transform algorithm (17).

200

UNITARY TRANSFORMS

8.3.3. Hartley Transform Bracewell (19,20) has proposed a discrete real-valued unitary transform, called the Hartley transform, as a substitute for the Fourier transform in many filtering applications. The name derives from the continuous integral version introduced by Hartley in 1942 (21). The discrete two-dimensional Hartley transform is defined by the transform pair
N–1 N–1

1 F ( u, v ) = --N

j=0 k=0 N–1 N–1 u=0

∑ ∑ F ( j, k ) cas  ------ ( uj + vk )   N 
 2π  F ( u, v ) cas  ----- ( uj + vk )  N   v=0

 2π



(8.3-7a)

1 F ( j, k ) = --N

∑ ∑

(8.3-7b)

where casθ ≡ cos θ + sin θ . The structural similarity between the Fourier and Hartley transforms becomes evident when comparing Eq. 8.3-7 and Eq. 8.2-2. It can be readily shown (17) that the cas θ function is an orthogonal function. Also, the Hartley transform possesses equivalent but not mathematically identical structural properties of the discrete Fourier transform (20). Figure 8.3-2c is a photograph of the Hartley transform of the test image. The Hartley transform can be computed efficiently by a FFT-like algorithm (20). The choice between the Fourier and Hartley transforms for a given application is usually based on computational efficiency. In some computing structures, the Hartley transform may be more efficiently computed, while in other computing environments, the Fourier transform may be computationally superior.

8.4. HADAMARD, HAAR, AND DAUBECHIES TRANSFORMS The Hadamard, Haar, and Daubechies transforms are related members of a family of nonsinusoidal transforms. 8.4.1. Hadamard Transform The Hadamard transform (22,23) is based on the Hadamard matrix (24), which is a square array of plus and minus 1s whose rows and columns are orthogonal. A normalized N × N Hadamard matrix satisfies the relation
HH = I
T

(8.4-1)

The smallest orthonormal Hadamard matrix is the 2 × 2 Hadamard matrix given by

HADAMARD, HAAR, AND DAUBECHIES TRANSFORMS

201

FIGURE 8.4-1. Nonordered Hadamard matrices of size 4 and 8.

1 H 2 = ------ 1 1 2 1 –1

(8.4-2)

It is known that if a Hadamard matrix of size N exists (N > 2), then N = 0 modulo 4 (22). The existence of a Hadamard matrix for every value of N satisfying this requirement has not been shown, but constructions are available for nearly all permissible values of N up to 200. The simplest construction is for a Hadamard matrix of size N = 2n, where n is an integer. In this case, if H N is a Hadamard matrix of size N, the matrix
1 HN H 2N = -----2 HN HN – HN

(8.4-3)

is a Hadamard matrix of size 2N. Figure 8.4-1 shows Hadamard matrices of size 4 and 8 obtained by the construction of Eq. 8.4-3. Harmuth (25) has suggested a frequency interpretation for the Hadamard matrix generated from the core matrix of Eq. 8.4-3; the number of sign changes along each row of the Hadamard matrix divided by 2 is called the sequency of the row. It is posn sible to construct a Hadamard matrix of order N = 2 whose number of sign changes per row increases from 0 to N – 1. This attribute is called the sequency property of the unitary matrix.

202

UNITARY TRANSFORMS

FIGURE 8.4-2. Hadamard transform basis functions, N = 16.

The rows of the Hadamard matrix of Eq. 8.4-3 can be considered to be samples of rectangular waves with a subperiod of 1/N units. These continuous functions are called Walsh functions (26). In this context, the Hadamard matrix merely performs the decomposition of a function by a set of rectangular waveforms rather than the sine–cosine waveforms with the Fourier transform. A series formulation exists for the Hadamard transform (23). Hadamard transform basis functions for the ordered transform with N = 16 are shown in Figure 8.4-2. The ordered Hadamard transform of the test image in shown in Figure 8.4-3a.

HADAMARD, HAAR, AND DAUBECHIES TRANSFORMS

203

(a) Hadamard

(b) Haar

FIGURE 8.4-3. Hadamard and Haar transforms of the smpte_girl_luma image, log magnitude displays.

8.4.2. Haar Transform The Haar transform (1,26,27) is derived from the Haar matrix. The following are 4 × 4 and 8 × 8 orthonormal Haar matrices:
1 1 0 1 1 0 1 –1 0 1 –1 0

1 H 4 = -2

2 – 2

(8.4-4)

2 – 2

1 1 2 1 H 8 = -----8 0 2 0 0 0

1 1 0 –2 0 0 0

1 1 0 0 2 0 0

1 1 0 0 –2 0 0

1 –1 0

1 –1 0

1 –1 0

1 –1 0

2 – 2 – 2

2 2 – 2 – 2 0 0 0 0 0 0 0 0 2 –2 0 0 0 0 2 –2

(8.4-5)

Extensions to higher-order Haar matrices follow the structure indicated by Eqs. 8.4-4 and 8.4-5. Figure 8.4-4 is a plot of the Haar basis functions for N = 16 .

204

UNITARY TRANSFORMS

FIGURE 8.4-4. Haar transform basis functions, N = 16.

The Haar transform can be computed recursively (29) using the following N × N recursion matrix
RN = VN WN

(8.4-6)

where V N is a N ⁄ 2 × N scaling matrix and WN is a N ⁄ 2 × N wavelet matrix defined as
110 00 0 … 00 00 001 10 0 … 00 00

1 V N = -----2

(8.4-7a)
000 00 0 … 11 00 000 00 0 … 00 11

HADAMARD, HAAR, AND DAUBECHIES TRANSFORMS

205

1 W N = -----2

1 0 0 0

–1 0 0 0

0 1 … 0 0

0 –1 0 0

0 0 0 0

0 0 0 0

… … … …

0 0 1 0

0 0 –1 0 …

0 0 0 1

0 0

(8.4-7b)
0 –1

The elements of the rows of V N are called first-level scaling signals, and the elements of the rows of W N are called first-level Haar wavelets (29). The first-level Haar transform of a N × 1 vector f is

f1 = R N f = [ a 1 d1 ]

T

(8.4-8)

where a1 = VN f d 1 = WN f

(8.4-9a) (8.4-9b)

The vector a 1 represents the running average or trend of the elements of f , and the vector d1 represents the running fluctuation of the elements of f . The next step in the recursion process is to compute the second-level Haar transform from the trend part of the first-level transform and concatenate it with the first-level fluctuation vector. This results in f 2 = [ a2 d2 d1 ]
T

(8.4-10)

where a 2 = VN ⁄ 2 a1 d2 = WN ⁄ 2 a 1

(8.4-11a) (8.4-11b)

are N ⁄ 4 × 1 vectors. The process continues until the full transform f ≡ fn = [ an d n d n – 1 … d 1 ] n T

(8.4-12)

is obtained where N = 2 . It should be noted that the intermediate levels are unitary transforms.

206

UNITARY TRANSFORMS

The Haar transform can be likened to a sampling process in which rows of the transform matrix sample an input data sequence with finer and finer resolution increasing of powers of 2. In image processing applications, the Haar transform provides a transform domain in which a type of differential energy is concentrated in localized regions. 8.4.3. Daubechies Transforms Daubechies (30) has discovered a class of wavelet transforms that utilize running averages and running differences of the elements of a vector, as with the Haar transform. The difference between the Haar and Daubechies transforms is that the averages and differences are grouped in four or more elements. The Daubechies transform of support four, called Daub4, can be defined in a manner similar to the Haar recursive generation process. The first-level scaling and wavelet matrices are defined as α1 α2 α3 α 4 0 0 VN = … 0 β1 0 … 0 … … 0 0 … 0 … 0 0 … … 0 … 0 … 0 … … 0 0 … 0 0 … 0 0 …

0 α1 α 2 α3 α4 … 0 0 0

(8.4-13a)

0 … α1 α2 α3 α4 0 … 0 0 … … 0 … 0 α1 α2 0 0 … 0 0 … 0 0 …

α3 α4 0

β2 β3 β4 0 …

β1 β2 β3 β4 … 0 0 0 0 0 0 0 0

WN =

(8.4-13b)

… β1 β2 β3 β4 0 β1 β2

β3 β4

0 … 0

where
1+ 3 α 1 = – β 4 = --------------4 2 3+ 3 α 2 = β 3 = --------------4 2 --------------α3 = –β2 = 3 – 3 4 2 1– 3 α 4 = β 1 = --------------4 2

(8.4-14a)

(8.4-14b)

(8.4-14c)

(8.4-14d)

KARHUNEN–LOEVE TRANSFORM

207

In Eqs. 8.4-13a and 8.4-13b, the row-to-row shift is by two elements, and the last two scale factors wrap around on the last rows. Following the recursion process of the Haar transform results in the Daub4 transform final stage: f ≡ f n = [ an dn dn – 1 … d 1 ]
T

(8.4-15)

Daubechies has extended the wavelet transform concept for higher degrees of support, 6, 8, 10,..., by straightforward extension of Eq. 8.4-13 (29). Daubechies also has also constructed another family of wavelets, called coiflets, after a suggestion of Coifman (29).

8.5. KARHUNEN–LOEVE TRANSFORM Techniques for transforming continuous signals into a set of uncorrelated representational coefficients were originally developed by Karhunen (31) and Loeve (32). Hotelling (33) has been credited (34) with the conversion procedure that transforms discrete signals into a sequence of uncorrelated coefficients. However, most of the literature in the field refers to both discrete and continuous transformations as either a Karhunen–Loeve transform or an eigenvector transform. The Karhunen–Loeve transformation is a transformation of the general form
N–1 N–1

F ( u, v ) =

j=0 k=0

∑ ∑ F ( j, k )A ( j, k ; u, v )

(8.5-1)

for which the kernel A(j, k; u, v) satisfies the equation
N–1 N–1

λ ( u, v )A ( j, k ; u, v ) =

j′ = 0 k′ = 0

∑ ∑

K F ( j, k ; j′, k′ ) A ( j′, k′ ; u, v )

(8.5-2)

where KF ( j, k ; j′, k′ ) denotes the covariance function of the image array and λ ( u, v ) is a constant for fixed (u, v). The set of functions defined by the kernel are the eigenfunctions of the covariance function, and λ ( u, v ) represents the eigenvalues of the covariance function. It is usually not possible to express the kernel in explicit form. If the covariance function is separable such that
K F ( j, k ; j′, k′ ) = K C ( j, j′ )K R ( k, k′ )

(8.5-3)

then the Karhunen-Loeve kernel is also separable and
A ( j, k ; u , v ) = A C ( u, j )AR ( v, k )

(8.5-4)

208

UNITARY TRANSFORMS

The row and column kernels satisfy the equations
N–1

λ R ( u )AR ( v, k ) =

k′ = 0 N–1



K R ( k, k′ )A R ( v, k′ )

(8.5-5a)

λ C ( v )A C ( u, j ) =

j′ = 0

∑ KC ( j, j′ )AC ( u, j′ )

(8.5-5b)

In the special case in which the covariance matrix is of separable first-order Markov process form, the eigenfunctions can be written in explicit form. For a one-dimensional Markov process with correlation factor ρ , the eigenfunctions and eigenvalues are given by (35)
A ( u, j ) = 2 ----------------------2 N + λ ( u)
1⁄2

 ( u + 1 )π  N–1 sin  w ( u )  j – ------------ + --------------------   2 2   

(8.5-6)

and
1–ρ λ ( u ) = -------------------------------------------------------2 1 – 2ρ cos { w ( u ) } + ρ
2

for 0 ≤ j, u ≤ N – 1

(8.5-7)

where w(u) denotes the root of the transcendental equation
( 1 – ρ ) sin w tan { Nw } = -------------------------------------------------2 cos w – 2ρ + ρ cos w
2

(8.5-8)

The eigenvectors can also be generated by the recursion formula (36) λ(u) A ( u, 0 ) = -------------- [ A ( u, 0 ) – ρA ( u, 1 ) ] 2 1–ρ
2 λ(u) A ( u, j ) = -------------- [ – ρA ( u, j – 1 ) + ( 1 + ρ )A ( u, j ) – ρA ( u, j + 1 ) ] 2 1–ρ

(8.5-9a) for 0 < j < N – 1 (8.5-9b)

λ( u) A ( u, N – 1 ) = -------------- [ – ρA ( u, N – 2 ) + ρA ( u, N – 1 ) ] 2 1–ρ

(8.5-9c)

by initially setting A(u, 0) = 1 and subsequently normalizing the eigenvectors.

KARHUNEN–LOEVE TRANSFORM

209

If the image array and transformed image array are expressed in vector form, the Karhunen–Loeve transform pairs are f = Af f = A f
T

(8.5-10) (8.5-11)

The transformation matrix A satisfies the relation
AK f = Λ A

(8.5-12)

where K f is the covariance matrix of f, A is a matrix whose rows are eigenvectors of K f , and Λ is a diagonal matrix of the form

Λ =

λ(1) 0 0 λ(2) … 0 …

… … 0

0 … 0 λ( N )
2

(8.5-13)

If K f is of separable form, then
A = AC ⊗ A R

(8.5-14)

where AR and AC satisfy the relations

AR KR = ΛR AR AC KC = ΛC AC

(8.5-15a) (8.5-15b)

and λ ( w ) = λ R ( v )λ C ( u ) for u, v = 1, 2,..., N. Figure 8.5-1 is a plot of the Karhunen–Loeve basis functions for a onedimensional Markov process with adjacent element correlation ρ = 0.9.

210

UNITARY TRANSFORMS

FIGURE 8.5-1. Karhunen–Loeve transform basis functions, N = 16.

REFERENCES
1. H. C. Andrews, Computer Techniques in Image Processing, Academic Press, New York, 1970. 2. H. C. Andrews, “Two Dimensional Transforms,” in Topics in Applied Physics: Picture Processing and Digital Filtering, Vol. 6, T. S. Huang, Ed., Springer-Verlag, New York, 1975. 3. R. Bellman, Introduction to Matrix Analysis, 2nd ed., Society for Industrial and Applied Mathematics, Philadelphia, 1997.

REFERENCES

211

4. H. C. Andrews and K. Caspari, “A Generalized Technique for Spectral Analysis,” IEEE Trans. Computers, C-19, 1, January 1970, 16–25. 5. J. W. Cooley and J. W. Tukey, “An Algorithm for the Machine Calculation of Complex Fourier Series,” Mathematics of Computation 19, 90, April 1965, 297–301. 6. IEEE Trans. Audio and Electroacoustics, Special Issue on Fast Fourier Transforms, AU15, 2, June 1967. 7. W. T. Cochran et al., “What Is the Fast Fourier Transform?” Proc. IEEE, 55, 10, 1967, 1664–1674. 8. IEEE Trans. Audio and Electroacoustics, Special Issue on Fast Fourier Transforms, AU17, 2, June 1969. 9. J. W. Cooley, P. A. Lewis, and P. D. Welch, “Historical Notes on the Fast Fourier Transform,” Proc. IEEE, 55, 10, October 1967, 1675–1677. 10. B. O. Brigham and R. B. Morrow, “The Fast Fourier Transform,” IEEE Spectrum, 4, 12, December 1967, 63–70. 11. C. S. Burrus and T. W. Parks, DFT/FFT and Convolution Algorithms, Wiley-Interscience, New York, 1985. 12. N. Ahmed, T. Natarajan, and K. R. Rao, “On Image Processing and a Discrete Cosine Transform,” IEEE Trans. Computers, C-23, 1, January 1974, 90–93. 13. W. B. Pennebaker and J. L. Mitchell, JPEG Still Image Data Compression Standard, Van Nostrand Reinhold, New York, 1993. 14. K. R. Rao and J. J. Hwang, Techniques and Standards for Image, Video, and Audio Coding, Prentice Hall, Upper Saddle River, NJ, 1996. 15. R. W. Means, H. J. Whitehouse, and J. M. Speiser, “Television Encoding Using a Hybrid Discrete Cosine Transform and a Differential Pulse Code Modulator in Real Time,” Proc. National Telecommunications Conference, San Diego, CA, December 1974, 61– 66. 16. W. H. Chen, C. Smith, and S. C. Fralick, “Fast Computational Algorithm for the Discrete Cosine Transform,” IEEE Trans. Communications., COM-25, 9, September 1977, 1004–1009. 17. A. K. Jain, “A Fast Karhunen–Loeve Transform for Finite Discrete Images,” Proc. National Electronics Conference, Chicago, October 1974, 323–328. 18. A. K. Jain and E. Angel, “Image Restoration, Modeling, and Reduction of Dimensionality,” IEEE Trans. Computers, C-23, 5, May 1974, 470–476. 19. R. M. Bracewell, “The Discrete Hartley Transform,” J. Optical Society of America, 73, 12, December 1983, 1832–1835. 20. R. M. Bracewell, The Hartley Transform, Oxford University Press, Oxford, 1986. 21. R. V. L. Hartley, “A More Symmetrical Fourier Analysis Applied to Transmission Problems,” Proc. IRE, 30, 1942, 144–150. 22. J. E. Whelchel, Jr. and D. F. Guinn, “The Fast Fourier–Hadamard Transform and Its Use in Signal Representation and Classification,” EASCON 1968 Convention Record, 1968, 561–573. 23. W. K. Pratt, H. C. Andrews, and J. Kane, “Hadamard Transform Image Coding,” Proc. IEEE, 57, 1, January 1969, 58–68. 24. J. Hadamard, “Resolution d'une question relative aux determinants,” Bull. Sciences Mathematiques, Ser. 2, 17, Part I, 1893, 240–246.

212

UNITARY TRANSFORMS

25. H. F. Harmuth, Transmission of Information by Orthogonal Functions, Springer-Verlag, New York, 1969. 26. J. L. Walsh, “A Closed Set of Orthogonal Functions,” American J. Mathematics, 45, 1923, 5–24. 27. A. Haar, “Zur Theorie der Orthogonalen-Funktionen,” Mathematische Annalen, 5, 1955, 17–31. 28. K. R. Rao, M. A. Narasimhan, and K. Revuluri, “Image Data Processing by Hadamard– Haar Transforms,” IEEE Trans. Computers, C-23, 9, September 1975, 888–896. 29. J. S. Walker, A Primer on Wavelets and Their Scientific Applications, Chapman & Hall/ CRC, Press, Boca Raton, FL, 1999. 30. I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, 1992. 31. H. Karhunen, 1947, English translation by I. Selin, “On Linear Methods in Probability Theory,” Doc. T-131, Rand Corporation, Santa Monica, CA, August 11, 1960. 32. M. Loeve, Fonctions aldatories de seconde ordre, Hermann, Paris, 1948. 33. H. Hotelling, “Analysis of a Complex of Statistical Variables into Principal Components,” J. Educational Psychology, 24, 1933, 417–441, 498–520. 34. P. A. Wintz, “Transform Picture Coding,” Proc. IEEE, 60, 7, July 1972, 809–820. 35. W. D. Ray and R. M. Driver, “Further Decomposition of the Karhunen–Loeve Series Representation of a Stationary Random Process,” IEEE Trans. Information Theory, IT16, 6, November 1970, 663–668. 36. W. K. Pratt, “Generalized Wiener Filtering Computation Techniques,” IEEE Trans. Computers, C-21, 7, July 1972, 636–641.

Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic)

9
LINEAR PROCESSING TECHNIQUES

Most discrete image processing computational algorithms are linear in nature; an output image array is produced by a weighted linear combination of elements of an input array. The popularity of linear operations stems from the relative simplicity of spatial linear processing as opposed to spatial nonlinear processing. However, for image processing operations, conventional linear processing is often computationally infeasible without efficient computational algorithms because of the large image arrays. This chapter considers indirect computational techniques that permit more efficient linear processing than by conventional methods.

9.1. TRANSFORM DOMAIN PROCESSING Two-dimensional linear transformations have been defined in Section 5.4 in series form as
P ( m 1, m 2 ) =

n1 = 1 n2 = 1

∑ ∑

N1

N2

F ( n 1, n 2 )T ( n 1, n 2 ; m 1, m 2 )

(9.1-1)

and defined in vector form as p = Tf

(9.1-2)

It will now be demonstrated that such linear transformations can often be computed more efficiently by an indirect computational procedure utilizing two-dimensional unitary transforms than by the direct computation indicated by Eq. 9.1-1 or 9.1-2.
213

214

LINEAR PROCESSING TECHNIQUES

FIGURE 9.1-1. Direct processing and generalized linear filtering; series formulation.

Figure 9.1-1 is a block diagram of the indirect computation technique called generalized linear filtering (1). In the process, the input array F ( n1, n 2 ) undergoes a two-dimensional unitary transformation, resulting in an array of transform coefficients F ( u 1, u 2 ) . Next, a linear combination of these coefficients is taken according to the general relation
M1 M2

˜ F ( w 1, w 2 ) =

u1 = 1 u2 = 1

∑ ∑

F ( u 1, u 2 )T ( u 1, u 2 ; w 1, w 2 )

(9.1-3)

where T ( u 1, u 2 ; w 1, w 2 ) represents the linear filtering transformation function. Finally, an inverse unitary transformation is performed to reconstruct the processed array P ( m1, m 2 ) . If this computational procedure is to be more efficient than direct computation by Eq. 9.1-1, it is necessary that fast computational algorithms exist for the unitary transformation, and also the kernel T ( u 1, u 2 ; w 1, w 2 ) must be reasonably sparse; that is, it must contain many zero elements. The generalized linear filtering process can also be defined in terms of vectorspace computations as shown in Figure 9.1-2. For notational simplicity, let N1 = N2 = N and M1 = M2 = M. Then the generalized linear filtering process can be described by the equations

f = [ A 2 ]f
N

(9.1-4a) (9.1-4b)

˜ = Tf f f p = [A
2

M

–1 ] ˜ f

(9.1-4c)

TRANSFORM DOMAIN PROCESSING

215

FIGURE 9.1-2. Direct processing and generalized linear filtering; vector formulation.

where A 2 is a N × N unitary transform matrix, T is a M × N linear filtering N 2 2 transform operation, and A 2 is a M × M unitary transform matrix. From M Eq. 9.1-4, the input and output vectors are related by p = [A
2

2

2

2

2

M

] T [ A 2 ]f
N

–1

(9.1-5)

Therefore, equating Eqs. 9.1-2 and 9.1-5 yields the relations between T and T given by
T = [A
M
2

] T [A 2]
N

–1

(9.1-6a) (9.1-6b)
2 2

T = [A

M

2

]T [ A 2 ]
N

–1

If direct processing is employed, computation by Eq. 9.1-2 requires k P ( M N ) operations, where 0 ≤ k P ≤ 1 is a measure of the sparseness of T. With the generalized linear filtering technique, the number of operations required for a given operator are: Forward transform:
N by direct transformation 2N log 2 N by fast transformation
2 4

Filter multiplication: Inverse transform:

kT M N
4

2 2

M by direct transformation 2M log 2 M by fast transformation
2

216

LINEAR PROCESSING TECHNIQUES

where 0 ≤ k T ≤ 1 is a measure of the sparseness of T. If k T = 1 and direct unitary transform computation is performed, it is obvious that the generalized linear filtering concept is not as efficient as direct computation. However, if fast transform algorithms, similar in structure to the fast Fourier transform, are employed, generalized linear filtering will be more efficient than direct processing if the sparseness index satisfies the inequality
2 2 k T < k P – ------ log 2 N – ----- log 2 M 2 2 N M

(9.1-7)

In many applications, T will be sufficiently sparse such that the inequality will be satisfied. In fact, unitary transformation tends to decorrelate the elements of T causing T to be sparse. Also, it is often possible to render the filter matrix sparse by setting small-magnitude elements to zero without seriously affecting computational accuracy (1). In subsequent sections, the structure of superposition and convolution operators is analyzed to determine the feasibility of generalized linear filtering in these applications.

9.2. TRANSFORM DOMAIN SUPERPOSITION The superposition operations discussed in Chapter 7 can often be performed more efficiently by transform domain processing rather than by direct processing. Figure 9.2-1a and b illustrate block diagrams of the computational steps involved in direct finite area or sampled image superposition. In Figure 9.2-1d and e, an alternative form of processing is illustrated in which a unitary transformation operation is performed on the data vector f before multiplication by a finite area filter matrix D or sampled image filter matrix B. An inverse transform reconstructs the output vector. From Figure 9.2-1, for finite-area superposition, because q = Df

(9.2-1a)

and q = [A ] D [ A 2 ]f
N –1

M

2

(9.2-1b)

then clearly the finite-area filter matrix may be expressed as
D = [A
]D [ A 2 ]
N –1

M

2

(9.2-2a)

TRANSFORM DOMAIN SUPERPOSITION

217

FIGURE 9.2-1. Data and transform domain superposition.

218

LINEAR PROCESSING TECHNIQUES

Similarly,
B = [A
]B [ A 2 ]
N –1

M

2

(9.2-2b)

If direct finite-area superposition is performed, the required number of 2 2 computational operations is approximately N L , where L is the dimension of the impulse response matrix. In this case, the sparseness index of D is
L 2 k D =  ---  N
2 2

(9.2-3a)

Direct sampled image superposition requires on the order of M L operations, and the corresponding sparseness index of B is
L 2 k B =  ----  M

(9.2-3b)

Figure 9.2-1f is a block diagram of a system for performing circulant superposition by transform domain processing. In this case, the input vector kE is the extended data vector, obtained by embedding the input image array F ( n1, n 2 ) in the left corner of a J × J array of zeros and then column scanning the resultant matrix. Following the same reasoning as above, it is seen that k E = Cf E = [ A 2 ] C [ A 2 ]f
J J –1

(9.2-4a)

and hence,
C = [ A 2 ]C [ A 2 ]
J J –1

(9.2-4b)

As noted in Chapter 7, the equivalent output vector for either finite-area or sampled image superposition can be obtained by an element selection operation of kE. For finite-area superposition, q = [ S1 J
(M)

⊗ S1 J

(M)

]k E

(9.2-5a)

and for sampled image superposition g = [ S2 J
(M)

⊗ S2 J

(M)

]k E

(9.2-5b)

TRANSFORM DOMAIN SUPERPOSITION

219

Also, the matrix form of the output for finite-area superposition is related to the extended image matrix KE by
Q = [ S1 J ]K E [ S1 J ]
(M) (M) T

(9.2-6a)

For sampled image superposition,
G = [ S2 J ]K E [ S2 J ]
(M) (M) T

(9.2-6b)

The number of computational operations required to obtain kE by transform domain processing is given by the previous analysis for M = N = J. Direct transformation Fast transformation:
2

3J
2

4 2

J + 4J log 2 J

If C is sparse, many of the J filter multiplication operations can be avoided. From the discussion above, it can be seen that the secret to computationally efficient superposition is to select a transformation that possesses a fast computational algorithm that results in a relatively sparse transform domain superposition filter matrix. As an example, consider finite-area convolution performed by Fourier domain processing (2,3). Referring to Figure 9.2-1, let
A = AK ⊗ AK

K

2

(9.2-7)

where
1 - ( x – 1) (y – 1 ) AK = ------- W K
(K)

with W ≡ exp  – 2πi  ---------- K 
2





for x, y = 1, 2,..., K. Also, let h E denote the K × 1 vector representation of the extended spatially invariant impulse response array of Eq. 7.3-2 for J = K. The Fou(K) rier transform of h E is denoted as hE (K)

= [ A 2 ]h E
K

(K )

(9.2-8)
2 2

These transform components are then inserted as the diagonal elements of a K × K matrix
H
( K)

= diag [ h E ( 1 ), …, h E ( K ) ]

( K)

(K)

2

(9.2-9)

220

LINEAR PROCESSING TECHNIQUES

Then, it can be shown, after considerable manipulation, that the Fourier transform domain superposition matrices for finite area and sampled image convolution can be written as (4)
D = H
(M)

[ PD ⊗ PD ]

(9.2-10)

for N = M – L + 1 and
B = [ PB ⊗ PB ] H
(N)

(9.2-11)

where N = M + L + 1 and
–(u – 1 ) (L – 1 )

1 – WM 1 P D ( u, v ) = -------- --------------------------------------------------------------M 1 – W M – ( u – 1 ) – W N –( v – 1 ) 1 – WN 1 PB ( u, v ) = ------- --------------------------------------------------------------N 1 – W M –( u – 1 ) – W N– ( v – 1 )
–( v – 1 ) ( L – 1 )

(9.2-12a)

(9.2-12b)

Thus the transform domain convolution operators each consist of a scalar weighting (K ) matrix H and an interpolation matrix ( P ⊗ P ) that performs the dimensionality con2 2 version between the N - element input vector and the M - element output vector. Generally, the interpolation matrix is relatively sparse, and therefore, transform domain superposition is quite efficient. Now, consider circulant area convolution in the transform domain. Following the previous analysis it is found (4) that the circulant area convolution filter matrix reduces to a scalar operator
C = JH
(J )

(9.2-13)

Thus, as indicated in Eqs. 9.2-10 to 9.2-13, the Fourier domain convolution filter matrices can be expressed in a compact closed form for analysis or operational storage. No closed-form expressions have been found for other unitary transforms. Fourier domain convolution is computationally efficient because the convolution operator C is a circulant matrix, and the corresponding filter matrix C is of diagonal form. Actually, as can be seen from Eq. 9.1-6, the Fourier transform basis vectors are eigenvectors of C (5). This result does not hold true for superposition in general, nor for convolution using other unitary transforms. However, in many instances, the filter matrices D, B, and C are relatively sparse, and computational savings can often be achieved by transform domain processing.

FAST FOURIER TRANSFORM CONVOLUTION

221

Signal

Fourier

Hadamard

(a) Finite length convolution

(b) Sampled data convolution

(c) Circulant convolution

FIGURE 9.2-2. One-dimensional Fourier and Hadamard domain convolution matrices.

Figure 9.2-2 shows the Fourier and Hadamard domain filter matrices for the three forms of convolution for a one-dimensional input vector and a Gaussian-shaped impulse response (6). As expected, the transform domain representations are much more sparse than the data domain representations. Also, the Fourier domain circulant convolution filter is seen to be of diagonal form. Figure 9.2-3 illustrates the structure of the three convolution matrices for two-dimensional convolution (4).

9.3. FAST FOURIER TRANSFORM CONVOLUTION As noted previously, the equivalent output vector for either finite-area or sampled image convolution can be obtained by an element selection operation on the extended output vector kE for circulant convolution or its matrix counterpart KE.

222

LINEAR PROCESSING TECHNIQUES

Spatial domain

Fourier domain

(a) Finite-area convolution

(b) Sampled image convolution

(c) Circulant convolution

FIGURE 9.2-3. Two-dimensional Fourier domain convolution matrices.

This result, combined with Eq. 9.2-13, leads to a particularly efficient means of convolution computation indicated by the following steps: 1. Embed the impulse response matrix in the upper left corner of an all-zero J × J matrix, J ≥ M for finite-area convolution or J ≥ N for sampled infinite-area convolution, and take the two-dimensional Fourier transform of the extended impulse response matrix, giving

FAST FOURIER TRANSFORM CONVOLUTION

223

HE = AJ HE AJ

(9.3-1)

2.

Embed the input data array in the upper left corner of an all-zero J × J matrix, and take the two-dimensional Fourier transform of the extended input data matrix to obtain
FE = A J FE A J

(9.3-2)

3.

Perform the scalar multiplication
K E ( m, n ) = JH E ( m, n )F E ( m, n )

(9.3-3)

where 1 ≤ m, n ≤ J . 4. Take the inverse Fourier transform
KE = [ A 2 ] HE [ A 2 ]
J J –1 –1

(9.3-4)

5.

Extract the desired output matrix
(M) (M) T

Q = [ S1 J ]K E [ S1 J ]

(9.3-5a)

or
G = [ S2 J ]K E [ S2 J ]
(M) (M) T

(9.3-5b)

It is important that the size of the extended arrays in steps 1 and 2 be chosen large enough to satisfy the inequalities indicated. If the computational steps are performed with J = N, the resulting output array, shown in Figure 9.3-1, will contain erroneous terms in a boundary region of width L – 1 elements, on the top and left-hand side of the output field. This is the wraparound error associated with incorrect use of the Fourier domain convolution method. In addition, for finite area (D-type) convolution, the bottom and right-hand-side strip of output elements will be missing. If the computation is performed with J = M, the output array will be completely filled with the correct terms for D-type convolution. To force J = M for B-type convolution, it is necessary to truncate the bottom and right-hand side of the input array. As a consequence, the top and left-hand-side elements of the output array are erroneous.

224

LINEAR PROCESSING TECHNIQUES

FIGURE 9.3-1. Wraparound error effects.

Figure 9.3-2 illustrates the Fourier transform convolution process with proper zero padding. The example in Figure 9.3-3 shows the effect of no zero padding. In both examples, the image has been filtered using a 11 × 11 uniform impulse response array. The source image of Figure 9.3-3 is 512 × 512 pixels. The source image of Figure 9.3-2 is 502 × 502 pixels. It has been obtained by truncating the bottom 10 rows and right 10 columns of the source image of Figure 9.3-3. Figure 9.3-4 shows computer printouts of the upper left corner of the processed images. Figure 9.3-4a is the result of finite-area convolution. The same output is realized in Figure 9.3-4b for proper zero padding. Figure 9.3-4c shows the wraparound error effect for no zero padding. In many signal processing applications, the same impulse response operator is used on different data, and hence step 1 of the computational algorithm need not be repeated. The filter matrix HE may be either stored functionally or indirectly as a computational algorithm. Using a fast Fourier transform algorithm, the forward and 2 inverse transforms require on the order of 2J log 2 J operations each. The scalar 2 2 multiplication requires J operations, in general, for a total of J ( 1 + 4 log 2 J ) operations. For an N × N input array, an M × M output array, and an L × L impulse 2 2 response array, finite-area convolution requires N L operations, and sampled 2 2 image convolution requires M L operations. If the dimension of the impulse response L is sufficiently large with respect to the dimension of the input array N, Fourier domain convolution will be more efficient than direct convolution, perhaps by an order of magnitude or more. Figure 9.3-5 is a plot of L versus N for equality

FAST FOURIER TRANSFORM CONVOLUTION

225

(a) HE

(b)

E

(c) FE

(d )

E

(e) KE

(f )

E

FIGURE 9.3-2. Fourier transform convolution of the candy_502_luma image with proper zero padding, clipped magnitude displays of Fourier images.

226

LINEAR PROCESSING TECHNIQUES

(a ) H E

(b )

E

(c ) F E

(d )

E

(e ) k E

(f )

E

FIGURE 9.3-3. Fourier transform convolution of the candy_512_luma image with improper zero padding, clipped magnitude displays of Fourier images.

FAST FOURIER TRANSFORM CONVOLUTION

227
0.013 0.026 0.039 0.051 0.064 0.076 0.088 0.101 0.113 0.125 0.137 0.136 0.135 0.134 0.134

0.001 0.002 0.003 0.005 0.006 0.007 0.008 0.009 0.010 0.011 0.012 0.012 0.012 0.012 0.012

0.002 0.005 0.007 0.009 0.011 0.014 0.016 0.018 0.020 0.023 0.025 0.025 0.025 0.025 0.025

0.003 0.007 0.010 0.014 0.017 0.020 0.024 0.027 0.031 0.034 0.037 0.037 0.037 0.037 0.037

0.005 0.009 0.014 0.018 0.023 0.027 0.032 0.036 0.041 0.045 0.050 0.049 0.049 0.049 0.049

0.006 0.011 0.017 0.023 0.028 0.034 0.040 0.045 0.051 0.056 0.062 0.062 0.062 0.061 0.061

0.007 0.014 0.020 0.027 0.034 0.041 0.048 0.054 0.061 0.068 0.074 0.074 0.074 0.074 0.074

0.008 0.016 0.024 0.032 0.040 0.048 0.056 0.064 0.071 0.079 0.087 0.086 0.086 0.086 0.086

0.009 0.018 0.027 0.036 0.045 0.054 0.064 0.073 0.081 0.090 0.099 0.099 0.099 0.098 0.098

0.010 0.021 0.031 0.041 0.051 0.061 0.072 0.082 0.092 0.102 0.112 0.111 0.111 0.110 0.110

0.011 0.023 0.034 0.046 0.057 0.068 0.080 0.091 0.102 0.113 0.124 0.124 0.123 0.123 0.122

0.013 0.025 0.038 0.050 0.063 0.075 0.088 0.100 0.112 0.124 0.136 0.136 0.135 0.135 0.134

0.013 0.025 0.038 0.051 0.063 0.076 0.088 0.100 0.112 0.124 0.137 0.136 0.135 0.135 0.134

0.013 0.026 0.038 0.051 0.063 0.076 0.088 0.100 0.112 0.125 0.137 0.136 0.135 0.135 0.134

0.013 0.026 0.039 0.051 0.064 0.076 0.088 0.100 0.113 0.125 0.137 0.136 0.135 0.135 0.134

(a) Finite-area convolution
0.001 0.002 0.003 0.005 0.006 0.007 0.008 0.009 0.010 0.011 0.012 0.012 0.012 0.012 0.012 0.002 0.005 0.007 0.009 0.011 0.014 0.016 0.018 0.020 0.023 0.025 0.025 0.025 0.025 0.025 0.003 0.007 0.010 0.014 0.017 0.020 0.024 0.027 0.031 0.034 0.037 0.037 0.037 0.037 0.037 0.005 0.009 0.014 0.018 0.023 0.027 0.032 0.036 0.041 0.045 0.050 0.049 0.049 0.049 0.049 0.006 0.011 0.017 0.023 0.028 0.034 0.040 0.045 0.051 0.056 0.062 0.062 0.062 0.061 0.061 0.007 0.014 0.020 0.027 0.034 0.041 0.048 0.054 0.061 0.068 0.074 0.074 0.074 0.074 0.074 0.008 0.016 0.024 0.032 0.040 0.048 0.056 0.064 0.071 0.079 0.087 0.086 0.086 0.086 0.086 0.009 0.018 0.027 0.036 0.045 0.054 0.064 0.073 0.081 0.090 0.099 0.099 0.099 0.098 0.098 0.010 0.021 0.031 0.041 0.051 0.061 0.072 0.082 0.092 0.102 0.112 0.111 0.111 0.110 0.110 0.011 0.023 0.034 0.046 0.057 0.068 0.080 0.091 0.102 0.113 0.124 0.124 0.123 0.123 0.122 0.013 0.025 0.038 0.050 0.063 0.075 0.088 0.100 0.112 0.124 0.136 0.136 0.135 0.135 0.134 0.013 0.025 0.038 0.051 0.063 0.076 0.088 0.100 0.112 0.124 0.137 0.136 0.135 0.135 0.134 0.013 0.026 0.038 0.051 0.063 0.076 0.088 0.100 0.112 0.125 0.137 0.136 0.135 0.135 0.134 0.013 0.026 0.039 0.051 0.064 0.076 0.088 0.100 0.113 0.125 0.137 0.136 0.135 0.135 0.134 0.013 0.026 0.039 0.051 0.064 0.076 0.088 0.101 0.113 0.125 0.137 0.136 0.135 0.134 0.134

(b) Fourier transform convolution with proper zero padding
0.771 0.721 0.673 0.624 0.578 0.532 0.486 0.438 0.387 0.334 0.278 0.273 0.266 0.257 0.247 0.700 0.655 0.612 0.569 0.528 0.488 0.448 0.405 0.361 0.313 0.264 0.260 0.254 0.246 0.237 0.626 0.587 0.550 0.513 0.477 0.442 0.407 0.371 0.333 0.292 0.249 0.246 0.241 0.234 0.227 0.552 0.519 0.488 0.456 0.426 0.396 0.367 0.336 0.304 0.270 0.233 0.231 0.228 0.222 0.215 0.479 0.452 4.426 0.399 0.374 0.350 0.326 0.301 0.275 0.247 0.218 0.216 0.213 0.209 0.204 0.407 0.385 0.365 0.344 0.324 0.305 0.286 0.266 0.246 0.225 0.202 0.200 0.198 0.195 0.192 0.334 0.319 0.304 0.288 0.274 0.260 0.246 0.232 0.218 0.203 0.186 0.185 0.183 0.181 0.179 0.260 0.252 0.243 0.234 0.225 0.217 0.208 0.200 0.191 0.182 0.172 0.171 0.169 0.168 0.166 0.187 0.185 0.182 0.180 0.177 0.174 0.172 0.169 0.166 0.163 0.159 0.158 0.157 0.156 0.155 0.113 0.118 0.122 0.125 0.129 0.133 0.136 0.139 0.142 0.145 0.148 0.147 0.146 0.145 0.144 0.040 0.050 0.061 0.071 0.081 0.091 0.101 0.110 0.119 0.128 0.136 0.136 0.135 0.135 0.134 0.036 0.047 0.057 0.067 0.078 0.088 0.098 0108 0.118 0.127 0.137 0.136 0.135 0.135 0.134 0.034 0.044 0.055 0.065 0.076 0.086 0.096 0.107 0.117 0.127 0.137 0.136 0.135 0.135 0.134 0.033 0.044 0.055 0.065 0.075 0.085 0.096 0.106 0.116 0.127 0.137 0.136 0.135 0.135 0.134 0.034 0.045 0.055 0.065 0.075 0.086 0.096 0.106 0.116 0.127 0.137 0.136 0.135 0.134 0.134

(c) Fourier transform convolution without zero padding

FIGURE 9.3-4. Wraparound error for Fourier transform convolution, upper left corner of processed image.

between direct and Fourier domain finite area convolution. The jaggedness of the plot, in this example, arises from discrete changes in J (64, 128, 256,...) as N increases. Fourier domain processing is more computationally efficient than direct processing for image convolution if the impulse response is sufficiently large. However, if the image to be processed is large, the relative computational advantage of Fourier domain processing diminishes. Also, there are attendant problems of computational

228

LINEAR PROCESSING TECHNIQUES

FIGURE 9.3-5. Comparison of direct and Fourier domain processing for finite-area convolution.

accuracy with large Fourier transforms. Both difficulties can be alleviated by a block-mode filtering technique in which a large image is separately processed in adjacent overlapped blocks (2, 7–9). Figure 9.3-6a illustrates the extraction of a NB × NB pixel block from the upper left corner of a large image array. After convolution with a L × L impulse response, the resulting M B × M B pixel block is placed in the upper left corner of an output

FIGURE 9.3-6. Geometric arrangement of blocks for block-mode filtering.

FOURIER TRANSFORM FILTERING

229

data array as indicated in Figure 9.3-6a. Next, a second block of N B × N B pixels is extracted from the input array to produce a second block of M B × M B output pixels that will lie adjacent to the first block. As indicated in Figure 9.3-6b, this second input block must be overlapped by (L – 1) pixels in order to generate an adjacent output block. The computational process then proceeds until all input blocks are filled along the first row. If a partial input block remains along the row, zero-value elements can be added to complete the block. Next, an input block, overlapped by (L –1) pixels with the first row blocks, is extracted to produce the first block of the second output row. The algorithm continues in this fashion until all output points are computed. A total of
O F = N + 2N log 2 N
2 2

(9.3-6)

operations is required for Fourier domain convolution over the full size image array. With block-mode filtering with N B × N B input pixel blocks, the required number of operations is
O B = R ( N B + 2NB log 2 N )
2 2 2

(9.3-7)

where R represents the largest integer value of the ratio N ⁄ ( N B + L – 1 ). Hunt (9) has determined the optimum block size as a function of the original image size and impulse response size.

9.4. FOURIER TRANSFORM FILTERING The discrete Fourier transform convolution processing algorithm of Section 9.3 is often utilized for computer simulation of continuous Fourier domain filtering. In this section we consider discrete Fourier transform filter design techniques. 9.4.1. Transfer Function Generation The first step in the discrete Fourier transform filtering process is generation of the discrete domain transfer function. For simplicity, the following discussion is limited to one-dimensional signals. The extension to two dimensions is straightforward. Consider a one-dimensional continuous signal f C ( x ) of wide extent which is bandlimited such that its Fourier transform f C ( ω ) is zero for ω greater than a cutoff frequency ω 0. This signal is to be convolved with a continuous impulse function h C ( x ) whose transfer function h C ( ω ) is also bandlimited to ω 0 . From Chapter 1 it is known that the convolution can be performed either in the spatial domain by the operation

230

LINEAR PROCESSING TECHNIQUES


gC ( x ) =

∫–∞ fC ( α )hC ( x – α ) dα

(9.4-1a)

or in the continuous Fourier domain by
1 g C ( x ) = ----2π


∫–∞ fC ( ω )hC ( ω ) exp { iωx } dω

(9.4-1b)

Chapter 7 has presented techniques for the discretization of the convolution integral of Eq. 9.4-1. In this process, the continuous impulse response function h C ( x ) must be truncated by spatial multiplication of a window function y(x) to produce the windowed impulse response b C ( x ) = h C ( x )y ( x )

(9.4-2)

where y(x) = 0 for x > T . The window function is designed to smooth the truncation effect. The resulting convolution integral is then approximated as gC ( x ) =

∫x – T

x+T

fC ( α )b C ( x – α ) dα

(9.4-3)

Next, the output signal g C ( x ) is sampled over 2J + 1 points at a resolution ∆ = π ⁄ ω 0, and the continuous integration is replaced by a quadrature summation at the same resolution ∆ , yielding the discrete representation j+K g C ( j∆ ) =

k=j–K



f C ( k∆ )b C [ ( j – k )∆ ]

(9.4-4)

where K is the nearest integer value of the ratio T ⁄ ∆. Computation of Eq. 9.4-4 by discrete Fourier transform processing requires formation of the discrete domain transfer function b D ( u ) . If the continuous domain impulse response function h C ( x ) is known analytically, the samples of the windowed impulse response function are inserted as the first L = 2K + 1 elements of a J-element sequence and the remaining J – L elements are set to zero. Thus, let b D ( p ) = b C ( – K ), …, b C ( 0 ), …, b C ( K ) , 0, …, 0             

(9.4-5)

L terms where 0 ≤ p ≤ P – 1. The terms of b D ( p ) can be extracted from the continuous impulse response function h C ( x ) and the window function by the sampling operation

FOURIER TRANSFORM FILTERING

231

b D ( p ) = y ( x )h C ( x )δ ( x – p∆ )

(9.4-6)

The next step in the discrete Fourier transform convolution algorithm is to perform a discrete Fourier transform of b D ( p ) over P points to obtain
1 b D ( u ) = ------P
P–1 p=1



 – 2πipu  b D ( p ) exp  -----------------   P 

(9.4-7)

where 0 ≤ u ≤ P – 1 . If the continuous domain transfer function hC ( ω ) is known analytically, then b D ( u ) can be obtained directly. It can be shown that
  1 --------------------------------b D ( u ) = ---------------- exp  – iπ ( L – 1 )  h C  2πu   P∆  2 P   4 Pπ bD( P – u ) = b * ( u ) D

(9.4-8a) (9.4-8b)

for u = 0, 1,..., P/2, where bC ( ω ) = h C ( ω ) * y ( ω )

(9.4-8c)

and y ( ω ) is the continuous domain Fourier transform of the window function y(x). If h C ( ω ) and y ( ω ) are known analytically, then, in principle, h C ( ω ) can be obtained by analytically performing the convolution operation of Eq. 9.4-8c and evaluating the resulting continuous function at points 2πu ⁄ P∆. In practice, the analytic convolution is often difficult to perform, especially in two dimensions. An alternative is to perform an analytic inverse Fourier transformation of the transfer function h C ( ω ) to obtain its continuous domain impulse response h C ( x ) and then form b D ( u ) from the steps of Eqs. 9.4-5 to 9.4-7. Still another alternative is to form b D ( u ) from h C ( ω ) according to Eqs. 9.4-8a and 9.4-8b, take its discrete inverse Fourier transform, window the resulting sequence, and then form b D ( u ) from Eq. 9.4-7. 9.4.2. Windowing Functions The windowing operation performed explicitly in the spatial domain according to Eq. 9.4-6 or implicitly in the Fourier domain by Eq. 9.4-8 is absolutely imperative if the wraparound error effect described in Section 9.3 is to be avoided. A common mistake in image filtering is to set the values of the discrete impulse response function arbitrarily equal to samples of the continuous impulse response function. The corresponding extended discrete impulse response function will generally possess nonzero elements in each of its J elements. That is, the length L of the discrete


impulse response embedded in the extended vector of Eq. 9.4-5 will implicitly be set equal to J. Therefore, all elements of the output filtering operation will be subject to wraparound error. A variety of window functions have been proposed for discrete linear filtering (10–12). Several of the most common are listed in Table 9.4-1 and sketched in Figure 9.4-1. Figure 9.4-2 shows plots of the transfer functions of these window functions. The window transfer functions consist of a main lobe and sidelobes whose peaks decrease in magnitude with increasing frequency. Examination of the structure of Eq. 9.4-8 indicates that the main lobe causes a loss in frequency response over the signal passband from 0 to ω 0 , while the sidelobes are responsible for an aliasing error because the windowed impulse response function b C ( ω ) is not bandlimited. A tapered window function reduces the magnitude of the sidelobes and consequently attenuates the aliasing error, but the main lobe becomes wider, causing the signal frequency response within the passband to be reduced. A design trade-off must be made between these complementary sources of error. Both sources of degradation can be reduced by increasing the truncation length of the windowed impulse response, but this strategy will either result in a shorter length output sequence or an increased number of computational operations.
TABLE 9.4-1. Window Functions^a

Rectangular:
$w(n) = 1$, for $0 \le n \le L - 1$

Bartlett (triangular):
$w(n) = \begin{cases} \dfrac{2n}{L-1} & 0 \le n \le \dfrac{L-1}{2} \\[6pt] 2 - \dfrac{2n}{L-1} & \dfrac{L-1}{2} \le n \le L - 1 \end{cases}$

Hanning:
$w(n) = \dfrac{1}{2}\left[ 1 - \cos\left( \dfrac{2\pi n}{L-1} \right) \right]$, for $0 \le n \le L - 1$

Hamming:
$w(n) = 0.54 - 0.46 \cos\left( \dfrac{2\pi n}{L-1} \right)$, for $0 \le n \le L - 1$

Blackman:
$w(n) = 0.42 - 0.5 \cos\left( \dfrac{2\pi n}{L-1} \right) + 0.08 \cos\left( \dfrac{4\pi n}{L-1} \right)$, for $0 \le n \le L - 1$

Kaiser:
$w(n) = \dfrac{I_0\left\{ \omega_a \left[ \left( \dfrac{L-1}{2} \right)^2 - \left( n - \dfrac{L-1}{2} \right)^2 \right]^{1/2} \right\}}{I_0\left\{ \omega_a \left( \dfrac{L-1}{2} \right) \right\}}$, for $0 \le n \le L - 1$

^a $I_0\{\cdot\}$ is the modified zeroth-order Bessel function of the first kind and $\omega_a$ is a design parameter.
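The window definitions of Table 9.4-1 map directly onto NumPy's built-in generators. The sketch below is illustrative only; the window length, the impulse response being windowed, and the Kaiser design parameter are assumptions, with the Kaiser β playing the role of $\omega_a (L-1)/2$ in the table's notation.

```python
import numpy as np

L = 31                                    # window length (L = 2K + 1), assumed
windows = {
    "rectangular": np.ones(L),
    "bartlett":    np.bartlett(L),
    "hanning":     np.hanning(L),
    "hamming":     np.hamming(L),
    "blackman":    np.blackman(L),
    "kaiser":      np.kaiser(L, 6.0),     # beta = 6.0 is an illustrative choice
}

# Windowing a truncated continuous impulse response per Eq. 9.4-2,
# b_C(x) = h_C(x) y(x), with samples taken at resolution delta.
delta = 0.1
x = (np.arange(L) - (L - 1) / 2) * delta
h_c = np.sinc(x)                           # example impulse response (assumed)
b_c = h_c * windows["hamming"]             # windowed impulse response
```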


FIGURE 9.4-1. One-dimensional window functions.

9.4.3. Discrete Domain Transfer Functions

In practice, it is common to define the discrete domain transfer function directly in the discrete Fourier transform frequency space. The following are definitions of several widely used transfer functions for an $N \times N$ pixel image. Applications of these filters are presented in Chapter 10.

1. Zonal low-pass filter:
$$H(u, v) = 1 \quad \begin{cases} 0 \le u \le C-1 & \text{and } 0 \le v \le C-1 \\ 0 \le u \le C-1 & \text{and } N+1-C \le v \le N-1 \\ N+1-C \le u \le N-1 & \text{and } 0 \le v \le C-1 \\ N+1-C \le u \le N-1 & \text{and } N+1-C \le v \le N-1 \end{cases} \tag{9.4-9a}$$

$$H(u, v) = 0 \quad \text{otherwise} \tag{9.4-9b}$$

where $C$ is the filter cutoff frequency for $0 < C \le 1 + N/2$. Figure 9.4-3 illustrates the low-pass filter zones.
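A minimal sketch of Eq. 9.4-9 follows, assuming an N × N DFT-indexed frequency plane; the function name and the example cutoff are illustrative.

```python
import numpy as np

def zonal_lowpass(N, C):
    # H(u, v) = 1 when both u and v lie within C - 1 of the zero-frequency
    # index or of its alias at N (Eq. 9.4-9a); H(u, v) = 0 otherwise.
    u = np.arange(N)
    in_band = (u <= C - 1) | (u >= N + 1 - C)
    return (in_band[:, None] & in_band[None, :]).astype(float)

# Applied by pointwise multiplication in the DFT domain:
# G = np.real(np.fft.ifft2(np.fft.fft2(F) * zonal_lowpass(512, 64)))
```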


FIGURE 9.4-2. Transfer functions of one-dimensional window functions: (a) rectangular, (b) triangular, (c) Hanning, (d) Hamming, (e) Blackman.

2. Zonal high-pass filter:

$$H(0, 0) = 1 \tag{9.4-10a}$$

$$H(u, v) = 0 \quad \begin{cases} 0 \le u \le C-1 & \text{and } 0 \le v \le C-1 \\ 0 \le u \le C-1 & \text{and } N+1-C \le v \le N-1 \\ N+1-C \le u \le N-1 & \text{and } 0 \le v \le C-1 \\ N+1-C \le u \le N-1 & \text{and } N+1-C \le v \le N-1 \end{cases} \tag{9.4-10b}$$

$$H(u, v) = 1 \quad \text{otherwise} \tag{9.4-10c}$$


FIGURE 9.4-3. Zonal filter transfer function definition.

3. Gaussian filter:

$$H(u, v) = G(u, v) \quad \begin{cases} 0 \le u \le N/2 & \text{and } 0 \le v \le N/2 \\ 0 \le u \le N/2 & \text{and } 1 + N/2 \le v \le N-1 \\ 1 + N/2 \le u \le N-1 & \text{and } 0 \le v \le N/2 \\ 1 + N/2 \le u \le N-1 & \text{and } 1 + N/2 \le v \le N-1 \end{cases} \tag{9.4-11a}$$

where

$$G(u, v) = \exp\left\{ -\frac{1}{2}\left[ (s_u u)^2 + (s_v v)^2 \right] \right\} \tag{9.4-11b}$$

and $s_u$ and $s_v$ are the Gaussian filter spread factors.


4. Butterworth low-pass filter:

$$H(u, v) = B(u, v) \quad \begin{cases} 0 \le u \le N/2 & \text{and } 0 \le v \le N/2 \\ 0 \le u \le N/2 & \text{and } 1 + N/2 \le v \le N-1 \\ 1 + N/2 \le u \le N-1 & \text{and } 0 \le v \le N/2 \\ 1 + N/2 \le u \le N-1 & \text{and } 1 + N/2 \le v \le N-1 \end{cases} \tag{9.4-12a}$$

where

$$B(u, v) = \frac{1}{1 + \left[ \dfrac{(u^2 + v^2)^{1/2}}{C} \right]^{2n}} \tag{9.4-12b}$$

where the integer variable $n$ is the order of the filter. The Butterworth low-pass filter provides an attenuation of 50% at the cutoff frequency $C = (u^2 + v^2)^{1/2}$.

5. Butterworth high-pass filter:

$$H(u, v) = B(u, v) \quad \begin{cases} 0 \le u \le N/2 & \text{and } 0 \le v \le N/2 \\ 0 \le u \le N/2 & \text{and } 1 + N/2 \le v \le N-1 \\ 1 + N/2 \le u \le N-1 & \text{and } 0 \le v \le N/2 \\ 1 + N/2 \le u \le N-1 & \text{and } 1 + N/2 \le v \le N-1 \end{cases} \tag{9.4-13a}$$

where

$$B(u, v) = \frac{1}{1 + \left[ \dfrac{C}{(u^2 + v^2)^{1/2}} \right]^{2n}} \tag{9.4-13b}$$

Figure 9.4-4 shows the transfer functions of zonal and Butterworth low- and high-pass filters for a 512 × 512 pixel image.
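A minimal sketch of Eqs. 9.4-12 and 9.4-13 follows. Frequencies are measured from the nearest zero-frequency corner of the DFT plane so that the four zones of Eq. 9.4-12a are handled by folding the indices; function names and parameter values are illustrative.

```python
import numpy as np

def radial_frequency(N):
    u = np.minimum(np.arange(N), N - np.arange(N))      # folded frequency index
    return np.hypot(*np.meshgrid(u, u, indexing="ij"))  # (u^2 + v^2)^(1/2)

def butterworth_lowpass(N, C, n=1):                      # Eq. 9.4-12
    return 1.0 / (1.0 + (radial_frequency(N) / C) ** (2 * n))

def butterworth_highpass(N, C, n=1):                     # Eq. 9.4-13
    r = np.maximum(radial_frequency(N), 1e-12)           # avoid division by zero at dc
    return 1.0 / (1.0 + (C / r) ** (2 * n))

# H = butterworth_lowpass(512, 64)    # 50% attenuation at the cutoff radius 64
```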

FIGURE 9.4-4. Zonal and Butterworth low- and high-pass transfer functions; 512 × 512 images; cutoff frequency = 64: (a) zonal low-pass, (b) Butterworth low-pass, (c) zonal high-pass, (d) Butterworth high-pass.

9.5. SMALL GENERATING KERNEL CONVOLUTION

It is possible to perform convolution on an $N \times N$ image array $F(j, k)$ with an arbitrary $L \times L$ impulse response array $H(j, k)$ by a sequential technique called small generating kernel (SGK) convolution (13–16).

Figure 9.5-1 illustrates the decomposition process in which an $L \times L$ prototype impulse response array $H(j, k)$ is sequentially decomposed into $3 \times 3$ pixel SGKs according to the relation

$$\hat{H}(j, k) = K_1(j, k) \circledast K_2(j, k) \circledast \cdots \circledast K_Q(j, k) \tag{9.5-1}$$

where $\hat{H}(j, k)$ is the synthesized impulse response array, the symbol $\circledast$ denotes centered two-dimensional finite-area convolution, as defined by Eq. 7.1-14, and $K_i(j, k)$ is the $i$th $3 \times 3$ pixel SGK of the decomposition, where $Q = (L - 1)/2$. The SGK convolution technique can be extended to larger SGK kernels. Generally, the SGK synthesis of Eq. 9.5-1 is not exact. Techniques have been developed for choosing the SGKs to minimize the mean-square error between $\hat{H}(j, k)$ and $H(j, k)$ (13).


FIGURE 9.5-1. Cascade decomposition of a two-dimensional impulse response array into small generating kernels.

Two-dimensional convolution can be performed sequentially without approximation error by utilizing the singular-value decomposition technique described in Appendix A1.2 in conjunction with the SGK decimation (17–19). With this method, called SVD/SGK convolution, the impulse response array $H(j, k)$ is regarded as a matrix $\mathbf{H}$. Suppose that $\mathbf{H}$ is orthogonally separable such that it can be expressed in the outer product form

$$\mathbf{H} = \mathbf{a}\mathbf{b}^T \tag{9.5-2}$$

where $\mathbf{a}$ and $\mathbf{b}$ are column and row operator vectors, respectively. Then, the two-dimensional convolution operation can be performed by first convolving the columns of $F(j, k)$ with the impulse response sequence $a(j)$ corresponding to the vector $\mathbf{a}$, and then convolving the rows of that resulting array with the sequence $b(k)$ corresponding to the vector $\mathbf{b}$.
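The separability property of Eq. 9.5-2 can be sketched directly with one-dimensional convolutions; this is not the book's SVD/SGK design procedure, and the kernel and image below are illustrative.

```python
import numpy as np

def separable_convolve(F, a, b):
    # Column pass with a(j), then row pass with b(k); equivalent (up to
    # boundary handling) to 2-D convolution of F with np.outer(a, b).
    cols = np.apply_along_axis(lambda c: np.convolve(c, a, mode="same"), 0, F)
    return np.apply_along_axis(lambda r: np.convolve(r, b, mode="same"), 1, cols)

a = np.array([1.0, 2.0, 1.0]) / 4.0        # column operator vector (assumed)
b = np.array([1.0, 2.0, 1.0]) / 4.0        # row operator vector (assumed)
F = np.random.rand(64, 64)                 # placeholder image array
G = separable_convolve(F, a, b)
```

For a nonseparable kernel, np.linalg.svd supplies singular values and vectors that play the roles of $s_i$, $\mathbf{a}_i$, and $\mathbf{b}_i$ in Eq. 9.5-3, and the individual channel outputs are summed.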
If $\mathbf{H}$ is not separable, the matrix can be expressed as a sum of separable matrices by the singular-value decomposition by which

$$\mathbf{H} = \sum_{i=1}^{R} \mathbf{H}_i \tag{9.5-3a}$$

$$\mathbf{H}_i = s_i \mathbf{a}_i \mathbf{b}_i^T \tag{9.5-3b}$$

where $R \ge 1$ is the rank of $\mathbf{H}$ and $s_i$ is the $i$th singular value of $\mathbf{H}$. The vectors $\mathbf{a}_i$ and $\mathbf{b}_i$ are the $L \times 1$ eigenvectors of $\mathbf{H}\mathbf{H}^T$ and $\mathbf{H}^T\mathbf{H}$, respectively. Each eigenvector $\mathbf{a}_i$ and $\mathbf{b}_i$ of Eq. 9.5-3 can be considered to be a one-dimensional sequence, which can be decimated by a small generating kernel expansion as

$$a_i(j) = c_i \left[ a_{i1}(j) * \cdots * a_{iq}(j) * \cdots * a_{iQ}(j) \right] \tag{9.5-4a}$$

$$b_i(k) = r_i \left[ b_{i1}(k) * \cdots * b_{iq}(k) * \cdots * b_{iQ}(k) \right] \tag{9.5-4b}$$

where $a_{iq}(j)$ and $b_{iq}(k)$ are $3 \times 1$ impulse response sequences corresponding to the $i$th singular-value channel and the $q$th SGK expansion. The terms $c_i$ and $r_i$ are column and row gain constants. They are equal to the sum of the elements of their respective sequences if the sum is nonzero, and equal to the sum of the magnitudes


FIGURE 9.5-2. Nonseparable SVD/SGK expansion.

otherwise. The former case applies for a unit-gain filter impulse response, while the latter case applies for a differentiating filter. As a result of the linearity of the SVD expansion of Eq. 9.5-3b, the large size impulse response array Hi ( j, k ) corresponding to the matrix Hi of Eq. 9.5-3a can be synthesized by sequential 3 × 3 convolutions according to the relation
$$H_i(j, k) = r_i c_i \left[ K_{i1}(j, k) * \cdots * K_{iq}(j, k) * \cdots * K_{iQ}(j, k) \right] \tag{9.5-5}$$

where K iq ( j, k ) is the qth SGK of the ith SVD channel. Each K iq ( j, k ) is formed by an outer product expansion of a pair of the a iq ( j ) and b iq ( k ) terms of Eq. 9.5-4. The ordering is important only for low-precision computation when roundoff error becomes a consideration. Figure 9.5-2 is the flowchart for SVD/SGK convolution. The weighting terms in the figure are
$$W_i = s_i r_i c_i \tag{9.5-6}$$

Reference 19 describes the design procedure for computing the $K_{iq}(j, k)$.

REFERENCES
1. W. K. Pratt, “Generalized Wiener Filtering Computation Techniques,” IEEE Trans. Computers, C-21, 7, July 1972, 636–641.
2. T. G. Stockham, Jr., “High Speed Convolution and Correlation,” Proc. Spring Joint Computer Conference, 1966, 229–233.
3. W. M. Gentleman and G. Sande, “Fast Fourier Transforms for Fun and Profit,” Proc. Fall Joint Computer Conference, 1966, 563–578.


4. W. K. Pratt, “Vector Formulation of Two-Dimensional Signal Processing Operations,” Computer Graphics and Image Processing, 4, 1, March 1975, 1–24.
5. B. R. Hunt, “A Matrix Theory Proof of the Discrete Convolution Theorem,” IEEE Trans. Audio and Electroacoustics, AU-19, 4, December 1973, 285–288.
6. W. K. Pratt, “Transform Domain Signal Processing Techniques,” Proc. National Electronics Conference, Chicago, 1974.
7. H. D. Helms, “Fast Fourier Transform Method of Computing Difference Equations and Simulating Filters,” IEEE Trans. Audio and Electroacoustics, AU-15, 2, June 1967, 85–90.
8. M. P. Ekstrom and V. R. Algazi, “Optimum Design of Two-Dimensional Nonrecursive Digital Filters,” Proc. 4th Asilomar Conference on Circuits and Systems, Pacific Grove, CA, November 1970.
9. B. R. Hunt, “Computational Considerations in Digital Image Enhancement,” Proc. Conference on Two-Dimensional Signal Processing, University of Missouri, Columbia, MO, October 1971.
10. A. V. Oppenheim and R. W. Schafer, Digital Signal Processing, Prentice Hall, Englewood Cliffs, NJ, 1975.
11. R. B. Blackman and J. W. Tukey, The Measurement of Power Spectra, Dover Publications, New York, 1958.
12. J. F. Kaiser, “Digital Filters,” Chapter 7 in Systems Analysis by Digital Computer, F. F. Kuo and J. F. Kaiser, Eds., Wiley, New York, 1966.
13. J. F. Abramatic and O. D. Faugeras, “Design of Two-Dimensional FIR Filters from Small Generating Kernels,” Proc. IEEE Conference on Pattern Recognition and Image Processing, Chicago, May 1978.
14. W. K. Pratt, J. F. Abramatic, and O. D. Faugeras, “Method and Apparatus for Improved Digital Image Processing,” U.S. patent 4,330,833, May 18, 1982.
15. J. F. Abramatic and O. D. Faugeras, “Sequential Convolution Techniques for Image Filtering,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-30, 1, February 1982, 1–10.
16. J. F. Abramatic and O. D. Faugeras, “Correction to Sequential Convolution Techniques for Image Filtering,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-30, 2, April 1982, 346.
17. W. K. Pratt, “Intelligent Image Processing Display Terminal,” Proc. SPIE, 199, August 1979, 189–194.
18. J. F. Abramatic and S. U. Lee, “Singular Value Decomposition of 2-D Impulse Responses,” Proc. International Conference on Acoustics, Speech, and Signal Processing, Denver, CO, April 1980, 749–752.
19. S. U. Lee, “Design of SVD/SGK Convolution Filters for Image Processing,” Report USCIPI 950, University Southern California, Image Processing Institute, January 1980.


PART 4 IMAGE IMPROVEMENT
The use of digital processing techniques for image improvement has received much interest with the publicity given to applications in space imagery and medical research. Other applications include image improvement for photographic surveys and industrial radiographic analysis. Image improvement is a term coined to denote three types of image manipulation processes: image enhancement, image restoration, and geometrical image modification. Image enhancement entails operations that improve the appearance to a human viewer, or operations to convert an image to a format better suited to machine processing. Image restoration has commonly been defined as the modification of an observed image in order to compensate for defects in the imaging system that produced the observed image. Geometrical image modification includes image magnification, minification, rotation, and nonlinear spatial warping. Chapter 10 describes several techniques of monochrome and color image enhancement. The chapters that follow develop models for image formation and restoration, and present methods of point and spatial image restoration. The final chapter of this part considers geometrical image modification.



10
IMAGE ENHANCEMENT

Image enhancement processes consist of a collection of techniques that seek to improve the visual appearance of an image or to convert the image to a form better suited for analysis by a human or a machine. In an image enhancement system, there is no conscious effort to improve the fidelity of a reproduced image with regard to some ideal form of the image, as is done in image restoration. Actually, there is some evidence to indicate that often a distorted image, for example, an image with amplitude overshoot and undershoot about its object edges, is more subjectively pleasing than a perfectly reproduced original. For image analysis purposes, the definition of image enhancement stops short of information extraction. As an example, an image enhancement system might emphasize the edge outline of objects in an image by high-frequency filtering. This edge-enhanced image would then serve as an input to a machine that would trace the outline of the edges, and perhaps make measurements of the shape and size of the outline. In this application, the image enhancement processor would emphasize salient features of the original image and simplify the processing task of a dataextraction machine. There is no general unifying theory of image enhancement at present because there is no general standard of image quality that can serve as a design criterion for an image enhancement processor. Consideration is given here to a variety of techniques that have proved useful for human observation improvement and image analysis.

10.1. CONTRAST MANIPULATION

One of the most common defects of photographic or electronic images is poor contrast resulting from a reduced, and perhaps nonlinear, image amplitude range. Image

FIGURE 10.1-1. Continuous and quantized image contrast enhancement.

contrast can often be improved by amplitude rescaling of each pixel (1,2). Figure 10.1-1a illustrates a transfer function for contrast enhancement of a typical continuous amplitude low-contrast image. For continuous amplitude images, the transfer function operator can be implemented by photographic techniques, but it is often difficult to realize an arbitrary transfer function accurately. For quantized amplitude images, implementation of the transfer function is a relatively simple task. However, in the design of the transfer function operator, consideration must be given to the effects of amplitude quantization. With reference to Figure 10.1-1b, suppose that an original image is quantized to J levels, but it occupies a smaller range. The output image is also assumed to be restricted to J levels, and the mapping is linear. In the mapping strategy indicated in Figure 10.1-1b, the output level chosen is that level closest to the exact mapping of an input level. It is obvious from the diagram that the output image will have unoccupied levels within its range, and some of the gray scale transitions will be larger than in the original image. The latter effect may result in noticeable gray scale contouring. If the output image is quantized to more levels than the input image, it is possible to approach a linear placement of output levels, and hence, decrease the gray scale contouring effect.


FIGURE 10.1-2. Image scaling methods: (a) linear image scaling, (b) linear image scaling with clipping, (c) absolute value scaling.

10.1.1. Amplitude Scaling

A digitally processed image may occupy a range different from the range of the original image. In fact, the numerical range of the processed image may encompass negative values, which cannot be mapped directly into a light intensity range. Figure 10.1-2 illustrates several possibilities of scaling an output image back into the domain of values occupied by the original image. By the first technique, the processed image is linearly mapped over its entire range, while by the second technique, the extreme amplitude values of the processed image are clipped to maximum and minimum limits. The second technique is often subjectively preferable, especially for images in which a relatively small number of pixels exceed the limits. Contrast enhancement algorithms often possess an option to clip a fixed percentage of the amplitude values on each end of the amplitude scale. In medical image enhancement applications, the contrast modification operation shown in Figure 10.1-2b, for $a \ge 0$, is called a window-level transformation. The window value is the width of the linear slope, $b - a$; the level is located at the midpoint $c$ of the slope line. The third technique of amplitude scaling, shown in Figure 10.1-2c, utilizes an absolute value transformation for visualizing an image with negatively valued pixels. This is a


FIGURE 10.1-3. Image scaling of the Q component of the YIQ representation of the dolls_gamma color image: (a) linear, full range, −0.147 to 0.169; (b) clipping, 0.000 to 0.169; (c) absolute value, 0.000 to 0.169.

useful transformation for systems that utilize the two's complement numbering convention for amplitude representation. In such systems, if the amplitude of a pixel overshoots +1.0 (maximum luminance white) by a small amount, it wraps around by the same amount to –1.0, which is also maximum luminance white. Similarly, pixel undershoots remain near black. Figure 10.1-3 illustrates the amplitude scaling of the Q component of the YIQ transformation, shown in Figure 3.5-14, of a monochrome image containing negative pixels. Figure 10.1-3a presents the result of amplitude scaling with the linear function of Figure 10.1-2a over the amplitude range of the image. In this example, the most negative pixels are mapped to black (0.0), and the most positive pixels are mapped to white (1.0). Amplitude scaling in which negative value pixels are clipped to zero is shown in Figure 10.1-3b. The black regions of the image correspond to


FIGURE 10.1-4. Window-level contrast stretching of an earth satellite image: (a) original, (b) original histogram, (c) min. clip = 0.17, max. clip = 0.64, (d) enhancement histogram, (e) min. clip = 0.24, max. clip = 0.35, (f) enhancement histogram.


negative pixel values of the Q component. Absolute value scaling is presented in Figure 10.1-3c. Figure 10.1-4 shows examples of contrast stretching of a poorly digitized original satellite image along with gray scale histograms of the original and enhanced pictures. In Figure 10.1-4c, the clip levels are set at the histogram limits of the original, while in Figure 10.1-4e, the clip levels truncate 5% of the original image upper and lower level amplitudes. It is readily apparent from the histogram of Figure 10.1-4f that the contrast-stretched image of Figure 10.1-4e has many unoccupied amplitude levels. Gray scale contouring is at the threshold of visibility.
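A minimal sketch of the scaling options of Figure 10.1-2 follows, assuming images normalized to the range 0.0 to 1.0; the function names, the placeholder image, and the 5% clipping example are illustrative.

```python
import numpy as np

def full_range_scale(F):                     # Figure 10.1-2a: linear, full range
    return (F - F.min()) / (F.max() - F.min())

def window_level(F, a, b):                   # Figure 10.1-2b: window = b - a,
    return np.clip((F - a) / (b - a), 0.0, 1.0)   # level at the slope midpoint

def absolute_value_scale(F):                 # Figure 10.1-2c
    return np.abs(F) / np.abs(F).max()

F = np.random.randn(256, 256)                # placeholder image with negative values
a, b = np.percentile(F, [5.0, 95.0])         # clip 5% of pixels at each end
G = window_level(F, a, b)
```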

10.1.2. Contrast Modification

Section 10.1.1 dealt with amplitude scaling of images that do not properly utilize the dynamic range of a display; they may lie partly outside the dynamic range or occupy only a portion of the dynamic range. In this section, attention is directed to point transformations that modify the contrast of an image within a display's dynamic range. Figure 10.1-5a contains an original image of a jet aircraft that has been digitized to 256 gray levels and numerically scaled over the range of 0.0 (black) to 1.0 (white).

FIGURE 10.1-5. Window-level contrast stretching of the jet_mon image: (a) original, (b) original histogram, (c) transfer function, (d) contrast stretched.


FIGURE 10.1-6. Square and cube contrast modification of the jet_mon image: (a) square function, (b) square output, (c) cube function, (d) cube output.

The histogram of the image is shown in Figure 10.1-5b. Examination of the histogram of the image reveals that the image contains relatively few low- or high-amplitude pixels. Consequently, applying the window-level contrast stretching function of Figure 10.1-5c results in the image of Figure 10.1-5d, which possesses better visual contrast but does not exhibit noticeable visual clipping. Consideration will now be given to several nonlinear point transformations, some of which will be seen to improve visual contrast, while others clearly impair visual contrast. Figures 10.1-6 and 10.1-7 provide examples of power law point transformations in which the processed image is defined by

$$G(j, k) = [F(j, k)]^p \tag{10.1-1}$$


FIGURE 10.1-7. Square root and cube root contrast modification of the jet_mon image: (a) square root function, (b) square root output, (c) cube root function, (d) cube root output.

where $0.0 \le F(j, k) \le 1.0$ represents the original image and $p$ is the power law variable. It is important that the amplitude limits of Eq. 10.1-1 be observed; processing of the integer code (e.g., 0 to 255) by Eq. 10.1-1 will give erroneous results. The square function provides the best visual result. The rubber band transfer function shown in Figure 10.1-8a provides a simple piecewise linear approximation to the power law curves. It is often useful in interactive enhancement machines in which the inflection point is interactively placed. The Gaussian error function behaves like a square function for low-amplitude pixels and like a square root function for high-amplitude pixels. It is defined as

$$G(j, k) = \frac{\operatorname{erf}\left\{ \dfrac{F(j, k) - 0.5}{a\sqrt{2}} \right\} + \operatorname{erf}\left\{ \dfrac{0.5}{a\sqrt{2}} \right\}}{2\,\operatorname{erf}\left\{ \dfrac{0.5}{a\sqrt{2}} \right\}} \tag{10.1-2a}$$


FIGURE 10.1-8. Rubber-band contrast modification of the jet_mon image: (a) rubber-band function, (b) rubber-band output.

where

$$\operatorname{erf}\{x\} = \frac{2}{\sqrt{\pi}} \int_0^x \exp\{-y^2\}\, dy \tag{10.1-2b}$$

and $a$ is the standard deviation of the Gaussian distribution. The logarithm function is useful for scaling image arrays with a very wide dynamic range. The logarithmic point transformation is given by

$$G(j, k) = \frac{\log_e\{1.0 + aF(j, k)\}}{\log_e\{2.0\}} \tag{10.1-3}$$

under the assumption that 0.0 ≤ F ( j, k ) ≤ 1.0, where a is a positive scaling factor. Figure 8.2-4 illustrates the logarithmic transformation applied to an array of Fourier transform coefficients. There are applications in image processing in which monotonically decreasing and nonmonotonic amplitude scaling is useful. For example, contrast reverse and contrast inverse transfer functions, as illustrated in Figure 10.1-9, are often helpful in visualizing detail in dark areas of an image. The reverse function is defined as

G ( j, k ) = 1.0 – F ( j, k )

(10.1-4)


FIGURE 10.1-9. Reverse and inverse function contrast modification of the jet_mon image: (a) reverse function, (b) reverse function output, (c) inverse function, (d) inverse function output.

where $0.0 \le F(j, k) \le 1.0$. The inverse function

$$G(j, k) = 1.0 \quad \text{for } 0.0 \le F(j, k) < 0.1 \tag{10.1-5a}$$

$$G(j, k) = \frac{0.1}{F(j, k)} \quad \text{for } 0.1 \le F(j, k) \le 1.0 \tag{10.1-5b}$$

is clipped at the 10% input amplitude level to maintain the output amplitude within the range of unity. Amplitude-level slicing, as illustrated in Figure 10.1-10, is a useful interactive tool for visually analyzing the spatial distribution of pixels of certain amplitude within an image. With the function of Figure 10.1-10a, all pixels within the amplitude passband are rendered maximum white in the output, and pixels outside the passband are rendered black. Pixels outside the amplitude passband are displayed in their original state with the function of Figure 10.1-10b.


FIGURE 10.1-10. Level slicing contrast modification functions.
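Several of the point transformations of this section can be sketched as follows, assuming images normalized to 0.0–1.0 as required by Eq. 10.1-1; the function names and parameter values are illustrative.

```python
import numpy as np

def power_law(F, p):                          # Eq. 10.1-1
    return np.clip(F, 0.0, 1.0) ** p

def log_transform(F, a=10.0):                 # Eq. 10.1-3
    return np.log(1.0 + a * F) / np.log(2.0)

def reverse(F):                               # Eq. 10.1-4
    return 1.0 - F

def inverse(F):                               # Eq. 10.1-5, clipped at the 10% level
    return np.where(F < 0.1, 1.0, 0.1 / np.maximum(F, 0.1))

def level_slice(F, low, high):                # Figure 10.1-10a passband slice
    return np.where((F >= low) & (F <= high), 1.0, 0.0)

F = np.linspace(0.0, 1.0, 256).reshape(16, 16)    # placeholder image
G = power_law(F, 2.0)                             # square-law contrast modification
```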

10.2. HISTOGRAM MODIFICATION

The luminance histogram of a typical natural scene that has been linearly quantized is usually highly skewed toward the darker levels; a majority of the pixels possess a luminance less than the average. In such images, detail in the darker regions is often not perceptible. One means of enhancing these types of images is a technique called histogram modification, in which the original image is rescaled so that the histogram of the enhanced image follows some desired form. Andrews, Hall, and others (3–5) have produced enhanced imagery by a histogram equalization process for which the histogram of the enhanced image is forced to be uniform. Frei (6) has explored the use of histogram modification procedures that produce enhanced images possessing exponential or hyperbolic-shaped histograms. Ketcham (7) and Hummel (8) have demonstrated improved results by an adaptive histogram modification procedure.


FIGURE 10.2-1. Approximate gray level histogram equalization with unequal number of quantization levels.

10.2.1. Nonadaptive Histogram Modification

Figure 10.2-1 gives an example of histogram equalization. In the figure, $H_F(c)$ for c = 1, 2,..., C, represents the fractional number of pixels in an input image whose amplitude is quantized to the cth reconstruction level. Histogram equalization seeks to produce an output image field G by point rescaling such that the normalized gray-level histogram $H_G(d) = 1/D$ for d = 1, 2,..., D. In the example of Figure 10.2-1, the number of output levels is set at one-half of the number of input levels. The scaling algorithm is developed as follows. The average value of the histogram is computed. Then, starting at the lowest gray level of the original, the pixels in the quantization bins are combined until the sum is closest to the average. All of these pixels are then rescaled to the new first reconstruction level at the midpoint of the enhanced image first quantization bin. The process is repeated for higher-value gray levels. If the number of reconstruction levels of the original image is large, it is possible to rescale the gray levels so that the enhanced image histogram is almost constant. It should be noted that the number of reconstruction levels of the enhanced image must be less than the number of levels of the original image to provide proper gray scale redistribution if all pixels in each quantization level are to be treated similarly. This process results in a somewhat larger quantization error. It is possible to perform the gray scale histogram equalization process with the same number of gray levels for the original and enhanced images, and still achieve a constant histogram of the enhanced image, by randomly redistributing pixels from input to output quantization bins.


The histogram modification process can be considered to be a monotonic point transformation $g_d = T\{f_c\}$ for which the input amplitude variable $f_1 \le f_c \le f_C$ is mapped into an output variable $g_1 \le g_d \le g_D$ such that the output probability distribution $P_R\{g_d = b_d\}$ follows some desired form for a given input probability distribution $P_R\{f_c = a_c\}$, where $a_c$ and $b_d$ are reconstruction values of the $c$th and $d$th levels. Clearly, the input and output probability distributions must each sum to unity. Thus,

$$\sum_{c=1}^{C} P_R\{f_c = a_c\} = 1 \tag{10.2-1a}$$

$$\sum_{d=1}^{D} P_R\{g_d = b_d\} = 1 \tag{10.2-1b}$$

Furthermore, the cumulative distributions must equate for any input index $c$. That is, the probability that pixels in the input image have an amplitude less than or equal to $a_c$ must be equal to the probability that pixels in the output image have amplitude less than or equal to $b_d$, where $b_d = T\{a_c\}$, because the transformation is monotonic. Hence

$$\sum_{n=1}^{d} P_R\{g_n = b_n\} = \sum_{m=1}^{c} P_R\{f_m = a_m\} \tag{10.2-2}$$

The summation on the right is the cumulative probability distribution of the input image. For a given image, the cumulative distribution is replaced by the cumulative histogram to yield the relationship

$$\sum_{n=1}^{d} P_R\{g_n = b_n\} = \sum_{m=1}^{c} H_F(m) \tag{10.2-3}$$

Equation 10.2-3 now must be inverted to obtain a solution for $g_d$ in terms of $f_c$. In general, this is a difficult or impossible task to perform analytically, but certainly possible by numerical methods. The resulting solution is simply a table that indicates the output image level for each input image level. The histogram transformation can be obtained in approximate form by replacing the discrete probability distributions of Eq. 10.2-2 by continuous probability densities. The resulting approximation is

$$\int_{g_{\min}}^{g} p_g(g)\, dg = \int_{f_{\min}}^{f} p_f(f)\, df \tag{10.2-4}$$

TABLE 10.2-1. Histogram Modification Transfer Functions^a

Uniform:
Output probability density model: $p_g(g) = \dfrac{1}{g_{\max} - g_{\min}}$, for $g_{\min} \le g \le g_{\max}$
Transfer function: $g = (g_{\max} - g_{\min}) P_f(f) + g_{\min}$

Exponential:
Output probability density model: $p_g(g) = \alpha \exp\{-\alpha(g - g_{\min})\}$, for $g \ge g_{\min}$
Transfer function: $g = g_{\min} - \dfrac{1}{\alpha} \ln\{1 - P_f(f)\}$

Rayleigh:
Output probability density model: $p_g(g) = \dfrac{g - g_{\min}}{\alpha^2} \exp\left\{ -\dfrac{(g - g_{\min})^2}{2\alpha^2} \right\}$, for $g \ge g_{\min}$
Transfer function: $g = g_{\min} + \left[ 2\alpha^2 \ln\left\{ \dfrac{1}{1 - P_f(f)} \right\} \right]^{1/2}$

Hyperbolic (cube root):
Output probability density model: $p_g(g) = \dfrac{1}{3} \dfrac{g^{-2/3}}{g_{\max}^{1/3} - g_{\min}^{1/3}}$
Transfer function: $g = \left[ \left( g_{\max}^{1/3} - g_{\min}^{1/3} \right) P_f(f) + g_{\min}^{1/3} \right]^3$

Hyperbolic (logarithmic):
Output probability density model: $p_g(g) = \dfrac{1}{g\left[ \ln\{g_{\max}\} - \ln\{g_{\min}\} \right]}$
Transfer function: $g = g_{\min} \left( \dfrac{g_{\max}}{g_{\min}} \right)^{P_f(f)}$

^a The cumulative probability distribution $P_f(f)$ of the input image is approximated by its cumulative histogram: $P_f(f) \approx \sum_{m=0}^{j} H_F(m)$.

FIGURE 10.2-2. Histogram equalization of the projectile image: (a) original, (b) original histogram, (c) transfer function, (d) enhanced, (e) enhanced histogram.


where $p_f(f)$ and $p_g(g)$ are the probability densities of $f$ and $g$, respectively. The integral on the right is the cumulative distribution function $P_f(f)$ of the input variable $f$. Hence,

$$\int_{g_{\min}}^{g} p_g(g)\, dg = P_f(f) \tag{10.2-5}$$

In the special case for which the output density is forced to be the uniform density,

$$p_g(g) = \frac{1}{g_{\max} - g_{\min}} \tag{10.2-6}$$

for $g_{\min} \le g \le g_{\max}$, the histogram equalization transfer function becomes

$$g = (g_{\max} - g_{\min}) P_f(f) + g_{\min} \tag{10.2-7}$$
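A minimal sketch of Eq. 10.2-7 follows, using the cumulative histogram as the approximation to $P_f(f)$; the integer image coding, number of levels, and output range are assumptions.

```python
import numpy as np

def equalize(F, levels=256, g_min=0.0, g_max=1.0):
    hist, _ = np.histogram(F, bins=levels, range=(0, levels))
    cdf = np.cumsum(hist) / F.size              # cumulative histogram, P_f(f)
    lut = (g_max - g_min) * cdf + g_min         # Eq. 10.2-7, one entry per level
    return lut[F.astype(np.int64)]              # table lookup, as described in the text

F = (np.random.rand(128, 128) ** 2 * 255).astype(np.uint8)   # dark-skewed test image
G = equalize(F)
```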

Table 10.2-1 lists several output image histograms and their corresponding transfer functions. Figure 10.2-2 provides an example of histogram equalization for an x-ray of a projectile. The original image and its histogram are shown in Figure 10.2-2a and b, respectively. The transfer function of Figure 10.2-2c is equivalent to the cumulative histogram of the original image. In the histogram equalized result of Figure 10.2-2, ablating material from the projectile, not seen in the original, is clearly visible. The histogram of the enhanced image appears peaked, but close examination reveals that many gray level output values are unoccupied. If the high occupancy gray levels were to be averaged with their unoccupied neighbors, the resulting histogram would be much more uniform. Histogram equalization usually performs best on images with detail hidden in dark regions. Good-quality originals are often degraded by histogram equalization. As an example, Figure 10.2-3 shows the result of histogram equalization on the jet image. Frei (6) has suggested the histogram hyperbolization procedure listed in Table 10.2-1 and described in Figure 10.2-4. With this method, the input image histogram is modified by a transfer function such that the output image probability density is of hyperbolic form. Then the resulting gray scale probability density following the assumed logarithmic or cube root response of the photoreceptors of the eye model will be uniform. In essence, histogram equalization is performed after the cones of the retina.

10.2.2. Adaptive Histogram Modification

The histogram modification methods discussed in Section 10.2.1 involve application of the same transformation or mapping function to each pixel in an image. The mapping function is based on the histogram of the entire image. This process can be


FIGURE 10.2-3. Histogram equalization of the jet_mon image: (a) original, (b) transfer function, (c) histogram equalized.

made spatially adaptive by applying histogram modification to each pixel based on the histogram of pixels within a moving window neighborhood. This technique is obviously computationally intensive, as it requires histogram generation, mapping function computation, and mapping function application at each pixel. Pizer et al. (9) have proposed an adaptive histogram equalization technique in which histograms are generated only at a rectangular grid of points and the mappings at each pixel are generated by interpolating mappings of the four nearest grid points. Figure 10.2-5 illustrates the geometry. A histogram is computed at each grid point in a window about the grid point. The window dimension can be smaller or larger than the grid spacing. Let M00, M01, M10, M11 denote the histogram modification mappings generated at four neighboring grid points. The mapping to be applied at pixel F(j, k) is determined by a bilinear interpolation of the mappings of the four nearest grid points as given by
$$M = a[bM_{00} + (1 - b)M_{10}] + (1 - a)[bM_{01} + (1 - b)M_{11}] \tag{10.2-8a}$$


FIGURE 10.2-4. Histogram hyperbolization.

where

$$a = \frac{k - k_0}{k_1 - k_0} \tag{10.2-8b}$$

$$b = \frac{j - j_0}{j_1 - j_0} \tag{10.2-8c}$$

Pixels in the border region of the grid points are handled as special cases of Eq. 10.2-8. Equation 10.2-8 is best suited for general-purpose computer calculation.
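The bilinear blending of Eq. 10.2-8 can be sketched as below; the grid coordinates, pixel location, and the four mapping tables are hypothetical placeholders.

```python
import numpy as np

def blend_mappings(f, j, k, j0, j1, k0, k1, M00, M01, M10, M11):
    a = (k - k0) / (k1 - k0)                    # Eq. 10.2-8b
    b = (j - j0) / (j1 - j0)                    # Eq. 10.2-8c
    return (a * (b * M00[f] + (1 - b) * M10[f])
            + (1 - a) * (b * M01[f] + (1 - b) * M11[f]))   # Eq. 10.2-8a

levels = np.arange(256) / 255.0                 # four illustrative 256-entry mappings
M00, M01, M10, M11 = levels, levels ** 0.8, levels ** 1.2, levels ** 0.5
g = blend_mappings(f=128, j=37, k=90, j0=32, j1=64, k0=64, k1=128,
                   M00=M00, M01=M01, M10=M10, M11=M11)
```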

FIGURE 10.2-5. Array geometry for interpolative adaptive histogram modification. * Grid point; • pixel to be computed.


FIGURE 10.2-6. Nonadaptive and adaptive histogram equalization of the brainscan image: (a) original, (b) nonadaptive, (c) adaptive.

For parallel processors, it is often more efficient to use the histogram generated in the histogram window of Figure 10.2-5 and apply the resultant mapping function to all pixels in the mapping window of the figure. This process is then repeated at all grid points. At each pixel coordinate (j, k), the four histogram modified pixels obtained from the four overlapped mappings are combined by bilinear interpolation. Figure 10.2-6 presents a comparison between nonadaptive and adaptive histogram equalization of a monochrome image. In the adaptive histogram equalization example, the histogram window is 64 × 64 .

10.3. NOISE CLEANING

An image may be subject to noise and interference from several sources, including electrical sensor noise, photographic grain noise, and channel errors. These noise


effects can be reduced by classical statistical filtering techniques to be discussed in Chapter 12. Another approach, discussed in this section, is the application of ad hoc noise cleaning techniques. Image noise arising from a noisy sensor or channel transmission errors usually appears as discrete isolated pixel variations that are not spatially correlated. Pixels that are in error often appear visually to be markedly different from their neighbors. This observation is the basis of many noise cleaning algorithms (10–13). In this section we describe several linear and nonlinear techniques that have proved useful for noise reduction. Figure 10.3-1 shows two test images, which will be used to evaluate noise cleaning techniques. Figure 10.3-1b has been obtained by adding uniformly distributed noise to the original image of Figure 10.3-1a. In the impulse noise example of Figure 10.3-1c, maximum-amplitude pixels replace original image pixels in a spatially random manner.

FIGURE 10.3-1. Noisy test images derived from the peppers_mon image: (a) original, (b) original with uniform noise, (c) original with impulse noise.


10.3.1. Linear Noise Cleaning

Noise added to an image generally has a higher-spatial-frequency spectrum than the normal image components because of its spatial decorrelatedness. Hence, simple low-pass filtering can be effective for noise cleaning. Consideration will now be given to convolution and Fourier domain methods of noise cleaning.

Spatial Domain Processing. Following the techniques outlined in Chapter 7, a spatially filtered output image $G(j, k)$ can be formed by discrete convolution of an input image $F(j, k)$ with an $L \times L$ impulse response array $H(j, k)$ according to the relation

$$G(j, k) = \sum_m \sum_n F(m, n)\, H(m + j + C, n + k + C) \tag{10.3-1}$$

where $C = (L + 1)/2$. Equation 10.3-1 utilizes the centered convolution notation developed by Eq. 7.1-14, whereby the input and output arrays are centered with respect to one another, with the outer boundary of $G(j, k)$ of width $(L - 1)/2$ pixels set to zero. For noise cleaning, $H$ should be of low-pass form, with all positive elements. Several common $3 \times 3$ pixel impulse response arrays of low-pass form are listed below.

Mask 1:

$$\mathbf{H} = \frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix} \tag{10.3-2a}$$

Mask 2:

$$\mathbf{H} = \frac{1}{10} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 1 \end{bmatrix} \tag{10.3-2b}$$

Mask 3:

$$\mathbf{H} = \frac{1}{16} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} \tag{10.3-2c}$$

These arrays, called noise cleaning masks, are normalized to unit weighting so that the noise-cleaning process does not introduce an amplitude bias in the processed image. The effect of noise cleaning with the arrays on the uniform noise and impulse noise test images is shown in Figure 10.3-2. Masks 1 and 3 of Eq. 10.3-2 are special cases of a $3 \times 3$ parametric low-pass filter whose impulse response is defined as

$$\mathbf{H} = \frac{1}{(b + 2)^2} \begin{bmatrix} 1 & b & 1 \\ b & b^2 & b \\ 1 & b & 1 \end{bmatrix} \tag{10.3-3}$$
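A minimal sketch of noise cleaning by 3 × 3 convolution with the masks of Eq. 10.3-2 follows; it uses simple edge padding rather than the zero output border described with Eq. 10.3-1, and the test image is a placeholder.

```python
import numpy as np

def convolve3x3(F, H):
    Fp = np.pad(F, 1, mode="edge")
    G = np.zeros_like(F, dtype=float)
    for dj in range(3):                         # sum of shifted, weighted copies
        for dk in range(3):
            G += H[dj, dk] * Fp[dj:dj + F.shape[0], dk:dk + F.shape[1]]
    return G

mask1 = np.ones((3, 3)) / 9.0                   # Eq. 10.3-2a
mask3 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0   # Eq. 10.3-2c

F = np.random.rand(64, 64)                      # placeholder noisy image
G = convolve3x3(F, mask3)                       # unit-weight mask: no amplitude bias
```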


FIGURE 10.3-2. Noise cleaning with 3 × 3 low-pass impulse response arrays on the noisy test images: (a) uniform noise, mask 1; (b) impulse noise, mask 1; (c) uniform noise, mask 2; (d) impulse noise, mask 2; (e) uniform noise, mask 3; (f) impulse noise, mask 3.


FIGURE 10.3-3. Noise cleaning with 7 × 7 impulse response arrays on the noisy test image with uniform noise: (a) uniform rectangle, (b) uniform circular, (c) pyramid, (d) Gaussian, s = 1.0.

The concept of low-pass filtering noise cleaning can be extended to larger impulse response arrays. Figures 10.3-3 and 10.3-4 present noise cleaning results for several 7 × 7 impulse response arrays for uniform and impulse noise. As expected, use of a larger impulse response array provides more noise smoothing, but at the expense of the loss of fine image detail.

Fourier Domain Processing. It is possible to perform linear noise cleaning in the Fourier domain (13) using the techniques outlined in Section 9.3. Properly executed, there is no difference in results between convolution and Fourier filtering; the choice is a matter of implementation considerations. High-frequency noise effects can be reduced by Fourier domain filtering with a zonal low-pass filter with a transfer function defined by Eq. 9.4-9. The sharp cutoff characteristic of the zonal low-pass filter leads to ringing artifacts in a filtered image. This deleterious effect can be eliminated by the use of a smooth cutoff filter,


FIGURE 10.3-4. Noise cleaning with 7 × 7 impulse response arrays on the noisy test image with impulse noise: (a) uniform rectangle, (b) uniform circular, (c) pyramid, (d) Gaussian, s = 1.0.

such as the Butterworth low-pass filter whose transfer function is specified by Eq. 9.4-12. Figure 10.3-5 shows the results of zonal and Butterworth low-pass filtering of noisy images. Unlike convolution, Fourier domain processing, often provides quantitative and intuitive insight into the nature of the noise process, which is useful in designing noise cleaning spatial filters. As an example, Figure 10.3-6a shows an original image subject to periodic interference. Its two-dimensional Fourier transform, shown in Figure 10.3-6b, exhibits a strong response at the two points in the Fourier plane corresponding to the frequency response of the interference. When multiplied point by point with the Fourier transform of the original image, the bandstop filter of Figure 10.3-6c attenuates the interference energy in the Fourier domain. Figure 10.3-6d shows the noise-cleaned result obtained by taking an inverse Fourier transform of the product.


FIGURE 10.3-5. Noise cleaning with zonal and Butterworth low-pass filtering on the noisy test images; cutoff frequency = 64: (a) uniform noise, zonal; (b) impulse noise, zonal; (c) uniform noise, Butterworth; (d) impulse noise, Butterworth.

Homomorphic Filtering. Homomorphic filtering (14) is a useful technique for image enhancement when an image is subject to multiplicative noise or interference. Figure 10.3-7 describes the process. The input image F ( j, k ) is assumed to be modeled as the product of a noise-free image S ( j, k ) and an illumination interference array I ( j, k ). Thus,
F ( j, k ) = I ( j, k )S ( j, k )

(10.3-4)

Ideally, I ( j, k ) would be a constant for all ( j, k ) . Taking the logarithm of Eq. 10.3-4 yields the additive linear result


FIGURE 10.3-6. Noise cleaning with Fourier domain bandstop filtering on the parts image with periodic interference: (a) original, (b) original Fourier transform, (c) bandstop filter, (d) noise cleaned.

log { F ( j, k ) } = log { I ( j, k ) } + log { S ( j, k ) }

(10.3-5)

Conventional linear filtering techniques can now be applied to reduce the log interference component. Exponentiation after filtering completes the enhancement process. Figure 10.3-8 provides an example of homomorphic filtering. In this example, the illumination field I ( j, k ) increases from left to right from a value of 0.1 to 1.0.

FIGURE 10.3-7. Homomorphic filtering.
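A minimal sketch of the pipeline of Figure 10.3-7 follows, assuming a positive-valued square image and using the Butterworth high-pass transfer function of Eq. 9.4-13 as the linear filter on the log image; all parameter values are illustrative.

```python
import numpy as np

def homomorphic_filter(F, cutoff=4, order=1, eps=1e-6):
    logF = np.log(F + eps)                          # Eq. 10.3-5: log separates I and S
    N = F.shape[0]
    u = np.minimum(np.arange(N), N - np.arange(N))
    r = np.maximum(np.hypot(*np.meshgrid(u, u, indexing="ij")), 1e-12)
    H = 1.0 / (1.0 + (cutoff / r) ** (2 * order))   # Butterworth high-pass
    filtered = np.real(np.fft.ifft2(np.fft.fft2(logF) * H))
    return np.exp(filtered)                         # exponentiation completes the process

F = np.outer(np.ones(256), np.linspace(0.1, 1.0, 256)) * np.random.rand(256, 256)
G = homomorphic_filter(F)
```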


FIGURE 10.3-8. Homomorphic filtering on the washington_ir image with a Butterworth high-pass filter; cutoff frequency = 4: (a) illumination field, (b) original, (c) homomorphic filtering.

Therefore, the observed image appears quite dim on its left side. Homomorphic filtering (Figure 10.3-8c) compensates for the nonuniform illumination.

10.3.2. Nonlinear Noise Cleaning

The linear processing techniques described previously perform reasonably well on images with continuous noise, such as additive uniform or Gaussian distributed noise. However, they tend to provide too much smoothing for impulselike noise. Nonlinear techniques often provide a better trade-off between noise smoothing and the retention of fine image detail. Several nonlinear techniques are presented below. Mastin (15) has performed subjective testing of several of these operators.


FIGURE 10.3-9. Outlier noise cleaning algorithm.

Outlier. Figure 10.3-9 describes a simple outlier noise cleaning technique in which each pixel is compared to the average of its eight neighbors. If the magnitude of the difference is greater than some threshold level, the pixel is judged to be noisy, and it is replaced by its neighborhood average. The eight-neighbor average can be computed by convolution of the observed image with the impulse response array
$$\mathbf{H} = \frac{1}{8} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{bmatrix} \tag{10.3-6}$$

Figure 10.3-10 presents the results of outlier noise cleaning for a threshold level of 10%.

FIGURE 10.3-10. Noise cleaning with the outlier algorithm on the noisy test images: (a) uniform noise, (b) impulse noise.
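A minimal sketch of the outlier algorithm of Figure 10.3-9 follows, with the eight-neighbor average computed from the mask of Eq. 10.3-6 and a 10% threshold to match the example in the text; the test image is a placeholder.

```python
import numpy as np

def outlier_clean(F, threshold=0.1):
    H = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]]) / 8.0   # Eq. 10.3-6
    Fp = np.pad(F, 1, mode="edge")
    avg = np.zeros_like(F, dtype=float)
    for dj in range(3):
        for dk in range(3):
            avg += H[dj, dk] * Fp[dj:dj + F.shape[0], dk:dk + F.shape[1]]
    # Replace a pixel by its neighborhood average when it deviates too much.
    return np.where(np.abs(F - avg) > threshold, avg, F)

F = np.random.rand(64, 64)                                   # placeholder noisy image
G = outlier_clean(F)
```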


The outlier operator can be extended straightforwardly to larger windows. Davis and Rosenfeld (16) have suggested a variant of the outlier technique in which the center pixel in a window is replaced by the average of its k neighbors whose amplitudes are closest to the center pixel. Median Filter. Median filtering is a nonlinear signal processing technique developed by Tukey (17) that is useful for noise suppression in images. In one-dimensional form, the median filter consists of a sliding window encompassing an odd number of pixels. The center pixel in the window is replaced by the median of the pixels in the window. The median of a discrete sequence a1, a2,..., aN for N odd is that member of the sequence for which (N – 1)/2 elements are smaller or equal in value and (N – 1)/2 elements are larger or equal in value. For example, if the values of the pixels within a window are 0.1, 0.2, 0.9, 0.4, 0.5, the center pixel would be replaced by the value 0.4, which is the median value of the sorted sequence 0.1, 0.2, 0.4, 0.5, 0.9. In this example, if the value 0.9 were a noise spike in a monotonically increasing sequence, the median filter would result in a considerable improvement. On the other hand, the value 0.9 might represent a valid signal pulse for a widebandwidth sensor, and the resultant image would suffer some loss of resolution. Thus, in some cases the median filter will provide noise suppression, while in other cases it will cause signal suppression. Figure 10.3-11 illustrates some examples of the operation of a median filter and a mean (smoothing) filter for a discrete step function, ramp function, pulse function, and a triangle function with a window of five pixels. It is seen from these examples that the median filter has the usually desirable property of not affecting step functions or ramp functions. Pulse functions, whose periods are less than one-half the window width, are suppressed. But the peak of the triangle is flattened. Operation of the median filter can be analyzed to a limited extent. It can be shown that the median of the product of a constant K and a sequence f ( j ) is
MED { K [ f ( j ) ] } = K [ MED { f ( j ) } ]

(10.3-7)

However, for two arbitrary sequences f ( j ) and g ( j ), it does not follow that the median of the sum of the sequences is equal to the sum of their medians. That is, in general,
MED { f ( j ) + g ( j ) } ≠ MED { f ( j ) } + MED { g ( j ) }

(10.3-8)

The sequences 0.1, 0.2, 0.3, 0.4, 0.5 and 0.1, 0.2, 0.3, 0.2, 0.1 are examples for which the additive linearity property does not hold. There are various strategies for application of the median filter for noise suppression. One method would be to try a median filter with a window of length 3. If there is no significant signal loss, the window length could be increased to 5 for median


FIGURE 10.3-11. Median filtering on one-dimensional test signals.

filtering of the original. The process would be terminated when the median filter begins to do more harm than good. It is also possible to perform cascaded median filtering on a signal using a fixed-or variable-length window. In general, regions that are unchanged by a single pass of the filter will remain unchanged in subsequent passes. Regions in which the signal period is lower than one-half the window width will be continually altered by each successive pass. Usually, the process will continue until the resultant period is greater than one-half the window width, but it can be shown that some sequences will never converge (18). The concept of the median filter can be extended easily to two dimensions by utilizing a two-dimensional window of some desired shape such as a rectangle or discrete approximation to a circle. It is obvious that a two-dimensional L × L median filter will provide a greater degree of noise suppression than sequential processing with L × 1 median filters, but two-dimensional processing also results in greater signal suppression. Figure 10.3-12 illustrates the effect of two-dimensional median filtering of a spatial peg function with a 3 × 3 square filter and a 5 × 5 plus sign– shaped filter. In this example, the square median has deleted the corners of the peg, but the plus median has not affected the corners. Figures 10.3-13 and 10.3-14 show results of plus sign shaped median filtering on the noisy test images of Figure 10.3-1 for impulse and uniform noise, respectively.


FIGURE 10.3-12. Median filtering on two-dimensional test signals.
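Two-dimensional median filtering over an arbitrary window shape can be sketched as below; the window is given as a list of offsets, so the 3 × 3 square and 5 × 5 plus-shaped windows of the text are both expressible. This direct implementation is unoptimized (see the fast algorithms cited in the text).

```python
import numpy as np

def median_filter(F, offsets):
    pad = max(max(abs(dj), abs(dk)) for dj, dk in offsets)
    Fp = np.pad(F, pad, mode="edge")
    stack = np.stack([Fp[pad + dj: pad + dj + F.shape[0],
                         pad + dk: pad + dk + F.shape[1]] for dj, dk in offsets])
    return np.median(stack, axis=0)

square_3x3 = [(dj, dk) for dj in (-1, 0, 1) for dk in (-1, 0, 1)]
plus_5x5 = [(d, 0) for d in range(-2, 3)] + [(0, d) for d in range(-2, 3) if d != 0]

F = np.random.rand(64, 64)                        # placeholder noisy image
G = median_filter(F, plus_5x5)
```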

In the impulse noise example, application of the 3 × 3 median significantly reduces the noise effect, but some residual noise remains. Applying two 3 × 3 median filters in cascade provides further improvement. The 5 × 5 median filter removes almost all of the impulse noise. There is no visible impulse noise in the 7 × 7 median filter result, but the image has become somewhat blurred. In the case of uniform noise, median filtering provides little visual improvement. Huang et al. (19) and Astola and Campbell (20) have developed fast median filtering algorithms. The latter can be generalized to implement any rank ordering. Pseudomedian Filter. Median filtering is computationally intensive; the number of operations grows exponentially with window size. Pratt et al. (21) have proposed a computationally simpler operator, called the pseudomedian filter, which possesses many of the properties of the median filter. Let {SL} denote a sequence of elements s1, s2,..., sL. The pseudomedian of the sequence is


FIGURE 10.3-13. Median filtering on the noisy test image with impulse noise: (a) 3 × 3 median filter, (b) 3 × 3 cascaded median filter, (c) 5 × 5 median filter, (d) 7 × 7 median filter.

$$\mathrm{PMED}\{S_L\} = \tfrac{1}{2}\,\mathrm{MAXIMIN}\{S_L\} + \tfrac{1}{2}\,\mathrm{MINIMAX}\{S_L\} \tag{10.3-9}$$

where for $M = (L + 1)/2$

$$\mathrm{MAXIMIN}\{S_L\} = \mathrm{MAX}\{[\mathrm{MIN}(s_1, \ldots, s_M)], [\mathrm{MIN}(s_2, \ldots, s_{M+1})], \ldots, [\mathrm{MIN}(s_{L-M+1}, \ldots, s_L)]\} \tag{10.3-10a}$$

$$\mathrm{MINIMAX}\{S_L\} = \mathrm{MIN}\{[\mathrm{MAX}(s_1, \ldots, s_M)], [\mathrm{MAX}(s_2, \ldots, s_{M+1})], \ldots, [\mathrm{MAX}(s_{L-M+1}, \ldots, s_L)]\} \tag{10.3-10b}$$


FIGURE 10.3-14. Median filtering on the noisy test image with uniform noise: (a) 3 × 3 median filter, (b) 5 × 5 median filter, (c) 7 × 7 median filter.

Operationally, the sequence of L elements is decomposed into subsequences of M elements, each of which is slid to the right by one element in relation to its predecessor, and the appropriate MAX and MIN operations are computed. As will be demonstrated, the MAXIMIN and MINIMAX operators are, by themselves, useful operators. It should be noted that it is possible to recursively decompose the MAX and MIN functions on long sequences into sliding functions of length 2 and 3 for pipeline computation (21). The one-dimensional pseudomedian concept can be extended in a variety of ways. One approach is to compute the MAX and MIN functions over rectangular windows. As with the median filter, this approach tends to over smooth an image. A plus-shape pseudomedian generally provides better subjective results. Consider a plus-shaped window containing the following two-dimensional set elements {SE}


$$\begin{array}{ccccc} & & y_1 & & \\ & & \vdots & & \\ x_1 & \cdots & x_M & \cdots & x_C \\ & & \vdots & & \\ & & y_R & & \end{array}$$

Let the sequences $\{X_C\}$ and $\{Y_R\}$ denote the elements along the horizontal and vertical axes of the window, respectively. Note that the element $x_M$ is common to both sequences. Then the plus-shaped pseudomedian can be defined as

$$\mathrm{PMED}\{S_E\} = \tfrac{1}{2}\,\mathrm{MAX}[\mathrm{MAXIMIN}\{X_C\},\ \mathrm{MAXIMIN}\{Y_R\}] + \tfrac{1}{2}\,\mathrm{MIN}[\mathrm{MINIMAX}\{X_C\},\ \mathrm{MINIMAX}\{Y_R\}] \tag{10.3-11}$$

The MAXIMIN operator in one- or two-dimensional form is useful for removing bright impulse noise but has little or no effect on dark impulse noise. Conversely, the MINIMAX operator does a good job in removing dark, but not bright, impulse noise. A logical conclusion is to cascade the operators. Figure 10.3-15 shows the results of MAXIMIN, MINIMAX, and pseudomedian filtering on an image subjected to salt and pepper noise. As observed, the MAXIMIN operator reduces the salt noise, while the MINIMAX operator reduces the pepper noise. The pseudomedian provides attenuation for both types of noise. The cascade MINIMAX and MAXIMIN operators, in either order, show excellent results.
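The one-dimensional MAXIMIN, MINIMAX, and pseudomedian operators of Eqs. 10.3-9 and 10.3-10 can be sketched directly; the test sequence is the example used earlier for the median filter.

```python
import numpy as np

def maximin(s):
    M = (len(s) + 1) // 2
    return max(min(s[i:i + M]) for i in range(len(s) - M + 1))

def minimax(s):
    M = (len(s) + 1) // 2
    return min(max(s[i:i + M]) for i in range(len(s) - M + 1))

def pseudomedian(s):                      # Eq. 10.3-9
    return 0.5 * maximin(s) + 0.5 * minimax(s)

s = [0.1, 0.2, 0.9, 0.4, 0.5]             # 0.9 is a bright impulse
print(pseudomedian(s), np.median(s))      # the pseudomedian attenuates the spike;
                                          # the true median removes it
```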

Wavelet De-noising. Section 8.4-3 introduced wavelet transforms. The usefulness of wavelet transforms for image coding derives from the property that most of the energy of a transformed image is concentrated in the trend transform components rather than the fluctuation components (22). The fluctuation components may be grossly quantized without serious image degradation. This energy compaction property can also be exploited for noise removal. The concept, called wavelet de-noising (22,23), is quite simple. The wavelet transform coefficients are thresholded such that the presumably noisy, low-amplitude coefficients are set to zero.
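A minimal sketch of wavelet de-noising follows, using a single-level two-dimensional Haar transform written out explicitly so that no particular wavelet library is assumed; the threshold value and test image are illustrative.

```python
import numpy as np

def haar_denoise(F, threshold=0.1):
    a, b = F[0::2, :], F[1::2, :]
    lo, hi = (a + b) / 2.0, (a - b) / 2.0             # row trend / fluctuation
    def split_cols(X):
        return (X[:, 0::2] + X[:, 1::2]) / 2.0, (X[:, 0::2] - X[:, 1::2]) / 2.0
    LL, LH = split_cols(lo)
    HL, HH = split_cols(hi)
    for D in (LH, HL, HH):                            # zero the low-amplitude details
        D[np.abs(D) < threshold] = 0.0
    # Inverse transform (exact inverse of the averaging/differencing above).
    lo_r = np.empty_like(lo); hi_r = np.empty_like(hi)
    lo_r[:, 0::2], lo_r[:, 1::2] = LL + LH, LL - LH
    hi_r[:, 0::2], hi_r[:, 1::2] = HL + HH, HL - HH
    G = np.empty_like(F, dtype=float)
    G[0::2, :], G[1::2, :] = lo_r + hi_r, lo_r - hi_r
    return G

F = np.random.rand(64, 64)                            # placeholder noisy image
G = haar_denoise(F)
```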


FIGURE 10.3-15. 5 × 5 plus-shape MINIMAX, MAXIMIN, and pseudomedian filtering on the noisy test images: (a) original, (b) MAXIMIN, (c) MINIMAX, (d) pseudomedian, (e) MINIMAX of MAXIMIN, (f) MAXIMIN of MINIMAX.


10.4. EDGE CRISPENING

Psychophysical experiments indicate that a photograph or visual signal with accentuated or crispened edges is often more subjectively pleasing than an exact photometric reproduction. Edge crispening can be accomplished in a variety of ways.

10.4.1. Linear Edge Crispening

Edge crispening can be performed by discrete convolution, as defined by Eq. 10.3-1, in which the impulse response array $H$ is of high-pass form. Several common $3 \times 3$ high-pass masks are given below (24–26).

Mask 1:

$$\mathbf{H} = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & -1 & 0 \end{bmatrix} \tag{10.4-1a}$$

Mask 2:

$$\mathbf{H} = \begin{bmatrix} -1 & -1 & -1 \\ -1 & 9 & -1 \\ -1 & -1 & -1 \end{bmatrix} \tag{10.4-1b}$$

Mask 3:

$$\mathbf{H} = \begin{bmatrix} 1 & -2 & 1 \\ -2 & 5 & -2 \\ 1 & -2 & 1 \end{bmatrix} \tag{10.4-1c}$$

These masks possess the property that the sum of their elements is unity, to avoid amplitude bias in the processed image. Figure 10.4-1 provides examples of edge crispening on a monochrome image with the masks of Eq. 10.4-1. Mask 2 appears to provide the best visual results. To obtain edge crispening on electronically scanned images, the scanner signal can be passed through an electrical filter with a high-frequency bandpass characteristic. Another possibility for scanned images is the technique of unsharp masking (27,28). In this process, the image is effectively scanned with two overlapping apertures, one at normal resolution and the other at a lower spatial resolution, which upon sampling produces normal and low-resolution images F ( j, k ) and F L ( j, k ), respectively. An unsharp masked image
$$G(j, k) = \frac{c}{2c - 1} F(j, k) - \frac{1 - c}{2c - 1} F_L(j, k) \tag{10.4-2}$$


FIGURE 10.4-1. Edge crispening with 3 × 3 masks on the chest_xray image: (a) original, (b) mask 1, (c) mask 2, (d) mask 3.

is then generated by forming the weighted difference between the normal and low-resolution images, where c is a weighting constant. Typically, c is in the range 3/5 to 5/6, so that the ratio of normal to low-resolution components in the masked image is from 1.5:1 to 5:1. Figure 10.4-2 illustrates typical scan signals obtained when scanning over an object edge. The masked signal has a longer-duration edge gradient as well as an overshoot and undershoot, as compared to the original signal. Subjectively, the apparent sharpness of the original image is improved. Figure 10.4-3 presents examples of unsharp masking in which the low-resolution image is obtained by convolution with a uniform L × L impulse response array. The sharpening effect is stronger as L increases and c decreases.


FIGURE 10.4-2. Waveforms in an unsharp masking image enhancement system.

Figure 10.4-4 shows the result of zonal high-pass filtering of an image. Zonal high-pass filtering often causes ringing in a filtered image. Such ringing can be reduced significantly by utilization of a high-pass filter with a smooth cutoff response. One such filter is the Butterworth high-pass filter, whose transfer function is defined by Eq. 9.4-13. Figure 10.4-4 shows the results of zonal and Butterworth high-pass filtering. In both examples, the filtered images are biased to a midgray level for display.

10.4.2. Statistical Differencing

Another form of edge crispening, called statistical differencing (29, p. 100), involves the generation of an image by dividing each pixel value by its estimated standard deviation D(j, k) according to the basic relation
$$G(j,k) = \frac{F(j,k)}{D(j,k)}$$   (10.4-3)

where the estimated standard deviation

$$D(j,k) = \frac{1}{W}\left[\sum_{m=j-w}^{j+w}\ \sum_{n=k-w}^{k+w}\left[F(m,n) - M(m,n)\right]^2\right]^{1/2}$$   (10.4-4)


FIGURE 10.4-3. Unsharp mask processing for L × L uniform low-pass convolution on the chest_xray image: (a) L = 3, c = 0.6; (b) L = 3, c = 0.8; (c) L = 7, c = 0.6; (d) L = 7, c = 0.8.

is computed at each pixel over some W × W neighborhood where W = 2w + 1. The function M(j, k) is the estimated mean value of the original image at point (j, k), which is computed as

$$M(j,k) = \frac{1}{W^2}\sum_{m=j-w}^{j+w}\ \sum_{n=k-w}^{k+w} F(m,n)$$   (10.4-5)

The enhanced image G ( j, k ) is increased in amplitude with respect to the original at pixels that deviate significantly from their neighbors, and is decreased in relative amplitude elsewhere. The process is analogous to automatic gain control for an audio signal.


FIGURE 10.4-4. Zonal and Butterworth high-pass filtering on the chest_xray image; cutoff frequency = 32: (a) zonal filtering; (b) Butterworth filtering.

Wallis (30) has suggested a generalization of the statistical differencing operator in which the enhanced image is forced to a form with desired first- and second-order moments. The Wallis operator is defined by
$$G(j,k) = \left[F(j,k) - M(j,k)\right]\frac{A_{max}\,D_d}{A_{max}\,D(j,k) + D_d} + \left[\,p\,M_d + (1-p)\,M(j,k)\,\right]$$   (10.4-6)

where Md and Dd represent desired average mean and standard deviation factors, Amax is a maximum gain factor that prevents overly large output values when D ( j, k ) is small and 0.0 ≤ p ≤ 1.0 is a mean proportionality factor controlling the background flatness of the enhanced image. The Wallis operator can be expressed in a more general form as

$$G(j,k) = \left[F(j,k) - M(j,k)\right]A(j,k) + B(j,k)$$   (10.4-7)

where A(j, k) is a spatially dependent gain factor and B(j, k) is a spatially dependent background factor. These gain and background factors can be derived directly from Eq. 10.4-6, or they can be specified in some other manner. For the Wallis operator, it is convenient to specify the desired average standard deviation Dd such that the spatial gain ranges between maximum Amax and minimum Amin limits. This can be accomplished by setting Dd to the value


FIGURE 10.4-5. Wallis statistical differencing on the bridge image for Md = 0.45, Dd = 0.28, p = 0.20, Amax = 2.50, Amin = 0.75 using a 9 × 9 pyramid array: (a) original; (b) mean, 0.00 to 0.98; (c) standard deviation, 0.01 to 0.26; (d) background, 0.09 to 0.88; (e) spatial gain, 0.75 to 2.35; (f) Wallis enhancement, −0.07 to 1.12.


FIGURE 10.4-6. Wallis statistical differencing on the chest_xray image for Md = 0.64, Dd = 0.22, p = 0.20, Amax = 2.50, Amin = 0.75 using an 11 × 11 pyramid array: (a) original; (b) Wallis enhancement.

$$D_d = \frac{A_{min}\,A_{max}\,D_{max}}{A_{max} - A_{min}}$$   (10.4-8)

where Dmax is the maximum value of D(j, k). The summations of Eqs. 10.4-4 and 10.4-5 can be implemented by convolutions with a uniform impulse array. However, overshoot and undershoot effects may occur. Better results are usually obtained with a pyramid or Gaussian-shaped array. Figure 10.4-5 shows the mean, standard deviation, spatial gain, and Wallis statistical differencing result on a monochrome image. Figure 10.4-6 presents a medical imaging example.
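A minimal sketch of the Wallis operator of Eq. 10.4-6 is given below. It uses simple uniform-window estimates of the local mean and standard deviation of Eqs. 10.4-4 and 10.4-5 rather than a pyramid array; the window size and the Md, Dd, p, Amax values are illustrative, patterned after Figure 10.4-5.

```python
# Wallis statistical differencing (Eq. 10.4-6) - sketch with uniform-window statistics.
import numpy as np
from scipy.ndimage import uniform_filter

def wallis(F, W=9, Md=0.45, Dd=0.28, p=0.20, Amax=2.5):
    F = F.astype(float)
    M = uniform_filter(F, size=W, mode="nearest")                  # local mean, Eq. 10.4-5
    var = uniform_filter(F * F, size=W, mode="nearest") - M * M    # local variance
    D = np.sqrt(np.maximum(var, 0.0))                              # local std dev, Eq. 10.4-4
    gain = (Amax * Dd) / (Amax * D + Dd)                           # spatial gain A(j,k)
    background = p * Md + (1 - p) * M                              # background B(j,k)
    return (F - M) * gain + background

image = np.random.rand(256, 256)      # stand-in image on a 0..1 scale as in Figure 10.4-5
enhanced = wallis(image)
```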

10.5. COLOR IMAGE ENHANCEMENT

The image enhancement techniques discussed previously have all been applied to monochrome images. This section considers the enhancement of natural color images and introduces the pseudocolor and false color image enhancement methods. In the literature, the terms pseudocolor and false color have often been used improperly. Pseudocolor produces a color image from a monochrome image, while false color produces an enhanced color image from an original natural color image or from multispectral image bands.

10.5.1. Natural Color Image Enhancement

The monochrome image enhancement methods described previously can be applied to natural color images by processing each color component individually. However,


care must be taken to avoid changing the average value of the processed image components. Otherwise, the processed color image may exhibit deleterious shifts in hue and saturation. Typically, color images are processed in the RGB color space. For some image enhancement algorithms, there are computational advantages to processing in a luma-chroma space, such as YIQ, or a lightness-chrominance space, such as L*u*v*. As an example, if the objective is to perform edge crispening of a color image, it is usually only necessary to apply the enhancement method to the luma or lightness component. Because of the high-spatial-frequency response limitations of human vision, edge crispening of the chroma or chrominance components may not be perceptible. Faugeras (31) has investigated color image enhancement in a perceptual space based on a color vision model similar to the model presented in Figure 2.5-3. The procedure is to transform the RGB tristimulus value original images according to the color vision model to produce a set of three perceptual space images that, ideally, are perceptually independent. Then, an image enhancement method is applied independently to the perceptual space images. Finally, the enhanced perceptual space images are subjected to steps that invert the color vision model and produce an enhanced color image represented in RGB color space.

10.5.2. Pseudocolor

Pseudocolor (32–34) is a color mapping of a monochrome image array which is intended to enhance the detectability of detail within the image. The pseudocolor mapping of an array F(j, k) is defined as
$$R(j,k) = O_R\{F(j,k)\}$$   (10.5-1a)

$$G(j,k) = O_G\{F(j,k)\}$$   (10.5-1b)

$$B(j,k) = O_B\{F(j,k)\}$$   (10.5-1c)

where R ( j, k ) , G ( j, k ) , B ( j, k ) are display color components and O R { F ( j, k ) }, O G { F ( j, k ) } , O B { F ( j, k ) } are linear or nonlinear functional operators. This mapping defines a path in three-dimensional color space parametrically in terms of the array F ( j, k ). Figure 10.5-1 illustrates the RGB color space and two color mappings that originate at black and terminate at white. Mapping A represents the achromatic path through all shades of gray; it is the normal representation of a monochrome image. Mapping B is a spiral path through color space. Another class of pseudocolor mappings includes those mappings that exclude all shades of gray. Mapping C, which follows the edges of the RGB color cube, is such an example. This mapping follows the perimeter of the gamut of reproducible colors as depicted by the uniform chromaticity scale (UCS) chromaticity chart shown in


FIGURE 10.5-1. Black-to-white and RGB perimeter pseudocolor mappings.

Figure 10.5-2. The luminances of the colors red, green, blue, cyan, magenta, and yellow that lie along the perimeter of reproducible colors are noted in the figure. It is seen that the luminance of the pseudocolor scale varies from a minimum of 0.114 for blue to a maximum of 0.886 for yellow. A maximum luminance of unity is reached only for white. In some applications it may be desirable to fix the luminance of all displayed colors so that discrimination along the pseudocolor scale is by the hue and saturation attributes of a color only. Loci of constant luminance are plotted in Figure 10.5-2, which also includes bounds for displayed colors of constant luminance. For example, if the RGB perimeter path is followed, the maximum luminance of any color must be limited to 0.114, the luminance of blue. At a luminance of 0.2, the RGB perimeter path can be followed except for the region around saturated blue. At higher luminance levels, the gamut of constant luminance colors becomes severely limited. Figure 10.5-2b is a plot of the 0.5 luminance locus. Inscribed within this locus is the locus of those colors of largest constant saturation. A pseudocolor scale along this path would have the property that all points differ only in hue. With a given pseudocolor path in color space, it is necessary to choose the scaling between the data plane variable and the incremental path distance. On the UCS chromaticity chart, incremental distances are subjectively almost equally noticeable. Therefore, it is reasonable to subdivide the path length geometrically into equal increments. Figure 10.5-3 shows examples of pseudocoloring of a gray scale chart image and a seismic image.
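As a concrete, hypothetical instance of Eq. 10.5-1, the following sketch builds a pseudocolor mapping by table lookup; the particular functional operators (and hence the path through RGB color space) are arbitrary smooth choices for illustration, not one of the specific mappings of Figure 10.5-1.

```python
# Pseudocolor by lookup table (Eq. 10.5-1 sketch); the O_R, O_G, O_B operators here
# are arbitrary functions chosen only for illustration.
import numpy as np

levels = np.linspace(0.0, 1.0, 256)
lut = np.stack([
    0.5 + 0.5 * np.sin(2 * np.pi * levels),          # O_R{F}
    levels,                                          # O_G{F}
    0.5 + 0.5 * np.cos(2 * np.pi * levels),          # O_B{F}
], axis=1)                                           # shape (256, 3)

gray = np.random.randint(0, 256, (64, 64))           # stand-in monochrome image
pseudocolor = lut[gray]                              # shape (64, 64, 3), RGB in 0..1
```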


FIGURE 10.5-2. Luminance loci for NTSC colors.

10.5.3. False Color

False color is a point-by-point mapping of an original color image, described by its three primary colors, or of a set of multispectral image planes of a scene, to a color space defined by display tristimulus values that are linear or nonlinear functions of the original image pixel values (35,36). A common intent is to provide a displayed image with objects possessing different or false colors from what might be expected.


FIGURE 10.5-3. Pseudocoloring of the gray_chart and seismic images: (a) gray scale chart; (b) pseudocolor of chart; (c) seismic; (d) pseudocolor of seismic. See insert for a color representation of this figure.

For example, blue sky in a normal scene might be converted to appear red, and green grass transformed to blue. One possible reason for such a color mapping is to place normal objects in a strange color world so that a human observer will pay more attention to the objects than if they were colored normally. Another reason for false color mappings is the attempt to color a normal scene to match the color sensitivity of a human viewer. For example, it is known that the luminance response of cones in the retina peaks in the green region of the visible spectrum. Thus, if a normally red object is false colored to appear green, it may become more easily detectable. Another psychophysical property of color vision that can be exploited is the contrast sensitivity of the eye to changes in blue light. In some situations it may be worthwhile to map the normal colors of objects with fine detail into shades of blue.


A third application of false color is to produce a natural color representation of a set of multispectral images of a scene. Some of the multispectral images may even be obtained from sensors whose wavelength response is outside the visible wavelength range, for example, infrared or ultraviolet. In a false color mapping, the red, green, and blue display color components are related to natural or multispectral images Fi by
$$R_D = O_R\{F_1, F_2, \ldots\}$$   (10.5-2a)

$$G_D = O_G\{F_1, F_2, \ldots\}$$   (10.5-2b)

$$B_D = O_B\{F_1, F_2, \ldots\}$$   (10.5-2c)

where O_R{·}, O_G{·}, O_B{·} are general functional operators. As a simple example, the set of red, green, and blue sensor tristimulus values (RS = F1, GS = F2, BS = F3) may be interchanged according to the relation
$$\begin{bmatrix} R_D \\ G_D \\ B_D \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} R_S \\ G_S \\ B_S \end{bmatrix}$$   (10.5-3)

Green objects in the original will appear red in the display, blue objects will appear green, and red objects will appear blue. A general linear false color mapping of natural color images can be defined as

$$\begin{bmatrix} R_D \\ G_D \\ B_D \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} R_S \\ G_S \\ B_S \end{bmatrix}$$   (10.5-4)

This color mapping should be recognized as a linear coordinate conversion of colors reproduced by the primaries of the original image to a new set of primaries. Figure 10.5-4 provides examples of false color mappings of a pair of images.
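A minimal sketch of the linear false color mapping of Eq. 10.5-4 is shown below; for the example coefficients it reuses the channel interchange matrix of Eq. 10.5-3, and the random input image is a stand-in.

```python
# Linear false color mapping (Eq. 10.5-4 sketch) using the interchange matrix of Eq. 10.5-3.
import numpy as np

M = np.array([[0.0, 1.0, 0.0],     # R_D = G_S
              [0.0, 0.0, 1.0],     # G_D = B_S
              [1.0, 0.0, 0.0]])    # B_D = R_S

rgb = np.random.rand(64, 64, 3)                       # stand-in natural color image
false_color = np.einsum("ij,rcj->rci", M, rgb)        # apply M at every pixel
```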

10.6. MULTISPECTRAL IMAGE ENHANCEMENT

Enhancement procedures are often performed on multispectral image bands of a scene in order to accentuate salient features to assist in subsequent human interpretation or machine analysis (35,37). These procedures include individual image band


FIGURE 10.5-4. False coloring of multispectral images: (a) infrared band; (b) blue band; (c) R = infrared, G = 0, B = blue; (d) R = infrared, G = 1/2 [infrared + blue], B = blue. See insert for a color representation of this figure.

enhancement techniques, such as contrast stretching, noise cleaning, and edge crispening, as described earlier. Other methods, considered in this section, involve the joint processing of multispectral image bands. Multispectral image bands can be subtracted in pairs according to the relation
$$D_{m,n}(j,k) = F_m(j,k) - F_n(j,k)$$   (10.6-1)

in order to accentuate reflectivity variations between the multispectral bands. An associated advantage is the removal of any unknown but common bias components that may exist. Another simple but highly effective means of multispectral image enhancement is the formation of ratios of the image bands. The ratio image between the mth and nth multispectral bands is defined as


$$R_{m,n}(j,k) = \frac{F_m(j,k)}{F_n(j,k)}$$   (10.6-2)

It is assumed that the image bands are adjusted to have nonzero pixel values. In many multispectral imaging systems, the image band Fn ( j, k ) can be modeled by the product of an object reflectivity function R n ( j, k ) and an illumination function I ( j, k ) that is identical for all multispectral bands. Ratioing of such imagery provides an automatic compensation of the illumination factor. The ratio F m ( j, k ) ⁄ [ F n ( j, k ) ± ∆ ( j, k ) ], for which ∆ ( j, k ) represents a quantization level uncertainty, can vary considerably if F n ( j, k ) is small. This variation can be reduced significantly by forming the logarithm of the ratios defined by (24)

$$L_{m,n}(j,k) = \log\{R_{m,n}(j,k)\} = \log\{F_m(j,k)\} - \log\{F_n(j,k)\}$$   (10.6-3)

There are a total of N(N – 1) different difference or ratio pairs that may be formed from N multispectral bands. To reduce the number of combinations to be considered, the differences or ratios are often formed with respect to an average image field:
$$A(j,k) = \frac{1}{N}\sum_{n=1}^{N} F_n(j,k)$$   (10.6-4)
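A minimal sketch of Eqs. 10.6-1 through 10.6-4 for a stack of N bands (the random data and the small offset that keeps the pixel values nonzero are illustrative assumptions):

```python
# Multispectral difference, ratio, and log-ratio images (Eqs. 10.6-1 to 10.6-4 sketch).
import numpy as np

bands = np.random.rand(4, 128, 128) + 0.01      # N = 4 stand-in bands, kept strictly nonzero
m, n = 0, 1                                     # compare band m against band n

difference = bands[m] - bands[n]                          # Eq. 10.6-1
ratio = bands[m] / bands[n]                               # Eq. 10.6-2
log_ratio = np.log(bands[m]) - np.log(bands[n])           # Eq. 10.6-3
average_field = bands.mean(axis=0)                        # Eq. 10.6-4
ratio_to_average = bands[m] / average_field               # one ratio formed against the average
```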

Unitary transforms between multispectral planes have also been employed as a means of enhancement. For N image bands, an N × 1 vector

$$\mathbf{x} = \begin{bmatrix} F_1(j,k) \\ F_2(j,k) \\ \vdots \\ F_N(j,k) \end{bmatrix}$$   (10.6-5)

is formed at each coordinate (j, k). Then, a transformation

$$\mathbf{y} = \mathbf{A}\mathbf{x}$$   (10.6-6)


is formed where A is an N × N unitary matrix. A common transformation is the principal components decomposition, described in Section 5.8, in which the rows of the matrix A are composed of the eigenvectors of the covariance matrix Kx between the bands. The matrix A performs a diagonalization of the covariance matrix Kx such that the covariance matrix of the transformed imagery bands
$$\mathbf{K}_y = \mathbf{A}\,\mathbf{K}_x\,\mathbf{A}^T = \mathbf{\Lambda}$$   (10.6-7)

is a diagonal matrix Λ whose elements are the eigenvalues of Kx arranged in descending value. The principal components decomposition, therefore, results in a set of decorrelated data arrays whose energies are ranked in amplitude. This process, of course, requires knowledge of the covariance matrix between the multispectral bands. The covariance matrix must be either modeled, estimated, or measured. If the covariance matrix is highly nonstationary, the principal components method becomes difficult to utilize. Figure 10.6-1 contains a set of four multispectral images, and Figure 10.6-2 exhibits their corresponding log ratios (37). Principal components bands of these multispectral images are illustrated in Figure 10.6-3 (37).
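A minimal sketch of the principal components transformation of Eqs. 10.6-5 to 10.6-7, estimating the covariance matrix directly from the pixel data (the random band data are a stand-in; the eigenvectors are ordered so that the output bands are ranked by decreasing eigenvalue):

```python
# Principal components of N multispectral bands (Eqs. 10.6-5 to 10.6-7 sketch).
import numpy as np

bands = np.random.rand(4, 128, 128)             # N = 4 stand-in bands
N, rows, cols = bands.shape
x = bands.reshape(N, -1)                        # each column is the vector of Eq. 10.6-5
x_centered = x - x.mean(axis=1, keepdims=True)

Kx = np.cov(x_centered)                         # N x N covariance matrix between bands
eigvals, eigvecs = np.linalg.eigh(Kx)           # symmetric eigendecomposition
order = np.argsort(eigvals)[::-1]               # descending eigenvalues
A = eigvecs[:, order].T                         # rows of A are eigenvectors of Kx

y = A @ x_centered                              # y = A x (Eq. 10.6-6)
principal_bands = y.reshape(N, rows, cols)      # decorrelated bands, ranked by eigenvalue
```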

FIGURE 10.6-1. Multispectral images: (a) band 4 (green); (b) band 5 (red); (c) band 6 (infrared 1); (d) band 7 (infrared 2).


FIGURE 10.6-2. Logarithmic ratios of multispectral images: (a) band 4/band 5; (b) band 4/band 6; (c) band 4/band 7; (d) band 5/band 6; (e) band 5/band 7; (f) band 6/band 7.


FIGURE 10.6-3. Principal components of multispectral images: (a) first band; (b) second band; (c) third band; (d) fourth band.

REFERENCES
1. R. Nathan, “Picture Enhancement for the Moon, Mars, and Man,” in Pictorial Pattern Recognition, G. C. Cheng, ed., Thompson, Washington, DC, 1968, 239–235.
2. F. Billingsley, “Applications of Digital Image Processing,” Applied Optics, 9, 2, February 1970, 289–299.
3. H. C. Andrews, A. G. Tescher, and R. P. Kruger, “Image Processing by Digital Computer,” IEEE Spectrum, 9, 7, July 1972, 20–32.
4. E. L. Hall et al., “A Survey of Preprocessing and Feature Extraction Techniques for Radiographic Images,” IEEE Trans. Computers, C-20, 9, September 1971, 1032–1044.
5. E. L. Hall, “Almost Uniform Distribution for Computer Image Enhancement,” IEEE Trans. Computers, C-23, 2, February 1974, 207–208.
6. W. Frei, “Image Enhancement by Histogram Hyperbolization,” Computer Graphics and Image Processing, 6, 3, June 1977, 286–294.


7. D. J. Ketcham, “Real Time Image Enhancement Technique,” Proc. SPIE/OSA Conference on Image Processing, Pacific Grove, CA, 74, February 1976, 120–125.
8. R. A. Hummel, “Image Enhancement by Histogram Transformation,” Computer Graphics and Image Processing, 6, 2, 1977, 184–195.
9. S. M. Pizer et al., “Adaptive Histogram Equalization and Its Variations,” Computer Vision, Graphics, and Image Processing, 39, 3, September 1987, 355–368.
10. G. P. Dineen, “Programming Pattern Recognition,” Proc. Western Joint Computer Conference, March 1955, 94–100.
11. R. E. Graham, “Snow Removal: A Noise Stripping Process for Picture Signals,” IRE Trans. Information Theory, IT-8, 1, February 1962, 129–144.
12. A. Rosenfeld, C. M. Park, and J. P. Strong, “Noise Cleaning in Digital Pictures,” Proc. EASCON Convention Record, October 1969, 264–273.
13. R. Nathan, “Spatial Frequency Filtering,” in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970, 151–164.
14. A. V. Oppenheim, R. W. Schaefer, and T. G. Stockham, Jr., “Nonlinear Filtering of Multiplied and Convolved Signals,” Proc. IEEE, 56, 8, August 1968, 1264–1291.
15. G. A. Mastin, “Adaptive Filters for Digital Image Noise Smoothing: An Evaluation,” Computer Vision, Graphics, and Image Processing, 31, 1, July 1985, 103–121.
16. L. S. Davis and A. Rosenfeld, “Noise Cleaning by Iterated Local Averaging,” IEEE Trans. Systems, Man and Cybernetics, SMC-7, 1978, 705–710.
17. J. W. Tukey, Exploratory Data Analysis, Addison-Wesley, Reading, MA, 1971.
18. T. A. Nodes and N. C. Gallagher, Jr., “Median Filters: Some Manipulations and Their Properties,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-30, 5, October 1982, 739–746.
19. T. S. Huang, G. J. Yang, and G. Y. Tang, “A Fast Two-Dimensional Median Filtering Algorithm,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-27, 1, February 1979, 13–18.
20. J. T. Astola and T. G. Campbell, “On Computation of the Running Median,” IEEE Trans. Acoustics, Speech, and Signal Processing, 37, 4, April 1989, 572–574.
21. W. K. Pratt, T. J. Cooper, and I. Kabir, “Pseudomedian Filter,” Proc. SPIE Conference, Los Angeles, January 1984.
22. J. S. Walker, A Primer on Wavelets and Their Scientific Applications, Chapman & Hall/CRC Press, Boca Raton, FL, 1999.
23. S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, New York, 1998.
24. L. G. Roberts, “Machine Perception of Three-Dimensional Solids,” in Optical and Electro-Optical Information Processing, J. T. Tippett et al., Eds., MIT Press, Cambridge, MA, 1965.
25. J. M. S. Prewitt, “Object Enhancement and Extraction,” in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, eds., Academic Press, New York, 1970, 75–150.
26. A. Arcese, P. H. Mengert, and E. W. Trombini, “Image Detection Through Bipolar Correlation,” IEEE Trans. Information Theory, IT-16, 5, September 1970, 534–541.
27. W. F. Schreiber, “Wirephoto Quality Improvement by Unsharp Masking,” J. Pattern Recognition, 2, 1970, 111–121.


28. J-S. Lee, “Digital Image Enhancement and Noise Filtering by Use of Local Statistics,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-2, 2, March 1980, 165–168.
29. A. Rosenfeld, Picture Processing by Computer, Academic Press, New York, 1969.
30. R. H. Wallis, “An Approach for the Space Variant Restoration and Enhancement of Images,” Proc. Symposium on Current Mathematical Problems in Image Science, Monterey, CA, November 1976.
31. O. D. Faugeras, “Digital Color Image Processing Within the Framework of a Human Visual Model,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-27, 4, August 1979, 380–393.
32. C. Gazley, J. E. Reibert, and R. H. Stratton, “Computer Works a New Trick in Seeing Pseudo Color Processing,” Aeronautics and Astronautics, 4, April 1967, 56.
33. L. W. Nichols and J. Lamar, “Conversion of Infrared Images to Visible in Color,” Applied Optics, 7, 9, September 1968, 1757.
34. E. R. Kreins and L. J. Allison, “Color Enhancement of Nimbus High Resolution Infrared Radiometer Data,” Applied Optics, 9, 3, March 1970, 681.
35. A. F. H. Goetz et al., “Application of ERTS Images and Image Processing to Regional Geologic Problems and Geologic Mapping in Northern Arizona,” Technical Report 32-1597, Jet Propulsion Laboratory, Pasadena, CA, May 1975.
36. W. Find, “Image Coloration as an Interpretation Aid,” Proc. SPIE/OSA Conference on Image Processing, Pacific Grove, CA, February 1976, 74, 209–215.
37. G. S. Robinson and W. Frei, “Final Research Report on Computer Processing of ERTS Images,” Report USCIPI 640, University of Southern California, Image Processing Institute, Los Angeles, September 1975.


11
IMAGE RESTORATION MODELS

Image restoration may be viewed as an estimation process in which operations are performed on an observed or measured image field to estimate the ideal image field that would be observed if no image degradation were present in an imaging system. Mathematical models are described in this chapter for image degradation in general classes of imaging systems. These models are then utilized in subsequent chapters as a basis for the development of image restoration techniques.

11.1. GENERAL IMAGE RESTORATION MODELS

In order effectively to design a digital image restoration system, it is necessary quantitatively to characterize the image degradation effects of the physical imaging system, the image digitizer, and the image display. Basically, the procedure is to model the image degradation effects and then perform operations to undo the model to obtain a restored image. It should be emphasized that accurate image modeling is often the key to effective image restoration. There are two basic approaches to the modeling of image degradation effects: a priori modeling and a posteriori modeling. In the former case, measurements are made on the physical imaging system, digitizer, and display to determine their response for an arbitrary image field. In some instances it will be possible to model the system response deterministically, while in other situations it will only be possible to determine the system response in a stochastic sense. The a posteriori modeling approach is to develop the model for the image degradations based on measurements of a particular image to be restored. Basically, these two approaches differ only in the manner in which information is gathered to describe the character of the image degradation.

FIGURE 11.1-1. Digital image restoration model.

Figure 11.1-1 shows a general model of a digital imaging system and restoration process. In the model, a continuous image light distribution C(x, y, t, λ) dependent on spatial coordinates (x, y), time (t), and spectral wavelength (λ) is assumed to exist as the driving force of a physical imaging system subject to point and spatial degradation effects and corrupted by deterministic and stochastic disturbances. Potential degradations include diffraction in the optical system, sensor nonlinearities, optical system aberrations, film nonlinearities, atmospheric turbulence effects, image motion blur, and geometric distortion. Noise disturbances may be caused by electronic imaging sensors or film granularity. In this model, the physical imaging system produces a set of output image fields F_O^(i)(x, y, t_j) at time instant t_j described by the general relation

$$F_O^{(i)}(x, y, t_j) = O_P\{C(x, y, t, \lambda)\}$$   (11.1-1)

where O_P{·} represents a general operator that is dependent on the space coordinates (x, y), the time history (t), the wavelength (λ), and the amplitude of the light distribution (C). For a monochrome imaging system, there will only be a single output field, while for a natural color imaging system, F_O^(i)(x, y, t_j) may denote the red, green, and blue tristimulus bands for i = 1, 2, 3, respectively. Multispectral imagery may also involve several output bands of data. In the general model of Figure 11.1-1, each observed image field F_O^(i)(x, y, t_j) is digitized, following the techniques outlined in Part 3, to produce an array of image samples F_S^(i)(m_1, m_2, t_j) at each time instant t_j. The output samples of the digitizer are related to the input observed field by

$$F_S^{(i)}(m_1, m_2, t_j) = O_G\{F_O^{(i)}(x, y, t_j)\}$$   (11.1-2)


where O_G{·} is an operator modeling the image digitization process. A digital image restoration system that follows produces an output array F_K^(i)(k_1, k_2, t_j) by the transformation

$$F_K^{(i)}(k_1, k_2, t_j) = O_R\{F_S^{(i)}(m_1, m_2, t_j)\}$$   (11.1-3)

where O_R{·} represents the designed restoration operator. Next, the output samples of the digital restoration system are interpolated by the image display system to produce a continuous image estimate F̂_I^(i)(x, y, t_j). This operation is governed by the relation

$$\hat{F}_I^{(i)}(x, y, t_j) = O_D\{F_K^{(i)}(k_1, k_2, t_j)\}$$   (11.1-4)

where O_D{·} models the display transformation. The function of the digital image restoration system is to compensate for degradations of the physical imaging system, the digitizer, and the image display system to produce an estimate of a hypothetical ideal image field F_I^(i)(x, y, t_j) that would be displayed if all physical elements were perfect. The perfect imaging system would produce an ideal image field modeled by

$$F_I^{(i)}(x, y, t_j) = O_I\left\{\int_0^{\infty}\int_{t_j - T}^{t_j} C(x, y, t, \lambda)\,U_i(t, \lambda)\,dt\,d\lambda\right\}$$   (11.1-5)

where U_i(t, λ) is a desired temporal and spectral response function, T is the observation period, and O_I{·} is a desired point and spatial response function. Usually, it will not be possible to restore perfectly the observed image such that the output image field is identical to the ideal image field. The design objective of the image restoration processor is to minimize some error measure between F_I^(i)(x, y, t_j) and F̂_I^(i)(x, y, t_j). The discussion here is limited, for the most part, to a consideration of techniques that minimize the mean-square error between the ideal and estimated image fields as defined by

$$E_i = E\left\{\left[F_I^{(i)}(x, y, t_j) - \hat{F}_I^{(i)}(x, y, t_j)\right]^2\right\}$$   (11.1-6)

where E { · } denotes the expectation operator. Often, it will be desirable to place side constraints on the error minimization, for example, to require that the image estimate be strictly positive if it is to represent light intensities that are positive. Because the restoration process is to be performed digitally, it is often more convenient to restrict the error measure to discrete points on the ideal and estimated image fields. These discrete arrays are obtained by mathematical models of perfect image digitizers that produce the arrays

300

IMAGE RESTORATION MODELS

$$F_I^{(i)}(n_1, n_2, t_j) = F_I^{(i)}(x, y, t_j)\,\delta(x - n_1\Delta,\, y - n_2\Delta)$$   (11.1-7a)

$$\hat{F}_I^{(i)}(n_1, n_2, t_j) = \hat{F}_I^{(i)}(x, y, t_j)\,\delta(x - n_1\Delta,\, y - n_2\Delta)$$   (11.1-7b)

It is assumed that continuous image fields are sampled at a spatial period ∆ satisfying the Nyquist criterion. Also, quantization error is assumed negligible. It should be noted that the processes indicated by the blocks of Figure 11.1-1 above the dashed division line represent mathematical modeling and are not physical operations performed on physical image fields and arrays. With this discretization of the continuous ideal and estimated image fields, the corresponding mean-square restoration error becomes

$$E_i = E\left\{\left[F_I^{(i)}(n_1, n_2, t_j) - \hat{F}_I^{(i)}(n_1, n_2, t_j)\right]^2\right\}$$   (11.1-8)

With the relationships of Figure 11.1-1 quantitatively established, the restoration problem may be formulated as follows: Given the sampled observation F_S^(i)(m_1, m_2, t_j) expressed in terms of the image light distribution C(x, y, t, λ), determine the transfer function O_K{·} that minimizes the error measure between F_I^(i)(x, y, t_j) and F̂_I^(i)(x, y, t_j) subject to desired constraints. There are no general solutions for the restoration problem as formulated above because of the complexity of the physical imaging system. To proceed further, it is necessary to be more specific about the type of degradation and the method of restoration. The following sections describe models for the elements of the generalized imaging system of Figure 11.1-1.

11.2. OPTICAL SYSTEMS MODELS

One of the major advances in the field of optics during the past 40 years has been the application of system concepts to optical imaging. Imaging devices consisting of lenses, mirrors, prisms, and so on, can be considered to provide a deterministic transformation of an input spatial light distribution to some output spatial light distribution. Also, the system concept can be extended to encompass the spatial propagation of light through free space or some dielectric medium. In the study of geometric optics, it is assumed that light rays always travel in a straight-line path in a homogeneous medium. By this assumption, a bundle of rays passing through a clear aperture onto a screen produces a geometric light projection of the aperture. However, if the light distribution at the region between the light and


FIGURE 11.2-1. Generalized optical imaging system.

dark areas on the screen is examined in detail, it is found that the boundary is not sharp. This effect is more pronounced as the aperture size is decreased. For a pinhole aperture, the entire screen appears diffusely illuminated. From a simplistic viewpoint, the aperture causes a bending of rays called diffraction. Diffraction of light can be quantitatively characterized by considering light as electromagnetic radiation that satisfies Maxwell's equations. The formulation of a complete theory of optical imaging from the basic electromagnetic principles of diffraction theory is a complex and lengthy task. In the following, only the key points of the formulation are presented; details may be found in References 1 to 3. Figure 11.2-1 is a diagram of a generalized optical imaging system. A point in the object plane at coordinate ( x o, y o ) of intensity I o ( x o, y o ) radiates energy toward an imaging system characterized by an entrance pupil, exit pupil, and intervening system transformation. Electromagnetic waves emanating from the optical system are focused to a point ( x i, y i ) on the image plane producing an intensity I i ( x i, y i ) . The imaging system is said to be diffraction limited if the light distribution at the image plane produced by a point-source object consists of a converging spherical wave whose extent is limited only by the exit pupil. If the wavefront of the electromagnetic radiation emanating from the exit pupil is not spherical, the optical system is said to possess aberrations. In most optical image formation systems, the optical radiation emitted by an object arises from light transmitted or reflected from an incoherent light source. The image radiation can often be regarded as quasimonochromatic in the sense that the spectral bandwidth of the image radiation detected at the image plane is small with respect to the center wavelength of the radiation. Under these joint assumptions, the imaging system of Figure 11.2-1 will respond as a linear system in terms of the intensity of its input and output fields. The relationship between the image intensity and object intensity for the optical system can then be represented by the superposition integral equation
$$I_i(x_i, y_i) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} H(x_i, y_i;\, x_o, y_o)\,I_o(x_o, y_o)\,dx_o\,dy_o$$   (11.2-1)


where H ( x i, y i ; x o, y o ) represents the image intensity response to a point source of light. Often, the intensity impulse response is space invariant and the input–output relationship is given by the convolution equation
$$I_i(x_i, y_i) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} H(x_i - x_o,\, y_i - y_o)\,I_o(x_o, y_o)\,dx_o\,dy_o$$   (11.2-2)

In this case, the normalized Fourier transforms

$$\mathcal{I}_o(\omega_x, \omega_y) = \frac{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I_o(x_o, y_o)\exp\{-i(\omega_x x_o + \omega_y y_o)\}\,dx_o\,dy_o}{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I_o(x_o, y_o)\,dx_o\,dy_o}$$   (11.2-3a)

$$\mathcal{I}_i(\omega_x, \omega_y) = \frac{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I_i(x_i, y_i)\exp\{-i(\omega_x x_i + \omega_y y_i)\}\,dx_i\,dy_i}{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I_i(x_i, y_i)\,dx_i\,dy_i}$$   (11.2-3b)

of the object and image intensity fields are related by
$$\mathcal{I}_i(\omega_x, \omega_y) = \mathcal{H}(\omega_x, \omega_y)\,\mathcal{I}_o(\omega_x, \omega_y)$$   (11.2-4)

where H ( ω x, ω y ) , which is called the optical transfer function (OTF), is defined by

$$\mathcal{H}(\omega_x, \omega_y) = \frac{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} H(x, y)\exp\{-i(\omega_x x + \omega_y y)\}\,dx\,dy}{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} H(x, y)\,dx\,dy}$$   (11.2-5)
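As a minimal numerical sketch of Eq. 11.2-5: given a sampled intensity impulse response, the OTF can be approximated by a discrete Fourier transform normalized by the sum of the impulse response samples (the Gaussian test point spread function is an illustrative assumption):

```python
# Numerical OTF of a sampled point spread function (Eq. 11.2-5 sketch).
import numpy as np

# Illustrative Gaussian-shaped intensity impulse response on a 64 x 64 grid.
y, x = np.mgrid[-32:32, -32:32]
H = np.exp(-(x**2 + y**2) / (2.0 * 4.0**2))

OTF = np.fft.fft2(H) / H.sum()        # normalized transform; OTF at zero frequency = 1
MTF = np.abs(OTF)                     # magnitude of the OTF (the MTF discussed below)
```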

The absolute value |H(ω_x, ω_y)| of the OTF is known as the modulation transfer function (MTF) of the optical system. The most common optical image formation system is a circular thin lens. Figure 11.2-2 illustrates the OTF for such a lens as a function of its degree of misfocus (1, p. 486; 4). For extreme misfocus, the OTF will actually become negative at some spatial frequencies. In this state, the lens will cause a contrast reversal: dark objects will appear light, and vice versa. Earth's atmosphere acts as an imaging system for optical radiation traversing a path through the atmosphere. Normally, the index of refraction of the atmosphere remains relatively constant over the optical extent of an object, but in some instances atmospheric turbulence can produce a spatially variable index of


FIGURE 11.2-2. Cross section of transfer function of a lens. Numbers indicate degree of misfocus.

refraction that leads to an effective blurring of any imaged object. An equivalent impulse response
$$H(x, y) = K_1 \exp\left\{-\left(K_2 x^2 + K_3 y^2\right)^{5/6}\right\}$$   (11.2-6)

where the K_n are constants, has been predicted mathematically and verified by experimentation (5) for long-exposure image formation. For convenience in analysis, the exponent 5/6 is often replaced by unity to obtain a Gaussian-shaped impulse response model of the form

$$H(x, y) = K \exp\left\{-\left(\frac{x^2}{2b_x^2} + \frac{y^2}{2b_y^2}\right)\right\}$$   (11.2-7)

where K is an amplitude scaling constant and b_x and b_y are blur-spread factors. Under the assumption that the impulse response of a physical imaging system is independent of spectral wavelength and time, the observed image field can be modeled by the superposition integral equation

$$F_O^{(i)}(x, y, t_j) = O_C\left\{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} C(\alpha, \beta, t, \lambda)\,H(x, y;\, \alpha, \beta)\,d\alpha\,d\beta\right\}$$   (11.2-8)

where O_C{·} is an operator that models the spectral and temporal characteristics of the physical imaging system. If the impulse response is spatially invariant, the model reduces to the convolution integral equation

$$F_O^{(i)}(x, y, t_j) = O_C\left\{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} C(\alpha, \beta, t, \lambda)\,H(x - \alpha,\, y - \beta)\,d\alpha\,d\beta\right\}$$   (11.2-9)
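A minimal sketch of the Gaussian-shaped blur model of Eq. 11.2-7 applied through a space-invariant convolution in the spirit of Eq. 11.2-9 (the blur-spread values and the synthetic test image are illustrative assumptions):

```python
# Gaussian impulse response (Eq. 11.2-7) and space-invariant blurring (sketch).
import numpy as np
from scipy.ndimage import convolve

def gaussian_psf(size=9, bx=1.5, by=1.5):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    H = np.exp(-(x**2 / (2 * bx**2) + y**2 / (2 * by**2)))
    return H / H.sum()                     # normalize so the blur preserves mean intensity

ideal = np.zeros((64, 64))
ideal[28:36, 28:36] = 1.0                  # illustrative bright square on a dark field
observed = convolve(ideal, gaussian_psf(), mode="nearest")
```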

11.3. PHOTOGRAPHIC PROCESS MODELS

There are many different types of materials and chemical processes that have been utilized for photographic image recording. No attempt is made here either to survey the field of photography or to deeply investigate the physics of photography. References 6 to 8 contain such discussions. Rather, the attempt here is to develop mathematical models of the photographic process in order to characterize quantitatively the photographic components of an imaging system.

11.3.1. Monochromatic Photography

The most common material for photographic image recording is silver halide emulsion, depicted in Figure 11.3-1. In this material, silver halide grains are suspended in a transparent layer of gelatin that is deposited on a glass, acetate, or paper backing. If the backing is transparent, a transparency can be produced, and if the backing is a white paper, a reflection print can be obtained. When light strikes a grain, an electrochemical conversion process occurs, and part of the grain is converted to metallic silver. A development center is then said to exist in the grain. In the development process, a chemical developing agent causes grains with partial silver content to be converted entirely to metallic silver. Next, the film is fixed by chemically removing unexposed grains.

The photographic process described above is called a nonreversal process. It produces a negative image in the sense that the silver density is inversely proportional to the exposing light. A positive reflection print of an image can be obtained in a two-stage process with nonreversal materials. First, a negative transparency is produced, and then the negative transparency is illuminated to expose negative reflection print paper. The resulting silver density on the developed paper is then proportional to the light intensity that exposed the negative transparency. A positive transparency of an image can be obtained with a reversal type of film. This film is exposed and undergoes a first development similar to that of a nonreversal film. At this stage in the photographic process, all grains that have been exposed

FIGURE 11.3-1. Cross section of silver halide emulsion.


to light are converted completely to metallic silver. In the next step, the metallic silver grains are chemically removed. The film is then uniformly exposed to light, or alternatively, a chemical process is performed to expose the remaining silver halide grains. Then the exposed grains are developed and fixed to produce a positive transparency whose density is proportional to the original light exposure. The relationships between light intensity exposing a film and the density of silver grains in a transparency or print can be described quantitatively by sensitometric measurements. Through sensitometry, a model is sought that will predict the spectral light distribution passing through an illuminated transparency or reflected from a print as a function of the spectral light distribution of the exposing light and certain physical parameters of the photographic process. The first stage of the photographic process, that of exposing the silver halide grains, can be modeled to a first-order approximation by the integral equation
$$X(C) = k_x \int C(\lambda)\,L(\lambda)\,d\lambda$$   (11.3-1)

where X(C) is the integrated exposure, C(λ) represents the spectral energy distribution of the exposing light, L(λ) denotes the spectral sensitivity of the film or paper plus any spectral losses resulting from filters or optical elements, and k_x is an exposure constant that is controllable by an aperture or exposure time setting. Equation 11.3-1 assumes a fixed exposure time. Ideally, if the exposure time were to be increased by a certain factor, the exposure would be increased by the same factor. Unfortunately, this relationship does not hold exactly. The departure from linearity is called a reciprocity failure of the film. Another anomaly in exposure prediction is the intermittency effect, in which the exposures for a constant intensity light and for an intermittently flashed light differ even though the incident energy is the same for both sources. Thus, if Eq. 11.3-1 is to be utilized as an exposure model, it is necessary to observe its limitations: the equation is strictly valid only for a fixed exposure time and constant-intensity illumination. The transmittance τ(λ) of a developed reversal or nonreversal transparency as a function of wavelength can be ideally related to the density of silver grains by the exponential law of absorption as given by

$$\tau(\lambda) = \exp\{-d_e\,D(\lambda)\}$$   (11.3-2)

where D ( λ ) represents the characteristic density as a function of wavelength for a reference exposure value, and de is a variable proportional to the actual exposure. For monochrome transparencies, the characteristic density function D ( λ ) is reasonably constant over the visible region. As Eq. 11.3-2 indicates, high silver densities result in low transmittances, and vice versa. It is common practice to change the proportionality constant of Eq. 11.3-2 so that measurements are made in exponent ten units. Thus, the transparency transmittance can be equivalently written as

$$\tau(\lambda) = 10^{-d_x D(\lambda)}$$   (11.3-3)

where d_x is the density variable, inversely proportional to exposure, for exponent 10 units. From Eq. 11.3-3, it is seen that the photographic density is logarithmically related to the transmittance. Thus,

$$d_x\,D(\lambda) = -\log_{10}\tau(\lambda)$$   (11.3-4)

The reflectivity r_o(λ) of a photographic print as a function of wavelength is also inversely proportional to its silver density, and follows the exponential law of absorption of Eq. 11.3-2. Thus, from Eqs. 11.3-3 and 11.3-4, one obtains directly

$$r_o(\lambda) = 10^{-d_x D(\lambda)}$$   (11.3-5)

$$d_x\,D(\lambda) = -\log_{10} r_o(\lambda)$$   (11.3-6)

where dx is an appropriately evaluated variable proportional to the exposure of the photographic paper. The relational model between photographic density and transmittance or reflectivity is straightforward and reasonably accurate. The major problem is the next step of modeling the relationship between the exposure X(C) and the density variable dx. Figure 11.3-2a shows a typical curve of the transmittance of a nonreversal transparency

FIGURE 11.3-2. Relationships between transmittance, density, and exposure for a nonreversal film.


FIGURE 11.3-3. H & D curves for a reversal film as a function of development time.

as a function of exposure. It is to be noted that the curve is highly nonlinear except for a relatively narrow region in the lower exposure range. In Figure 11.3-2b, the curve of Figure 11.3-2a has been replotted as transmittance versus the logarithm of exposure. An approximate linear relationship is found to exist between transmittance and the logarithm of exposure, but operation in this exposure region is usually of little use in imaging systems. The parameter of interest in photography is the photographic density variable d_x, which is plotted as a function of exposure and logarithm of exposure in Figures 11.3-2c and 11.3-2d. The plot of density versus logarithm of exposure is known as the H & D curve after Hurter and Driffield, who performed fundamental investigations of the relationships between density and exposure. Figure 11.3-3 is a plot of the H & D curve for a reversal type of film. In Figure 11.3-2d, the central portion of the curve, which is approximately linear, has been approximated by the line defined by

$$d_x = \gamma\,\left[\log_{10} X(C) - K_F\right]$$   (11.3-7)

where γ represents the slope of the line and K_F denotes the intercept of the line with the log exposure axis. The slope of the curve, γ (gamma), is a measure of the contrast of the film, while the factor K_F is a measure of the film speed; that is, a measure of the base exposure required to produce a negative in the linear region of the H & D curve. If the exposure is restricted to the linear portion of the H & D curve, substitution of Eq. 11.3-7 into Eq. 11.3-3 yields a transmittance function

$$\tau(\lambda) = K_\tau(\lambda)\,[X(C)]^{-\gamma D(\lambda)}$$   (11.3-8a)

where

$$K_\tau(\lambda) \equiv 10^{\gamma K_F D(\lambda)}$$   (11.3-8b)
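A minimal numerical sketch of this linear-region model: given an exposure, compute the density variable of Eq. 11.3-7 and the resulting transmittance; the gamma, film speed, and characteristic density values are illustrative assumptions.

```python
# Linear-region H & D model (Eqs. 11.3-7 and 11.3-8 sketch).
import numpy as np

gamma = 1.2        # film contrast (slope of the H & D curve), illustrative
K_F = -1.0         # log-exposure intercept (film speed factor), illustrative
D_lambda = 1.0     # characteristic density at the wavelength of interest, illustrative

exposure = np.logspace(-2, 1, 50)                      # X(C) over several decades
d_x = gamma * (np.log10(exposure) - K_F)               # Eq. 11.3-7
tau = 10.0 ** (-d_x * D_lambda)                        # Eq. 11.3-3, equivalently Eq. 11.3-8
```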


FIGURE 11.3-4. Color film integral tripack.

With the exposure model of Eq. 11.3-1, the transmittance or reflection models of Eqs. 11.3-3 and 11.3-5, and the H & D curve, or its linearized model of Eq. 11.3-7, it is possible mathematically to model the monochrome photographic process.

11.3.2. Color Photography

Modern color photography systems utilize an integral tripack film, as illustrated in Figure 11.3-4, to produce positive or negative transparencies. In a cross section of this film, the first layer is a silver halide emulsion sensitive to blue light. A yellow filter following the blue emulsion prevents blue light from passing through to the green and red silver emulsions that follow in consecutive layers and are naturally sensitive to blue light. A transparent base supports the emulsion layers. Upon development, the blue emulsion layer is converted into a yellow dye transparency whose dye concentration is proportional to the blue exposure for a negative transparency and inversely proportional for a positive transparency. Similarly, the green and red emulsion layers become magenta and cyan dye layers, respectively. Color prints can be obtained by a variety of processes (7). The most common technique is to produce a positive print from a color negative transparency onto nonreversal color paper. In the establishment of a mathematical model of the color photographic process, each emulsion layer can be considered to react to light as does an emulsion layer of a monochrome photographic material. To a first approximation, this assumption is correct. However, there are often significant interactions between the emulsion and dye layers. Each emulsion layer possesses a characteristic sensitivity, as shown by the typical curves of Figure 11.3-5. The integrated exposures of the layers are given by
$$X_R(C) = d_R \int C(\lambda)\,L_R(\lambda)\,d\lambda$$   (11.3-9a)

$$X_G(C) = d_G \int C(\lambda)\,L_G(\lambda)\,d\lambda$$   (11.3-9b)

$$X_B(C) = d_B \int C(\lambda)\,L_B(\lambda)\,d\lambda$$   (11.3-9c)


FIGURE 11.3-5. Spectral sensitivities of typical film layer emulsions.

where d_R, d_G, d_B are proportionality constants whose values are adjusted so that the exposures are equal for a reference white illumination and so that the film is not saturated. In the chemical development process of the film, a positive transparency is produced with three absorptive dye layers of cyan, magenta, and yellow dyes. The transmittance τ_T(λ) of the developed transparency is the product of the transmittance of the cyan τ_TC(λ), the magenta τ_TM(λ), and the yellow τ_TY(λ) dyes. Hence,

$$\tau_T(\lambda) = \tau_{TC}(\lambda)\,\tau_{TM}(\lambda)\,\tau_{TY}(\lambda)$$   (11.3-10)

The transmittance of each dye is a function of its spectral absorption characteristic and its concentration. This functional dependence is conveniently expressed in terms of the relative density of each dye as

$$\tau_{TC}(\lambda) = 10^{-cD_{NC}(\lambda)}$$   (11.3-11a)

$$\tau_{TM}(\lambda) = 10^{-mD_{NM}(\lambda)}$$   (11.3-11b)

$$\tau_{TY}(\lambda) = 10^{-yD_{NY}(\lambda)}$$   (11.3-11c)

where c, m, y represent the relative amounts of the cyan, magenta, and yellow dyes, and D_NC(λ), D_NM(λ), D_NY(λ) denote the spectral densities of unit amounts of the dyes. For unit amounts of the dyes, the transparency transmittance is

$$\tau_{TN}(\lambda) = 10^{-D_{TN}(\lambda)}$$   (11.3-12a)


FIGURE 11.3-6. Spectral dye densities and neutral density of a typical reversal color film.

where

$$D_{TN}(\lambda) = D_{NC}(\lambda) + D_{NM}(\lambda) + D_{NY}(\lambda)$$   (11.3-12b)

Such a transparency appears to be a neutral gray when illuminated by a reference white light. Figure 11.3-6 illustrates the typical dye densities and neutral density for a reversal film. The relationship between the exposure values and dye layer densities is, in general, quite complex. For example, the amount of cyan dye produced is a nonlinear function not only of the red exposure, but is also dependent to a smaller extent on the green and blue exposures. Similar relationships hold for the amounts of magenta and yellow dyes produced by their exposures. Often, these interimage effects can be neglected, and it can be assumed that the cyan dye is produced only by the red exposure, the magenta dye by the green exposure, and the yellow dye by the blue exposure. For this assumption, the dye density–exposure relationship can be characterized by the Hurter–Driffield plot of equivalent neutral density versus the logarithm of exposure for each dye. Figure 11.3-7 shows a typical H & D curve for a reversal film. In the central portion of each H & D curve, the density versus exposure characteristic can be modeled as

(11.3-13a) (11.3-13b) (11.3-13c)


FIGURE 11.3-7. H & D curves for a typical reversal color film.

where γ_C, γ_M, γ_Y, representing the slopes of the curves in the linear region, are called dye layer gammas. The spectral energy distribution of light passing through a developed transparency is the product of the transparency transmittance and the incident illumination spectral energy distribution E(λ) as given by

$$C_T(\lambda) = E(\lambda)\,10^{-[cD_{NC}(\lambda) + mD_{NM}(\lambda) + yD_{NY}(\lambda)]}$$   (11.3-14)

Figure 11.3-8 is a block diagram of the complete color film recording and reproduction process. The original light with distribution C ( λ ) and the light passing through the transparency C T ( λ ) at a given resolution element are rarely identical. That is, a spectral match is usually not achieved in the photographic process. Furthermore, the lights C and CT usually do not even provide a colorimetric match.


FIGURE 11.3-8. Color film model.

11.4. DISCRETE IMAGE RESTORATION MODELS

This chapter began with an introduction to a general model of an imaging system and a digital restoration process. Next, typical components of the imaging system were described and modeled within the context of the general model. Now, the discussion turns to the development of several discrete image restoration models. In the development of these models, it is assumed that the spectral wavelength response and temporal response characteristics of the physical imaging system can be separated from the spatial and point characteristics. The following discussion considers only spatial and point characteristics. After each element of the digital image restoration system of Figure 11.1-1 is modeled, following the techniques described previously, the restoration system may be conceptually distilled to three equations:

Observed image:
$$F_S(m_1, m_2) = O_M\{F_I(n_1, n_2),\, N_1(m_1, m_2), \ldots, N_N(m_1, m_2)\}$$   (11.4-1a)

Compensated image:

$$F_K(k_1, k_2) = O_R\{F_S(m_1, m_2)\}$$   (11.4-1b)

Restored image:

$$\hat{F}_I(n_1, n_2) = O_D\{F_K(k_1, k_2)\}$$   (11.4-1c)


where F_S represents an array of observed image samples, F_I and F̂_I are arrays of ideal image points and estimates, respectively, F_K is an array of compensated image points from the digital restoration system, N_i denotes arrays of noise samples from various system elements, and O_M{·}, O_R{·}, O_D{·} represent general transfer functions of the imaging system, restoration processor, and display system, respectively. Vector-space equivalents of Eq. 11.4-1 can be formed for purposes of analysis by column scanning of the arrays of Eq. 11.4-1. These relationships are given by

$$\mathbf{f}_S = O_M\{\mathbf{f}_I, \mathbf{n}_1, \ldots, \mathbf{n}_N\}$$   (11.4-2a)

$$\mathbf{f}_K = O_R\{\mathbf{f}_S\}$$   (11.4-2b)

$$\hat{\mathbf{f}}_I = O_D\{\mathbf{f}_K\}$$   (11.4-2c)

Several estimation approaches to the solution of 11.4-1 or 11.4-2 are described in the following chapters. Unfortunately, general solutions have not been found; recourse must be made to specific solutions for less general models. The most common digital restoration model is that of Figure 11.4-1a, in which a continuous image field is subjected to a linear blur, the electrical sensor responds nonlinearly to its input intensity, and the sensor amplifier introduces additive Gaussian noise independent of the image field. The physical image digitizer that follows may also introduce an effective blurring of the sampled image as the result of sampling with extended pulses. In this model, display degradation is ignored.

FIGURE 11.4-1. Imaging and restoration models for a sampled blurred image with additive noise.


Figure 11.4-1b shows a restoration model for the imaging system. It is assumed that the imaging blur can be modeled as a superposition operation with an impulse response J(x, y) that may be space variant. The sensor is assumed to respond nonlinearly to the input field FB(x, y) on a point-by-point basis, and its output is subject to an additive noise field N(x, y). The effect of sampling with extended sampling pulses, which are assumed symmetric, can be modeled as a convolution of FO(x, y) with each pulse P(x, y) followed by perfect sampling.

The objective of the restoration is to produce an array of samples F̂_I(n_1, n_2) that are estimates of points on the ideal input image field FI(x, y) obtained by a perfect image digitizer sampling at a spatial period ∆I. To produce a digital restoration model, it is necessary quantitatively to relate the physical image samples FS(m_1, m_2) to the ideal image points FI(n_1, n_2) following the techniques outlined in Section 7.2. This is accomplished by truncating the sampling pulse equivalent impulse response P(x, y) to some spatial limits ±TP, and then extracting points from the continuous observed field FO(x, y) at a grid spacing ∆P. The discrete representation must then be carried one step further by relating points on the observed image field FO(x, y) to points on the image field FP(x, y) and the noise field N(x, y). The final step in the development of the discrete restoration model involves discretization of the superposition operation with J(x, y). There are two potential sources of error in this modeling process: truncation of the impulse responses J(x, y) and P(x, y), and quadrature integration errors. Both sources of error can be made negligibly small by choosing the truncation limits TB and TP large and by choosing the quadrature spacings ∆I and ∆P small. This, of course, increases the sizes of the arrays, and eventually, the amount of storage and processing required. Actually, as is subsequently shown, the numerical stability of the restoration estimate may be impaired by improving the accuracy of the discretization process!

The relative dimensions of the various arrays of the restoration model are important. Figure 11.4-2 shows the nested nature of the arrays. The image array observed, FO(k_1, k_2), is smaller than the ideal image array, FI(n_1, n_2), by the half-width of the truncated impulse response J(x, y). Similarly, the array of physical sample points FS(m_1, m_2) is smaller than the array of image points observed, FO(k_1, k_2), by the half-width of the truncated impulse response P(x, y). It is convenient to form vector equivalents of the various arrays of the restoration model in order to utilize the formal structure of vector algebra in the subsequent restoration analysis. Again, following the techniques of Section 7.2, the arrays are reindexed so that the first element appears in the upper-left corner of each array. Next, the vector relationships between the stages of the model are obtained by column scanning of the arrays to give

$$\mathbf{f}_S = \mathbf{B}_P\,\mathbf{f}_O$$   (11.4-3a)

$$\mathbf{f}_O = \mathbf{f}_P + \mathbf{n}$$   (11.4-3b)

$$\mathbf{f}_P = O_P\{\mathbf{f}_B\}$$   (11.4-3c)

$$\mathbf{f}_B = \mathbf{B}_B\,\mathbf{f}_I$$   (11.4-3d)


FIGURE 11.4-2. Relationships of sampled image arrays.

where the blur matrix B_P contains samples of P(x, y) and B_B contains samples of J(x, y). The nonlinear operation of Eq. 11.4-3c is defined as a point-by-point nonlinear transformation. That is,

f_P(i) = O_P{f_B(i)}        (11.4-4)

Equations 11.4-3a to 11.4-3d can be combined to yield a single equation for the observed physical image samples in terms of points on the ideal image:

f_S = B_P O_P{B_B f_I} + B_P n        (11.4-5)

Several special cases of Eq. 11.4-5 will now be defined. First, if the point nonlinearity is absent,

f_S = B f_I + n_B        (11.4-6)


FIGURE 11.4-3. Image arrays for underdetermined model: (a) original; (b) impulse response; (c) observation.

where B = B_P B_B and n_B = B_P n. This is the classical discrete model consisting of a set of linear equations with measurement uncertainty. Another case that will be defined for later discussion occurs when the spatial blur of the physical image digitizer is negligible. In this case,

f_S = O_P{B f_I} + n        (11.4-7)

where B = B_B is defined by Eq. 7.2-15. Chapter 12 contains results for several image restoration experiments based on the restoration model defined by Eq. 11.4-6. An artificial image has been generated for these computer simulation experiments (9).


The original image used for the analysis of underdetermined restoration techniques, shown in Figure 11.4-3a, consists of a 4 × 4 pixel square of intensity 245 placed against an extended background of intensity 10, referenced to an intensity scale of 0 to 255. All images are zoomed for display purposes. The Gaussian-shaped impulse response function is defined as

H(l_1, l_2) = K exp{ -[ l_1^2 / (2 b_C^2) + l_2^2 / (2 b_R^2) ] }        (11.4-8)

over a 5 × 5 point array where K is an amplitude scaling constant and bC and bR are blur-spread constants. In the computer simulation restoration experiments, the observed blurred image model has been obtained by multiplying the column-scanned original image of Figure 11.4-3a by the blur matrix B. Next, additive white Gaussian observation noise has been simulated by adding output variables from an appropriate random number generator to the blurred images. For display, all image points restored are clipped to the intensity range 0 to 255.
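The simulation procedure just described can be sketched in a few lines of code. The following fragment is a minimal illustration, assuming NumPy is available: it builds the Gaussian impulse response of Eq. 11.4-8, forms a blur matrix B for a small N × N test image, and generates a noisy blurred observation in the spirit of Eq. 11.4-6. The array sizes, the row-major scan order, the unit-volume normalization of the impulse response, and the noise variance are illustrative assumptions rather than the exact settings of the experiments.

import numpy as np

def gaussian_psf(L=5, b_C=1.2, b_R=1.2):
    # Gaussian-shaped impulse response of Eq. 11.4-8, normalized to unit volume
    # (the amplitude constant K of the text is absorbed by the normalization).
    half = (L - 1) // 2
    l1, l2 = np.meshgrid(np.arange(-half, half + 1),
                         np.arange(-half, half + 1), indexing="ij")
    h = np.exp(-(l1 ** 2 / (2 * b_C ** 2) + l2 ** 2 / (2 * b_R ** 2)))
    return h / h.sum()

def blur_matrix(N, psf):
    # Blur matrix B mapping a scanned N x N ideal image to the M x M array of
    # interior blurred samples, M = N - L + 1 (row-major scanning assumed).
    L = psf.shape[0]
    M = N - L + 1
    B = np.zeros((M * M, N * N))
    for m1 in range(M):
        for m2 in range(M):
            window = np.zeros((N, N))
            window[m1:m1 + L, m2:m2 + L] = psf
            B[m1 * M + m2, :] = window.ravel()
    return B

rng = np.random.default_rng(0)
N, L = 12, 5
f = np.full((N, N), 10.0)          # extended background of intensity 10
f[4:8, 4:8] = 245.0                # 4 x 4 square of intensity 245
B = blur_matrix(N, gaussian_psf(L))
g = B @ f.ravel()                  # blurred observation (Eq. 11.4-6 with n = 0)
g_noisy = np.clip(g + rng.normal(0.0, np.sqrt(10.0), g.shape), 0.0, 255.0)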

REFERENCES
1. M. Born and E. Wolf, Principles of Optics, 7th ed., Pergamon Press, New York, 1999.
2. J. W. Goodman, Introduction to Fourier Optics, 2nd ed., McGraw-Hill, New York, 1996.
3. E. L. O'Neill and E. H. O'Neill, Introduction to Statistical Optics, reprint ed., Addison-Wesley, Reading, MA, 1992.
4. H. H. Hopkins, Proc. Royal Society, A, 231, 1184, July 1955, 98.
5. R. E. Hufnagel and N. R. Stanley, "Modulation Transfer Function Associated with Image Transmission Through Turbulent Media," J. Optical Society of America, 54, 1, January 1964, 52–61.
6. K. Henney and B. Dudley, Handbook of Photography, McGraw-Hill, New York, 1939.
7. R. M. Evans, W. T. Hanson, and W. L. Brewer, Principles of Color Photography, Wiley, New York, 1953.
8. C. E. Mees, The Theory of Photographic Process, Macmillan, New York, 1966.
9. N. D. A. Mascarenhas and W. K. Pratt, "Digital Image Restoration Under a Regression Model," IEEE Trans. Circuits and Systems, CAS-22, 3, March 1975, 252–266.


12
POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES

A common defect in imaging systems is unwanted nonlinearities in the sensor and display systems. Postprocessing correction of sensor signals and preprocessing correction of display signals can reduce such degradations substantially (1). Such point restoration processing is usually relatively simple to implement. One of the most common image restoration tasks is that of spatial image restoration to compensate for image blur and to diminish noise effects. References 2 to 6 contain surveys of spatial image restoration methods.

12.1. SENSOR AND DISPLAY POINT NONLINEARITY CORRECTION

This section considers methods for compensation of point nonlinearities of sensors and displays.

12.1.1. Sensor Point Nonlinearity Correction

In imaging systems in which the source degradation can be separated into cascaded spatial and point effects, it is often possible directly to compensate for the point degradation (7). Consider a physical imaging system that produces an observed image field F_O(x, y) according to the separable model

F_O(x, y) = O_Q{ O_D{ C(x, y, λ) } }        (12.1-1)

FIGURE 12.1-1. Point luminance correction for an image sensor.

where C(x, y, λ) is the spectral energy distribution of the input light field, O_Q{·} represents the point amplitude response of the sensor and O_D{·} denotes the spatial and wavelength responses. Sensor luminance correction can then be accomplished by passing the observed image through a correction system with a point restoration operator O_R{·} ideally chosen such that

O_R{ O_Q{·} } = 1        (12.1-2)

For continuous images in optical form, it may be difficult to implement a desired point restoration operator if the operator is nonlinear. Compensation for images in analog electrical form can be accomplished with a nonlinear amplifier, while digital image compensation can be performed by arithmetic operators or by a table look-up procedure. Figure 12.1-1 is a block diagram that illustrates the point luminance correction methodology. The sensor input is a point light distribution function C that is converted to a binary number B for eventual entry into a computer or digital processor. In some imaging applications, processing will be performed directly on the binary representation, while in other applications, it will be preferable to convert to a real fixed-point computer number linearly proportional to the sensor input luminance. In the former case, the binary correction unit will produce a binary number B̃ that is designed to be linearly proportional to C, and in the latter case, the fixed-point correction unit will produce a fixed-point number C̃ that is designed to be equal to C. A typical measured response B versus sensor input luminance level C is shown in Figure 12.1-2a, while Figure 12.1-2b shows the corresponding compensated response that is desired. The measured response can be obtained by scanning a gray scale test chart of known luminance values and observing the digitized binary value B at each step. Repeated measurements should be made to reduce the effects of noise and measurement errors. For calibration purposes, it is convenient to regard the binary-coded luminance as a fixed-point binary number. As an example, if the luminance range is sliced to 4096 levels and coded with 12 bits, the binary representation would be

B = b_8 b_7 b_6 b_5 b_4 b_3 b_2 b_1 . b_-1 b_-2 b_-3 b_-4        (12.1-3)


FIGURE 12.1-2. Measured and compensated sensor luminance response.

The whole-number part in this example ranges from 0 to 255, and the fractional part divides each integer step into 16 subdivisions. In this format, the scanner can produce output levels over the range
0.0 ≤ B ≤ 255.9375        (12.1-4)

After the measured gray scale data points of Figure 12.1-2a have been obtained, a smooth analytic curve
C = g{B}        (12.1-5)

is fitted to the data. The desired luminance response in real number and binary number forms is


C̃ = C        (12.1-6a)

B̃ = B_max (C − C_min) / (C_max − C_min)        (12.1-6b)

Hence, the required compensation relationships are
C̃ = g{B}        (12.1-7a)

B̃ = B_max (g{B} − C_min) / (C_max − C_min)        (12.1-7b)

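As a concrete illustration of Eqs. 12.1-5 and 12.1-7, the fragment below, a minimal sketch assuming NumPy, fits a smooth curve C = g{B} to a handful of measured calibration points and builds an 8-bit correction look-up table for B̃. The calibration values, the cubic polynomial chosen as the fitting function, and the function names are illustrative assumptions, not data from the text.

import numpy as np

# Measured calibration data: digitized binary values B_k observed for a gray
# scale chart of known luminances C_k (values here are purely illustrative).
B_meas = np.array([5., 30., 70., 120., 165., 200., 228., 250.])
C_meas = np.array([0.00, 0.05, 0.15, 0.30, 0.50, 0.70, 0.88, 1.00])

# Fit a smooth analytic curve C = g{B} (Eq. 12.1-5); a cubic polynomial is one
# simple choice of fitting function.
g = np.poly1d(np.polyfit(B_meas, C_meas, deg=3))

# Correction look-up table implementing Eq. 12.1-7b,
# B_tilde = B_max (g{B} - C_min) / (C_max - C_min), for an 8-bit sensor.
B_max, C_min, C_max = 255.0, C_meas.min(), C_meas.max()
lut = np.clip(np.round(B_max * (g(np.arange(256)) - C_min) / (C_max - C_min)),
              0, 255).astype(np.uint8)

def correct_sensor(image_8bit):
    # Apply the point luminance correction by table look-up.
    return lut[image_8bit]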
The limits of the luminance function are commonly normalized to the range 0.0 to 1.0. To improve the accuracy of the calibration procedure, it is first wise to perform a rough calibration and then repeat the procedure as often as required to refine the correction curve. It should be observed that because B is a binary number, the corrected luminance value C̃ will be a quantized real number. Furthermore, the corrected binary coded luminance B̃ will be subject to binary roundoff of the right-hand side of Eq. 12.1-7b. As a consequence of the nonlinearity of the fitted curve C = g{B} and the amplitude quantization inherent to the digitizer, it is possible that some of the corrected binary-coded luminance values may be unoccupied. In other words, the image histogram of B̃ may possess gaps. To minimize this effect, the number of output levels can be limited to less than the number of input levels. For example, B may be coded to 12 bits and B̃ coded to only 8 bits. Another alternative is to add pseudorandom noise to B̃ to smooth out the occupancy levels.

Many image scanning devices exhibit a variable spatial nonlinear point luminance response. Conceptually, the point correction techniques described previously could be performed at each pixel value using the measured calibrated curve at that point. Such a process, however, would be mechanically prohibitive. An alternative approach, called gain correction, that is often successful is to model the variable spatial response by some smooth normalized two-dimensional curve G(j, k) over the sensor surface. Then, the corrected spatial response can be obtained by the operation

F̃(j, k) = F(j, k) / G(j, k)        (12.1-8)

where F(j, k) and F̃(j, k) represent the raw and corrected sensor responses, respectively. Figure 12.1-3 provides an example of adaptive gain correction of a charge coupled device (CCD) camera. Figure 12.1-3a is an image of a spatially flat light box surface obtained with the CCD camera. A line profile plot of a diagonal line through the original image is presented in Figure 12.1-3b. Figure 12.1-3c is the gain-corrected original, in which G(j, k) is obtained by Fourier domain low-pass filtering of the original image. The line profile plot of Figure 12.1-3d shows the "flattened" result.


FIGURE 12.1-3. Gain correction of a CCD camera image: (a) original; (b) line profile of original; (c) gain corrected; (d) line profile of gain corrected.
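A rough sketch of the gain correction of Eq. 12.1-8 is given below, assuming NumPy. The Gaussian-shaped Fourier domain low-pass filter used here to obtain a smooth G(j, k) from a flat-field frame, the filter width, and the function name are illustrative assumptions standing in for the low-pass filtering described in the text.

import numpy as np

def gain_correct(raw, flat_field, sigma=16.0):
    # Gain correction of Eq. 12.1-8: F_tilde(j, k) = F(j, k) / G(j, k), where
    # G is a smooth normalized gain surface estimated from a flat-field frame.
    J, K = flat_field.shape
    u = np.fft.fftfreq(J)[:, None]
    v = np.fft.fftfreq(K)[None, :]
    # Wide Gaussian low-pass filter applied in the Fourier domain.
    lowpass = np.exp(-2.0 * (np.pi * sigma) ** 2 * (u ** 2 + v ** 2))
    G = np.real(np.fft.ifft2(np.fft.fft2(flat_field) * lowpass))
    G = G / G.mean()                      # normalize the gain surface
    return raw / np.maximum(G, 1e-6)      # guard against division by zero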

12.1.2. Display Point Nonlinearity Correction

Correction of an image display for point luminance nonlinearities is identical in principle to the correction of point luminance nonlinearities of an image sensor. The procedure illustrated in Figure 12.1-4 involves distortion of the binary coded image luminance variable B to form a corrected binary coded luminance function B̃ so that the displayed luminance C̃ will be linearly proportional to B. In this formulation, the display may include a photographic record of a displayed light field. The desired overall response is
C̃ = B (C̃_max − C̃_min) / B_max + C̃_min        (12.1-9)

Normally, the maximum and minimum limits of the displayed luminance function C̃ are not absolute quantities, but rather are transmissivities or reflectivities normalized over a unit range.


FIGURE 12.1-4. Point luminance correction of an image display.

The measured response of the display and image reconstruction system is modeled by the nonlinear function
C = f{B}        (12.1-10)

Therefore, the desired linear response can be obtained by setting
B̃ = g{ B (C̃_max − C̃_min) / B_max + C̃_min }        (12.1-11)

where g{·} is the inverse function of f{·}. The experimental procedure for determining the correction function g{·} will be described for the common example of producing a photographic print from an image display. The first step involves the generation of a digital gray scale step chart over the full range of the binary number B. Usually, about 16 equally spaced levels of B are sufficient. Next, the reflective luminance must be measured over each step of the developed print to produce a plot such as in Figure 12.1-5. The data points are then fitted by the smooth analytic curve B = g{C}, which forms the desired transformation of Eq. 12.1-10. It is important that enough bits be allocated to B so that the discrete mapping g{·} can be approximated to sufficient accuracy. Also, the number of bits allocated to B̃ must be sufficient to prevent gray scale contouring as the result of the nonlinear spacing of display levels. A 10-bit representation of B and an 8-bit representation of B̃ should be adequate in most applications.

Image display devices such as cathode ray tube displays often exhibit spatial luminance variation. Typically, a displayed image is brighter at the center of the display screen than at its periphery. Correction techniques, as described by Eq. 12.1-8, can be utilized for compensation of spatial luminance variations.


FIGURE 12.1-5. Measured image display response.

12.2. CONTINUOUS IMAGE SPATIAL FILTERING RESTORATION

For the class of imaging systems in which the spatial degradation can be modeled by a linear-shift-invariant impulse response and the noise is additive, restoration of continuous images can be performed by linear filtering techniques. Figure 12.2-1 contains a block diagram for the analysis of such techniques. An ideal image F_I(x, y) passes through a linear spatial degradation system with an impulse response H_D(x, y) and is combined with additive noise N(x, y). The noise is assumed to be uncorrelated with the ideal image. The image field observed can be represented by the convolution operation as
F_O(x, y) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} F_I(α, β) H_D(x − α, y − β) dα dβ + N(x, y)        (12.2-1a)

or
F_O(x, y) = F_I(x, y) ∗ H_D(x, y) + N(x, y)        (12.2-1b)

The restoration system consists of a linear-shift-invariant filter defined by the impulse response H R ( x, y ). After restoration with this filter, the reconstructed image becomes
F̂_I(x, y) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} F_O(α, β) H_R(x − α, y − β) dα dβ        (12.2-2a)

or
F̂_I(x, y) = F_O(x, y) ∗ H_R(x, y)        (12.2-2b)


FIGURE 12.2-1. Continuous image restoration model.

Substitution of Eq. 12.2-1b into Eq. 12.2-2b yields
F̂_I(x, y) = [F_I(x, y) ∗ H_D(x, y) + N(x, y)] ∗ H_R(x, y)        (12.2-3)

It is analytically convenient to consider the reconstructed image in the Fourier transform domain. By the Fourier transform convolution theorem,
F̂_I(ω_x, ω_y) = [F_I(ω_x, ω_y) H_D(ω_x, ω_y) + N(ω_x, ω_y)] H_R(ω_x, ω_y)        (12.2-4)

where F̂_I(ω_x, ω_y), F_I(ω_x, ω_y), N(ω_x, ω_y), H_D(ω_x, ω_y), H_R(ω_x, ω_y) are the two-dimensional Fourier transforms of F̂_I(x, y), F_I(x, y), N(x, y), H_D(x, y), H_R(x, y), respectively. The following sections describe various types of continuous image restoration filters.

12.2.1. Inverse Filter

The earliest attempts at image restoration were based on the concept of inverse filtering, in which the transfer function of the degrading system is inverted to yield a restored image (8–12). If the restoration inverse filter transfer function is chosen so that
H_R(ω_x, ω_y) = 1 / H_D(ω_x, ω_y)        (12.2-5)

then the spectrum of the reconstructed image becomes
F̂_I(ω_x, ω_y) = F_I(ω_x, ω_y) + N(ω_x, ω_y) / H_D(ω_x, ω_y)        (12.2-6)


FIGURE 12.2-2. Typical spectra of an inverse filtering image restoration system.

Upon inverse Fourier transformation, the restored image field
F̂_I(x, y) = F_I(x, y) + (1 / 4π^2) ∫_{-∞}^{∞} ∫_{-∞}^{∞} [N(ω_x, ω_y) / H_D(ω_x, ω_y)] exp{i(ω_x x + ω_y y)} dω_x dω_y        (12.2-7)

is obtained. In the absence of source noise, a perfect reconstruction results, but if source noise is present, there will be an additive reconstruction error whose value can become quite large at spatial frequencies for which H_D(ω_x, ω_y) is small. Typically, H_D(ω_x, ω_y) and F_I(ω_x, ω_y) are small at high spatial frequencies, hence image quality becomes severely impaired in high-detail regions of the reconstructed image. Figure 12.2-2 shows typical frequency spectra involved in inverse filtering.

The presence of noise may severely affect the uniqueness of a restoration estimate. That is, small changes in N(x, y) may radically change the value of the estimate F̂_I(x, y). For example, consider the dither function Z(x, y) added to an ideal image to produce a perturbed image
F_Z(x, y) = F_I(x, y) + Z(x, y)        (12.2-8)

There may be many dither functions for which

∫_{-∞}^{∞} ∫_{-∞}^{∞} Z(α, β) H_D(x − α, y − β) dα dβ < N(x, y)        (12.2-9)

For such functions, the perturbed image field FZ ( x, y ) may satisfy the convolution integral of Eq. 12.2-1 to within the accuracy of the observed image field. Specifically, it can be shown that if the dither function is a high-frequency sinusoid of arbitrary amplitude, then in the limit
lim_{n→∞} ∫_{-∞}^{∞} ∫_{-∞}^{∞} sin{n(α + β)} H_D(x − α, y − β) dα dβ = 0        (12.2-10)

For image restoration, this fact is particularly disturbing, for two reasons. High-frequency signal components may be present in an ideal image, yet their presence may be masked by observation noise. Conversely, a small amount of observation noise may lead to a reconstruction of F I ( x, y ) that contains very large amplitude high-frequency components. If relatively small perturbations N ( x, y ) in the observation result in large dither functions for a particular degradation impulse response, the convolution integral of Eq. 12.2-1 is said to be unstable or ill conditioned. This potential instability is dependent on the structure of the degradation impulse response function. There have been several ad hoc proposals to alleviate noise problems inherent to inverse filtering. One approach (10) is to choose a restoration filter with a transfer function
H_R(ω_x, ω_y) = H_K(ω_x, ω_y) / H_D(ω_x, ω_y)        (12.2-11)

where H K ( ω x, ω y ) has a value of unity at spatial frequencies for which the expected magnitude of the ideal image spectrum is greater than the expected magnitude of the noise spectrum, and zero elsewhere. The reconstructed image spectrum is then
F̂_I(ω_x, ω_y) = F_I(ω_x, ω_y) H_K(ω_x, ω_y) + N(ω_x, ω_y) H_K(ω_x, ω_y) / H_D(ω_x, ω_y)        (12.2-12)

The result is a compromise between noise suppression and loss of high-frequency image detail. Another fundamental difficulty with inverse filtering is that the transfer function of the degradation may have zeros in its passband. At such points in the frequency spectrum, the inverse filter is not physically realizable, and therefore the filter must be approximated by a large value response at such points.
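The fragment below is a minimal NumPy sketch of inverse filtering with a circular frequency cutoff, in the spirit of Eqs. 12.2-5 and 12.2-11. The cutoff is expressed in cycles per pixel, and the handling of near-zero values of H_D by a bounded large response, along with the function names, are illustrative assumptions rather than a prescription from the text.

import numpy as np

def pad_psf(psf, shape):
    # Embed a small PSF at the center of a zero array of the given shape.
    out = np.zeros(shape)
    r0 = (shape[0] - psf.shape[0]) // 2
    c0 = (shape[1] - psf.shape[1]) // 2
    out[r0:r0 + psf.shape[0], c0:c0 + psf.shape[1]] = psf
    return out

def truncated_inverse_filter(observed, psf, cutoff, eps=1e-3):
    # H_R = H_K / H_D with H_K = 1 inside a circular cutoff and 0 outside.
    rows, cols = observed.shape
    H_D = np.fft.fft2(np.fft.ifftshift(pad_psf(psf, (rows, cols))))
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    H_K = (np.hypot(u, v) <= cutoff).astype(float)
    H_D_safe = np.where(np.abs(H_D) > eps, H_D, eps)  # bounded response at near-zeros of H_D
    return np.real(np.fft.ifft2(np.fft.fft2(observed) * H_K / H_D_safe))

Lowering the cutoff suppresses noise amplification at the cost of high-frequency image detail, which is the trade-off described above.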


12.2.2. Wiener Filter

It should not be surprising that inverse filtering performs poorly in the presence of noise because the filter design ignores the noise process. Improved restoration quality is possible with Wiener filtering techniques, which incorporate a priori statistical knowledge of the noise field (13–17).

In the general derivation of the Wiener filter, it is assumed that the ideal image F_I(x, y) and the observed image F_O(x, y) of Figure 12.2-1 are samples of two-dimensional, continuous stochastic fields with zero-value spatial means. The impulse response of the restoration filter is chosen to minimize the mean-square restoration error
E = E{ [F_I(x, y) − F̂_I(x, y)]^2 }        (12.2-13)

The mean-square error is minimized when the following orthogonality condition is met (13):
E{ [F_I(x, y) − F̂_I(x, y)] F_O(x′, y′) } = 0        (12.2-14)

for all image coordinate pairs ( x, y ) and ( x′, y′ ). Upon substitution of Eq. 12.2-2a for the restored image and some linear algebraic manipulation, one obtains
E{ F_I(x, y) F_O(x′, y′) } = ∫_{-∞}^{∞} ∫_{-∞}^{∞} E{ F_O(α, β) F_O(x′, y′) } H_R(x − α, y − β) dα dβ        (12.2-15)

Under the assumption that the ideal image and observed image are jointly stationary, the expectation terms can be expressed as covariance functions, as in Eq. 1.4-8. This yields
K_{F_I F_O}(x − x′, y − y′) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} K_{F_O F_O}(α − x′, β − y′) H_R(x − α, y − β) dα dβ        (12.2-16)

Then, taking the two-dimensional Fourier transform of both sides of Eq. 12.2-16 and solving for H R ( ω x, ω y ) , the following general expression for the Wiener filter transfer function is obtained:
H_R(ω_x, ω_y) = W_{F_I F_O}(ω_x, ω_y) / W_{F_O F_O}(ω_x, ω_y)        (12.2-17)

In the special case of the additive noise model of Figure 12.2-1:


W_{F_I F_O}(ω_x, ω_y) = H_D*(ω_x, ω_y) W_{F_I}(ω_x, ω_y)        (12.2-18a)

W_{F_O F_O}(ω_x, ω_y) = |H_D(ω_x, ω_y)|^2 W_{F_I}(ω_x, ω_y) + W_N(ω_x, ω_y)        (12.2-18b)

This leads to the additive noise Wiener filter
H_R(ω_x, ω_y) = H_D*(ω_x, ω_y) W_{F_I}(ω_x, ω_y) / [ |H_D(ω_x, ω_y)|^2 W_{F_I}(ω_x, ω_y) + W_N(ω_x, ω_y) ]        (12.2-19a)

or
H_R(ω_x, ω_y) = H_D*(ω_x, ω_y) / [ |H_D(ω_x, ω_y)|^2 + W_N(ω_x, ω_y) / W_{F_I}(ω_x, ω_y) ]        (12.2-19b)

In the latter formulation, the transfer function of the restoration filter can be expressed in terms of the signal-to-noise power ratio
SNR(ω_x, ω_y) ≡ W_{F_I}(ω_x, ω_y) / W_N(ω_x, ω_y)        (12.2-20)

at each spatial frequency. Figure 12.2-3 shows cross-sectional sketches of a typical ideal image spectrum, noise spectrum, blur transfer function, and the resulting Wiener filter transfer function. As noted from the figure, this version of the Wiener filter acts as a bandpass filter. It performs as an inverse filter at low spatial frequencies, and as a smooth rolloff low-pass filter at high spatial frequencies. Equation 12.2-19 is valid when the ideal image and observed image stochastic processes are zero mean. In this case, the reconstructed image Fourier transform is
F̂_I(ω_x, ω_y) = H_R(ω_x, ω_y) F_O(ω_x, ω_y)        (12.2-21)

If the ideal image and observed image means are nonzero, the proper form of the reconstructed image Fourier transform is
F̂_I(ω_x, ω_y) = H_R(ω_x, ω_y) [F_O(ω_x, ω_y) − M_O(ω_x, ω_y)] + M_I(ω_x, ω_y)        (12.2-22a)

where
M_O(ω_x, ω_y) = H_D(ω_x, ω_y) M_I(ω_x, ω_y) + M_N(ω_x, ω_y)        (12.2-22b)


FIGURE 12.2-3. Typical spectra of a Wiener filtering image restoration system.

and MI ( ω x, ω y ) and M N ( ω x, ω y ) are the two-dimensional Fourier transforms of the means of the ideal image and noise, respectively. It should be noted that Eq. 12.2-22 accommodates spatially varying mean models. In practice, it is common to estimate the mean of the observed image by its spatial average M O ( x, y ) and apply the Wiener filter of Eq. 12.2-19 to the observed image difference F O ( x, y ) – M O ( x, y ), and then add back the ideal image mean M I ( x, y ) to the Wiener filter result. It is useful to investigate special cases of Eq. 12.2-19. If the ideal image is assumed to be uncorrelated with unit energy, W F I ( ω x, ω y ) = 1 and the Wiener filter becomes
H_R(ω_x, ω_y) = H_D*(ω_x, ω_y) / [ |H_D(ω_x, ω_y)|^2 + W_N(ω_x, ω_y) ]        (12.2-23)


This version of the Wiener filter provides less noise smoothing than does the general case of Eq. 12.2-19. If there is no blurring of the ideal image, H D ( ω x, ω y ) = 1 and the Wiener filter becomes a noise smoothing filter with a transfer function
H_R(ω_x, ω_y) = 1 / [1 + W_N(ω_x, ω_y)]        (12.2-24)

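The additive noise Wiener filter of Eq. 12.2-19b can be sketched as follows, assuming NumPy. The blur transfer function H_D is presumed already sampled on the FFT grid (for example, by zero padding a PSF as in the inverse-filter sketch earlier), and the use of a single scalar signal-to-noise power ratio in place of the full spectra W_{F_I} and W_N is an illustrative simplification.

import numpy as np

def wiener_restore(observed, H_D, snr):
    # H_R = conj(H_D) / (|H_D|^2 + W_N / W_FI); here W_N / W_FI is approximated
    # by the reciprocal of a constant signal-to-noise power ratio estimate.
    H_R = np.conj(H_D) / (np.abs(H_D) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(observed) * H_R))

A frequency dependent SNR(ω_x, ω_y) could be substituted for the scalar value whenever estimates of the signal and noise power spectra are available.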
In many imaging systems, the impulse response of the blur may not be fixed; rather, it changes shape in a random manner. A practical example is the blur caused by imaging through a turbulent atmosphere. Obviously, a Wiener filter applied to this problem would perform better if it could dynamically adapt to the changing blur impulse response. If this is not possible, a design improvement in the Wiener filter can be obtained by considering the impulse response to be a sample of a two-dimensional stochastic process with a known mean shape and with a random perturbation about the mean modeled by a known power spectral density. Transfer functions for this type of restoration filter have been developed by Slepian (18).

12.2.3. Parametric Estimation Filters

Several variations of the Wiener filter have been developed for image restoration. Some techniques are ad hoc, while others have a quantitative basis. Cole (19) has proposed a restoration filter with a transfer function
H_R(ω_x, ω_y) = { W_{F_I}(ω_x, ω_y) / [ |H_D(ω_x, ω_y)|^2 W_{F_I}(ω_x, ω_y) + W_N(ω_x, ω_y) ] }^(1/2)        (12.2-25)

The power spectrum of the filter output is
W_{F̂_I}(ω_x, ω_y) = |H_R(ω_x, ω_y)|^2 W_{F_O}(ω_x, ω_y)        (12.2-26)

where W FO ( ω x, ω y ) represents the power spectrum of the observation, which is related to the power spectrum of the ideal image by
W_{F_O}(ω_x, ω_y) = |H_D(ω_x, ω_y)|^2 W_{F_I}(ω_x, ω_y) + W_N(ω_x, ω_y)        (12.2-27)

Thus, it is easily seen that the power spectrum of the reconstructed image is identical to the power spectrum of the ideal image field. That is,
W_{F̂_I}(ω_x, ω_y) = W_{F_I}(ω_x, ω_y)        (12.2-28)


For this reason, the restoration filter defined by Eq. 12.2-25 is called the image power-spectrum filter. In contrast, the power spectrum for the reconstructed image as obtained by the Wiener filter of Eq. 12.2-19 is
W_{F̂_I}(ω_x, ω_y) = |H_D(ω_x, ω_y)|^2 [W_{F_I}(ω_x, ω_y)]^2 / [ |H_D(ω_x, ω_y)|^2 W_{F_I}(ω_x, ω_y) + W_N(ω_x, ω_y) ]        (12.2-29)

In this case, the power spectra of the reconstructed and ideal images become identical only for a noise-free observation. Although equivalence of the power spectra of the ideal and reconstructed images appears to be an attractive feature of the image power-spectrum filter, it should be realized that it is more important that the Fourier spectra (Fourier transforms) of the ideal and reconstructed images be identical because their Fourier transform pairs are unique, but power-spectra transform pairs are not necessarily unique. Furthermore, the Wiener filter provides a minimum mean-square error estimate, while the image power-spectrum filter may result in a large residual mean-square error. Cole (19) has also introduced a geometrical mean filter, defined by the transfer function
H_R(ω_x, ω_y) = [H_D(ω_x, ω_y)]^(-S) { H_D*(ω_x, ω_y) W_{F_I}(ω_x, ω_y) / [ |H_D(ω_x, ω_y)|^2 W_{F_I}(ω_x, ω_y) + W_N(ω_x, ω_y) ] }^(1-S)        (12.2-30)

where 0 ≤ S ≤ 1 is a design parameter. If S = 1/2 and H_D = H_D*, the geometrical mean filter reduces to the image power-spectrum filter as given in Eq. 12.2-25. Hunt (20) has developed another parametric restoration filter, called the constrained least-squares filter, whose transfer function is of the form
H_R(ω_x, ω_y) = H_D*(ω_x, ω_y) / [ |H_D(ω_x, ω_y)|^2 + γ |C(ω_x, ω_y)|^2 ]        (12.2-31)

where γ is a design constant and C(ω_x, ω_y) is a design spectral variable. If γ = 1 and |C(ω_x, ω_y)|^2 is set equal to the reciprocal of the spectral signal-to-noise power ratio of Eq. 12.2-20, the constrained least-squares filter becomes equivalent to the Wiener filter of Eq. 12.2-19b. The spectral variable can also be used to minimize higher-order derivatives of the estimate.
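The constrained least-squares filter of Eq. 12.2-31 admits an equally brief sketch, again assuming NumPy. The choice of C(ω_x, ω_y) as the transfer function of a discrete Laplacian is a common smoothness constraint used here purely for illustration, and γ is left as a user-chosen design constant.

import numpy as np

def constrained_least_squares(observed, H_D, gamma):
    # H_R = conj(H_D) / (|H_D|^2 + gamma * |C|^2), with C the transfer function
    # of a discrete Laplacian placed at the array origin (illustrative choice).
    rows, cols = observed.shape
    lap = np.zeros((rows, cols))
    lap[0, 0] = 4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    C = np.fft.fft2(lap)
    H_R = np.conj(H_D) / (np.abs(H_D) ** 2 + gamma * np.abs(C) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(observed) * H_R))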


FIGURE 12.2-4. Blurred test images: (a) original; (b) blurred, b = 2.0; (c) blurred with noise, SNR = 10.0.

12.2.4. Application to Discrete Images

The inverse filtering, Wiener filtering, and parametric estimation filtering techniques developed for continuous image fields are often applied to the restoration of discrete images. The common procedure has been to replace each of the continuous spectral functions involved in the filtering operation by its discrete two-dimensional Fourier transform counterpart. However, care must be taken in this conversion process so that the discrete filtering operation is an accurate representation of the continuous convolution process and that the discrete form of the restoration filter impulse response accurately models the appropriate continuous filter impulse response.

Figures 12.2-4 to 12.2-7 present examples of continuous image spatial filtering techniques by discrete Fourier transform filtering. The original image of Figure 12.2-4a has been blurred with a Gaussian-shaped impulse response with b = 2.0 to obtain the blurred image of Figure 12.2-4b. White Gaussian noise has been added to the blurred image to give the noisy blurred image of Figure 12.2-4c, which has a signal-to-noise ratio of 10.0.


Figure 12.2-5 shows the results of inverse filter image restoration of the blurred and noisy-blurred images. In Figure 12.2-5a, the inverse filter transfer function follows Eq. 12.2-5 (i.e., no high-frequency cutoff). The restored image for the noise-free observation is corrupted completely by the effects of computational error. The computation was performed using 32-bit floating-point arithmetic. In Figure 12.2-5c the inverse filter restoration is performed with a circular cutoff inverse filter as defined by Eq. 12.2-11 with C = 200 for the 512 × 512 pixel noise-free observation. Some faint artifacts are visible in the restoration. In Figure 12.2-5e the cutoff frequency is reduced to C = 150. The restored image appears relatively sharp and free of artifacts. Figure 12.2-5b, d, and f show the result of inverse filtering on the noisy-blurred observed image with varying cutoff frequencies. These restorations illustrate the trade-off between the level of artifacts and the degree of deblurring.

Figure 12.2-6 shows the results of Wiener filter image restoration. In all cases, the noise power spectral density is white and the signal power spectral density is circularly symmetric Markovian with a correlation factor ρ. For the noise-free observation, the Wiener filter provides restorations that are free of artifacts but only slightly sharper than the blurred observation. For the noisy observation, the restoration artifacts are less noticeable than for an inverse filter.

Figure 12.2-7 presents restorations using the power spectrum filter. For a noise-free observation, the power spectrum filter gives a restoration of similar quality to an inverse filter with a low cutoff frequency. For a noisy observation, the power spectrum filter restorations appear to be grainier than for the Wiener filter.

The continuous image field restoration techniques derived in this section are advantageous in that they are relatively simple to understand and to implement using Fourier domain processing. However, these techniques face several important limitations. First, there is no provision for aliasing error effects caused by physical undersampling of the observed image. Second, the formulation inherently assumes that the quadrature spacing of the convolution integral is the same as the physical sampling. Third, the methods only permit restoration for linear, space-invariant degradation. Fourth, and perhaps most important, it is difficult to analyze the effects of numerical errors in the restoration process and to develop methods of combatting such errors. For these reasons, it is necessary to turn to the discrete model of a sampled blurred image developed in Section 7.2 and then reformulate the restoration problem on a firm numeric basis. This is the subject of the remaining sections of the chapter.

12.3. PSEUDOINVERSE SPATIAL IMAGE RESTORATION

The matrix pseudoinverse defined in Chapter 5 can be used for spatial image restoration of digital images when it is possible to model the spatial degradation as a vector-space operation on a vector of ideal image points yielding a vector of physical observed samples obtained from the degraded image (21–23).


FIGURE 12.2-5. Inverse filter image restoration on the blurred test images: (a) noise-free, no cutoff; (b) noisy, C = 100; (c) noise-free, C = 200; (d) noisy, C = 75; (e) noise-free, C = 150; (f) noisy, C = 50.


FIGURE 12.2-6. Wiener filter image restoration on the blurred test images; SNR = 10.0: (a) noise-free, r = 0.9; (b) noisy, r = 0.9; (c) noise-free, r = 0.5; (d) noisy, r = 0.5; (e) noise-free, r = 0.0; (f) noisy, r = 0.0.


FIGURE 12.2-7. Power spectrum filter image restoration on the blurred test images; SNR = 10.0: (a) noise-free, r = 0.5; (b) noisy, r = 0.5; (c) noisy, r = 0.5; (d) noisy, r = 0.0.

12.3.1. Pseudoinverse: Image Blur

The first application of the pseudoinverse to be considered is that of the restoration of a blurred image described by the vector-space model

g = Bf        (12.3-1)

as derived in Eq. 11.5-6, where g is a P × 1 vector (P = M^2) containing the M × M physical samples of the blurred image and f is a Q × 1 vector (Q = N^2) containing the N × N points of the ideal image.


The matrix B is the P × Q blur matrix whose elements are points on the impulse function. If the physical sample period and the quadrature representation period are identical, P will be smaller than Q, and the system of equations will be underdetermined. By oversampling the blurred image, it is possible to force P > Q or even P = Q. In either case, the system of equations is called overdetermined. An overdetermined set of equations can also be obtained if some of the elements of the ideal image vector can be specified through a priori knowledge. For example, if the ideal image is known to contain a limited size object against a black background (zero luminance), the elements of f beyond the limits may be set to zero. In discrete form, the restoration problem reduces to finding a solution f̂ to Eq. 12.3-1 in the sense that
B f̂ = g        (12.3-2)

Because the vector g is determined by physical sampling and the elements of B are specified independently by system modeling, there is no guarantee that an f̂ even exists to satisfy Eq. 12.3-2. If there is a solution, the system of equations is said to be consistent; otherwise, the system of equations is inconsistent. In Appendix 1 it is shown that inconsistency in the set of equations of Eq. 12.3-1 can be characterized as

g = Bf + e{f}        (12.3-3)

where e { f } is a vector of remainder elements whose value depends on f. If the set of equations is inconsistent, a solution of the form
f̂ = Wg        (12.3-4)

is sought for which the linear operator W minimizes the least-squares modeling error
E_M = [e{f̂}]^T [e{f̂}] = [g − B f̂]^T [g − B f̂]        (12.3-5)

This error is shown, in Appendix 1, to be minimized when the operator W = B$ is set equal to the least-squares inverse of B. The least-squares inverse is not necessarily unique. It is also proved in Appendix 1 that the generalized inverse operator W = B–, which is a special case of the least-squares inverse, is unique, minimizes the least-squares modeling error, and simultaneously provides a minimum norm estimate. That is, the sum of the squares of the elements of f̂ is a minimum for all possible minimum least-square error estimates. For the restoration of image blur, the generalized inverse provides a lowest-intensity restored image.


If Eq. 12.3-1 represents a consistent set of equations, one or more solutions may exist for Eq. 12.3-2. The solution commonly chosen is the estimate that minimizes the least-squares estimation error defined in the equivalent forms
E_E = (f − f̂)^T (f − f̂)        (12.3-6a)

or

E_E = tr{ (f − f̂)(f − f̂)^T }        (12.3-6b)

In Appendix 1 it is proved that the estimation error is minimum for a generalized inverse (W = B–) estimate. The resultant residual estimation error then becomes
E_E = f^T [ I − B–B ] f        (12.3-7a)

or
E_E = tr{ f f^T [ I − B–B ] }        (12.3-7b)

The estimate is perfect, of course, if B–B = I. Thus, it is seen that the generalized inverse is an optimal solution, in the sense defined previously, for both consistent and inconsistent sets of equations modeling image blur. From Eq. 5.5-5, the generalized inverse has been found to be algebraically equivalent to
B– = [B^T B]^-1 B^T        (12.3-8a)

if the P × Q matrix B is of rank Q. If B is of rank P, then
B– = B^T [B B^T]^-1        (12.3-8b)

For a consistent set of equations and a rank Q generalized inverse, the estimate
f̂ = B–g = B–Bf = [[B^T B]^-1 B^T] B f = f        (12.3-9)

is obviously perfect. However, in all other cases, a residual estimation error may occur. Clearly, it would be desirable to deal with an overdetermined blur matrix of rank Q in order to achieve a perfect estimate.


Unfortunately, this situation is rarely achieved in image restoration. Oversampling the blurred image can produce an overdetermined set of equations (P > Q), but the rank of the blur matrix is likely to be much less than Q because the rows of the blur matrix will become more linearly dependent with finer sampling.

A major problem in application of the generalized inverse to image restoration is dimensionality. The generalized inverse is a Q × P matrix, where P is equal to the number of pixel observations and Q is equal to the number of pixels to be estimated in an image. It is usually not computationally feasible to use the generalized inverse operator, defined by Eq. 12.3-8, over large images because of difficulties in reliably computing the generalized inverse and the large number of vector multiplications associated with Eq. 12.3-4. Computational savings can be realized if the blur matrix B is separable such that
B = B_C ⊗ B_R        (12.3-10)

where BC and BR are column and row blur operators. In this case, the generalized inverse is separable in the sense that
B– = B_C– ⊗ B_R–        (12.3-11)

where B_C– and B_R– are generalized inverses of B_C and B_R, respectively. Thus, when the blur matrix is of separable form, it becomes possible to form the estimate of the image by sequentially applying the generalized inverse of the row blur matrix to each row of the observed image array and then using the column generalized inverse operator on each column of the array.

Pseudoinverse restoration of large images can be accomplished in an approximate fashion by a block mode restoration process, similar to the block mode filtering technique of Section 9.3, in which the blurred image is partitioned into small blocks that are restored individually. It is wise to overlap the blocks and accept only the pixel estimates in the center of each restored block because these pixels exhibit the least uncertainty. Section 12.3.3 describes an efficient computational algorithm for pseudoinverse restoration for space-invariant blur.

Figure 12.3-1a shows a blurred image based on the model of Figure 11.5-3. Figure 12.3-1b shows a restored image using generalized inverse image restoration. In this example, the observation is noise free and the blur impulse response function is Gaussian shaped, as defined in Eq. 11.5-8, with bR = bC = 1.2. Only the center 8 × 8 region of the 12 × 12 blurred picture is displayed, zoomed to an image size of 256 × 256 pixels. The restored image appears to be visually improved compared to the blurred image, but the restoration is not identical to the original unblurred image of Figure 11.5-3a. The figure also gives the percentage least-squares error (PLSE), as defined in Appendix 3, between the blurred image and the original unblurred image, and between the restored image and the original. The restored image has less error than the blurred image.
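A small NumPy sketch of separable generalized inverse restoration along the lines of Eqs. 12.3-8, 12.3-10, and 12.3-11 is given below. numpy.linalg.pinv supplies the generalized (Moore–Penrose) inverse, and the image size, blur spread, and placement of the test square are illustrative assumptions.

import numpy as np

def blur_matrix_1d(N, h):
    # M x N one-dimensional blur matrix whose rows hold the truncated impulse
    # response h, with M = N - L + 1 interior observation points.
    L = len(h)
    M = N - L + 1
    B = np.zeros((M, N))
    for m in range(M):
        B[m, m:m + L] = h
    return B

N, L, b = 12, 5, 1.2
h = np.exp(-0.5 * (np.arange(L) - (L - 1) / 2) ** 2 / b ** 2)
h /= h.sum()
B_R = blur_matrix_1d(N, h)                  # row blur operator
B_C = blur_matrix_1d(N, h)                  # column blur operator

F = np.full((N, N), 10.0)
F[4:8, 4:8] = 245.0                         # ideal test image
G = B_C @ F @ B_R.T                         # separable blur of the columns and rows

# Separable generalized inverse restoration (Eq. 12.3-11): the column and row
# pseudoinverses are applied to the columns and rows of the observation.
F_hat = np.linalg.pinv(B_C) @ G @ np.linalg.pinv(B_R).T

Because the model here is underdetermined (M < N), F_hat is a minimum norm least-squares estimate rather than an exact recovery of F.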


FIGURE 12.3-1. Pseudoinverse image restoration for test image blurred with Gaussian shape impulse response. M = 8, N = 12, L = 5; bR = bC = 1.2; noise-free observation: (a) blurred, PLSE = 4.97%; (b) restored, PLSE = 1.41%.

12.3.2. Pseudoinverse: Image Blur Plus Additive Noise

In many imaging systems, an ideal image is subject to both blur and additive noise; the resulting vector-space model takes the form

g = Bf + n        (12.3-12)

where g and n are P × 1 vectors of the observed image field and noise field, respectively, f is a Q × 1 vector of ideal image points, and B is a P × Q blur matrix. The vector n is composed of two additive components: samples of an additive external noise process and elements of the vector difference (g − Bf) arising from modeling errors in the formulation of B. As a result of the noise contribution, there may be no vector solutions f̂ that satisfy Eq. 12.3-12. However, as indicated in Appendix 1, the generalized inverse B– can be utilized to determine a least-squares error, minimum norm estimate. In the absence of modeling error, the estimate
f̂ = B–g = B–Bf + B–n        (12.3-13)


differs from the ideal image because of the additive noise contribution B–n. Also, for the underdetermined model, B–B will not be an identity matrix. If B is an overdetermined rank Q matrix, as defined in Eq. 12.3-8a, then B–B = I, and the resulting estimate is equal to the original image vector f plus a perturbation vector ∆f = B–n.


The perturbation error in the estimate can be measured as the ratio of the vector norm of the perturbation to the vector norm of the estimate. It can be shown (24, p. 52) that the relative error is subject to the bound
||∆f|| / ||f̂|| < ||B–|| ⋅ ||B|| ( ||n|| / ||g|| )        (12.3-14)

The product ||B–|| ⋅ ||B||, which is called the condition number C{B} of B, determines the relative error in the estimate in terms of the ratio of the vector norm of the noise to the vector norm of the observation. The condition number can be computed directly or found in terms of the ratio
C{B} = W_1 / W_N        (12.3-15)

of the largest W_1 to smallest W_N singular values of B. The noise perturbation error for the underdetermined matrix B is also governed by Eqs. 12.3-14 and 12.3-15 if W_N is defined to be the smallest nonzero singular value of B (25, p. 41). Obviously, the larger the condition number of the blur matrix, the greater will be the sensitivity to noise perturbations.

Figure 12.3-2 contains image restoration examples for a Gaussian-shaped blur function for several values of the blur standard deviation and a noise variance of 10.0 on an amplitude scale of 0.0 to 255.0. As expected, observation noise degrades the restoration. Also as expected, the restoration for a moderate degree of blur is worse than the restoration for less blur. However, this trend does not continue; the restoration for severe blur is actually better in a subjective sense than for moderate blur. This seemingly anomalous behavior, which results from spatial truncation of the point-spread function, can be explained in terms of the condition number of the blur matrix.

Figure 12.3-3 is a plot of the condition number of the blur matrix of the previous examples as a function of the blur coefficient (21). For small amounts of blur, the condition number is low. A maximum is attained for moderate blur, followed by a decrease in the curve for increasing values of the blur coefficient. The curve tends to stabilize as the blur coefficient approaches infinity. This curve provides an explanation for the previous experimental results. In the restoration operation, the blur impulse response is spatially truncated over a square region of 5 × 5 quadrature points. As the blur coefficient increases, for fixed M and N, the blur impulse response becomes increasingly wider, and its tails become truncated to a greater extent. In the limit, the nonzero elements in the blur matrix become constant values, and the condition number assumes a constant level. For small values of the blur coefficient, the truncation effect is negligible, and the condition number curve follows an ascending path toward infinity with the asymptotic value obtained for a smoothly represented blur impulse response. As the blur factor increases, the number of nonzero elements in the blur matrix increases, and the condition number stabilizes to a constant value. In effect, a trade-off exists between numerical errors caused by ill-conditioning and modeling accuracy.


FIGURE 12.3-2. Pseudoinverse image restoration for test image blurred with Gaussian shape impulse response. M = 8, N = 12, L = 5; noisy observation, Var = 10.0. Blurred/restored pairs: (a, b) bR = bC = 0.6, PLSE = 1.30% / 0.21%; (c, d) bR = bC = 1.2, PLSE = 4.91% / 2695.81%; (e, f) bR = bC = 50.0, PLSE = 7.99% / 7.29%.


FIGURE 12.3-3. Condition number curve.

Although this conclusion is formulated on the basis of a particular degradation model, the inference seems to be more general because the inverse of the integral operator that describes the blur is unbounded. Therefore, the closer the discrete model follows the continuous model, the greater the degree of ill-conditioning. A move in the opposite direction reduces singularity but imposes modeling errors. This inevitable dilemma can only be broken with the intervention of correct a priori knowledge about the original image.
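The condition number of Eq. 12.3-15 is easily examined numerically. The NumPy sketch below computes C{B} = W_1/W_N from the singular values of a small one-dimensional Gaussian blur matrix, taking W_N as the smallest nonzero singular value; the matrix sizes, blur coefficients, and tolerance are illustrative assumptions (compare the behavior of Figure 12.3-3).

import numpy as np

def gaussian_blur_matrix(N=12, L=5, b=1.0):
    # Truncated one-dimensional Gaussian blur matrix with M = N - L + 1 rows.
    h = np.exp(-0.5 * (np.arange(L) - (L - 1) / 2) ** 2 / b ** 2)
    h /= h.sum()
    M = N - L + 1
    B = np.zeros((M, N))
    for m in range(M):
        B[m, m:m + L] = h
    return B

def condition_number(B, tol=1e-12):
    # C{B} = W_1 / W_N, with W_N the smallest nonzero singular value.
    w = np.linalg.svd(B, compute_uv=False)
    w = w[w > tol * w[0]]
    return w[0] / w[-1]

for b in (0.6, 1.2, 5.0, 50.0):
    print(b, condition_number(gaussian_blur_matrix(b=b)))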

12.3.3. Pseudoinverse Computational Algorithms

Efficient computational algorithms have been developed by Pratt and Davarian (22) for pseudoinverse image restoration for space-invariant blur. To simplify the explanation of these algorithms, consideration will initially be limited to a one-dimensional example. Let the N × 1 vector f_T and the M × 1 vector g_T be formed by selecting the center portions of f and g, respectively. The truncated vectors are obtained by dropping L – 1 elements at each end of the appropriate vector. Figure 12.3-4a illustrates the relationships of all vectors for N = 9 original vector points, M = 7 observations, and an impulse response of length L = 3. The elements f_T and g_T are entries in the adjoint model

q_E = C f_E + n_E        (12.3-16a)


FIGURE 12.3-4. One-dimensional sampled continuous and discrete convolution.

where the extended vectors q_E, f_E and n_E are defined in correspondence with

[g; 0] = C [f_T; 0] + [n_T; 0]        (12.3-16b)

where g is an M × 1 vector, f_T and n_T are K × 1 vectors, and C is a J × J matrix. As noted in Figure 12.3-4b, the vector q is identical to the image observation g over its R = M − 2(L − 1) center elements. The outer elements of q can be approximated by
q ≈ q̃ = Eg        (12.3-17)

where E, called an extraction weighting matrix, is defined as

E = [ a  0  0 ;  0  I  0 ;  0  0  b ]        (12.3-18)

where a and b are L × L submatrices, which perform a windowing function similar to that described in Section 9.4.2 (22). Combining Eqs. 12.3-17 and 12.3-18, an estimate of fT can be obtained from
f̂_E = C^-1 q̃_E        (12.3-19)


FIGURE 12.3-5. Pseudoinverse image restoration for small degree of horizontal blur, bR = 1.5: (a) original image vectors, f; (b) truncated image vectors, f_T; (c) observation vectors, g; (d) windowed observation vectors, q; (e) restoration without windowing, f̂_T; (f) restoration with windowing, f̂_T.


Equation 12.3-19 can be solved efficiently using Fourier domain convolution techniques, as described in Section 9.3. Computation of the pseudoinverse by Fourier processing requires on the order of J^2 (1 + 4 log2 J) operations in two dimensions; spatial domain computation requires about M^2 N^2 operations. As an example, for M = 256 and L = 17, the computational savings are nearly 1750:1 (22).

Figure 12.3-5 is a computer simulation example of the operation of the pseudoinverse image restoration algorithm for one-dimensional blur of an image. In the first step of the simulation, the center K pixels of the original image are extracted to form the set of truncated image vectors f_T shown in Figure 12.3-5b. Next, the truncated image vectors are subjected to a simulated blur with a Gaussian-shaped impulse response with bR = 1.5 to produce the observation of Figure 12.3-5c. Figure 12.3-5d shows the result of the extraction operation on the observation. Restoration results without and with the extraction weighting operator E are presented in Figure 12.3-5e and f, respectively.

FIGURE 12.3-6. Pseudoinverse image restoration for moderate and high degrees of horizontal blur: (a) observation, g, and (b) restoration, f̂_T, for Gaussian blur, bR = 2.0; (c) observation, g, and (d) restoration, f̂_T, for uniform motion blur, L = 15.0.


These results graphically illustrate the importance of the extraction operation. Without weighting, errors at the observation boundary completely destroy the estimate in the boundary region, but with weighting the restoration is subjectively satisfying, and the restoration error is significantly reduced.

Figure 12.3-6 shows simulation results for the experiment of Figure 12.3-5 when the degree of blur is increased by setting bR = 2.0. The higher degree of blur greatly increases the ill-conditioning of the blur matrix, and the residual error in formation of the modified observation after weighting leads to the disappointing estimate of Figure 12.3-6b. Figure 12.3-6c and d illustrate the restoration improvement obtained with the pseudoinverse algorithm for horizontal image motion blur. In this example, the blur impulse response is constant, and the corresponding blur matrix is better conditioned than the blur matrix for Gaussian image blur.
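The core of the Fourier domain solution of Eq. 12.3-19 can be sketched in a few lines, assuming NumPy: a circulant convolution matrix is diagonalized by the DFT, so the matrix inverse reduces to a pointwise division of transforms. The extraction weighting step and the treatment of near-zero transfer function values are simplified here, so the fragment is illustrative rather than a full implementation of the Pratt–Davarian algorithm.

import numpy as np

def circulant_deconvolve(q_tilde, h, eps=1e-8):
    # Solve f_E = C^-1 q_E for a circulant blur matrix C by Fourier domain
    # division; h is the blur impulse response zero padded to len(q_tilde).
    H = np.fft.fft(h, n=len(q_tilde))
    H_safe = np.where(np.abs(H) > eps, H, eps)   # crude guard against zeros of H
    return np.real(np.fft.ifft(np.fft.fft(q_tilde) / H_safe))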

12.4. SVD PSEUDOINVERSE SPATIAL IMAGE RESTORATION

In Appendix 1 it is shown that any matrix can be decomposed into a series of eigenmatrices by the technique of singular value decomposition. For image restoration, this concept has been extended (26–29) to the eigendecomposition of blur matrices in the imaging model

g = Bf + n        (12.4-1)

From Eq. A1.2-3, the blur matrix B may be expressed as
B = U Λ^(1/2) V^T        (12.4-2)

where the P × P matrix U and the Q × Q matrix V are unitary matrices composed of the eigenvectors of BB^T and B^T B, respectively, and Λ is a P × Q matrix whose diagonal terms λ(i) contain the eigenvalues of BB^T and B^T B. As a consequence of the orthogonality of U and V, it is possible to express the blur matrix in the series form
B = Σ_{i=1}^{R} [λ(i)]^(1/2) u_i v_i^T        (12.4-3)

where ui and v i are the ith columns of U and V, respectively, and R is the rank of the matrix B. From Eq. 12.4-2, because U and V are unitary matrices, the generalized inverse of B is
B– = V Λ^(-1/2) U^T = Σ_{i=1}^{R} [λ(i)]^(-1/2) v_i u_i^T        (12.4-4)

Figure 12.4-1 shows an example of the SVD decomposition of a blur matrix. The generalized inverse estimate can then be expressed as


FIGURE 12.4-1. SVD decomposition of a blur matrix for bR = 2.0, M = 8, N = 16, L = 9: (a) blur matrix B; (b)–(i) eigenmatrices u_i v_i^T for i = 1, ..., 8 with λ(i) = 0.871, 0.573, 0.285, 0.108, 0.034, 0.014, 0.011, 0.010.


f̂ = B–g = V Λ^(-1/2) U^T g        (12.4-5a)

or, equivalently,
f̂ = Σ_{i=1}^{R} [λ(i)]^(-1/2) v_i u_i^T g = Σ_{i=1}^{R} [λ(i)]^(-1/2) [u_i^T g] v_i        (12.4-5b)

recognizing the fact that the inner product u_i^T g is a scalar. Equation 12.4-5 provides the basis for sequential estimation; the kth estimate of f in a sequence of estimates is equal to
f̂_k = f̂_{k-1} + [λ(k)]^(-1/2) [u_k^T g] v_k        (12.4-6)
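A compact NumPy sketch of the sequential estimation of Eqs. 12.4-5b and 12.4-6 is shown below. numpy.linalg.svd returns singular values s_k = [λ(k)]^(1/2), so each step adds one term (1/s_k)[u_k^T g] v_k; the generator form and the stopping choice are illustrative assumptions.

import numpy as np

def sequential_svd_estimates(B, g, T=None):
    # Yield the running estimates f_hat_k of Eq. 12.4-6 for k = 1, ..., T.
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    T = len(s) if T is None else T
    f_hat = np.zeros(B.shape[1])
    for k in range(T):
        f_hat = f_hat + (U[:, k] @ g / s[k]) * Vt[k, :]
        yield f_hat.copy()

In practice the iteration would be stopped, interactively or by monitoring the error, before the small singular values begin to amplify the observation noise.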

One of the principal advantages of the sequential formulation is that problems of ill-conditioning generally occur only for higher-order singular values. Thus, it is possible interactively to terminate the expansion before numerical problems occur. Figure 12.4-2 shows an example of sequential SVD restoration for the underdetermined model example of Figure 11.5-3 with a poorly conditioned Gaussian blur matrix. A one-step pseudoinverse would have resulted in the final image estimate, which is totally overwhelmed by numerical errors. The sixth step, which is the best subjective restoration, offers a considerable improvement over the blurred original, but the lowest least-squares error occurs for three singular values.

The major limitation of the SVD image restoration method formulation in Eqs. 12.4-5 and 12.4-6 is computational. The eigenvectors u_i and v_i must first be determined for the matrices BB^T and B^T B. Then the vector computations of Eq. 12.4-5 or 12.4-6 must be performed. Even if B is direct-product separable, permitting separable row and column SVD pseudoinversion, the computational task is staggering in the general case.

The pseudoinverse computational algorithm described in the preceding section can be adapted for SVD image restoration in the special case of space-invariant blur (23). From the adjoint model of Eq. 12.3-16 given by

q_E = C f_E + n_E        (12.4-7)

the circulant matrix C can be expanded in SVD form as
C = X Δ^(1/2) Y*^T        (12.4-8)
where X and Y are unitary matrices defined by


FIGURE 12.4-2. SVD restoration for test image blurred with a Gaussian-shaped impulse response. bR = bC = 1.2, M = 8, N = 12, L = 5; noisy observation, Var = 10.0: (a) 8 singular values, PLSE = 2695.81%; (b) 7 singular values, PLSE = 148.93%; (c) 6 singular values, PLSE = 6.88%; (d) 5 singular values, PLSE = 3.31%; (e) 4 singular values, PLSE = 3.06%; (f) 3 singular values, PLSE = 3.05%; (g) 2 singular values, PLSE = 9.52%; (h) 1 singular value, PLSE = 9.52%.

X^T [C C^T] X* = Δ        (12.4-9a)

Y^T [C^T C] Y* = Δ        (12.4-9b)

Because C is circulant, CC^T is also circulant. Therefore X and Y must be equivalent to the Fourier transform matrix A or A^-1 because the Fourier matrix produces a diagonalization of a circulant matrix. For purposes of standardization, let X = Y = A^-1. As a consequence, the eigenvectors x_i = y_i, which are rows of X and Y, are actually the complex exponential basis functions
x_k(j) = exp{ (2πi / J)(k − 1)(j − 1) }        (12.4-10)

of a Fourier transform for 1 ≤ j, k ≤ J . Furthermore,
Δ = C C*^T        (12.4-11)

where C is the Fourier domain circular area convolution matrix. Then, in correspondence with Eq. 12.4-5
f̂_E = A^-1 Δ^(-1/2) A q̃_E        (12.4-12)

where q̃_E is the modified blurred image observation of Eqs. 12.3-19 and 12.3-20. Equation 12.4-12 should be recognized as being a Fourier domain pseudoinverse estimate. Sequential SVD restoration, analogous to the procedure of Eq. 12.4-6, can be obtained by replacing the SVD pseudoinverse matrix Δ^(-1/2) of Eq. 12.4-12 by the operator

Δ_T^(-1/2) = diag{ [Δ_T(1)]^(-1/2), [Δ_T(2)]^(-1/2), …, [Δ_T(T)]^(-1/2), 0, …, 0 }        (12.4-13)


FIGURE 12.4-3. Sequential SVD pseudoinverse image restoration for horizontal Gaussian blur, bR = 3.0, L = 23, J = 256: (a) blurred observation; (b) restoration, T = 58; (c) restoration, T = 60.

Complete truncation of the high-frequency terms to avoid ill-conditioning effects may not be necessary in all situations. As an alternative to truncation, the diagonal zero elements can be replaced by [Δ_T(T)]^(-1/2) or perhaps by some sequence that declines in value as a function of frequency. This concept is actually analogous to the truncated inverse filtering technique defined by Eq. 12.2-11 for continuous image fields. Figure 12.4-3 shows an example of SVD pseudoinverse image restoration for one-dimensional Gaussian image blur with bR = 3.0. It should be noted that the restoration attempt with the standard pseudoinverse shown in Figure 12.3-6b was subject to severe ill-conditioning errors at a blur spread of bR = 2.0.

STATISTICAL ESTIMATION SPATIAL IMAGE RESTORATION

355

12.5. STATISTICAL ESTIMATION SPATIAL IMAGE RESTORATION A fundamental limitation of pseudoinverse restoration techniques is that observation noise may lead to severe numerical instability and render the image estimate unusable. This problem can be alleviated in some instances by statistical restoration techniques that incorporate some a priori statistical knowledge of the observation noise (21). 12.5.1. Regression Spatial Image Restoration Consider the vector-space model g = Bf + n

(12.5-1)

for a blurred image plus additive noise in which B is a P × Q blur matrix and the noise is assumed to be zero mean with known covariance matrix Kn. The regression method seeks to form an estimate
ˆ f = Wg

(12.5-2)

where W is a restoration matrix that minimizes the weighted error measure
ˆ ˆ ˆ Θ { f } = [ g – Bf ] K n [ g – Bf ]
T –1

(12.5-3)

Minimization of the restoration error can be accomplished by the classical method ˆ ˆ of setting the partial derivative of Θ { f } with respect to f to zero. In the underdetermined case, for which P < Q , it can be shown (30) that the minimum norm estimate regression operator is
–1 – –1

W = [K B] K

(12.5-4)

where K is a matrix obtained from the spectral factorization
K n = KK
T

(12.5-5)
2

of the noise covariance matrix K n. For white noise, K = σ n I, and the regression operator assumes the form of a rank P generalized inverse for an underdetermined system as given by Eq. 12.3-8b.

356

POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES

12.5.2. Wiener Estimation Spatial Image Restoration With the regression technique of spatial image restoration, the noise field is modeled as a sample of a two-dimensional random process with a known mean and covariance function. Wiener estimation techniques assume, in addition, that the ideal image is also a sample of a two-dimensional random process with known first and second moments (21,22,31). Wiener Estimation: General Case. Consider the general discrete model of Figure 12.5-1 in which a Q × 1 image vector f is subject to some unspecified type of point and spatial degradation resulting in the P × 1 vector of observations g. An estimate of f is formed by the linear operation
ˆ = Wg + b f

(12.5-6)

where W is a Q × P restoration matrix and b is a Q × 1 bias vector. The objective of Wiener estimation is to choose W and b to minimize the mean-square restoration error, which may be defined as
T E = E{[f – ˆ] [f – ˆ] } f f

(12.5-7a)

or
T E = tr { E { [ f – ˆ ] [ f – ˆ ] } } f f

(12.5-7b)

Equation 12.5-7a expresses the error in inner-product form as the sum of the squares of the elements of the error vector [ f – ˆ ], while Eq. 12.5-7b forms the covariance f matrix of the error, and then sums together its variance terms (diagonal elements) by the trace operation. Minimization of Eq. 12.5-7 in either of its forms can be accomplished by differentiation of E with respect to ˆ . An alternative approach, f

FIGURE 12.5-1. Wiener estimation for spatial image restoration.

STATISTICAL ESTIMATION SPATIAL IMAGE RESTORATION

357

which is of quite general utility, is to employ the orthogonality principle (32, p. 219) to determine the values of W and b that minimize the mean-square error. In the context of image restoration, the orthogonality principle specifies two necessary and sufficient conditions for the minimization of the mean-square restoration error: 1. The expected value of the image estimate must equal the expected value of the image
E{ ˆ} = E{f } f

(12.5-8)

2. The restoration error must be orthogonal to the observation about its mean
T E{[f – ˆ][g – E{g }] } = 0 f

(12.5-9)

From condition 1, one obtains b = E { f } – WE { g }

(12.5-10)

and from condition 2
E{[ W + b – f][g – E{g }] } = 0
T

(12.5-11)

Upon substitution for the bias vector b from Eq. 12.5-10 and simplification, Eq. 12.5-11 yields
W = K fg [ K gg ]
–1

(12.5-12)

where K gg is the P × P covariance matrix of the observation vector (assumed nonsingular) and K fg is the Q × P cross-covariance matrix between the image and observation vectors. Thus, the optimal bias vector b and restoration matrix W may be directly determined in terms of the first and second joint moments of the ideal image and observation vectors. It should be noted that these solutions apply for nonlinear and space-variant degradations. Subsequent sections describe applications of Wiener estimation to specific restoration models. Wiener Estimation: Image Blur with Additive Noise. For the discrete model for a blurred image subjective to additive noise given by g = Bf + n

(12.5-13)

358

POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES

the Wiener estimator is composed of a bias term b = E { f } – WE { g } = E { f } – WBE { f } + WE { n }

(12.5-14)

and a matrix operator
W = K fg [ K gg ]
–1

= K f B [ BK f B + K n ]

T

T

–1

(12.5-15)
2

If the ideal image field is assumed uncorrelated, K f = σ f I where σ f represents the image energy. Equation 12.5-15 then reduces to
W = σ f B [ σ f BB + K n ]
2 2 T 2 T –1

2

(12.5-16)

For a white-noise process with energy σ n , the Wiener filter matrix becomes
T  T σn  W = B  BB + ----- I 2  σ  f 2 2 2

(12.5-17)

As the ratio of image energy to noise energy ( σ f ⁄ σ n ) approaches infinity, the Wiener estimator of Eq. 12.5-17 becomes equivalent to the generalized inverse estimator. Figure 12.5-2 shows restoration examples for the model of Figure 11.5-3 for a Gaussian-shaped blur function. Wiener restorations of large size images are given in Figure 12.5-3 using a fast computational algorithm developed by Pratt and Davarian (22). In the example of Figure 12.5-3a illustrating horizontal image motion blur, the impulse response is of rectangular shape of length L = 11. The center pixels have been restored and replaced within the context of the blurred image to show the visual restoration improvement. The noise level and blur impulse response of the electron microscope original image of Figure 12.5-3c were estimated directly from the photographic transparency using techniques to be described in Section 12.7. The parameters were then utilized to restore the center pixel region, which was then replaced in the context of the blurred original.

12.6. CONSTRAINED IMAGE RESTORATION The previously described image restoration techniques have treated images as arrays of numbers. They have not considered that a restored natural image should be subject to physical constraints. A restored natural image should be spatially smooth and strictly positive in amplitude.

CONSTRAINED IMAGE RESTORATION

359

Blurred

Restored

bR = bC = 1.2, Var = 10.0, r = 0.75, SNR = 200.0
(a) PLSE = 4.91% (b) PLSE = 3.71%

bR = bC = 50.0, Var = 10.0, r = 0.75, SNR = 200.0
(c) PLSE = 7.99% (d ) PLSE = 4.20%

bR = bC = 50.0, Var = 100.0, r = 0.75, SNR = 60.0 (f ) PLSE = 4.74% (e) PLSE = 7.93%

FIGURE 12.5-2. Wiener estimation for test image blurred with Gaussian-shaped impulse response. M = 8, N = 12, L = 5.

360

POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES

(a) Observation

(b) Restoration

(c) Observation

(d ) Restoration

FIGURE 12.5-3. Wiener image restoration.

12.6.1. Smoothing Methods Smoothing and regularization techniques (33–35) have been used in an attempt to overcome the ill-conditioning problems associated with image restoration. Basically, these methods attempt to force smoothness on the solution of a least-squares error problem. Two formulations of these methods are considered (21). The first formulation T consists of finding the minimum of ˆ Sf subject to the equality constraint f ˆ
ˆ ˆ T [ g – Bf ] M [ g – Bf ] = e

(12.6-1)

where S is a smoothing matrix, M is an error-weighting matrix, and e denotes a residual scalar estimation error. The error-weighting matrix is often chosen to be

CONSTRAINED IMAGE RESTORATION
–1

361

equal to the inverse of the observation noise covariance matrix, M = K n . The Lagrangian estimate satisfying Eq. 12.6-1 is (19)
–1 T – 1 T 1 –1 ˆ f = S B BS B + -- M γ –1

g

(12.6-2)

In Eq. 12.6-2, the Lagrangian factor γ is chosen so that Eq. 12.6-1 is satisfied; that is, the compromise between residual error and smoothness of the estimator is deemed satisfactory. Now consider the second formulation, which involves solving an equality-constrained least-squares problem by minimizing the left-hand side of Eq. 12.6-1 such that
ˆT ˆ f Sf = d

(12.6-3)

where the scalar d represents a fixed degree of smoothing. In this case, the optimal solution for an underdetermined nonsingular system is found to be
ˆ = S –1 B T [ BS – 1 B T + γM– 1 ] –1 g f

(12.6-4)

A comparison of Eqs. 12.6-2 and 12.6-4 reveals that the two inverse problems are solved by the same expression, the only difference being the Lagrange multipliers, which are inverses of one another. The smoothing estimates of Eq. 12.6-4 are closely related to the regression and Wiener estimates derived previously. If γ = 0, –1 S = I and M = K n where K n is the observation noise covariance matrix, then the smoothing and regression estimates become equivalent. Substitution of γ = 1, –1 –1 S = K f and M = K n where K f is the image covariance matrix results in equivalence to the Wiener estimator. These equivalences account for the relative smoothness of the estimates obtained with regression and Wiener restoration as compared to pseudoinverse restoration. A problem that occurs with the smoothing and regularizing techniques is that even though the variance of a solution can be calculated, its bias can only be determined as a function of f. 12.6.2. Constrained Restoration Techniques Equality and inequality constraints have been suggested (21) as a means of improving restoration performance for ill-conditioned restoration models. Examples of constraints include the specification of individual pixel values, of ratios of the values of some pixels, or the sum of part or all of the pixels, or amplitude limits of pixel values. Quite often a priori information is available in the form of inequality constraints involving pixel values. The physics of the image formation process requires that

362

POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES

pixel values be non-negative quantities. Furthermore, an upper bound on these values is often known because images are digitized with a finite number of bits assigned to each pixel. Amplitude constraints are also inherently introduced by the need to “fit” a restored image to the dynamic range of a display. One approach is linearly to rescale the restored image to the display image. This procedure is usually undesirable because only a few out-of-range pixels will cause the contrast of all other pixels to be reduced. Also, the average luminance of a restored image is usually affected by rescaling. Another common display method involves clipping of all pixel values exceeding the display limits. Although this procedure is subjectively preferable to rescaling, bias errors may be introduced. If a priori pixel amplitude limits are established for image restoration, it is best to incorporate these limits directly in the restoration process rather than arbitrarily invoke the limits on the restored image. Several techniques of inequality constrained restoration have been proposed. Consider the general case of constrained restoration in which the vector estimate ˆ f is subject to the inequality constraint l≤ˆ≤u f

(12.6-5)

where u and l are vectors containing upper and lower limits of the pixel estimate, respectively. For least-squares restoration, the quadratic error must be minimized subject to the constraint of Eq. 12.6-5. Under this framework, restoration reduces to the solution of a quadratic programming problem (21). In the case of an absolute error measure, the restoration task can be formulated as a linear programming problem (36,37). The a priori knowledge involving the inequality constraints may substantially reduce pixel uncertainty in the restored image; however, as in the case of equality constraints, an unknown amount of bias may be introduced. Figure 12.6-1 is an example of image restoration for the Gaussian blur model of Chapter 11 by pseudoinverse restoration and with inequality constrained (21) in which the scaled luminance of each pixel of the restored image has been limited to the range of 0 to 255. The improvement obtained by the constraint is substantial. Unfortunately, the quadratic programming solution employed in this example requires a considerable amount of computation. A brute-force extension of the procedure does not appear feasible. Several other methods have been proposed for constrained image restoration. One simple approach, based on the concept of homomorphic filtering, is to take the logarithm of each observation. Exponentiation of the corresponding estimates automatically yields a strictly positive result. Burg (38), Edward and Fitelson (39), and Frieden (6,40,41) have developed restoration methods providing a positivity constraint, which are based on a maximum entropy principle originally employed to estimate a probability density from observation of its moments. Huang et al. (42) have introduced a projection method of constrained image restoration in which the set of equations g = Bf are iteratively solved by numerical means. At each stage of the solution the intermediate estimates are amplitude clipped to conform to amplitude limits.

BLIND IMAGE RESTORATION

363

(a) Blurred observation

(b) Unconstrained restoration

(c) Constrained restoration

FIGURE 12.6-1. Comparison of unconstrained and inequality constrained image restoration for a test image blurred with Gaussian-shaped impulse response. bR = bC = 1.2, M = 12, N = 8, L = 5; noisy observation, Var = 10.0.

12.7. BLIND IMAGE RESTORATION Most image restoration techniques are based on some a priori knowledge of the image degradation; the point luminance and spatial impulse responses of the system degradation are assumed known. In many applications, such information is simply not available. The degradation may be difficult to measure or may be time varying in an unpredictable manner. In such cases, information about the degradation must be extracted from the observed image either explicitly or implicitly. This task is called blind image restoration (5,19,43). Discussion here is limited to blind image restoration methods for blurred images subject to additive noise.

364

POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES

There are two major approaches to blind image restoration: direct measurement and indirect estimation. With the former approach, the blur impulse response and noise level are first estimated from an image to be restored, and then these parameters are utilized in the restoration. Indirect estimation techniques employ temporal or spatial averaging to either obtain a restoration or to determine key elements of a restoration algorithm. 12.7.1. Direct Measurement Methods Direct measurement blind restoration of a blurred noisy image usually requires measurement of the blur impulse response and noise power spectrum or covariance function of the observed image. The blur impulse response is usually measured by isolating the image of a suspected object within a picture. By definition, the blur impulse response is the image of a point-source object. Therefore, a point source in the observed scene yields a direct indication of the impulse response. The image of a suspected sharp edge can also be utilized to derive the blur impulse response. Averaging several parallel line scans normal to the edge will significantly reduce noise effects. The noise covariance function of an observed image can be estimated by measuring the image covariance over a region of relatively constant background luminance. References 5, 44, and 45 provide further details on direct measurement methods. 12.7.2. Indirect Estimation Methods Temporal redundancy of scenes in real-time television systems can be exploited to perform blind restoration indirectly. As an illustration, consider the ith observed image frame
Gi ( x, y ) = FI ( x, y ) + N i ( x, y )

(12.7-1)

of a television system in which F I ( x, y ) is an ideal image and N i ( x, y ) is an additive noise field independent of the ideal image. If the ideal image remains constant over a sequence of M frames, then temporal summation of the observed images yields the relation
1FI ( x, y ) = ---M
M

i=1



1G i ( x, y ) – ---M

i=1



M

N i ( x, y )

(12.7-2)

The value of the noise term on the right will tend toward its ensemble average E { N ( x, y ) } for M large. In the common case of zero-mean white Gaussian noise, the

BLIND IMAGE RESTORATION

365

(a) Noise-free original

(b) Noisy image 1

(c) Noisy image 2

(d ) Temporal average

FIGURE 12.7-1 Temporal averaging of a sequence of eight noisy images. SNR = 10.0.

ensemble average is zero at all (x, y), and it is reasonable to form the estimate as
1 ˆ F I ( x, y ) = ---M

i=1

∑ G i ( x, y )

M

(12.7-3)

Figure 12.7-1 presents a computer-simulated example of temporal averaging of a sequence of noisy images. In this example the original image is unchanged in the sequence. Each image observed is subjected to a different additive random noise pattern. The concept of temporal averaging is also useful for image deblurring. Consider an imaging system in which sequential frames contain a relatively stationary object degraded by a different linear-shift invariant impulse response H i ( x, y ) over each

366

POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES

frame. This type of imaging would be encountered, for example, when photographing distant objects through a turbulent atmosphere if the object does not move significantly between frames. By taking a short exposure at each frame, the atmospheric turbulence is “frozen” in space at each frame interval. For this type of object, the degraded image at the ith frame interval is given by
G i ( x, y ) = F I ( x, y ) * H i ( x, y )

(12.7-4)

for i = 1, 2,..., M. The Fourier spectra of the degraded images are then
G i ( ω x, ω y ) = F I ( ω x, ω y )H i ( ω x, ω y )

(12.7-5)

On taking the logarithm of the degraded image spectra ln { G i ( ω x, ω y ) } = ln { F I ( ω x, ω y ) } + ln { H i ( ω x, ω y ) }

(12.7-6)

the spectra of the ideal image and the degradation transfer function are found to separate additively. It is now possible to apply any of the common methods of statistical estimation of a signal in the presence of additive noise. If the degradation impulse responses are uncorrelated between frames, it is worthwhile to form the sum

i=1



M

ln { G i ( ω x, ω y ) } = M ln { F I ( ω x, ω y ) } +

i=1



M

ln { H i ( ω x, ω y ) }

(12.7-7)

because for large M the latter summation approaches the constant value
H M ( ω x, ω y ) =   lim  ∑ ln { H i ( ω x, ω y ) }  M → ∞ i = 1
M

(12.7-8)

The term H M ( ω x, ω y ) may be viewed as the average logarithm transfer function of the atmospheric turbulence. An image estimate can be expressed as
 H M ( ω x, ω y )  ˆ F I ( ω x, ω y ) = exp  – ----------------------------  M  

i=1



M

[ G i ( ω x, ω y ) ]

1⁄M

(12.7-9)

An inverse Fourier transform then yields the spatial domain estimate. In any practical imaging system, Eq. 12.7-4 must be modified by the addition of a noise component Ni(x, y). This noise component unfortunately invalidates the separation step of Eq. 12.7-6, and therefore destroys the remainder of the derivation. One possible ad hoc solution to this problem would be to perform noise smoothing or filtering on

REFERENCES

367

each observed image field and then utilize the resulting estimates as assumed noiseless observations in Eq. 12.7-9. Alternatively, the blind restoration technique of Stockham et al. (43) developed for nonstationary speech signals may be adapted to the multiple-frame image restoration problem.

REFERENCES
1. D. A. O’Handley and W. B. Green, “Recent Developments in Digital Image Processing at the Image Processing Laboratory at the Jet Propulsion Laboratory,” Proc. IEEE, 60, 7, July 1972, 821–828. 2. M. M. Sondhi, “Image Restoration: The Removal of Spatially Invariant Degradations,” Proc. IEEE, 60, 7, July 1972, 842–853. 3. H. C. Andrews, “Digital Image Restoration: A Survey,” IEEE Computer, 7, 5, May 1974, 36–45. 4. B. R. Hunt, “Digital Image Processing,” Proc. IEEE, 63, 4, April 1975, 693–708. 5. H. C. Andrews and B. R. Hunt, Digital Image Restoration, Prentice Hall, Englewood Cliffs, NJ, 1977. 6. B. R. Frieden, “Image Enhancement and Restoration,” in Picture Processing and Digital Filtering, T. S. Huang, Ed., Springer-Verlag, New York, 1975. 7. T. G. Stockham, Jr., “A–D and D–A Converters: Their Effect on Digital Audio Fidelity,” in Digital Signal Processing, L. R. Rabiner and C. M. Rader, Eds., IEEE Press, New York, 1972, 484–496. 8. A. Marechal, P. Croce, and K. Dietzel, “Amelioration du contrast des details des images photographiques par filtrage des fréquencies spatiales,” Optica Acta, 5, 1958, 256–262. 9. J. Tsujiuchi, “Correction of Optical Images by Compensation of Aberrations and by Spatial Frequency Filtering,” in Progress in Optics, Vol. 2, E. Wolf, Ed., Wiley, New York, 1963, 131–180. 10. J. L. Harris, Sr., “Image Evaluation and Restoration,” J. Optical Society of America, 56, 5, May 1966, 569–574. 11. B. L. McGlamery, “Restoration of Turbulence-Degraded Images,” J. Optical Society of America, 57, 3, March 1967, 293–297. 12. P. F. Mueller and G. O. Reynolds, “Image Restoration by Removal of Random Media Degradations,” J. Optical Society of America, 57, 11, November 1967, 1338–1344. 13. C. W. Helstrom, “Image Restoration by the Method of Least Squares,” J. Optical Society of America, 57, 3, March 1967, 297–303. 14. J. L. Harris, Sr., “Potential and Limitations of Techniques for Processing Linear MotionDegraded Imagery,” in Evaluation of Motion Degraded Images, US Government Printing Office, Washington DC, 1968, 131–138. 15. J. L. Homer, “Optical Spatial Filtering with the Least-Mean-Square-Error Filter,” J. Optical Society of America, 51, 5, May 1969, 553–558. 16. J. L. Homer, “Optical Restoration of Images Blurred by Atmospheric Turbulence Using Optimum Filter Theory,” Applied Optics, 9, 1, January 1970, 167–171. 17. B. L. Lewis and D. J. Sakrison, “Computer Enhancement of Scanning Electron Micrographs,” IEEE Trans. Circuits and Systems, CAS-22, 3, March 1975, 267–278.

368

POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES

18. D. Slepian, “Restoration of Photographs Blurred by Image Motion,” Bell System Technical J., XLVI, 10, December 1967, 2353–2362. 19. E. R. Cole, “The Removal of Unknown Image Blurs by Homomorphic Filtering,” Ph.D. dissertation, Department of Electrical Engineering, University of Utah, Salt Lake City, UT June 1973. 20. B. R. Hunt, “The Application of Constrained Least Squares Estimation to Image Restoration by Digital Computer,” IEEE Trans. Computers, C-23, 9, September 1973, 805– 812. 21. N. D. A. Mascarenhas and W. K. Pratt, “Digital Image Restoration Under a Regression Model,” IEEE Trans. Circuits and Systems, CAS-22, 3, March 1975, 252–266. 22. W. K. Pratt and F. Davarian, “Fast Computational Techniques for Pseudoinverse and Wiener Image Restoration,” IEEE Trans. Computers, C-26, 6, June 1977, 571–580. 23. W. K. Pratt, “Pseudoinverse Image Restoration Computational Algorithms,” in Optical Information Processing Vol. 2, G. W. Stroke, Y. Nesterikhin, and E. S. Barrekette, Eds., Plenum Press, New York, 1977. 24. B. W. Rust and W. R. Burrus, Mathematical Programming and the Numerical Solution of Linear Equations, American Elsevier, New York, 1972. 25. A. Albert, Regression and the Moore–Penrose Pseudoinverse, Academic Press, New York, 1972. 26. H. C. Andrews and C. L. Patterson, “Outer Product Expansions and Their Uses in Digital Image Processing,” American Mathematical. Monthly, 1, 82, January 1975, 1–13. 27. H. C. Andrews and C. L. Patterson, “Outer Product Expansions and Their Uses in Digital Image Processing,” IEEE Trans. Computers, C-25, 2, February 1976, 140–148. 28. T. S. Huang and P. M. Narendra, “Image Restoration by Singular Value Decomposition,” Applied Optics, 14, 9, September 1975, 2213–2216. 29. H. C. Andrews and C. L. Patterson, “Singular Value Decompositions and Digital Image Processing,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-24, 1, February 1976, 26–53. 30. T. O. Lewis and P. L. Odell, Estimation in Linear Models, Prentice Hall, Englewood Cliffs, NJ, 1971. 31. W. K. Pratt, “Generalized Wiener Filter Computation Techniques,” IEEE Trans. Computers, C-21, 7, July 1972, 636–641. 32. A. Papoulis, Probability Random Variables and Stochastic Processes, 3rd Ed., McGrawHill, New York, 1991. 33. S. Twomey, “On the Numerical Solution of Fredholm Integral Equations of the First Kind by the Inversion of the Linear System Produced by Quadrature,” J. Association for Computing Machinery, 10, 1963, 97–101. 34. D. L. Phillips, “A Technique for the Numerical Solution of Certain Integral Equations of the First Kind,” J. Association for Computing Machinery, 9, 1964, 84-97. 35. A. N. Tikonov, “Regularization of Incorrectly Posed Problems,” Soviet Mathematics, 4, 6, 1963, 1624–1627. 36. E. B. Barrett and R. N. Devich, “Linear Programming Compensation for Space-Variant Image Degradation,” Proc. SPIE/OSA Conference on Image Processing, J. C. Urbach, Ed., Pacific Grove, CA, February 1976, 74, 152–158. 37. D. P. MacAdam, “Digital Image Restoration by Constrained Deconvolution,” J. Optical Society of America, 60, 12, December 1970, 1617–1627.

REFERENCES

369

38. J. P. Burg, “Maximum Entropy Spectral Analysis,” 37th Annual Society of Exploration Geophysicists Meeting, Oklahoma City, OK, 1967. 39. J. A. Edward and M. M. Fitelson, “Notes on Maximum Entropy Processing,” IEEE Trans. Information Theory, IT-19, 2, March 1973, 232–234. 40. B. R. Frieden, “Restoring with Maximum Likelihood and Maximum Entropy,” J. Optical Society America, 62, 4, April 1972, 511–518. 41. B. R. Frieden, “Maximum Entropy Restorations of Garrymede,” in Proc. SPIE/OSA Conference on Image Processing, J. C. Urbach, Ed., Pacific Grove, CA, February 1976, 74, 160–165. 42. T. S. Huang, D. S. Baker, and S. P. Berger, “Iterative Image Restoration,” Applied Optics, 14, 5, May 1975, 1165–1168. 43. T. G. Stockham, Jr., T. M. Cannon, and P. B. Ingebretsen, “Blind Deconvolution Through Digital Signal Processing,” Proc. IEEE, 63, 4, April 1975, 678–692. 44. A. Papoulis, “Approximations of Point Spreads for Deconvolution,” J. Optical Society of America, 62, 1, January 1972, 77–80. 45. B. Tatian, “Asymptotic Expansions for Correcting Truncation Error in Transfer-Function Calculations,” J. Optical Society of America, 61, 9, September 1971, 1214–1224.

Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic)

13
GEOMETRICAL IMAGE MODIFICATION

One of the most common image processing operations is geometrical modification in which an image is spatially translated, scaled, rotated, nonlinearly warped, or viewed from a different perspective.

13.1. TRANSLATION, MINIFICATION, MAGNIFICATION, AND ROTATION Image translation, scaling, and rotation can be analyzed from a unified standpoint. Let G ( j, k ) for 1 ≤ j ≤ J and 1 ≤ k ≤ K denote a discrete output image that is created by geometrical modification of a discrete input image F ( p, q ) for 1 ≤ p ≤ P and 1 ≤ q ≤ Q . In this derivation, the input and output images may be different in size. Geometrical image transformations are usually based on a Cartesian coordinate system representation in which the origin ( 0, 0 ) is the lower left corner of an image, while for a discrete image, typically, the upper left corner unit dimension pixel at indices (1, 1) serves as the address origin. The relationships between the Cartesian coordinate representations and the discrete image arrays of the input and output images are illustrated in Figure 13.1-1. The output image array indices are related to their Cartesian coordinates by
-xk = k – 1 2

(13.1-1a) (13.1-1b)

-yk = J + 1 – j 2

371

372

GEOMETRICAL IMAGE MODIFICATION

FIGURE 13.1-1. Relationship between discrete image array and Cartesian coordinate representation.

Similarly, the input array relationship is given by
-uq = q – 1 2 -vp = P + 1 – p 2

(13.1-2a) (13.1-2b)

13.1.1. Translation Translation of F ( p, q ) with respect to its Cartesian origin to produce G ( j, k ) involves the computation of the relative offset addresses of the two images. The translation address relationships are x k = uq + tx yj = vp + ty

(13.1-3a) (13.1-3b)

where t x and ty are translation offset constants. There are two approaches to this computation for discrete images: forward and reverse address computation. In the forward approach, u q and v p are computed for each input pixel ( p, q ) and

TRANSLATION, MINIFICATION, MAGNIFICATION, AND ROTATION

373

substituted into Eq. 13.1-3 to obtain x k and y j . Next, the output array addresses ( j, k ) are computed by inverting Eq. 13.1-1. The composite computation reduces to j′ = p – ( P – J ) – t y k′ = q + tx

(13.1-4a) (13.1-4b)

where the prime superscripts denote that j′ and k′ are not integers unless tx and t y are integers. If j′ and k′ are rounded to their nearest integer values, data voids can occur in the output image. The reverse computation approach involves calculation of the input image addresses for integer output image addresses. The composite address computation becomes p′ = j + ( P – J ) + ty q′ = k – t x

(13.1-5a) (13.1-5b)

where again, the prime superscripts indicate that p′ and q′ are not necessarily integers. If they are not integers, it becomes necessary to interpolate pixel amplitudes of ˆ F ( p, q ) to generate a resampled pixel estimate F ( p, q ), which is transferred to G ( j, k ). The geometrical resampling process is discussed in Section 13.5. 13.1.2. Scaling Spatial size scaling of an image can be obtained by modifying the Cartesian coordinates of the input image according to the relations xk = sx uq yj = sy vp

(13.1-6a) (13.1-6b)

where s x and s y are positive-valued scaling constants, but not necessarily integer valued. If s x and s y are each greater than unity, the address computation of Eq. 13.1-6 will lead to magnification. Conversely, if s x and s y are each less than unity, minification results. The reverse address relations for the input image address are found to be
--p′ = ( 1 ⁄ s y ) ( j + J – 1 ) + P + 1 2 2 --q′ = ( 1 ⁄ s x ) ( k – 1 ) + 1 2 2

(13.1-7a) (13.1-7b)

374

GEOMETRICAL IMAGE MODIFICATION

As with generalized translation, it is necessary to interpolate F ( p, q ) to obtain G ( j, k ) . 13.1.3. Rotation Rotation of an input image about its Cartesian origin can be accomplished by the address computation x k = u q cos θ – v p sin θ y j = u q sin θ + v p cos θ

(13.1-8a) (13.1-8b)

where θ is the counterclockwise angle of rotation with respect to the horizontal axis of the input image. Again, interpolation is required to obtain G ( j, k ) . Rotation of an input image about an arbitrary pivot point can be accomplished by translating the origin of the image to the pivot point, performing the rotation, and then translating back by the first translation offset. Equation 13.1-8 must be inverted and substitutions made for the Cartesian coordinates in terms of the array indices in order to obtain the reverse address indices ( p′, q′ ). This task is straightforward but results in a messy expression. A more elegant approach is to formulate the address computation as a vector-space manipulation. 13.1.4. Generalized Linear Geometrical Transformations The vector-space representations for translation, scaling, and rotation are given below. Translation: xk yj = uq vp + tx ty

(13.1-9)

Scaling: xk yj = sx 0 0 sy uq vp

(13.1-10)

Rotation: xk yj = cos θ – sin θ sin θ cos θ uq vp

(13.1-11)

TRANSLATION, MINIFICATION, MAGNIFICATION, AND ROTATION

375

Now, consider a compound geometrical modification consisting of translation, followed by scaling followed by rotation. The address computations for this compound operation can be expressed as xk yj cos θ – sin θ sin θ cos θ sx 0 0 sy uq vp cos θ – sin θ sin θ cos θ sx 0 0 sy tx ty

=

+

(13.1-12a)

or upon consolidation

xk yj

=

s x cos θ – s y sin θ s x sin θ s y cos θ

uq vp

+

s x t x cos θ – s t sin θ y y s x t x sin θ + s y t y cos θ

(13.1-12b)

Equation 13.1-12b is, of course, linear. It can be expressed as xk yj c0 c 1 d0 d 1 uq vp c2 d2

=

+

(13.1-13a)

in one-to-one correspondence with Eq. 13.1-12b. Equation 13.1-13a can be rewritten in the more compact form

xk yj

=

c0 d0

c1 d1

c2 d2

uq vp 1

(13.1-13b)

As a consequence, the three address calculations can be obtained as a single linear address computation. It should be noted, however, that the three address calculations are not commutative. Performing rotation followed by minification followed by translation results in a mathematical transformation different than Eq. 13.1-12. The overall results can be made identical by proper choice of the individual transformation parameters. To obtain the reverse address calculation, it is necessary to invert Eq. 13.1-13b to solve for ( u q, v p ) in terms of ( x k, y j ). Because the matrix in Eq. 13.1-13b is not square, it does not possess an inverse. Although it is possible to obtain ( u q, v p ) by a pseudoinverse operation, it is convenient to augment the rectangular matrix as follows:

376

GEOMETRICAL IMAGE MODIFICATION

xk yj 1 =

c0 d0 0

c1 d1 0

c2 d2 1

uq vp 1

(13.1-14)

This three-dimensional vector representation of a two-dimensional vector is a special case of a homogeneous coordinates representation (1–3). The use of homogeneous coordinates enables a simple formulation of concatenated operators. For example, consider the rotation of an image by an angle θ about a pivot point ( x c, y c ) in the image. This can be accomplished by xk yj 1 = 1 0 xc 0 1 yc 0 0 1 cos θ sin θ 0 – sin θ 0 cos θ 0 0 1 1 0 0 0 –xc 1 0 –yc 1 uq vp 1

(13.1-15)

which reduces to a single 3 × 3 transformation: xk yj 1 = cos θ sin θ 0 – sin θ cos θ 0 – x c cos θ + y c sin θ + x c – x c sin θ – y c cos θ + y c 1 uq vp 1

(13.1-16)

The reverse address computation for the special case of Eq. 13.1-16, or the more general case of Eq. 13.1-13, can be obtained by inverting the 3 × 3 transformation matrices by numerical methods. Another approach, which is more computationally efficient, is to initially develop the homogeneous transformation matrix in reverse order as uq vp 1 = a0 a1 a2 b0 b1 b 2 0 0 1 xk yj 1

(13.1-17)

where for translation a0 = 1 a1 = 0 a2 = – tx b0 = 0 b1 = 1 b 2 = –ty

(13.1-18a) (13.1-18b) (13.1-18c) (13.1-18d) (13.1-18e) (13.1-18f)

TRANSLATION, MINIFICATION, MAGNIFICATION, AND ROTATION

377

and for scaling a 0 = 1 ⁄ sx a1 = 0 a2 = 0 b0 = 0 b 1 = 1 ⁄ sy b2 = 0

(13.1-19a) (13.1-19b) (13.1-19c) (13.1-19d) (13.1-19e) (13.1-19f)

and for rotation a 0 = cos θ a 1 = sin θ a2 = 0 b 0 = – sin θ b 1 = cos θ b2 = 0

(13.1-20a) (13.1-20b) (13.1-20c) (13.1-20d) (13.1-20e) (13.1-20f)

Address computation for a rectangular destination array G ( j, k ) from a rectangular source array F ( p, q ) of the same size results in two types of ambiguity: some pixels of F ( p, q ) will map outside of G ( j, k ); and some pixels of G ( j, k ) will not be mappable from F ( p, q ) because they will lie outside its limits. As an example, Figure 13.1-2 illustrates rotation of an image by 45° about its center. If the desire of the mapping is to produce a complete destination array G ( j, k ) , it is necessary to access a sufficiently large source image F ( p, q ) to prevent mapping voids in G ( j, k ) . This is accomplished in Figure 13.1-2d by embedding the original image of Figure 13.1-2a in a zero background that is sufficiently large to encompass the rotated original. 13.1.5. Affine Transformation The geometrical operations of translation, size scaling, and rotation are special cases of a geometrical operator called an affine transformation. It is defined by Eq. 13.1-13b, in which the constants ci and di are general weighting factors. The affine transformation is not only useful as a generalization of translation, scaling, and rotation. It provides a means of image shearing in which the rows or columns are successively uniformly translated with respect to one another. Figure 13.1-3

378

GEOMETRICAL IMAGE MODIFICATION

(a) Original, 500 × 500

(b) Rotated, 500 × 500

(c) Original, 708 × 708

(d) Rotated, 708 × 708

FIGURE 13.1-2. Image rotation by 45° on the washington_ir image about its center.

illustrates image shearing of rows of an image. In this example, c 0 = d 1 = 1.0 , c 1 = 0.1, d 0 = 0.0, and c 2 = d 2 = 0.0.

13.1.6. Separable Translation, Scaling, and Rotation The address mapping computations for translation and scaling are separable in the sense that the horizontal output image coordinate xk depends only on uq, and yj depends only on vp. Consequently, it is possible to perform these operations separably in two passes. In the first pass, a one-dimensional address translation is performed independently on each row of an input image to produce an intermediate array I ( p, k ). In the second pass, columns of the intermediate array are processed independently to produce the final result G ( j, k ).

TRANSLATION, MINIFICATION, MAGNIFICATION, AND ROTATION

379

(a) Original

(b) Sheared

FIGURE 13.1-3. Horizontal image shearing on the washington_ir image.

Referring to Eq. 13.1-8, it is observed that the address computation for rotation is of a form such that xk is a function of both uq and vp; and similarly for yj. One might then conclude that rotation cannot be achieved by separable row and column processing, but Catmull and Smith (4) have demonstrated otherwise. In the first pass of the Catmull and Smith procedure, each row of F ( p, q ) is mapped into the corresponding row of the intermediate array I ( p, k ) using the standard row address computation of Eq. 13.1-8a. Thus x k = u q cos θ – v p sin θ

(13.1-21)

Then, each column of I ( p, k ) is processed to obtain the corresponding column of G ( j, k ) using the address computation x k sin θ + v p y j = ---------------------------cos θ

(13.1-22)

Substitution of Eq. 13.1-21 into Eq. 13.1-22 yields the proper composite y-axis transformation of Eq. 13.1-8b. The “secret” of this separable rotation procedure is the ability to invert Eq. 13.1-21 to obtain an analytic expression for uq in terms of xk. In this case, x k + v p sin θ u q = --------------------------cos θ

(13.1-23)

when substituted into Eq. 13.1-21, gives the intermediate column warping function of Eq. 13.1-22.

380

GEOMETRICAL IMAGE MODIFICATION

The Catmull and Smith two-pass algorithm can be expressed in vector-space form as xk yj 1 = 0 cos θ 0 – sin θ 1 uq vp

1 tan θ ----------cos θ

(13.1-24)

The separable processing procedure must be used with caution. In the special case of a rotation of 90°, all of the rows of F ( p, q ) are mapped into a single column of I ( p, k ) , and hence the second pass cannot be executed. This problem can be avoided by processing the columns of F ( p, q ) in the first pass. In general, the best overall results are obtained by minimizing the amount of spatial pixel movement. For example, if the rotation angle is + 80°, the original should be rotated by +90° by conventional row–column swapping methods, and then that intermediate image should be rotated by –10° using the separable method. Figure 13.14 provides an example of separable rotation of an image by 45°. Figure 13.l-4a is the original, Figure 13.1-4b shows the result of the first pass and Figure 13.1-4c presents the final result.

(a) Original

(b) First-pass result

(c) Second-pass result

FIGURE 13.1-4. Separable two-pass image rotation on the washington_ir image.

TRANSLATION, MINIFICATION, MAGNIFICATION, AND ROTATION

381

Separable, two-pass rotation offers the advantage of simpler computation compared to one-pass rotation, but there are some disadvantages to two-pass rotation. Two-pass rotation causes loss of high spatial frequencies of an image because of the intermediate scaling step (5), as seen in Figure 13.1-4b. Also, there is the potential of increased aliasing error (5,6), as discussed in Section 13.5. Several authors (5,7,8) have proposed a three-pass rotation procedure in which there is no scaling step and hence no loss of high-spatial-frequency content with proper interpolation. The vector-space representation of this procedure is given by xk yj = 1 0 – tan ( θ ⁄ 2 ) 1 1 sin θ 0 1 1 0 – tan ( θ ⁄ 2 ) 1 uq vp

(13.1-25)

This transformation is a series of image shearing operations without scaling. Figure 13.1-5 illustrates three-pass rotation for rotation by 45°.

(a) Original

(b) First-pass result

(c) Second-pass result

(d) Third-pass result

FIGURE 13.1-5. Separable three-pass image rotation on the washington_ir image.

382

GEOMETRICAL IMAGE MODIFICATION

13.2 SPATIAL WARPING The address computation procedures described in the preceding section can be extended to provide nonlinear spatial warping of an image. In the literature, this process is often called rubber-sheet stretching (9,10). Let x = X ( u, v ) y = Y ( u, v )

(13.2-1a) (13.2-1b)

denote the generalized forward address mapping functions from an input image to an output image. The corresponding generalized reverse address mapping functions are given by u = U ( x, y ) v = V ( x, y )

(13.2-2a) (13.2-2b)

For notational simplicity, the ( j, k ) and ( p, q ) subscripts have been dropped from these and subsequent expressions. Consideration is given next to some examples and applications of spatial warping. 13.2.1. Polynomial Warping The reverse address computation procedure given by the linear mapping of Eq. 13.1-17 can be extended to higher dimensions. A second-order polynomial warp address mapping can be expressed as u = a 0 + a 1 x + a 2 y + a 3 x + a 4 xy + a5 y v = b0 + b 1 x + b 2 y + b 3 x + b 4 xy + b 5 y
2 2 2

(13.2-3a) (13.2-3b)

2

In vector notation, u v = a0 b0 a1 b1 a2 b2 a3 b3 a4 b4 a5 b5 1 x y x xy y
2 2

(13.2-3c)

For first-order address mapping, the weighting coefficients ( a i, b i ) can easily be related to the physical mapping as described in Section 13.1. There is no simple physical

SPATIAL WARPING

383

FIGURE 13.2-1. Geometric distortion.

counterpart for second address mapping. Typically, second-order and higher-order address mapping are performed to compensate for spatial distortion caused by a physical imaging system. For example, Figure 13.2-1 illustrates the effects of imaging a rectangular grid with an electronic camera that is subject to nonlinear pincushion or barrel distortion. Figure 13.2-2 presents a generalization of the problem. An ideal image F ( j, k ) is subject to an unknown physical spatial distortion. The observed image is measured over a rectangular array O ( p, q ). The objective is to ˆ perform a spatial correction warp to produce a corrected image array F ( j, k ) . Assume that the address mapping from the ideal image space to the observation space is given by u = O u { x, y } v = O v { x, y }

(13.2-4a) (13.2-4b)

FIGURE 13.2-2. Spatial warping concept.

384

GEOMETRICAL IMAGE MODIFICATION

where Ou { x, y } and O v { x, y } are physical mapping functions. If these mapping functions are known, then Eq. 13.2-4 can, in principle, be inverted to obtain the proper corrective spatial warp mapping. If the physical mapping functions are not known, Eq. 13.2-3 can be considered as an estimate of the physical mapping functions based on the weighting coefficients ( a i, b i ) . These polynomial weighting coefficients are normally chosen to minimize the mean-square error between a set of observation coordinates ( u m, v m ) and the polynomial estimates ( u, v ) for a set ( 1 ≤ m ≤ M ) of known data points ( x m, y m ) called control points. It is convenient to arrange the observation space coordinates into the vectors u = [ u 1, u 2, …, u M ] v = [ v 1, v 2, …, v M ]
T T

(13.2-5a) (13.2-5b)

Similarly, let the second-order polynomial coefficients be expressed in vector form as a = [ a 0, a 1, …, a 5 ] b = [ b 0, b 1, …, b 5 ]
T T

(13.2-6a) (13.2-6b)

The mean-square estimation error can be expressed in the compact form
E = ( u – Aa ) ( u – Aa ) + ( v – Ab ) ( v – Ab )
T T

(13.2-7)

where
1 A = 1 x1 x2 y1 y2 x1 x2
2 2

x1 y1 x2 y2

y1 y2
2

2

(13.2-8)

1

xM

yM

xM xM yM yM

2

2

From Appendix 1, it has been determined that the error will be minimum if a = A u b = A v
– –

(13.2-9a) (13.2-9b)

where A– is the generalized inverse of A. If the number of control points is chosen greater than the number of polynomial coefficients, then
A


= [A A] A

T

–1

(13.2-10)

SPATIAL WARPING

385

(a) Source control points

(b) Destination control points

(c) Warped

FIGURE 13.2-3. Second-order polynomial spatial warping on the mandrill_mon image.

provided that the control points are not linearly related. Following this procedure, the polynomial coefficients ( a i, b i ) can easily be computed, and the address mapping of Eq. 13.2-1 can be obtained for all ( j, k ) pixels in the corrected image. Of course, proper interpolation is necessary. Equation 13.2-3 can be extended to provide a higher-order approximation to the physical mapping of Eq. 13.2-3. However, practical problems arise in computing the pseudoinverse accurately for higher-order polynomials. For most applications, second-order polynomial computation suffices. Figure 13.2-3 presents an example of second-order polynomial warping of an image. In this example, the mapping of control points is indicated by the graphics overlay.

386

GEOMETRICAL IMAGE MODIFICATION

FIGURE 13.3-1. Basic imaging system model.

13.3. PERSPECTIVE TRANSFORMATION Most two-dimensional images are views of three-dimensional scenes from the physical perspective of a camera imaging the scene. It is often desirable to modify an observed image so as to simulate an alternative viewpoint. This can be accomplished by use of a perspective transformation. Figure 13.3-1 shows a simple model of an imaging system that projects points of light in three-dimensional object space to points of light in a two-dimensional image plane through a lens focused for distant objects. Let ( X, Y, Z ) be the continuous domain coordinate of an object point in the scene, and let ( x, y ) be the continuous domain-projected coordinate in the image plane. The image plane is assumed to be at the center of the coordinate system. The lens is located at a distance f to the right of the image plane, where f is the focal length of the lens. By use of similar triangles, it is easy to establish that fX x = ---------f–Z fY y = ---------f–Z

(13.3-1a) (13.3-1b)

Thus the projected point ( x, y ) is related nonlinearly to the object point ( X, Y, Z ) . This relationship can be simplified by utilization of homogeneous coordinates, as introduced to the image processing community by Roberts (1). Let v = X Y Z

(13.3-2)

PERSPECTIVE TRANSFORMATION

387

˜ be a vector containing the object point coordinates. The homogeneous vector v corresponding to v is sX sY sZ s

˜ v =

(13.3-3)

where s is a scaling constant. The Cartesian vector v can be generated from the ˜ homogeneous vector v by dividing each of the first three components by the fourth. The utility of this representation will soon become evident. Consider the following perspective transformation matrix:
1 0 0 0 0 1 0 0 0 0 1 –1 ⁄ f 0 0 0 1

P =

(13.3-4)

This is a modification of the Roberts (1) definition to account for a different labeling of the axes and the use of column rather than row vectors. Forming the vector product
˜ ˜ w = Pv

(13.3-5a)

yields sX sY sZ s – sZ ⁄ f

˜ w =

(13.3-5b)

˜ The corresponding image plane coordinates are obtained by normalization of w to obtain

fX ---------f–Z w = fY ---------f–Z fZ ---------f–Z

(13.3-6)

388

GEOMETRICAL IMAGE MODIFICATION

It should be observed that the first two elements of w correspond to the imaging relationships of Eq. 13.3-1. It is possible to project a specific image point ( x i, y i ) back into three-dimensional object space through an inverse perspective transformation
–1 ˜ ˜ v = P w

(13.3-7a)

where
1 0 0 0 0 1 0 0 0 0 1 1⁄f 0 0 0 1

P

–1

=

(13.3-7b)

and sx i ˜ w = sy i sz i s

(13.3-7c)

In Eq. 13.3-7c, z i is regarded as a free variable. Performing the inverse perspective transformation yields the homogeneous vector

sx i ˜ w = sy i sz i s + sz i ⁄ f

(13.3-8)

The corresponding Cartesian coordinate vector is

fxi ---------f – zi w = fyi ---------f – zi fz i ---------f – zi

(13.3-9)

or equivalently,

CAMERA IMAGING MODEL

389

fx i x = ---------f – zi fyi y = ---------f – zi fzi z = ---------f – zi

(13.3-10a)

(13.3-10b)

(13.3-10c)

Equation 13.3-10 illustrates the many-to-one nature of the perspective transformation. Choosing various values of the free variable z i results in various solutions for ( X, Y, Z ), all of which lie along a line from ( x i, y i ) in the image plane through the lens center. Solving for the free variable z i in Eq. 13.3-l0c and substituting into Eqs. 13.3-10a and 13.3-10b gives x X = ---i ( f – Z ) f y Y = ---i ( f – Z ) f

(13.3-11a) (13.3-11b)

The meaning of this result is that because of the nature of the many-to-one perspective transformation, it is necessary to specify one of the object coordinates, say Z, in order to determine the other two from the image plane coordinates ( x i, y i ). Practical utilization of the perspective transformation is considered in the next section.

13.4. CAMERA IMAGING MODEL The imaging model utilized in the preceding section to derive the perspective transformation assumed, for notational simplicity, that the center of the image plane was coincident with the center of the world reference coordinate system. In this section, the imaging model is generalized to handle physical cameras used in practical imaging geometries (11). This leads to two important results: a derivation of the fundamental relationship between an object and image point; and a means of changing a camera perspective by digital image processing. Figure 13.4-1 shows an electronic camera in world coordinate space. This camera is physically supported by a gimbal that permits panning about an angle θ (horizontal movement in this geometry) and tilting about an angle φ (vertical movement). The gimbal center is at the coordinate ( X G, Y G, Z G ) in the world coordinate system. The gimbal center and image plane center are offset by a vector with coordinates ( X o, Y o, Z o ).

390

GEOMETRICAL IMAGE MODIFICATION

FIGURE 13.4-1. Camera imaging model.

If the camera were to be located at the center of the world coordinate origin, not panned nor tilted with respect to the reference axes, and if the camera image plane was not offset with respect to the gimbal, the homogeneous image model would be as derived in Section 13.3; that is
˜ ˜ w = Pv

(13.4-1)

˜ ˜ where v is the homogeneous vector of the world coordinates of an object point, w is the homogeneous vector of the image plane coordinates, and P is the perspective transformation matrix defined by Eq. 13.3-4. The camera imaging model can easily be derived by modifying Eq. 13.4-1 sequentially using a three-dimensional extension of translation and rotation concepts presented in Section 13.1. The offset of the camera to location ( XG, YG, ZG ) can be accommodated by the translation operation ˜ ˜ w = PT G v

(13.4-2)

where
1 0 TG = 0 –XG

0 1 0 –Y G 0 0 1 –Z G 0 0 0 1

(13.4-3)

CAMERA IMAGING MODEL

391

Pan and tilt are modeled by a rotation transformation
˜ ˜ w = PRT G v

(13.4-4)

where R = R φ R θ and cos θ – sin θ sin θ cos θ 0 0 0 0 0 0 1 0 0 0 0 1

Rθ =

(13.4-5)

and
1 0 0 0 0 0 cos φ – sin φ sin φ cos φ 0 0 0 0 0 1

Rφ =

(13.4-6)

The composite rotation matrix then becomes

R =

cos θ cos φ sin θ sin φ sin θ 0

– sin θ 0 cos φ cos θ – sin φ sin φ cos θ cos φ 0 0

0 0 0 1

(13.4-7)

Finally, the camera-to-gimbal offset is modeled as
˜ ˜ w = PT C RT G v

(13.4-8)

where

1 TC = 0 0 0

0 1 0 0

0 0 1 0

–Xo –Yo –Zo 1

(13.4-9)

392

GEOMETRICAL IMAGE MODIFICATION

Equation 13.4-8 is the final result giving the complete camera imaging model transformation between an object and an image point. The explicit relationship between an object point ( X, Y, Z ) and its image plane projection ( x, y ) can be obtained by performing the matrix multiplications analytically and then forming the Cartesian ˜ coordinates by dividing the first two components of w by the fourth. Upon performing these operations, one obtains f [ ( X – X G ) cos θ – ( Y – Y G ) sin θ – X 0 ] x = -------------------------------------------------------------------------------------------------------------------------------------------------------------------– ( X – X G ) sin θ sin φ – ( Y – Y G ) cos θ sin φ – ( Z – Z G ) cos φ + Z 0 + f f [ ( X – XG ) sin θ cos φ + ( Y – Y G ) cos θ cos φ – ( Z – Z G ) sin φ – Y 0 ] y = -----------------------------------------------------------------------------------------------------------------------------------------------------------------– ( X – X G ) sin θ sin φ – ( Y – Y G ) cosθ sin φ – ( Z – Z G ) cos φ + Z 0 + f

(13.4-10a)

(13.4-10b)

Equation 13.4-10 can be used to predict the spatial extent of the image of a physical scene on an imaging sensor. Another important application of the camera imaging model is to form an image by postprocessing such that the image appears to have been taken by a camera at a ˜ ˜ different physical perspective. Suppose that two images defined by w 1 and w 2 are formed by taking two views of the same object with the same camera. The resulting camera model relationships are then
˜ ˜ w 1 = PT C R 1 T G1 v ˜ ˜ w 2 = PT C R 2 T G2 v

(13.4-11a) (13.4-11b)

Because the camera is identical for the two images, the matrices P and TC are invariant in Eq. 13.4-11. It is now possible to perform an inverse computation of Eq. 13.4-11b to obtain
–1 –1 –1 –1 ˜ ˜ v = [ TG1 ] [ R 1 ] [ TC ] [ P ] w 1

(13.4-12)

and by substitution into Eq. 13.4-11b, it is possible to relate the image plane coordinates of the image of the second view to that obtained in the first view. Thus
–1 –1 –1 –1 ˜ ˜ w 2 = PT C R 2 TG2 [ T G1 ] [ R 1 ] [ T C ] [ P ] w 1

(13.4-13)

As a consequence, an artificial image of the second view can be generated by performing the matrix multiplications of Eq. 13.4-13 mathematically on the physical image of the first view. Does this always work? No, there are limitations. First, if some portion of a physical scene were not “seen” by the physical camera, perhaps it

GEOMETRICAL IMAGE RESAMPLING

393

was occluded by structures within the scene, then no amount of processing will recreate the missing data. Second, the processed image may suffer severe degradations resulting from undersampling if the two camera aspects are radically different. Nevertheless, this technique has valuable applications.

13.5. GEOMETRICAL IMAGE RESAMPLING As noted in the preceding sections of this chapter, the reverse address computation process usually results in an address result lying between known pixel values of an input image. Thus it is necessary to estimate the unknown pixel amplitude from its known neighbors. This process is related to the image reconstruction task, as described in Chapter 4, in which a space-continuous display is generated from an array of image samples. However, the geometrical resampling process is usually not spatially regular. Furthermore, the process is discrete to discrete; only one output pixel is produced for each input address. In this section, consideration is given to the general geometrical resampling process in which output pixels are estimated by interpolation of input pixels. The special, but common case, of image magnification by an integer zooming factor is also discussed. In this case, it is possible to perform pixel estimation by convolution. 13.5.1. Interpolation Methods The simplest form of resampling interpolation is to choose the amplitude of an output image pixel to be the amplitude of the input pixel nearest to the reverse address. This process, called nearest-neighbor interpolation, can result in a spatial offset error by as much as 1 ⁄ 2 pixel units. The resampling interpolation error can be significantly reduced by utilizing all four nearest neighbors in the interpolation. A common approach, called bilinear interpolation, is to interpolate linearly along each row of an image and then interpolate that result linearly in the columnar direction. Figure 13.5-1 illustrates the process. The estimated pixel is easily found to be
F̂(p′, q′) = (1 − a)[(1 − b) F(p, q) + b F(p, q + 1)] + a[(1 − b) F(p + 1, q) + b F(p + 1, q + 1)]   (13.5-1)
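The computation of Eq. 13.5-1 can be sketched directly; the function name below is illustrative, and the sketch assumes the reverse address lies at least one pixel inside the array so that no border handling is needed.

```python
import numpy as np

def bilinear_resample(F, p_prime, q_prime):
    """Estimate the pixel at the fractional reverse address (p', q') per Eq. 13.5-1.

    F is a 2-D array; p' indexes rows and q' indexes columns, as in Figure 13.5-1.
    """
    p, q = int(np.floor(p_prime)), int(np.floor(q_prime))
    a, b = p_prime - p, q_prime - q          # fractional offsets, 0 <= a, b < 1
    return ((1 - a) * ((1 - b) * F[p, q] + b * F[p, q + 1])
            + a * ((1 - b) * F[p + 1, q] + b * F[p + 1, q + 1]))
```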

Although the horizontal and vertical interpolation operations are each linear, in general, their sequential application results in a nonlinear surface fit between the four neighboring pixels. The expression for bilinear interpolation of Eq. 13.5-1 can be generalized for any interpolation function R { x } that is zero-valued outside the range of ± 1 sample spacing. With this generalization, interpolation can be considered as the summing of four weighted interpolation functions as given by

FIGURE 13.5-1. Bilinear interpolation.

F̂(p′, q′) = F(p, q) R{−a} R{b} + F(p, q + 1) R{−a} R{−(1 − b)}
          + F(p + 1, q) R{1 − a} R{b} + F(p + 1, q + 1) R{1 − a} R{−(1 − b)}   (13.5-2)

In the special case of linear interpolation, R{x} = R1{x}, where R1{x} is defined in Eq. 4.3-2. Making this substitution, it is found that Eq. 13.5-2 is equivalent to the bilinear interpolation expression of Eq. 13.5-1. Typically, for reasons of computational complexity, resampling interpolation is limited to a 4 × 4 pixel neighborhood. Figure 13.5-2 defines a generalized bicubic interpolation neighborhood in which the pixel F(p, q) is the nearest neighbor to the pixel to be interpolated. The interpolated pixel may be expressed in the compact form
F̂(p′, q′) = ∑_{m = −1}^{2} ∑_{n = −1}^{2} F(p + m, q + n) R_C{(m − a)} R_C{−(n − b)}   (13.5-3)

where R_C{x} denotes a bicubic interpolation function such as a cubic B-spline or cubic interpolation function, as defined in Section 4.3-2.

13.5.2. Convolution Methods

When an image is to be magnified by an integer zoom factor, pixel estimation can be implemented efficiently by convolution (12). As an example, consider image magnification by a factor of 2:1. This operation can be accomplished in two stages. First, the input image is transferred to an array in which rows and columns of zeros are interleaved with the input image data as follows:


FIGURE 13.5-2. Bicubic interpolation: the 4 × 4 neighborhood F(p − 1, q − 1) through F(p + 2, q + 2) surrounding the interpolated pixel F̂(p′, q′).

FIGURE 13.5-3. Interpolation kernels for 2:1 magnification.


FIGURE 13.5-4. Image interpolation on the mandrill_mon image for 2:1 magnification: (a) original; (b) zero interleaved quadrant; (c) peg; (d) pyramid; (e) bell; (f) cubic B-spline.


input image neighborhood:

A B
C D

zero-interleaved neighborhood:

A 0 B
0 0 0
C 0 D

Next, the zero-interleaved neighborhood image is convolved with one of the discrete interpolation kernels listed in Figure 13.5-3. Figure 13.5-4 presents the magnification results for several interpolation kernels. The inevitable visual trade-off between the interpolation error (the jaggy line artifacts) and the loss of high spatial frequency detail in the image is apparent from the examples. This discrete convolution operation can easily be extended to higher-order magnification factors. For N:1 magnification, the core kernel is an N × N peg array. For large kernels, it may be more computationally efficient in many cases to perform the interpolation indirectly by Fourier domain filtering rather than by convolution (6).
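A sketch of the two-stage 2:1 magnification follows. Because the kernels of Figure 13.5-3 are not reproduced here, the pyramid (bilinear) kernel shown is an assumed standard choice; border samples are handled by zero padding, and the function name is illustrative.

```python
import numpy as np

# Pyramid (bilinear) interpolation kernel for 2:1 magnification; a peg kernel
# would be np.ones((2, 2)), and a cubic B-spline kernel spans more samples.
PYRAMID = np.array([[1, 2, 1],
                    [2, 4, 2],
                    [1, 2, 1]], dtype=float) / 4.0

def zoom2x(F, kernel=PYRAMID):
    """Magnify F by 2:1 -- zero-interleave, then convolve with an interpolation kernel."""
    M, N = F.shape
    G = np.zeros((2 * M, 2 * N), dtype=float)
    G[::2, ::2] = F                       # rows and columns of zeros interleaved with the data
    pad = kernel.shape[0] // 2
    Gp = np.pad(G, pad)                   # zero padding at the borders
    out = np.zeros_like(G)
    for i in range(kernel.shape[0]):      # direct convolution, adequate for a small kernel
        for j in range(kernel.shape[1]):
            out += kernel[i, j] * Gp[i:i + 2 * M, j:j + 2 * N]
    return out
```

With this kernel, original samples are reproduced exactly at even output addresses, and the interleaved zeros are replaced by averages of their nearest original neighbors.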

REFERENCES
1. L. G. Roberts, “Machine Perception of Three-Dimensional Solids,” in Optical and Electro-Optical Information Processing, J. T. Tippett et al., Eds., MIT Press, Cambridge, MA, 1965.
2. D. F. Rogers, Mathematical Elements for Computer Graphics, 2nd ed., McGraw-Hill, New York, 1989.
3. J. D. Foley et al., Computer Graphics: Principles and Practice, 2nd ed. in C, Addison-Wesley, Reading, MA, 1996.
4. E. Catmull and A. R. Smith, “3-D Transformation of Images in Scanline Order,” Computer Graphics, SIGGRAPH '80 Proc., 14, 3, July 1980, 279–285.
5. M. Unser, P. Thevenaz, and L. Yaroslavsky, “Convolution-Based Interpolation for Fast, High-Quality Rotation of Images,” IEEE Trans. Image Processing, IP-4, 10, October 1995, 1371–1381.
6. D. Fraser and R. A. Schowengerdt, “Avoidance of Additional Aliasing in Multipass Image Rotations,” IEEE Trans. Image Processing, IP-3, 6, November 1994, 721–735.
7. A. W. Paeth, “A Fast Algorithm for General Raster Rotation,” in Proc. Graphics Interface '86–Vision Interface, 1986, 77–81.
8. P. E. Danielson and M. Hammerin, “High Accuracy Rotation of Images,” CVGIP: Graphical Models and Image Processing, 54, 4, July 1992, 340–344.
9. R. Bernstein, “Digital Image Processing of Earth Observation Sensor Data,” IBM J. Research and Development, 20, 1, 1976, 40–56.
10. D. A. O’Handley and W. B. Green, “Recent Developments in Digital Image Processing at the Image Processing Laboratory of the Jet Propulsion Laboratory,” Proc. IEEE, 60, 7, July 1972, 821–828.


11. K. S. Fu, R. C. Gonzalez and C. S. G. Lee, Robotics: Control, Sensing, Vision, and Intelligence, McGraw-Hill, New York, 1987.
12. W. K. Pratt, “Image Processing and Analysis Using Primitive Computational Elements,” in Selected Topics in Signal Processing, S. Haykin, Ed., Prentice Hall, Englewood Cliffs, NJ, 1989.


PART 5 IMAGE ANALYSIS
Image analysis is concerned with the extraction of measurements, data or information from an image by automatic or semiautomatic methods. In the literature, this field has been called image data extraction, scene analysis, image description, automatic photo interpretation, image understanding, and a variety of other names. Image analysis is distinguished from other types of image processing, such as coding, restoration, and enhancement, in that the ultimate product of an image analysis system is usually numerical output rather than a picture. Image analysis also diverges from classical pattern recognition in that analysis systems, by definition, are not limited to the classification of scene regions to a fixed number of categories, but rather are designed to provide a description of complex scenes whose variety may be enormously large and ill-defined in terms of a priori expectation.



14
MORPHOLOGICAL IMAGE PROCESSING

Morphological image processing is a type of processing in which the spatial form or structure of objects within an image is modified. Dilation, erosion, and skeletonization are three fundamental morphological operations. With dilation, an object grows uniformly in spatial extent, whereas with erosion an object shrinks uniformly. Skeletonization results in a stick figure representation of an object. The basic concepts of morphological image processing trace back to the research on spatial set algebra by Minkowski (1) and the studies of Matheron (2) on topology. Serra (3–5) developed much of the early foundation of the subject. Steinberg (6,7) was a pioneer in applying morphological methods to medical and industrial vision applications. This research work led to the development of the cytocomputer for high-speed morphological image processing (8,9). In the following sections, morphological techniques are first described for binary images. Then these morphological concepts are extended to gray scale images.

14.1. BINARY IMAGE CONNECTIVITY Binary image morphological operations are based on the geometrical relationship or connectivity of pixels that are deemed to be of the same class (10,11). In the binary image of Figure 14.1-1a, the ring of black pixels, by all reasonable definitions of connectivity, divides the image into three segments: the white pixels exterior to the ring, the white pixels interior to the ring, and the black pixels of the ring itself. The pixels within each segment are said to be connected to one another. This concept of connectivity is easily understood for Figure 14.1-1a, but ambiguity arises when considering Figure 14.1-1b. Do the black pixels still define a ring, or do they instead form four disconnected lines? The answers to these questions depend on the definition of connectivity.


FIGURE 14.1-1. Connectivity.

Consider the following neighborhood pixel pattern:

X3 X2 X1
X4 X  X0
X5 X6 X7

in which a binary-valued pixel F ( j, k ) = X , where X = 0 (white) or X = 1 (black) is surrounded by its eight nearest neighbors X 0, X 1, …, X 7. An alternative nomenclature is to label the neighbors by compass directions: north, northeast, and so on:
NW N NE
W  X E
SW S SE

Pixel X is said to be four-connected to a neighbor if it is a logical 1 and if its east, north, west, or south ( X 0, X 2, X4, X6 ) neighbor is a logical 1. Pixel X is said to be eight-connected if it is a logical 1 and if its north, northeast, etc. ( X0, X1, …, X 7 ) neighbor is a logical 1. The connectivity relationship between a center pixel and its eight neighbors can be quantified by the concept of a pixel bond, the sum of the bond weights between the center pixel and each of its neighbors. Each four-connected neighbor has a bond of two, and each eight-connected neighbor has a bond of one. In the following example, the pixel bond is seven.
1 1 1
0 X 0
1 1 0
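The bond computation amounts to a weighted sum over the 3 × 3 window; the weight array and function name in the following sketch are illustrative.

```python
import numpy as np

# Bond weights of the eight neighbors of the center pixel: four-connected
# neighbors (N, E, S, W) weigh 2, diagonal neighbors weigh 1, the center weighs 0.
BOND_WEIGHTS = np.array([[1, 2, 1],
                         [2, 0, 2],
                         [1, 2, 1]])

def pixel_bond(window):
    """Sum of bond weights of the black (1) neighbors in a 3 x 3 binary window."""
    return int(np.sum(np.asarray(window) * BOND_WEIGHTS))

# The example pattern above (center X taken as 1; it carries zero weight) has bond 7.
example = [[1, 1, 1],
           [0, 1, 0],
           [1, 1, 0]]
print(pixel_bond(example))   # 7
```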


FIGURE 14.1-2. Pixel neighborhood connectivity definitions.

Under the definition of four-connectivity, Figure 14.1-1b has four disconnected black line segments, but with the eight-connectivity definition, Figure 14.1-1b has a ring of connected black pixels. Note, however, that under eight-connectivity, all white pixels are connected together. Thus a paradox exists. If the black pixels are to be eight-connected together in a ring, one would expect a division of the white pixels into pixels that are interior and exterior to the ring. To eliminate this dilemma, eight-connectivity can be defined for the black pixels of the object, and four-connectivity can be established for the white pixels of the background. Under this definition, a string of black pixels is said to be minimally connected if elimination of any black pixel results in a loss of connectivity of the remaining black pixels. Figure 14.1-2 provides definitions of several other neighborhood connectivity relationships between a center black pixel and its neighboring black and white pixels. The preceding definitions concerning connectivity have been based on a discrete image model in which a continuous image field is sampled over a rectangular array of points. Golay (12) has utilized a hexagonal grid structure. With such a structure, many of the connectivity problems associated with a rectangular grid are eliminated. In a hexagonal grid, neighboring pixels are said to be six-connected if they are in the same set and share a common edge boundary. Algorithms have been developed for the linking of boundary points for many feature extraction tasks (13). However, two major drawbacks have hindered wide acceptance of the hexagonal grid. First, most image scanners are inherently limited to rectangular scanning. The second problem is that the hexagonal grid is not well suited to many spatial processing operations, such as convolution and Fourier transformation.


14.2. BINARY IMAGE HIT OR MISS TRANSFORMATIONS The two basic morphological operations, dilation and erosion, plus many variants can be defined and implemented by hit-or-miss transformations (3). The concept is quite simple. Conceptually, a small odd-sized mask, typically 3 × 3 , is scanned over a binary image. If the binary-valued pattern of the mask matches the state of the pixels under the mask (hit), an output pixel in spatial correspondence to the center pixel of the mask is set to some desired binary state. For a pattern mismatch (miss), the output pixel is set to the opposite binary state. For example, to perform simple binary noise cleaning, if the isolated 3 × 3 pixel pattern
0 0 0
0 1 0
0 0 0

is encountered, the output pixel is set to zero; otherwise, the output pixel is set to the state of the input center pixel. In more complicated morphological algorithms, a large number of the 2^9 = 512 possible mask patterns may cause hits. It is often possible to establish simple neighborhood logical relationships that define the conditions for a hit. In the isolated pixel removal example, the defining equation for the output pixel G ( j, k ) becomes
G ( j, k ) = X ∩ ( X 0 ∪ X 1 ∪ … ∪ X 7 )

(14.2-1)

where ∩ denotes the intersection operation (logical AND) and ∪ denotes the union operation (logical OR). For complicated algorithms, the logical equation method of definition can be cumbersome. It is often simpler to regard the hit masks as a collection of binary patterns. Hit-or-miss morphological algorithms are often implemented in digital image processing hardware by a pixel stacker followed by a look-up table (LUT), as shown in Figure 14.2-1 (14). Each pixel of the input image is a positive integer, represented by a conventional binary code, whose most significant bit is a 1 (black) or a 0 (white). The pixel stacker extracts the bits of the center pixel X and its eight neighbors and puts them in a neighborhood pixel stack. Pixel stacking can be performed by convolution with the 3 × 3 pixel kernel
2^–4  2^–3  2^–2
2^–5  2^0   2^–1
2^–6  2^–7  2^–8
The binary number state of the neighborhood pixel stack becomes the numeric input address of the LUT whose entry is Y. For isolated pixel removal, integer entry 256, corresponding to the neighborhood pixel stack state 100000000, contains Y = 0; all other entries contain Y = X.
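A software sketch of the pixel-stacker and look-up table scheme for the isolated pixel removal example is given below. The bit ordering of the stack is an assumption and only has to be consistent between the stacker and the table; border pixels are simply left unchanged, and the function names are illustrative.

```python
import numpy as np

def make_isolated_pixel_lut():
    """512-entry LUT for isolated pixel removal: entry 256 (center black, all
    neighbors white) outputs 0; every other entry outputs the center state X."""
    lut = np.zeros(512, dtype=np.uint8)
    for address in range(512):
        x = (address >> 8) & 1           # center pixel X in the most significant bit
        neighbors = address & 0xFF        # neighbor bits X0..X7
        lut[address] = 0 if (x == 1 and neighbors == 0) else x
    return lut

def apply_lut(image, lut):
    """Apply a 3 x 3 hit-or-miss LUT to a binary (0/1) image."""
    out = image.copy()
    M, N = image.shape
    for j in range(1, M - 1):
        for k in range(1, N - 1):
            w = image[j - 1:j + 2, k - 1:k + 2]
            address = int(w[1, 1]) << 8                     # center X as the MSB
            ring = [w[1, 2], w[0, 2], w[0, 1], w[0, 0],     # X0 (east), X1, X2, X3,
                    w[1, 0], w[2, 0], w[2, 1], w[2, 2]]     # X4, X5, X6, X7
            for bit, v in enumerate(ring):
                address |= int(v) << bit
            out[j, k] = lut[address]
    return out
```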


FIGURE 14.2-1. Look-up table flowchart for binary unconditional operations.

Several other 3 × 3 hit-or-miss operators are described in the following subsections.

14.2.1. Additive Operators Additive hit-or-miss morphological operators cause the center pixel of a 3 × 3 pixel window to be converted from a logical 0 state to a logical 1 state if the neighboring pixels meet certain predetermined conditions. The basic operators are now defined. Interior Fill. Create a black pixel if all four-connected neighbor pixels are black.
G ( j, k ) = X ∪ [ X 0 ∩ X 2 ∩ X 4 ∩ X 6 ]

(14.2-2)

Diagonal Fill. Create a black pixel if creation eliminates the eight-connectivity of the background.
G ( j, k ) = X ∪ [ P 1 ∪ P 2 ∪ P 3 ∪ P 4 ]

(14.2-3a)


where
P1 = X̄ ∩ X0 ∩ X̄1 ∩ X2   (14.2-3b)
P2 = X̄ ∩ X2 ∩ X̄3 ∩ X4   (14.2-3c)
P3 = X̄ ∩ X4 ∩ X̄5 ∩ X6   (14.2-3d)
P4 = X̄ ∩ X6 ∩ X̄7 ∩ X0   (14.2-3e)

In Eq. 14.2-3, the overbar denotes the logical complement of a variable. Bridge. Create a black pixel if creation results in connectivity of previously unconnected neighboring black pixels.
G ( j, k ) = X ∪ [ P 1 ∪ P 2 ∪ … ∪ P 6 ]

(14.2-4a)

where
P1 = X2 ∩ X6 ∩ [X3 ∪ X4 ∪ X5] ∩ [X0 ∪ X1 ∪ X7] ∩ PQ   (14.2-4b)
P2 = X0 ∩ X4 ∩ [X1 ∪ X2 ∪ X3] ∩ [X5 ∪ X6 ∪ X7] ∩ PQ   (14.2-4c)
P3 = X0 ∩ X6 ∩ X7 ∩ [X2 ∪ X3 ∪ X4]   (14.2-4d)
P4 = X0 ∩ X2 ∩ X1 ∩ [X4 ∪ X5 ∪ X6]   (14.2-4e)
P5 = X2 ∩ X4 ∩ X3 ∩ [X0 ∪ X6 ∪ X7]   (14.2-4f)
P6 = X4 ∩ X6 ∩ X5 ∩ [X0 ∪ X1 ∪ X2]   (14.2-4g)

and
P Q = L 1 ∪ L2 ∪ L 3 ∪ L 4 L1 = X ∩ X 0 ∩ X1 ∩ X 2 ∩ X 3 ∩ X4 ∩ X5 ∩ X 6 ∩ X 7 L2 = X ∩ X0 ∩ X 1 ∩ X2 ∩ X 3 ∩ X 4 ∩ X5 ∩ X 6 ∩ X 7 L3 = X ∩ X0 ∩ X 1 ∩ X2 ∩ X 3 ∩ X 4 ∩ X5 ∩ X 6 ∩ X 7 L 4 = X ∩ X0 ∩ X 1 ∩ X 2 ∩ X3 ∩ X 4 ∩ X 5 ∩ X6 ∩ X 7

(14.2-4h) (14.2-4i) (14.2-4j) (14.2-4k) (14.2-4l)


The following is one of 119 qualifying patterns
1 0 0
1 0 1
0 0 1

A pattern such as
0 0 0
0 0 0
1 0 1

does not qualify because the two black pixels will be connected when they are on the middle row of a subsequent observation window if they are indeed unconnected. Eight-Neighbor Dilate. Create a black pixel if at least one eight-connected neighbor pixel is black.
G ( j, k ) = X ∪ X 0 ∪ … ∪ X 7

(14.2-5)

This hit-or-miss definition of dilation is a special case of a generalized dilation operator that is introduced in Section 14.4. The dilate operator can be applied recursively. With each iteration, objects will grow by a single pixel width ring of exterior pixels. Figure 14.2-2 shows dilation for one and for three iterations for a binary image. In the example, the original pixels are recorded as black, the background pixels are white, and the added pixels are midgray. Fatten. Create a black pixel if at least one eight-connected neighbor pixel is black, provided that creation does not result in a bridge between previously unconnected black pixels in a 3 × 3 neighborhood. The following is an example of an input pattern in which the center pixel would be set black for the basic dilation operator, but not for the fatten operator.
0 0 1
1 0 0
1 1 0

There are 132 such qualifying patterns. This stratagem will not prevent connection of two objects separated by two rows or columns of white pixels. A solution to this problem is considered in Section 14.3. Figure 14.2-3 provides an example of fattening.


FIGURE 14.2-2. Dilation of a binary image: (a) original; (b) one iteration; (c) three iterations.

14.2.2. Subtractive Operators Subtractive hit-or-miss morphological operators cause the center pixel of a 3 × 3 window to be converted from black to white if its neighboring pixels meet predetermined conditions. The basic subtractive operators are defined below. Isolated Pixel Remove. Erase a black pixel with eight white neighbors.
G ( j, k ) = X ∩ [ X 0 ∪ X 1 ∪ … ∪ X 7 ]

(14.2-6)

Spur Remove. Erase a black pixel with a single eight-connected neighbor.


FIGURE 14.2-3. Fattening of a binary image.

The following is one of four qualifying patterns:
0 0 0
0 1 0
1 0 0

Interior Pixel Remove. Erase a black pixel if all four-connected neighbors are black.
G ( j, k ) = X ∩ [ X 0 ∪ X 2 ∪ X 4 ∪ X 6 ]

(14.2-7)

There are 16 qualifying patterns. H-Break. Erase a black pixel that is H-connected. There are two qualifying patterns.
1 1 1     1 0 1
0 1 0     1 1 1
1 1 1     1 0 1

Eight-Neighbor Erode. Erase a black pixel if at least one eight-connected neighbor pixel is white.
G ( j, k ) = X ∩ X 0 ∩ … ∩ X 7

(14.2-8)


FIGURE 14.2-4. Erosion of a binary image: (a) original; (b) one iteration; (c) three iterations.

A generalized erosion operator is defined in Section 14.4. Recursive application of the erosion operator will eventually erase all black pixels. Figure 14.2-4 shows results for one and three iterations of the erode operator. The eroded pixels are midgray. It should be noted that after three iterations, the ring is totally eroded. 14.2.3. Majority Black Operator The following is the definition of the majority black operator: Majority Black. Create a black pixel if five or more pixels in a 3 × 3 window are black; otherwise, set the output pixel to white. The majority black operator is useful for filling small holes in objects and closing short gaps in strokes. An example of its application to edge detection is given in Chapter 15.
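A direct sketch of the majority black operator follows; the assumption of a white (zero) border outside the image and the function name are illustrative.

```python
import numpy as np

def majority_black(image):
    """Output 1 wherever five or more pixels of the 3 x 3 window are black (1)."""
    M, N = image.shape
    padded = np.pad(image, 1)                 # white (zero) border outside the image
    out = np.zeros_like(image)
    for j in range(M):
        for k in range(N):
            out[j, k] = 1 if padded[j:j + 3, k:k + 3].sum() >= 5 else 0
    return out
```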


14.3. BINARY IMAGE SHRINKING, THINNING, SKELETONIZING, AND THICKENING

Shrinking, thinning, skeletonizing, and thickening are forms of conditional erosion in which the erosion process is controlled to prevent total erasure and to ensure connectivity.

14.3.1. Binary Image Shrinking

The following is a definition of shrinking:

Shrink. Erase black pixels such that an object without holes erodes to a single pixel at or near its center of mass, and an object with holes erodes to a connected ring lying midway between each hole and its nearest outer boundary.

A 3 × 3 pixel object will be shrunk to a single pixel at its center. A 2 × 2 pixel object will be arbitrarily shrunk, by definition, to a single pixel at its lower right corner. It is not possible to perform shrinking using single-stage 3 × 3 pixel hit-or-miss transforms of the type described in the previous section. The 3 × 3 window does not provide enough information to prevent total erasure and to ensure connectivity. A 5 × 5 hit-or-miss transform could provide sufficient information to perform proper shrinking. But such an approach would result in excessive computational complexity (i.e., 2^25 possible patterns to be examined!). References 15 and 16 describe two-stage shrinking and thinning algorithms that perform a conditional marking of pixels for erasure in a first stage, and then examine neighboring marked pixels in a second stage to determine which ones can be unconditionally erased without total erasure or loss of connectivity. The following algorithm developed by Pratt and Kabir (17) is a pipeline processor version of the conditional marking scheme. In the algorithm, two concatenated 3 × 3 hit-or-miss transformations are performed to obtain indirect information about pixel patterns within a 5 × 5 window. Figure 14.3-1 is a flowchart for the look-up table implementation of this algorithm. In the first stage, the states of nine neighboring pixels are gathered together by a pixel stacker, and a following look-up table generates a conditional mark M for possible erasures. Table 14.3-1 lists all patterns, as indicated by the letter S in the table column, which will be conditionally marked for erasure. In the second stage of the algorithm, the center pixel X and the conditional marks in a 3 × 3 neighborhood centered about X are examined to create an output pixel. The shrinking operation can be expressed logically as
G ( j, k ) = X ∩ [ M ∪ P ( M, M 0, …, M 7 ) ]

(14.3-1)

where P ( M, M 0, …, M 7 ) is an erasure inhibiting logical variable, as defined in Table 14.3-2. The first four patterns of the table prevent strokes of single pixel width from being totally erased. The remaining patterns inhibit erasure that would break object connectivity. There are a total of 157 inhibiting patterns. This two-stage process must be performed iteratively until there are no further erasures.


FIGURE 14.3-1. Look-up table flowchart for binary conditional mark operations.

As an example, the 2 × 2 square pixel object
1 1
1 1

results in the following intermediate array of conditional marks

M M

M M

The corner cluster pattern of Table 14.3-2 gives a hit only for the lower right corner mark. The resulting output is
0 0
0 1


TABLE 14.3-1. Shrink, Thin, and Skeletonize Conditional Mark Patterns [M = 1 if hit] Table S Bond 0 0 1 1 0 1 0 0 0 0 0 0 0 S 2 0 1 1 0 0 0 0 0 1 S 3 0 1 1 0 0 0 0 1 0 TK 4 0 1 1 0 0 0 0 0 1 STK 4 0 1 1 0 0 1 1 1 0 ST 5 0 1 1 0 0 0 0 1 1 ST 5 0 1 1 0 0 0 1 1 0 ST 6 0 1 1 0 0 1 1 1 1 STK 6 0 1 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 1 0 1 0 0 0 0 0 1 0 1 1 0 0 0 0 1 1 1 0 1 0 0 0 0 0 1 0 0 1 1 0 0 1 1 1 0 1 1 0 0 0 0 0 1 1 1 1 0 1 0 0 0 1 1 0 1 1 0 0 1 1 1 1 1 1 0 0 0 0 1 1 0 1 1 0 1 0 0 1 0 0 1 1 0 1 1 0 0 0 0 1 1 0 1 1 1 0 0 0 0 1 1 1 1 1 0 0 1 0 1 1 0 1 1 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0 1 1 0 0 1 0 0 0 0 0 0 0 1 1 0 0 1 0 1 0 0 1 1 0 1 0 0 0 1 1 1 1 0 0 0 0 0 0 0 1 1 0 1 1 0 Pattern 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 1 0 0 1 1 0 0 0 0 0 0 0 0 1 1 0 1 0 0 0 0 0 1 0 1 1 1 0 0 1 0 1 1 0 1 0 0 0 0 0 1 1 0 1 1 0 0 0 1 1 0 1 0 0 0 0 0 0 1 0 1 1 0 0 0 0 0 1 0 0 1 1 0 0 0 0 1 1 0 0 1

(Continued)


TABLE 14.3-1 (Continued) Table STK Bond 1 1 1 7 0 1 1 0 0 1 0 1 1 STK 8 0 1 1 0 1 1 1 1 1 STK 9 0 1 1 0 1 1 1 1 1 STK 10 0 1 1 1 1 1 1 1 1 K 11 1 1 1 0 1 1 1 1 1 1 1 0 1 0 0 1 1 1 1 1 1 0 0 0 0 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 0 0 1 1 0 1 1 1 1 1 0 1 1 0 1 1 0 1 1 1 1 1 1 1 0 0 1 1 1 1 1 0 1 1 1 1 1 0 1 1 1 1 1 1 Pattern 0 0 1 0 1 1 1 1 1 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 0 1 1 0 1 1 0 1 1 1 1 0 0 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1

Figure 14.3-2 shows an example of the shrinking of a binary image for four and 13 iterations of the algorithm. No further shrinking occurs for more than 13 iterations. At this point, the shrinking operation has become idempotent (i.e., reapplication evokes no further change). This shrinking algorithm does not shrink the symmetric original ring object to a ring that is also symmetric because of some of the conditional mark patterns of Table 14.3-2, which are necessary to ensure that objects of even dimension shrink to a single pixel. For the same reason, the shrink ring is not minimally connected.

14.3.2. Binary Image Thinning

The following is a definition of thinning:

Thin. Erase black pixels such that an object without holes erodes to a minimally connected stroke located equidistant from its nearest outer boundaries, and an object with holes erodes to a minimally connected ring midway between each hole and its nearest outer boundary.


TABLE 14.3-2. Shrink and Thin Unconditional Mark Patterns [P(M, M0, M1, M2, M3, M4, M5, M6, M7) = 1 if hit] a Pattern Spur 0 0 M 0 M0 0 0 0 M0 0 0 M0 0 0 0 Single 4-connection 0 0 0 0 0 0 0 M0 0 MM 0 M0 0 0 0

L Cluster (thin only) 0 0 M 0 MM MM0 0 MM 0 M0 0 M0 0 0 0 0 0 0 0 0 0 4-Connected offset 0 MM MM0 0 M0 MM0 0 MM 0 MM 0 0 0 0 0 0 0 0 M Spur corner cluster 0 A M MB 0 0 0 M 0 MB A M0 A M0 M0 0 0 0 M MB 0 Corner cluster MMD MMD DDD Tee branch D M0 0 MD MMM MMM D0 0 0 0 D Vee branch MD M MD C D MD D MB A B C MD A Diagonal branch D M0 0 MD 0 MM MM0 M0 D D 0 M a M0 0 MM0 0 0 0

0 0 0 MM0 M0 0

0 0 0 0 M0 MM0

0 0 0 0 M0 0 MM

0 0 0 0 MM 0 0 M

0 0 M 0 MM 0 M0

M0 0 0 MB 0 AM

0 0 D MMM 0 MD

D0 0 MMM D M0

D MD MM0 0 M0

0 M0 MM0 D MD

0 M0 0 MM D MD

D MD 0 MM 0 M0

CBA D MD MD M

A DM B MD CDM

D0 M MM0 0 MD

M0 D 0 MM D M0 A ∪ B = 1.

A∪B∪C = 1

D = 0∪1


FIGURE 14.3-2. Shrinking of a binary image: (a) four iterations; (b) thirteen iterations.

The following is an example of the thinning of a 3 × 5 pixel object without holes:

     before                  after
  1 1 1 1 1               0 0 0 0 0
  1 1 1 1 1               0 1 1 1 0
  1 1 1 1 1               0 0 0 0 0

A 2 × 5 object is thinned as follows:

     before                  after
  1 1 1 1 1               0 0 0 0 0
  1 1 1 1 1               0 1 1 1 1

Table 14.3-1 lists the conditional mark patterns, as indicated by the letter T in the table column, for thinning by the conditional mark algorithm of Figure 14.3-1. The shrink and thin unconditional patterns are identical, as shown in Table 14.3-2. Figure 14.3-3 contains an example of the thinning of a binary image for four and eight iterations. Figure 14.3-4 provides an example of the thinning of an image of a printed circuit board in order to locate solder pads that have been deposited improperly and that do not have holes for component leads. The pads with holes erode to a minimally connected ring, while the pads without holes erode to a point. Thinning can be applied to the background of an image containing several objects as a means of separating the objects. Figure 14.3-5 provides an example of the process. The original image appears in Figure 14.3-5a, and the background-reversed image is Figure 14.3-5b. Figure 14.3-5c shows the effect of thinning the background. The thinned strokes that separate the original objects are minimally


FIGURE 14.3-3. Thinning of a binary image: (a) four iterations; (b) eight iterations.

connected, and therefore the background of the separating strokes is eight-connected throughout the image. This is an example of the connectivity ambiguity discussed in Section 14.1. To resolve this ambiguity, a diagonal fill operation can be applied to the thinned strokes. The result, shown in Figure 14.3-5d, is called the exothin of the original image. The name derives from the exoskeleton, discussed in the following section. 14.3.3. Binary Image Skeletonizing A skeleton or stick figure representation of an object can be used to describe its structure. Thinned objects sometimes have the appearance of a skeleton, but they are not always uniquely defined. For example, in Figure 14.3-3, both the rectangle and ellipse thin to a horizontal line.

FIGURE 14.3-4. Thinning of a printed circuit board image: (a) original; (b) thinned.


FIGURE 14.3-5. Exothinning of a binary image: (a) original; (b) background-reversed; (c) thinned background; (d) exothin.

Blum (18) has introduced a skeletonizing technique called medial axis transformation that produces a unique skeleton for a given object. An intuitive explanation of the medial axis transformation is based on the prairie fire analogy (19–22). Consider the circle and rectangle regions of Figure 14.3-6 to be composed of dry grass on a bare dirt background. If a fire were to be started simultaneously on the perimeter of the grass, the fire would proceed to burn toward the center of the regions until all the grass was consumed. In the case of the circle, the fire would burn to the center point of the circle, which is the quench point of the circle. For the rectangle, the fire would proceed from each side. As the fire moved simultaneously from left and top, the fire lines would meet and quench the fire. The quench points or quench lines of a figure are called its medial axis skeleton. More generally, the medial axis skeleton consists of the set of points that are equally distant from two closest points of an object boundary. The minimal distance function is called the quench distance of the object. From the medial axis skeleton of an object and its quench distance, it is


FIGURE 14.3-6. Medial axis transforms: (a) circle; (b) rectangle.

possible to reconstruct the object boundary. The object boundary is determined by the union of a set of circular disks formed by circumscribing a circle whose radius is the quench distance at each point of the medial axis skeleton. A reasonably close approximation to the medial axis skeleton can be implemented by a slight variation of the conditional marking implementation shown in Figure 14.3-1. In this approach, an image is iteratively eroded using conditional and unconditional mark patterns until no further erosion occurs. The conditional mark patterns for skeletonization are listed in Table 14.3-1 under the table indicator K. Table 14.3-3 lists the unconditional mark patterns. At the conclusion of the last iteration, it is necessary to perform a single iteration of bridging as defined by Eq. 14.2-4 to restore connectivity, which will be lost whenever the following pattern is encountered:
1 11 11 1 111 1

Inhibiting the following mark pattern created by the bit pattern above:
MM M M

will prevent elliptically shaped objects from being improperly skeletonized.


TABLE 14.3-3. Skeletonize Unconditional Mark Patterns [P(M, M0 , M1 , M2, M3, M4, M5, M6 , M7) = 1 if hit]a Pattern Spur 0 0 0 0 0 0 0 0 0 0 M 0 0 M M M M 0 0 0 M 0 0 0 0 M 0 0 0 M 0 0 0 0 M 0 0 M 0 0 M 0 M M 0 0 0 0 0 M 0 0 0 0 0 0 0 0 M 0 0 0 0 0 M 0 0 M 0 0 M M M 0 0 0 0 0 0 M 0 M 0 0 0 0 0 0 M 0 0 M 0 M M 0 0 M M 0 0 0 0 0 0 0 0 0

Single 4-connection

L corner

Corner cluster D D D D M D M M D M M 0 M M D D M 0 D M M D M D D M M M M M D D D D D D M M D D M D M M D D M M D D D D M D D D D D D D D M M M M M D M M D M D

Tee branch

Vee branch M D A D 0 M a D M B M M 0

M D C 0 M D

M D M 0 M D D = 0 ∪ 1.

D M D M M 0

C B A D 0 M

C D M D M 0

B M D 0 M M

A D M M 0 D

A B C M 0 D

D M D 0 M M

M D M D M 0

Digonal branch

A∪B∪C = 1


FIGURE 14.3-7. Skeletonizing of a binary image: (a) four iterations; (b) ten iterations.

Figure 14.3-7 shows an example of the skeletonization of a binary image. The eroded pixels are midgray. It should be observed that skeletonizing gives different results than thinning for many objects. Prewitt (23, p. 136) has coined the term exoskeleton for the skeleton of the background of objects in a scene. The exoskeleton partitions each object from neighboring objects, as does the thinning of the background.

14.3.4. Binary Image Thickening

In Section 14.2.1, the fatten operator was introduced as a means of dilating objects such that objects separated by a single pixel stroke would not be fused. But the fatten operator does not prevent fusion of objects separated by a double width white stroke. This problem can be solved by iteratively thinning the background of an image and then performing a diagonal fill operation. This process, called thickening, when taken to its idempotent limit, forms the exothin of the image, as discussed in Section 14.3.2. Figure 14.3-8 provides an example of thickening. The exothin operation is repeated three times on the background-reversed version of the original image. Figure 14.3-8b shows the final result obtained by reversing the background of the exothinned image.


FIGURE 14.3-8. Thickening of a binary image: (a) original; (b) thickened.

14.4. BINARY IMAGE GENERALIZED DILATION AND EROSION

Dilation and erosion, as defined earlier in terms of hit-or-miss transformations, are limited to object modification by a single ring of boundary pixels during each iteration of the process. The operations can be generalized. Before proceeding further, it is necessary to introduce some fundamental concepts of image set algebra that are the basis for defining the generalized dilation and erosion operators. Consider a binary-valued source image function F(j, k). A pixel at coordinate (j, k) is a member of F(j, k), as indicated by the symbol ∈, if and only if it is a logical 1. A binary-valued image B(j, k) is a subset of a binary-valued image A(j, k), as indicated by B(j, k) ⊆ A(j, k), if for every spatial occurrence of a logical 1 of B(j, k), A(j, k) is a logical 1. The complement F̄(j, k) of F(j, k) is a binary-valued image whose pixels are in the opposite logical state of those in F(j, k). Figure 14.4-1 shows an example of the complement process and other image set algebraic operations on a pair of binary images. A reflected image F̃(j, k) is an image that has been flipped from left to right and from top to bottom. Figure 14.4-2 provides an example of image reflection. Translation of an image, as indicated by the function

G ( j, k ) = T r, c { F ( j, k ) }

(14.4-1)

consists of spatially offsetting F ( j, k ) with respect to itself by r rows and c columns, where – R ≤ r ≤ R and – C ≤ c ≤ C . Figure 14.4-2 presents an example of the translation of a binary image.


FIGURE 14.4-1. Image set algebraic operations on binary arrays.

14.4.1. Generalized Dilation

Generalized dilation is expressed symbolically as
G ( j, k ) = F ( j, k ) ⊕ H ( j , k )

(14.4-2)

where F ( j, k ) for 1 ≤ j, k ≤ N is a binary-valued image and H ( j, k ) for 1 ≤ j, k ≤ L , where L is an odd integer, is a binary-valued array called a structuring element. For notational simplicity, F ( j, k ) and H ( j, k ) are assumed to be square arrays. Generalized dilation can be defined mathematically and implemented in several ways. The Minkowski addition definition (1) is
G(j, k) = ∪_{(r, c) ∈ H} T_{r,c}{F(j, k)}   (14.4-3)


FIGURE 14.4-2. Reflection and translation of a binary array.

It states that G ( j, k ) is formed by the union of all translates of F ( j, k ) with respect to itself in which the translation distance is the row and column index of pixels of H ( j, k ) that is a logical 1. Figure 14.4-3 illustrates the concept. Equation 14.4-3 results in an M × M output array G ( j, k ) that is justified with the upper left corner of the input array F ( j, k ) . The output array is of dimension M = N + L – 1, where L is the size of the structuring element. In order to register the input and output images properly, F ( j, k ) should be translated diagonally right by Q = ( L – 1 ) ⁄ 2 pixels. Figure 14.4-3 shows the exclusive-OR difference between G ( j, k ) and the translate of F ( j, k ) . This operation identifies those pixels that have been added as a result of generalized dilation. An alternative definition of generalized dilation is based on the scanning and processing of F ( j, k ) by the structuring element H ( j, k ) . With this approach, generalized dilation is formulated as (17)
G(j, k) = ∪_m ∪_n F(m, n) ∩ H(j − m + 1, k − n + 1)   (14.4-4)

With reference to Eq. 7.1-7, the spatial limits of the union combination are
MAX{1, j − L + 1} ≤ m ≤ MIN{N, j}   (14.4-5a)
MAX{1, k − L + 1} ≤ n ≤ MIN{N, k}   (14.4-5b)

Equation 14.4-4 provides an output array that is justified with the upper left corner of the input array. In image processing systems, it is often convenient to center the input and output images and to limit their size to the same overall dimension. This can be accomplished easily by modifying Eq. 14.4-4 to the form
G(j, k) = ∪_m ∪_n F(m, n) ∩ H(j − m + S, k − n + S)   (14.4-6)


FIGURE 14.4-3. Generalized dilation computed by Minkowski addition.

where S = ( L – 1 ) ⁄ 2 and, from Eq. 7.1-10, the limits of the union combination are
MAX{1, j − Q} ≤ m ≤ MIN{N, j + Q}   (14.4-7a)
MAX{1, k − Q} ≤ n ≤ MIN{N, k + Q}   (14.4-7b)


and where Q = (L − 1)/2. Equation 14.4-6 applies for S ≤ j, k ≤ N − Q and G(j, k) = 0 elsewhere. The Minkowski addition definition of generalized dilation given in Eq. 14.4-3 can be modified to provide a centered result by taking the translations about the center of the structuring element. In the following discussion, only the centered definitions of generalized dilation will be utilized. In the special case for which L = 3, Eq. 14.4-6 can be expressed explicitly as
G(j, k) = [H(3, 3) ∩ F(j − 1, k − 1)] ∪ [H(3, 2) ∩ F(j − 1, k)] ∪ [H(3, 1) ∩ F(j − 1, k + 1)]
        ∪ [H(2, 3) ∩ F(j, k − 1)] ∪ [H(2, 2) ∩ F(j, k)] ∪ [H(2, 1) ∩ F(j, k + 1)]
        ∪ [H(1, 3) ∩ F(j + 1, k − 1)] ∪ [H(1, 2) ∩ F(j + 1, k)] ∪ [H(1, 1) ∩ F(j + 1, k + 1)]   (14.4-8)

If H(j, k) = 1 for 1 ≤ j, k ≤ 3, then G(j, k), as computed by Eq. 14.4-8, gives the same result as hit-or-miss dilation, as defined by Eq. 14.2-5. It is interesting to compare Eqs. 14.4-6 and 14.4-8, which define generalized dilation, and Eqs. 7.1-14 and 7.1-15, which define convolution. In the generalized dilation equation, the union operations are analogous to the summation operations of convolution, while the intersection operation is analogous to point-by-point multiplication. As with convolution, dilation can be conceived as the scanning and processing of F(j, k) by H(j, k) rotated by 180°.

14.4.2. Generalized Erosion

Generalized erosion is expressed symbolically as
G ( j, k ) = F ( j, k ) – H ( j, k )

(14.4-9)

where again H ( j, k ) is an odd size L × L structuring element. Serra (3) has adopted, as his definition for erosion, the dual relationship of Minkowski addition given by Eq. 14.4-1, which was introduced by Hadwiger (24). By this formulation, generalized erosion is defined to be
G(j, k) = ∩_{(r, c) ∈ H} T_{r,c}{F(j, k)}   (14.4-10)

The meaning of this relation is that erosion of F(j, k) by H(j, k) is the intersection of all translates of F(j, k) in which the translation distance is the row and column index of pixels of H(j, k) that are in the logical 1 state. Steinberg et al. (6,25) have adopted the subtly different formulation


FIGURE 14.4-4. Comparison of erosion results for two definitions of generalized erosion.

G(j, k) = ∩_{(r, c) ∈ H̃} T_{r,c}{F(j, k)}   (14.4-11)

introduced by Matheron (2), in which the translates of F(j, k) are governed by the reflection H̃(j, k) of the structuring element rather than by H(j, k) itself. Using the Steinberg definition, G(j, k) is a logical 1 if and only if the logical 1s of H(j, k) form a subset of the spatially corresponding pattern of the logical 1s of F(j, k) as H(j, k) is scanned over F(j, k). It should be noted that the logical zeros of H(j, k) do not have to match the logical zeros of F(j, k). With the Serra definition, the statements above hold when F(j, k) is scanned and processed by the reflection of the structuring element. Figure 14.4-4 presents a comparison of the erosion results for the two definitions of erosion. Clearly, the results are inconsistent. Pratt (26) has proposed a relation, which is the dual to the generalized dilation expression of Eq. 14.4-6, as a definition of generalized erosion. By this formulation, generalized erosion in centered form is

G(j, k) = ∩_m ∩_n F(m, n) ∪ H(j − m + S, k − n + S)   (14.4-12)

where S = ( L – 1 ) ⁄ 2 , and the limits of the intersection combination are given by Eq. 14.4-7. In the special case for which L = 3, Eq. 14.4-12 becomes


G(j, k) = [H(3, 3) ∪ F(j − 1, k − 1)] ∩ [H(3, 2) ∪ F(j − 1, k)] ∩ [H(3, 1) ∪ F(j − 1, k + 1)]
        ∩ [H(2, 3) ∪ F(j, k − 1)] ∩ [H(2, 2) ∪ F(j, k)] ∩ [H(2, 1) ∪ F(j, k + 1)]
        ∩ [H(1, 3) ∪ F(j + 1, k − 1)] ∩ [H(1, 2) ∪ F(j + 1, k)] ∩ [H(1, 1) ∪ F(j + 1, k + 1)]   (14.4-13)

If H(j, k) = 1 for 1 ≤ j, k ≤ 3, Eq. 14.4-13 gives the same result as hit-or-miss eight-neighbor erosion as defined by Eq. 14.2-8. Pratt's definition is the same as the Serra definition. However, Eq. 14.4-12 can easily be modified by substituting the reflection H̃(j, k) for H(j, k) to provide equivalency with the Steinberg definition. Unfortunately, the literature utilizes both definitions, which can lead to confusion. The definition adopted in this book is that of Hadwiger, Serra, and Pratt, because the

FIGURE 14.4-5. Generalized dilation and erosion for a 5 × 5 structuring element.


defining relationships (Eq. 14.4-10 or 14.4-12) are duals to their counterparts for generalized dilation (Eq. 14.4-3 or 14.4-6). Figure 14.4-5 shows examples of generalized dilation and erosion for a symmetric 5 × 5 structuring element.

14.4.3. Properties of Generalized Dilation and Erosion

Consideration is now given to several mathematical properties of generalized dilation and erosion. Proofs of these properties are found in Reference 25. For notational simplicity, in this subsection the spatial coordinates of a set are dropped, i.e., A(j, k) = A. Dilation is commutative:
A⊕B = B⊕A

(14.4-14a)

But in general, erosion is not commutative:
A –B≠B–A

(14.4-14b)

Dilation and erosion are increasing operations in the sense that if A ⊆ B , then
A⊕C⊆B⊕C A–C⊆B –C

(14.4-15a) (14.4-15b)

Dilation and erosion are opposite in effect; dilation of the background of an object behaves like erosion of the object. This statement can be quantified by the duality relationship
A–B = A⊕B

(14.4-16)

For the Steinberg definition of erosion, B on the right-hand side of Eq. 14.4-16 ˜ should be replaced by its reflection B . Figure 14.4-6 contains an example of the duality relationship. The dilation and erosion of the intersection and union of sets obey the following relations:
[A ∩ B] ⊕ C ⊆ [A ⊕ C] ∩ [B ⊕ C]   (14.4-17a)
[A ∩ B] – C = [A – C] ∩ [B – C]   (14.4-17b)
[A ∪ B] ⊕ C = [A ⊕ C] ∪ [B ⊕ C]   (14.4-17c)
[A ∪ B] – C ⊇ [A – C] ∪ [B – C]   (14.4-17d)


FIGURE 14.4-6. Duality relationship between dilation and erosion.

The dilation and erosion of a set by the intersection of two other sets satisfy these containment relations:
A ⊕ [B ∩ C] ⊆ [A ⊕ B] ∩ [A ⊕ C] A – [B ∩ C] ⊇ [A – B] ∪ [A – C]

(14.4-18a) (14.4-18b)

On the other hand, dilation and erosion of a set by the union of a pair of sets are governed by the equality relations
A ⊕ [B ∪ C] = [A ⊕ B] ∪ [A ⊕ C] A – [B ∪ C] = [A – B] ∪ [A – C]

(14.4-19a) (14.4-19b)

The following chain rules hold for dilation and erosion.
A ⊕[B ⊕ C] = [ A ⊕ B] ⊕ C A – [B ⊕ C ] = [ A – B] – C

(14.4-20a) (14.4-20b)
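The centered translate-and-combine definitions can be sketched as follows. Circular shifts stand in for the bounded-array translation of Eq. 14.4-1 (equivalent when objects stay clear of the image border), the function names are illustrative, and the final assertion demonstrates the duality of Eq. 14.4-16.

```python
import numpy as np

def translate(F, r, c):
    """T_{r,c}{F}: shift F by r rows and c columns (circular shift for brevity)."""
    return np.roll(F, (r, c), axis=(0, 1))

def dilate(F, H):
    """Centered generalized dilation (Eq. 14.4-3): union of the translates of F
    taken at the offsets of the logical 1s of the structuring element H."""
    Q = (H.shape[0] - 1) // 2
    G = np.zeros_like(F)
    for r in range(H.shape[0]):
        for c in range(H.shape[1]):
            if H[r, c]:
                G |= translate(F, r - Q, c - Q)
    return G

def erode(F, H):
    """Centered generalized erosion (Serra/Hadwiger form, Eq. 14.4-10):
    intersection of the same translates."""
    Q = (H.shape[0] - 1) // 2
    G = np.ones_like(F)
    for r in range(H.shape[0]):
        for c in range(H.shape[1]):
            if H[r, c]:
                G &= translate(F, r - Q, c - Q)
    return G

# Duality (Eq. 14.4-16): eroding an object behaves like dilating its background.
F = np.zeros((12, 12), dtype=int); F[3:9, 3:9] = 1
H = np.ones((3, 3), dtype=int)
assert np.array_equal(erode(F, H), 1 - dilate(1 - F, H))
```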

14.4.4. Structuring Element Decomposition

Equation 14.4-20 is important because it indicates that if an L × L structuring element can be expressed as
H ( j, k ) = K 1 ( j, k ) ⊕ … ⊕ Kq ( j, k ) ⊕ … ⊕ K Q ( j, k )

(14.4-21)


FIGURE 14.4-7. Structuring element decomposition.

where Kq(j, k) is a small structuring element, it is possible to perform dilation and erosion by operating on an image sequentially. In Eq. 14.4-21, if the small structuring elements Kq(j, k) are all 3 × 3 arrays, then Q = (L − 1)/2. Figure 14.4-7 gives several examples of small structuring element decomposition. Sequential small structuring element (SSE) dilation and erosion is analogous to small generating kernel (SGK) convolution as given by Eq. 9.6-1. Not every large impulse response array can be decomposed exactly into a sequence of SGK convolutions; similarly, not every large structuring element can be decomposed into a sequence of SSE dilations or erosions. Following is an example in which a 5 × 5 structuring element cannot be decomposed into the sequential dilation of two 3 × 3 SSEs. Zhuang and Haralick (27) have developed a computational search method to find an SSE decomposition into 1 × 2 and 2 × 1 elements.


FIGURE 14.4-8. Small structuring element decomposition of a 5 × 5 pixel ring.

1 1 1 1 1

1 0 0 0 1

1 0 0 0 1

1 0 0 0 1

1 1 1 1 1

For two-dimensional convolution it is possible to decompose any large impulse response array into a set of sequential SGKs that are computed in parallel and


summed together using the singular-value decomposition/small generating kernel (SVD/SGK) algorithm, as illustrated by the flowchart of Figure 9.6-2. It is logical to conjecture as to whether an analog to the SVD/SGK algorithm exists for dilation and erosion. Equation 14.4-19 suggests that such an algorithm may exist. Figure 14.4-8 illustrates an SSE decomposition of the 5 × 5 ring example based on Eqs. 14.4-19a and 14.4-21. Unfortunately, no systematic method has yet been found to decompose an arbitrarily large structuring element.
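For a structuring element that does decompose, the chain rule of Eq. 14.4-20a and the decomposition of Eq. 14.4-21 can be checked numerically. The short sketch below reuses the dilate() function sketched in the preceding subsection and takes the 5 × 5 square element, an assumed decomposable example, as its test case.

```python
import numpy as np

K = np.ones((3, 3), dtype=int)        # 3 x 3 SSE
H = dilate(np.pad(K, 1), K)           # composite element K (+) K: the 5 x 5 square

F = np.zeros((16, 16), dtype=int)
F[6:10, 6:10] = 1                     # a small test object away from the border
# Chain rule (Eq. 14.4-20a): sequential dilation by the SSEs equals a single
# dilation by the composite structuring element.
assert np.array_equal(dilate(dilate(F, K), K), dilate(F, H))
```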

14.5. BINARY IMAGE CLOSE AND OPEN OPERATIONS

Dilation and erosion are often applied to an image in concatenation. Dilation followed by erosion is called a close operation. It is expressed symbolically as
G ( j, k ) = F ( j, k ) • H ( j, k )

(14.5-1a)

where H(j, k) is an L × L structuring element. In accordance with the Serra formulation of erosion, the close operation is defined as
G(j, k) = [F(j, k) ⊕ H(j, k)] – H̃(j, k)

(14.5-1b)

where it should be noted that erosion is performed with the reflection of the structuring element. Closing of an image with a compact structuring element without holes (zeros), such as a square or circle, smooths contours of objects, eliminates small holes in objects, and fuses short gaps between objects. An open operation, expressed symbolically as
G(j, k) = F(j, k) ∘ H(j, k)

(14.5-2a)

consists of erosion followed by dilation. It is defined as
G(j, k) = [F(j, k) – H̃(j, k)] ⊕ H(j, k)

(14.5-2b)

where again, the erosion is with the reflection of the structuring element. Opening of an image smooths contours of objects, eliminates small objects, and breaks narrow strokes. The close operation tends to increase the spatial extent of an object, while the open operation decreases its spatial extent. In quantitative terms
F(j, k) • H(j, k) ⊇ F(j, k)   (14.5-3a)
F(j, k) ∘ H(j, k) ⊆ F(j, k)   (14.5-3b)


FIGURE 14.5-1. Close and open operations on a binary image: (a) original; (b) close; (c) overlay of original and close; (d) open; (e) overlay of original and open.


It can be shown that the close and open operations are stable in the sense that (25)
[F(j, k) • H(j, k)] • H(j, k) = F(j, k) • H(j, k)   (14.5-4a)
[F(j, k) ∘ H(j, k)] ∘ H(j, k) = F(j, k) ∘ H(j, k)   (14.5-4b)

Also, it can be easily shown that the open and close operations satisfy the following duality relationship:
F ( j, k ) • H ( j, k ) = F ( j, k ) H ( j, k )

(14.5-5)

Figure 14.5-1 presents examples of the close and open operations on a binary image.
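A compact sketch of Eqs. 14.5-1b and 14.5-2b follows. Dilation and erosion are realized here as the maximum and minimum of the translates selected by the structuring element, with circular shifts standing in for explicit border handling; the names are illustrative.

```python
import numpy as np

def _extremum_of_translates(F, H, op, init):
    """Combine the translates of F indexed by the 1s of H with op
    (np.maximum = union/dilation, np.minimum = intersection/erosion)."""
    Q = H.shape[0] // 2
    G = np.full_like(F, init)
    for r in range(H.shape[0]):
        for c in range(H.shape[1]):
            if H[r, c]:
                G = op(G, np.roll(F, (r - Q, c - Q), axis=(0, 1)))
    return G

def dilate(F, H):
    return _extremum_of_translates(F, H, np.maximum, 0)

def erode(F, H):
    return _extremum_of_translates(F, H, np.minimum, 1)

def close(F, H):
    """Close (Eq. 14.5-1b): dilation by H, then erosion by the reflection of H."""
    return erode(dilate(F, H), H[::-1, ::-1])

def open_(F, H):
    """Open (Eq. 14.5-2b): erosion by the reflection of H, then dilation by H."""
    return dilate(erode(F, H[::-1, ::-1]), H)
```

Per Eq. 14.5-3, closing never removes object pixels and opening never adds any, which provides a simple check on such an implementation.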

14.6. GRAY SCALE IMAGE MORPHOLOGICAL OPERATIONS

Morphological concepts can be extended to gray scale images, but the extension often leads to theoretical issues and to implementation complexities. When applied to a binary image, dilation and erosion operations cause an image to increase or decrease in spatial extent, respectively. To generalize these concepts to a gray scale image, it is assumed that the image contains visually distinct gray scale objects set against a gray background. Also, it is assumed that the objects and background are both relatively spatially smooth. Under these conditions, it is reasonable to ask: Why not just threshold the image and perform binary image morphology? The reason for not taking this approach is that the thresholding operation often introduces significant error in segmenting objects from the background. This is especially true when the gray scale image contains shading caused by nonuniform scene illumination.

14.6.1. Gray Scale Image Dilation and Erosion

Dilation or erosion of an image could, in principle, be accomplished by hit-or-miss transformations in which the quantized gray scale patterns are examined in a 3 × 3 window and an output pixel is generated for each pattern. This approach is, however, not computationally feasible. For example, if a look-up table implementation were to be used, the table would require 2^72 entries for 256-level quantization of each pixel! The common alternative is to use gray scale extremum operations over a 3 × 3 pixel neighborhood. Consider a gray scale image F(j, k) quantized to an arbitrary number of gray levels. According to the extremum method of gray scale image dilation, the dilation operation is defined as
G ( j, k ) = MAX { F ( j, k ), F ( j, k + 1 ), F ( j – 1, k + 1 ), …, F ( j + 1, k + 1 ) }

(14.6-1)


FIGURE 14.6-1. One-dimensional gray scale image dilation on a printed circuit board image: (a) original; (b) original profile; (c) one iteration; (d) two iterations; (e) three iterations.

where MAX { S 1, …, S 9 } generates the largest-amplitude pixel of the nine pixels in the neighborhood. If F ( j, k ) is quantized to only two levels, Eq. 14.6-1 provides the same result as that using binary image dilation as defined by Eq. 14.2-5.
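Equation 14.6-1 amounts to a 3 × 3 moving-window maximum, sketched below; replacing the maximum with a minimum gives the erosion defined next. The wrap-around border treatment and the function name are simplifications for illustration.

```python
import numpy as np

def gray_dilate_3x3(F):
    """Gray scale dilation of Eq. 14.6-1: each output pixel is the maximum of its
    3 x 3 neighborhood (np.minimum in place of np.maximum gives the MIN-based erosion).
    Neighborhoods wrap around the image borders here, a simplification."""
    G = F.copy()
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            G = np.maximum(G, np.roll(F, (dr, dc), axis=(0, 1)))
    return G

# Applied iteratively: three passes reproduce a single 7 x 7 moving-window MAX.
```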


By the extremum method, gray scale image erosion is defined as
G ( j, k ) = MIN { F ( j, k ), F ( j, k + 1 ), F ( j – 1, k + 1 ), …, F ( j + 1, k + 1 ) }

(14.6-2)

where MIN { S 1, …, S 9 } generates the smallest-amplitude pixel of the nine pixels in the 3 × 3 pixel neighborhood. If F ( j, k ) is binary-valued, then Eq. 14.6-2 gives the same result as hit-or-miss erosion as defined in Eq. 14.2-8. In Chapter 10, when discussing the pseudomedian, it was shown that the MAX and MIN operations can be computed sequentially. As a consequence, Eqs. 14.6-1 and 14.6-2 can be applied iteratively to an image. For example, three iterations gives the same result as a single iteration using a 7 × 7 moving-window MAX or MIN operator. By selectively excluding some of the terms S 1, …, S 9 of Eq. 14.6-1 or 14.6-2 during each iteration, it is possible to synthesize large nonsquare gray scale structuring elements in the same number as illustrated in Figure 14.4-7 for binary structuring elements. However, no systematic decomposition procedure has yet been developed. Figures 14.6-1 and 14.6-2 show the amplitude profile of a row of a gray scale image of a printed circuit board (PCB) after several dilation and erosion iterations. The row selected is indicated by the white horizontal line in Figure 14.6-la. In Figure 14.6-2, two-dimensional gray scale dilation and erosion are performed on the PCB image. 14.6.2. Gray Scale Image Close and Open Operators The close and open operations introduced in Section 14.5 for binary images can easily be extended to gray scale images. Gray scale closing is realized by first performing gray scale dilation with a gray scale structuring element, then gray scale erosion with the same structuring element. Similarly, gray scale opening is accomplished by gray scale erosion followed by gray scale dilation. Figure 14.6-3 gives examples of gray scale image closing and opening. Steinberg (28) has introduced the use of three-dimensional structuring elements for gray scale image closing and opening operations. Although the concept is well defined mathematically, it is simpler to describe in terms of a structural image model. Consider a gray scale image to be modeled as an array of closely packed square pegs, each of which is proportional in height to the amplitude of a corresponding pixel. Then a three-dimensional structuring element, for example a sphere, is placed over each peg. The bottom of the structuring element as it is translated over the peg array forms another spatially discrete surface, which is the close array of the original image. A spherical structuring element will touch pegs at peaks of the original peg array, but will not touch pegs at the bottom of steep valleys. Consequently, the close surface “fills in” dark spots in the original image. The opening of a gray scale image can be conceptualized in a similar manner. An original image is modeled as a peg array in which the height of each peg is inversely proportional to


FIGURE 14.6-2. One-dimensional gray scale image erosion on a printed circuit board image: (a) one iteration; (b) two iterations; (c) three iterations.

the amplitude of each corresponding pixel (i.e., the gray scale is subtractively inverted). The translated structuring element then forms the open surface of the original image. For a spherical structuring element, bright spots in the original image are made darker. 14.6.3. Conditional Gray Scale Image Morphological Operators There have been attempts to develop morphological operators for gray scale images that are analogous to binary image shrinking, thinning, skeletonizing, and thickening. The stumbling block to these extensions is the lack of a definition for connectivity of neighboring gray scale pixels. Serra (4) has proposed approaches based on topographic mapping techniques. Another approach is to iteratively perform the basic dilation and erosion operations on a gray scale image and then use a binary thresholded version of the resultant image to determine connectivity at each iteration.


FIGURE 14.6-3. Two-dimensional gray scale image dilation, erosion, close, and open on a printed circuit board image with a 5 × 5 square structuring element: (a) original; (b) dilation; (c) erosion; (d) close; (e) open.


REFERENCES
1. H. Minkowski, “Volumen und Oberfläche,” Mathematische Annalen, 57, 1903, 447–459.
2. G. Matheron, Random Sets and Integral Geometry, Wiley, New York, 1975.
3. J. Serra, Image Analysis and Mathematical Morphology, Vol. 1, Academic Press, London, 1982.
4. J. Serra, Image Analysis and Mathematical Morphology: Theoretical Advances, Vol. 2, Academic Press, London, 1988.
5. J. Serra, “Introduction to Mathematical Morphology,” Computer Vision, Graphics, and Image Processing, 35, 3, September 1986, 283–305.
6. S. R. Sternberg, “Parallel Architectures for Image Processing,” Proc. 3rd International IEEE Compsac, Chicago, 1981.
7. S. R. Sternberg, “Biomedical Image Processing,” IEEE Computer, January 1983, 22–34.
8. S. R. Sternberg, “Automatic Image Processor,” US patent 4,167,728.
9. R. M. Lougheed and D. L. McCubbrey, “The Cytocomputer: A Practical Pipelined Image Processor,” Proc. 7th Annual International Symposium on Computer Architecture, 1980.
10. A. Rosenfeld, “Connectivity in Digital Pictures,” J. Association for Computing Machinery, 17, 1, January 1970, 146–160.
11. A. Rosenfeld, Picture Processing by Computer, Academic Press, New York, 1969.
12. M. J. E. Golay, “Hexagonal Pattern Transformation,” IEEE Trans. Computers, C-18, 8, August 1969, 733–740.
13. K. Preston, Jr., “Feature Extraction by Golay Hexagonal Pattern Transforms,” IEEE Trans. Computers, C-20, 9, September 1971, 1007–1014.
14. F. A. Gerritsen and P. W. Verbeek, “Implementation of Cellular Logic Operators Using 3 × 3 Convolutions and Lookup Table Hardware,” Computer Vision, Graphics, and Image Processing, 27, 1, 1984, 115–123.
15. A. Rosenfeld, “A Characterization of Parallel Thinning Algorithms,” Information and Control, 29, 1975, 286–291.
16. T. Pavlidis, “A Thinning Algorithm for Discrete Binary Images,” Computer Graphics and Image Processing, 13, 2, 1980, 142–157.
17. W. K. Pratt and I. Kabir, “Morphological Binary Image Processing with a Local Neighborhood Pipeline Processor,” Computer Graphics, Tokyo, 1984.
18. H. Blum, “A Transformation for Extracting New Descriptors of Shape,” in Symposium Models for Perception of Speech and Visual Form, W. Wathen-Dunn, Ed., MIT Press, Cambridge, MA, 1967.
19. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley-Interscience, New York, 1973.
20. L. Calabi and W. E. Harnett, “Shape Recognition, Prairie Fires, Convex Deficiencies and Skeletons,” American Mathematical Monthly, 75, 4, April 1968, 335–342.
21. J. C. Mott-Smith, “Medial Axis Transforms,” in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970.
22. C. Arcelli and G. Sanniti Di Baja, “On the Sequential Approach to Medial Line Thinning Transformation,” IEEE Trans. Systems, Man and Cybernetics, SMC-8, 2, 1978, 139–144.
23. J. M. S. Prewitt, “Object Enhancement and Extraction,” in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970.
24. H. Hadwiger, Vorlesungen über Inhalt, Oberfläche und Isoperimetrie, Springer-Verlag, Berlin, 1957.
25. R. M. Haralick, S. R. Sternberg, and X. Zhuang, “Image Analysis Using Mathematical Morphology,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-9, 4, July 1987, 532–550.
26. W. K. Pratt, “Image Processing with Primitive Computational Elements,” McMaster University, Hamilton, Ontario, Canada, 1987.
27. X. Zhuang and R. M. Haralick, “Morphological Structuring Element Decomposition,” Computer Vision, Graphics, and Image Processing, 35, 3, September 1986, 370–382.
28. S. R. Sternberg, “Grayscale Morphology,” Computer Vision, Graphics, and Image Processing, 35, 3, September 1986, 333–355.


15
EDGE DETECTION

Changes or discontinuities in an image amplitude attribute such as luminance or tristimulus value are fundamentally important primitive characteristics of an image because they often provide an indication of the physical extent of objects within the image. Local discontinuities in image luminance from one level to another are called luminance edges. Global luminance discontinuities, called luminance boundary segments, are considered in Section 17.4. In this chapter the definition of a luminance edge is limited to image amplitude discontinuities between reasonably smooth regions. Discontinuity detection between textured regions is considered in Section 17.5. This chapter also considers edge detection in color images, as well as the detection of lines and spots within an image.

15.1. EDGE, LINE, AND SPOT MODELS Figure 15.1-1a is a sketch of a continuous domain, one-dimensional ramp edge modeled as a ramp increase in image amplitude from a low to a high level, or vice versa. The edge is characterized by its height, slope angle, and horizontal coordinate of the slope midpoint. An edge exists if the edge height is greater than a specified value. An ideal edge detector should produce an edge indication localized to a single pixel located at the midpoint of the slope. If the slope angle of Figure 15.1-1a is 90°, the resultant edge is called a step edge, as shown in Figure 15.1-1b. In a digital imaging system, step edges usually exist only for artificially generated images such as test patterns and bilevel graphics data. Digital images, resulting from digitization of optical images of real scenes, generally do not possess step edges because the anti aliasing low-pass filtering prior to digitization reduces the edge slope in the digital image caused by any sudden luminance change in the scene. The one-dimensional

FIGURE 15.1-1. One-dimensional, continuous domain edge and line models.

profile of a line is shown in Figure 15.1-1c. In the limit, as the line width w approaches zero, the resultant amplitude discontinuity is called a roof edge. Continuous domain, two-dimensional models of edges and lines assume that the amplitude discontinuity remains constant in a small neighborhood orthogonal to the edge or line profile. Figure 15.1-2a is a sketch of a two-dimensional edge. In addition to the edge parameters of a one-dimensional edge, the orientation of the edge slope with respect to a reference axis is also important. Figure 15.1-2b defines the edge orientation nomenclature for edges of an octagonally shaped object whose amplitude is higher than its background. Figure 15.1-3 contains step and unit width ramp edge models in the discrete domain. The vertical ramp edge model in the figure contains a single transition pixel whose amplitude is at the midvalue of its neighbors. This edge model can be obtained by performing a 2 × 2 pixel moving window average on the vertical step edge


FIGURE 15.1-2. Two-dimensional, continuous domain edge model.

model. The figure also contains two versions of a diagonal ramp edge. The singlepixel transition model contains a single midvalue transition pixel between the regions of high and low amplitude; the smoothed transition model is generated by a 2 × 2 pixel moving window average of the diagonal step edge model. Figure 15.1-3 also presents models for a discrete step and ramp corner edge. The edge location for discrete step edges is usually marked at the higher-amplitude side of an edge transition. For the single-pixel transition model and the smoothed transition vertical and corner edge models, the proper edge location is at the transition pixel. The smoothed transition diagonal ramp edge model has a pair of adjacent pixels in its transition zone. The edge is usually marked at the higher-amplitude pixel of the pair. In Figure 15.1-3 the edge pixels are italicized. Discrete two-dimensional single-pixel line models are presented in Figure 15.1-4 for step lines and unit width ramp lines. The single-pixel transition model has a midvalue transition pixel inserted between the high value of the line plateau and the low-value background. The smoothed transition model is obtained by performing a 2 × 2 pixel moving window average on the step line model.


FIGURE 15.1-3. Two-dimensional, discrete domain edge models.

A spot, which can only be defined in two dimensions, consists of a plateau of high amplitude against a lower amplitude background, or vice versa. Figure 15.1-5 presents single-pixel spot models in the discrete domain. There are two generic approaches to the detection of edges, lines, and spots in a luminance image: differential detection and model fitting. With the differential detection approach, as illustrated in Figure 15.1-6, spatial processing is performed on an original image F ( j, k ) to produce a differential image G ( j, k ) with accentuated spatial amplitude changes. Next, a differential detection operation is executed to determine the pixel locations of significant differentials. The second general approach to edge, line, or spot detection involves fitting of a local region of pixel values to a model of the edge, line, or spot, as represented in Figures 15.1-1 to 15.1-5. If the fit is sufficiently close, an edge, line, or spot is said to exist, and its assigned parameters are those of the appropriate model. A binary indicator map E ( j, k ) is often generated to indicate the position of edges, lines, or spots within an


FIGURE 15.1-4. Two-dimensional, discrete domain line models.

image. Typically, edge, line, and spot locations are specified by black pixels against a white background. There are two major classes of differential edge detection: first- and second-order derivative. For the first-order class, some form of spatial first-order differentiation is performed, and the resulting edge gradient is compared to a threshold value. An edge is judged present if the gradient exceeds the threshold. For the second-order derivative class of differential edge detection, an edge is judged present if there is a significant spatial change in the polarity of the second derivative. Sections 15.2 and 15.3 discuss the first- and second-order derivative forms of edge detection, respectively. Edge fitting methods of edge detection are considered in Section 15.4.


FIGURE 15.1-5. Two-dimensional, discrete domain single pixel spot models.

15.2. FIRST-ORDER DERIVATIVE EDGE DETECTION

There are two fundamental methods for generating first-order derivative edge gradients. One method involves generation of gradients in two orthogonal directions in an image; the second utilizes a set of directional derivatives.


FIGURE 15.1-6. Differential edge, line, and spot detection.

15.2.1. Orthogonal Gradient Generation An edge in a continuous domain edge segment F ( x, y ) such as the one depicted in Figure 15.1-2a can be detected by forming the continuous one-dimensional gradient G ( x, y ) along a line normal to the edge slope, which is at an angle θ with respect to the horizontal axis. If the gradient is sufficiently large (i.e., above some threshold value), an edge is deemed present. The gradient along the line normal to the edge slope can be computed in terms of the derivatives along orthogonal axes according to the following (1, p. 106)
G(x, y) = \frac{\partial F(x, y)}{\partial x} \cos\theta + \frac{\partial F(x, y)}{\partial y} \sin\theta    (15.2-1)

Figure 15.2-1 describes the generation of an edge gradient G ( x, y ) in the discrete domain in terms of a row gradient G R ( j, k ) and a column gradient G C ( j, k ) . The spatial gradient amplitude is given by
G(j, k) = \left[ [G_R(j, k)]^2 + [G_C(j, k)]^2 \right]^{1/2}    (15.2-2)

For computational efficiency, the gradient amplitude is sometimes approximated by the magnitude combination
G(j, k) = |G_R(j, k)| + |G_C(j, k)|    (15.2-3)

FIGURE 15.2-1. Orthogonal gradient generation.


The orientation of the spatial gradient with respect to the row axis is
\theta(j, k) = \arctan\left\{ \frac{G_C(j, k)}{G_R(j, k)} \right\}    (15.2-4)

The remaining issue for discrete domain orthogonal gradient generation is to choose a good discrete approximation to the continuous differentials of Eq. 15.2-1. The simplest method of discrete gradient generation is to form the running difference of pixels along rows and columns of the image. The row gradient is defined as
G R ( j, k ) = F ( j, k ) – F ( j, k – 1 )

(15.2-5a)

and the column gradient is
G C ( j, k ) = F ( j, k ) – F ( j + 1, k )

(15.2-5b)

These definitions of row and column gradients, and subsequent extensions, are chosen such that GR and GC are positive for an edge that increases in amplitude from left to right and from bottom to top in an image. As an example of the response of a pixel difference edge detector, the following is the row gradient along the center row of the vertical step edge model of Figure 15.1-3:
0 0 0 0 h 0 0 0 0

In this sequence, h = b – a is the step edge height. The row gradient for the vertical ramp edge model is

0   0   0   0   h/2   h/2   0   0   0

For ramp edges, the running difference edge detector cannot localize the edge to a single pixel. Figure 15.2-2 provides examples of horizontal and vertical differencing gradients of the monochrome peppers image. In this and subsequent gradient display photographs, the gradient range has been scaled over the full contrast range of the photograph. It is visually apparent from the photograph that the running difference technique is highly susceptible to small fluctuations in image luminance and that the object boundaries are not well delineated.
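The running-difference gradients of Eq. 15.2-5 and the amplitude combinations of Eqs. 15.2-2 and 15.2-3 can be sketched directly in Python with NumPy (illustrative code added here; the input image F is assumed to be a two-dimensional array):

    import numpy as np

    def pixel_difference_gradient(F):
        """Row and column running-difference gradients, Eq. 15.2-5.
        GR is positive for an edge increasing left to right,
        GC for an edge increasing bottom to top."""
        F = np.asarray(F, dtype=float)
        GR = np.zeros_like(F)
        GC = np.zeros_like(F)
        GR[:, 1:] = F[:, 1:] - F[:, :-1]      # F(j,k) - F(j,k-1)
        GC[:-1, :] = F[:-1, :] - F[1:, :]     # F(j,k) - F(j+1,k)
        return GR, GC

    def gradient_amplitude(GR, GC, square_root=True):
        """Eq. 15.2-2 (square root) or Eq. 15.2-3 (magnitude) combination."""
        if square_root:
            return np.sqrt(GR ** 2 + GC ** 2)
        return np.abs(GR) + np.abs(GC)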


FIGURE 15.2-2. Horizontal and vertical differencing gradients of the peppers_mon image: (a) original; (b) horizontal magnitude; (c) vertical magnitude.

Diagonal edge gradients can be obtained by forming running differences of diagonal pairs of pixels. This is the basis of the Roberts (2) cross-difference operator, which is defined in magnitude form as
G(j, k) = |G_1(j, k)| + |G_2(j, k)|    (15.2-6a)

and in square-root form as
G(j, k) = \left[ [G_1(j, k)]^2 + [G_2(j, k)]^2 \right]^{1/2}    (15.2-6b)


where
G_1(j, k) = F(j, k) - F(j+1, k+1)    (15.2-6c)

G_2(j, k) = F(j, k+1) - F(j+1, k)    (15.2-6d)

The edge orientation with respect to the row axis is
\theta(j, k) = \frac{\pi}{4} + \arctan\left\{ \frac{G_2(j, k)}{G_1(j, k)} \right\}    (15.2-7)
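A minimal sketch of the Roberts operator of Eqs. 15.2-6 and 15.2-7 (Python/NumPy, added for illustration; the gradients are computed over the valid region of an assumed float image F):

    import numpy as np

    def roberts_gradient(F, square_root=True):
        """Roberts cross-difference gradients, Eq. 15.2-6, over the valid region."""
        F = np.asarray(F, dtype=float)
        G1 = F[:-1, :-1] - F[1:, 1:]      # F(j,k) - F(j+1,k+1)
        G2 = F[:-1, 1:] - F[1:, :-1]      # F(j,k+1) - F(j+1,k)
        if square_root:
            G = np.sqrt(G1 ** 2 + G2 ** 2)        # Eq. 15.2-6b
        else:
            G = np.abs(G1) + np.abs(G2)           # Eq. 15.2-6a
        theta = np.pi / 4 + np.arctan2(G2, G1)    # Eq. 15.2-7
        return G, theta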

Figure 15.2-3 presents the edge gradients of the peppers image for the Roberts operators. Visually, the objects in the image appear to be slightly better distinguished with the Roberts square-root gradient than with the magnitude gradient. In Section 15.5, a quantitative evaluation of edge detectors confirms the superiority of the square-root combination technique. The pixel difference method of gradient generation can be modified to localize the edge center of the ramp edge model of Figure 15.1-3 by forming the pixel difference separated by a null value. The row and column gradients then become
G_R(j, k) = F(j, k+1) - F(j, k-1)    (15.2-8a)

G_C(j, k) = F(j-1, k) - F(j+1, k)    (15.2-8b)

The row gradient response for a vertical ramp edge model is then

0   0   h/2   h   h/2   0   0

FIGURE 15.2-3. Roberts gradients of the peppers_mon image: (a) magnitude; (b) square root.


FIGURE 15.2-4. Numbering convention for 3 × 3 edge detection operators.

Although the ramp edge is properly localized, the separated pixel difference gradient generation method remains highly sensitive to small luminance fluctuations in the image. This problem can be alleviated by using two-dimensional gradient formation operators that perform differentiation in one coordinate direction and spatial averaging in the orthogonal direction simultaneously. Prewitt (1, p. 108) has introduced a 3 × 3 pixel edge gradient operator described by the pixel numbering convention of Figure 15.2-4. The Prewitt operator square root edge gradient is defined as
G(j, k) = \left[ [G_R(j, k)]^2 + [G_C(j, k)]^2 \right]^{1/2}    (15.2-9a)

with
G_R(j, k) = \frac{1}{K+2} \left[ (A_2 + K A_3 + A_4) - (A_0 + K A_7 + A_6) \right]    (15.2-9b)

G_C(j, k) = \frac{1}{K+2} \left[ (A_0 + K A_1 + A_2) - (A_6 + K A_5 + A_4) \right]    (15.2-9c)

where K = 1. In this formulation, the row and column gradients are normalized to provide unit-gain positive and negative weighted averages about a separated edge position. The Sobel operator edge detector (3, p. 271) differs from the Prewitt edge detector in that the values of the north, south, east, and west pixels are doubled (i.e., K = 2). The motivation for this weighting is to give equal importance to each pixel in terms of its contribution to the spatial gradient. Frei and Chen (4) have proposed north, south, east, and west weightings by K = √2 so that the gradient is the same for horizontal, vertical, and diagonal edges. The edge gradient G(j, k) for these three operators along a row through the single pixel transition vertical ramp edge model of Figure 15.1-3 is


0   0   h/2   h   h/2   0   0

Along a row through the single transition pixel diagonal ramp edge model, the gradient is

0   h/(√2(2 + K))   h/√2   √2(1 + K)h/(2 + K)   h/√2   h/(√2(2 + K))   0

In the Frei–Chen operator with K = √2, the edge gradient is the same at the edge center for the single-pixel transition vertical and diagonal ramp edge models. The Prewitt gradient for a diagonal edge is 0.94 times that of a vertical edge. The

FIGURE 15.2-5. Prewitt, Sobel, and Frei–Chen gradients of the peppers_mon image: (a) Prewitt; (b) Sobel; (c) Frei–Chen.


corresponding factor for a Sobel edge detector is 1.06. Consequently, the Prewitt operator is more sensitive to horizontal and vertical edges than to diagonal edges; the reverse is true for the Sobel operator. The gradients along a row through the smoothed transition diagonal ramp edge model are different for vertical and diagonal edges for all three of the 3 × 3 edge detectors. None of them are able to localize the edge to a single pixel. Figure 15.2-5 shows examples of the Prewitt, Sobel, and Frei–Chen gradients of the peppers image. The reason that these operators visually appear to better delineate object edges than the Roberts operator is attributable to their larger size, which provides averaging of small luminance fluctuations. The row and column gradients for all the edge detectors mentioned previously in this subsection involve a linear combination of pixels within a small neighborhood. Consequently, the row and column gradients can be computed by the convolution relationships
G R ( j, k ) = F ( j , k ) * H R ( j, k ) G C ( j, k ) = F ( j, k ) * H C ( j, k )

(15.2-10a) (15.2-10b)

where H R ( j, k ) and H C ( j, k ) are 3 × 3 row and column impulse response arrays, respectively, as defined in Figure 15.2-6. It should be noted that this specification of the gradient impulse response arrays takes into account the 180° rotation of an impulse response array inherent to the definition of convolution in Eq. 7.1-14. A limitation common to the edge gradient generation operators previously defined is their inability to detect accurately edges in high-noise environments. This problem can be alleviated by properly extending the size of the neighborhood operators over which the differential gradients are computed. As an example, a Prewitttype 7 × 7 operator has a row gradient impulse response of the form

H_R = \frac{1}{21} \begin{bmatrix}
1 & 1 & 1 & 0 & -1 & -1 & -1 \\
1 & 1 & 1 & 0 & -1 & -1 & -1 \\
1 & 1 & 1 & 0 & -1 & -1 & -1 \\
1 & 1 & 1 & 0 & -1 & -1 & -1 \\
1 & 1 & 1 & 0 & -1 & -1 & -1 \\
1 & 1 & 1 & 0 & -1 & -1 & -1 \\
1 & 1 & 1 & 0 & -1 & -1 & -1
\end{bmatrix}    (15.2-11)

An operator of this type is called a boxcar operator. Figure 15.2-7 presents the boxcar gradient of a 7 × 7 array.
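The orthogonal gradients of Eq. 15.2-9 can be exercised with a short Python/SciPy sketch (an added illustration). Correlation rather than convolution is used so that the masks can be written directly from the gradient definitions without the 180° rotation; the value of K selects the Prewitt (K = 1), Sobel (K = 2), or Frei–Chen (K = √2) weighting.

    import numpy as np
    from scipy.ndimage import correlate

    def prewitt_family_gradient(F, K=1.0):
        """Row/column gradients of Eq. 15.2-9 by neighborhood correlation."""
        F = np.asarray(F, dtype=float)
        # Row gradient weights: right column positive, left column negative.
        row_mask = np.array([[-1.0, 0.0, 1.0],
                             [-K,   0.0, K  ],
                             [-1.0, 0.0, 1.0]]) / (K + 2.0)
        # Column gradient weights: top row positive, bottom row negative.
        col_mask = np.array([[ 1.0,  K,   1.0],
                             [ 0.0,  0.0, 0.0],
                             [-1.0, -K,  -1.0]]) / (K + 2.0)
        GR = correlate(F, row_mask, mode="nearest")
        GC = correlate(F, col_mask, mode="nearest")
        return np.sqrt(GR ** 2 + GC ** 2)      # Eq. 15.2-9a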


FIGURE 15.2-6. Impulse response arrays for 3 × 3 orthogonal differential gradient edge operators.

Abdou (5) has suggested a truncated pyramid operator that gives a linearly decreasing weighting to pixels away from the center of an edge. The row gradient impulse response array for a 7 × 7 truncated pyramid operator is given by

H_R = \frac{1}{34} \begin{bmatrix}
1 & 1 & 1 & 0 & -1 & -1 & -1 \\
1 & 2 & 2 & 0 & -2 & -2 & -1 \\
1 & 2 & 3 & 0 & -3 & -2 & -1 \\
1 & 2 & 3 & 0 & -3 & -2 & -1 \\
1 & 2 & 3 & 0 & -3 & -2 & -1 \\
1 & 2 & 2 & 0 & -2 & -2 & -1 \\
1 & 1 & 1 & 0 & -1 & -1 & -1
\end{bmatrix}    (15.2-12)


FIGURE 15.2-7. Boxcar, truncated pyramid, Argyle, Macleod, and FDOG gradients of the peppers_mon image: (a) 7 × 7 boxcar; (b) 9 × 9 truncated pyramid; (c) 11 × 11 Argyle, s = 2.0; (d) 11 × 11 Macleod, s = 2.0; (e) 11 × 11 FDOG, s = 2.0.


Argyle (6) and Macleod (7,8) have proposed large neighborhood Gaussian-shaped weighting functions as a means of noise suppression. Let

g(x, s) = (2\pi s^2)^{-1/2} \exp\left\{ -\tfrac{1}{2} (x / s)^2 \right\}    (15.2-13)

denote a continuous domain Gaussian function with standard deviation s. Utilizing this notation, the Argyle operator horizontal coordinate impulse response array can be expressed as a sampled version of the continuous domain impulse response
H_R(j, k) = -2\, g(x, s)\, g(y, t)    for x ≥ 0    (15.2-14a)

H_R(j, k) = 2\, g(x, s)\, g(y, t)     for x < 0    (15.2-14b)

where s and t are spread parameters. The vertical impulse response function can be expressed similarly. The Macleod operator horizontal gradient impulse response function is given by
H R ( j, k ) = [ g ( x + s, s ) – g ( x – s, s ) ]g ( y, t )

(15.2-15)

The Argyle and Macleod operators, unlike the boxcar operator, give decreasing importance to pixels far removed from the center of the neighborhood. Figure 15.2-7 provides examples of the Argyle and Macleod gradients. Extended-size differential gradient operators can be considered to be compound operators in which a smoothing operation is performed on a noisy image followed by a differentiation operation. The compound gradient impulse response can be written as
H(j, k) = H_G(j, k) * H_S(j, k)    (15.2-16)

where H_G(j, k) is one of the gradient impulse response operators of Figure 15.2-6 and H_S(j, k) is a low-pass filter impulse response. For example, if H_G(j, k) is the 3 × 3 Prewitt row gradient operator and H_S(j, k) = 1/9, for all (j, k), is a 3 × 3 uniform smoothing operator, the resultant 5 × 5 row gradient operator, after normalization to unit positive and negative gain, becomes

H_R = \frac{1}{18} \begin{bmatrix}
1 & 1 & 0 & -1 & -1 \\
2 & 2 & 0 & -2 & -2 \\
3 & 3 & 0 & -3 & -3 \\
2 & 2 & 0 & -2 & -2 \\
1 & 1 & 0 & -1 & -1
\end{bmatrix}    (15.2-17)
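The compound operator construction can be checked numerically: convolving the 3 × 3 Prewitt row gradient with a 3 × 3 uniform smoothing kernel and renormalizing reproduces the array of Eq. 15.2-17. A small Python/SciPy check (added here as an illustration):

    import numpy as np
    from scipy.signal import convolve2d

    # 3 x 3 Prewitt row-gradient impulse response (Figure 15.2-6 form).
    HG = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]]) / 3.0
    # 3 x 3 uniform smoothing impulse response, HS(j,k) = 1/9.
    HS = np.full((3, 3), 1.0 / 9.0)

    # Compound 5 x 5 gradient impulse response, Eq. 15.2-16.
    H = convolve2d(HG, HS)
    # Renormalize to unit positive and negative gain, as in Eq. 15.2-17.
    H = H / H[H > 0].sum()
    print(np.round(H * 18))   # prints the integer pattern of Eq. 15.2-17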


The decomposition of Eq. 15.2-16 applies in both directions. By applying the SVD/SGK decomposition of Section 9.6, it is possible, for example, to decompose a 5 × 5 boxcar operator into the sequential convolution of a 3 × 3 smoothing kernel and a 3 × 3 differentiating kernel. A well-known example of a compound gradient operator is the first derivative of Gaussian (FDOG) operator, in which Gaussian-shaped smoothing is followed by differentiation (9). The FDOG continuous domain horizontal impulse response is

H_R(j, k) = -\frac{\partial [ g(x, s)\, g(y, t) ]}{\partial x}    (15.2-18a)

which upon differentiation yields

H_R(j, k) = \frac{-x\, g(x, s)\, g(y, t)}{s^2}    (15.2-18b)

Figure 15.2-7 presents an example of the FDOG gradient. All of the differential edge enhancement operators presented previously in this subsection have been derived heuristically. Canny (9) has taken an analytic approach to the design of such operators. Canny's development is based on a one-dimensional continuous domain model of a step edge of amplitude hE plus additive white Gaussian noise with standard deviation σn. It is assumed that edge detection is performed by convolving a one-dimensional continuous domain noisy edge signal f(x) with an antisymmetric impulse response function h(x), which is of zero amplitude outside the range [–W, W]. An edge is marked at the local maximum of the convolved gradient f(x) * h(x). The Canny operator impulse response h(x) is chosen to satisfy the following three criteria.

1. Good detection. The amplitude signal-to-noise ratio (SNR) of the gradient is maximized to obtain a low probability of failure to mark real edge points and a low probability of falsely marking nonedge points. The SNR for the model is

SNR = \frac{h_E\, S(h)}{\sigma_n}    (15.2-19a)

with

S(h) = \frac{\int_{-W}^{0} h(x)\, dx}{\left[ \int_{-W}^{W} [h(x)]^2\, dx \right]^{1/2}}    (15.2-19b)


2. Good localization. Edge points marked by the operator should be as close to the center of the edge as possible. The localization factor is defined as

LOC = \frac{h_E\, L(h)}{\sigma_n}    (15.2-20a)

with

L(h) = \frac{h'(0)}{\left[ \int_{-W}^{W} [h'(x)]^2\, dx \right]^{1/2}}    (15.2-20b)

where h′ ( x ) is the derivative of h ( x ) . 3. Single response. There should be only a single response to a true edge. The distance between peaks of the gradient when only noise is present, denoted as xm, is set to some fraction k of the operator width factor W. Thus x m = kW

(15.2-21)

Canny has combined these three criteria by maximizing the product S ( h )L ( h ) subject to the constraint of Eq. 15.2-21. Because of the complexity of the formulation, no analytic solution has been found, but a variational approach has been developed. Figure 15.2-8 contains plots of the Canny impulse response functions in terms of xm.
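The detection and localization factors of Eqs. 15.2-19b and 15.2-20b can be evaluated numerically for any candidate impulse response. The sketch below (Python/NumPy, an added illustration) does so for an FDOG-shaped h(x); the width W, spread s, and sampling density are assumed parameters, and magnitudes are taken so that both factors come out positive.

    import numpy as np

    def canny_factors(h, x):
        """S(h) of Eq. 15.2-19b and L(h) of Eq. 15.2-20b for a sampled
        impulse response h on the grid x, assumed zero outside [-W, W]."""
        dx = x[1] - x[0]
        left = x <= 0.0
        S = abs(np.trapz(h[left], x[left])) / np.sqrt(np.trapz(h ** 2, x))
        hprime = np.gradient(h, dx)
        h0 = hprime[np.argmin(np.abs(x))]          # derivative at x = 0
        L = abs(h0) / np.sqrt(np.trapz(hprime ** 2, x))
        return S, L

    # Example: FDOG-shaped response, h(x) = -x exp(-x^2 / (2 s^2)), W = 3s.
    s, W = 1.0, 3.0
    x = np.linspace(-W, W, 601)
    h = -x * np.exp(-x ** 2 / (2.0 * s ** 2))
    print(canny_factors(h, x))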

FIGURE 15.2-8. Comparison of Canny and first derivative of Gaussian impulse response functions.


As noted from the figure, for low values of xm, the Canny function resembles a boxcar function, while for xm large, the Canny function is closely approximated by a FDOG impulse response function. Discrete domain versions of the large operators defined in the continuous domain can be obtained by sampling their continuous impulse response functions over some W × W window. The window size should be chosen sufficiently large that truncation of the impulse response function does not cause high-frequency artifacts. Demigny and Kamie (10) have developed a discrete version of Canny’s criteria, which lead to the computation of discrete domain edge detector impulse response arrays. 15.2.2. Edge Template Gradient Generation With the orthogonal differential edge enhancement techniques discussed previously, edge gradients are computed in two orthogonal directions, usually along rows and columns, and then the edge direction is inferred by computing the vector sum of the gradients. Another approach is to compute gradients in a large number of directions by convolution of an image with a set of template gradient impulse response arrays. The edge template gradient is defined as
G(j, k) = \mathrm{MAX}\left\{ |G_1(j, k)|, \ldots, |G_m(j, k)|, \ldots, |G_M(j, k)| \right\}    (15.2-22a)

where
G_m(j, k) = F(j, k) * H_m(j, k)    (15.2-22b)

is the gradient in the mth equispaced direction obtained by convolving an image with a gradient impulse response array Hm ( j, k ). The edge angle is determined by the direction of the largest gradient. Figure 15.2-9 defines eight gain-normalized compass gradient impulse response arrays suggested by Prewitt (1, p. 111). The compass names indicate the slope direction of maximum response. Kirsch (11) has proposed a directional gradient defined by
G(j, k) = \mathrm{MAX}_{i=0,\ldots,7}\left\{ |5 S_i - 3 T_i| \right\}    (15.2-23a)

where
S_i = A_i + A_{i+1} + A_{i+2}    (15.2-23b)

T_i = A_{i+3} + A_{i+4} + A_{i+5} + A_{i+6} + A_{i+7}    (15.2-23c)


FIGURE 15.2-9. Template gradient 3 × 3 impulse response arrays.

The subscripts of A i are evaluated modulo 8. It is possible to compute the Kirsch gradient by convolution as in Eq. 15.2-22b. Figure 15.2-9 specifies the gain-normalized Kirsch operator impulse response arrays. This figure also defines two other sets of gain-normalized impulse response arrays proposed by Robinson (12), called the Robinson three-level operator and the Robinson five-level operator, which are derived from the Prewitt and Sobel operators, respectively. Figure 15.2-10 provides a comparison of the edge gradients of the peppers image for the four 3 × 3 template gradient operators.
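Equation 15.2-22 can be exercised by correlating an image with a set of rotated template masks and taking the pointwise maximum. The sketch below (Python/SciPy, an added illustration) generates eight 45° orientations by rotating the outer ring of a single 3 × 3 base mask; the base mask values shown are illustrative Prewitt-type weights, not necessarily those of Figure 15.2-9.

    import numpy as np
    from scipy.ndimage import correlate

    def ring_rotate(mask):
        """Rotate the outer ring of a 3 x 3 mask by one 45-degree step."""
        ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
        out = mask.copy()
        vals = [mask[i] for i in ring]
        vals = vals[-1:] + vals[:-1]
        for pos, v in zip(ring, vals):
            out[pos] = v
        return out

    def template_gradient(F, base_mask):
        """Edge template gradient of Eq. 15.2-22: maximum response over
        the eight 45-degree rotations of base_mask."""
        F = np.asarray(F, dtype=float)
        masks, m = [], base_mask.astype(float)
        for _ in range(8):
            masks.append(m)
            m = ring_rotate(m)
        responses = np.stack([correlate(F, mk, mode="nearest") for mk in masks])
        G = responses.max(axis=0)               # Eq. 15.2-22a
        direction = responses.argmax(axis=0)    # index of the strongest template
        return G, direction

    # Illustrative gain-normalized east-oriented base mask.
    east = np.array([[1.0, 0.0, -1.0],
                     [1.0, 0.0, -1.0],
                     [1.0, 0.0, -1.0]]) / 3.0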


FIGURE 15.2-10. 3 × 3 template gradients of the peppers_mon image: (a) Prewitt compass gradient; (b) Kirsch; (c) Robinson three-level; (d) Robinson five-level.

Nevatia and Babu (13) have developed an edge detection technique in which the gain-normalized 5 × 5 masks defined in Figure 15.2-11 are utilized to detect edges in 30° increments. Figure 15.2-12 shows the template gradients for the peppers image. Larger template masks will provide both a finer quantization of the edge orientation angle and a greater noise immunity, but the computational requirements increase. Paplinski (14) has developed a design procedure for n-directional template masks of arbitrary size. 15.2.3. Threshold Selection After the edge gradient is formed for the differential edge detection methods, the gradient is compared to a threshold to determine if an edge exists. The threshold value determines the sensitivity of the edge detector. For noise-free images, the


FIGURE 15.2-11. Nevatia–Babu template gradient impulse response arrays.


FIGURE 15.2-12. Nevatia–Babu gradient of the peppers_mon image.

threshold can be chosen such that all amplitude discontinuities of a minimum contrast level are detected as edges, and all others are called nonedges. With noisy images, threshold selection becomes a trade-off between missing valid edges and creating noise-induced false edges. Edge detection can be regarded as a hypothesis-testing problem to determine if an image region contains an edge or contains no edge (15). Let P(edge) and P(noedge) denote the a priori probabilities of these events. Then the edge detection process can be characterized by the probability of correct edge detection,

P_D = \int_{t}^{\infty} p(G \mid \text{edge})\, dG    (15.2-24a)

and the probability of false detection,

P_F = \int_{t}^{\infty} p(G \mid \text{no-edge})\, dG    (15.2-24b)

where t is the edge detection threshold and p(G|edge) and p(G|no-edge) are the conditional probability densities of the edge gradient G ( j, k ). Figure 15.2-13 is a sketch of typical edge gradient conditional densities. The probability of edge misclassification error can be expressed as
P_E = (1 - P_D)\, P(\text{edge}) + P_F\, P(\text{no-edge})    (15.2-25)


FIGURE 15.2-13. Typical edge gradient conditional probability densities.

This error will be minimum if the threshold is chosen such that an edge is deemed present when

\frac{p(G \mid \text{edge})}{p(G \mid \text{no-edge})} \ge \frac{P(\text{no-edge})}{P(\text{edge})}    (15.2-26)

and the no-edge hypothesis is accepted otherwise. Equation 15.2-26 defines the well-known maximum likelihood ratio test associated with the Bayes minimum error decision rule of classical decision theory (16). Another common decision strategy, called the Neyman–Pearson test, is to choose the threshold t to minimize P F for a fixed acceptable P D (16). Application of a statistical decision rule to determine the threshold value requires knowledge of the a priori edge probabilities and the conditional densities of the edge gradient. The a priori probabilities can be estimated from images of the class under analysis. Alternatively, the a priori probability ratio can be regarded as a sensitivity control factor for the edge detector. The conditional densities can be determined, in principle, for a statistical model of an ideal edge plus noise. Abdou (5) has derived these densities for 2 × 2 and 3 × 3 edge detection operators for the case of a ramp edge of width w = 1 and additive Gaussian noise. Henstock and Chelberg (17) have used gamma densities as models of the conditional probability densities. There are two difficulties associated with the statistical approach of determining the optimum edge detector threshold: reliability of the stochastic edge model and analytic difficulties in deriving the edge gradient conditional densities. Another approach, developed by Abdou and Pratt (5,15), which is based on pattern recognition techniques, avoids the difficulties of the statistical method. The pattern recognition method involves creation of a large number of prototype noisy image regions, some of which contain edges and some without edges. These prototypes are then used as a training set to find the threshold that minimizes the classification error. Details of the design procedure are found in Reference 5. Table 15.2-1

TABLE 15.2-1. Threshold Levels and Associated Edge Detection Probabilities for 3 × 3 Edge Detectors as Determined by the Abdou and Pratt Pattern Recognition Design Procedure

[The table lists, for a vertical edge and for a diagonal edge at SNR = 1 and SNR = 10, the normalized threshold tN, the probability of correct detection PD, and the probability of false detection PF for the following operators: Roberts orthogonal gradient; Prewitt orthogonal gradient; Sobel orthogonal gradient; Prewitt compass template gradient; Kirsch template gradient; Robinson three-level template gradient; Robinson five-level template gradient.]


FIGURE 15.2-14. Threshold sensitivity of the Sobel and first derivative of Gaussian edge detectors for the peppers_mon image: (a) Sobel, t = 0.06; (b) FDOG, t = 0.08; (c) Sobel, t = 0.08; (d) FDOG, t = 0.10; (e) Sobel, t = 0.10; (f) FDOG, t = 0.12.


provides a tabulation of the optimum threshold for several 2 × 2 and 3 × 3 edge detectors for an experimental design with an evaluation set of 250 prototypes not in the training set (15). The table also lists the probability of correct and false edge detection as defined by Eq. 15.2-24 for theoretically derived gradient conditional densities. In the table, the threshold is normalized such that tN = t/GM, where GM is the maximum amplitude of the gradient in the absence of noise. The power signal-to-noise ratio is defined as SNR = (h/σn)², where h is the edge height and σn is the noise standard deviation. In most of the cases of Table 15.2-1, the optimum threshold results in approximately equal error probabilities (i.e., PF = 1 – PD). This is the same result that would be obtained by the Bayes design procedure when edges and nonedges are equally probable. The tests associated with Table 15.2-1 were conducted with relatively low signal-to-noise ratio images. Section 15.5 provides examples of such images. For high signal-to-noise ratio images, the optimum threshold is much lower. As a rule of thumb, under the condition that PF = 1 – PD, the edge detection threshold can be scaled linearly with signal-to-noise ratio. Hence, for an image with SNR = 100, the threshold is about 10% of the peak gradient value. Figure 15.2-14 shows the effect of varying the first derivative edge detector threshold for the 3 × 3 Sobel and the 11 × 11 FDOG edge detectors for the peppers image, which is a relatively high signal-to-noise ratio image. For both edge detectors, variation of the threshold provides a trade-off between delineation of strong edges and definition of weak edges.

15.2.4. Morphological Post Processing

It is possible to improve edge delineation of first-derivative edge detectors by applying morphological operations on their edge maps. Figure 15.2-15 provides examples for the 3 × 3 Sobel and 11 × 11 FDOG edge detectors. In the Sobel example, the threshold is lowered slightly to improve the detection of weak edges. Then the morphological majority black operation is performed on the edge map to eliminate noise-induced edges. This is followed by the thinning operation to thin the edges to minimally connected lines. In the FDOG example, the majority black noise smoothing step is not necessary.

15.3. SECOND-ORDER DERIVATIVE EDGE DETECTION

Second-order derivative edge detection techniques employ some form of spatial second-order differentiation to accentuate edges. An edge is marked if a significant spatial change occurs in the second derivative. Two types of second-order derivative methods are considered: Laplacian and directed second derivative.

15.3.1. Laplacian Generation

The edge Laplacian of an image function F(x, y) in the continuous domain is defined as


FIGURE 15.2-15. Morphological thinning of edge maps for the peppers_mon image: (a) Sobel, t = 0.07; (b) Sobel majority black; (c) Sobel thinned; (d) FDOG, t = 0.11; (e) FDOG thinned.


G(x, y) = -\nabla^2 \{ F(x, y) \}    (15.3-1a)

where, from Eq. 1.2-17, the Laplacian is

\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}    (15.3-1b)

The Laplacian G ( x, y ) is zero if F ( x, y ) is constant or changing linearly in amplitude. If the rate of change of F ( x, y ) is greater than linear, G ( x, y ) exhibits a sign change at the point of inflection of F ( x, y ). The zero crossing of G ( x, y ) indicates the presence of an edge. The negative sign in the definition of Eq. 15.3-la is present so that the zero crossing of G ( x, y ) has a positive slope for an edge whose amplitude increases from left to right or bottom to top in an image. Torre and Poggio (18) have investigated the mathematical properties of the Laplacian of an image function. They have found that if F ( x, y ) meets certain smoothness constraints, the zero crossings of G ( x, y ) are closed curves. In the discrete domain, the simplest approximation to the continuous Laplacian is to compute the difference of slopes along each axis:
G ( j , k ) = [ F ( j, k ) – F ( j, k – 1 ) ] – [ F ( j, k + 1 ) – F ( j, k ) ] + [ F ( j, k ) – F ( j + 1, k ) ] – [ F ( j – 1, k ) – F ( j, k ) ]

(15.3-2)

This four-neighbor Laplacian (1, p. 111) can be generated by the convolution operation
G(j, k) = F(j, k) * H(j, k)    (15.3-3)

with
H = \begin{bmatrix} 0 & -1 & 0 \\ 0 & 2 & 0 \\ 0 & -1 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 \\ -1 & 2 & -1 \\ 0 & 0 & 0 \end{bmatrix}    (15.3-4a)

or
H = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}    (15.3-4b)

where the two arrays of Eq. 15.3-4a correspond to the second derivatives along image rows and columns, respectively, as in the continuous Laplacian of Eq. 15.3-lb. The four-neighbor Laplacian is often normalized to provide unit-gain averages of the positive weighted and negative weighted pixels in the 3 × 3 pixel neighborhood. The gain-normalized four-neighbor Laplacian impulse response is defined by


H = \frac{1}{4} \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}    (15.3-5)

Prewitt (1, p. 111) has suggested an eight-neighbor Laplacian defined by the gainnormalized impulse response array
H = \frac{1}{8} \begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}    (15.3-6)

This array is not separable into a sum of second derivatives, as in Eq. 15.3-4a. A separable eight-neighbor Laplacian can be obtained by the construction
H = \begin{bmatrix} -1 & -1 & -1 \\ 2 & 2 & 2 \\ -1 & -1 & -1 \end{bmatrix} + \begin{bmatrix} -1 & 2 & -1 \\ -1 & 2 & -1 \\ -1 & 2 & -1 \end{bmatrix}    (15.3-7)

in which the difference of slopes is averaged over three rows and three columns. The gain-normalized version of the separable eight-neighbor Laplacian is given by
H = \frac{1}{8} \begin{bmatrix} -2 & 1 & -2 \\ 1 & 4 & 1 \\ -2 & 1 & -2 \end{bmatrix}    (15.3-8)
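The three 3 × 3 Laplacians of Eqs. 15.3-5, 15.3-6, and 15.3-8 can be applied directly as convolution masks; a brief Python/SciPy sketch (added for illustration) follows.

    import numpy as np
    from scipy.signal import convolve2d

    # Gain-normalized four-neighbor, eight-neighbor, and separable
    # eight-neighbor Laplacian impulse response arrays (Eqs. 15.3-5, -6, -8).
    LAPLACIANS = {
        "four":      np.array([[ 0, -1,  0], [-1, 4, -1], [ 0, -1,  0]]) / 4.0,
        "eight":     np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]) / 8.0,
        "separable": np.array([[-2,  1, -2], [ 1, 4,  1], [-2,  1, -2]]) / 8.0,
    }

    def laplacian(F, kind="separable"):
        """Laplacian response G(j,k) = F(j,k) convolved with H(j,k), Eq. 15.3-3.
        The masks are symmetric, so convolution and correlation coincide."""
        return convolve2d(np.asarray(F, dtype=float), LAPLACIANS[kind],
                          mode="same", boundary="symm")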

It is instructive to examine the Laplacian response to the edge models of Figure 15.1-3. As an example, the separable eight-neighbor Laplacian corresponding to the center row of the vertical step edge model is
0   -3h/8   3h/8   0

where h = b – a is the edge height. The Laplacian response of the vertical ramp edge model is
0   -3h/16   0   3h/16   0

For the vertical ramp edge model, the edge lies at the zero crossing pixel between the negative- and positive-value Laplacian responses. In the case of the step edge, the zero crossing lies midway between the neighboring negative and positive response pixels; the edge is correctly marked at the pixel to the right of the zero


crossing. The Laplacian response for a single-transition-pixel diagonal ramp edge model is
0   -h/8   -h/8   0   h/8   h/8   0

and the edge lies at the zero crossing at the center pixel. The Laplacian response for the smoothed transition diagonal ramp edge model of Figure 15.1-3 is
0   -h/16   -h/8   -h/16   h/16   h/8   h/16   0

In this example, the zero crossing does not occur at a pixel location. The edge should be marked at the pixel to the right of the zero crossing. Figure 15.3-1 shows the Laplacian response for the two ramp corner edge models of Figure 15.1-3. The edge transition pixels are indicated by line segments in the figure. A zero crossing exists at the edge corner for the smoothed transition edge model, but not for the single-pixel transition model. The zero crossings adjacent to the edge corner do not occur at pixel samples for either of the edge models. From these examples, it can be

FIGURE 15.3-1. Separable eight-neighbor Laplacian responses for ramp corner models; all values should be scaled by h/8.


concluded that zero crossings of the Laplacian do not always occur at pixel samples. But for these edge models, marking an edge at a pixel with a positive response that has a neighbor with a negative response identifies the edge correctly. Figure 15.3-2 shows the Laplacian responses of the peppers image for the three types of 3 × 3 Laplacians. In these photographs, negative values are depicted as dimmer than midgray and positive values are brighter than midgray. Marr and Hildreth (19) have proposed the Laplacian of Gaussian (LOG) edge detection operator in which Gaussian-shaped smoothing is performed prior to application of the Laplacian. The continuous domain LOG gradient is
G(x, y) = -\nabla^2 \{ F(x, y) * H_S(x, y) \}    (15.3-9a)

where

H_S(x, y) = g(x, s)\, g(y, s)    (15.3-9b)

FIGURE 15.3-2. Laplacian responses of the peppers_mon image: (a) four-neighbor; (b) eight-neighbor; (c) separable eight-neighbor; (d) 11 × 11 Laplacian of Gaussian.


is the impulse response of the Gaussian smoothing function as defined by Eq. 15.2-13. As a result of the linearity of the second derivative operation and of the linearity of convolution, it is possible to express the LOG response as
G ( j, k ) = F ( j, k ) * H ( j, k )

(15.3-10a)

where
H(x, y) = -\nabla^2 \{ g(x, s)\, g(y, s) \}    (15.3-10b)

Upon differentiation, one obtains
2 2 1-  H ( x, y ) = -------  1 – x + y  exp ---------------4 2 πs  2s 

 x2 + y2   – ----------------   2s2 

(15.3-11)

Figure 15.3-3 is a cross-sectional view of the LOG continuous domain impulse response. In the literature it is often called the Mexican hat filter. It can be shown (20,21) that the LOG impulse response can be expressed as
H(x, y) = \frac{1}{s^2} \left( 1 - \frac{x^2}{s^2} \right) g(x, s)\, g(y, s) + \frac{1}{s^2} \left( 1 - \frac{y^2}{s^2} \right) g(x, s)\, g(y, s)    (15.3-12)

Consequently, the convolution operation can be computed separably along rows and columns of an image. It is possible to approximate the LOG impulse response closely by a difference of Gaussians (DOG) operator. The resultant impulse response is
H ( x, y ) = g ( x, s 1 )g ( y, s 1 ) – g ( x, s 2 )g ( y, s 2 )

(15.3-13)

FIGURE 15.3-3. Cross section of continuous domain Laplacian of Gaussian impulse response.


where s1 < s2. Marr and Hildreth (19) have found that the ratio s2/s1 = 1.6 provides a good approximation to the LOG. A discrete domain version of the LOG operator can be obtained by sampling the continuous domain impulse response function of Eq. 15.3-11 over a W × W window. To avoid deleterious truncation effects, the size of the array should be set such that W = 3c, or greater, where c = 2√2 s is the width of the positive center lobe of the LOG function (21). Figure 15.3-2d shows the LOG response of the peppers image for an 11 × 11 operator.

15.3.2. Laplacian Zero-Crossing Detection

From the discrete domain Laplacian response examples of the preceding section, it has been shown that zero crossings do not always lie at pixel sample points. In fact, for real images subject to luminance fluctuations that contain ramp edges of varying slope, zero-valued Laplacian response pixels are unlikely. A simple approach to Laplacian zero-crossing detection in discrete domain images is to form the maximum of all positive Laplacian responses and to form the minimum of all negative-value responses in a 3 × 3 window. If the magnitude of the difference between the maxima and the minima exceeds a threshold, an edge is judged present.
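A sketch of the sampled LOG operator of Eq. 15.3-11 and of the simple 3 × 3 zero-crossing test just described (Python/SciPy, added for illustration; the spread s, window size, and threshold are assumed parameters):

    import numpy as np
    from scipy.ndimage import convolve, maximum_filter, minimum_filter

    def log_kernel(s, size):
        """Sampled Laplacian of Gaussian impulse response, Eq. 15.3-11."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
        r2 = x ** 2 + y ** 2
        return (1.0 / (np.pi * s ** 4)) * (1.0 - r2 / (2.0 * s ** 2)) \
               * np.exp(-r2 / (2.0 * s ** 2))

    def log_zero_crossings(F, s=2.0, size=11, threshold=0.0):
        """Mark a pixel as an edge if its 3 x 3 neighborhood contains both
        positive and negative LOG responses and their spread exceeds the
        threshold (the zero-crossing test of Section 15.3.2)."""
        G = convolve(np.asarray(F, dtype=float), log_kernel(s, size),
                     mode="nearest")
        local_max = maximum_filter(G, size=3)
        local_min = minimum_filter(G, size=3)
        return (local_max > 0.0) & (local_min < 0.0) \
               & ((local_max - local_min) > threshold)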

FIGURE 15.3-4. Laplacian zero-crossing patterns.


Huertas and Medioni (21) have developed a systematic method for classifying
3 × 3 Laplacian response patterns in order to determine edge direction. Figure

15.3-4 illustrates a somewhat simpler algorithm. In the figure, plus signs denote positive-value Laplacian responses, and negative signs denote negative Laplacian responses. The algorithm can be implemented efficiently using morphological image processing techniques. 15.3.3. Directed Second-Order Derivative Generation Laplacian edge detection techniques employ rotationally invariant second-order differentiation to determine the existence of an edge. The direction of the edge can be ascertained during the zero-crossing detection process. An alternative approach is first to estimate the edge direction and then compute the one-dimensional secondorder derivative along the edge direction. A zero crossing of the second-order derivative specifies an edge. The directed second-order derivative of a continuous domain image F ( x, y ) along a line at an angle θ with respect to the horizontal axis is given by
F''(x, y) = \frac{\partial^2 F(x, y)}{\partial x^2} \cos^2\theta + 2\, \frac{\partial^2 F(x, y)}{\partial x\, \partial y} \cos\theta \sin\theta + \frac{\partial^2 F(x, y)}{\partial y^2} \sin^2\theta    (15.3-14)

It should be noted that unlike the Laplacian, the directed second-order derivative is a nonlinear operator. Convolving a smoothing function with F ( x, y ) prior to differentiation is not equivalent to convolving the directed second derivative of F ( x, y ) with the smoothing function. A key factor in the utilization of the directed second-order derivative edge detection method is the ability to determine its suspected edge direction accurately. One approach is to employ some first-order derivative edge detection method to estimate the edge direction, and then compute a discrete approximation to Eq. 15.3-14. Another approach, proposed by Haralick (22), involves approximating F ( x, y ) by a two-dimensional polynomial, from which the directed second-order derivative can be determined analytically. As an illustration of Haralick's approximation method, called facet modeling, let the continuous image function F ( x, y ) be approximated by a two-dimensional quadratic polynomial
\hat{F}(r, c) = k_1 + k_2 r + k_3 c + k_4 r^2 + k_5 rc + k_6 c^2 + k_7 r^2 c + k_8 r c^2 + k_9 r^2 c^2    (15.3-15)

about a candidate edge point ( j, k ) in the discrete image F ( j, k ), where the k n are weighting factors to be determined from the discrete image data. In this notation, the indices – ( W – 1 ) ⁄ 2 ≤ r, c ≤ ( W – 1 ) ⁄ 2 are treated as continuous variables in the row (y-coordinate) and column (x-coordinate) directions of the discrete image, but the discrete image is, of course, measurable only at integer values of r and c. From this model, the estimated edge angle is


\theta = \arctan\left\{ \frac{k_2}{k_3} \right\}    (15.3-16)

In principle, any polynomial expansion can be used in the approximation. The expansion of Eq. 15.3-15 was chosen because it can be expressed in terms of a set of orthogonal polynomials. This greatly simplifies the computational task of determining the weighting factors. The quadratic expansion of Eq. 15.3-15 can be rewritten as
\hat{F}(r, c) = \sum_{n=1}^{N} a_n P_n(r, c)    (15.3-17)

where Pn ( r, c ) denotes a set of discrete orthogonal polynomials and the a n are weighting coefficients. Haralick (22) has used the following set of 3 × 3 Chebyshev orthogonal polynomials:
P_1(r, c) = 1    (15.3-18a)
P_2(r, c) = r    (15.3-18b)
P_3(r, c) = c    (15.3-18c)
P_4(r, c) = r^2 - 2/3    (15.3-18d)
P_5(r, c) = rc    (15.3-18e)
P_6(r, c) = c^2 - 2/3    (15.3-18f)
P_7(r, c) = c (r^2 - 2/3)    (15.3-18g)
P_8(r, c) = r (c^2 - 2/3)    (15.3-18h)
P_9(r, c) = (r^2 - 2/3)(c^2 - 2/3)    (15.3-18i)

defined over the (r, c) index set {-1, 0, 1}. To maintain notational consistency with the gradient techniques discussed previously, r and c are indexed in accordance with the (x, y) Cartesian coordinate system (i.e., r is incremented positively up rows and c is incremented positively left to right across columns). The polynomial coefficients kn of Eq. 15.3-15 are related to the Chebyshev weighting coefficients by


k_1 = a_1 - \tfrac{2}{3} a_4 - \tfrac{2}{3} a_6 + \tfrac{4}{9} a_9    (15.3-19a)
k_2 = a_2 - \tfrac{2}{3} a_8    (15.3-19b)
k_3 = a_3 - \tfrac{2}{3} a_7    (15.3-19c)
k_4 = a_4 - \tfrac{2}{3} a_9    (15.3-19d)
k_5 = a_5    (15.3-19e)
k_6 = a_6 - \tfrac{2}{3} a_9    (15.3-19f)
k_7 = a_7    (15.3-19g)
k_8 = a_8    (15.3-19h)
k_9 = a_9    (15.3-19i)

The optimum values of the set of weighting coefficients an that minimize the mean-square error between the image data F(r, c) and its approximation F̂(r, c) are found to be (22)

a_n = \frac{\sum_r \sum_c P_n(r, c)\, F(r, c)}{\sum_r \sum_c [P_n(r, c)]^2}    (15.3-20)

As a consequence of the linear structure of this equation, the weighting coefficients An ( j, k ) = a n at each point in the image F ( j, k ) can be computed by convolution of the image with a set of impulse response arrays. Hence
A_n(j, k) = F(j, k) * H_n(j, k)    (15.3-21a)

where
H_n(j, k) = \frac{P_n(-j, -k)}{\sum_r \sum_c [P_n(r, c)]^2}    (15.3-21b)

Figure 15.3-5 contains the nine impulse response arrays corresponding to the 3 × 3 Chebyshev polynomials. The arrays H2 and H3, which are used to determine the edge angle, are seen from Figure 15.3-5 to be the Prewitt column and row operators, respectively. The arrays H4 and H6 are second derivative operators along columns


FIGURE 15.3-5. Chebyshev polynomial 3 × 3 impulse response arrays.

and rows, respectively, as noted in Eq. 15.3-7. Figure 15.3-6 shows the nine weighting coefficient responses for the peppers image. The second derivative along the line normal to the edge slope can be expressed explicitly by performing second-order differentiation on Eq. 15.3-15. The result is
\hat{F}''(r, c) = 2 k_4 \sin^2\theta + 2 k_5 \sin\theta \cos\theta + 2 k_6 \cos^2\theta + (4 k_7 \sin\theta \cos\theta + 2 k_8 \cos^2\theta)\, r + (2 k_7 \sin^2\theta + 4 k_8 \sin\theta \cos\theta)\, c + (2 k_9 \cos^2\theta)\, r^2 + (8 k_9 \sin\theta \cos\theta)\, rc + (2 k_9 \sin^2\theta)\, c^2    (15.3-22)

This second derivative need only be evaluated on a line in the suspected edge direction. With the substitutions r = ρ sin θ and c = ρ cos θ, the directed second-order derivative can be expressed as
\hat{F}''(\rho) = 2 (k_4 \sin^2\theta + k_5 \sin\theta \cos\theta + k_6 \cos^2\theta) + 6 \sin\theta \cos\theta\, (k_7 \sin\theta + k_8 \cos\theta)\, \rho + 12 (k_9 \sin^2\theta \cos^2\theta)\, \rho^2    (15.3-23)

The next step is to detect zero crossings of F̂''(ρ) in a unit pixel range –0.5 ≤ ρ ≤ 0.5 of the suspected edge. This can be accomplished by computing the real root (if it exists) within the range of the quadratic relation of Eq. 15.3-23.
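The facet computation can be carried out at a single candidate pixel by forming the nine Chebyshev coefficients of Eq. 15.3-20, converting them to the kn of Eq. 15.3-19, and testing the quadratic of Eq. 15.3-23 for a real root in the unit pixel range. A compact Python/NumPy sketch (an added illustration following the equations as reconstructed above):

    import numpy as np

    # 3 x 3 Chebyshev polynomial values P_n(r, c) over r, c in {-1, 0, 1}
    # (r indexed positively up rows, c left to right), Eq. 15.3-18.
    R, C = np.meshgrid([1, 0, -1], [-1, 0, 1], indexing="ij")
    R = R.astype(float); C = C.astype(float)
    P = [np.ones((3, 3)), R, C, R**2 - 2/3, R*C, C**2 - 2/3,
         C*(R**2 - 2/3), R*(C**2 - 2/3), (R**2 - 2/3)*(C**2 - 2/3)]

    def facet_edge(block):
        """block: 3 x 3 float neighborhood. Returns (theta, rho) where rho is
        the zero crossing of Eq. 15.3-23 if it lies in [-0.5, 0.5], else None."""
        a = np.array([(Pn * block).sum() / (Pn ** 2).sum() for Pn in P])  # Eq. 15.3-20
        k = np.empty(10)                       # k[1]..k[9], Eq. 15.3-19
        k[1] = a[0] - 2/3*a[3] - 2/3*a[5] + 4/9*a[8]
        k[2] = a[1] - 2/3*a[7]
        k[3] = a[2] - 2/3*a[6]
        k[4] = a[3] - 2/3*a[8]; k[5] = a[4]; k[6] = a[5] - 2/3*a[8]
        k[7] = a[6]; k[8] = a[7]; k[9] = a[8]
        theta = np.arctan2(k[2], k[3])         # full-quadrant form of Eq. 15.3-16
        s, c = np.sin(theta), np.cos(theta)
        # F''(rho) = A + B*rho + D*rho^2, Eq. 15.3-23.
        A = 2*(k[4]*s**2 + k[5]*s*c + k[6]*c**2)
        B = 6*s*c*(k[7]*s + k[8]*c)
        D = 12*k[9]*s**2*c**2
        if abs(D) > 1e-12:
            roots = np.roots([D, B, A])
        elif abs(B) > 1e-12:
            roots = np.array([-A / B])
        else:
            roots = np.array([])
        roots = roots[np.isreal(roots)].real
        inside = roots[np.abs(roots) <= 0.5]
        return theta, (inside[0] if inside.size else None)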


FIGURE 15.3-6. 3 × 3 Chebyshev polynomial responses for the peppers_mon image: (a) Chebyshev 1; (b) Chebyshev 2; (c) Chebyshev 3; (d) Chebyshev 4; (e) Chebyshev 5; (f) Chebyshev 6.


FIGURE 15.3-6 (Continued). 3 × 3 Chebyshev polynomial responses for the peppers_mon image: (g) Chebyshev 7; (h) Chebyshev 8; (i) Chebyshev 9.

15.4. EDGE-FITTING EDGE DETECTION

Ideal edges may be viewed as one- or two-dimensional edges of the form sketched in Figure 15.1-1. Actual image data can then be matched against, or fitted to, the ideal edge models. If the fit is sufficiently accurate at a given image location, an edge is assumed to exist with the same parameters as those of the ideal edge model. In the one-dimensional edge-fitting case described in Figure 15.4-1, the image signal f(x) is fitted to a step function
a  s( x) =  a + h 

for x < x 0 for x ≥ x 0

(15.4-1a) (15.4-1b)


FIGURE 15.4-1. One- and two-dimensional edge fitting.

An edge is assumed present if the mean-square error

E = \int_{x_0 - L}^{x_0 + L} [f(x) - s(x)]^2\, dx    (15.4-2)

is below some threshold value. In the two-dimensional formulation, the ideal step edge is defined as
a  s(x) =   a + h

for x cos θ + y sin θ < ρ for x cos θ + y sin θ ≥ ρ

(15.4-3a) (15.4-3b)

where θ and ρ jointly specify the polar distance from the center of a circular test region to the normal point of the edge. The edge-fitting error is
E = \iint [F(x, y) - S(x, y)]^2\, dx\, dy    (15.4-4)

where the integration is over the circle in Figure 15.4-1.
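In the one-dimensional case the fit has a simple discrete realization: for each trial breakpoint the best-fitting levels are the sample means on either side, and the breakpoint with the smallest residual of Eq. 15.4-2 is retained. A short Python/NumPy sketch (added for illustration; the error threshold is an assumed parameter):

    import numpy as np

    def fit_step_edge(f, error_threshold):
        """Fit the step model of Eq. 15.4-1 to a 1-D profile f and apply the
        mean-square-error test of Eq. 15.4-2 (discrete sum used in place of
        the integral). Returns (x0, a, h) if an edge is declared, else None."""
        f = np.asarray(f, dtype=float)
        best = None
        for x0 in range(1, f.size):            # candidate breakpoints
            a = f[:x0].mean()                  # level for x < x0
            b = f[x0:].mean()                  # level a + h for x >= x0
            s = np.where(np.arange(f.size) < x0, a, b)
            err = np.sum((f - s) ** 2)
            if best is None or err < best[0]:
                best = (err, x0, a, b - a)
        err, x0, a, h = best
        return (x0, a, h) if err < error_threshold else None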


Hueckel (23) has developed a procedure for two-dimensional edge fitting in which the pixels within the circle of Figure 15.4-1 are expanded in a set of two-dimensional basis functions by a Fourier series in polar coordinates. Let B_i(x, y) represent the basis functions. Then, the weighting coefficients for the expansions of the image and the ideal step edge become

f_i = \iint B_i(x, y)\, F(x, y)\, dx\, dy    (15.4-5a)

s_i = \iint B_i(x, y)\, S(x, y)\, dx\, dy    (15.4-5b)

In Hueckel's algorithm, the expansion is truncated to eight terms for computational economy and to provide some noise smoothing. Minimization of the mean-square error difference of Eq. 15.4-4 is equivalent to minimization of (f_i – s_i)² for all coefficients. Hueckel has performed this minimization, invoking some simplifying approximations, and has formulated a set of nonlinear equations expressing the estimated edge parameter set in terms of the expansion coefficients f_i. Nalwa and Binford (24) have proposed an edge-fitting scheme in which the edge angle is first estimated by a sequential least-squares fit within a 5 × 5 region. Then, the image data along the edge direction is fit to a hyperbolic tangent function

\tanh\rho = \frac{e^{\rho} - e^{-\rho}}{e^{\rho} + e^{-\rho}}    (15.4-6)

as shown in Figure 15.4-2. Edge-fitting methods require substantially more computation than do derivative edge detection methods. Their relative performance is considered in the following section.

FIGURE 15.4-2. Hyperbolic tangent edge model.


15.5. LUMINANCE EDGE DETECTOR PERFORMANCE Relatively few comprehensive studies of edge detector performance have been reported in the literature (15,25,26). A performance evaluation is difficult because of the large number of methods proposed, problems in determining the optimum parameters associated with each technique and the lack of definitive performance criteria. In developing performance criteria for an edge detector, it is wise to distinguish between mandatory and auxiliary information to be obtained from the detector. Obviously, it is essential to determine the pixel location of an edge. Other information of interest includes the height and slope angle of the edge as well as its spatial orientation. Another useful item is a confidence factor associated with the edge decision, for example, the closeness of fit between actual image data and an idealized model. Unfortunately, few edge detectors provide this full gamut of information. The next sections discuss several performance criteria. No attempt is made to provide a comprehensive comparison of edge detectors. 15.5.1. Edge Detection Probability The probability of correct edge detection PD and the probability of false edge detection PF, as specified by Eq. 15.2-24, are useful measures of edge detector performance. The trade-off between PD and PF can be expressed parametrically in terms of the detection threshold. Figure 15.5-1 presents analytically derived plots of PD versus PF for several differential operators for vertical and diagonal edges and a signal-to-noise ratio of 1.0 and 10.0 (13). From these curves it is apparent that the Sobel and Prewitt 3 × 3 operators are superior to the Roberts 2 × 2 operators. The Prewitt operator is better than the Sobel operator for a vertical edge. But for a diagonal edge, the Sobel operator is superior. In the case of template-matching operators, the Robinson three-level and five-level operators exhibit almost identical performance, which is superior to the Kirsch and Prewitt compass gradient operators. Finally, the Sobel and Prewitt differential operators perform slightly better than the Robinson three- and Robinson five-level operators. It has not been possible to apply this statistical approach to any of the larger operators because of analytic difficulties in evaluating the detection probabilities. 15.5.2. Edge Detection Orientation An important characteristic of an edge detector is its sensitivity to edge orientation. Abdou and Pratt (15) have analytically determined the gradient response of 3 × 3 template matching edge detectors and 2 × 2 and 3 × 3 orthogonal gradient edge detectors for square-root and magnitude combinations of the orthogonal gradients. Figure 15.5-2 shows plots of the edge gradient as a function of actual edge orientation for a unit-width ramp edge model. The figure clearly shows that magnitude combination of the orthogonal gradients is inferior to square-root combination.


FIGURE 15.5-1. Probability of detection versus probability of false detection for 2 × 2 and 3 × 3 operators.

Figure 15.5-3 is a plot of the detected edge angle as a function of the actual orientation of an edge. The Sobel operator provides the most linear response. Laplacian edge detectors are rotationally symmetric operators, and hence are invariant to edge orientation. The edge angle can be determined to within 45° increments during the 3 × 3 pixel zero-crossing detection process. 15.5.3. Edge Detection Localization Another important property of an edge detector is its ability to localize an edge. Abdou and Pratt (15) have analyzed the edge localization capability of several first derivative operators for unit width ramp edges. Figure 15.5-4 shows edge models in which the sampled continuous ramp edge is displaced from the center of the operator. Figure 15.5-5 shows plots of the gradient response as a function of edge


FIGURE 15.5-2. Edge gradient response as a function of edge orientation for 2 × 2 and 3 × 3 first derivative operators.

displacement distance for vertical and diagonal edges for 2 × 2 and 3 × 3 orthogonal gradient and 3 × 3 template matching edge detectors. All of the detectors, with the exception of the Kirsch operator, exhibit a desirable monotonically decreasing response as a function of edge displacement. If the edge detection threshold is set at one-half the edge height, or greater, an edge will be properly localized in a noisefree environment for all the operators, with the exception of the Kirsch operator, for which the threshold must be slightly higher. Figure 15.5-6 illustrates the gradient response of boxcar operators as a function of their size (5). A gradient response

FIGURE 15.5-3. Detected edge orientation as a function of actual edge orientation for 2 × 2 and 3 × 3 first derivative operators.


(a) 2 × 2 model

(b) 3 × 3 model

FIGURE 15.5-4. Edge models for edge localization analysis.

comparison of 7 × 7 orthogonal gradient operators is presented in Figure 15.5-7. For such large operators, the detection threshold must be set relatively high to prevent smeared edge markings. Setting a high threshold will, of course, cause low-amplitude edges to be missed. Ramp edges of extended width can cause difficulties in edge localization. For first-derivative edge detectors, edges are marked along the edge slope at all points for which the slope exceeds some critical value. Raising the threshold results in the missing of low-amplitude edges. Second derivative edge detection methods are often able to eliminate smeared ramp edge markings. In the case of a unit-width ramp edge, a zero crossing will occur only at the midpoint of the edge slope. Extended-width ramp edges will also exhibit a zero crossing at the ramp midpoint provided that the size of the Laplacian operator exceeds the slope width. Figure 15.5-8 illustrates Laplacian of Gaussian (LOG) examples (21). Berzins (27) has investigated the accuracy to which the LOG zero crossings locate a step edge. Figure 15.5-9 shows the LOG zero crossing in the vicinity of a corner step edge. A zero crossing occurs exactly at the corner point, but the zero-crossing curve deviates from the step edge adjacent to the corner point. The maximum deviation is about 0.3s, where s is the standard deviation of the Gaussian smoothing function.


FIGURE 15.5-5. Edge gradient response as a function of edge displacement distance for 2 × 2 and 3 × 3 first derivative operators.

FIGURE 15.5-6. Edge gradient response as a function of edge displacement distance for variable-size boxcar operators.


FIGURE 15.5-7. Edge gradient response as a function of edge displacement distance for several 7 × 7 orthogonal gradient operators.

15.5.4. Edge Detector Figure of Merit

There are three major types of error associated with determination of an edge: (1) missing valid edge points, (2) failure to localize edge points, and (3) classification of

FIGURE 15.5-8. Laplacian of Gaussian response of continuous domain for high- and low-slope ramp edges.


FIGURE 15.5-9. Locus of zero crossings in vicinity of a corner edge for a continuous Laplacian of Gaussian edge detector.

noise fluctuations as edge points. Figure 15.5-10 illustrates a typical edge segment in a discrete image, an ideal edge representation, and edge representations subject to various types of error. A common strategy in signal detection problems is to establish some bound on the probability of false detection resulting from noise and then attempt to maximize the probability of true signal detection. Extending this concept to edge detection simply involves setting the edge detection threshold at a level such that the probability of false detection resulting from noise alone does not exceed some desired value. The probability of true edge detection can readily be evaluated by a coincidence comparison of the edge maps of an ideal and an actual edge detector. The penalty for nonlocalized edges is somewhat more difficult to assess. Edge detectors that provide a smeared edge location should clearly be penalized; however, credit should be given to edge detectors whose edge locations are localized but biased by a small amount. Pratt (28) has introduced a figure of merit that balances these three types of error. The figure of merit is defined by

R = \frac{1}{I_N} \sum_{i=1}^{I_A} \frac{1}{1 + a d_i^2}    (15.5-1)

where I_N = MAX{I_I, I_A}, I_I and I_A represent the number of ideal and actual edge map points, a is a scaling constant, and d_i is the separation distance of the ith actual edge point normal to a line of ideal edge points. The rating factor is normalized so that R = 1 for a perfectly detected edge. The scaling factor may be adjusted to penalize edges that are localized but offset from the true position. Normalization by the maximum of the actual and ideal number of edge points ensures a penalty for smeared or fragmented edges. As an example of performance, if a = 1/9, the rating of a vertical detected edge offset by one pixel becomes R = 0.90, and a two-pixel offset gives a rating of R = 0.69. With a = 1/9, a smeared edge of three pixels width centered about the true vertical edge yields a rating of R = 0.93, and a five-pixel-wide smeared edge gives R = 0.84. A higher rating for a smeared edge than for an offset edge is reasonable because it is possible to thin the smeared edge by morphological postprocessing.

FIGURE 15.5-10. Indications of edge location.

The figure-of-merit criterion described above has been applied to the assessment of some of the edge detectors discussed previously, using a test image consisting of a 64 × 64 pixel array with a vertically oriented edge of variable contrast and slope placed at its center. Independent Gaussian noise of standard deviation σ_n has been added to the edge image. The signal-to-noise ratio is defined as SNR = (h / σ_n)^2, where h is the edge height scaled over the range 0.0 to 1.0. Because the purpose of the testing is to compare various edge detection methods, for fairness it is important that each edge detector be tuned to its best capabilities. Consequently, each edge detector has been permitted to train both on random noise fields without edges and


the actual test images before evaluation. For each edge detector, the threshold parameter has been set to achieve the maximum figure of merit subject to the maximum allowable false detection rate. Figure 15.5-11 shows plots of the figure of merit for a vertical ramp edge as a function of signal-to-noise ratio for several edge detectors (5). The figure of merit is also plotted in Figure 15.5-12 as a function of edge width. The figure of merit curves in the figures follow expected trends: low for wide and noisy edges, and high in the opposite case. Some of the edge detection methods are universally superior to others for all test images. As a check on the subjective validity of the edge location figure of merit, Figures 15.5-13 and 15.5-14 present the edge maps obtained for several high- and low-ranking edge detectors. These figures tend to corroborate the utility of the figure of merit. A high figure of merit generally corresponds to a well-located edge upon visual scrutiny, and vice versa.
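As a concrete illustration of Eq. 15.5-1, the sketch below computes the figure of merit from binary ideal and detected edge maps. It is a minimal interpretation, not the exact evaluation program used for the tests reported here: the distance from each detected point to the nearest ideal edge point is used for d_i, and the scaling constant defaults to the a = 1/9 value quoted in the text.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_figure_of_merit(ideal_map, actual_map, a=1.0 / 9.0):
    """Pratt figure of merit (Eq. 15.5-1) for binary edge maps.

    ideal_map, actual_map : 2-D boolean arrays marking edge pixels.
    a : scaling constant penalizing localization error.
    """
    ideal_map = np.asarray(ideal_map, dtype=bool)
    actual_map = np.asarray(actual_map, dtype=bool)

    n_ideal = ideal_map.sum()
    n_actual = actual_map.sum()
    if n_ideal == 0 or n_actual == 0:
        return 0.0

    # Distance from every pixel to the nearest ideal edge pixel.
    d = distance_transform_edt(~ideal_map)

    # Sum 1 / (1 + a d^2) over the detected edge pixels.
    r = np.sum(1.0 / (1.0 + a * d[actual_map] ** 2))

    # Normalize by the larger of the ideal and actual edge point counts.
    return r / max(n_ideal, n_actual)

# A vertical ideal edge and a detected edge offset by one pixel should
# rate near R = 0.90 for a = 1/9, as quoted in the text.
ideal = np.zeros((64, 64), dtype=bool)
ideal[:, 32] = True
offset = np.zeros_like(ideal)
offset[:, 33] = True
print(round(pratt_figure_of_merit(ideal, offset), 2))   # -> 0.9
```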

(a) 3 × 3 edge detectors

(b) Larger size edge detectors

FIGURE 15.5-11. Edge location figure of merit for a vertical ramp edge as a function of signal-to-noise ratio for h = 0.1 and w = 1.


[Plot: figure of merit (percent) versus edge width in pixels (7, 5, 3, 1) for the Sobel, Prewitt compass, and Roberts magnitude operators.]

FIGURE 15.5-12. Edge location figure of merit for a vertical ramp edge as a function of edge width for h = 0.1 and SNR = 100.

15.5.5. Subjective Assessment

In many, if not most, applications in which edge detection is performed to outline objects in a real scene, the only performance measure of ultimate importance is how well edge detector markings match the visual perception of object boundaries. A human observer is usually able to discern object boundaries in a scene quite accurately in a perceptual sense. However, most observers have difficulty recording their observations by tracing object boundaries. Nevertheless, in the evaluation of edge detectors, it is useful to assess them in terms of how well they produce outline drawings of a real scene that are meaningful to a human observer. The peppers image of Figure 15.2-2 has been used for the subjective assessment of edge detectors. The peppers in the image are visually distinguishable objects, but shadows and nonuniform lighting create a challenge for edge detectors, which by definition do not utilize higher-order perceptive intelligence. Figures 15.5-15 and 15.5-16 present edge maps of the peppers image for several edge detectors. The parameters of the various edge detectors have been chosen to produce the best visual delineation of objects. Heath et al. (26) have performed extensive visual testing of several complex edge detection algorithms, including the Canny and Nalwa–Binford methods, for a number of natural images. The judgment criterion was a numerical rating as to how well the edge map generated by an edge detector allows for easy, quick, and accurate recognition of objects within a test image.


(a ) Original

SNR = 100

(b ) Edge map, R = 100%

SNR = 10 (c ) Original

(d ) Edge map, R = 85.1%

SNR = 1 (e ) Original (f ) Edge Map, R = 24.2%

FIGURE 15.5-13. Edge location performance of Sobel edge detector as a function of signal-to-noise ratio, h = 0.1, w = 1, a = 1/9.


(a ) Original

(b) East compass, R = 66.1%

(c ) Roberts magnitude, R = 31.5%

(d ) Roberts square root, R = 37.0%

(e ) Sobel, R = 85.1%

(f ) Kirsch, R = 80.8%

FIGURE 15.5-14. Edge location performance of several edge detectors for SNR = 10, h = 0.1, w = 1, a = 1/9.


(a) 2 × 2 Roberts, t = 0.08

(b) 3 × 3 Prewitt, t = 0.08

(c) 3 × 3 Sobel, t = 0.09

(d ) 3 × 3 Robinson five-level

(e) 5 × 5 Nevatia−Babu, t = 0.05

(f ) 3 × 3 Laplacian

FIGURE 15.5-15. Edge maps of the peppers_mon image for several small edge detectors.


(a) 7 × 7 boxcar, t = 0.10

(b) 9 × 9 truncated pyramid, t = 0.10

(c) 11 × 11 Argyle, t = 0.05

(d ) 11 × 11 Macleod, t = 0.10

(e) 11 × 11 derivative of Gaussian, t = 0.11

(f ) 11 × 11 Laplacian of Gaussian

FIGURE 15.5-16. Edge maps of the peppers_mon image for several large edge detectors.


15.6. COLOR EDGE DETECTION

In Chapter 3 it was established that color images may be described quantitatively at each pixel by a set of three tristimulus values T_1, T_2, T_3, which are proportional to the amount of red, green, and blue primary lights required to match the pixel color. The luminance of the color is a weighted sum Y = a_1 T_1 + a_2 T_2 + a_3 T_3 of the tristimulus values, where the a_i are constants that depend on the spectral characteristics of the primaries. Several definitions of a color edge have been proposed (29). An edge in a color image can be said to exist if and only if the luminance field contains an edge. This definition ignores discontinuities in hue and saturation that occur in regions of constant luminance. Another definition is to judge a color edge present if an edge exists in any of its constituent tristimulus components. A third definition is based on forming the sum

G(j, k) = G_1(j, k) + G_2(j, k) + G_3(j, k)    (15.6-1)

of the gradients G_i(j, k) of the three tristimulus values or some linear or nonlinear color components. A color edge exists if the gradient exceeds a threshold. Still another definition is based on the vector sum gradient

G(j, k) = [ G_1(j, k)^2 + G_2(j, k)^2 + G_3(j, k)^2 ]^{1/2}    (15.6-2)

With the tricomponent definitions of color edges, results are dependent on the particular color coordinate system chosen for representation. Figure 15.6-1 is a color photograph of the peppers image and monochrome photographs of its red, green, and blue components. The YIQ and L*a*b* coordinates are shown in Figure 15.6-2. Edge maps of the individual RGB components are shown in Figure 15.6-3 for Sobel edge detection. This figure also shows the logical OR of the RGB edge maps plus the edge maps of the gradient sum and the vector sum. The RGB gradient vector sum edge map provides slightly better visual edge delineation than that provided by the gradient sum edge map; the logical OR edge map tends to produce thick edges and numerous isolated edge points. Sobel edge maps for the YIQ and the L*a*b* color components are presented in Figures 15.6-4 and 15.6-5. The YIQ gradient vector sum edge map gives the best visual edge delineation of the YIQ set, but it does not delineate edges quite as well as the RGB vector sum edge map. Edge detection results for the L*a*b* coordinate system are quite poor because the a* component is very noise sensitive.
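The vector sum gradient of Eq. 15.6-2 is straightforward to compute from per-channel gradients. The sketch below is a minimal illustration, assuming a Sobel orthogonal gradient for each tristimulus component and a user-chosen threshold; it is not tied to any particular color coordinate system.

```python
import numpy as np
from scipy.ndimage import sobel

def color_edge_map(image, threshold):
    """Vector sum color gradient (Eq. 15.6-2) thresholded into an edge map.

    image : H x W x 3 array of tristimulus values (e.g., RGB or YIQ).
    """
    gradient_sq = np.zeros(image.shape[:2])
    for c in range(3):
        band = image[..., c].astype(float)
        gr = sobel(band, axis=0)   # row (vertical) derivative
        gc = sobel(band, axis=1)   # column (horizontal) derivative
        g = np.hypot(gr, gc)       # per-component Sobel gradient magnitude
        gradient_sq += g ** 2      # accumulate squared component gradients
    gradient = np.sqrt(gradient_sq)
    return gradient > threshold
```

Replacing the squared accumulation with a plain sum of the component gradients gives the gradient sum definition of Eq. 15.6-1, and taking the logical OR of thresholded per-component edge maps gives the third definition discussed above.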

15.7. LINE AND SPOT DETECTION

A line in an image could be considered to be composed of parallel, closely spaced edges. Similarly, a spot could be considered to be a closed contour of edges. This


(a ) Monochrome representation

(b) Red component

(c ) Green component

(d ) Blue component

FIGURE 15.6-1. The peppers_gamma color image and its RGB color components. See insert for a color representation of this figure.

method of line and spot detection involves the application of scene analysis techniques to spatially relate the constituent edges of the lines and spots. The approach taken in this chapter is to consider only small-scale models of lines and edges and to apply the detection methodology developed previously for edges. Figure 15.1-4 presents several discrete models of lines. For the unit-width line models, line detection can be accomplished by threshold detecting a line gradient

G(j, k) = \max_{1 \le m \le 4} \{ F(j, k) * H_m(j, k) \}    (15.7-1)


(a ) Y component

(b ) L* component

(c ) I component

(d ) a* component

(e ) Q component

(f ) b* component

FIGURE 15.6-2. YIQ and L*a*b* color components of the peppers_gamma image.


(a ) Red edge map

(b ) Logical OR of RGB edges

(c ) Green edge map

(d ) RGB sum edge map

(e) Blue edge map

(f ) RGB vector sum edge map

FIGURE 15.6-3. Sobel edge maps for edge detection using the RGB color components of the peppers_gamma image.


(a ) Y edge map

(b ) Logical OR of YIQ edges

(c ) I edge map

(d ) YIQ sum edge map

(e) Q edge map

(f ) YIQ vector sum edge map

FIGURE 15.6-4. Sobel edge maps for edge detection using the YIQ color components of the peppers_gamma image.


(a ) L* edge map

(b ) Logical OR of L*a *b * edges

(c ) a * edge map

(d ) L*a *b * sum edge map

(e ) b * edge map

(f ) L*a *b * vector sum edge map

FIGURE 15.6-5. Sobel edge maps for edge detection using the L*a*b* color components of the peppers_gamma image.


where H_m(j, k) is a 3 × 3 line detector impulse response array corresponding to a specific line orientation. Figure 15.7-1 contains two sets of line detector impulse response arrays, weighted and unweighted, which are analogous to the Prewitt and Sobel template matching edge detector impulse response arrays. The detection of ramp lines, as modeled in Figure 15.1-4, requires 5 × 5 pixel templates. Unit-width step spots can be detected by thresholding a spot gradient

G(j, k) = F(j, k) * H(j, k)    (15.7-2)

where H(j, k) is an impulse response array chosen to accentuate the gradient of a unit-width spot. One approach is to use one of the three types of 3 × 3 Laplacian operators defined by Eq. 15.3-5, 15.3-6, or 15.3-8, which are discrete approximations to the sum of the row and column second derivatives of an image. The gradient responses to these impulse response arrays for the unit-width spot model of Figure 15.1-6a are simply replicas of each array centered at the spot, scaled by the spot height h and zero elsewhere. It should be noted that the Laplacian gradient responses are thresholded for spot detection, whereas the Laplacian responses are examined for sign changes (zero crossings) for edge detection. The disadvantage of using Laplacian operators for spot detection is that they evoke a gradient response for edges, which can lead to false spot detection in a noisy environment. This problem can be alleviated by the use of a 3 × 3 operator that approximates the continuous

FIGURE 15.7-1. Line detector 3 × 3 impulse response arrays.

cross second derivative ∂²/∂x∂y. Prewitt (1, p. 126) has suggested the following discrete approximation:

H = \frac{1}{8} \begin{bmatrix} 1 & -2 & 1 \\ -2 & 4 & -2 \\ 1 & -2 & 1 \end{bmatrix}    (15.7-3)

The advantage of this operator is that it evokes no response for horizontally or vertically oriented edges; however, it does generate a response for diagonally oriented edges. The detection of unit-width spots modeled by the ramp model of Figure 15.1-5 requires a 5 × 5 impulse response array. The cross second derivative operator of Eq. 15.7-3 and the separable eight-connected Laplacian operator are deceptively similar in appearance; often, they are mistakenly exchanged with one another in the literature. It should be noted that the cross second derivative is identical, to within a scale factor, to the ninth Chebyshev polynomial impulse response array of Figure 15.3-5. Cook and Rosenfeld (30) and Zucker et al. (31) have suggested several algorithms for detection of large spots. In one algorithm, an image is first smoothed with a W × W low-pass filter impulse response array. Then the value of each point in the averaged image is compared to the average value of its north, south, east, and west neighbors spaced W pixels away. A spot is marked if the difference is sufficiently large. A similar approach involves formation of the difference of the average pixel amplitude in a W × W window and the average amplitude in a surrounding ring region of width W. Chapter 19 considers the general problem of detecting objects within an image by template matching. Such templates can be developed to detect large spots.
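To make the template procedure concrete, the sketch below applies Eq. 15.7-1 with a set of four unit-width line masks and thresholds the cross second derivative spot operator of Eq. 15.7-3. The specific arrays of Figure 15.7-1 are not reproduced in this text, so standard compass-style line masks are used here as assumed stand-ins.

```python
import numpy as np
from scipy.ndimage import convolve

# Assumed unweighted unit-width line masks (horizontal, vertical, two diagonals);
# stand-ins for the Figure 15.7-1 arrays, which are not shown in this text.
LINE_MASKS = [
    np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),   # horizontal line
    np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),   # vertical line
    np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),   # diagonal line
    np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),   # other diagonal
]

# Cross second derivative spot operator of Eq. 15.7-3.
SPOT_OPERATOR = np.array([[ 1, -2,  1],
                          [-2,  4, -2],
                          [ 1, -2,  1]]) / 8.0

def line_gradient(image):
    """Eq. 15.7-1: maximum response over the four line orientations."""
    responses = [convolve(image.astype(float), h) for h in LINE_MASKS]
    return np.max(responses, axis=0)

def spot_gradient(image):
    """Eq. 15.7-2 using the cross second derivative array of Eq. 15.7-3."""
    return convolve(image.astype(float), SPOT_OPERATOR)

# Lines or spots are marked where the gradient exceeds a chosen threshold,
# e.g.: line_map = line_gradient(img) > t
```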

REFERENCES
1. J. M. S. Prewitt, “Object Enhancement and Extraction,” in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970.
2. L. G. Roberts, “Machine Perception of Three-Dimensional Solids,” in Optical and Electro-Optical Information Processing, J. T. Tippett et al., Eds., MIT Press, Cambridge, MA, 1965, 159–197.
3. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley, New York, 1973.
4. W. Frei and C. Chen, “Fast Boundary Detection: A Generalization and a New Algorithm,” IEEE Trans. Computers, C-26, 10, October 1977, 988–998.
5. I. Abdou, “Quantitative Methods of Edge Detection,” USCIPI Report 830, Image Processing Institute, University of Southern California, Los Angeles, 1973.
6. E. Argyle, “Techniques for Edge Detection,” Proc. IEEE, 59, 2, February 1971, 285–287.


7. I. D. G. Macleod, “On Finding Structure in Pictures,” in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970.
8. I. D. G. Macleod, “Comments on Techniques for Edge Detection,” Proc. IEEE, 60, 3, March 1972, 344.
9. J. Canny, “A Computational Approach to Edge Detection,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-8, 6, November 1986, 679–698.
10. D. Demigny and T. Kamie, “A Discrete Expression of Canny's Criteria for Step Edge Detector Performances Evaluation,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-19, 11, November 1997, 1199–1211.
11. R. Kirsch, “Computer Determination of the Constituent Structure of Biomedical Images,” Computers and Biomedical Research, 4, 3, 1971, 315–328.
12. G. S. Robinson, “Edge Detection by Compass Gradient Masks,” Computer Graphics and Image Processing, 6, 5, October 1977, 492–501.
13. R. Nevatia and K. R. Babu, “Linear Feature Extraction and Description,” Computer Graphics and Image Processing, 13, 3, July 1980, 257–269.
14. A. P. Paplinski, “Directional Filtering in Edge Detection,” IEEE Trans. Image Processing, IP-7, 4, April 1998, 611–615.
15. I. E. Abdou and W. K. Pratt, “Quantitative Design and Evaluation of Enhancement/Thresholding Edge Detectors,” Proc. IEEE, 67, 5, May 1979, 753–763.
16. K. Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, New York, 1972.
17. P. V. Henstock and D. M. Chelberg, “Automatic Gradient Threshold Determination for Edge Detection,” IEEE Trans. Image Processing, IP-5, 5, May 1996, 784–787.
18. V. Torre and T. A. Poggio, “On Edge Detection,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-8, 2, March 1986, 147–163.
19. D. Marr and E. Hildreth, “Theory of Edge Detection,” Proc. Royal Society of London, B207, 1980, 187–217.
20. J. S. Wiejak, H. Buxton, and B. F. Buxton, “Convolution with Separable Masks for Early Image Processing,” Computer Vision, Graphics, and Image Processing, 32, 3, December 1985, 279–290.
21. A. Huertas and G. Medioni, “Detection of Intensity Changes Using Laplacian-Gaussian Masks,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-8, 5, September 1986, 651–664.
22. R. M. Haralick, “Digital Step Edges from Zero Crossing of Second Directional Derivatives,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-6, 1, January 1984, 58–68.
23. M. Hueckel, “An Operator Which Locates Edges in Digital Pictures,” J. Association for Computing Machinery, 18, 1, January 1971, 113–125.
24. V. S. Nalwa and T. O. Binford, “On Detecting Edges,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-6, November 1986, 699–714.
25. J. R. Fram and E. S. Deutsch, “On the Evaluation of Edge Detection Schemes and Their Comparison with Human Performance,” IEEE Trans. Computers, C-24, 6, June 1975, 616–628.


26. M. D. Heath et al., “A Robust Visual Method for Assessing the Relative Performance of Edge-Detection Algorithms,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-19, 12, December 1997, 1338–1359.
27. V. Berzins, “Accuracy of Laplacian Edge Detectors,” Computer Vision, Graphics, and Image Processing, 27, 2, August 1984, 195–210.
28. W. K. Pratt, Digital Image Processing, Wiley-Interscience, New York, 1978, 497–499.
29. G. S. Robinson, “Color Edge Detection,” Proc. SPIE Symposium on Advances in Image Transmission Techniques, 87, San Diego, CA, August 1976.
30. C. M. Cook and A. Rosenfeld, “Size Detectors,” Proc. IEEE Letters, 58, 12, December 1970, 1956–1957.
31. S. W. Zucker, A. Rosenfeld, and L. S. Davis, “Picture Segmentation by Texture Discrimination,” IEEE Trans. Computers, C-24, 12, December 1975, 1228–1233.


16
IMAGE FEATURE EXTRACTION

An image feature is a distinguishing primitive characteristic or attribute of an image. Some features are natural in the sense that such features are defined by the visual appearance of an image, while other, artificial, features result from specific manipulations of an image. Natural features include the luminance of a region of pixels and gray scale textural regions. Image amplitude histograms and spatial frequency spectra are examples of artificial features. Image features are of major importance in the isolation of regions of common property within an image (image segmentation) and subsequent identification or labeling of such regions (image classification). Image segmentation is discussed in Chapter 17. References 1 to 4 provide information on image classification techniques. This chapter describes several types of image features that have been proposed for image segmentation and classification. Before introducing them, however, methods of evaluating their performance are discussed.

16.1. IMAGE FEATURE EVALUATION

There are two quantitative approaches to the evaluation of image features: prototype performance and figure of merit. In the prototype performance approach for image classification, a prototype image with regions (segments) that have been independently categorized is classified by a classification procedure using various image features to be evaluated. The classification error is then measured for each feature set. The best set of features is, of course, that which results in the least classification error. The prototype performance approach for image segmentation is similar in nature. A prototype image with independently identified regions is segmented by a


segmentation procedure using a test set of features. Then, the detected segments are compared to the known segments, and the segmentation error is evaluated. The problems associated with the prototype performance methods of feature evaluation are the integrity of the prototype data and the fact that the performance indication is dependent not only on the quality of the features but also on the classification or segmentation ability of the classifier or segmenter. The figure-of-merit approach to feature evaluation involves the establishment of some functional distance measurement between sets of image features such that a large distance implies a low classification error, and vice versa. Faugeras and Pratt (5) have utilized the Bhattacharyya distance (3) figure of merit for texture feature evaluation. The method should be extensible for other features as well. The Bhattacharyya distance (B-distance for simplicity) is a scalar function of the probability densities of features of a pair of classes defined as

B(S_1, S_2) = -\ln \left\{ \int [ p(x | S_1) p(x | S_2) ]^{1/2} \, dx \right\}    (16.1-1)

where x denotes a vector containing individual image feature measurements with conditional density p(x | S_i). It can be shown (3) that the B-distance is related monotonically to the Chernoff bound for the probability of classification error using a Bayes classifier. The bound on the error probability is

P \le [ P(S_1) P(S_2) ]^{1/2} \exp\{ -B(S_1, S_2) \}    (16.1-2)

where P(S_i) represents the a priori class probability. For future reference, the Chernoff error bound is tabulated in Table 16.1-1 as a function of B-distance for equally likely feature classes. For Gaussian densities, the B-distance becomes

B(S_1, S_2) = \frac{1}{8} (u_1 - u_2)^T \left[ \frac{\Sigma_1 + \Sigma_2}{2} \right]^{-1} (u_1 - u_2) + \frac{1}{2} \ln \left\{ \frac{ \left| \tfrac{\Sigma_1 + \Sigma_2}{2} \right| }{ | \Sigma_1 |^{1/2} | \Sigma_2 |^{1/2} } \right\}    (16.1-3)

where u_i and Σ_i represent the feature mean vector and the feature covariance matrix of the classes, respectively. Calculation of the B-distance for other densities is generally difficult. Consequently, the B-distance figure of merit is applicable only for Gaussian-distributed feature data, which fortunately is the common case. In practice, features to be evaluated by Eq. 16.1-3 are measured in regions whose class has been determined independently. Sufficient feature measurements need to be taken so that the feature mean vector and covariance matrix can be estimated accurately.
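For the Gaussian case, Eq. 16.1-3 and the Chernoff bound of Eq. 16.1-2 are simple to evaluate numerically. The sketch below is a minimal illustration under the equal-prior assumption used in Table 16.1-1; the function and variable names are ours, not part of any library or PIKS API.

```python
import numpy as np

def bhattacharyya_distance(u1, cov1, u2, cov2):
    """Gaussian Bhattacharyya distance (Eq. 16.1-3)."""
    u1, u2 = np.asarray(u1, float), np.asarray(u2, float)
    cov_avg = 0.5 * (np.asarray(cov1, float) + np.asarray(cov2, float))
    diff = u1 - u2

    mean_term = 0.125 * diff @ np.linalg.inv(cov_avg) @ diff
    det_term = 0.5 * np.log(
        np.linalg.det(cov_avg)
        / np.sqrt(np.linalg.det(np.asarray(cov1, float))
                  * np.linalg.det(np.asarray(cov2, float)))
    )
    return mean_term + det_term

def chernoff_error_bound(b_distance, p1=0.5, p2=0.5):
    """Chernoff bound on classification error probability (Eq. 16.1-2)."""
    return np.sqrt(p1 * p2) * np.exp(-b_distance)

# Example: B = 2 for equally likely classes bounds the error near 6.8e-2,
# matching the corresponding entry of Table 16.1-1.
print(chernoff_error_bound(2.0))   # -> about 0.0677
```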


TABLE 16.1-1. Relationship of Bhattacharyya Distance and Chernoff Error Bound

B(S_1, S_2)    Error Bound
1              1.84 × 10^-1
2              6.77 × 10^-2
4              9.16 × 10^-3
6              1.24 × 10^-3
8              1.68 × 10^-4
10             2.27 × 10^-5
12             3.07 × 10^-6

16.2. AMPLITUDE FEATURES

The most basic of all image features is some measure of image amplitude in terms of luminance, tristimulus value, spectral value, or other units. There are many degrees of freedom in establishing image amplitude features. Image variables such as luminance or tristimulus values may be utilized directly, or alternatively, some linear, nonlinear, or perhaps noninvertible transformation can be performed to generate variables in a new amplitude space. Amplitude measurements may be made at specific image points [e.g., the amplitude F(j, k) at pixel coordinate (j, k)] or over a neighborhood centered at (j, k). For example, the average or mean image amplitude in a W × W pixel neighborhood is given by

M(j, k) = \frac{1}{W^2} \sum_{m=-w}^{w} \sum_{n=-w}^{w} F(j + m, k + n)    (16.2-1)

where W = 2w + 1. An advantage of a neighborhood, as opposed to a point measurement, is a diminishing of noise effects because of the averaging process. A disadvantage is that object edges falling within the neighborhood can lead to erroneous measurements. The median of pixels within a W × W neighborhood can be used as an alternative amplitude feature to the mean measurement of Eq. 16.2-1, or as an additional feature. The median is defined to be that pixel amplitude in the window for which one-half of the pixels are equal or smaller in amplitude, and one-half are equal or greater in amplitude. Another useful image amplitude feature is the neighborhood standard deviation, which can be computed as

S(j, k) = \frac{1}{W} \left[ \sum_{m=-w}^{w} \sum_{n=-w}^{w} [ F(j + m, k + n) - M(j + m, k + n) ]^2 \right]^{1/2}    (16.2-2)
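A minimal sketch of Eqs. 16.2-1 and 16.2-2, computed with a uniform box filter. The window size parameter is the W of the text, and the border handling (reflection here) is an implementation choice not specified by the equations.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_mean(image, window):
    """Eq. 16.2-1: mean amplitude over a W x W neighborhood."""
    return uniform_filter(image.astype(float), size=window, mode="reflect")

def neighborhood_std(image, window):
    """Eq. 16.2-2: standard deviation feature over a W x W neighborhood."""
    image = image.astype(float)
    mean = neighborhood_mean(image, window)
    # Average the squared deviations about the local mean, then take the root;
    # this equals (1/W) times the square root of the windowed sum in Eq. 16.2-2.
    var = uniform_filter((image - mean) ** 2, size=window, mode="reflect")
    return np.sqrt(var)
```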


(a) Original

(b) 7 × 7 pyramid mean

(c) 7 × 7 standard deviation

(d ) 7 × 7 plus median

FIGURE 16.2-1. Image amplitude features of the washington_ir image.

In the literature, the standard deviation image feature is sometimes called the image dispersion. Figure 16.2-1 shows an original image and the mean, median, and standard deviation of the image computed over a small neighborhood. The mean and standard deviation of Eqs. 16.2-1 and 16.2-2 can be computed indirectly in terms of the histogram of image pixels within a neighborhood. This leads to a class of image amplitude histogram features. Referring to Section 5.7, the first-order probability distribution of the amplitude of a quantized image may be defined as

P(b) = P_R[ F(j, k) = r_b ]    (16.2-3)

where r_b denotes the quantized amplitude level for 0 ≤ b ≤ L - 1. The first-order histogram estimate of P(b) is simply


P(b) ≈ \frac{N(b)}{M}    (16.2-4)

where M represents the total number of pixels in a neighborhood window centered about (j, k), and N(b) is the number of pixels of amplitude r_b in the same window. The shape of an image histogram provides many clues as to the character of the image. For example, a narrowly distributed histogram indicates a low-contrast image. A bimodal histogram often suggests that the image contains an object with a narrow amplitude range against a background of differing amplitude. The following measures have been formulated as quantitative shape descriptions of a first-order histogram (6).

Mean:

S_M \equiv \bar{b} = \sum_{b=0}^{L-1} b P(b)    (16.2-5)

Standard deviation:

S_D \equiv \sigma_b = \left[ \sum_{b=0}^{L-1} (b - \bar{b})^2 P(b) \right]^{1/2}    (16.2-6)

Skewness:

S_S = \frac{1}{\sigma_b^3} \sum_{b=0}^{L-1} (b - \bar{b})^3 P(b)    (16.2-7)

Kurtosis:

S_K = \frac{1}{\sigma_b^4} \sum_{b=0}^{L-1} (b - \bar{b})^4 P(b) - 3    (16.2-8)

Energy:

S_N = \sum_{b=0}^{L-1} [ P(b) ]^2    (16.2-9)

Entropy:

S_E = -\sum_{b=0}^{L-1} P(b) \log_2 \{ P(b) \}    (16.2-10)
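The sketch below evaluates the six first-order shape descriptors of Eqs. 16.2-5 to 16.2-10 from a window histogram. It is a minimal illustration: the quantized levels are assumed to be 0, 1, ..., L-1, and empty histogram bins are skipped in the entropy sum.

```python
import numpy as np

def first_order_histogram_features(window, levels=256):
    """Histogram shape features of Eqs. 16.2-5 to 16.2-10 for one window."""
    counts = np.bincount(window.ravel().astype(int), minlength=levels)
    p = counts / counts.sum()                  # first-order histogram P(b)
    b = np.arange(levels, dtype=float)

    mean = np.sum(b * p)                                  # S_M  (16.2-5)
    std = np.sqrt(np.sum((b - mean) ** 2 * p))            # S_D  (16.2-6)
    skew = np.sum((b - mean) ** 3 * p) / std ** 3         # S_S  (16.2-7)
    kurt = np.sum((b - mean) ** 4 * p) / std ** 4 - 3.0   # S_K  (16.2-8)
    energy = np.sum(p ** 2)                               # S_N  (16.2-9)
    nonzero = p[p > 0]
    entropy = -np.sum(nonzero * np.log2(nonzero))         # S_E  (16.2-10)

    return {"mean": mean, "std": std, "skewness": skew,
            "kurtosis": kurt, "energy": energy, "entropy": entropy}
```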



FIGURE 16.2-2. Relationship of pixel pairs.

The factor of 3 inserted in the expression for the Kurtosis measure normalizes SK to zero for a zero-mean, Gaussian-shaped histogram. Another useful histogram shape measure is the histogram mode, which is the pixel amplitude corresponding to the histogram peak (i.e., the most commonly occurring pixel amplitude in the window). If the histogram peak is not unique, the pixel at the peak closest to the mean is usually chosen as the histogram shape descriptor. Second-order histogram features are based on the definition of the joint probability distribution of pairs of pixels. Consider two pixels F ( j, k ) and F ( m, n ) that are located at coordinates ( j, k ) and ( m, n ), respectively, and, as shown in Figure 16.2-2, are separated by r radial units at an angle θ with respect to the horizontal axis. The joint distribution of image amplitude values is then expressed as
P(a, b) = P_R[ F(j, k) = r_a, F(m, n) = r_b ]    (16.2-11)

where r_a and r_b represent quantized pixel amplitude values. As a result of the discrete rectilinear representation of an image, the separation parameters (r, θ) may assume only certain discrete values. The histogram estimate of the second-order distribution is

P(a, b) ≈ \frac{N(a, b)}{M}    (16.2-12)

where M is the total number of pixels in the measurement window and N(a, b) denotes the number of occurrences for which F(j, k) = r_a and F(m, n) = r_b. If the pixel pairs within an image are highly correlated, the entries in P(a, b) will be clustered along the diagonal of the array. The measures listed below have been proposed (6,7) to specify the energy spread about the diagonal of P(a, b).

Autocorrelation:

S_A = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} a b P(a, b)    (16.2-13)

Covariance:

S_C = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} (a - \bar{a})(b - \bar{b}) P(a, b)    (16.2-14a)

where

\bar{a} = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} a P(a, b)    (16.2-14b)

\bar{b} = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} b P(a, b)    (16.2-14c)

Inertia:

S_I = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} (a - b)^2 P(a, b)    (16.2-15)

Absolute value:

S_V = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} | a - b | P(a, b)    (16.2-16)

Inverse difference:

S_F = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} \frac{P(a, b)}{1 + (a - b)^2}    (16.2-17)

Energy:

S_G = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} [ P(a, b) ]^2    (16.2-18)

Entropy:

S_T = -\sum_{a=0}^{L-1} \sum_{b=0}^{L-1} P(a, b) \log_2 \{ P(a, b) \}    (16.2-19)

The utilization of second-order histogram measures for texture analysis is considered in Section 16.6.
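As a concrete sketch of Eqs. 16.2-11 to 16.2-19, the code below estimates P(a, b) for a single pixel-pair offset and computes a few of the diagonal-spread measures. The offset convention (a row and column displacement standing in for the (r, θ) separation) and the function names are ours.

```python
import numpy as np

def second_order_histogram(window, dr=0, dc=1, levels=256):
    """Estimate the joint density P(a, b) of Eq. 16.2-12 for pixel pairs
    separated by (dr, dc); dr, dc >= 0 is assumed for simplicity."""
    first = window[:window.shape[0] - dr, :window.shape[1] - dc]
    second = window[dr:, dc:]
    counts = np.zeros((levels, levels))
    np.add.at(counts, (first.astype(int), second.astype(int)), 1)
    return counts / counts.sum()

def spread_measures(p):
    """A few of the diagonal-spread measures of Eqs. 16.2-13 to 16.2-19."""
    a, b = np.indices(p.shape)
    inertia = np.sum((a - b) ** 2 * p)                   # S_I  (16.2-15)
    inverse_diff = np.sum(p / (1.0 + (a - b) ** 2))      # S_F  (16.2-17)
    energy = np.sum(p ** 2)                              # S_G  (16.2-18)
    nonzero = p[p > 0]
    entropy = -np.sum(nonzero * np.log2(nonzero))        # S_T  (16.2-19)
    return {"inertia": inertia, "inverse_difference": inverse_diff,
            "energy": energy, "entropy": entropy}
```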


16.3. TRANSFORM COEFFICIENT FEATURES

The coefficients of a two-dimensional transform of a luminance image specify the amplitude of the luminance patterns (two-dimensional basis functions) of a transform such that the weighted sum of the luminance patterns is identical to the image. By this characterization of a transform, the coefficients may be considered to indicate the degree of correspondence of a particular luminance pattern with an image field. If a basis pattern is of the same spatial form as a feature to be detected within the image, image detection can be performed simply by monitoring the value of the transform coefficient. The problem, in practice, is that objects to be detected within an image are often of complex shape and luminance distribution, and hence do not correspond closely to the more primitive luminance patterns of most image transforms.

Lendaris and Stanley (8) have investigated the application of the continuous two-dimensional Fourier transform of an image, obtained by a coherent optical processor, as a means of image feature extraction. The optical system produces an electric field radiation pattern proportional to

F(\omega_x, \omega_y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(x, y) \exp\{ -i(\omega_x x + \omega_y y) \} \, dx \, dy    (16.3-1)

where (\omega_x, \omega_y) are the image spatial frequencies. An optical sensor produces an output

M(\omega_x, \omega_y) = | F(\omega_x, \omega_y) |^2    (16.3-2)

proportional to the intensity of the radiation pattern. It should be observed that F(\omega_x, \omega_y) and F(x, y) are unique transform pairs, but M(\omega_x, \omega_y) is not uniquely related to F(x, y). For example, M(\omega_x, \omega_y) does not change if the origin of F(x, y) is shifted. In some applications, the translation invariance of M(\omega_x, \omega_y) may be a benefit. Angular integration of M(\omega_x, \omega_y) over the spatial frequency plane produces a spatial frequency feature that is invariant to translation and rotation. Representing M(\omega_x, \omega_y) in polar form, this feature is defined as

N(\rho) = \int_{0}^{2\pi} M(\rho, \theta) \, d\theta    (16.3-3)

where \theta = \arctan\{ \omega_x / \omega_y \} and \rho^2 = \omega_x^2 + \omega_y^2. Invariance to changes in scale is an attribute of the feature

P(\theta) = \int_{0}^{\infty} M(\rho, \theta) \, d\rho    (16.3-4)


FIGURE 16.3-1. Fourier transform feature masks.

The Fourier domain intensity pattern M(\omega_x, \omega_y) is normally examined in specific regions to isolate image features. As an example, Figure 16.3-1 defines regions for the following Fourier features:

Horizontal slit:

S_1(m) = \int_{-\infty}^{\infty} \int_{\omega_y(m)}^{\omega_y(m+1)} M(\omega_x, \omega_y) \, d\omega_x \, d\omega_y    (16.3-5)

Vertical slit:

S_2(m) = \int_{\omega_x(m)}^{\omega_x(m+1)} \int_{-\infty}^{\infty} M(\omega_x, \omega_y) \, d\omega_x \, d\omega_y    (16.3-6)

Ring:

S_3(m) = \int_{\rho(m)}^{\rho(m+1)} \int_{0}^{2\pi} M(\rho, \theta) \, d\rho \, d\theta    (16.3-7)

Sector:

S_4(m) = \int_{0}^{\infty} \int_{\theta(m)}^{\theta(m+1)} M(\rho, \theta) \, d\rho \, d\theta    (16.3-8)
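A discrete analog of the ring and sector features of Eqs. 16.3-7 and 16.3-8 can be computed from the FFT power spectrum. The sketch below is an approximation of the continuous definitions under assumed bin boundaries: it sums |F(u, v)|^2 over annular and angular masks about the spectrum center, and folds angles onto [0, π) to exploit the symmetry of the spectrum of a real image.

```python
import numpy as np

def ring_and_sector_features(image, n_rings=4, n_sectors=4):
    """Approximate ring (Eq. 16.3-7) and sector (Eq. 16.3-8) features
    computed from the discrete Fourier power spectrum."""
    spectrum = np.fft.fftshift(np.abs(np.fft.fft2(image.astype(float))) ** 2)
    rows, cols = spectrum.shape
    y, x = np.indices(spectrum.shape)
    y = y - rows // 2
    x = x - cols // 2
    radius = np.hypot(x, y)
    angle = np.mod(np.arctan2(y, x), np.pi)    # fold to [0, pi)

    r_max = radius.max()
    rings = [spectrum[(radius >= r_max * k / n_rings) &
                      (radius < r_max * (k + 1) / n_rings)].sum()
             for k in range(n_rings)]
    sectors = [spectrum[(angle >= np.pi * k / n_sectors) &
                        (angle < np.pi * (k + 1) / n_sectors)].sum()
               for k in range(n_sectors)]
    return np.array(rings), np.array(sectors)
```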


(a ) Rectangle

(b ) Rectangle transform

(c ) Ellipse

(d ) Ellipse transform

(e ) Triangle

(f ) Triangle transform

FIGURE 16.3-2. Discrete Fourier spectra of objects; log magnitude displays.

For a discrete image array F(j, k), the discrete Fourier transform

F(u, v) = \frac{1}{N} \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} F(j, k) \exp\left\{ \frac{-2\pi i}{N} (u j + v k) \right\}    (16.3-9)


for u, v = 0, …, N - 1 can be examined directly for feature extraction purposes. Horizontal slit, vertical slit, ring, and sector features can be defined analogous to Eqs. 16.3-5 to 16.3-8. This concept can be extended to other unitary transforms, such as the Hadamard and Haar transforms. Figure 16.3-2 presents discrete Fourier transform log magnitude displays of several geometric shapes.

16.4. TEXTURE DEFINITION

Many portions of images of natural scenes are devoid of sharp edges over large areas. In these areas, the scene can often be characterized as exhibiting a consistent structure analogous to the texture of cloth. Image texture measurements can be used to segment an image and classify its segments.

Several authors have attempted qualitatively to define texture. Pickett (9) states that “texture is used to describe two dimensional arrays of variations... The elements and rules of spacing or arrangement may be arbitrarily manipulated, provided a characteristic repetitiveness remains.” Hawkins (10) has provided a more detailed description of texture: “The notion of texture appears to depend upon three ingredients: (1) some local 'order' is repeated over a region which is large in comparison to the order's size, (2) the order consists in the nonrandom arrangement of elementary parts and (3) the parts are roughly uniform entities having approximately the same dimensions everywhere within the textured region.” Although these descriptions of texture seem perceptually reasonable, they do not immediately lead to simple quantitative textural measures in the sense that the description of an edge discontinuity leads to a quantitative description of an edge in terms of its location, slope angle, and height.

Texture is often qualitatively described by its coarseness in the sense that a patch of wool cloth is coarser than a patch of silk cloth under the same viewing conditions. The coarseness index is related to the spatial repetition period of the local structure. A large period implies a coarse texture; a small period implies a fine texture. This perceptual coarseness index is clearly not sufficient as a quantitative texture measure, but can at least be used as a guide for the slope of texture measures; that is, small numerical texture measures should imply fine texture, and large numerical measures should indicate coarse texture.

It should be recognized that texture is a neighborhood property of an image point. Therefore, texture measures are inherently dependent on the size of the observation neighborhood. Because texture is a spatial property, measurements should be restricted to regions of relative uniformity. Hence it is necessary to establish the boundary of a uniform textural region by some form of image segmentation before attempting texture measurements.

Texture may be classified as being artificial or natural. Artificial textures consist of arrangements of symbols, such as line segments, dots, and stars, placed against a neutral background. Several examples of artificial texture are presented in Figure 16.4-1 (9). As the name implies, natural textures are images of natural scenes containing semirepetitive arrangements of pixels. Examples include photographs of brick walls, terrazzo tile, sand, and grass. Brodatz (11) has published an album of photographs of naturally occurring textures. Figure 16.4-2 shows several natural texture examples obtained by digitizing photographs from the Brodatz album.


FIGURE 16.4-1. Artificial texture.


(a) Sand

(b) Grass

(c) Wool

(d) Raffia

FIGURE 16.4-2. Brodatz texture fields.

16.5. VISUAL TEXTURE DISCRIMINATION

A discrete stochastic field is an array of numbers that are randomly distributed in amplitude and governed by some joint probability density (12). When converted to light intensities, such fields can be made to approximate natural textures surprisingly well by control of the generating probability density. This technique is useful for generating realistic-appearing artificial scenes for applications such as airplane flight simulators. Stochastic texture fields are also an extremely useful tool for investigating human perception of texture as a guide to the development of texture feature extraction methods.

In the early 1960s, Julesz (13) attempted to determine the parameters of stochastic texture fields of perceptual importance. This study was extended later by Julesz et al. (14–16). Further extensions of Julesz's work have been made by Pollack (17),


FIGURE 16.5-1. Stochastic texture field generation model.

Purks and Richards (18), and Pratt et al. (19). These studies have provided valuable insight into the mechanism of human visual perception and have led to some useful quantitative texture measurement methods.

Figure 16.5-1 is a model for stochastic texture generation. In this model, an array of independent, identically distributed random variables W(j, k) passes through a linear or nonlinear spatial operator O{ · } to produce a stochastic texture array F(j, k). By controlling the form of the generating probability density p(W) and the spatial operator, it is possible to create texture fields with specified statistical properties. Consider a continuous amplitude pixel x_0 at some coordinate (j, k) in F(j, k). Let the set { z_1, z_2, …, z_J } denote neighboring pixels, but not necessarily nearest geometric neighbors, raster scanned in a conventional top-to-bottom, left-to-right fashion. The conditional probability density of x_0 conditioned on the state of its neighbors is given by

p(x_0 | z_1, …, z_J) = \frac{ p(x_0, z_1, …, z_J) }{ p(z_1, …, z_J) }    (16.5-1)

The first-order density p(x_0) employs no conditioning, the second-order density p(x_0 | z_1) implies that J = 1, the third-order density implies that J = 2, and so on.

16.5.1. Julesz Texture Fields

In his pioneering texture discrimination experiments, Julesz utilized Markov process state methods to create stochastic texture arrays independently along rows of the array. The family of Julesz stochastic arrays is defined below.

1. Notation. Let x_n = F(j, k - n) denote a row neighbor of pixel x_0, and let P(m), for m = 1, 2, ..., M, denote a desired probability generating function.

2. First-order process. Set x_0 = m for a desired probability function P(m). The resulting pixel probability is

P(x_0) = P(x_0 = m) = P(m)    (16.5-2)


3. Second-order process. Set F(j, 1) = m for P(m) = 1/M, and set x_0 = (x_1 + m) MOD{M}, where the modulus function p MOD{q} ≡ p - [q × (p ÷ q)] for integers p and q. This gives a first-order probability

P(x_0) = \frac{1}{M}    (16.5-3a)

and a transition probability

p(x_0 | x_1) = P[ x_0 = (x_1 + m) MOD{M} ] = P(m)    (16.5-3b)

4. Third-order process. Set F(j, 1) = m for P(m) = 1/M, and set F(j, 2) = n for P(n) = 1/M. Choose x_0 to satisfy 2x_0 = (x_1 + x_2 + m) MOD{M}. The governing probabilities then become

P(x_0) = \frac{1}{M}    (16.5-4a)

p(x_0 | x_1) = \frac{1}{M}    (16.5-4b)

p(x_0 | x_1, x_2) = P[ 2x_0 = (x_1 + x_2 + m) MOD{M} ] = P(m)    (16.5-4c)

This process has the interesting property that pixel pairs along a row are independent, and consequently, the process is spatially uncorrelated.

Figure 16.5-2 contains several examples of Julesz texture field discrimination tests performed by Pratt et al. (19). In these tests, the textures were generated according to the presentation format of Figure 16.5-3. In these and subsequent visual texture discrimination tests, the perceptual differences are often small. Proper discrimination testing should be performed using high-quality photographic transparencies, prints, or electronic displays. The following moments were used as simple indicators of differences between the generating distributions and densities of the stochastic fields:

\eta = E\{ x_0 \}    (16.5-5a)

\sigma^2 = E\{ [x_0 - \eta]^2 \}    (16.5-5b)

\alpha = \frac{ E\{ [x_0 - \eta][x_1 - \eta] \} }{ \sigma^2 }    (16.5-5c)

\theta = \frac{ E\{ [x_0 - \eta][x_1 - \eta][x_2 - \eta] \} }{ \sigma^3 }    (16.5-5d)
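The row-wise Julesz construction is easy to simulate. The sketch below generates the second-order process described above for an assumed generating probability function P(m); it is an illustrative reading of the construction (with levels indexed 0 to M-1 rather than 1 to M), not the exact program used to produce Figure 16.5-2.

```python
import numpy as np

def julesz_second_order_field(rows, cols, p, rng=None):
    """Second-order Julesz field: each row starts uniformly distributed and
    then follows x0 = (x1 + m) MOD M, with m drawn from the probabilities p."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.asarray(p, float)
    p = p / p.sum()
    m_levels = len(p)

    field = np.zeros((rows, cols), dtype=int)
    field[:, 0] = rng.integers(0, m_levels, size=rows)          # P(x0) = 1/M
    increments = rng.choice(m_levels, size=(rows, cols), p=p)   # m ~ P(m)
    for k in range(1, cols):
        field[:, k] = (field[:, k - 1] + increments[:, k]) % m_levels
    return field

# Example: 64 x 64 field with a skewed generating function over M = 4 levels.
texture = julesz_second_order_field(64, 64, p=[0.1, 0.2, 0.3, 0.4])
```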


(a) Different first order sA = 0.289, sB = 0.204

(b) Different second order sA = 0.289, sB = 0.289 aA = 0.250, aB = − 0.250

(c) Different third order sA = 0.289, sB = 0.289 aA = 0.000, aB = 0.000 qA = 0.058, qB = − 0.058

FIGURE 16.5-2. Field comparison of Julesz stochastic fields; η A = η B = 0.500 .

The examples of Figure 16.5-2a and b indicate that texture field pairs differing in their first- and second-order distributions can be discriminated. The example of Figure 16.5-2c supports the conjecture, attributed to Julesz, that differences in third-order, and presumably higher-order, distribution texture fields cannot be perceived provided that their first- and second-order distributions are pairwise identical.


FIGURE 16.5-3. Presentation format for visual texture discrimination experiments.

16.5.2. Pratt, Faugeras, and Gagalowicz Texture Fields

Pratt et al. (19) have extended the work of Julesz et al. (13–16) in an attempt to study the discriminability of spatially correlated stochastic texture fields. A class of Gaussian fields was generated according to the conditional probability density

p(x_0 | z_1, …, z_J) = \frac{ (2\pi)^{-(J+1)/2} | K_{J+1} |^{-1/2} \exp\{ -\tfrac{1}{2} (v_{J+1} - \eta_{J+1})^T (K_{J+1})^{-1} (v_{J+1} - \eta_{J+1}) \} }{ (2\pi)^{-J/2} | K_J |^{-1/2} \exp\{ -\tfrac{1}{2} (v_J - \eta_J)^T (K_J)^{-1} (v_J - \eta_J) \} }    (16.5-6a)

where

v_J = \begin{bmatrix} z_1 \\ \vdots \\ z_J \end{bmatrix}    (16.5-6b)

v_{J+1} = \begin{bmatrix} x_0 \\ v_J \end{bmatrix}    (16.5-6c)

The covariance matrix of Eq. 16.5-6a is of the parametric form

K_{J+1} = \begin{bmatrix} 1 & \alpha & \beta & \gamma & \cdots \\ \alpha & & & & \\ \beta & & \sigma^{-2} K_J & & \\ \gamma & & & & \\ \vdots & & & & \end{bmatrix}    (16.5-7)

(a) Constrained second-order density   (b) Constrained third-order density

FIGURE 16.5-4. Row correlation factors for stochastic field generation. Dashed line, field A; solid line, field B.

where α, β, γ, … denote correlation lag terms. Figure 16.5-4 presents an example of the row correlation functions used in the texture field comparison tests described below. Figures 16.5-5 and 16.5-6 contain examples of Gaussian texture field comparison tests. In Figure 16.5-5, the first-order densities are set equal, but the second-order nearest neighbor conditional densities differ according to the covariance function plot of Figure 16.5-4a. Visual discrimination can be made in Figure 16.5-5, in which the correlation parameter differs by 20%. Visual discrimination has been found to be marginal when the correlation factor differs by less than 10% (19). The first- and second-order densities of each field are fixed in Figure 16.5-6, and the third-order


(a) aA = 0.750, aB = 0.900

(b) aA = 0.500, aB = 0.600

FIGURE 16.5-5. Field comparison of Gaussian stochastic fields with different second-order nearest neighbor densities; η A = η B = 0.500, σ A = σ B = 0.167.

conditional densities differ according to the plot of Figure 16.5-4b. Visual discrimination is possible. The test of Figure 16.5-6 seemingly provides a counterexample to the Julesz conjecture. In this test, p^A(x_0) = p^B(x_0) and p^A(x_0, x_1) = p^B(x_0, x_1), but p^A(x_0, x_1, x_2) ≠ p^B(x_0, x_1, x_2). However, the general second-order density pairs p^A(x_0, z_j) and p^B(x_0, z_j) are not necessarily equal for an arbitrary neighbor z_j, and therefore the conditions necessary to disprove Julesz's conjecture are violated. To test the Julesz conjecture for realistically appearing texture fields, it is necessary to generate a pair of fields with identical first-order densities, identical Markovian-type second-order densities, and differing third-order densities for every

(a) bA = 0.563, bB = 0.600

(b) bA = 0.563, bB = 0.400

FIGURE 16.5-6. Field comparison of Gaussian stochastic fields with different third-order nearest neighbor densities; η A = η B = 0.500, σ A = σ B = 0.167, α A = α B = 0.750 .


hA = 0.500, hB = 0.500 sA = 0.167, sB = 0.167 aA = 0.850, aB = 0.850 qA = 0.040, qB = − 0.027

FIGURE 16.5-7. Field comparison of correlated Julesz stochastic fields with identical firstand second-order densities, but different third-order densities.

pair of similar observation points in both fields. An example of such a pair of fields is presented in Figure 16.5-7 for a non-Gaussian generating process (19). In this example, the texture appears identical in both fields, thus supporting the Julesz conjecture. Gagalowicz has succeeded in generating a pair of texture fields that disprove the Julesz conjecture (20). However, the counterexample, shown in Figure 16.5-8, is not very realistic in appearance. Thus, it seems likely that if a statistically based texture measure can be developed, it need not utilize statistics greater than second-order.

FIGURE 16.5-8. Gagalowicz counterexample.


hA = 0.413, hB = 0.412 sA = 0.078, sB = 0.078 aA = 0.915, aB = 0.917 qA = 1.512, qB = 0.006

FIGURE 16.5-9. Field comparison of correlated stochastic fields with identical means, variances, and autocorrelation functions, but different nth-order probability densities generated by different processing of the same input field. Input array consists of uniform random variables raised to the 256th power. Moments are computed.

Because a human viewer is sensitive to differences in the mean, variance, and autocorrelation function of the texture pairs, it is reasonable to investigate the sufficiency of these parameters in terms of texture representation. Figure 16.5-9 presents examples of the comparison of texture fields with identical means, variances, and autocorrelation functions, but different nth-order probability densities. Visual discrimination is readily accomplished between the fields. This leads to the conclusion that these low-order moment measurements, by themselves, are not always sufficient to distinguish texture fields.

16.6. TEXTURE FEATURES

As noted in Section 16.4, there is no commonly accepted quantitative definition of visual texture. As a consequence, researchers seeking a quantitative texture measure have been forced to search intuitively for texture features, and then attempt to evaluate their performance by techniques such as those presented in Section 16.1. The following subsections describe several texture features of historical and practical importance. References 20 to 22 provide surveys on image texture feature extraction. Randen and Husoy (23) have performed a comprehensive study of many texture feature extraction methods.


16.6.1. Fourier Spectra Methods

Several studies (8,24,25) have considered textural analysis based on the Fourier spectrum of an image region, as discussed in Section 16.3. Because the degree of texture coarseness is proportional to its spatial period, a region of coarse texture should have its Fourier spectral energy concentrated at low spatial frequencies. Conversely, regions of fine texture should exhibit a concentration of spectral energy at high spatial frequencies. Although this correspondence exists to some degree, difficulties often arise because of spatial changes in the period and phase of texture pattern repetitions. Experiments (10) have shown that there is considerable spectral overlap of regions of distinctly different natural texture, such as urban, rural, and woodland regions extracted from aerial photographs. On the other hand, Fourier spectral analysis has proved successful (26,27) in the detection and classification of coal miners' black lung disease, which appears as diffuse textural deviations from the norm.

16.6.2. Edge Detection Methods

Rosenfeld and Troy (28) have proposed a measure of the number of edges in a neighborhood as a textural measure. As a first step in their process, an edge map array E(j, k) is produced by some edge detector such that E(j, k) = 1 for a detected edge and E(j, k) = 0 otherwise. Usually, the detection threshold is set lower than the normal setting for the isolation of boundary points. The texture measure is defined as

T(j, k) = \frac{1}{W^2} \sum_{m=-w}^{w} \sum_{n=-w}^{w} E(j + m, k + n)    (16.6-1)

where W = 2w + 1 is the dimension of the observation window. A variation of this approach is to substitute the edge gradient G(j, k) for the edge map array in Eq. 16.6-1. A generalization of this concept is presented in Section 16.6.4.

16.6.3. Autocorrelation Methods

The autocorrelation function has been suggested as the basis of a texture measure (28). Although it has been demonstrated in the preceding section that it is possible to generate visually different stochastic fields with the same autocorrelation function, this does not necessarily rule out the utility of an autocorrelation feature set for natural images. The autocorrelation function is defined as

A_F(m, n) = \sum_{j} \sum_{k} F(j, k) F(j - m, k - n)    (16.6-2)


for computation over a W × W window with -T ≤ m, n ≤ T pixel lags. Presumably, a region of coarse texture will exhibit a higher correlation for a fixed shift (m, n) than will a region of fine texture. Thus, texture coarseness should be proportional to the spread of the autocorrelation function. Faugeras and Pratt (5) have proposed the following set of autocorrelation spread measures:

S(u, v) = \sum_{m=0}^{T} \sum_{n=-T}^{T} (m - \eta_m)^u (n - \eta_n)^v A_F(m, n)    (16.6-3a)

where

\eta_m = \sum_{m=0}^{T} \sum_{n=-T}^{T} m A_F(m, n)    (16.6-3b)

\eta_n = \sum_{m=0}^{T} \sum_{n=-T}^{T} n A_F(m, n)    (16.6-3c)

In Eq. 16.6-3, computation is only over one-half of the autocorrelation function because of its symmetry. Features of potential interest include the profile spreads S(2, 0) and S(0, 2), the cross-relation S(1, 1), and the second-degree spread S(2, 2). Figure 16.6-1 shows perspective views of the autocorrelation functions of the four Brodatz texture examples (5). Bhattacharyya distance measurements of these texture fields, performed by Faugeras and Pratt (5), are presented in Table 16.6-1. These B-distance measurements indicate that the autocorrelation shape features are marginally adequate for the set of four shape features, but unacceptable for fewer features. Tests by Faugeras and Pratt (5) verify that the B-distances are low for

(a) Sand

(b) Grass

(c) Wool

(d ) Raffia

FIGURE 16.6-1. Perspective views of autocorrelation functions of Brodatz texture fields.


TABLE 16.6-1. Bhattacharyya Distance of Texture Feature Sets for Prototype Texture Fields: Autocorrelation Features

Field Pair       Set 1a   Set 2b   Set 3c
Grass – sand     5.05     4.29     2.92
Grass – raffia   7.07     5.32     3.57
Grass – wool     2.37     0.21     0.04
Sand – raffia    1.49     0.58     0.35
Sand – wool      6.55     4.93     3.14
Raffia – wool    8.70     5.96     3.78
Average          5.21     3.55     2.30

a Set 1: S(2,0), S(0,2), S(1,1), S(2,2).
b Set 2: S(1,1), S(2,2).
c Set 3: S(2,2).

the stochastic field pairs of Figure 16.5-9, which have the same autocorrelation functions but are visually distinct.
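The spread measures of Eq. 16.6-3 are straightforward to estimate from a windowed autocorrelation. The sketch below is a minimal illustration: it computes A_F(m, n) by direct correlation over lags 0 ≤ m ≤ T and -T ≤ n ≤ T, normalizes the result to sum to one before taking the moments (an assumption on our part, so that η_m and η_n behave as centroids), and returns the four features S(2,0), S(0,2), S(1,1), and S(2,2) of Table 16.6-1.

```python
import numpy as np
from scipy.signal import correlate2d

def autocorrelation_spread_features(window, T=8):
    """Autocorrelation spread features of Eq. 16.6-3 (Faugeras and Pratt)."""
    f = window.astype(float)
    full = correlate2d(f, f, mode="full")        # all lags of the autocorrelation
    cy, cx = f.shape[0] - 1, f.shape[1] - 1      # index of the zero-lag term
    acf = full[cy:cy + T + 1, cx - T:cx + T + 1]  # m = 0..T, n = -T..T

    acf = acf / acf.sum()                        # normalization assumed here
    m = np.arange(0, T + 1)[:, None]
    n = np.arange(-T, T + 1)[None, :]
    eta_m = np.sum(m * acf)                      # Eq. 16.6-3b
    eta_n = np.sum(n * acf)                      # Eq. 16.6-3c

    def spread(u, v):                            # Eq. 16.6-3a
        return np.sum((m - eta_m) ** u * (n - eta_n) ** v * acf)

    # The four features of Table 16.6-1, Set 1.
    return {"S20": spread(2, 0), "S02": spread(0, 2),
            "S11": spread(1, 1), "S22": spread(2, 2)}
```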

16.6.4. Decorrelation Methods

Stochastic texture fields generated by the model of Figure 16.5-1 can be described quite compactly by specification of the spatial operator O{ · } and the stationary first-order probability density p(W) of the independent, identically distributed generating process W(j, k). This observation has led to a texture feature extraction procedure, developed by Faugeras and Pratt (5), in which an attempt has been made to invert the model and estimate its parameters. Figure 16.6-2 is a block diagram of their decorrelation method of texture feature extraction. In the first step of the method, the spatial autocorrelation function A_F(m, n) is measured over a texture field to be analyzed. The autocorrelation function is then used to develop a whitening filter, with an impulse response H_W(j, k), using techniques described in Section 19.2. The whitening filter is a special type of decorrelation operator. It is used to generate the whitened field

\hat{W}(j, k) = F(j, k) * H_W(j, k)    (16.6-4)

This whitened field, which is spatially uncorrelated, can be utilized as an estimate of the independent generating process W ( j, k ) by forming its first-order histogram.


FIGURE 16.6-2. Decorrelation method of texture feature extraction.

FIGURE 16.6-3. Whitened Brodatz texture fields: (a) sand; (b) grass; (c) wool; (d) raffia.


FIGURE 16.6-4. First-order histograms of whitened Brodatz texture fields.

If W(j, k) were known exactly, then, in principle, it could be used to identify O{ · } from the texture observation F(j, k). But the whitened field estimate Ŵ(j, k) can only be used to identify the autocorrelation function, which, of course, is already known. As a consequence, the texture generation model cannot be inverted. However, the shape of the histogram of Ŵ(j, k), together with the shape of the autocorrelation function, has proved to provide useful texture features. Figure 16.6-3 shows the whitened texture fields of the Brodatz test images. Figure 16.6-4 provides plots of their histograms. The whitened fields are observed to be visually distinctive; their histograms are also different from one another. Tables 16.6-2 and 16.6-3 list, respectively, the Bhattacharyya distance measurements for histogram shape features alone, and histogram and autocorrelation shape features. The B-distance is relatively low for some of the test textures for histogram-only features. A combination of the autocorrelation shape and histogram shape features provides good results, as noted in Table 16.6-3. An obvious disadvantage of the decorrelation method of texture measurement, as just described, is the large amount of computation involved in generating the whitening operator. An alternative is to use an approximate decorrelation operator. Two candidates, investigated by Faugeras and Pratt (5), are the Laplacian and Sobel gradients. Figure 16.6-5 shows the resultant decorrelated fields for these operators. The B-distance measurements using the Laplacian and Sobel gradients are presented in Tables 16.6-2 and 16.6-3. These tests indicate that the whitening operator is superior, on average, to the Laplacian operator. But the Sobel operator yields the largest average and largest minimum B-distances.
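The approximate-decorrelation alternative can be sketched very simply: filter the texture with a Laplacian standing in for the whitening operator, then summarize the first-order histogram of the filtered field by its mean, standard deviation, skewness, and kurtosis (the SM, SD, SS, SK descriptors referenced in the tables below). The sketch assumes NumPy and SciPy are available, and the particular 3 × 3 Laplacian mask is an illustrative choice.

import numpy as np
from scipy.ndimage import convolve   # assumed available

def laplacian_histogram_features(texture):
    """Approximate-decorrelation texture features: filter with a 3x3 Laplacian
    (a stand-in for the whitening operator) and describe the first-order
    histogram of the filtered field by its shape moments."""
    lap = np.array([[ 0.0, -1.0,  0.0],
                    [-1.0,  4.0, -1.0],
                    [ 0.0, -1.0,  0.0]])   # one common Laplacian mask (an assumption)
    d = convolve(texture.astype(float), lap, mode='reflect').ravel()
    mean = d.mean()
    sd = d.std()
    skew = np.mean(((d - mean) / sd) ** 3)   # SS, histogram skewness
    kurt = np.mean(((d - mean) / sd) ** 4)   # SK, histogram kurtosis
    return {'SM': mean, 'SD': sd, 'SS': skew, 'SK': kurt}

print(laplacian_histogram_features(np.random.rand(64, 64)))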

TABLE 16.6-2 Bhattacharyya Distance of Texture Feature Sets for Prototype Texture Fields: Histogram Features Texture Feature Whitening Set 2b Set 1 1.29 3.48 2.23 2.23 7.73 4.59 3.59 3.51 4.43 1.53 2.17 7.65 7.42 1.40 3.13 1.24 2.14 1.57 0.28 2.19 1.76 0.13 3.38 0.55 1.87 2.20 2.98 5.09 9.98 7.73 6.31 1.28 0.19 0.66 9.90 7.15 1.00 1.67 4.79 5.01 2.31 3.66 Set 2 Set 3 Set 4 Set 1 Set 2 4.52 1.04 1.59 12.60 12.55 3.87 6.03 4.20 0.88 0.39 1.47 8.24 2.19 10.93 0.24 1.07 0.14 0.51 0.52 4.04 0.77 Set 3c Set 4d Laplacian Sobel Set 3 4.41 0.27 1.01 3.51 1.67 0.41 1.88 Set 4 2.31 0.02 1.46 2.30 0.56 1.41 1.35

Field Pair 4.61 1.15 1.68

Set 1a

Grass – sand

Grass – raffia

Grass – wool

Sand – raffia

12.76

Sand – wool 4.20 6.14

12.61

Raffia – wool

Average

a Set 1: SM, SD, SS, SK.  b Set 2: SS, SK.  c Set 3: SS.  d Set 4: SK.


536 Texture Feature Whitening Set 2b Set 4 7.48 4.66 1.70 12.98 15.72 7.96 8.42 8.98 8.89 7.20 5.88 13.93 13.75 10.90 8.47 14.43 14.38 12.72 10.86 3.85 3.76 2.74 2.49 6.75 18.75 17.28 12.20 4.64 4.59 2.48 2.31 5.62 10.61 10.49 8.74 6.95 9.46 6.39 6.37 5.61 4.21 15.34 12.34 8.15 4.05 6.40 12.3 11.19 9.08 Set 1 Set 2 Set 3 Set 4 Set 1 Set 2 9.72 8.34 4.03 15.08 19.08 13.14 11.57 9.72 10.32 17.43 13.22 1.87 6.56 8.94 Set 3c Laplacian Sobel Set 3 11.48 6.33 1.87 5.39 10.52 8.24 7.31 Set 4 10.12 4.59 1.72 5.13 8.29 6.08 5.99

TABLE 16.6-3 Bhattacharyya Distance of Texture Feature Sets for Prototype Texture Fields: Autocorrelation and Histogram Features

Field Pair

Set 1a

Grass – sand

9.80

Grass – raffia

8.47

Grass – wool

4.17

Sand – raffia

15.26

Sand – wool

19.14

Raffia – wool

13.29

Average

11.69

a Set 1: SM, SD, SS, SK, S(2,0), S(0,2), S(1,1), S(2,2).  b Set 2: SS, SK, S(2,0), S(0,2), S(1,1), S(2,2).  c Set 3: SS, SK, S(1,1), S(2,2).  d Set 4: SS, SK, S(2,2).


FIGURE 16.6-5. Laplacian and Sobel gradients of Brodatz texture fields: (a) Laplacian, sand; (b) Sobel, sand; (c) Laplacian, raffia; (d) Sobel, raffia.

16.6.5. Dependency Matrix Methods

Haralick et al. (7) have proposed a number of textural features based on the joint amplitude histogram of pairs of pixels. If an image region contains fine texture, the two-dimensional histogram of pixel pairs will tend to be uniform, and for coarse texture, the histogram values will be skewed toward the diagonal of the histogram. Consider the pair of pixels F(j, k) and F(m, n) that are separated by r radial units at an angle θ with respect to the horizontal axis. Let P(a, b; j, k, r, θ) represent the two-dimensional histogram measurement of an image over some W × W window where each pixel is quantized over a range 0 ≤ a, b ≤ L – 1. The two-dimensional histogram can be considered as an estimate of the joint probability distribution
P(a, b; j, k, r, θ) ≈ PR[F(j, k) = a, F(m, n) = b]      (16.6-5)


FIGURE 16.6-6. Geometry for measurement of gray scale dependency matrix.

For each member of the parameter set (j, k, r, θ), the two-dimensional histogram may be regarded as an L × L array of numbers relating the measured statistical dependency of pixel pairs. Such an array has been called a gray scale dependency matrix or a co-occurrence matrix. Because an L × L histogram array must be accumulated for each image point (j, k) and separation set (r, θ) under consideration, it is usually computationally necessary to restrict the angular and radial separation to a limited number of values. Figure 16.6-6 illustrates geometrical relationships of histogram measurements made for four radial separation points and angles of θ = 0, π/4, π/2, 3π/4 radians under the assumption of angular symmetry. To obtain statistical confidence in estimation of the joint probability distribution, the histogram must contain a reasonably large average occupancy level. This can be achieved either by restricting the number of amplitude quantization levels or by utilizing a relatively large measurement window. The former approach results in a loss of accuracy in the measurement of low-amplitude texture, while the latter approach causes errors if the texture changes over the large window. A typical compromise is to use 16 gray levels and a window of about 30 to 50 pixels on each side. Perspective views of joint amplitude histograms of two texture fields are presented in Figure 16.6-7. For a given separation set (r, θ), the histogram obtained for fine texture tends to be more uniformly dispersed than the histogram for coarse texture. Texture coarseness can be measured in terms of the relative spread of histogram occupancy cells about the main diagonal of the histogram. Haralick et al. (7) have proposed a number of spread indicators for texture measurement. Several of these have been presented in Section 16.2. As an example, the inertia function of Eq. 16.2-15 results in a texture measure of the form

T(j, k, r, θ) = Σ (a = 0 to L–1) Σ (b = 0 to L–1) (a – b)² P(a, b; j, k, r, θ)      (16.6-6)


FIGURE 16.6-7. Perspective views of gray scale dependency matrices for r = 4, θ = 0: (a) grass; (b) dependency matrix, grass; (c) ivy; (d) dependency matrix, ivy.

If the textural region of interest is suspected to be angularly invariant, it is reasonable to average over the measurement angles of a particular measure to produce the mean textural measure (20)
MT(j, k, r) = (1/Nθ) Σθ T(j, k, r, θ)      (16.6-7)

where the summation is over the angular measurements, and N θ represents the number of such measurements. Similarly, an angular-independent texture variance may be defined as
VT(j, k, r) = (1/Nθ) Σθ [T(j, k, r, θ) – MT(j, k, r)]²      (16.6-8)


FIGURE 16.6-8. Laws microstructure texture feature extraction method.

Another useful measurement is the angular independent spread defined by

S(j, k, r) = MAXθ{ T(j, k, r, θ) } – MINθ{ T(j, k, r, θ) }      (16.6-9)
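A minimal sketch of the dependency matrix measurements follows: it estimates the co-occurrence matrix for one displacement, evaluates the inertia measure of Eq. 16.6-6, and forms the angular mean, variance, and spread of Eqs. 16.6-7 to 16.6-9 over the four standard angles. NumPy is assumed; the 16-level quantization follows the compromise mentioned above, while the displacement-to-angle mapping and the normalization are routine implementation choices.

import numpy as np

def cooccurrence(window, dj, dk, levels=16):
    """Estimate the gray scale dependency (co-occurrence) matrix P(a, b) of a
    window for a single displacement (dj, dk), with values quantized to `levels`."""
    q = np.floor(window.astype(float) / window.max() * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    rows, cols = q.shape
    for j in range(rows):
        for k in range(cols):
            jj, kk = j + dj, k + dk
            if 0 <= jj < rows and 0 <= kk < cols:
                P[q[j, k], q[jj, kk]] += 1
    return P / P.sum()

def inertia(P):
    """Inertia (spread about the diagonal), Eq. 16.6-6."""
    a = np.arange(P.shape[0])[:, None]
    b = np.arange(P.shape[1])[None, :]
    return np.sum((a - b) ** 2 * P)

def angular_statistics(window, r):
    """Mean, variance, and spread of the inertia measure over the four standard
    angles (Eqs. 16.6-7 to 16.6-9) for radial separation r."""
    # Row/column displacements for theta = 0, pi/4, pi/2, 3*pi/4 (rows grow downward).
    displacements = [(0, r), (-r, r), (-r, 0), (-r, -r)]
    T = np.array([inertia(cooccurrence(window, dj, dk)) for dj, dk in displacements])
    return T.mean(), T.var(), T.max() - T.min()

print(angular_statistics(np.random.rand(32, 32), r=4))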

16.6.6. Microstructure Methods

Examination of the whitened, Laplacian, and Sobel gradient texture fields of Figures 16.6-3 and 16.6-5 reveals that they appear to accentuate the microstructure of the texture. This observation was the basis of a texture feature extraction scheme developed by Laws (29), and described in Figure 16.6-8. Laws proposed that the set of nine 3 × 3 pixel impulse response arrays Hi(j, k), shown in Figure 16.6-9, be convolved with a texture field to accentuate its microstructure. The ith microstructure array is defined as

Mi(j, k) = F(j, k) * Hi(j, k)      (16.6-10)


FIGURE 16.6-9. Laws microstructure impulse response arrays.

Then, the energy of these microstructure arrays is measured by forming their moving window standard deviation Ti ( j, k ) according to Eq. 16.2-2, over a window that contains a few cycles of the repetitive texture. Figure 16.6-10 shows a mosaic of several Brodatz texture fields that have been used to test the Laws feature extraction method. Note that some of the texture fields appear twice in the mosaic. Figure 16.6-11 illustrates the texture arrays Ti ( j, k ) . In classification tests of the Brodatz textures performed by Laws (29), the correct texture was identified in nearly 90% of the trials. Many of the microstructure detection operators of Figure 16.6-9 have been encountered previously in this book: the pyramid average, the Sobel horizontal and vertical gradients, the weighted line horizontal and vertical gradients, and the cross second derivative. The nine Laws operators form a basis set that can be generated from all outer product combinations of the three vectors
v1 = (1/6) [1, 2, 1]^T      (16.6-11a)

v2 = (1/2) [1, 0, –1]^T      (16.6-11b)


FIGURE 16.6-10. Mosaic of Brodatz texture fields.

v3 = (1/2) [1, –2, 1]^T      (16.6-11c)
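A short sketch of the Laws procedure, assuming NumPy and SciPy: the nine masks are generated as all outer products of v1, v2, and v3, each is convolved with the texture (Eq. 16.6-10), and the energy of each microstructure array is taken as its moving-window standard deviation. The 15-pixel window is an illustrative choice.

import numpy as np
from scipy.ndimage import convolve, uniform_filter   # assumed available

# Basis vectors of Eq. 16.6-11.
v1 = np.array([1.0, 2.0, 1.0]) / 6.0
v2 = np.array([1.0, 0.0, -1.0]) / 2.0
v3 = np.array([1.0, -2.0, 1.0]) / 2.0

# The nine Laws 3x3 impulse response arrays: all outer products of the basis vectors.
laws_masks = [np.outer(a, b) for a in (v1, v2, v3) for b in (v1, v2, v3)]

def laws_energy_features(texture, window=15):
    """Convolve the texture with each Laws mask (Eq. 16.6-10) and measure the
    energy of each microstructure array as its moving-window standard deviation."""
    F = texture.astype(float)
    features = []
    for H in laws_masks:
        M = convolve(F, H, mode='reflect')               # Mi(j, k)
        local_mean = uniform_filter(M, size=window)
        local_sq = uniform_filter(M * M, size=window)
        features.append(np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0)))
    return features                                       # nine Ti(j, k) arrays

energies = laws_energy_features(np.random.rand(128, 128))
print(len(energies), energies[0].shape)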

Alternatively, the 3 × 3 Chebyshev basis set proposed by Haralick (30) for edge detection, as described in Section 16.3.3, can be used for texture measurement. The first Chebyshev basis vector is v1 = (1/3) [1, 1, 1]^T. The other two are identical to Eqs. 16.6-11b and 16.6-11c. The Laws procedure can be extended by using larger size Chebyshev arrays or other types of basis arrays (31). Ade (32) has suggested a microstructure texture feature extraction procedure similar in nature to the Laws method, which is based on a principal components transformation of a texture sample. In the development of this transformation, pixels within a 3 × 3 neighborhood are regarded as being column stacked into a 9 × 1 vector, as shown in Figure 16.6-12a. Then a 9 × 9 covariance matrix K that specifies all pairwise covariance relationships of pixels within the stacked vector is estimated from a set of prototype texture fields. Next, a 9 × 9 transformation matrix T that diagonalizes the covariance matrix K is computed, as described in Eq. 5.8-7. The rows of T are eigenvectors of the principal components transformation.


FIGURE 16.6-11. Laws microstructure texture features: (a) Laws no. 1; (b) Laws no. 2; (c) Laws no. 3; (d) Laws no. 4; (e) Laws no. 5; (f) Laws no. 6.


FIGURE 16.6-11 (continued). Laws microstructure texture features: (g) Laws no. 7; (h) Laws no. 8; (i) Laws no. 9.

Each eigenvector is then cast into a 3 × 3 impulse response array by the destacking operation of Eq. 5.3-4. The resulting nine eigenmatrices are then used in place of the Laws fixed impulse response arrays, as shown in Figure 16.6-8. Ade (32,33) has computed eigenmatrices for a Brodatz texture field and a cloth sample. Interestingly, these eigenmatrices are similar in structure to the Laws arrays.

16.6.7. Gabor Filter Methods

The microstructure method of texture feature extraction is not easily scalable. Microstructure arrays must be derived to match the inherent periodicity of each texture to be characterized. Bovik et al. (34–36) have utilized Gabor filters (37) as an efficient means of scaling the impulse response function arrays of Figure 16.6-8 to the texture periodicity. A two-dimensional Gabor filter is a complex field sinusoidal grating that is modulated by a two-dimensional Gaussian function in the spatial


FIGURE 16.6-12. Neighborhood covariance relationships: (a) 3 × 3 neighborhood; (b) pixel relationships.

domain (35). Gabor filters have tunable orientation and radial frequency passbands and tunable center frequencies. A special case of the Gabor filter is the daisy petal filter, in which the filter lobes radiate from the origin of the spatial frequency domain. The continuous domain impulse response function of the daisy petal Gabor filter is given by (35)
H(x, y) = G(x′, y′) exp{ 2πiFx′ }      (16.6-12)

where F is a scaling factor and i = √–1. The Gaussian component is

G(x, y) = [1/(2πλσ²)] exp{ –[(x/λ)² + y²] / (2σ²) }      (16.6-13)


where σ is the Gaussian spread factor and λ is the aspect ratio between the x and y axes. The rotation of coordinates is specified by
(x′, y′) = (x cos φ + y sin φ, –x sin φ + y cos φ)      (16.6-14)

where φ is the orientation angle with respect to the x axis. The continuous domain filter transfer function is given by (35)
H(u, v) = exp{ –2π²σ²[ (u′ – F)² + (v′)² ] }      (16.6-15)
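The following sketch samples the daisy petal Gabor impulse response of Eqs. 16.6-12 to 16.6-14 on a discrete grid, assuming NumPy; the grid size and the parameter values in the example are illustrative, not values taken from the text.

import numpy as np

def gabor_impulse_response(size, F, phi, sigma, lam):
    """Sample the continuous daisy petal Gabor impulse response of Eqs. 16.6-12
    to 16.6-14 on a size x size grid centered at the origin.  F: radial center
    frequency, phi: orientation, sigma: Gaussian spread, lam: aspect ratio.
    Returns a complex-valued array."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    y = -y                                   # row index increases downward
    # Rotation of coordinates, Eq. 16.6-14.
    xp = x * np.cos(phi) + y * np.sin(phi)
    yp = -x * np.sin(phi) + y * np.cos(phi)
    # Gaussian component, Eq. 16.6-13.
    G = np.exp(-((xp / lam) ** 2 + yp ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * lam * sigma ** 2)
    # Complex sinusoidal grating, Eq. 16.6-12.
    return G * np.exp(2j * np.pi * F * xp)

H = gabor_impulse_response(size=31, F=0.1, phi=np.pi / 4, sigma=4.0, lam=1.0)
print(H.shape, np.abs(H).max())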

Figure 16.6-13 shows the relationship between the real and imaginary components of the impulse response array and the magnitude of the transfer function (35).

FIGURE 16.6-13. Relationship between impulse response array and transfer function of a Gabor filter: (a) real part of H(x, y); (b) imaginary part of H(x, y); (c) H(u, v).


The impulse response array is composed of sine-wave gratings within the elliptical region. The half energy profile of the transfer function is shown in gray. In the comparative study of texture classification methods by Randen and Husoy (23), the Gabor filter method, like many other methods, gave mixed results. It performed well on some texture samples, but poorly on others.

16.6.8. Transform and Wavelet Methods

The Fourier spectra method of texture feature extraction can be generalized to other unitary transforms. The concept is straightforward. An N × N texture sample is subdivided into M × M pixel arrays, and a unitary transform is performed for each array yielding an M² × 1 feature vector. The window size needs to be large enough to contain several cycles of the texture periodicity. Mallat (38) has used the discrete wavelet transform, based on Haar wavelets (see Section 8.4.2), as a means of generating texture feature vectors. Improved results have been obtained by Unser (39), who has used a complete Haar-based wavelet transform for an 8 × 8 window. In their comparative study of texture classification, Randen and Husoy (23) used several types of Daubechies transforms up to size 10 (see Section 8.4.4). The transform and wavelet methods provide reasonably good classification for many texture samples (23). However, the computational requirement is high for large windows.

16.6.9. Singular-Value Decomposition Methods

Ashjari (40) has proposed a texture measurement method based on the singular-value decomposition of a texture sample. In this method, an N × N texture sample is treated as an N × N matrix X and the amplitude-ordered set of singular values s(n) for n = 1, 2, ..., N is computed, as described in Appendix A1.2. If the elements of X are spatially unrelated to one another, the singular values tend to be uniformly distributed in amplitude. On the other hand, if the elements of X are highly structured, the singular-value distribution tends to be skewed such that the lower-order singular values are much larger than the higher-order ones.

Figure 16.6-14 contains measurements of the singular-value distributions of the four Brodatz textures performed by Ashjari (40). In this experiment, the 512 × 512 pixel texture originals were first subjected to a statistical rescaling process to produce four normalized texture images whose first-order distributions were Gaussian with identical moments. Next, these normalized texture images were subdivided into 196 nonoverlapping 32 × 32 pixel blocks, and an SVD transformation was taken of each block. Figure 16.6-14 is a plot of the average value of each singular value. The shape of the singular-value distributions can be quantified by the one-dimensional shape descriptors defined in Section 16.2. Table 16.6-4 lists Bhattacharyya distance measurements obtained by Ashjari (40) for the mean, standard deviation, skewness, and kurtosis shape descriptors. For this experiment, the B-distances are relatively high, and therefore good classification results should be expected.


FIGURE 16.6-14. Singular-value distributions of Brodatz texture fields.

TABLE 16.6-4. Bhattacharyya Distance of SVD Texture Feature Sets for Prototype Texture Fields: SVD Features

Field Pair       B-Distance
Grass – sand       1.25
Grass – raffia     2.42
Grass – wool       3.31
Sand – raffia      6.33
Sand – wool        2.56
Raffia – wool      9.24
Average            4.19
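A minimal sketch of this singular-value measurement, assuming NumPy: the texture is tiled into nonoverlapping blocks, the ordered singular values of each block are averaged, and the resulting distribution is summarized by simple shape descriptors. The statistical rescaling step used in Ashjari's experiment is omitted here for brevity.

import numpy as np

def svd_texture_features(texture, block=32):
    """Average ordered singular-value distribution over nonoverlapping
    block x block tiles of a texture, plus shape descriptors of that
    distribution (mean, standard deviation, skewness, kurtosis)."""
    F = texture.astype(float)
    rows, cols = F.shape
    spectra = []
    for j in range(0, rows - block + 1, block):
        for k in range(0, cols - block + 1, block):
            tile = F[j:j + block, k:k + block]
            spectra.append(np.linalg.svd(tile, compute_uv=False))   # ordered s(n)
    s = np.mean(spectra, axis=0)            # average singular-value distribution
    mean = s.mean()
    sd = s.std()
    skew = np.mean(((s - mean) / sd) ** 3)
    kurt = np.mean(((s - mean) / sd) ** 4)
    return s, {'mean': mean, 'std': sd, 'skewness': skew, 'kurtosis': kurt}

s, shape = svd_texture_features(np.random.rand(256, 256))
print(shape)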

REFERENCES
1. H. C. Andrews, Introduction to Mathematical Techniques in Pattern Recognition, WileyInterscience, New York, 1972. 2. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed., Wiley-Interscience, New York, 2001. 3. K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed., Academic Press, New York, 1990. 4. W. S. Meisel, Computer-Oriented Approaches to Pattern Recognition, Academic Press, New York, 1972.


5. O. D. Faugeras and W. K. Pratt, “Decorrelation Methods of Texture Feature Extraction,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-2, 4, July 1980, 323–332. 6. R. O. Duda, “Image Data Extraction,” unpublished notes, July 1975. 7. R. M. Haralick, K. Shanmugan, and I. Dinstein, “Texture Features for Image Classification,” IEEE Trans. Systems, Man and Cybernetics, SMC-3, November 1973, 610–621. 8. G. G. Lendaris and G. L. Stanley, “Diffraction Pattern Sampling for Automatic Pattern Recognition,” Proc. IEEE, 58, 2, February 1970, 198–216. 9. R. M. Pickett, “Visual Analysis of Texture in the Detection and Recognition of Objects,” in Picture Processing and Psychopictorics, B. C. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970, 289–308. 10. J. K. Hawkins, “Textural Properties for Pattern Recognition,” in Picture Processing and Psychopictorics, B. C. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970, 347–370. 11. P. Brodatz, Texture: A Photograph Album for Artists and Designers, Dover Publications, New York, 1956. 12. J. W. Woods, “Two-Dimensional Discrete Markov Random Fields,” IEEE Trans. Information Theory, IT-18, 2, March 1972, 232–240. 13. B. Julesz, “Visual Pattern Discrimination,” IRE Trans. Information Theory, IT-8, 1, February 1962, 84–92. 14. B. Julesz et al., “Inability of Humans to Discriminate Between Visual Textures That Agree in Second-Order Statistics Revisited,” Perception, 2, 1973, 391–405. 15. B. Julesz, Foundations of Cyclopean Perception, University of Chicago Press, Chicago, 1971. 16. B. Julesz, “Experiments in the Visual Perception of Texture,” Scientific American, 232, 4, April 1975, 2–11. 17. I. Pollack, Perceptual Psychophysics, 13, 1973, 276–280. 18. S. R. Purks and W. Richards, “Visual Texture Discrimination Using Random-Dot Patterns,” J. Optical Society America, 67, 6, June 1977, 765–771. 19. W. K. Pratt, O. D. Faugeras, and A. Gagalowicz, “Visual Discrimination of Stochastic Texture Fields,” IEEE Trans. Systems, Man and Cybernetics, SMC-8, 11, November 1978, 796–804. 20. E. L. Hall et al., “A Survey of Preprocessing and Feature Extraction Techniques for Radiographic Images,” IEEE Trans. Computers, C-20, 9, September 1971, 1032–1044. 21. R. M. Haralick, “Statistical and Structural Approach to Texture,” Proc. IEEE, 67, 5, May 1979, 786–804. 22. T. R. Reed and J. M. H. duBuf, “A Review of Recent Texture Segmentation and Feature Extraction Techniques,” CVGIP: Image Understanding, 57, May 1993, 358–372. 23. T. Randen and J. H. Husoy, “Filtering for Classification: A Comparative Study,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI 21, 4, April 1999, 291–310. 24. A. Rosenfeld, “Automatic Recognition of Basic Terrain Types from Aerial Photographs,” Photogrammic Engineering, 28, 1, March 1962, 115–132. 25. J. M. Coggins and A. K. Jain, “A Spatial Filtering Approach to Texture Analysis,” Pattern Recognition Letters, 3, 3, 1985, 195–203.


26. R. P. Kruger, W. B. Thompson, and A. F. Turner, “Computer Diagnosis of Pneumoconiosis,” IEEE Trans. Systems, Man and Cybernetics, SMC-4, 1, January 1974, 40–49. 27. R. N. Sutton and E. L. Hall, “Texture Measures for Automatic Classification of Pulmonary Disease,” IEEE Trans. Computers, C-21, July 1972, 667–676. 28. A. Rosenfeld and E. B. Troy, “Visual Texture Analysis,” Proc. UMR–Mervin J. Kelly Communications Conference, University of Missouri–Rolla, Rolla, MO, October 1970, Sec. 10-1. 29. K. I. Laws, “Textured Image Segmentation,” USCIPI Report 940, University of Southern California, Image Processing Institute, Los Angeles, January 1980. 30. R. M. Haralick, “Digital Step Edges from Zero Crossing of Second Directional Derivatives,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-6, 1, January 1984, 58–68. 31. M. Unser and M. Eden, “Multiresolution Feature Extraction and Selection for Texture Segmentation,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-11, 7, July 1989, 717–728. 32. F. Ade, “Characterization of Textures by Eigenfilters,” Signal Processing, September 1983. 33. F. Ade, “Application of Principal Components Analysis to the Inspection of Industrial Goods,” Proc. SPIE International Technical Conference/Europe, Geneva, April 1983. 34. M. Clark and A. C. Bovik, “Texture Discrimination Using a Model of Visual Cortex,” Proc. IEEE International Conference on Systems, Man and Cybernetics, Atlanta, GA, 1986 35. A. C. Bovik, M. Clark, and W. S. Geisler, “Multichannel Texture Analysis Using Localized Spatial Filters,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-12, 1, January 1990, 55–73. 36. A. C. Bovik, “Analysis of Multichannel Narrow-Band Filters for Image Texture Segmentation,” IEEE Trans. Signal Processing, 39, 9, September 1991, 2025–2043. 37. D. Gabor, “Theory of Communication,” J. Institute of Electrical Engineers, 93, 1946, 429–457. 38. S. G. Mallat, “A Theory for Multiresolution Signal Decomposition: The Wavelet Representation,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-11, 7, July 1989, 674–693. 39. M. Unser, “Texture Classification and Segmentation Using Wavelet Frames,” IEEE Trans. Image Processing, IP-4, 11, November 1995, 1549–1560. 40. B. Ashjari, “Singular Value Decomposition Texture Measurement for Image Classification,” Ph.D. dissertation, University of Southern California, Department of Electrical Engineering, Los Angeles February 1982.


17
IMAGE SEGMENTATION

Segmentation of an image entails the division or separation of the image into regions of similar attribute. The most basic attribute for segmentation is image luminance amplitude for a monochrome image and color components for a color image. Image edges and texture are also useful attributes for segmentation. The definition of segmentation adopted in this chapter is deliberately restrictive; no contextual information is utilized in the segmentation. Furthermore, segmentation does not involve classifying each segment. The segmenter only subdivides an image; it does not attempt to recognize the individual segments or their relationships to one another. There is no theory of image segmentation. As a consequence, no single standard method of image segmentation has emerged. Rather, there is a collection of ad hoc methods that have received some degree of popularity. Because the methods are ad hoc, it would be useful to have some means of assessing their performance. Haralick and Shapiro (1) have established the following qualitative guideline for a good image segmentation: “Regions of an image segmentation should be uniform and homogeneous with respect to some characteristic such as gray tone or texture. Region interiors should be simple and without many small holes. Adjacent regions of a segmentation should have significantly different values with respect to the characteristic on which they are uniform. Boundaries of each segment should be simple, not ragged, and must be spatially accurate.” Unfortunately, no quantitative image segmentation performance metric has been developed. Several generic methods of image segmentation are described in the following sections. Because of their complexity, it is not feasible to describe all the details of the various algorithms. Surveys of image segmentation methods are given in References 1 to 6.

17.1. AMPLITUDE SEGMENTATION METHODS

This section considers several image segmentation methods based on the thresholding of luminance or color components of an image. An amplitude projection segmentation technique is also discussed.

17.1.1. Bilevel Luminance Thresholding

Many images can be characterized as containing some object of interest of reasonably uniform brightness placed against a background of differing brightness. Typical examples include handwritten and typewritten text, microscope biomedical samples, and airplanes on a runway. For such images, luminance is a distinguishing feature that can be utilized to segment the object from its background. If an object of interest is white against a black background, or vice versa, it is a trivial task to set a midgray threshold to segment the object from the background. Practical problems occur, however, when the observed image is subject to noise and when both the object and background assume some broad range of gray scales. Another frequent difficulty is that the background may be nonuniform.

Figure 17.1-1a shows a digitized typewritten text consisting of dark letters against a lighter background. A gray scale histogram of the text is presented in Figure 17.1-1b. The expected bimodality of the histogram is masked by the relatively large percentage of background pixels. Figure 17.1-1c to e are threshold displays in which all pixels brighter than the threshold are mapped to unity display luminance and all the remaining pixels below the threshold are mapped to the zero level of display luminance. The photographs illustrate a common problem associated with image thresholding. If the threshold is set too low, portions of the letters are deleted (the stem of the letter “p” is fragmented). Conversely, if the threshold is set too high, object artifacts result (the loop of the letter “e” is filled in).

Several analytic approaches to the setting of a luminance threshold have been proposed (7,8). One method is to set the gray scale threshold at a level such that the cumulative gray scale count matches an a priori assumption of the gray scale probability distribution (9). For example, it may be known that black characters cover 25% of the area of a typewritten page. Thus, the threshold level on the image might be set such that the quartile of pixels with the lowest luminance are judged to be black.

Another approach to luminance threshold selection is to set the threshold at the minimum point of the histogram between its bimodal peaks (10). Determination of the minimum is often difficult because of the jaggedness of the histogram. A solution to this problem is to fit the histogram values between the peaks with some analytic function and then obtain its minimum by differentiation. For example, let y and x represent the histogram ordinate and abscissa, respectively. Then the quadratic curve

y = ax² + bx + c      (17.1-1)


FIGURE 17.1-1. Luminance thresholding segmentation of typewritten text: (a) gray scale text; (b) histogram; (c) high threshold, T = 0.67; (d) medium threshold, T = 0.50; (e) low threshold, T = 0.10; (f) histogram, Laplacian mask.


where a, b, and c are constants provides a simple histogram approximation in the vicinity of the histogram valley. The minimum histogram valley occurs for x = –b/2a. Papamarkos and Gatos (11) have extended this concept for threshold selection.

Weska et al. (12) have suggested the use of a Laplacian operator to aid in luminance threshold selection. As defined in Eq. 15.3-1, the Laplacian forms the spatial second partial derivative of an image. Consider an image region in the vicinity of an object in which the luminance increases from a low plateau level to a higher plateau level in a smooth ramplike fashion. In the flat regions and along the ramp, the Laplacian is zero. Large positive values of the Laplacian will occur in the transition region from the low plateau to the ramp; large negative values will be produced in the transition from the ramp to the high plateau. A gray scale histogram formed of only those pixels of the original image that lie at coordinates corresponding to very high or low values of the Laplacian tends to be bimodal with a distinctive valley between the peaks. Figure 17.1-1f shows the histogram of the text image of Figure 17.1-1a after the Laplacian mask operation.

If the background of an image is nonuniform, it often is necessary to adapt the luminance threshold to the mean luminance level (13,14). This can be accomplished by subdividing the image into small blocks and determining the best threshold level for each block by the methods discussed previously. Threshold levels for each pixel may then be determined by interpolation between the block centers. Yankowitz and Bruckstein (15) have proposed an adaptive thresholding method in which a threshold surface is obtained by interpolating an image only at points where its gradient is large.

17.1.2. Multilevel Luminance Thresholding

Effective segmentation can be achieved in some classes of images by a recursive multilevel thresholding method suggested by Tomita et al. (16). In the first stage of the process, the image is thresholded to separate brighter regions from darker regions by locating a minimum between luminance modes of the histogram. Then histograms are formed of each of the segmented parts. If these histograms are not unimodal, the parts are thresholded again. The process continues until the histogram of a part becomes unimodal. Figures 17.1-2 to 17.1-4 provide an example of this form of amplitude segmentation in which the peppers image is segmented into four gray scale segments.

17.1.3. Multilevel Color Component Thresholding

The multilevel luminance thresholding concept can be extended to the segmentation of color and multispectral images. Ohlander et al. (17, 18) have developed a segmentation scheme for natural color images based on multidimensional thresholding of color images represented by their RGB color components, their luma/chroma YIQ components, and by a set of nonstandard color components, loosely called intensity,


FIGURE 17.1-2. Multilevel luminance thresholding image segmentation of the peppers_mon image; first-level segmentation: (a) original; (b) original histogram; (c) segment 0; (d) segment 0 histogram; (e) segment 1; (f) segment 1 histogram.


FIGURE 17.1-3. Multilevel luminance thresholding image segmentation of the peppers_mon image; second-level segmentation, 0 branch: (a) segment 00; (b) segment 00 histogram; (c) segment 01; (d) segment 01 histogram.

hue, and saturation. Figure 17.1-5 provides an example of the property histograms of these nine color components for a scene. The histograms have been measured over those parts of the original scene that are relatively devoid of texture: the nonbusy parts of the scene. This important step of the segmentation process is necessary to avoid false segmentation of homogeneous textured regions into many isolated parts. If the property histograms are not all unimodal, an ad hoc procedure is invoked to determine the best property and the best level for thresholding of that property. The first candidate is image intensity. Other candidates are selected on a priority basis, depending on contrast level and location of the histogram modes. After a threshold level has been determined, the image is subdivided into its segmented parts. The procedure is then repeated on each part until the resulting property histograms become unimodal or the segmentation reaches a reasonable


FIGURE 17.1-4. Multilevel luminance thresholding image segmentation of the peppers_mon image; second-level segmentation, 1 branch: (a) segment 10; (b) segment 10 histogram; (c) segment 11; (d) segment 11 histogram.

stage of separation under manual surveillance. Ohlander's segmentation technique using multidimensional thresholding aided by texture discrimination has proved quite effective in simulation tests. However, a large part of the segmentation control has been performed by a human operator; human judgment, predicated on trial threshold setting results, is required for guidance. In Ohlander's segmentation method, the nine property values are obviously interdependent. The YIQ and intensity components are linear combinations of RGB; the hue and saturation measurements are nonlinear functions of RGB. This observation raises several questions. What types of linear and nonlinear transformations of RGB are best for segmentation? Ohta et al. (19) suggest an approximation to the spectral Karhunen–Loeve transform. How many property values should be used? What is the best form of property thresholding? Perhaps answers to these last two questions may


FIGURE 17.1-5. Typical property histograms for color image segmentation.

be forthcoming from a study of clustering techniques in pattern recognition (20). Property value histograms are really the marginal histograms of a joint histogram of property values. Clustering methods can be utilized to specify multidimensional decision boundaries for segmentation. This approach permits utilization of all the property values for segmentation and inherently recognizes their respective cross correlation. The following section discusses clustering methods of image segmentation.


17.1.4. Amplitude Projection

Image segments can sometimes be effectively isolated by forming the average amplitude projections of an image along its rows and columns (21,22). The horizontal (row) and vertical (column) projections are defined as
H(k) = (1/J) Σ (j = 1 to J) F(j, k)      (17.1-2)

and
V(j) = (1/K) Σ (k = 1 to K) F(j, k)      (17.1-3)

Figure 17.1-6 illustrates an application of gray scale projection segmentation of an image. The rectangularly shaped segment can be further delimited by taking projections over oblique angles.
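A small sketch of the projection computation of Eqs. 17.1-2 and 17.1-3, assuming NumPy; the thresholding of the projections to delimit a bright rectangular segment is an illustrative selection rule, not a procedure specified in the text.

import numpy as np

def amplitude_projections(F):
    """Row and column average amplitude projections, Eqs. 17.1-2 and 17.1-3."""
    H = F.mean(axis=0)       # H(k): average over rows j = 1..J
    V = F.mean(axis=1)       # V(j): average over columns k = 1..K
    return H, V

def projection_bounding_box(F, t):
    """Delimit a bright rectangular segment by thresholding the projections
    at level t (an illustrative selection rule)."""
    H, V = amplitude_projections(F)
    cols = np.where(H > t)[0]
    rows = np.where(V > t)[0]
    return rows.min(), rows.max(), cols.min(), cols.max()

# Example: a bright block on a dark background.
img = np.zeros((100, 100)); img[30:60, 20:70] = 1.0
print(projection_bounding_box(img, t=0.1))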

FIGURE 17.1-6. Gray scale projection image segmentation of a toy tank image: (a) row projection; (b) original; (c) segmentation; (d) column projection.


17.2. CLUSTERING SEGMENTATION METHODS

One of the earliest examples of image segmentation, by Haralick and Kelly (23) using data clustering, was the subdivision of multispectral aerial images of agricultural land into regions containing the same type of land cover. The clustering segmentation concept is simple; however, it is usually computationally intensive. Consider a vector x = [x1, x2, …, xN]^T of measurements at each pixel coordinate (j, k) in an image. The measurements could be point multispectral values, point color components, and derived color components, as in the Ohlander approach described previously, or they could be neighborhood feature measurements such as the moving window mean, standard deviation, and mode, as discussed in Section 16.2. If the measurement set is to be effective for image segmentation, data collected at various pixels within a segment of common attribute should be similar. That is, the data should be tightly clustered in an N-dimensional measurement space. If this condition holds, the segmenter design task becomes one of subdividing the N-dimensional measurement space into mutually exclusive compartments, each of which envelops typical data clusters for each image segment. Figure 17.2-1 illustrates the concept for two features. In the segmentation process, if a measurement vector for a pixel falls within a measurement space compartment, the pixel is assigned the segment name or label of that compartment.

Coleman and Andrews (24) have developed a robust and relatively efficient image segmentation clustering algorithm. Figure 17.2-2 is a flowchart that describes a simplified version of the algorithm for segmentation of monochrome images. The first stage of the algorithm involves feature computation. In one set of experiments, Coleman and Andrews used 12 mode measurements in square windows of size 1, 3, 7, and 15 pixels. The next step in the algorithm is the clustering stage, in which the optimum number of clusters is determined along with the feature space center of each cluster. In the segmenter, a given feature vector is assigned to its closest cluster center.
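The compartment assignment can be sketched as a nearest-cluster-center rule, assuming NumPy; the feature arrays and cluster centers in the example are synthetic placeholders.

import numpy as np

def segment_by_nearest_center(features, centers):
    """Assign every pixel to the label of its closest cluster center.
    features: (rows, cols, N) array of N measurements per pixel.
    centers:  (K, N) array of cluster centers in measurement space.
    Returns a (rows, cols) label map."""
    rows, cols, N = features.shape
    x = features.reshape(-1, N)                           # stack pixels
    # Squared Euclidean distance from every pixel vector to every center.
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(rows, cols)

# Example with two synthetic features (e.g., a local mean and a local standard deviation).
feats = np.random.rand(64, 64, 2)
centers = np.array([[0.25, 0.25], [0.75, 0.75]])
labels = segment_by_nearest_center(feats, centers)
print(np.bincount(labels.ravel()))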

FIGURE 17.2-1. Data clustering for two feature measurements.


FIGURE 17.2-2. Simplified version of Coleman–Andrews clustering image segmentation method.

The cluster computation algorithm begins by establishing two initial trial cluster centers. All feature vectors of an image are assigned to their closest cluster center. Next, the number of cluster centers is successively increased by one, and a clustering quality factor β is computed at each iteration until the maximum value of β is determined. This establishes the optimum number of clusters. When the number of clusters is incremented by one, the new cluster center becomes the feature vector that is farthest from its closest cluster center. The β factor is defined as

β = tr{SW} tr{SB}      (17.2-1)

where SW and SB are the within- and between-cluster scatter matrices, respectively, and tr { · } denotes the trace of a matrix. The within-cluster scatter matrix is computed as
SW = (1/K) Σ (k = 1 to K) (1/Mk) Σ (xi ∈ Sk) (xi – uk)(xi – uk)^T      (17.2-2)

where K is the number of clusters, Mk is the number of vector elements in the kth cluster, xi is a vector element in the kth cluster, u k is the mean of the kth cluster, and Sk is the set of elements in the kth cluster. The between-cluster scatter matrix is defined as
SB = (1/K) Σ (k = 1 to K) (uk – u0)(uk – u0)^T      (17.2-3)

where u 0 is the mean of all of the feature vectors as computed by
u0 = (1/M) Σ (i = 1 to M) xi      (17.2-4)


where M denotes the number of pixels to be clustered. Coleman and Andrews (24) have obtained subjectively good results for their clustering algorithm in the segmentation of monochrome and color images.

17.3. REGION SEGMENTATION METHODS

The amplitude and clustering methods described in the preceding sections are based on point properties of an image. The logical extension, as first suggested by Muerle and Allen (25), is to utilize spatial properties of an image for segmentation.

17.3.1. Region Growing

Region growing is one of the conceptually simplest approaches to image segmentation; neighboring pixels of similar amplitude are grouped together to form a segmented region. However, in practice, constraints, some of which are reasonably complex, must be placed on the growth pattern to achieve acceptable results. Brice and Fenema (26) have developed a region-growing method based on a set of simple growth rules. In the first stage of the process, pairs of quantized pixels are combined together in groups called atomic regions if they are of the same amplitude and are four-connected. Two heuristic rules are next invoked to dissolve weak boundaries between atomic regions. Referring to Figure 17.3-1, let R1 and R2 be two adjacent regions with perimeters P1 and P2, respectively, which have previously been merged. After the initial stages of region growing, a region may contain previously merged subregions of different amplitude values. Also, let C denote the length of the common boundary and let D represent the length of that portion of C for which the amplitude difference Y across the boundary is smaller than a significance factor ε1. The regions R1 and R2 are then merged if
D / MIN{P1, P2} > ε2      (17.3-1)

FIGURE 17.3-1. Region-growing geometry.


where ε2 is a constant typically set at ε2 = 1/2. This heuristic prevents merger of adjacent regions of the same approximate size, but permits smaller regions to be absorbed into larger regions. The second rule merges weak common boundaries remaining after application of the first rule. Adjacent regions are merged if

D / C > ε3      (17.3-2)

where ε3 is a constant set at about ε3 = 3/4. Application of only the second rule tends to overmerge regions. The Brice and Fenema region growing method provides reasonably accurate segmentation of simple scenes with few objects and little texture (26, 27) but does not perform well on more complex scenes. Yakimovsky (28) has attempted to improve the region-growing concept by establishing merging constraints based on estimated Bayesian probability densities of feature measurements of each region.
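The two merge rules reduce to simple ratio tests for a given pair of adjacent regions, as in the following sketch; the perimeter, common-boundary, and weak-boundary measurements are assumed to have been gathered during the atomic region-growing stage.

def merge_regions(P1, P2, C, D, eps2=0.5, eps3=0.75):
    """Brice-Fenema heuristic merge tests for two adjacent regions.
    P1, P2: region perimeters; C: length of their common boundary;
    D: length of the weak portion of C (amplitude difference below eps1).
    Rule 1 (Eq. 17.3-1) is tested first, then rule 2 (Eq. 17.3-2)."""
    if D / min(P1, P2) > eps2:     # Eq. 17.3-1
        return True
    if D / C > eps3:               # Eq. 17.3-2
        return True
    return False

# Example: a small region sharing most of its weak boundary with a larger one.
print(merge_regions(P1=40, P2=200, C=30, D=25))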

17.3.2. Split and Merge

Split and merge image segmentation techniques (29) are based on a quad tree data representation whereby a square image segment is broken (split) into four quadrants if the original image segment is nonuniform in attribute. If four neighboring squares are found to be uniform, they are replaced (merge) by a single square composed of the four adjacent squares. In principle, the split and merge process could start at the full image level and initiate split operations. This approach tends to be computationally intensive. Conversely, beginning at the individual pixel level and making initial merges has the drawback that region uniformity measures are limited at the single pixel level. Initializing the split and merge process at an intermediate level enables the use of more powerful uniformity tests without excessive computation. The simplest uniformity measure is to compute the difference between the largest and smallest pixels of a segment. Fukada (30) has proposed the segment variance as a uniformity measure. Chen and Pavlidis (31) suggest more complex statistical measures of uniformity. The basic split and merge process tends to produce rather blocky segments because of the rule that square blocks are either split or merged. Horowitz and Pavlidis (32) have proposed a modification of the basic process whereby adjacent pairs of regions are merged if they are sufficiently uniform.

17.3.3. Watershed

Topographic and hydrology concepts have proved useful in the development of region segmentation methods (33–36). In this context, a monochrome image is considered to be an altitude surface in which high-amplitude pixels correspond to ridge points, and low-amplitude pixels correspond to valley points. If a drop of water were


Figure 17.3-2. Rainfall watershed.

to fall on any point of the altitude surface, it would move to a lower altitude until it reached a local altitude minimum. The accumulation of water in the vicinity of a local minimum is called a catchment basin. All points that drain into a common catchment basin are part of the same watershed. A valley is a region that is surrounded by a ridge. A ridge is the locus of maximum gradient of the altitude surface. There are two basic algorithmic approaches to the computation of the watershed of an image: rainfall and flooding. In the rainfall approach, local minima are found throughout the image. Each local minimum is given a unique tag. Adjacent local minima are combined with a unique tag. Next, a conceptual water drop is placed at each untagged pixel. The drop moves to its lower-amplitude neighbor until it reaches a tagged pixel, at which time it assumes the tag value. Figure 17.3-2 illustrates a section of a digital image encompassing a watershed in which the local minimum pixel is black and the dashed line indicates the path of a water drop to the local minimum. In the flooding approach, conceptual single pixel holes are pierced at each local minimum, and the amplitude surface is lowered into a large body of water. The water enters the holes and proceeds to fill each catchment basin. If a basin is about to overflow, a conceptual dam is built on its surrounding ridge line to a height equal to the highest-altitude ridge point. Figure 17.3-3 shows a profile of the filling process of a catchment basin (37). Figure 17.3-4 is an example of watershed segmentation provided by Moga and Gabbouj (38).
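A crude sketch of the rainfall approach follows, assuming NumPy. It tags local minima implicitly by letting each untagged pixel follow its lowest four-connected neighbor downhill until a tagged pixel (or a new minimum) is reached; plateau handling and the merging of adjacent minima are omitted, so it only illustrates the idea.

import numpy as np

def rainfall_watershed(F):
    """Crude rainfall watershed: every pixel follows its lowest 4-connected
    neighbor downhill until it reaches a tagged local minimum and takes that tag."""
    rows, cols = F.shape
    labels = np.zeros((rows, cols), dtype=int)        # 0 means "untagged"
    next_tag = 1

    def lowest_neighbor(j, k):
        best = (j, k)
        for dj, dk in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            jj, kk = j + dj, k + dk
            if 0 <= jj < rows and 0 <= kk < cols and F[jj, kk] < F[best]:
                best = (jj, kk)
        return best

    for j in range(rows):
        for k in range(cols):
            if labels[j, k]:
                continue
            path = []
            p = (j, k)
            while labels[p] == 0 and p not in path:
                path.append(p)
                p = lowest_neighbor(*p)
            if labels[p] == 0:          # the drop stopped at a new local minimum
                labels[p] = next_tag
                next_tag += 1
            for q in path:              # every pixel on the path assumes the tag
                labels[q] = labels[p]
    return labels

seg = rainfall_watershed(np.random.rand(40, 40))
print(seg.max(), "catchment basins found")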



FIGURE 17.3-3. Profile of catchment basin filling.

FIGURE 17.3-4. Watershed image segmentation of the peppers_mon image: (a) original; (b) segmentation. Courtesy of Alina N. Moga and M. Gabbouj, Tampere University of Technology, Finland.


FIGURE 17.4-1. Boundary detection image segmentation of the projectile image: (a) original; (b) edge map; (c) thinned edge map.

Simple watershed algorithms tend to produce results that are oversegmented (39). Najman and Schmitt (37) have applied morphological methods in their watershed algorithm to reduce oversegmentation. Wright and Acton (40) have performed watershed segmentation on a pyramid of different spatial resolutions to avoid oversegmentation.

17.4. BOUNDARY DETECTION

It is possible to segment an image into regions of common attribute by detecting the boundary of each region for which there is a significant change in attribute across the boundary. Boundary detection can be accomplished by means of edge detection as described in Chapter 15. Figure 17.4-1 illustrates the segmentation of a projectile from its background. In this example an 11 × 11 derivative of Gaussian edge detector


is used to generate the edge map of Figure 17.4-1b. Morphological thinning of this edge map results in Figure 17.4-1c. The resulting boundary appears visually to be correct when overlaid on the original image. If an image is noisy or if its region attributes differ by only a small amount between regions, a detected boundary may often be broken. Edge linking techniques can be employed to bridge short gaps in such a region boundary.

17.4.1. Curve-Fitting Edge Linking

In some instances, edge map points of a broken segment boundary can be linked together to form a closed contour by curve-fitting methods. If a priori information is available as to the expected shape of a region in an image (e.g., a rectangle or a circle), the fit may be made directly to that closed contour. For more complex-shaped regions, as illustrated in Figure 17.4-2, it is usually necessary to break up the supposed closed contour into chains with broken links. One such chain, shown in Figure 17.4-2 starting at point A and ending at point B, contains a single broken link. Classical curve-fitting methods (29) such as Bezier polynomial or spline fitting can be used to fit the broken chain. In their book, Duda and Hart (41) credit Forsen as being the developer of a simple piecewise linear curve-fitting procedure called the iterative endpoint fit. In the first stage of the algorithm, illustrated in Figure 17.4-3, data endpoints A and B are connected by a straight line. The point of greatest departure from the straight line (point C) is examined. If the separation of this point is too large, the point becomes an anchor point for two straight-line segments (A to C and C to B). The procedure then continues until the data points are well fitted by line segments. The principal advantage of the algorithm is its simplicity; its disadvantage is error caused by incorrect data points. Ramer (42) has used a technique similar to the iterated endpoint procedure to determine a polynomial approximation to an arbitrary-shaped closed curve. Pavlidis and Horowitz (43) have developed related algorithms for polygonal curve fitting. The curve-fitting approach is reasonably effective for simply structured objects. Difficulties occur when an image contains many overlapping objects and its corresponding edge map contains branch structures.
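A minimal recursive sketch of the iterative endpoint fit: connect the chain endpoints, locate the point of greatest departure, and split there if the departure exceeds a tolerance. The perpendicular-distance measure and the tolerance value are routine choices rather than details from the text; NumPy is assumed.

import numpy as np

def iterative_endpoint_fit(points, tol):
    """Piecewise-linear fit of an ordered chain of (x, y) points by the
    iterative endpoint method: returns the retained anchor points."""
    pts = np.asarray(points, dtype=float)
    a, b = pts[0], pts[-1]
    if len(pts) <= 2:
        return [tuple(a), tuple(b)]
    ab = b - a
    length = float(np.hypot(ab[0], ab[1]))
    if length == 0.0:                        # degenerate chain; keep endpoints
        return [tuple(a), tuple(b)]
    # Perpendicular distance of every point from the line through a and b.
    dist = np.abs(ab[0] * (pts[:, 1] - a[1]) - ab[1] * (pts[:, 0] - a[0])) / length
    i = int(dist.argmax())
    if dist[i] <= tol:
        return [tuple(a), tuple(b)]
    # Split at the point of greatest departure and fit each half recursively.
    left = iterative_endpoint_fit(pts[:i + 1], tol)
    right = iterative_endpoint_fit(pts[i:], tol)
    return left[:-1] + right                 # avoid duplicating the split point

chain = [(0, 0), (1, 0.1), (2, 0.2), (3, 2.0), (4, 4.1), (5, 6.0)]
print(iterative_endpoint_fit(chain, tol=0.3))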

FIGURE 17.4-2. Region boundary with missing links indicated by dashed lines.


FIGURE 17.4-3. Iterative endpoint curve fitting.

17.4.2. Heuristic Edge-Linking Methods

The edge segmentation technique developed by Roberts (44) is typical of the philosophy of many heuristic edge-linking methods. In Roberts' method, edge gradients are examined in 4 × 4 pixel blocks. The pixel whose gradient magnitude is largest is declared a tentative edge point if its magnitude is greater than a threshold value. Then north-, east-, south-, and west-oriented lines of length 5 are fitted to the gradient data about the tentative edge point. If the ratio of the best fit to the worst fit, measured in terms of the fit correlation, is greater than a second threshold, the tentative edge point is declared valid, and it is assigned the direction of the best fit. Next, straight lines are fitted between pairs of edge points if they are in adjacent 4 × 4 blocks and if the line direction is within ±23 degrees of the edge direction of either edge point. Those points failing to meet the linking criteria are discarded. A typical boundary at this stage, shown in Figure 17.4-4a, will contain gaps and multiply connected edge points. Small triangles are eliminated by deleting the longest side; small


FIGURE 17.4-4. Roberts edge linking.

rectangles are replaced by their longest diagonal, as indicated in Figure 17.4-4b. Short spur lines are also deleted. At this stage, short gaps are bridged by straight-line connection. This form of edge linking can be used with a wide variety of edge detectors. Nevatia (45) has used a similar method for edge linking of edges produced by a Hueckel edge detector. Robinson (46) has suggested a simple but effective edge-linking algorithm in which edge points from an edge detector providing eight edge compass directions are examined in 3 × 3 blocks as indicated in Figure 17.4-5. The edge point in the center of the block is declared a valid edge if it possesses directional neighbors in the proper orientation. Extensions to larger windows should be beneficial, but the number of potential valid edge connections will grow rapidly with window size.

17.4.3. Hough Transform Edge Linking

The Hough transform (47–49) can be used as a means of edge linking. The Hough transform involves the transformation of a line in Cartesian coordinate space to a


FIGURE 17.4-5. Edge linking rules.

point in polar coordinate space. With reference to Figure 17.4-6a, a straight line can be described parametrically as

ρ = x cos θ + y sin θ      (17.4-1)

where ρ is the normal distance of the line from the origin and θ is the angle of the normal with respect to the x axis. The Hough transform of the line is simply a point at coordinate (ρ, θ) in the polar domain as shown in Figure 17.4-6b. A family of lines passing through a common point, as shown in Figure 17.4-6c, maps into the connected set of ρ–θ points of Figure 17.4-6d. Now consider the three collinear points of Figure 17.4-6e. The Hough transform of the family of curves passing through the three points results in the set of three parametric curves in the ρ–θ space of Figure 17.4-6f. These three curves cross at a single point (ρ0, θ0) corresponding to the dashed line passing through the collinear points.

Duda and Hart Version. Duda and Hart (48) have adapted the Hough transform technique for line and curve detection in discrete binary images. Each nonzero data point in the image domain is transformed to a curve in the ρ–θ domain, which is quantized into cells. If an element of a curve falls in a cell, that particular cell is


FIGURE 17.4-6. Hough transform.

incremented by one count. After all data points are transformed, the ρ – θ cells are examined. Large cell counts correspond to colinear data points that may be fitted by a straight line with the appropriate ρ – θ parameters. Small counts in a cell generally indicate isolated data points that can be deleted. Figure 17.4-7a presents the geometry utilized for the development of an algorithm for the Duda and Hart version of the Hough transform. Following the notation adopted in Section 13.1, the origin of the image is established at the lower left corner of the image. The discrete Cartesian coordinates of the image point ( j, k) are


FIGURE 17.4-7. Geometry for Hough transform computation.

xk = k – 1/2      (17.4-2a)

yj = J + 1/2 – j      (17.4-2b)

Consider a line segment in a binary image F(j, k), which contains a point at coordinate (j, k) that is at an angle φ with respect to the horizontal reference axis. When the line segment is projected, it intersects a normal line of length ρ emanating from the origin at an angle θ with respect to the horizontal axis. The Hough array H(m, n) consists of cells of the quantized variables ρm and θn. It can be shown that

–ρmax/2 ≤ ρm ≤ ρmax      (17.4-3a)

–π/2 ≤ θn ≤ π      (17.4-3b)

where

ρmax = [ (xK)² + (y1)² ]^(1/2)      (17.4-3c)

For ease of interpretation, it is convenient to adopt the symmetrical limits of Figure 17.4-7b and to set M and N as odd integers so that the center cell of the Hough array represents ρ m = 0 and θ n = 0 . The Duda and Hart (D & H) Hough transform algorithm follows.


1. Initialize the Hough array to zero.

2. For each (j, k) for which F(j, k) = 1, compute

ρ(n) = xk cos θn + yj sin θn      (17.4-4)

where

θn = π – 2π(N – n)/(N – 1)      (17.4-5)

is incremented over the range 1 ≤ n ≤ N under the restriction that

φ – π/2 ≤ θn ≤ φ + π/2      (17.4-6)

where

φ = arctan{ yj / xk }      (17.4-7)

3. Determine the m index of the quantized rho value.

m = M – [ [ρmax – ρ(n)](M – 1) / (2ρmax) ]N      (17.4-8)

where [ · ]N denotes the nearest integer value of its argument.

4. Increment the Hough array.

H(m, n) = H(m, n) + 1      (17.4-9)
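The accumulation loop can be sketched as follows, assuming NumPy. For brevity the sketch votes over the full quantized theta range and omits the per-pixel angular restriction of Eq. 17.4-6, so it is a simplified version of the D & H algorithm rather than a literal transcription.

import numpy as np

def duda_hart_hough(F, M=127, N=127):
    """Simplified Duda and Hart Hough accumulation for a binary image F.
    Every nonzero pixel votes along rho = x cos(theta) + y sin(theta)
    (Eq. 17.4-4); the angular restriction of Eq. 17.4-6 is omitted."""
    J, K = F.shape
    H = np.zeros((M, N), dtype=int)
    n = np.arange(1, N + 1)
    theta = np.pi - 2.0 * np.pi * (N - n) / (N - 1)           # Eq. 17.4-5
    rho_max = np.hypot(K - 0.5, J - 0.5)                      # Eq. 17.4-3c
    js, ks = np.nonzero(F)
    for j, k in zip(js + 1, ks + 1):                          # 1-based indices
        x, y = k - 0.5, J + 0.5 - j                           # Eq. 17.4-2
        rho = x * np.cos(theta) + y * np.sin(theta)           # Eq. 17.4-4
        m = M - np.rint((rho_max - rho) * (M - 1) / (2.0 * rho_max)).astype(int)  # Eq. 17.4-8
        valid = (m >= 1) & (m <= M)
        H[m[valid] - 1, n[valid] - 1] += 1                    # Eq. 17.4-9
    return H

img = np.zeros((64, 64), dtype=int)
img[np.arange(64), np.arange(64)] = 1          # a diagonal test line
acc = duda_hart_hough(img)
print(np.unravel_index(acc.argmax(), acc.shape), acc.max())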

It is important to observe the restriction of Eq. 17.4-6; not all ρ – θ combinations are legal for a given pixel coordinate (j, k). Computation of the Hough array requires on the order of N evaluations of Eqs. 17.4-4 to 17.4-9 for each nonzero pixel of F ( j, k ). The size of the Hough array is not strictly dependent on the size of the image array. However, as the image size increases, the Hough array size should also be increased accordingly to maintain computational accuracy of rho and theta. In most applications, the Hough array size should be set at least one quarter the image size to obtain reasonably accurate results. Figure 17.4-8 presents several examples of the D & H version of the Hough transform. In these examples, M = N = 127 and J = K = 512 . The Hough arrays


FIGURE 17.4-8. Duda and Hart version of the Hough transform: (a) three dots: upper left, center, lower right; (b) Hough transform of dots; (c) straight line; (d) Hough transform of line; (e) straight dashed line; (f) Hough transform of dashed line.


have been flipped bottom to top for display purposes so that the positive rho and positive theta quadrant is in the normal Cartesian first quadrant (i.e., the upper right quadrant). O 'Gorman and Clowes Version. O' Gorman and Clowes (50) have proposed a modification of the Hough transformation for linking-edge points in an image. In their procedure, the angle θ for entry in ρ – θ space is obtained from the gradient direction of an edge. The corresponding ρ value is then computed from Eq. 17.4-4 for an edge coordinate (j, k). However, instead of incrementing the ( ρ, θ ) cell by unity, the cell is incremented by the edge gradient magnitude in order to give greater importance to strong edges than weak edges. The following is an algorithm for computation of the O' Gorman and Clowes (O & C) version of the Hough transform. Figure 17.4-7a defines the edge angles referenced in the algorithm. 1. Initialize the Hough array to zero. 2. Given a gray scale image F ( j, k ) , generate a first-order derivative edge gradient array G ( j, k ) and an edge gradient angle array γ ( j, k ) using one of the edge detectors described in Section 15.2.1. 3. For each (j, k) for which G ( j, k ) > T , where T is the edge detector threshold value, compute ρ ( j, k ) = x k cos { θ ( j, k ) } + y j sin { θ ( j, k ) }

(17.4-10)

where
ψ + π - 2 θ =  ψ + π - 2

for ψ < φ for ψ ≥ φ

(17.4-11a) (17.4-11b)

with
 yj  φ = arc tan  ----   xk 

(17.4-12)

and
 3π  γ + ----2   ψ =  γ+π -2    γ–π -2 

for – π ≤ γ < – π -2

(17.4-13a) (17.4-13b) (17.4-13c)

for – π ≤ γ < π --2 2

for π ≤ γ < π -2

576

IMAGE SEGMENTATION

4. Determine the m and n indices of the quantized rho and theta values.

[ ρ max – ρ ( j, k ) ] ( M – 1 ) m = M – ---------------------------------------------------------2ρ max [ π – θ]( N – 1 ) n = N – ----------------------------------2π

(17.4-14a)
N

N

(17.4-14b)

5. Increment the Hough array.
H ( m, n ) = H ( m, n ) + G ( j, k )

(17.4-15)

Figure 17.4-9 gives an example of the O'Gorman and Clowes version of the Hough transform. The original image is 512 × 512 pixels, and the Hough array is of size 511 × 511 cells. The Hough array has been flipped bottom to top for display.

Hough Transform Edge Linking. The Hough transform can be used for edge linking in the following manner. Each (ρ, θ) cell whose magnitude is sufficiently large defines a straight line that passes through the original image. If this line is overlaid with the image edge map, it should cover the missing links of straight-line edge segments, and therefore, it can be used as a mask to fill in the missing links using some heuristic method, such as those described in the preceding section. Another approach, described below, is to use the line mask as a spatial control function for morphological image processing. Figure 17.4-10 presents an example of Hough transform morphological edge linking. Figure 17.4-10a is an original image of a noisy octagon, and Figure 17.4-10b shows an edge map of the original image obtained by Sobel edge detection followed by morphological thinning, as defined in Section 14.3. Although this form of edge detection performs reasonably well, there are gaps in the contour of the object caused by the image noise. Figure 17.4-10c shows the D & H version of the Hough transform. The eight largest cells in the Hough array have been used to generate the eight Hough lines shown as gray lines overlaid on the original image in Figure 17.4-10d. These Hough lines have been widened to a width of 3 pixels and used as a region-of-interest (ROI) mask that controls the edge linking morphological processing such that the processing is performed only on edge map pixels within the ROI. Edge map pixels outside the ROI are left unchanged. The morphological processing consists of three iterations of 3 × 3 pixel dilation, as shown in Figure 17.4-10e, followed by five iterations of 3 × 3 pixel thinning. The linked edge map is presented in Figure 17.4-10f.
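The ROI-controlled dilation step might be sketched as follows with SciPy; the thinning pass is omitted, and the function and mask names are illustrative only.

```python
import numpy as np
from scipy import ndimage

def link_edges_in_roi(edge_map, roi_mask, dilations=3):
    """Hypothetical sketch of ROI-controlled morphological edge linking.
    Only edge pixels inside the ROI (e.g., widened Hough lines) are dilated;
    pixels outside the ROI are left unchanged, as described in the text."""
    linked = edge_map.copy()
    grown = edge_map & roi_mask
    for _ in range(dilations):
        grown = ndimage.binary_dilation(grown, structure=np.ones((3, 3)))
    grown &= roi_mask          # restrict growth to the ROI
    linked |= grown
    # A thinning pass (not a standard SciPy operation, so not shown) would
    # then reduce the dilated band back to a one-pixel-wide contour.
    return linked
```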


FIGURE 17.4-9. O'Gorman and Clowes version of the Hough transform of the building image: (a) original; (b) Sobel edge gradient; (c) Hough array.

17.4.4. Snakes Boundary Detection Snakes, developed by Kass et al. (51), is a method of molding a closed contour to the boundary of an object in an image. The snake model is a controlled continuity closed contour that deforms under the influence of internal forces, image forces, and external constraint forces. The internal contour forces provide a piecewise smoothness constraint. The image forces manipulate the contour toward image edges. The external forces are the result of the initial positioning of the contour by some a priori means.


FIGURE 17.4-10. Hough transform morphological edge linking: (a) original; (b) Sobel edge map after thinning; (c) D & H Hough array; (d) Hough line overlays; (e) edge map after ROI dilation; (f) linked edge map.


Let v(s) = [x(s), y(s)] denote a parametric curve in the continuous domain, where s is the arc length of the curve. The continuous domain snake energy is defined as (51)

$$E_S = \int_0^1 E_N\{v(s)\}\,ds + \int_0^1 E_I\{v(s)\}\,ds + \int_0^1 E_T\{v(s)\}\,ds \qquad (17.4\text{-}16)$$

where E_N denotes the internal energy of the contour due to bending or discontinuities, E_I represents the image energy, and E_T is the constraint energy. In the discrete domain, the snake energy is

$$E_S = \sum_{n=1}^{N} E_N\{v_n\} + \sum_{n=1}^{N} E_I\{v_n\} + \sum_{n=1}^{N} E_T\{v_n\} \qquad (17.4\text{-}17)$$

where v_n = [x_n, y_n] for n = 0, 1, ..., N represents the discrete contour. The location of a snake corresponds to the local minima of the energy functional of Eq. 17.4-17. Kass et al. (51) have derived a set of N differential equations whose solution minimizes the snake energy. Samadani (52) has investigated the stability of these snake model solutions. The greedy algorithm (53,54) expresses the internal snake energy in terms of its continuity energy E_C and curvature energy E_K as

$$E_N = \alpha(n) E_C\{v_n\} + \beta(n) E_K\{v_n\} \qquad (17.4\text{-}18)$$

where α(n) and β(n) control the elasticity and rigidity of the snake model. The continuity energy is defined as

$$E_C = \frac{d - \|v_n - v_{n-1}\|}{\mathrm{MAX}_j \{ d - \|v_n(j) - v_{n-1}\| \}} \qquad (17.4\text{-}19)$$

and the curvature energy is defined as

$$E_K = \frac{\|v_{n-1} - 2v_n + v_{n+1}\|^2}{\mathrm{MAX}_j \{ \|v_{n-1} - 2v_n(j) + v_{n+1}\|^2 \}} \qquad (17.4\text{-}20)$$

where d is the average curve length and v_n(j) represents the eight neighbors of a point v_n for j = 1, 2, ..., 8. The conventional snake model algorithms suffer from the inability to mold a contour to severe object concavities. Another problem is the generation of false contours due to the creation of unwanted contour loops. Ji and Yan (55) have developed a loop-free snake model segmentation algorithm that overcomes these problems. Figure 17.4-11 illustrates the performance of their algorithm.

FIGURE 17.4-11. Snakes image segmentation of the pliers image: (a) original with initial contour; (b) segmentation with greedy algorithm; (c) segmentation with loop-free algorithm. Courtesy of Lilian Ji and Hong Yan, University of Sydney, Australia.

Figure 17.4-11a shows the initial contour around the pliers object, Figure 17.4-11b is the segmentation using the greedy algorithm, and Figure 17.4-11c is the result with the loop-free algorithm.
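A compact sketch of one greedy update using the internal energy terms of Eqs. 17.4-18 to 17.4-20 is given below. The image and constraint energies are omitted, and the eight-neighbor search and the normalization over the search window follow common practice rather than any specific published implementation.

```python
import numpy as np

NEIGHBORS = np.array([(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)])   # eight-neighbor search window

def greedy_step(contour, n, alpha=1.0, beta=1.0):
    """One greedy update of contour point n using only the internal energy
    of Eqs. 17.4-18 to 17.4-20; contour is a float array of shape (N, 2)."""
    v_prev = contour[n - 1]
    v_next = contour[(n + 1) % len(contour)]
    d = np.mean(np.linalg.norm(np.diff(contour, axis=0), axis=1))  # average segment length
    candidates = contour[n] + NEIGHBORS
    e_cont = np.abs(d - np.linalg.norm(candidates - v_prev, axis=1))        # Eq. 17.4-19
    e_curv = np.linalg.norm(v_prev - 2 * candidates + v_next, axis=1) ** 2  # Eq. 17.4-20
    e_cont /= max(e_cont.max(), 1e-12)      # normalize over the search window
    e_curv /= max(e_curv.max(), 1e-12)
    contour[n] = candidates[np.argmin(alpha * e_cont + beta * e_curv)]      # Eq. 17.4-18
    return contour
```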

17.5. TEXTURE SEGMENTATION It has long been recognized that texture should be a valuable feature for image segmentation. Putting this proposition to practice, however, has been hindered by the lack of reliable and computationally efficient means of texture measurement. One approach to texture segmentation, fostered by Rosenfeld et al. (56–58), is to compute some texture coarseness measure at all image pixels and then detect changes in the coarseness of the texture measure. In effect, the original image is preprocessed to convert texture to an amplitude scale for subsequent amplitude segmentation. A major problem with this approach is that texture is measured over a window area, and therefore, texture measurements in the vicinity of the boundary between texture regions represent some average texture computation. As a result, it becomes difficult to locate a texture boundary accurately.


Another approach to texture segmentation is to detect the transition between regions of differing texture. The basic concept of texture edge detection is identical to that of luminance edge detection; the dissimilarity between textured regions is enhanced over all pixels in an image, and then the enhanced array is thresholded to locate texture discontinuities. Thompson (59) has suggested a means of texture enhancement analogous to the Roberts gradient presented in Section 15.2. Texture measures are computed in each of four adjacent W × W pixel subregions scanned over the image, and the sum of the cross-difference magnitudes is formed and thresholded to locate significant texture changes. This method can be generalized to include computation in adjacent windows arranged in 3 × 3 groups. Then, the resulting texture measures of each window can be combined in some linear or nonlinear manner analogous to the 3 × 3 luminance edge detection methods of Section 15.2. Zucker et al. (60) have proposed a histogram thresholding method of texture segmentation based on a texture analysis technique developed by Tsuji and Tomita (61). In this method a texture measure is computed at each pixel by forming the spot gradient followed by a dominant neighbor suppression algorithm. Then a histogram is formed over the resultant modified gradient data. If the histogram is multimodal, thresholding of the gradient at the minimum between histogram modes should provide a segmentation of textured regions. The process is repeated on the separate parts until segmentation is complete.
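The cross-difference scheme described above might be sketched as follows. The choice of local variance as the texture measure and the window stepping are illustrative assumptions, not Thompson's original parameters.

```python
import numpy as np

def texture_edge_map(image, W=16, measure=np.var):
    """Sketch of a Roberts-like texture gradient: a texture measure (here,
    local variance, an illustrative choice) is computed in four adjacent
    WxW windows and the cross-difference magnitudes are summed."""
    J, K = image.shape
    out = np.zeros((J, K))
    for j in range(0, J - 2 * W, W):
        for k in range(0, K - 2 * W, W):
            t00 = measure(image[j:j+W,     k:k+W])
            t01 = measure(image[j:j+W,     k+W:k+2*W])
            t10 = measure(image[j+W:j+2*W, k:k+W])
            t11 = measure(image[j+W:j+2*W, k+W:k+2*W])
            out[j + W, k + W] = abs(t00 - t11) + abs(t01 - t10)   # cross differences
    return out
```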

17.6. SEGMENT LABELING The result of any successful image segmentation is the labeling of each pixel that lies within a specific distinct segment. One means of labeling is to append to each pixel of an image the label number or index of its segment. A more succinct method is to specify the closed contour of each segment. If necessary, contour filling techniques (29) can be used to label each pixel within a contour. The following describes two common techniques of contour following. The contour following approach to image segment representation is commonly called bug following. In the binary image example of Figure 17.6-1, a conceptual bug begins marching from the white background to the black pixel region indicated by the closed contour. When the bug crosses into a black pixel, it makes a left turn

FIGURE 17.6-1. Contour following.


FIGURE 17.6-2. Comparison of bug follower algorithms.

and proceeds to the next pixel. If that pixel is black, the bug again turns left, and if the pixel is white, the bug turns right. The procedure continues until the bug returns to the starting point. This simple bug follower may miss spur pixels on a boundary. Figure 17.6-2a shows the boundary trace for such an example. This problem can be overcome by providing the bug with some memory and intelligence that permit the bug to remember its past steps and backtrack if its present course is erroneous. Figure 17.6-2b illustrates the boundary trace for a backtracking bug follower. In this algorithm, if the bug makes a white-to-black pixel transition, it returns to its previous starting point and makes a right turn. The bug makes a right turn whenever it makes a white-to-white transition. Because of the backtracking, this bug follower takes about twice as many steps as does its simpler counterpart. While the bug is following a contour, it can create a list of the pixel coordinates of each boundary pixel. Alternatively, the coordinates of some reference pixel on the boundary can be recorded, and the boundary can be described by a relative movement code. One such simple code is the crack code (62), which is generated for each side p of a pixel on the boundary such that C(p) = 0, 1, 2, 3 for movement to the right, down, left, or up, respectively, as shown in Figure 17.6-3. The crack code for the object of Figure 17.6-2 is as follows:

FIGURE 17.6-3. Crack code definition.


p:     1  2  3  4  5  6  7  8  9  10  11  12
C(p):  0  1  0  3  0  1  2  1  2   2   3   3

Upon completion of the boundary trace, the value of the index p is the perimeter of the segment boundary. Section 18.2 describes a method for computing the enclosed area of the segment boundary during the contour following. Freeman (63,64) has devised a method of boundary coding, called chain coding, in which the path from the centers of connected boundary pixels is represented by an eight-element code. Figure 17.6-4 defines the chain code and provides an example of its use. Freeman has developed formulas for perimeter and area calculation based on the chain code of a closed contour.

FIGURE 17.6-4. Chain coding contour coding.
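A small sketch of chain coding an ordered boundary is shown below; the direction-code assignment used here is a common convention and should be checked against Figure 17.6-4.

```python
# Chain code directions for 8-connected boundary pixels, expressed as
# (row, col) steps: code 0 = right, then counterclockwise in 45-degree steps
# (a common convention; Figure 17.6-4 defines the book's exact assignment).
CHAIN = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
         (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """Chain-code a closed boundary given as an ordered list of (row, col)
    pixel centers; consecutive points must be 8-neighbors."""
    codes = []
    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:] + boundary[:1]):
        codes.append(CHAIN[(r1 - r0, c1 - c0)])
    return codes
```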


REFERENCES
1. R. M. Haralick and L. G. Shapiro, “Image Segmentation Techniques,” Computer Vision, Graphics, and lmage Processing, 29, 1, January 1985, 100–132. 2. S. W. Zucker, “Region Growing: Childhood and Adolescence,” Computer Graphics and Image Processing, 5, 3, September 1976, 382–389. 3. E. M. Riseman and M. A. Arbib, “Computational Techniques in the Visual Segmentation of Static Scenes,” Computer Graphics and Image Processing, 6, 3, June 1977, 221–276. 4. T. Kanade, “Region Segmentation: Signal vs. Semantics,” Computer Graphics and Image Processing, 13, 4, August 1980, 279–297. 5. K. S. Fu and J. K. Mui, “A Survey on Image Segmentation,” Pattern Recognition, 13, 1981, 3–16. 6. N. R. Pal and S. K. Pal, “A Review on Image Segmentation Techniques,” Pattern Recognition, 26, 9, 1993, 1277–1294. 7. J. S. Weska, “A Survey of Threshold Selection Techniques,” Computer Graphics and Image Processing, 7, 2, April 1978, 259–265. 8. B. Sankur, A. T. Abak, and U. Baris, “Assessment of Thresholding Algorithms for Document Processing,” Proc. IEEE International Conference on Image Processing, Kobe, Japan, October 1999, 1, 580–584. 9. W. Doyle, “Operations Useful for Similarity-Invariant Pattern Recognition,” J. Association for Computing Machinery, 9, 2, April 1962, 259–267. 10 J. M. S. Prewitt and M. L. Mendelsohn, “The Analysis of Cell Images,” Ann. New York Academy of Science, 128, 1966, 1036–1053. 11. N. Papamarkos and B. Gatos, “A New Approach for Multilevel Threshold Selection,” CVGIP: Graphical Models and Image Processing, 56, 5, September 1994, 357–370. 12. J. S. Weska, R. N. Nagel, and A. Rosenfeld, “A Threshold Selection Technique,” IEEE Trans. Computers, C-23, 12, December 1974, 1322–1326. 13. M. R. Bartz, “The IBM 1975 Optical Page Reader, II: Video Thresholding System,” IBM J. Research and Development, 12, September 1968, 354–363. 14. C. K. Chow and T. Kaneko, “Boundary Detection of Radiographic Images by a Threshold Method,” in Frontiers of Pattern Recognition, S. Watanabe, Ed., Academic Press, New York, 1972. 15. S. D. Yankowitz and A. M. Bruckstein, “A New Method for Image Segmentation,” Computer Vision, Graphics, and Image Processing, 46, 1, April 1989, 82–95. 16. F. Tomita, M. Yachida, and S. Tsuji, “Detection of Homogeneous Regions by Structural Analysis,” Proc. International Joint Conference on Artificial Intelligence, Stanford, CA, August 1973, 564–571. 17. R. B. Ohlander, “Analysis of Natural Scenes,” Ph.D. dissertation, Carnegie-Mellon University, Department of Computer Science, Pittsburgh, PA, April 1975. 18. R. B. Ohlander, K. Price, and D. R. Ready, “Picture Segmentation Using a Recursive Region Splitting Method,” Computer Graphics and Image Processing, 8, 3, December 1978, 313–333. 19. Y. Ohta, T. Kanade, and T. Saki, “Color Information for Region Segmentation,” Computer Graphics and Image Processing, 13, 3, July 1980, 222–241.


20. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed., Wiley-Interscience, New York, 2001. 21. H. C. Becker et al., “Digital Computer Determination of a Medical Diagnostic Index Directly from Chest X-ray Images,” IEEE Trans. Biomedical Engineering., BME-11, 3, July 1964, 67–72. 22. R. P. Kruger et al., “Radiographic Diagnosis via Feature Extraction and Classification of Cardiac Size and Shape Descriptors,” IEEE Trans. Biomedical Engineering, BME-19, 3, May 1972, 174–186. 23. R. M. Haralick and G. L. Kelly, “Pattern Recognition with Measurement Space and Spatial Clustering for Multiple Images,” Proc. IEEE, 57, 4, April 1969, 654–665. 24. G. B. Coleman and H. C. Andrews, “Image Segmentation by Clustering,” Proc. IEEE, 67, 5, May 1979, 773–785. 25. J. L. Muerle and D. C. Allen, “Experimental Evaluation of Techniques for Automatic Segmentation of Objects in a Complex Scene,” in Pictorial Pattern Recognition, G. C. Cheng et al., Eds., Thompson, Washington, DC, 1968, 3–13. 26. C. R. Brice and C. L. Fenema, “Scene Analysis Using Regions,” Artificial Intelligence, 1, 1970, 205–226. 27. H. G. Barrow and R. J. Popplestone, “Relational Descriptions in Picture Processing,” in Machine Intelligence, Vol. 6, B. Meltzer and D. Michie, Eds., University Press, Edinburgh, 1971, 377–396. 28. Y. Yakimovsky, “Scene Analysis Using a Semantic Base for Region Growing,” Report AIM-209, Stanford University, Stanford, Calif., 1973. 29. T. Pavlidis, Algorithms for Graphics and Image Processing, Computer Science Press, Rockville, MD, 1982. 30. Y. Fukada, “Spatial Clustering Procedures for Region Analysis,” Pattern Recognition, 12, 1980, 395–403. 31. P. C. Chen and T. Pavlidis, “Image Segmentation as an Estimation Problem,” Computer Graphics and Image Processing, 12, 2, February 1980, 153–172. 32. S. L. Horowitz and T. Pavlidis, “Picture Segmentation by a Tree Transversal Algorithm,” J. Association for Computing Machinery, 23, 1976, 368–388. 33. R. M. Haralick, “Ridges and Valleys on Digital Images,” Computer Vision, Graphics and Image Processing, 22, 10, April 1983, 28–38. 34. S. Beucher and C. Lantuejoul, “Use of Watersheds in Contour Detection,” Proc. International Workshop on Image Processing, Real Time Edge and Motion Detection/Estimation, Rennes, France, September 1979. 35. S. Beucher and F. Meyer, “The Morphological Approach to Segmentation: The Watershed Transformation,” in Mathematical Morphology in Image Processing, E. R. Dougherty, ed., Marcel Dekker, New York, 1993. 36. L. Vincent and P. Soille, “Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-13, 6, June 1991, 583–598. 37. L. Najman and M. Schmitt, “Geodesic Saliency of Watershed Contours and Hierarchical Segmentation,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-18, 12, December 1996.


38. A. N. Morga and M. Gabbouj, “Parallel Image Component Labeling with Watershed Transformation,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-19, 5, May 1997, 441–440. 39. A. M. Lopez et al., “Evaluation of Methods for Ridge and Valley Detection,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-21, 4, April 1999, 327–335. 40. A. S. Wright and S. T. Acton, “Watershed Pyramids for Edge Detection, Proc. 1997 International Conference on Image Processing, II, Santa Bartara, CA, 1997, 578–581. 41. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley-Interscience, New York, 1973. 42. U. Ramer, “An Iterative Procedure for the Polygonal Approximation of Plane Curves,” Computer Graphics and Image Processing, 1, 3, November 1972, 244–256. 43. T. Pavlidis and S. L. Horowitz, “Segmentation of Plane Curves,” IEEE Trans. Computers, C-23, 8, August 1974, 860–870. 44. L. G. Roberts, “Machine Perception of Three Dimensional Solids,” in Optical and Electro-Optical Information Processing, J. T. Tippett et al., Eds., MIT Press, Cambridge, MA, 1965. 45. R. Nevatia, “Locating Object Boundaries in Textured Environments,” IEEE Trans. Computers, C-25, 11, November 1976, 1170–1175. 46. G. S. Robinson, “Detection and Coding of Edges Using Directional Masks,” Proc. SPIE Conference on Advances in Image Transmission Techniques, San Diego, CA, August 1976. 47. P. V. C. Hough, “Method and Means for Recognizing Complex patterns,” U.S. patent 3,069,654, December 18, 1962. 48. R. O. Duda and P. E. Hart, “Use of the Hough Transformation to Detect Lines and Curves in Pictures,” Communication of the ACM, 15, 1, January 1972, 11–15. 49. J. Illingworth and J. Kittler, “A Survey of the Hough Transform,” Computer Vision, Graphics, and Image Processing, 44, 1, October 1988, 87–116. 50. F. O'Gorman and M. B. Clowes, “Finding Picture Edges Through Colinearity of Feature Points,” IEEE Trans. Computers, C-25, 4, April 1976, 449–456. 51. M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active Contour Models,” International J. Computer Vision, 1, 4, 1987, 321–331. 52. R. Samadani, “Adaptive Snakes: Control of Damping and Material Parameters,” Proc. SPIE Conference, on Geometric Methods in Computer Vision, 1570, San Diego, CA, 202–213. 53. D. J. Williams and M. Shah, “A Fast Algorithm for Active Contours and Curve Estimation,” CVGIP: Image Understanding, 55, 1, 1992, 14–26. 54. K.-H. Lam and H. Yan, “Fast Greedy Algorithm for Active Contours,” Electronic Letters, 30, 1, January 1994, 21–23. 55. L. Ji and H. Yan, “Loop-Free Snakes for Image Segmentation,” Proc. 1999 International Conference on Image Processing, 3, Kobe, Japan, 1999, 193–197. 56. A. Rosenfeld and M. Thurston, “Edge and Curve Detection for Visual Scene Analysis,” IEEE Trans. Computers, C-20, 5, May 1971, 562–569. 57. A. Rosenfeld, M. Thurston, and Y. H. Lee, “Edge and Curve Detection: Further Experiments,” IEEE Trans. Computers, C-21, 7, July 1972, 677–715.


58. K. C. Hayes, Jr., A. N. Shah, and A. Rosenfeld, “Texture Coarseness: Further Experiments,” IEEE Trans. Systems, Man and Cybernetics (Correspondence), SMC-4, 5, September 1974, 467–472. 59. W. B. Thompson, “Textural Boundary Analysis,” Report USCIPI 620, University of Southern California, Image Processing Institute, Los Angeles, September 1975, 124– 134. 60. S. W. Zucker, A. Rosenfeld, and L. S. Davis, “Picture Segmentation by Texture Discrimination,” IEEE Trans. Computers, C-24, 12, December 1975, 1228–1233. 61. S. Tsuji and F. Tomita, “A Structural Analyzer for a Class of Textures,” Computer Graphics and Image Processing, 2, 3/4, December 1973, 216–231. 62. Z. Kulpa, “Area and Perimeter Measurements of Blobs in Discrete Binary Pictures,” Computer Graphics and Image Processing, 6, 4, December 1977, 434–451. 63. H. Freeman, “On the Encoding of Arbitrary Geometric Configurations,” IRE Trans. Electronic Computers, EC-10, 2, June 1961, 260–268. 64. H. Freeman, “Boundary Encoding and Processing,” in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970, 241– 266.


18
SHAPE ANALYSIS

Several qualitative and quantitative techniques have been developed for characterizing the shape of objects within an image. These techniques are useful for classifying objects in a pattern recognition system and for symbolically describing objects in an image understanding system. Some of the techniques apply only to binary-valued images; others can be extended to gray level images.

18.1. TOPOLOGICAL ATTRIBUTES Topological shape attributes are properties of a shape that are invariant under rubber-sheet transformation (1–3). Such a transformation or mapping can be visualized as the stretching of a rubber sheet containing the image of an object of a given shape to produce some spatially distorted object. Mappings that require cutting of the rubber sheet or connection of one part to another are not permissible. Metric distance is clearly not a topological attribute because distance can be altered by rubber-sheet stretching. Also, the concepts of perpendicularity and parallelism between lines are not topological properties. Connectivity is a topological attribute. Figure 18.1-1a is a binary-valued image containing two connected object components. Figure 18.1-1b is a spatially stretched version of the same image. Clearly, there are no stretching operations that can either increase or decrease the connectivity of the objects in the stretched image. Connected components of an object may contain holes, as illustrated in Figure 18.1-1c. The number of holes is obviously unchanged by a topological mapping.


FIGURE 18.1-1. Topological attributes.

There is a fundamental relationship between the number of connected object components C and the number of object holes H in an image called the Euler number, as defined by
E = C–H

(18.1-1)

The Euler number is also a topological property because C and H are topological attributes. Irregularly shaped objects can be described by their topological constituents. Consider the tubular-shaped object letter R of Figure 18.1-2a, and imagine a rubber band stretched about the object. The region enclosed by the rubber band is called the convex hull of the object. The set of points within the convex hull, which are not in the object, form the convex deficiency of the object. There are two types of convex deficiencies: regions totally enclosed by the object, called lakes; and regions lying between the convex hull perimeter and the object, called bays. In some applications it is simpler to describe an object indirectly in terms of its convex hull and convex deficiency. For objects represented over rectilinear grids, the definition of the convex hull must be modified slightly to remain meaningful. Objects such as discretized circles and triangles clearly should be judged as being convex even though their

FIGURE 18.1-2. Definitions of convex shape descriptors.


boundaries are jagged. This apparent difficulty can be handled by considering a rubber band to be stretched about the discretized object. A pixel lying totally within the rubber band, but not in the object, is a member of the convex deficiency. Sklansky et al. (4,5) have developed practical algorithms for computing the convex attributes of discretized objects.

18.2. DISTANCE, PERIMETER, AND AREA MEASUREMENTS

Distance is a real-valued function $d\{(j_1, k_1), (j_2, k_2)\}$ of two image points $(j_1, k_1)$ and $(j_2, k_2)$ satisfying the following properties (6):

$$d\{(j_1, k_1), (j_2, k_2)\} \ge 0 \qquad (18.2\text{-}1a)$$

$$d\{(j_1, k_1), (j_2, k_2)\} = d\{(j_2, k_2), (j_1, k_1)\} \qquad (18.2\text{-}1b)$$

$$d\{(j_1, k_1), (j_2, k_2)\} + d\{(j_2, k_2), (j_3, k_3)\} \ge d\{(j_1, k_1), (j_3, k_3)\} \qquad (18.2\text{-}1c)$$

There are a number of distance functions that satisfy the defining properties. The most common measures encountered in image analysis are the Euclidean distance,

$$d_E = \left[ (j_1 - j_2)^2 + (k_1 - k_2)^2 \right]^{1/2} \qquad (18.2\text{-}2a)$$

the magnitude distance,

$$d_M = |j_1 - j_2| + |k_1 - k_2| \qquad (18.2\text{-}2b)$$

and the maximum value distance,

$$d_X = \mathrm{MAX}\{ |j_1 - j_2|, |k_1 - k_2| \} \qquad (18.2\text{-}2c)$$

In discrete images, the coordinate differences ( j1 – j 2 ) and ( k 1 – k 2 ) are integers, but the Euclidean distance is usually not an integer. Perimeter and area measurements are meaningful only for binary images. Consider a discrete binary image containing one or more objects, where F ( j, k ) = 1 if a pixel is part of the object and F ( j, k ) = 0 for all nonobject or background pixels. The perimeter of each object is the count of the number of pixel sides traversed around the boundary of the object starting at an arbitrary initial boundary pixel and returning to the initial pixel. The area of each object within the image is simply the count of the number of pixels in the object for which F ( j, k ) = 1. As an example, for


a 2 × 2 pixel square, the object area is A O = 4 and the object perimeter is P O = 8. An object formed of three diagonally connected pixels possesses A O = 3 and PO = 12 . The enclosed area of an object is defined to be the total number of pixels for which F ( j, k ) = 0 or 1 within the outer perimeter boundary PE of the object. The enclosed area can be computed during a boundary-following process while the perimeter is being computed (7,8). Assume that the initial pixel in the boundaryfollowing process is the first black pixel encountered in a raster scan of the image. Then, proceeding in a clockwise direction around the boundary, a crack code C(p), as defined in Section 17.6, is generated for each side p of the object perimeter such that C(p) = 0, 1, 2, 3 for directional angles 0, 90, 180, 270°, respectively. The enclosed area is
$$A_E = \sum_{p=1}^{P_E} j(p - 1)\,\Delta k(p) \qquad (18.2\text{-}3a)$$

where $P_E$ is the perimeter of the enclosed object and

$$j(p) = \sum_{i=1}^{p} \Delta j(i) \qquad (18.2\text{-}3b)$$

with j(0) = 0. The delta terms are defined by

$$\Delta j(p) = \begin{cases} 1 & \text{if } C(p) = 1 \qquad (18.2\text{-}4a) \\ 0 & \text{if } C(p) = 0 \text{ or } 2 \qquad (18.2\text{-}4b) \\ -1 & \text{if } C(p) = 3 \qquad (18.2\text{-}4c) \end{cases}$$

$$\Delta k(p) = \begin{cases} 1 & \text{if } C(p) = 0 \qquad (18.2\text{-}4d) \\ 0 & \text{if } C(p) = 1 \text{ or } 3 \qquad (18.2\text{-}4e) \\ -1 & \text{if } C(p) = 2 \qquad (18.2\text{-}4f) \end{cases}$$

Table 18.2-1 gives an example of computation of the enclosed area of the following four-pixel object:


0 0 0 0
0 1 1 0
0 0 1 0
0 1 0 0
0 0 0 0

TABLE 18.2-1. Example of Perimeter and Area Computation

 p   C(p)  Δj(p)  Δk(p)  j(p)  A(p)
 1    0      0      1      0     0
 2    3     -1      0     -1     0
 3    0      0      1     -1    -1
 4    1      1      0      0    -1
 5    0      0      1      0    -1
 6    3     -1      0     -1    -1
 7    2      0     -1     -1     0
 8    3     -1      0     -2     0
 9    2      0     -1     -2     2
10    2      0     -1     -2     4
11    1      1      0     -1     4
12    1      1      0      0     4
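A short sketch of Eqs. 18.2-3 and 18.2-4 applied to a clockwise crack code follows; applied to the code of Table 18.2-1, it reproduces the enclosed area of 4 and a perimeter of 12.

```python
def crack_code_area_perimeter(codes):
    """Enclosed area and perimeter from a clockwise crack code
    (Eqs. 18.2-3 and 18.2-4): C(p) = 0, 1, 2, 3 for 0, 90, 180, 270 degrees."""
    dj = {0: 0, 1: 1, 2: 0, 3: -1}    # Eq. 18.2-4a to c
    dk = {0: 1, 1: 0, 2: -1, 3: 0}    # Eq. 18.2-4d to f
    j = 0
    area = 0
    for c in codes:
        area += j * dk[c]             # j(p-1) * delta-k(p), Eq. 18.2-3a
        j += dj[c]                    # running sum j(p), Eq. 18.2-3b
    return abs(area), len(codes)

# The crack code of Table 18.2-1:
print(crack_code_area_perimeter([0, 3, 0, 1, 0, 3, 2, 3, 2, 2, 1, 1]))   # (4, 12)
```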

18.2.1. Bit Quads

Gray (9) has devised a systematic method of computing the area and perimeter of binary objects based on matching the logical state of regions of an image to binary patterns. Let n{Q} represent the count of the number of matches between image pixels and the pattern Q within the curly brackets. By this definition, the object area is then

$$A_O = n\{1\} \qquad (18.2\text{-}5)$$

If the object is enclosed completely by a border of white pixels, its perimeter is equal to

$$P_O = 2\,n\{ [\,0\;\;1\,] \} + 2\,n\left\{ \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\} \qquad (18.2\text{-}6)$$

Now, consider the following set of 2 × 2 pixel patterns called bit quads defined in Figure 18.2-1. The object area and object perimeter of an image can be expressed in terms of the number of bit quad counts in the image as


FIGURE 18.2-1. Bit quad patterns.

$$A_O = \tfrac{1}{4}\bigl[ n\{Q_1\} + 2n\{Q_2\} + 3n\{Q_3\} + 4n\{Q_4\} + 2n\{Q_D\} \bigr] \qquad (18.2\text{-}7a)$$

$$P_O = n\{Q_1\} + n\{Q_2\} + n\{Q_3\} + 2n\{Q_D\} \qquad (18.2\text{-}7b)$$

These area and perimeter formulas may be in considerable error if they are utilized to represent the area of a continuous object that has been coarsely discretized. More accurate formulas for such applications have been derived by Duda (10):

$$A_O = \tfrac{1}{4} n\{Q_1\} + \tfrac{1}{2} n\{Q_2\} + \tfrac{7}{8} n\{Q_3\} + n\{Q_4\} + \tfrac{3}{4} n\{Q_D\} \qquad (18.2\text{-}8a)$$

$$P_O = n\{Q_2\} + \frac{1}{\sqrt{2}}\bigl[ n\{Q_1\} + n\{Q_3\} + 2n\{Q_D\} \bigr] \qquad (18.2\text{-}8b)$$


Bit quad counting provides a very simple means of determining the Euler number of an image. Gray (9) has determined that under the definition of four-connectivity, the Euler number can be computed as
$$E = \tfrac{1}{4}\bigl[ n\{Q_1\} - n\{Q_3\} + 2n\{Q_D\} \bigr] \qquad (18.2\text{-}9a)$$

and for eight-connectivity

$$E = \tfrac{1}{4}\bigl[ n\{Q_1\} - n\{Q_3\} - 2n\{Q_D\} \bigr] \qquad (18.2\text{-}9b)$$

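A direct, unoptimized sketch of bit quad counting and the Euler number computation of Eq. 18.2-9 is given below; classifying quads by their pixel count is an implementation convenience equivalent to matching the patterns of Figure 18.2-1.

```python
import numpy as np

def bit_quad_counts(F):
    """Count bit quad matches in a binary image (Gray's method). F is padded
    with a white border so every object pixel is enclosed. Q1..Q4 are the
    quads with one to four black pixels and QD the two diagonal quads."""
    F = np.pad(F.astype(int), 1)
    q = {1: 0, 2: 0, 3: 0, 4: 0, 'D': 0}
    for j in range(F.shape[0] - 1):
        for k in range(F.shape[1] - 1):
            quad = F[j:j+2, k:k+2]
            s = quad.sum()
            if s == 2 and quad[0, 0] == quad[1, 1]:
                q['D'] += 1          # diagonal pair
            elif s > 0:
                q[s] += 1
    return q

def euler_number(F, connectivity=4):
    """Euler number from bit quad counts, Eq. 18.2-9."""
    q = bit_quad_counts(F)
    sign = 1 if connectivity == 4 else -1
    return (q[1] - q[3] + sign * 2 * q['D']) / 4
```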
It should be noted that although it is possible to compute the Euler number E of an image by local neighborhood computation, neither the number of connected components C nor the number of holes H, for which E = C − H, can be separately computed by local neighborhood computation.

18.2.2. Geometric Attributes

With the establishment of distance, area, and perimeter measurements, various geometric attributes of objects can be developed. In the following, it is assumed that the number of holes with respect to the number of objects is small (i.e., E is approximately equal to C). The circularity of an object is defined as
$$C_O = \frac{4\pi A_O}{(P_O)^2} \qquad (18.2\text{-}10)$$

This attribute is also called the thinness ratio. A circle-shaped object has a circularity of unity; oblong-shaped objects possess a circularity of less than 1. If an image contains many components but few holes, the Euler number can be taken as an approximation of the number of components. Hence, the average area and perimeter of connected components, for E > 0, may be expressed as (9)

$$A_A = \frac{A_O}{E} \qquad (18.2\text{-}11)$$

$$P_A = \frac{P_O}{E} \qquad (18.2\text{-}12)$$

For images containing thin objects, such as typewritten or script characters, the average object length and width can be approximated by


$$L_A = \frac{P_A}{2} \qquad (18.2\text{-}13)$$

$$W_A = \frac{2 A_A}{P_A} \qquad (18.2\text{-}14)$$

These simple measures are useful for distinguishing gross characteristics of an image. For example, does it contain a multitude of small pointlike objects, or fewer bloblike objects of larger size; are the objects fat or thin? Figure 18.2-2 contains images of playing card symbols. Table 18.2-2 lists the geometric attributes of these objects.

FIGURE 18.2-2. Playing card symbol images: (a) spade; (b) heart; (c) diamond; (d) club.


TABLE 18.2-2. Geometric Attributes of Playing Card Symbols

Attribute            Spade    Heart    Diamond   Club
Outer perimeter        652      512        548    668
Enclosed area        8,421    8,681      8,562  8,820
Average area         8,421    8,681      8,562  8,820
Average perimeter      652      512        548    668
Average length         326      256        274    334
Average width         25.8     33.9       31.3   26.4
Circularity           0.25     0.42       0.36   0.25

18.3. SPATIAL MOMENTS

From probability theory, the (m, n)th moment of the joint probability density p(x, y) is defined as

$$M(m, n) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^m y^n\, p(x, y)\, dx\, dy \qquad (18.3\text{-}1)$$

The central moment is given by

$$U(m, n) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - \eta_x)^m (y - \eta_y)^n\, p(x, y)\, dx\, dy \qquad (18.3\text{-}2)$$

where $\eta_x$ and $\eta_y$ are the marginal means of p(x, y). These classical relationships of probability theory have been applied to shape analysis by Hu (11) and Alt (12). The concept is quite simple. The joint probability density p(x, y) of Eqs. 18.3-1 and 18.3-2 is replaced by the continuous image function F(x, y). Object shape is characterized by a few of the low-order moments. Abu-Mostafa and Psaltis (13,14) have investigated the performance of spatial moments as features for shape analysis.

18.3.1. Discrete Image Spatial Moments

The spatial moment concept can be extended to discrete images by forming spatial summations over a discrete image function F(j, k). The literature (15–17) is notationally inconsistent on the discrete extension because of the differing relationships defined between the continuous and discrete domains. Following the notation established in Chapter 13, the (m, n)th spatial moment is defined as

$$M_U(m, n) = \sum_{j=1}^{J}\sum_{k=1}^{K} (x_k)^m (y_j)^n F(j, k) \qquad (18.3\text{-}3)$$


where, with reference to Figure 13.1-1, the scaled coordinates are

$$x_k = k - \tfrac{1}{2} \qquad (18.3\text{-}4a)$$

$$y_j = J + \tfrac{1}{2} - j \qquad (18.3\text{-}4b)$$

The origin of the coordinate system is the lower left corner of the image. This formulation results in moments that are extremely scale dependent; the ratio of second-order (m + n = 2) to zero-order (m = n = 0) moments can vary by several orders of magnitude (18). The spatial moments can be restricted in range by spatially scaling the image array over a unit range in each dimension. The (m, n)th scaled spatial moment is then defined as

$$M(m, n) = \frac{1}{J^n K^m} \sum_{j=1}^{J}\sum_{k=1}^{K} (x_k)^m (y_j)^n F(j, k) \qquad (18.3\text{-}5)$$

Clearly,

$$M(m, n) = \frac{M_U(m, n)}{J^n K^m} \qquad (18.3\text{-}6)$$

It is instructive to explicitly identify the lower-order spatial moments. The zero-order moment

$$M(0, 0) = \sum_{j=1}^{J}\sum_{k=1}^{K} F(j, k) \qquad (18.3\text{-}7)$$

is the sum of the pixel values of an image. It is called the image surface. If F(j, k) is a binary image, its surface is equal to its area. The first-order row moment is

$$M(1, 0) = \frac{1}{K} \sum_{j=1}^{J}\sum_{k=1}^{K} x_k F(j, k) \qquad (18.3\text{-}8)$$

and the first-order column moment is

$$M(0, 1) = \frac{1}{J} \sum_{j=1}^{J}\sum_{k=1}^{K} y_j F(j, k) \qquad (18.3\text{-}9)$$

Table 18.3-1 lists the scaled spatial moments of several test images. These images include unit-amplitude gray scale versions of the playing card symbols of Figure 18.2-2, several rotated, minified and magnified versions of these symbols, as shown in Figure 18.3-1, as well as an elliptically shaped gray scale object shown in Figure 18.3-2. The ratios

TABLE 18.3-1. Scaled Spatial Moments of Test Images M(1,0) 4,013.75 4,186.39 4,283.65 4,276.28 17,130.64 1,047.38 4,349.00 4,294.89 4,323.54 4,363.23 4,326.93 4,377.78 2,175.86 2,189.76 4,220.96 2,196.08 2,103.88 4,500.10 2,150.47 2,215.32 2,344.02 2,057.66 2,226.61 4,324.09 2,196.40 2,168.00 2,196.97 4,704.71 2,222.43 2,390.10 2,627.42 1,142.44 1,143.83 1,080.29 1,120.12 1,108.47 1,059.44 522.14 527.16 535.38 260.78 17,442.91 8,762.68 8,658.34 9,402.25 4,608.05 4,337.90 2,149.18 2,143.52 2,211.15 1,092.92 1,071.95 4,442.37 262.82 1,221.53 1,108.30 1,101.21 1,062.39 1,109.92 4,341.36 2,145.90 2,158.40 2,223.79 1,083.06 1,081.72 3,968.30 2,149.35 2,021.65 1,949.89 1,111.69 1,038.04 993.20 1,105.73 1,008.05 4,669.42 266.41 1,334.97 1,101.11 1,153.76 1,028.90 1,122.62 4,281.28 1,976.12 2,089.86 2,263.11 980.81 1,028.31 1,104.36 M(0,1) M(2,0) M(1,1) M(0,2) M(3,0) M(2,1) M(1,2) M(0,3) 1,213.73 973.53 1,156.35 1,140.43 5,318.58 271.61 1,490.26 1,122.93 1,241.04 1,017.60 1,146.97

Image

M(0,0)

Spade

8,219.98

Rotated spade

8,215.99

Heart

8,616.79

Rotated Heart

8,613.79

Magnified heart

34,523.13

Minified heart

2,104.97

Diamond

8,561.82

Rotated diamond

8,562.82

Club

8,781.71

Rotated club

8,787.71

Ellipse

8,721.74



FIGURE 18.3-1. Rotated, magnified, and minified playing card symbol images: (a) rotated spade; (b) rotated heart; (c) rotated diamond; (d) rotated club; (e) minified heart; (f) magnified heart.


FIGURE 18.3-2. Elliptically shaped object image.

$$\bar{x}_k = \frac{M(1, 0)}{M(0, 0)} \qquad (18.3\text{-}10a)$$

$$\bar{y}_j = \frac{M(0, 1)}{M(0, 0)} \qquad (18.3\text{-}10b)$$

of first- to zero-order spatial moments define the image centroid. The centroid, called the center of gravity, is the balance point of the image function F ( j, k ) such that the mass of F ( j, k ) left and right of x k and above and below y j is equal. With the centroid established, it is possible to define the scaled spatial central moments of a discrete image, in correspondence with Eq. 18.3-2, as
$$U(m, n) = \frac{1}{J^n K^m} \sum_{j=1}^{J}\sum_{k=1}^{K} (x_k - \bar{x}_k)^m (y_j - \bar{y}_j)^n F(j, k) \qquad (18.3\text{-}11)$$

For future reference, the (m, n)th unscaled spatial central moment is defined as

$$U_U(m, n) = \sum_{j=1}^{J}\sum_{k=1}^{K} (x_k - \tilde{x}_k)^m (y_j - \tilde{y}_j)^n F(j, k) \qquad (18.3\text{-}12)$$

where

$$\tilde{x}_k = \frac{M_U(1, 0)}{M_U(0, 0)} \qquad (18.3\text{-}13a)$$

$$\tilde{y}_j = \frac{M_U(0, 1)}{M_U(0, 0)} \qquad (18.3\text{-}13b)$$

It is easily shown that
$$U(m, n) = \frac{U_U(m, n)}{J^n K^m} \qquad (18.3\text{-}14)$$

The three second-order scaled central moments are the row moment of inertia,

$$U(2, 0) = \frac{1}{K^2} \sum_{j=1}^{J}\sum_{k=1}^{K} (x_k - \bar{x}_k)^2 F(j, k) \qquad (18.3\text{-}15)$$

the column moment of inertia,

$$U(0, 2) = \frac{1}{J^2} \sum_{j=1}^{J}\sum_{k=1}^{K} (y_j - \bar{y}_j)^2 F(j, k) \qquad (18.3\text{-}16)$$

and the row–column cross moment of inertia,

$$U(1, 1) = \frac{1}{JK} \sum_{j=1}^{J}\sum_{k=1}^{K} (x_k - \bar{x}_k)(y_j - \bar{y}_j) F(j, k) \qquad (18.3\text{-}17)$$

The central moments of order 3 can be computed directly from Eq. 18.3-11 for m + n = 3, or indirectly according to the following relations:

$$U(3, 0) = M(3, 0) - 3\bar{x}_k M(2, 0) + 2(\bar{x}_k)^2 M(1, 0) \qquad (18.3\text{-}18a)$$

$$U(2, 1) = M(2, 1) - 2\bar{x}_k M(1, 1) - \bar{y}_j M(2, 0) + 2(\bar{x}_k)^2 M(0, 1) \qquad (18.3\text{-}18b)$$

$$U(1, 2) = M(1, 2) - 2\bar{y}_j M(1, 1) - \bar{x}_k M(0, 2) + 2(\bar{y}_j)^2 M(1, 0) \qquad (18.3\text{-}18c)$$

$$U(0, 3) = M(0, 3) - 3\bar{y}_j M(0, 2) + 2(\bar{y}_j)^2 M(0, 1) \qquad (18.3\text{-}18d)$$

Table 18.3-2 presents the horizontal and vertical centers of gravity and the scaled central spatial moments of the test images. The three second-order moments of inertia defined by Eqs. 18.3-15, 18.3-16, and 18.3-17 can be used to create the moment of inertia covariance matrix,
$$\mathbf{U} = \begin{bmatrix} U(2, 0) & U(1, 1) \\ U(1, 1) & U(0, 2) \end{bmatrix} \qquad (18.3\text{-}19)$$

Performing a singular-value decomposition of the covariance matrix results in the diagonal matrix

$$\mathbf{E}^T \mathbf{U} \mathbf{E} = \boldsymbol{\Lambda} \qquad (18.3\text{-}20)$$

where the columns of

$$\mathbf{E} = \begin{bmatrix} e_{11} & e_{12} \\ e_{21} & e_{22} \end{bmatrix} \qquad (18.3\text{-}21)$$

are the eigenvectors of U and

$$\boldsymbol{\Lambda} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} \qquad (18.3\text{-}22)$$

contains the eigenvalues of U. Expressions for the eigenvalues can be derived explicitly. They are
$$\lambda_1 = \tfrac{1}{2}[U(2, 0) + U(0, 2)] + \tfrac{1}{2}\bigl[U(2, 0)^2 + U(0, 2)^2 - 2U(2, 0)U(0, 2) + 4U(1, 1)^2\bigr]^{1/2} \qquad (18.3\text{-}23a)$$

$$\lambda_2 = \tfrac{1}{2}[U(2, 0) + U(0, 2)] - \tfrac{1}{2}\bigl[U(2, 0)^2 + U(0, 2)^2 - 2U(2, 0)U(0, 2) + 4U(1, 1)^2\bigr]^{1/2} \qquad (18.3\text{-}23b)$$

604 Horizontal COG U(2,0) 16.240 16.207 16.380 26.237 262.321 0.984 13.337 42.198 21.834 29.675 29.236 17.913 8.116 –0.239 37.979 30.228 29.236 –0.853 13.366 0.324 42.186 –0.002 –0.158 0.037 0.268 0.000 0.013 2.165 0.000 3.037 589.162 0.383 –10.009 26.584 –0.077 –0.438 11.991 0.011 –0.026 0.009 –0.545 –0.505 0.000 0.194 36.506 –0.012 0.371 –0.366 33.215 –0.013 0.284 –0.653 33.261 0.026 –0.285 –0.017 –0.002 0.027 0.411 0.886 0.000 0.005 0.029 –0.039 –0.557 0.000 U(1,1) U(0,2) U(3,0) U(2,1) U(1,2) 0.488 0.510 0.497 0.496 0.496 0.498 0.508 0.502 0.492 0.497 0.496 0.502 0.480 0.512 0.505 0.549 0.503 0.505 0.504 0.504 0.483 0.521 Vertical COG U(0,3) 0.363 –0.357 –0.831 0.122 –27.284 –0.025 0.136 –0.005 0.950 0.216 0.000

TABLE 18.3-2 Centers of Gravity and Scaled Spatial Central Moments of Test Images

Image

Spade

Rotated spade

Heart

Rotated heart

Magnified heart

Minified heart

Diamond

Rotated diamond

Club

Rotated club

Ellipse


Let $\lambda_M = \mathrm{MAX}\{\lambda_1, \lambda_2\}$ and $\lambda_N = \mathrm{MIN}\{\lambda_1, \lambda_2\}$, and let the orientation angle θ be defined as

$$\theta = \begin{cases} \arctan\{ e_{21} / e_{11} \} & \text{if } \lambda_M = \lambda_1 \qquad (18.3\text{-}24a) \\[4pt] \arctan\{ e_{22} / e_{12} \} & \text{if } \lambda_M = \lambda_2 \qquad (18.3\text{-}24b) \end{cases}$$

The orientation angle can be expressed explicitly as

$$\theta = \arctan\left\{ \frac{U(1, 1)}{\lambda_M - U(0, 2)} \right\} \qquad (18.3\text{-}24c)$$

The eigenvalues $\lambda_M$ and $\lambda_N$ and the orientation angle θ define an ellipse, as shown in Figure 18.3-2, whose major axis is $\lambda_M$ and whose minor axis is $\lambda_N$. The major axis of the ellipse is rotated by the angle θ with respect to the horizontal axis. This elliptically shaped object has the same moments of inertia along the horizontal and vertical axes and the same moments of inertia along the principal axes as does an actual object in an image. The ratio

$$R_A = \frac{\lambda_N}{\lambda_M} \qquad (18.3\text{-}25)$$

of the minor-to-major axes is a useful shape feature. Table 18.3-3 provides moment of inertia data for the test images. It should be noted that the orientation angle can only be determined to within plus or minus π ⁄ 2 radians.
TABLE 18.3-3. Moment of Inertia Data of Test Images

Image             Largest Eigenvalue  Smallest Eigenvalue  Orientation (radians)  Eigenvalue Ratio
Spade                   33.286              16.215               -0.153                0.487
Rotated spade           33.223              16.200               -1.549                0.488
Heart                   36.508              16.376                1.561                0.449
Rotated heart           36.421              16.400               -0.794                0.450
Magnified heart        589.190             262.290                1.562                0.445
Minified heart           2.165               0.984                1.560                0.454
Diamond                 42.189              13.334                1.560                0.316
Rotated diamond         42.223              13.341               -0.030                0.316
Club                    37.982              21.831               -1.556                0.575
Rotated club            38.073              21.831                0.802                0.573
Ellipse                 47.149              11.324                0.785                0.240
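The moment-of-inertia features can be sketched as follows using unscaled moments; the eigenvalues therefore differ from the scaled values of Table 18.3-3 by constant factors, and the coordinate conventions are assumptions of this sketch.

```python
import numpy as np

def inertia_features(F):
    """Sketch of the second-order moment features of Eqs. 18.3-15 to 18.3-25
    for a gray scale or binary image F."""
    J, K = F.shape
    y, x = np.mgrid[0:J, 0:K]
    x = x + 0.5                  # x_k = k - 1/2 with k = 1..K
    y = J - 0.5 - y              # y_j = J + 1/2 - j with j = 1..J
    m00 = F.sum()
    xc, yc = (x * F).sum() / m00, (y * F).sum() / m00     # centroid, Eq. 18.3-10
    u20 = ((x - xc) ** 2 * F).sum()                       # moments of inertia
    u02 = ((y - yc) ** 2 * F).sum()
    u11 = ((x - xc) * (y - yc) * F).sum()
    U = np.array([[u20, u11], [u11, u02]])                # Eq. 18.3-19
    lam, vec = np.linalg.eigh(U)                          # eigenvalues ascending
    lam_n, lam_m = lam
    theta = np.arctan2(vec[1, 1], vec[0, 1])              # major-axis orientation
    return lam_m, lam_n, theta, lam_n / lam_m             # axis ratio, Eq. 18.3-25
```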


Hu (11) has proposed a normalization of the unscaled central moments, defined by Eq. 18.3-12, according to the relation
$$V(m, n) = \frac{U_U(m, n)}{[M(0, 0)]^{\alpha}} \qquad (18.3\text{-}26a)$$

where

$$\alpha = \frac{m + n}{2} + 1 \qquad (18.3\text{-}26b)$$

for m + n = 2, 3, .... These normalized central moments have been used by Hu to develop a set of seven compound spatial moments that are invariant in the continuous image domain to translation, rotation, and scale change. The Hu invariant moments are defined below.

$$h_1 = V(2, 0) + V(0, 2) \qquad (18.3\text{-}27a)$$

$$h_2 = [V(2, 0) - V(0, 2)]^2 + 4[V(1, 1)]^2 \qquad (18.3\text{-}27b)$$

$$h_3 = [V(3, 0) - 3V(1, 2)]^2 + [V(0, 3) - 3V(2, 1)]^2 \qquad (18.3\text{-}27c)$$

$$h_4 = [V(3, 0) + V(1, 2)]^2 + [V(0, 3) + V(2, 1)]^2 \qquad (18.3\text{-}27d)$$

$$h_5 = [V(3, 0) - 3V(1, 2)][V(3, 0) + V(1, 2)]\bigl[[V(3, 0) + V(1, 2)]^2 - 3[V(0, 3) + V(2, 1)]^2\bigr] + [3V(2, 1) - V(0, 3)][V(0, 3) + V(2, 1)]\bigl[3[V(3, 0) + V(1, 2)]^2 - [V(0, 3) + V(2, 1)]^2\bigr] \qquad (18.3\text{-}27e)$$

$$h_6 = [V(2, 0) - V(0, 2)]\bigl[[V(3, 0) + V(1, 2)]^2 - [V(0, 3) + V(2, 1)]^2\bigr] + 4V(1, 1)[V(3, 0) + V(1, 2)][V(0, 3) + V(2, 1)] \qquad (18.3\text{-}27f)$$

$$h_7 = [3V(2, 1) - V(0, 3)][V(3, 0) + V(1, 2)]\bigl[[V(3, 0) + V(1, 2)]^2 - 3[V(0, 3) + V(2, 1)]^2\bigr] + [3V(1, 2) - V(3, 0)][V(0, 3) + V(2, 1)]\bigl[3[V(3, 0) + V(1, 2)]^2 - [V(0, 3) + V(2, 1)]^2\bigr] \qquad (18.3\text{-}27g)$$

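A sketch of Eqs. 18.3-26 and 18.3-27 follows. It uses simple pixel coordinates rather than the scaled coordinates of Eq. 18.3-4, which does not affect the invariance properties.

```python
import numpy as np

def hu_moments(F):
    """Sketch of the Hu invariant moments h1..h7 (Eqs. 18.3-26 and 18.3-27)
    computed from the unscaled central moments of image F."""
    J, K = F.shape
    y, x = np.mgrid[0:J, 0:K]
    m00 = F.sum()
    xc, yc = (x * F).sum() / m00, (y * F).sum() / m00

    def V(m, n):
        # Normalized central moment, Eq. 18.3-26 with alpha = (m + n)/2 + 1.
        return ((x - xc) ** m * (y - yc) ** n * F).sum() / m00 ** ((m + n) / 2 + 1)

    v20, v02, v11 = V(2, 0), V(0, 2), V(1, 1)
    v30, v03, v21, v12 = V(3, 0), V(0, 3), V(2, 1), V(1, 2)
    h1 = v20 + v02
    h2 = (v20 - v02) ** 2 + 4 * v11 ** 2
    h3 = (v30 - 3 * v12) ** 2 + (v03 - 3 * v21) ** 2
    h4 = (v30 + v12) ** 2 + (v03 + v21) ** 2
    h5 = ((v30 - 3 * v12) * (v30 + v12) * ((v30 + v12) ** 2 - 3 * (v03 + v21) ** 2)
          + (3 * v21 - v03) * (v03 + v21) * (3 * (v30 + v12) ** 2 - (v03 + v21) ** 2))
    h6 = ((v20 - v02) * ((v30 + v12) ** 2 - (v03 + v21) ** 2)
          + 4 * v11 * (v30 + v12) * (v03 + v21))
    h7 = ((3 * v21 - v03) * (v30 + v12) * ((v30 + v12) ** 2 - 3 * (v03 + v21) ** 2)
          + (3 * v12 - v30) * (v03 + v21) * (3 * (v30 + v12) ** 2 - (v03 + v21) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```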
Table 18.3-4 lists the moment invariants of the test images. As desired, these moment invariants are in reasonably close agreement for the geometrically modified versions of the same object, but differ between objects. The relatively small degree of variability of the moment invariants for the same object is due to the spatial discretization of the objects.


TABLE 18.3-4. Invariant Moments of Test Images

Image             h1 x 10^1   h2 x 10^3   h3 x 10^3   h4 x 10^5   h5 x 10^9   h6 x 10^6   h7 x 10^1
Spade               1.920       4.387       0.715       0.295       0.123       0.185     -14.159
Rotated spade       1.919       4.371       0.704       0.270       0.097       0.162     -11.102
Heart               1.867       5.052       1.435       8.052      27.340       5.702     -15.483
Rotated heart       1.866       5.004       1.434       8.010      27.126       5.650     -14.788
Magnified heart     1.873       5.710       1.473       8.600      30.575       6.162       0.559
Minified heart      1.863       4.887       1.443       8.019      27.241       5.583       0.658
Diamond             1.986      10.648       0.018       0.475       0.004       0.490       0.004
Rotated diamond     1.987      10.663       0.024       0.656       0.082       0.678      -0.020
Club                2.033       3.014       2.313       5.641      20.353       3.096      10.226
Rotated club        2.033       3.040       2.323       5.749      20.968       3.167      13.487
Ellipse             2.015      15.242       0.000       0.000       0.000       0.000       0.000

The terms of Eq. 18.3-27 contain differences of relatively large quantities, and therefore, are sometimes subject to significant roundoff error. Liao and Pawlak (19) have investigated the numerical accuracy of moment measures.

18.4. SHAPE ORIENTATION DESCRIPTORS

The spatial orientation of an object with respect to a horizontal reference axis is the basis of a set of orientation descriptors developed at the Stanford Research Institute (20). These descriptors, defined below, are described in Figure 18.4-1.

1. Image-oriented bounding box: the smallest rectangle oriented along the rows of the image that encompasses the object
2. Image-oriented box height: dimension of box height for image-oriented box

FIGURE 18.4-1. Shape orientation descriptors.


3. Image-oriented box width: dimension of box width for image-oriented box
4. Image-oriented box area: area of image-oriented bounding box
5. Image-oriented box ratio: ratio of box area to enclosed area of an object for an image-oriented box
6. Object-oriented bounding box: the smallest rectangle oriented along the major axis of the object that encompasses the object
7. Object-oriented box height: dimension of box height for object-oriented box
8. Object-oriented box width: dimension of box width for object-oriented box
9. Object-oriented box area: area of object-oriented bounding box
10. Object-oriented box ratio: ratio of box area to enclosed area of an object for an object-oriented box
11. Minimum radius: the minimum distance between the centroid and a perimeter pixel
12. Maximum radius: the maximum distance between the centroid and a perimeter pixel
13. Minimum radius angle: the angle of the minimum radius vector with respect to the horizontal axis
14. Maximum radius angle: the angle of the maximum radius vector with respect to the horizontal axis
15. Radius ratio: ratio of the minimum radius to the maximum radius

Table 18.4-1 lists the orientation descriptors of some of the playing card symbols.
TABLE 18.4-1. Shape Orientation Descriptors of the Playing Card Symbols

Descriptor                     Spade    Rotated Heart   Rotated Diamond   Rotated Club
Row-bounding box height          155          122               99              123
Row-bounding box width            95          125              175              121
Row-bounding box area         14,725       15,250           17,325           14,883
Row-bounding box ratio          1.75         1.76             2.02             1.69
Object-bounding box height        94          147               99              148
Object-bounding box width        154           93              175              112
Object-bounding box area      14,476       13,671           17,325           16,576
Object-bounding box ratio       1.72         1.57             2.02             1.88
Minimum radius                 11.18        38.28            38.95            26.00
Maximum radius                 92.05        84.17            88.02            82.22
Minimum radius angle           -1.11         0.35             1.06             0.00
Maximum radius angle           -1.54        -0.76             0.02             0.85
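A few of these descriptors might be computed as sketched below; the boundary-pixel definition (object pixels with a 4-connected background neighbor) is an assumption of this sketch.

```python
import numpy as np

def orientation_descriptors(F):
    """Sketch of the image-oriented bounding box and the minimum/maximum
    radii from the centroid to the boundary pixels of a binary object F."""
    F = F.astype(bool)
    rows, cols = np.nonzero(F)
    box_height = rows.max() - rows.min() + 1
    box_width = cols.max() - cols.min() + 1
    # Boundary pixels: object pixels with at least one 4-connected background neighbor.
    padded = np.pad(F, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    by, bx = np.nonzero(F & ~interior)
    yc, xc = rows.mean(), cols.mean()       # object centroid
    radii = np.hypot(by - yc, bx - xc)
    return {"box_height": box_height, "box_width": box_width,
            "box_area": box_height * box_width,
            "min_radius": radii.min(), "max_radius": radii.max()}
```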


18.5. FOURIER DESCRIPTORS

The perimeter of an arbitrary closed curve can be represented by its instantaneous curvature at each perimeter point. Consider the continuous closed curve drawn on the complex plane of Figure 18.5-1, in which a point on the perimeter is measured by its polar position z(s) as a function of arc length s. The complex function z(s) may be expressed in terms of its real part x(s) and imaginary part y(s) as

$$z(s) = x(s) + i\,y(s) \qquad (18.5\text{-}1)$$

The tangent angle defined in Figure 18.5-1 is given by

$$\Phi(s) = \arctan\left\{ \frac{dy(s)/ds}{dx(s)/ds} \right\} \qquad (18.5\text{-}2)$$

and the curvature is the real function

$$k(s) = \frac{d\Phi(s)}{ds} \qquad (18.5\text{-}3)$$

The coordinate points x(s), y(s) can be obtained from the curvature function by the reconstruction formulas

$$x(s) = x(0) + \int_0^s k(\alpha)\cos\{\Phi(\alpha)\}\, d\alpha \qquad (18.5\text{-}4a)$$

$$y(s) = y(0) + \int_0^s k(\alpha)\sin\{\Phi(\alpha)\}\, d\alpha \qquad (18.5\text{-}4b)$$

where x(0) and y(0) are the starting point coordinates.

FIGURE 18.5-1. Geometry for curvature definition.


Because the curvature function is periodic over the perimeter length P, it can be expanded in a Fourier series as


$$k(s) = \sum_{n=-\infty}^{\infty} c_n \exp\left\{ \frac{2\pi i n s}{P} \right\} \qquad (18.5\text{-}5a)$$

where the coefficients $c_n$ are obtained from

$$c_n = \frac{1}{P}\int_0^P k(s)\exp\left\{ -\frac{2\pi i n s}{P} \right\} ds \qquad (18.5\text{-}5b)$$

This result is the basis of an analysis technique developed by Cosgriff (21) and Brill (22) in which the Fourier expansion of a shape is truncated to a few terms to produce a set of Fourier descriptors. These Fourier descriptors are then utilized as a symbolic representation of shape for subsequent recognition. If an object has sharp discontinuities (e.g., a rectangle), the curvature function is undefined at these points. This analytic difficulty can be overcome by the utilization of a cumulative shape function

$$\theta(s) = \int_0^s k(\alpha)\, d\alpha - \frac{2\pi s}{P} \qquad (18.5\text{-}6)$$

proposed by Zahn and Roskies (23). This function is also periodic over P and can therefore be expanded in a Fourier series for a shape description. Bennett and MacDonald (24) have analyzed the discretization error associated with the curvature function defined on discrete image arrays for a variety of connectivity algorithms. The discrete definition of curvature is given by

$$z(s_j) = x(s_j) + i\,y(s_j) \qquad (18.5\text{-}7a)$$

$$\Phi(s_j) = \arctan\left\{ \frac{y(s_j) - y(s_{j-1})}{x(s_j) - x(s_{j-1})} \right\} \qquad (18.5\text{-}7b)$$

$$k(s_j) = \Phi(s_j) - \Phi(s_{j-1}) \qquad (18.5\text{-}7c)$$

where s j represents the jth step of arc position. Figure 18.5-2 contains results of the Fourier expansion of the discrete curvature function.
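A sketch of forming Fourier descriptors from the discrete curvature of Eq. 18.5-7 is given below; the angle wrapping and the FFT normalization are implementation choices of this sketch.

```python
import numpy as np

def fourier_descriptors(boundary, num_coeffs=8):
    """Sketch of Fourier descriptors from a closed boundary given as an
    ordered array of (row, col) points. The discrete curvature of Eq. 18.5-7
    is formed and its Fourier coefficients (Eq. 18.5-5b, evaluated with the
    FFT) are truncated to a few low-order terms."""
    boundary = np.asarray(boundary, dtype=float)
    d = boundary - np.roll(boundary, 1, axis=0)          # steps between points
    phi = np.arctan2(d[:, 0], d[:, 1])                   # tangent angle, Eq. 18.5-7b
    k = np.angle(np.exp(1j * (phi - np.roll(phi, 1))))   # curvature, wrapped to (-pi, pi]
    c = np.fft.fft(k) / len(k)                           # Fourier coefficients c_n
    return c[:num_coeffs]
```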


FIGURE 18.5-2. Fourier expansions of curvature function.

REFERENCES
1. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley-Interscience, New York, 1973. 2. E. C. Greanis et al., “The Recognition of Handwritten Numerals by Contour Analysis,” IBM J. Research and Development, 7, 1, January 1963, 14–21.


3. M. A. Fischler, “Machine Perception and Description of Pictorial Data,” Proc. International Joint Conference on Artificial Intelligence, D. E. Walker and L. M. Norton, Eds., May 1969, 629–639. 4. J. Sklansky, “Recognizing Convex Blobs,” Proc. International Joint Conference on Artificial Intelligence, D. E. Walker and L. M. Norton, Eds., May 1969, 107–116. 5. J. Sklansky, L. P. Cordella, and S. Levialdi, “Parallel Detection of Concavities in Cellular Blobs,” IEEE Trans. Computers, C-25, 2, February 1976, 187–196. 6. A. Rosenfeld and J. L. Pflatz, “Distance Functions on Digital Pictures,” Pattern Recognition, 1, July 1968, 33–62. 7. Z. Kulpa, “Area and Perimeter Measurements of Blobs in Discrete Binary Pictures,” Computer Graphics and Image Processing, 6, 5, October 1977, 434–451. 8. G. Y. Tang, “A Discrete Version of Green's Theorem,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-7, 3, May 1985, 338–344. 9. S. B. Gray, “Local Properties of Binary Images in Two Dimensions,” IEEE Trans. Computers, C-20, 5, May 1971, 551–561. 10. R. O. Duda, “Image Segmentation and Description,” unpublished notes, 1975. 11. M. K. Hu, “Visual Pattern Recognition by Moment Invariants,” IRE Trans. Information Theory, IT-8, 2, February 1962, 179–187. 12. F. L. Alt, “Digital Pattern Recognition by Moments,” J. Association for Computing Machinery, 9, 2, April 1962, 240–258. 13. Y. S. Abu-Mostafa and D. Psaltis, “Recognition Aspects of Moment Invariants,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-6, 6, November 1984, 698– 706. 14. Y. S. Abu-Mostafa and D. Psaltis, “Image Normalization by Complex Moments,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-7, 6, January 1985, 46–55. 15. S. A. Dudani et al., “Aircraft Identification by Moment Invariants,” IEEE Trans. Computers, C-26, February 1962, 179–187. 16. F. W. Smith and M. H. Wright, “Automatic Ship Interpretation by the Method of Moments,” IEEE Trans. Computers, C-20, 1971, 1089–1094. 17. R. Wong and E. Hall, “Scene Matching with Moment Invariants,” Computer Graphics and Image Processing, 8, 1, August 1978, 16–24. 18. A. Goshtasby, “Template Matching in Rotated Images,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-7, 3, May 1985, 338–344. 19. S. X. Liao and M. Pawlak, “On Image Analysis by Moments,”IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-18, 3, March 1996, 254–266. 20. Stanford Research Institute, unpublished notes. 21. R. L. Cosgriff, “Identification of Shape,” Report 820-11, ASTIA AD 254 792, Ohio State University Research Foundation, Columbus, OH, December 1960. 22. E. L. Brill, “Character Recognition via Fourier Descriptors,” WESCON Convention Record, Paper 25/3, Los Angeles, 1968. 23. C. T. Zahn and R. Z. Roskies, “Fourier Descriptors for Plane Closed Curves,” IEEE Trans. Computers, C-21, 3, March 1972, 269–281. 24. J. R. Bennett and J. S. MacDonald, “On the Measurement of Curvature in a Quantized Environment,” IEEE Trans. Computers, C-25, 8, August 1975, 803–820.


19
IMAGE DETECTION AND REGISTRATION

This chapter covers two related image analysis tasks: detection and registration. Image detection is concerned with the determination of the presence or absence of objects suspected of being in an image. Image registration involves the spatial alignment of a pair of views of a scene.

19.1. TEMPLATE MATCHING One of the most fundamental means of object detection within an image field is by template matching, in which a replica of an object of interest is compared to all unknown objects in the image field (1–4). If the template match between an unknown object and the template is sufficiently close, the unknown object is labeled as the template object. As a simple example of the template-matching process, consider the set of binary black line figures against a white background as shown in Figure 19.1-1a. In this example, the objective is to detect the presence and location of right triangles in the image field. Figure 19.1-1b contains a simple template for localization of right triangles that possesses unit value in the triangular region and zero elsewhere. The width of the legs of the triangle template is chosen as a compromise between localization accuracy and size invariance of the template. In operation, the template is sequentially scanned over the image field and the common region between the template and image field is compared for similarity. A template match is rarely ever exact because of image noise, spatial and amplitude quantization effects, and a priori uncertainty as to the exact shape and structure of an object to be detected. Consequently, a common procedure is to produce a difference measure D ( m, n ) between the template and the image field at all points of

FIGURE 19.1-1. Template-matching example.

the image field where – M ≤ m ≤ M and – N ≤ n ≤ N denote the trial offset. An object is deemed to be matched wherever the difference is smaller than some established level L D ( m, n ) . Normally, the threshold level is constant over the image field. The usual difference measure is the mean-square difference or error as defined by
$$D(m, n) = \sum_j \sum_k [F(j, k) - T(j - m, k - n)]^2 \qquad (19.1\text{-}1)$$

where F(j, k) denotes the image field to be searched and T(j, k) is the template. The search, of course, is restricted to the overlap region between the translated template and the image field. A template match is then said to exist at coordinate (m, n) if

$$D(m, n) < L_D(m, n) \qquad (19.1\text{-}2)$$

Now, let Eq. 19.1-1 be expanded to yield
$$D(m, n) = D_1(m, n) - 2D_2(m, n) + D_3(m, n) \qquad (19.1\text{-}3)$$

where

$$D_1(m, n) = \sum_j \sum_k [F(j, k)]^2 \qquad (19.1\text{-}4a)$$

$$D_2(m, n) = \sum_j \sum_k F(j, k)\, T(j - m, k - n) \qquad (19.1\text{-}4b)$$

$$D_3(m, n) = \sum_j \sum_k [T(j - m, k - n)]^2 \qquad (19.1\text{-}4c)$$
The term D 3 ( m, n ) represents a summation of the template energy. It is constant valued and independent of the coordinate ( m, n ). The image energy over the window area represented by the first term D 1 ( m, n ) generally varies rather slowly over the image field. The second term should be recognized as the cross correlation RFT ( m, n ) between the image field and the template. At the coordinate location of a template match, the cross correlation should become large to yield a small difference. However, the magnitude of the cross correlation is not always an adequate measure of the template difference because the image energy term D 1 ( m, n ) is position variant. For example, the cross correlation can become large, even under a condition of template mismatch, if the image amplitude over the template region is high about a particular coordinate ( m, n ). This difficulty can be avoided by comparison of the normalized cross correlation

\tilde{R}_{FT}(m, n) = \frac{D_2(m, n)}{D_1(m, n)} = \frac{\sum_j \sum_k F(j, k)\, T(j - m, k - n)}{\sum_j \sum_k \left[ F(j, k) \right]^2}        (19.1-5)

to a threshold level L R ( m, n ). A template match is said to exist if
\tilde{R}_{FT}(m, n) > L_R(m, n)        (19.1-6)

The normalized cross correlation has a maximum value of unity that occurs if and only if the image function under the template exactly matches the template. One of the major limitations of template matching is that an enormous number of templates must often be test matched against an image field to account for changes in rotation and magnification of template objects. For this reason, template matching is usually limited to smaller local features, which are more invariant to size and shape variations of an object. Such features, for example, include edges joined in a Y or T arrangement.
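As a purely illustrative sketch (not part of the text), the following NumPy function evaluates the normalized cross correlation of Eq. 19.1-5 at every trial offset of a small template using corner-justified indexing; the function name and the threshold parameter are assumptions.

```python
import numpy as np

def template_match(image, template, threshold=0.9):
    """Brute-force evaluation of the normalized cross correlation of
    Eq. 19.1-5; offsets whose score exceeds the threshold level are
    reported as template matches."""
    J, K = image.shape
    P, Q = template.shape
    matches = []
    for m in range(J - P + 1):
        for n in range(K - Q + 1):
            window = image[m:m + P, n:n + Q]
            d2 = np.sum(window * template)     # cross correlation term D2
            d1 = np.sum(window ** 2)           # image energy term D1
            score = d2 / d1 if d1 > 0 else 0.0
            if score > threshold:
                matches.append((m, n, score))
    return matches
```

For large templates, the exhaustive scan above is normally replaced by the Fourier domain formulation discussed in Section 19.4.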


19.2. MATCHED FILTERING OF CONTINUOUS IMAGES

Matched filtering, implemented by electrical circuits, is widely used in one-dimensional signal detection applications such as radar and digital communication (5–7). It is also possible to detect objects within images by a two-dimensional version of the matched filter (8–12). In the context of image processing, the matched filter is a spatial filter that provides an output measure of the spatial correlation between an input image and a reference image. This correlation measure may then be utilized, for example, to determine the presence or absence of a given input image, or to assist in the spatial registration of two images. This section considers matched filtering of deterministic and stochastic images.

19.2.1. Matched Filtering of Deterministic Continuous Images

As an introduction to the concept of the matched filter, consider the problem of detecting the presence or absence of a known continuous, deterministic signal or reference image F(x, y) in an unknown or input image F_U(x, y) corrupted by additive stationary noise N(x, y) independent of F(x, y). Thus, F_U(x, y) is composed of the signal image plus noise,
F U ( x, y ) = F ( x, y ) + N ( x, y )

(19.2-1a)

or noise alone,
FU ( x, y ) = N ( x, y )

(19.2-1b)

The unknown image is spatially filtered by a matched filter with impulse response H ( x, y ) and transfer function H ( ω x, ω y ) to produce an output
F_O(x, y) = F_U(x, y) \circledast H(x, y)        (19.2-2)

The matched filter is designed so that the ratio of the signal image energy to the noise field energy at some point ( ε, η ) in the filter output plane is maximized. The instantaneous signal image energy at point ( ε, η ) of the filter output in the absence of noise is given by
\left| S(\varepsilon, \eta) \right|^2 = \left| F(x, y) \circledast H(x, y) \right|^2        (19.2-3)


with x = ε and y = η . By the convolution theorem,
\left| S(\varepsilon, \eta) \right|^2 = \left| \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} F(\omega_x, \omega_y)\, H(\omega_x, \omega_y) \exp\{ i(\omega_x \varepsilon + \omega_y \eta) \}\, d\omega_x\, d\omega_y \right|^2        (19.2-4)

where F ( ω x, ω y ) is the Fourier transform of F ( x, y ). The additive input noise component N ( x, y ) is assumed to be stationary, independent of the signal image, and described by its noise power-spectral density W N ( ω x, ω y ). From Eq. 1.4-27, the total noise power at the filter output is
N = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} W_N(\omega_x, \omega_y) \left| H(\omega_x, \omega_y) \right|^2 d\omega_x\, d\omega_y        (19.2-5)

Then, forming the signal-to-noise ratio, one obtains

\frac{\left| S(\varepsilon, \eta) \right|^2}{N} = \frac{\left| \int\!\int F(\omega_x, \omega_y)\, H(\omega_x, \omega_y) \exp\{ i(\omega_x \varepsilon + \omega_y \eta) \}\, d\omega_x\, d\omega_y \right|^2}{\int\!\int W_N(\omega_x, \omega_y) \left| H(\omega_x, \omega_y) \right|^2 d\omega_x\, d\omega_y}        (19.2-6)

This ratio is found to be maximized when the filter transfer function is of the form (5,8)
H(\omega_x, \omega_y) = \frac{F^*(\omega_x, \omega_y) \exp\{ -i(\omega_x \varepsilon + \omega_y \eta) \}}{W_N(\omega_x, \omega_y)}        (19.2-7)

If the input noise power-spectral density is white with a flat spectrum,
W_N(\omega_x, \omega_y) = n_w / 2, the matched filter transfer function reduces to

H(\omega_x, \omega_y) = \frac{2}{n_w}\, F^*(\omega_x, \omega_y) \exp\{ -i(\omega_x \varepsilon + \omega_y \eta) \}        (19.2-8)

and the corresponding filter impulse response becomes
H(x, y) = \frac{2}{n_w}\, F^*(\varepsilon - x, \eta - y)        (19.2-9)

In this case, the matched filter impulse response is an amplitude scaled version of the complex conjugate of the signal image rotated by 180°. For the case of white noise, the filter output can be written as
F_O(x, y) = \frac{2}{n_w}\, F_U(x, y) \circledast F^*(\varepsilon - x, \eta - y)        (19.2-10a)


or
F_O(x, y) = \frac{2}{n_w} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} F_U(\alpha, \beta)\, F^*(\alpha + \varepsilon - x,\; \beta + \eta - y)\, d\alpha\, d\beta        (19.2-10b)

If the matched filter offset ( ε, η ) is chosen to be zero, the filter output
F_O(x, y) = \frac{2}{n_w} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} F_U(\alpha, \beta)\, F^*(\alpha - x,\; \beta - y)\, d\alpha\, d\beta        (19.2-11)

is then seen to be proportional to the mathematical correlation between the input image and the complex conjugate of the signal image. Ordinarily, the parameters ( ε, η ) of the matched filter transfer function are set to be zero so that the origin of the output plane becomes the point of no translational offset between FU ( x, y ) and F ( x, y ). If the unknown image FU ( x, y ) consists of the signal image translated by distances ( ∆x, ∆y ) plus additive noise as defined by
F U ( x, y ) = F ( x + ∆x, y + ∆y ) + N ( x, y )

(19.2-12)

the matched filter output for ε = 0, η = 0 will be
F_O(x, y) = \frac{2}{n_w} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \left[ F(\alpha + \Delta x,\; \beta + \Delta y) + N(\alpha, \beta) \right] F^*(\alpha - x,\; \beta - y)\, d\alpha\, d\beta        (19.2-13)

A correlation peak will occur at x = ∆x , y = ∆y in the output plane, thus indicating the translation of the input image relative to the reference image. Hence the matched filter is translation invariant. It is, however, not invariant to rotation of the image to be detected. It is possible to implement the general matched filter of Eq. 19.2-7 as a two-stage linear filter with transfer function
H ( ω x, ω y ) = HA ( ω x, ω y )H B ( ω x, ω y )

(19.2-14)

The first stage, called a whitening filter, has a transfer function chosen such that noise N ( x, y ) with a power spectrum WN ( ω x, ω y ) at its input results in unit energy white noise at its output. Thus
W_N(\omega_x, \omega_y) \left| H_A(\omega_x, \omega_y) \right|^2 = 1        (19.2-15)


The transfer function of the whitening filter may be determined by a spectral factorization of the input noise power-spectral density into the product (7)
W_N(\omega_x, \omega_y) = W_N^{+}(\omega_x, \omega_y)\, W_N^{-}(\omega_x, \omega_y)        (19.2-16)

such that the following conditions hold:
W_N^{-}(\omega_x, \omega_y) = \left[ W_N^{+}(\omega_x, \omega_y) \right]^*        (19.2-17a)

W_N^{+}(\omega_x, \omega_y) = \left[ W_N^{-}(\omega_x, \omega_y) \right]^*        (19.2-17b)

W_N(\omega_x, \omega_y) = \left| W_N^{+}(\omega_x, \omega_y) \right|^2 = \left| W_N^{-}(\omega_x, \omega_y) \right|^2        (19.2-17c)

The simplest type of factorization is the spatially noncausal factorization
W_N^{+}(\omega_x, \omega_y) = \sqrt{ W_N(\omega_x, \omega_y) }\, \exp\{ i \theta(\omega_x, \omega_y) \}        (19.2-18)

where θ ( ω x, ω y ) represents an arbitrary phase angle. Causal factorization of the input noise power-spectral density may be difficult if the spectrum does not factor into separable products. For a given factorization, the whitening filter transfer function may be set to
H_A(\omega_x, \omega_y) = \frac{1}{W_N^{+}(\omega_x, \omega_y)}        (19.2-19)

The resultant input to the second-stage filter is F 1 ( x, y ) + N W ( x, y ) , where NW ( x, y ) represents unit energy white noise and
F_1(x, y) = F(x, y) \circledast H_A(x, y)        (19.2-20)

is a modified image signal with a spectrum
F_1(\omega_x, \omega_y) = F(\omega_x, \omega_y)\, H_A(\omega_x, \omega_y) = \frac{F(\omega_x, \omega_y)}{W_N^{+}(\omega_x, \omega_y)}        (19.2-21)

From Eq. 19.2-8, for the white noise condition, the optimum transfer function of the second-stage filter is found to be


H_B(\omega_x, \omega_y) = \frac{F^*(\omega_x, \omega_y)}{W_N^{-}(\omega_x, \omega_y)} \exp\{ -i(\omega_x \varepsilon + \omega_y \eta) \}        (19.2-22)

Calculation of the product H A ( ω x, ω y )H B ( ω x, ω y ) shows that the optimum filter expression of Eq. 19.2-7 can be obtained by the whitening filter implementation. The basic limitation of the normal matched filter, as defined by Eq. 19.2-7, is that the correlation output between an unknown image and an image signal to be detected is primarily dependent on the energy of the images rather than their spatial structure. For example, consider a signal image in the form of a bright hexagonally shaped object against a black background. If the unknown image field contains a circular disk of the same brightness and area as the hexagonal object, the correlation function resulting will be very similar to the correlation function produced by a perfect match. In general, the normal matched filter provides relatively poor discrimination between objects of different shape but of similar size or energy content. This drawback of the normal matched filter is overcome somewhat with the derivative matched filter (8), which makes use of the edge structure of an object to be detected. The transfer function of the pth-order derivative matched filter is given by
H_p(\omega_x, \omega_y) = \frac{(\omega_x^2 + \omega_y^2)^p\, F^*(\omega_x, \omega_y) \exp\{ -i(\omega_x \varepsilon + \omega_y \eta) \}}{W_N(\omega_x, \omega_y)}        (19.2-23)

where p is an integer. If p = 0, the normal matched filter
H_0(\omega_x, \omega_y) = \frac{F^*(\omega_x, \omega_y) \exp\{ -i(\omega_x \varepsilon + \omega_y \eta) \}}{W_N(\omega_x, \omega_y)}        (19.2-24)

is obtained. With p = 1, the resulting filter
H_1(\omega_x, \omega_y) = (\omega_x^2 + \omega_y^2)\, H_0(\omega_x, \omega_y)        (19.2-25)

is called the Laplacian matched filter. Its impulse response function is
H_1(x, y) = \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) H_0(x, y)        (19.2-26)

The pth-order derivative matched filter transfer function is
H_p(\omega_x, \omega_y) = (\omega_x^2 + \omega_y^2)^p\, H_0(\omega_x, \omega_y)        (19.2-27)


Hence the derivative matched filter may be implemented by cascaded operations consisting of a generalized derivative operator whose function is to enhance the edges of an image, followed by a normal matched filter.

19.2.2. Matched Filtering of Stochastic Continuous Images

In the preceding section, the ideal image F(x, y) to be detected in the presence of additive noise was assumed deterministic. If the state of F(x, y) is not known exactly, but only statistically, the matched filtering concept can be extended to the detection of a stochastic image in the presence of noise (13). Even if F(x, y) is known deterministically, it is often useful to consider it as a random field with a mean E\{F(x, y)\} = \bar{F}(x, y). Such a formulation provides a mechanism for incorporating a priori knowledge of the spatial correlation of an image in its detection. Conventional matched filtering, as defined by Eq. 19.2-7, completely ignores the spatial relationships between the pixels of an observed image. For purposes of analysis, let the observed unknown field
F U ( x, y ) = F ( x, y ) + N ( x, y )

(19.2-28a)

or noise alone
FU ( x, y ) = N ( x, y )

(19.2-28b)

be composed of an ideal image F ( x, y ) , which is a sample of a two-dimensional stochastic process with known moments, plus noise N ( x, y ) independent of the image, or be composed of noise alone. The unknown field is convolved with the matched filter impulse response H ( x, y ) to produce an output modeled as
F_O(x, y) = F_U(x, y) \circledast H(x, y)        (19.2-29)

The stochastic matched filter is designed so that it maximizes the ratio of the average squared signal energy without noise to the variance of the filter output. This is simply a generalization of the conventional signal-to-noise ratio of Eq. 19.2-6. In the absence of noise, the expected signal energy at some point ( ε, η ) in the output field is
\left| S(\varepsilon, \eta) \right|^2 = \left| E\{ F(x, y) \} \circledast H(x, y) \right|^2        (19.2-30)

By the convolution theorem and linearity of the expectation operator,
\left| S(\varepsilon, \eta) \right|^2 = \left| \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} E\{ F(\omega_x, \omega_y) \}\, H(\omega_x, \omega_y) \exp\{ i(\omega_x \varepsilon + \omega_y \eta) \}\, d\omega_x\, d\omega_y \right|^2        (19.2-31)


The variance of the matched filter output, under the assumption of stationarity and signal and noise independence, is
N = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \left[ W_F(\omega_x, \omega_y) + W_N(\omega_x, \omega_y) \right] \left| H(\omega_x, \omega_y) \right|^2 d\omega_x\, d\omega_y        (19.2-32)

where W F ( ω x, ω y ) and W N ( ω x, ω y ) are the image signal and noise power spectral densities, respectively. The generalized signal-to-noise ratio of the two equations above, which is of similar form to the specialized case of Eq. 19.2-6, is maximized when
H(\omega_x, \omega_y) = \frac{E\{ F^*(\omega_x, \omega_y) \} \exp\{ -i(\omega_x \varepsilon + \omega_y \eta) \}}{W_F(\omega_x, \omega_y) + W_N(\omega_x, \omega_y)}        (19.2-33)

Note that when F ( x, y ) is deterministic, Eq. 19.2-33 reduces to the matched filter transfer function of Eq. 19.2-7. The stochastic matched filter is often modified by replacement of the mean of the ideal image to be detected by a replica of the image itself. In this case, for ε = η = 0,
H(\omega_x, \omega_y) = \frac{F^*(\omega_x, \omega_y)}{W_F(\omega_x, \omega_y) + W_N(\omega_x, \omega_y)}        (19.2-34)

A special case of common interest occurs when the noise is white, WN ( ω x, ω y ) = n W ⁄ 2 , and the ideal image is regarded as a first-order nonseparable Markov process, as defined by Eq. 1.4-17, with power spectrum
W_F(\omega_x, \omega_y) = \frac{2}{\alpha^2 + \omega_x^2 + \omega_y^2}        (19.2-35)

where exp { – α } is the adjacent pixel correlation. For such processes, the resultant modified matched filter transfer function becomes
H(\omega_x, \omega_y) = \frac{2\,(\alpha^2 + \omega_x^2 + \omega_y^2)\, F^*(\omega_x, \omega_y)}{4 + n_W\,(\alpha^2 + \omega_x^2 + \omega_y^2)}        (19.2-36)

At high spatial frequencies and low noise levels, the modified matched filter defined by Eq. 19.2-36 becomes equivalent to the Laplacian matched filter of Eq. 19.2-25.
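As an illustrative sketch only (not part of the standard text), the modified matched filter of Eq. 19.2-36 can be evaluated on a discrete DFT frequency grid and applied to an unknown image with the FFT; the parameter values, function name, and NumPy implementation are assumptions.

```python
import numpy as np

def modified_matched_filter(image, reference, alpha=0.1, nw=0.01):
    """Sketch of the modified stochastic matched filter of Eq. 19.2-36,
    evaluated on a discrete frequency grid; alpha is the Markov model
    parameter and nw the white-noise level (illustrative values)."""
    rows, cols = image.shape
    wy = 2 * np.pi * np.fft.fftfreq(rows)[:, None]   # vertical frequencies
    wx = 2 * np.pi * np.fft.fftfreq(cols)[None, :]   # horizontal frequencies
    w2 = alpha**2 + wx**2 + wy**2
    F_ref = np.fft.fft2(reference)
    H = 2.0 * w2 * np.conj(F_ref) / (4.0 + nw * w2)  # Eq. 19.2-36
    output = np.real(np.fft.ifft2(np.fft.fft2(image) * H))
    return output                                    # large values suggest a match
```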


19.3. MATCHED FILTERING OF DISCRETE IMAGES

A matched filter for object detection can be defined for discrete as well as continuous images. One approach is to perform discrete linear filtering using a discretized version of the matched filter transfer function of Eq. 19.2-7 following the techniques outlined in Section 9.4. Alternatively, the discrete matched filter can be developed by a vector-space formulation (13,14). The latter approach, presented in this section, is advantageous because it permits a concise analysis for nonstationary image and noise arrays. Also, image boundary effects can be dealt with accurately. Consider an observed image vector

f_U = f + n        (19.3-1a)

or

f_U = n        (19.3-1b)

composed of a deterministic image vector f plus a noise vector n, or noise alone. The discrete matched filtering operation is implemented by forming the inner product of f_U with a matched filter vector m to produce the scalar output

f_O = m^T f_U        (19.3-2)

Vector m is chosen to maximize the signal-to-noise ratio. The signal power in the absence of noise is simply
S = [ m^T f ]^2        (19.3-3)

and the noise power is
N = E\{ [ m^T n ][ m^T n ]^T \} = m^T K_n m        (19.3-4)

where K n is the noise covariance matrix. Hence the signal-to-noise ratio is
\frac{S}{N} = \frac{[ m^T f ]^2}{m^T K_n m}        (19.3-5)

The optimal choice of m can be determined by differentiating the signal-to-noise ratio of Eq. 19.3-5 with respect to m and setting the result to zero. These operations lead directly to the relation

m = \left[ \frac{m^T K_n m}{m^T f} \right] K_n^{-1} f        (19.3-6)

where the term in brackets is a scalar, which may be normalized to unity. The matched filter output

f_O = f^T K_n^{-1} f_U        (19.3-7)

reduces to simple vector correlation for white noise. In the general case, the noise covariance matrix may be spectrally factored into the matrix product
K_n = K K^T        (19.3-8)

with K = E Λ_n^{1/2}, where E is a matrix composed of the eigenvectors of K_n and Λ_n is a diagonal matrix of the corresponding eigenvalues (14). The resulting matched filter output

f_O = [ K^{-1} f ]^T [ K^{-1} f_U ]        (19.3-9)

can be regarded as vector correlation after the unknown vector f_U has been whitened by premultiplication by K^{-1}. Extensions of the previous derivation for the detection of stochastic image vectors are straightforward. The signal energy of Eq. 19.3-3 becomes

S = [ m^T η_f ]^2        (19.3-10)

where η f is the mean vector of f and the variance of the matched filter output is
N = m^T K_f m + m^T K_n m        (19.3-11)

under the assumption of independence of f and n. The resulting signal-to-noise ratio is maximized when

m = [ K_f + K_n ]^{-1} η_f        (19.3-12)

Vector correlation of m and f_U to form the matched filter output can be performed directly using Eq. 19.3-2 or, alternatively, according to Eq. 19.3-9, where K = E Λ^{1/2} and E and Λ denote the matrices of eigenvectors and eigenvalues of [ K_f + K_n ], respectively (14). In the special but common case of white noise and a separable, first-order Markovian covariance matrix, the whitening operations can be performed using an efficient Fourier domain processing algorithm developed for Wiener filtering (15).
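A minimal numerical sketch of the discrete matched filter of Eq. 19.3-7 and its whitened form of Eq. 19.3-9 is given below, assuming NumPy and a symmetric positive-definite noise covariance matrix; it illustrates the algebra rather than an efficient implementation.

```python
import numpy as np

def discrete_matched_filter(f, Kn, fU):
    """Discrete matched filter (Eq. 19.3-7) and its whitened equivalent
    (Eq. 19.3-9) for a reference vector f, noise covariance Kn, and an
    observed vector fU."""
    # Direct form: fO = f^T Kn^{-1} fU
    fO = f @ np.linalg.solve(Kn, fU)

    # Whitened form: factor Kn = K K^T with K = E Lambda^{1/2},
    # then correlate the whitened vectors.
    evals, E = np.linalg.eigh(Kn)
    K = E @ np.diag(np.sqrt(evals))
    f_w = np.linalg.solve(K, f)        # whitened reference, K^{-1} f
    fU_w = np.linalg.solve(K, fU)      # whitened observation, K^{-1} fU
    fO_whitened = f_w @ fU_w           # equals fO up to round-off
    return fO, fO_whitened
```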

19.4. IMAGE REGISTRATION

In many image processing applications, it is necessary to form a pixel-by-pixel comparison of two images of the same object field obtained from different sensors, or of two images of an object field taken from the same sensor at different times. To form this comparison, it is necessary to spatially register the images, and thereby, to correct for relative translation shifts, rotational differences, scale differences and even perspective view differences. Often, it is possible to eliminate or minimize many of these sources of misregistration by proper static calibration of an image sensor. However, in many cases, a posteriori misregistration detection and subsequent correction must be performed. Chapter 13 considered the task of spatially warping an image to compensate for physical spatial distortion mechanisms. This section considers means of detecting the parameters of misregistration. Consideration is given first to the common problem of detecting the translational misregistration of two images. Techniques developed for the solution to this problem are then extended to other forms of misregistration.

19.4.1. Translational Misregistration Detection

The classical technique for registering a pair of images subject to unknown translational differences is to (1) form the normalized cross correlation function between the image pair, (2) determine the translational offset coordinates of the correlation function peak, and (3) translate one of the images with respect to the other by the offset coordinates (16,17). This subsection considers the generation of the basic cross correlation function and several of its derivatives as means of detecting the translational differences between a pair of images.

Basic Correlation Function. Let F_1(j, k) and F_2(j, k), for 1 ≤ j ≤ J and 1 ≤ k ≤ K, represent two discrete images to be registered. F_1(j, k) is considered to be the reference image, and

F2 ( j, k ) = F 1 ( j – j o, k – k o )

(19.4-1)

is a translated version of F1 ( j, k ) where ( jo, k o ) are the offset coordinates of the translation. The normalized cross correlation between the image pair is defined as


FIGURE 19.4-1. Geometrical relationships between arrays for the cross correlation of an image pair.

R(m, n) = \frac{\sum_j \sum_k F_1(j, k)\, F_2(j - m + (M+1)/2,\; k - n + (N+1)/2)}{\Bigl[ \sum_j \sum_k [ F_1(j, k) ]^2 \Bigr]^{1/2} \Bigl[ \sum_j \sum_k [ F_2(j - m + (M+1)/2,\; k - n + (N+1)/2) ]^2 \Bigr]^{1/2}}        (19.4-2)

for m = 1, 2,..., M and n = 1, 2,..., N, where M and N are odd integers. This formulation, which is a generalization of the template matching cross correlation expression, as defined by Eq. 19.1-5, utilizes an upper left corner–justified definition for all of the arrays. The dashed-line rectangle of Figure 19.4-1 specifies the bounds of the correlation function region over which the upper left corner of F_2(j, k) moves in space with respect to F_1(j, k). The bounds of the summations of Eq. 19.4-2 are
\max\{ 1,\; m - (M - 1)/2 \} \le j \le \min\{ J,\; J + m - (M + 1)/2 \}        (19.4-3a)

\max\{ 1,\; n - (N - 1)/2 \} \le k \le \min\{ K,\; K + n - (N + 1)/2 \}        (19.4-3b)

These bounds are indicated by the shaded region in Figure 19.4-1 for the trial offset (a, b). This region is called the window region of the correlation function computation. The computation of Eq. 19.4-2 is often restricted to a constant-size window area less than the overlap of the image pair in order to reduce the number of


calculations. This P × Q constant-size window region, called a template region, is defined by the summation bounds

m \le j \le m + J - M        (19.4-4a)

n \le k \le n + K - N        (19.4-4b)

The dotted lines in Figure 19.4-1 specify the maximum constant-size template region, which lies at the center of F 2 ( j, k ). The sizes of the M × N correlation function array, the J × K search region, and the P × Q template region are related by
M = J - P + 1        (19.4-5a)

N = K - Q + 1        (19.4-5b)

For the special case in which the correlation window is of constant size, the correlation function of Eq. 19.4-2 can be reformulated as a template search process. Let S ( u, v ) denote a U × V search area within F1 ( j, k ) whose upper left corner is at the offset coordinate ( j s, k s ) . Let T ( p, q ) denote a P × Q template region extracted from F2 ( j, k ) whose upper left corner is at the offset coordinate ( jt, k t ). Figure 19.4-2 relates the template region to the search area. Clearly, U > P and V > Q . The normalized cross correlation function can then be expressed as

R(m, n) = \frac{\sum_u \sum_v S(u, v)\, T(u - m + (M+1)/2,\; v - n + (N+1)/2)}{\Bigl[ \sum_u \sum_v [ S(u, v) ]^2 \Bigr]^{1/2} \Bigl[ \sum_u \sum_v [ T(u - m + (M+1)/2,\; v - n + (N+1)/2) ]^2 \Bigr]^{1/2}}        (19.4-6)

for m = 1, 2,..., M and n = 1, 2,..., N, where
M = U - P + 1        (19.4-7a)

N = V - Q + 1        (19.4-7b)

The summation limits of Eq. 19.4-6 are

m \le u \le m + P - 1        (19.4-8a)

n \le v \le n + Q - 1        (19.4-8b)


FIGURE 19.4-2. Relationship of template region and search area.

Computation of the numerator of Eq. 19.4-6 is equivalent to raster scanning the template T(p, q) over the search area S(u, v) such that the template always resides within S(u, v), and then forming the sum of the products of the template and the search area under the template. The left-hand denominator term is the square root of the sum of the terms [S(u, v)]^2 within the search area defined by the template position. The right-hand denominator term is simply the square root of the sum of the template terms [T(p, q)]^2, independent of (m, n). It should be recognized that the numerator of Eq. 19.4-6 can be computed by convolution of S(u, v) with an impulse response function consisting of the template T(p, q) spatially rotated by 180°. Similarly, the left-hand term of the denominator can be implemented by convolving the square of S(u, v) with a P × Q uniform impulse response function. For large templates, it may be more computationally efficient to perform the convolutions indirectly by Fourier domain filtering.
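A sketch of this indirect Fourier domain computation is given below, assuming NumPy; the function and variable names are illustrative and the small constant added to the denominator is only to avoid division by zero.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def ncc_fft(search, template):
    """Eq. 19.4-6 computed indirectly: the numerator is a correlation
    (convolution with the 180-degree-rotated template) and the left-hand
    denominator term a box-filter sum of the squared search area."""
    U, V = search.shape
    P, Q = template.shape

    def corr(a, b):
        # Linear cross correlation of a with b via zero-padded FFTs,
        # keeping only offsets where b lies fully inside a.
        rows, cols = U + P - 1, V + Q - 1
        full = np.real(ifft2(fft2(a, (rows, cols)) * np.conj(fft2(b, (rows, cols)))))
        return full[:U - P + 1, :V - Q + 1]

    numerator = corr(search, template)
    energy = corr(search ** 2, np.ones((P, Q)))   # windowed image energy
    t_energy = np.sum(template ** 2)              # constant template energy
    return numerator / (np.sqrt(energy * t_energy) + 1e-12)
```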

Statistical Correlation Function. There are two problems associated with the basic correlation function of Eq. 19.4-2. First, the correlation function may be rather broad, making detection of its peak difficult. Second, image noise may mask the peak correlation. Both problems can be alleviated by extending the correlation function definition to consider the statistical properties of the pair of image arrays. The statistical correlation function (14) is defined as

R_S(m, n) = \frac{\sum_j \sum_k G_1(j, k)\, G_2(j - m + (M+1)/2,\; k - n + (N+1)/2)}{\Bigl[ \sum_j \sum_k [ G_1(j, k) ]^2 \Bigr]^{1/2} \Bigl[ \sum_j \sum_k [ G_2(j - m + (M+1)/2,\; k - n + (N+1)/2) ]^2 \Bigr]^{1/2}}        (19.4-9)


The arrays Gi ( j, k ) are obtained by the convolution operation
G_i(j, k) = \left[ F_i(j, k) - \bar{F}_i(j, k) \right] \circledast D_i(j, k)        (19.4-10)

where \bar{F}_i(j, k) is the spatial average of F_i(j, k) over the correlation window. The impulse response functions D_i(j, k) are chosen to maximize the peak correlation when the pair of images is in best register. The design problem can be solved by recourse to the theory of matched filtering of discrete arrays developed in the preceding section. Accordingly, let f_1 denote the vector of column-scanned elements of F_1(j, k) in the window area and let f_2(m, n) represent the elements of F_2(j, k) over the window area for a given registration shift (m, n) in the search area. There are a total of M ⋅ N vectors f_2(m, n). The elements within f_1 and f_2(m, n) are usually highly correlated spatially. Hence, following the techniques of stochastic matched filtering, the first processing step should be to whiten each vector by premultiplication with whitening filter matrices H_1 and H_2 according to the relations

g_1 = [ H_1 ]^{-1} f_1        (19.4-11a)

g_2(m, n) = [ H_2 ]^{-1} f_2(m, n)        (19.4-11b)

where H1 and H2 are obtained by factorization of the image covariance matrices
K_1 = H_1 H_1^T        (19.4-12a)

K_2 = H_2 H_2^T        (19.4-12b)

The factorization matrices may be expressed as
H_1 = E_1 [ Λ_1 ]^{1/2}        (19.4-13a)

H_2 = E_2 [ Λ_2 ]^{1/2}        (19.4-13b)

where E1 and E2 contain eigenvectors of K1 and K2, respectively, and Λ 1 and Λ 2 are diagonal matrices of the corresponding eigenvalues of the covariance matrices. The statistical correlation function can then be obtained by the normalized inner-product computation

R_S(m, n) = \frac{g_1^T\, g_2(m, n)}{\left[ g_1^T g_1 \right]^{1/2} \left[ g_2^T(m, n)\, g_2(m, n) \right]^{1/2}}        (19.4-14)

Computation of the statistical correlation function requires calculation of two sets of eigenvectors and eigenvalues of the covariance matrices of the two images to be registered. If the window area contains P ⋅ Q pixels, the covariance matrices K1 and K2 will each be ( P ⋅ Q ) × ( P ⋅ Q ) matrices. For example, if P = Q = 16, the covariance matrices K1 and K2 are each of dimension 256 × 256. Computation of the eigenvectors and eigenvalues of such large matrices is numerically difficult. However, in special cases, the computation can be simplified appreciably (14). For example, if the images are modeled as separable Markov process sources and there is no observation noise, the convolution operators of Eq. 19.4-10 reduce to the statistical mask operator

D_i = \frac{1}{(1 + \rho^2)^2} \begin{bmatrix} \rho^2 & -\rho(1 + \rho^2) & \rho^2 \\ -\rho(1 + \rho^2) & (1 + \rho^2)^2 & -\rho(1 + \rho^2) \\ \rho^2 & -\rho(1 + \rho^2) & \rho^2 \end{bmatrix}        (19.4-15)

where ρ denotes the adjacent pixel correlation (18). If the images are spatially uncorrelated, then ρ = 0, and the correlation operation is not required. At the other extreme, if ρ = 1, then
D_i = \frac{1}{4} \begin{bmatrix} 1 & -2 & 1 \\ -2 & 4 & -2 \\ 1 & -2 & 1 \end{bmatrix}        (19.4-16)

This operator is an orthonormally scaled version of the cross second derivative spot detection operator of Eq. 15.7-3. In general, when an image is highly spatially correlated, the statistical correlation operators D i produce outputs that are large in magnitude only in regions of an image for which its amplitude changes significantly in both coordinate directions simultaneously. Figure 19.4-3 provides computer simulation results of the performance of the statistical correlation measure for registration of the toy tank image of Figure 17.1-6b. In the simulation, the reference image F 1 ( j, k ) has been spatially offset horizontally by three pixels and vertically by four pixels to produce the translated image F2 ( j, k ). The pair of images has then been correlated in a window area of 16 × 16 pixels over a search area of 32 × 32 pixels. The curves in Figure 19.4-3 represent the normalized statistical correlation measure taken through the peak of the correlation


FIGURE 19.4-3. Statistical correlation misregistration detection.

function. It should be noted that for ρ = 0, corresponding to the basic correlation measure, it is relatively difficult to distinguish the peak of R_S(m, n). For ρ = 0.9 or greater, R_S(m, n) peaks sharply at the correct point.
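The following sketch constructs the statistical mask operator of Eq. 19.4-15 for a chosen ρ and evaluates the statistical correlation measure of Eq. 19.4-9 for a single trial offset; it assumes SciPy's ndimage convolution and is only illustrative, not the simulation procedure used for Figure 19.4-3.

```python
import numpy as np
from scipy.ndimage import convolve

def statistical_mask(rho):
    """3x3 statistical mask operator of Eq. 19.4-15 for adjacent-pixel
    correlation rho (separable Markov model, no observation noise)."""
    a, b = rho**2, -rho * (1.0 + rho**2)
    c = (1.0 + rho**2)**2
    return np.array([[a, b, a],
                     [b, c, b],
                     [a, b, a]]) / c

def statistical_correlation(f1_window, f2_window, rho=0.9):
    """Statistical correlation measure (Eq. 19.4-9) for one trial offset:
    mean-subtract, apply D_i, then form the normalized inner product."""
    D = statistical_mask(rho)
    g1 = convolve(f1_window - f1_window.mean(), D)
    g2 = convolve(f2_window - f2_window.mean(), D)
    return np.sum(g1 * g2) / np.sqrt(np.sum(g1**2) * np.sum(g2**2))
```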


The correlation function methods of translation offset detection defined by Eqs. 19.4-2 and 19.4-9 are capable of estimating any translation offset to an accuracy of ± ½ pixel. It is possible to improve the accuracy of these methods to subpixel levels by interpolation techniques (19). One approach (20) is to spatially interpolate the correlation function and then search for the peak of the interpolated correlation function. Another approach is to spatially interpolate each of the pair of images and then correlate the higher-resolution pair.

A common criticism of the correlation function method of image registration is the great amount of computation that must be performed if the template region and the search areas are large. Several computational methods that attempt to overcome this problem are presented next.

Two-Stage Methods. Rosenfeld and Vandenburg (21,22) have proposed two efficient two-stage methods of translation offset detection. In one of the methods, called coarse–fine matching, each of the pair of images is reduced in resolution by conventional techniques (low-pass filtering followed by subsampling) to produce coarse representations of the images. Then the coarse images are correlated and the resulting correlation peak is determined. The correlation peak provides a rough estimate of the translation offset, which is then used to define a spatially restricted search area for correlation at the fine resolution of the original image pair. The other method, suggested by Vandenburg and Rosenfeld (22), is to use a subset of the pixels within the window area to compute the correlation function in the first stage of the two-stage process. This can be accomplished by restricting the size of the window area or by performing subsampling of the images within the window area. Goshtasby et al. (23) have proposed random rather than deterministic subsampling. The second stage of the process is the same as that of the coarse–fine method; correlation is performed over the full window at fine resolution. Two-stage methods can provide a significant reduction in computation, but they can produce false results.

Sequential Search Method. With the correlation measure techniques, no decision can be made until the correlation array is computed for all (m, n) elements. Furthermore, the amount of computation of the correlation array is the same for all degrees of misregistration. These deficiencies of the standard correlation measures have led to the search for efficient sequential search algorithms. An efficient sequential search method has been proposed by Barnea and Silverman (24). The basic form of this algorithm is deceptively simple. The absolute value difference error
E_S = \sum_j \sum_k \left| F_1(j, k) - F_2(j - m, k - n) \right|        (19.4-17)

is accumulated for pixel values in a window area. If the error exceeds a predetermined threshold value before all P ⋅ Q pixels in the window area are examined, it is assumed that the test has failed for the particular offset (m, n), and a new offset is checked. If the error grows slowly, the number of pixels examined when the threshold is finally exceeded is recorded as a rating of the test offset. Eventually, when all test offsets have been examined, the offset with the largest rating is assumed to be the proper misregistration offset.
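A minimal sketch of this sequential search idea follows, assuming NumPy; the random pixel-visiting order, the fixed threshold, and the function names are illustrative choices rather than the exact Barnea–Silverman algorithm.

```python
import numpy as np

def sequential_search_offset(f1, f2, offsets, threshold):
    """Accumulate the absolute difference of Eq. 19.4-17 pixel by pixel
    and abandon an offset as soon as the running error exceeds the
    threshold; the offset surviving the most pixels is reported."""
    P, Q = f2.shape
    coords = [(j, k) for j in range(P) for k in range(Q)]
    np.random.shuffle(coords)                 # visit window pixels in random order
    best_offset, best_count = None, -1
    for (m, n) in offsets:                    # candidate translations
        window = f1[m:m + P, n:n + Q]
        error, count = 0.0, 0
        for (j, k) in coords:
            error += abs(window[j, k] - f2[j, k])
            count += 1
            if error > threshold:             # early exit: test failed
                break
        if count > best_count:
            best_offset, best_count = (m, n), count
    return best_offset
```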

Phase Correlation Method. Consider a pair of continuous domain images

F_2(x, y) = F_1(x - x_o, y - y_o)        (19.4-18)

that are translated by an offset ( x o, y o ) with respect to one another. By the Fourier transform shift property of Eq. 1.3-13a, the Fourier transforms of the images are related by
F 2 ( ω x, ω y ) = F 1 ( ω x, ω y ) exp { – i ( ω x x o + ω y y o ) }

(19.4-19)


The exponential phase shift factor can be computed by the cross-power spectrum (25) of the two images as given by
G(\omega_x, \omega_y) \equiv \frac{F_1(\omega_x, \omega_y)\, F_2^*(\omega_x, \omega_y)}{\left| F_1(\omega_x, \omega_y)\, F_2(\omega_x, \omega_y) \right|} = \exp\{ i(\omega_x x_o + \omega_y y_o) \}        (19.4-20)

Taking the inverse Fourier transform of Eq. 19.4-20 yields the spatial offset
G ( x, y ) = δ ( x – x o, y – y o )

(19.4-21)

in the space domain. The cross-power spectrum approach can be applied to discrete images by utilizing discrete Fourier transforms in place of the continuous Fourier transforms in Eq. 19.4-20. However, care must be taken to prevent wraparound error. Figure 19.4-4 presents an example of translational misregistration detection using the phase correlation method. Figure 19.4-4a and b show translated portions of a scene embedded in a zero background. The scene in Figure 19.4-4a was obtained by extracting the first 480 rows and columns of the 500 × 500 washington_ir source image. The scene in Figure 19.4-4b consists of the last 480 rows and columns of the source image. Figure 19.4-4c and d are the logarithm magnitudes of the Fourier transforms of the two images, and Figure 19.4-4e is the inverse Fourier transform of the cross-power spectrum of the pair of images. The bright pixel in the upper left corner of Figure 19.4-4e, located at coordinate (20,20), is the correlation peak.
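A discrete phase correlation sketch based on Eqs. 19.4-20 and 19.4-21 follows, assuming NumPy; windowing or zero padding to control the wraparound error cautioned above is omitted for brevity.

```python
import numpy as np

def phase_correlation(f1, f2):
    """Peak of the inverse DFT of the normalized cross-power spectrum
    (Eq. 19.4-20) gives the translational offset of f2 relative to f1."""
    F1 = np.fft.fft2(f1)
    F2 = np.fft.fft2(f2)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12       # retain phase only
    g = np.real(np.fft.ifft2(cross_power))           # correlation surface
    peak = np.unravel_index(np.argmax(g), g.shape)   # offset estimate
    return peak
```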
19.4.2. Scale and Rotation Misregistration Detection

The phase correlation method for translational misregistration detection has been extended to scale and rotation misregistration detection (25,26). Consider a pair of images in which a second image is translated by an offset (x_o, y_o) and rotated by an angle θ_o with respect to the first image. Then

F_2(x, y) = F_1(x \cos\theta_o + y \sin\theta_o - x_o,\; -x \sin\theta_o + y \cos\theta_o - y_o)        (19.4-22)

Taking Fourier transforms of both sides of Eq. 19.4-22, one obtains the relationship (25)
F 2 ( ω x, ω y ) = F 1 ( ω x cos θ o + ω y sin θ o, – ω x sin θ o + ω y cos θ o ) exp { – i ( ω x x o + ω y y o ) }

(19.4-23)


FIGURE 19.4-4. Translational misregistration detection on the washington_ir1 and washington_ir2 images using the phase correlation method: (a) embedded image 1; (b) embedded image 2; (c) log magnitude of the Fourier transform of image 1; (d) log magnitude of the Fourier transform of image 2; (e) phase correlation spatial array. See the white pixel in the upper left corner of (e).


The rotation component can be isolated by taking the magnitudes M_1(ω_x, ω_y) and M_2(ω_x, ω_y) of both sides of Eq. 19.4-23. By representing the frequency variables in polar form,
M 2 ( ρ, θ ) = M 1 ( ρ, θ – θo )

(19.4-24)

the phase correlation method can be used to determine the rotation angle θ_o. If a second image is a size-scaled version of a first image with scale factors (a, b), then from the Fourier transform scaling property of Eq. 1.3-12,

F_2(\omega_x, \omega_y) = \frac{1}{ab}\, F_1\!\left( \frac{\omega_x}{a},\; \frac{\omega_y}{b} \right)        (19.4-25)

By converting the frequency variables to a logarithmic scale, scaling can be converted to a translational movement. Then
F_2(\log\omega_x, \log\omega_y) = \frac{1}{ab}\, F_1(\log\omega_x - \log a,\; \log\omega_y - \log b)        (19.4-26)

Now, the phase correlation method can be applied to determine the unknown scale factors (a, b).
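The following sketch resamples a magnitude spectrum onto a log-polar grid so that, per Eqs. 19.4-24 and 19.4-26, rotation and scale appear as translations; the grid sizes and the SciPy interpolation call are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def logpolar_magnitude(image, angles=360, radii=200):
    """Resample the Fourier magnitude of an image onto a (theta, log r)
    grid; rotation and scaling of the input become translations here."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    rows, cols = mag.shape
    cy, cx = rows / 2.0, cols / 2.0
    theta = np.linspace(0, 2 * np.pi, angles, endpoint=False)
    log_r = np.linspace(0, np.log(np.hypot(cy, cx)), radii)
    r = np.exp(log_r)
    # Sample the spectrum along rays of constant angle and log radius.
    coords_y = cy + r[None, :] * np.sin(theta[:, None])
    coords_x = cx + r[None, :] * np.cos(theta[:, None])
    return map_coordinates(mag, [coords_y, coords_x], order=1)
```

Phase correlating the log-polar magnitude arrays of the two images (as in the earlier phase correlation sketch) then yields estimates of the rotation angle and the logarithmic scale factor as a translational offset.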

19.4.3. Generalized Misregistration Detection

The basic correlation concept for translational misregistration detection can be generalized, in principle, to accommodate rotation and size scaling. As an illustrative example, consider an observed image F_2(j, k) that is an exact replica of a reference image F_1(j, k) except that it is rotated by an unknown angle θ measured in a clockwise direction about the common center of both images. Figure 19.4-5 illustrates the geometry of the example. Now suppose that F_2(j, k) is rotated by a trial angle θ_r measured in a counterclockwise direction and that it is resampled with appropriate interpolation. Let F_2(j, k; θ_r) denote the trial rotated version of F_2(j, k). This procedure is then repeated for a set of angles θ_1 ≤ θ_r ≤ θ_R expected to span the unknown angle θ in the reverse direction. The normalized correlation function can then be expressed as

R(r) = \frac{\sum_j \sum_k F_1(j, k)\, F_2(j, k;\, \theta_r)}{\Bigl[ \sum_j \sum_k [ F_1(j, k) ]^2 \Bigr]^{1/2} \Bigl[ \sum_j \sum_k [ F_2(j, k;\, \theta_r) ]^2 \Bigr]^{1/2}}        (19.4-27)


FIGURE 19.4-5 Rotational misregistration detection.

for r = 1, 2, . . ., R. Searching for the peak of R(r) leads to an estimate of the unknown rotation angle θ. The procedure does, of course, require a significant amount of computation because of the need to resample F_2(j, k) for each trial rotation angle θ_r.

The rotational misregistration example of Figure 19.4-5 is based on the simplifying assumption that the center of rotation is known. If it is not, then to extend the correlation function concept, it is necessary to translate F_2(j, k) to a trial translation coordinate (j_p, k_q), rotate that image by a trial angle θ_r, and translate that image to the translation coordinate (–j_p, –k_q). This results in a trial image F_2(j, k; j_p, k_q, θ_r), which is used to compute one term of a three-dimensional correlation function R(p, q, r), the peak of which leads to an estimate of the unknown translation and rotation. Clearly, this procedure is computationally intensive.

It is possible to apply the correlation concept to determine unknown row and column size scaling factors between a pair of images. The straightforward extension requires the computation of a two-dimensional correlation function. If all five misregistration parameters are unknown, then again, in principle, a five-dimensional correlation function can be computed to determine an estimate of the unknown parameters. This formidable computational task is further complicated by the fact that, as noted in Section 13.1, the order of the geometric manipulations is important.

The complexity and computational load of the correlation function method of misregistration detection for combined translation, rotation, and size scaling can be reduced significantly by a procedure in which the misregistration of only a few chosen common points between a pair of images is determined. This procedure, called control point detection, can be applied to the general rubber-sheet warping problem. A few pixels that represent unique points on objects within the pair of images are identified, and their coordinates are recorded to be used in the spatial warping mapping process described in Eq. 13.2-3. The trick, of course, is to accurately identify and measure the control points. It is desirable to locate object features that are reasonably invariant to small-scale geometric transformations. One such set of features is Hu's (27) seven invariant moments defined by Eqs. 18.3-27. Wong and Hall (28)


have investigated the use of invariant moment features for matching optical and radar images of the same scene. Goshtasby (29) has applied invariant moment features for registering visible and infrared weather satellite images. The control point detection procedure begins with the establishment of a small feature template window, typically 8 × 8 pixels, in the reference image that is sufficiently large to contain a single control point feature of interest. Next, a search window area is established such that it envelops all possible translates of the center of the template window between the pair of images to be registered. It should be noted that the control point feature may be rotated, minified or magnified to a limited extent, as well as being translated. Then the seven Hu moment invariants h i1 for i = 1, 2,..., 7 are computed in the reference image. Similarly, the seven moments h i2 ( m, n ) are computed in the second image for each translate pair ( m, n ) within the search area. Following this computation, the invariant moment correlation function is formed as
R(m, n) = \frac{\sum_{i=1}^{7} h_{i1}\, h_{i2}(m, n)}{\Bigl[ \sum_{i=1}^{7} (h_{i1})^2 \Bigr]^{1/2} \Bigl[ \sum_{i=1}^{7} [ h_{i2}(m, n) ]^2 \Bigr]^{1/2}}        (19.4-28)
Its peak is found to determine the coordinates of the control point feature in each image of the image pair. The process is then repeated on other control point features until the number of control points is sufficient to perform the rubber-sheet warping of F_2(j, k) onto the space of F_1(j, k).
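A sketch of the control point search using the invariant moment correlation of Eq. 19.4-28 is shown below; the helper hu_moments, which must return the seven invariants of Eqs. 18.3-27, is assumed to be supplied by the reader and is not shown here.

```python
import numpy as np

def control_point_search(feature_template, search, hu_moments, tmpl_size=8):
    """Locate a control point by maximizing the invariant moment
    correlation of Eq. 19.4-28 over all template positions in the
    search area.  hu_moments is a hypothetical user-supplied function."""
    h1 = np.asarray(hu_moments(feature_template))     # reference invariants
    U, V = search.shape
    best, best_score = None, -np.inf
    for m in range(U - tmpl_size + 1):
        for n in range(V - tmpl_size + 1):
            h2 = np.asarray(hu_moments(search[m:m + tmpl_size, n:n + tmpl_size]))
            score = np.sum(h1 * h2) / (np.linalg.norm(h1) * np.linalg.norm(h2))
            if score > best_score:
                best, best_score = (m, n), score
    return best
```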
REFERENCES

1. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley-Interscience, New York, 1973.
2. W. H. Highleyman, “An Analog Method for Character Recognition,” IRE Trans. Electronic Computers, EC-10, 3, September 1961, 502–510.
3. L. N. Kanal and N. C. Randall, “Recognition System Design by Statistical Analysis,” Proc. ACM National Conference, 1964.
4. J. H. Munson, “Experiments in the Recognition of Hand-Printed Text, I. Character Recognition,” Proc. Fall Joint Computer Conference, December 1968, 1125–1138.
5. G. L. Turin, “An Introduction to Matched Filters,” IRE Trans. Information Theory, IT-6, 3, June 1960, 311–329.
6. C. E. Cook and M. Bernfeld, Radar Signals, Academic Press, New York, 1965.
7. J. B. Thomas, An Introduction to Statistical Communication Theory, Wiley, New York, 1965, 187–218.


8. H. C. Andrews, Computer Techniques in Image Processing, Academic Press, New York, 1970, 55–71. 9. L. J. Cutrona, E. N. Leith, C. J. Palermo, and L. J. Porcello, “Optical Data Processing and Filtering Systems,” IRE Trans. Information Theory, IT-6, 3, June 1960, 386–400. 10. A. Vander Lugt, F. B. Rotz, and A. Kloester, Jr., “Character-Reading by Optical Spatial Filtering,” in Optical and Electro-Optical Information Processing, J. Tippett et al., Eds., MIT Press, Cambridge, MA, 1965, 125–141. 11. A. Vander Lugt, “Signal Detection by Complex Spatial Filtering,” IEEE Trans. Information Theory, IT-10, 2, April 1964, 139–145. 12. A. Kozma and D. L. Kelly, “Spatial Filtering for Detection of Signals Submerged in Noise,” Applied Optics, 4, 4, April 1965, 387–392. 13. A. Arcese, P. H. Mengert, and E. W. Trombini, “Image Detection Through Bipolar Correlation,” IEEE Trans. Information Theory, IT-16, 5, September 1970, 534–541. 14. W. K. Pratt, “Correlation Techniques of Image Registration,” IEEE Trans. Aerospace and Electronic Systems, AES-1O, 3, May 1974, 353–358. 15. W. K. Pratt and F. Davarian, “Fast Computational Techniques for Pseudoinverse and Wiener Image Restoration,” IEEE Trans. Computers, C-26, 6 June, 1977, 571–580. 16. W. Meyer-Eppler and G. Darius, “Two-Dimensional Photographic Autocorrelation of Pictures and Alphabet Letters,” Proc. 3rd London Symposium on Information Theory, C. Cherry, Ed., Academic Press, New York, 1956, 34–36. 17. P. F. Anuta, “Digital Registration of Multispectral Video Imagery,” SPIE J., 7, September 1969, 168–178. 18. J. M. S. Prewitt, “Object Enhancement and Extraction,” in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970. 19. Q. Tian and M. N. Huhns, “Algorithms for Subpixel Registration,” Computer Graphics, Vision, and Image Processing, 35, 2, August 1986, 220–233. 20. P. F. Anuta, “Spatial Registration of Multispectral and Multitemporal Imagery Using Fast Fourier Transform Techniques,” IEEE Trans. Geoscience and Electronics, GE-8, 1970, 353–368. 21. A. Rosenfeld and G. J. Vandenburg, “Coarse–Fine Template Matching,” IEEE Trans. Systems, Man and Cybernetics, SMC-2, February 1977, 104–107. 22. G. J. Vandenburg and A. Rosenfeld, “Two-Stage Template Matching,” IEEE Trans. Computers, C-26, 4, April 1977, 384–393. 23. A. Goshtasby, S. H. Gage, and J. F. Bartolic, “A Two-Stage Cross-Correlation Approach to Template Matching,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI6, 3, May 1984, 374–378. 24. D. I. Barnea and H. F. Silverman, “A Class of Algorithms for Fast Image Registration” IEEE Trans. Computers, C-21, 2, February 1972, 179–186. 25. B. S. Reddy and B. N. Chatterji, “An FFT-Based Technique for Translation, Rotation, and Scale-Invariant Image Registration,” IEEE Trans. Image Processing, IP-5, 8, August 1996, 1266–1271. 26. E. De Castro and C. Morandi, “Registration of Translated and Rotated Images Using Finite Fourier Transforms,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-9, 5, September 1987, 700–703.


27. M. K. Hu, “Visual Pattern Recognition by Moment Invariants,” IRE Trans. Information Theory, IT-8, 2, February 1962, 179–187. 28. R. Y. Wong and E. L. Hall, “Scene Matching with Invariant Moments,” Computer Graphics and Image Processing, 8, 1, August 1978, 16–24. 29. A. Goshtasby, “Template Matching in Rotated Images,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-7, 3, May 1985, 338–344.


PART 6 IMAGE PROCESSING SOFTWARE
Digital image processing applications typically are implemented by software calls to an image processing library of functional operators. Many libraries are limited to primitive functions such as lookup table manipulation, convolution, and histogram generation. Sophisticated libraries perform more complex functions such as unsharp masking, edge detection, and spatial moment shape analysis. The interface between an application and a library is an application program interface (API), which defines the semantics and syntax of an operation. Chapter 20 describes the architecture of a full-featured image processing API called the Programmer’s Imaging Kernel System (PIKS). PIKS is an international standard developed under the auspices of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The PIKS description in Chapter 20 serves two purposes: it explains the architecture and elements of a well-designed image processing API, and it provides an introduction to PIKS usage for implementing the programming exercises in Chapter 21.



20
PIKS IMAGE PROCESSING SOFTWARE

PIKS contains a rich set of operators that perform manipulations of multidimensional images or of data objects extracted from images in order to enhance, restore, or assist in the extraction of information from images. This chapter presents a functional overview of the PIKS standard and a more detailed definition of a functional subset of the standard called PIKS Core.

20.1. PIKS FUNCTIONAL OVERVIEW

This section provides a brief functional overview of PIKS. References 1 to 6 provide further information. The PIKS documentation utilizes British spelling conventions, which differ from American spelling conventions for some words (e.g., colour instead of color). For consistency with the PIKS standard, the British spelling convention has been adopted for this chapter.

20.1.1. PIKS Imaging Model

Figure 20.1-1 describes the PIKS imaging model. The solid lines indicate data flow, and the dashed lines indicate control flow. The PIKS application program interface consists of four major parts:

1. Data objects
2. Operators, tools, and utilities
3. System mechanisms
4. Import and export


FIGURE 20.1-1. PIKS imaging model.

The PIKS data objects include both image and image-related, non-image data objects. The operators, tools, and utilities are functional elements that are used to process images or data objects extracted from images. The system mechanisms manage and control the processing. PIKS receives information from the application to invoke its system mechanisms, operators, tools, and utilities, and returns certain status and error information to the application. The import and export facility provides the means of accepting images and image-related data objects from an application, and for returning processed images and image-related data objects to the application. PIKS can transmit its internal data objects to an external facility through the ISO/IEC standards Image Interchange Facility (IIF) or the Basic Image Interchange Format (BIIF). Also, PIKS can receive data objects in its internal format, which have been supplied by the IIF or the BIIF. References 7 to 9 provide information and specifications of the IIF and BIIF.

20.1.2. PIKS Data Objects

PIKS supports two types of data objects: image data objects and image-related, non-image data objects.


FIGURE 20.1-2. Geometrical representation of a PIKS colour image array.

A PIKS image data object is a five-dimensional collection of pixels whose structure is:

x  Horizontal space index, 0 ≤ x ≤ X – 1
y  Vertical space index, 0 ≤ y ≤ Y – 1
z  Depth space index, 0 ≤ z ≤ Z – 1
t  Temporal index, 0 ≤ t ≤ T – 1
b  Colour or spectral band index, 0 ≤ b ≤ B – 1

Some of the image dimensions may be unpopulated. For example, as shown in Figure 20.1-2, for a colour image, Z = T = 1. PIKS gives semantic meaning to certain dimensional subsets of the five-dimensional image object. These are listed in Table 20.1-1. PIKS utilizes the following pixel data types:

1. Boolean
2. Non-negative integer
3. Signed integer
4. Real arithmetic
5. Complex arithmetic


TABLE 20.1-1. PIKS Image Objects

Semantic Description          Image Indices
Monochrome                    x, y, 0, 0, 0
Volume                        x, y, z, 0, 0
Temporal                      x, y, 0, t, 0
Colour                        x, y, 0, 0, b
Spectral                      x, y, 0, 0, b
Volume–temporal               x, y, z, t, 0
Volume–colour                 x, y, z, 0, b
Volume–spectral               x, y, z, 0, b
Temporal–colour               x, y, 0, t, b
Temporal–spectral             x, y, 0, t, b
Volume–temporal–colour        x, y, z, t, b
Volume–temporal–spectral      x, y, z, t, b
Generic                       x, y, z, t, b

The precision and data storage format of pixel data is implementation dependent. PIKS supports several image-related, non-image data objects. These include:

1. Chain: an identifier of a sequence of operators
2. Composite identifier: an identifier of a structure of image arrays, lists, and records
3. Histogram: a construction of the counts of pixels with some particular amplitude value
4. Lookup table: a structure that contains pairs of entries in which the first entry is an input value to be matched and the second is an output value
5. Matrix: a two-dimensional array of elements that is used in vector algebra operations
6. Neighbourhood array: a multi-dimensional moving window associated with each pixel of an image (e.g., a convolution impulse response function array)
7. Pixel record: a sequence of across-band pixel values
8. Region-of-interest: a general mechanism for pixel-by-pixel processing selection
9. Static array: an identifier of the same dimension as an image to which it is related (e.g., a Fourier filter transfer function)
10. Tuple: a collection of data values of the same elementary data type (e.g., image size 5-tuple)
11. Value bounds collection: a collection of pairs of elements in which the first element is a pixel coordinate and the second element is an image measurement (e.g., pixel amplitude)
12. Virtual register: an identifier of a storage location for numerical values returned from operators in a chain
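As a purely illustrative sketch (not the PIKS API), a five-dimensional image object can be pictured as an array indexed S(x, y, z, t, b); the colour image of Figure 20.1-2 then has Z = T = 1 and B = 3.

```python
import numpy as np

# Hypothetical illustration of the five-dimensional pixel structure;
# the dimensions chosen here are arbitrary example values.
X, Y, Z, T, B = 640, 480, 1, 1, 3
S = np.zeros((X, Y, Z, T, B), dtype=np.float32)

band_0 = S[:, :, 0, 0, 0]   # S(x, y, 0, 0, 0)
band_1 = S[:, :, 0, 0, 1]   # S(x, y, 0, 0, 1)
band_2 = S[:, :, 0, 0, 2]   # S(x, y, 0, 0, 2)
```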


20.1.3. PIKS Operators, Tools, Utilities, and Mechanisms

PIKS operators are elements that manipulate images or manipulate data objects extracted from images in order to enhance or restore images, or to assist in the extraction of information from images. Exhibit 20.1-1 is a list of PIKS operators categorized by functionality. PIKS tools are elements that create data objects to be used by PIKS operators. Exhibit 20.1-2 presents a list of PIKS tools functionally classified. PIKS utilities are elements that perform basic mechanical image manipulation tasks. A classification of PIKS utilities is shown in Exhibit 20.1-3. This list contains several file access and display utilities that are defined in a proposed amendment to PIKS. PIKS mechanisms are elements that perform control and management tasks. Exhibit 20.1-4 provides a functional listing of PIKS mechanisms. In Exhibits 20.1-1 to 20.1-4, the elements in PIKS Core are identified by an asterisk.

EXHIBIT 20.1-1. PIKS Operators Classification

Analysis: image-to-non-image operators that extract numerical information from an image

*Accumulator Difference measures *Extrema *Histogram, one-dimensional Histogram, two-dimensional Hough transform *Line profile *Moments *Value bounds Classification: image-to-image operators that classify each pixel of a multispectral image into one of a specified number of classes based on the amplitudes of pixels across image bands Classifier, Bayes Classifier, nearest neighbour Colour: image-to-image operators that convert a colour image from one colour space to another *Colour conversion, linear *Colour conversion, nonlinear *Colour conversion, subtractive Colour lookup, interpolated *Luminance generation


Complex image:

image-to-image operators that perform basic manipulations of images in real and imaginary or magnitude and phase form

*Complex composition *Complex conjugate *Complex decomposition *Complex magnitude Correlation: image-to-non-image operators that compute a correlation array of a pair of images Cross-correlation Template match Edge detection: image-to-image operators that detect the edge boundary of objects within an image Edge detection, orthogonal gradient Edge detection, second derivative Edge detection, template gradient Enhancement: image-to-image operators that improve the visual appearance of an image or that convert an image to a form better suited for analysis by a human or a machine Adaptive histogram equalization False colour Histogram modification Outlier removal Pseudocolour Unsharp mask Wallis statistical differencing Ensemble: image-to-image operators that perform arithmetic, extremal, and logical combinations of pixels *Alpha blend, constant Alpha blend, variable *Dyadic, arithmetic *Dyadic, complex *Dyadic, logical *Dyadic, predicate *Split image Z merge


Feature extraction:

image-to-image operators that compute a set of image features at each pixel of an image

Label objects Laws texture features Window statistics Filtering: image-to-image operators that perform neighbourhood combinations of pixels directly or by Fourier transform domain processing Convolve, five-dimensional *Convolve, two-dimensional Filtering, homomorphic *Filtering, linear Geometric: image-to-image and ROI-to-ROI operators that perform geometric modifications Cartesian to polar *Flip, spin, transpose Polar to cartesian *Rescale *Resize *Rotate *Subsample *Translate Warp, control point *Warp, lookup table *Warp, polynomial *Zoom Histogram shape: non-image to non-image operators that generate shape measurements of a pixel amplitude histogram of an image Histogram shape, one-dimensional Histogram shape, two-dimensional Morphological: image-to-image operators that perform morphological operations on boolean and grey scale images *Erosion or dilation, Boolean *Erosion or dilation, grey *Fill region Hit or miss transformation *Morphic processor


Pixel modification: image-to-image operators that modify an image by pixel drawing or painting
Draw pixels; Paint pixels

Point: image-to-image operators that perform point manipulation on a pixel-by-pixel basis
*Bit shift; *Complement; Error function scaling; *Gamma correction; Histogram scaling; Level slice; *Lookup; Lookup, interpolated; *Monadic, arithmetic; *Monadic, complex; *Monadic, logical; Noise combination; *Power law scaling; Rubber band scaling; *Threshold; *Unary, integer; *Unary, real; *Window-level

Presentation: image-to-image operators that prepare an image for display
*Diffuse; *Dither

Shape: image-to-non-image operators that label objects and perform measurements of the shape of objects within an image
Perimeter code generator; Shape metrics; Spatial moments, invariant; Spatial moments, scaled


Unitary transform: image-to-image operators that perform multi-dimensional forward and inverse unitary transforms of an image
Transform, cosine; *Transform, Fourier; Transform, Hadamard; Transform, Hartley

3D Specific: image-to-image operators that perform manipulations of three-dimensional image data
Sequence average; Sequence Karhunen-Loeve transform; Sequence running measures; 3D slice

EXHIBIT 20.1-2. PIKS Tools Classification

Image generation: tools that create test images
Image, bar chart; *Image, constant; Image, Gaussian image; Image, grey scale image; Image, random number image

Impulse response function array generation: tools that create impulse response function neighbourhood array data objects
Impulse, boxcar; *Impulse, derivative of Gaussian; Impulse, difference of Gaussians; *Impulse, elliptical; *Impulse, Gaussian; *Impulse, Laplacian of Gaussian; Impulse, pyramid; *Impulse, rectangular; Impulse, sinc function

Lookup table generation: tools that create entries of a lookup table data object
*Array to LUT

Matrix generation: tools that create matrix data objects
*Colour conversion matrix


Region-of-interest generation: tools that create region-of-interest data objects from a mathematical description of the region-of-interest
*ROI, coordinate; *ROI, elliptical; *ROI, polygon; *ROI, rectangular

Static array generation: tools that create filter transfer function, power spectrum, and windowing function static array data objects
*Filter, Butterworth; *Filter, Gaussian; Filter, inverse; Filter, matched; Filter, Wiener; Filter, zonal; Markov process power spectrum; Windowing function

EXHIBIT 20.1-3. PIKS Utilities Classification

Display: utilities that perform image display functions
*Boolean display; *Close window; *Colour display; *Event display; *Monochrome display; *Open titled window; *Open window; *Pseudocolour display

Export from PIKS: utilities that export image and non-image data objects from PIKS to an application or to the IIF or BIIF
*Export histogram; *Export image; *Export LUT; *Export matrix; *Export neighbourhood array; *Export ROI array; *Export static array; *Export tuple; *Export value bounds; *Get colour pixel; *Get pixel; *Get pixel array; Get pixel record; *Output image file; Output object


Import to PIKS: utilities that import image and non-image data objects to PIKS from an application or from the IIF or the BIIF
*Import histogram; *Import image; *Import LUT; *Import matrix; *Import neighbourhood array; *Import ROI array; *Import static array; *Import tuple; *Import value bounds; Input object; *Input image file; *Input PhotoCD; *Put colour pixel; *Put pixel; *Put pixel array; Put pixel record

Inquiry: utilities that return information to the application regarding PIKS data objects, status and implementation
Inquire chain environment; Inquire chain status; *Inquire elements; *Inquire image; Inquire index assignment; *Inquire non-image object; *Inquire PIKS implementation; *Inquire PIKS status; *Inquire repository; *Inquire resampling

Internal: utilities that perform manipulation and conversion of PIKS internal image and non-image data objects
*Constant predicate; *Convert array to image; *Convert image data type; *Convert image to array; *Convert image to ROI; *Convert ROI to image; *Copy window; *Create tuple; *Equal predicate; *Extract pixel plane; *Insert pixel plane


EXHIBIT 20.1-4. PIKS Mechanisms Classification

Chaining: mechanisms that manage execution of PIKS elements inserted in chains
Chain abort; Chain begin; Chain delete; Chain end; Chain execute; Chain reload

Composite identifier management: mechanisms that perform manipulation of image identifiers inserted in arrays, lists, and records
Composite identifier array equal; Composite identifier array get; Composite identifier array put; Composite identifier list empty; Composite identifier list equal; Composite identifier list get; Composite identifier list insert; Composite identifier list remove; Composite identifier record equal; Composite identifier record get; Composite identifier record put

Control: mechanisms that control the basic operational functionality of PIKS
Abort asynchronous execution; *Close PIKS; *Close PIKS, emergency; *Open PIKS; Synchronize


Error: mechanisms that provide means of reporting operational errors
*Error handler; *Error logger; *Error test

System management: mechanisms that allocate, deallocate, bind, and set attributes of data objects and set global variables
Allocate chain; Allocate composite identifier array; Allocate composite identifier list; Allocate composite identifier record; *Allocate display image; *Allocate histogram; *Allocate image; *Allocate lookup table; *Allocate matrix; *Allocate neighbourhood array; Allocate pixel record; *Allocate ROI; *Allocate static array; *Allocate tuple; *Allocate value bounds collection; Allocate virtual register; Bind match point; *Bind ROI; *Deallocate data object; *Define sub image; *Return repository identifier; *Set globals; *Set image attributes; Set index assignment

Virtual register: mechanisms that manage the use of virtual registers
Vreg alter; Vreg clear; Vreg conditional; Vreg copy; Vreg create; Vreg delete; Vreg get; Vreg set; Vreg wait


FIGURE 20.1-3. PIKS operator model: non-image to non-image operators. (Diagram: source non-image objects are processed by an operator to produce destination non-image objects.)

20.1.4. PIKS Operator Model

The PIKS operator model provides three possible transformations of PIKS data objects by a PIKS operator:

1. Non-image to non-image
2. Image to non-image
3. Image to image

Figure 20.1-3 shows the PIKS operator model for the transformation of non-image data objects to produce destination non-image data objects. An example of such a transformation is the generation of shape features from an image histogram. The operator model for the transformation of image data objects by an operator to produce non-image data objects is shown in Figure 20.1-4. An example of such a transformation is the computation of the least-squares error between a pair of images. In this operator model, processing is subject to two control mechanisms: region-of-interest (ROI) source selection and source match point translation. These control mechanisms are defined later. The dashed line in Figure 20.1-4 indicates the transfer of control information. The dotted line indicates the binding of source ROI objects to source image objects.
FIGURE 20.1-4. PIKS operator model: image to non-image operators. (Diagram: tagged source images with source match points and bound source ROI objects pass through source match point translation and ROI source selection before the operator, which produces destination non-image objects.)


FIGURE 20.1-5. PIKS operator model: image to image operators. (Diagram: tagged source and destination images with source and destination match points and bound source and destination ROI objects; source match point translation and ROI source selection precede the operator, and ROI destination selection and destination match point translation follow it to produce destination image objects.)

Figure 20.1-5 shows the PIKS operator model for the transformation of image data objects by an operator to produce other image data objects. An example of such an operator is the unsharp masking operator, which enhances detail within an image. In this operator model, processing is subject to four control mechanisms: source match point translation, destination match point translation, ROI source selection, and ROI destination selection.

Index Assignment. Some PIKS image to non-image and image to image operators have the capability of assigning operator indices to image indices. This capability permits operators that are inherently Nth order, where N < 5, to be applied to five-dimensional images in a flexible manner. For example, a two-dimensional Fourier transform can be taken of each column slice of a volumetric image using index assignment.

ROI Control. A region-of-interest (ROI) data object can be used to control which pixels within a source image will be processed by an operator and to specify which pixels processed by an operator will be recorded in a destination image. Conceptually, a ROI consists of an array of Boolean value pixels of up to five dimensions. Figure 20.1-6 presents an example of a two-dimensional rectangular ROI. In this example, if the pixels in the cross-hatched region are logically TRUE, the remaining pixels are logically FALSE. Otherwise, if the cross-hatched pixels are set FALSE, the others are TRUE.



FIGURE 20.1-6. Rectangular ROI bound to an image array.

The size of a ROI need not be the same as the size of an image to which it is associated. When a ROI is to be associated with an image, a binding process occurs in which a ROI control object is generated. If the ROI data object is larger in spatial extent than the image to which it is to be bound, it is clipped to the image size to form the ROI control object. In the opposite case, if the ROI data object is smaller than the image, the ROI control object is set to the FALSE state in the non-overlap region.

Figure 20.1-7 illustrates three cases of ROI functionality for point processing of a monochrome image. In case 1, the destination ROI control object is logically TRUE over the full image extent, and the source ROI control object is TRUE over a cross-hatched rectangular region smaller than the full image. In this case, the destination image consists of the existing destination image with an insert of processed source pixels. For case 2, the source ROI is of full extent, and the destination ROI is of a smaller cross-hatched rectangular extent. The resultant destination image consists of processed pixels inserted into the existing destination image. Functionally, the result is the same as for case 1. The third case shows the destination image when the source and destination ROIs are overlapping rectangles smaller than the image extent. In this case, the processed pixels are recorded only in the overlap area of the source and destination ROIs.

The ROI concept applies to multiple destination images. Each destination image has a separately bound ROI control object which independently controls recording of pixels in the corresponding destination image. The ROI concept also applies to neighbourhood as well as point operators. Each neighbourhood processing element, such as an impulse response array, has a pre-defined key pixel. If the key pixel lies within a source control ROI, the output pixel is formed by the neighbourhood operator even if any or all neighbourhood elements lie outside the ROI.

PIKS provides tools for generating ROI data objects from higher level specifications. Such supported specifications include:


FIGURE 20.1-7. ROI operation. (Diagram: the three cases of source ROI RS and destination ROI RD placement discussed in the text.)

1. Coordinate list
2. Ellipse
3. Polygon
4. Rectangle

These tools, together with the ROI binding tool, provide the capability to conceptually generate five-dimensional ROI control objects from lower dimensional descriptions by pixel plane extensions. For example, with the elliptical ROI generation tool, it is possible to generate a circular disk ROI in a spatial pixel plane, and then cause the disk to be replicated over the other pixel planes of a volumetric image to obtain a cylinder-shaped ROI.

Match Point Control. Each PIKS image object has an associated match point coordinate set (x, y, z, t, b) which some PIKS operators utilize to control multi-dimensional translations of images prior to processing by an operator.


FIGURE 20.1-8. Match point translation for image subtraction. (Diagram: source images S1 and S2 aligned at a common match point; the difference image is D = S1 − S2.)

The generic effect of match point control for an operator that creates multiple destination images from multiple source images is to translate each source image and each destination image, other than the first source image, such that the match points of these images are aligned with the match point of the first source image prior to processing. Processing then occurs on the spatial intersection of all images. Figure 20.1-8 shows an example of image subtraction subject to match point control. In the example, the difference image is shown cross-hatched. A small C sketch of this alignment appears after the feature list below.

Other Features. PIKS provides a number of other features to control processing. These include:

1. Processing of ROI objects in concert with image objects
2. Global setting of image and ROI resampling options
3. Global engagement of ROI control and ROI processing
4. Global engagement of index assignment
5. Global engagement of match point control
6. Global engagement of synchronous or asynchronous operation
7. Heterogeneous bands of dissimilar data types
8. Operator chaining
9. Virtual registers to store intermediate numerical results of an operator chain
10. Composite image management of image and non-image objects

The PIKS Functional Specification (2) provides rigorous specifications of these features. PIKS also contains a data object repository of commonly used impulse response arrays, dither arrays, and colour conversion matrices.
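The following is a minimal C sketch, written outside of PIKS, of match-point-aligned subtraction of two images held as plain arrays, in the manner of Figure 20.1-8; the Image structure, its field names, and the float pixel representation are illustrative assumptions of the sketch, not PIKS data structures.

#include <stdlib.h>

typedef struct {
    int   width, height;   /* spatial extent                  */
    int   mx, my;          /* match point coordinates (x, y)  */
    float *pixels;         /* row-major pixel data            */
} Image;

/* Subtract s2 from s1 after aligning their match points.  Pixels of
   the destination outside the spatial intersection are left untouched. */
static void match_point_subtract(const Image *s1, const Image *s2, Image *dst)
{
    /* Offset that maps s1 coordinates onto s2 coordinates after the
       match points have been brought into coincidence. */
    int dx = s1->mx - s2->mx;
    int dy = s1->my - s2->my;

    for (int y = 0; y < s1->height; y++) {
        for (int x = 0; x < s1->width; x++) {
            int x2 = x - dx;                    /* corresponding pixel in s2 */
            int y2 = y - dy;
            if (x2 < 0 || x2 >= s2->width || y2 < 0 || y2 >= s2->height)
                continue;                       /* outside the intersection  */
            if (x >= dst->width || y >= dst->height)
                continue;
            dst->pixels[y * dst->width + x] =
                s1->pixels[y * s1->width + x] -
                s2->pixels[y2 * s2->width + x2];
        }
    }
}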


FIGURE 20.1-9. PIKS application interface. (Diagram: external physical source and destination images, with pixel data types BI, NI, SI, TI, RF, CF, are imported to and exported from the internal abstract computational system, whose internal abstract source and destination images use pixel data types BD, ND, SD, RD, CD; abstract parameters BP, NP, SP, RP, CP and abstract identifiers IP, ID cross the API between the PIKS element specification, the PIKS language specification, and the PIKS imaging model.)

20.1.5. PIKS Application Interface

Figure 20.1-9 describes the PIKS application interface for data interchange for an implementation-specific data pathway. PIKS supports a limited number of physical data types that may exist within an application domain or within the PIKS domain. Such data types represent both input and output parameters of PIKS elements and image and non-image data that are interchanged between PIKS and the application. PIKS provides notational differentiation between most of the elementary abstract data types used entirely within the PIKS domain (PIKS internal), those that are used to convey parameter data between PIKS and the application (PIKS parameter), and those that are used to convey pixel data between PIKS and the application (external physical image). Table 20.1-2 lists the codes for the PIKS abstract data types. The abstract data types are defined in ISO/IEC 12087-1. PIKS internal and parameter data types are of the same class if they refer to the same basic data type. For example, RP and RD data types are of the same class, but RP and SD data types are of different classes. The external physical data types supported by PIKS for the import and export of image data are also listed in Table 20.1-2. PIKS internal pixel data types and external pixel data types are of the same class if they refer to the same basic data type. For example, ND and NI data types are of the same class, but SI and ND data types are of different classes.


TABLE 20.1-2. PIKS Datatype Codes

Data Type                 PIKS Internal Code   PIKS Parameter Code   Physical Code
Boolean                   BD                   BP                    BI
Non-negative integer      ND                   NP                    NI
Signed integer            SD                   SP                    SI
Fixed-point integer       —                    —                     TI
Real arithmetic           RD                   RP                    RF
Complex arithmetic        CD                   CP                    CF
Character string          CS                   CS                    —
Data object identifier    ID                   IP                    —
Enumerated                NA                   EP                    —
Null                      NULL                 NULL                  —

20.1.6. PIKS Conformance Profiles

Because image processing requirements vary considerably across various applications, PIKS functionality has been subdivided into the following five nested sets of functionality called conformance profiles:

1. PIKS Foundation: basic image processing functionality for monochrome and colour images whose pixels are represented as Boolean values or as non-negative or signed integers.
2. PIKS Core: intermediate image processing functionality for monochrome and colour images whose pixels are represented as Boolean values, non-negative or signed integers, real arithmetic values, and complex arithmetic values. PIKS Core is a superset of PIKS Foundation.
3. PIKS Technical: expanded image processing functionality for monochrome, colour, volume, temporal, and spectral images for all pixel data types.
4. PIKS Scientific: complete set of image processing functionality for all image structures and pixel data types. PIKS Scientific is a superset of PIKS Technical functionality.
5. PIKS Full: complete set of image processing functionality for all image structures and pixel data types plus the capability to chain together PIKS processing elements and to operate asynchronously. PIKS Full is a superset of PIKS Scientific functionality.

Each PIKS profile may include the capability to interface with the IIF, the BIIF, and to include display and input/output functionality, as specified by PIKS Amendment 1.


20.2. PIKS CORE OVERVIEW

The PIKS Core profile provides an intermediate level of functionality designed to service the majority of image processing applications. It supports all pixel data types, but only monochrome and colour images of the full five-dimensional PIKS image data object. It supports the following processing features:

1. Nearest neighbour, bilinear, and cubic convolution global resampling image interpolation
2. Nearest neighbour global resampling ROI interpolation
3. All ROIs
4. Data object repository

The following sections provide details of the data structures for PIKS Core non-image and image data objects.

20.2.1. PIKS Core Non-image Data Objects

PIKS Core supports the non-image data objects listed below. The list contains the PIKS Functional Specification object name code and the definition of each object.

HIST            Histogram
LUT             Look-up table
MATRIX          Matrix
NBHOOD_ARRAY    Neighbourhood array
ROI             Region-of-interest
STATIC_ARRAY    Static array
TUPLE           Tuple
VALUE_BOUNDS    Value bounds collection

The tuple object is defined first because it is used to define other non-image and image data objects. Tuples are also widely used in PIKS to specify operator and tool parameters (e.g., the size of a magnified image). Figure 20.2-1 contains the tree structure of a tuple object. It consists of the tuple size, tuple data type, and a private identifier to the tuple data values. The tuple size is an unsigned integer that specifies the number of tuple data values. The tuple datatype option is a signed integer from 1 to 6 that specifies one of the six options. The identifier to the tuple data array is private in the sense that it is not available to an application; only the tuple data object itself has a public identifier.

A PIKS histogram data object is a one-dimensional array of unsigned integers that stores the histogram of an image plus histogram object attributes. Figure 20.2-2 shows the tree structure of a histogram data object. The histogram array size is an unsigned integer that specifies the number of histogram bins.


Tuple Object
  Tuple data size: number of tuple data values, e.g. 5
  Tuple datatype option: choice of BD, ND, SD, RD, CD or CS
  Tuple data array: private identifier

FIGURE 20.2-1. Tuple object tree structure.

Histogram Object
  Histogram array size: number of histogram bins, e.g. 512
  Lower amplitude value: lower amplitude value of histogram range, e.g. 0.1
  Upper amplitude value: upper amplitude value of histogram range, e.g. 0.9
  Histogram data array: private identifier

FIGURE 20.2-2. Histogram object tree structure.

The lower and upper amplitude values are real numbers that specify the pixel amplitude range of the histogram. A PIKS look-up table data object, as shown in Figure 20.2-3, is a two-dimensional array that stores the look-up table data plus a collection of look-up table attributes. The two-dimensional array has the following general form:
$$T = \begin{bmatrix}
T(0,0) & \cdots & T(0,e) & \cdots & T(0,E-1) \\
\vdots & & \vdots & & \vdots \\
T(b,0) & \cdots & T(b,e) & \cdots & T(b,E-1) \\
\vdots & & \vdots & & \vdots \\
T(B-1,0) & \cdots & T(B-1,e) & \cdots & T(B-1,E-1)
\end{bmatrix}$$

A positive integer e is the input row index to the table. It is derived from a source image by the relationship

e = S(x, y, z, t, b)    (20.2-1)

The LUT output is a one-dimensional array

a(e) = [T(0, e) … T(b, e) … T(B − 1, e)]    (20.2-2)


Lookup Table Object
  Table entries: number of table entries, e.g. 512
  Table bands: number of table bands, e.g. 3
  Table input data type option: choice of ND or SD
  Table output data type option: choice of BD, ND, SD, RD or CD
  Lookup table data array: private identifier

FIGURE 20.2-3. Look-up table object tree structure.

There are two types of usage for PIKS Core: (1) the source and destination images are of the same band dimension, or (2) the source image is monochrome and the destination image is colour. In the former case,

D(x, y, 0, 0, b) = T(0, S(x, y, z, t, b))    (20.2-3)

In the latter case,

D(x, y, 0, 0, b) = T(b, S(x, y, z, t, 0))    (20.2-4)
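The two usage cases can be illustrated with a small C sketch that applies a B × 256 table to packed 8-bit pixel arrays; the 256-entry table size, the array layout, and the function names are assumptions of the sketch and are not part of the PIKS binding.

#include <stddef.h>

/* Same band dimension: every band is passed through table row 0,
   in the spirit of Eq. 20.2-3. */
static void lut_same_bands(const unsigned char *src, unsigned char *dst,
                           size_t pixels, size_t bands,
                           const unsigned char T[][256])
{
    for (size_t i = 0; i < pixels * bands; i++)
        dst[i] = T[0][src[i]];
}

/* Monochrome source, colour destination: destination band b is taken
   from table row b, in the spirit of Eq. 20.2-4. */
static void lut_mono_to_colour(const unsigned char *src, unsigned char *dst,
                               size_t pixels, size_t bands,
                               const unsigned char T[][256])
{
    for (size_t i = 0; i < pixels; i++)
        for (size_t b = 0; b < bands; b++)
            dst[i * bands + b] = T[b][src[i]];
}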

Figure 20.2-4 shows the tree structure of a matrix data object. The matrix is specified by its number of rows R and columns C and the data type of its constituent terms. The matrix is addressed as follows:

$$M = \begin{bmatrix}
M(1,1) & \cdots & M(1,c) & \cdots & M(1,C) \\
\vdots & & \vdots & & \vdots \\
M(r,1) & \cdots & M(r,c) & \cdots & M(r,C) \\
\vdots & & \vdots & & \vdots \\
M(R,1) & \cdots & M(R,c) & \cdots & M(R,C)
\end{bmatrix}$$    (20.2-5)

In PIKS, matrices are used primarily for colour space conversion. A PIKS Core neighbourhood array is a two-dimensional array and associated attributes as shown in Figure 20.2-5. The array has J columns and K rows. As shown below, it is indexed in the same manner as a two-dimensional image.


Matrix Object
  Column size: number of matrix columns, e.g. 4
  Row size: number of matrix rows, e.g. 3
  Matrix data type option: choice of ND, SD, RD or CD
  Matrix data array: private identifier

FIGURE 20.2-4. Matrix object tree structure.

Neighbourhood Array Object
  Neighbourhood size: 5-tuple public identifier, specification of J, K, 1, 1, 1
  Key pixel: 5-tuple public identifier, specification of jK, kK, 0, 0, 0
  Scale factor: integer value
  Semantic label option: choice of GL, DL, IL, ML, SL
  Neighbourhood data array: private identifier

FIGURE 20.2-5. Neighbourhood object tree structure.

$$H(j, k) = \frac{1}{S}
\begin{bmatrix}
H(0,0) & \cdots & H(j,0) & \cdots & H(J-1,0) \\
\vdots & & \vdots & & \vdots \\
H(0,k) & \cdots & H(j,k) & \cdots & H(J-1,k) \\
\vdots & & \vdots & & \vdots \\
H(0,K-1) & \cdots & H(j,K-1) & \cdots & H(J-1,K-1)
\end{bmatrix}$$    (20.2-6)

In Eq. 20.2-6, the scale factor S is unity except for signed integer data. For signed integers, the scale factor can be used to realize fractional elements; a brief C sketch of this follows the structure code list below. The key pixel (jK, kK) defines the origin of the neighbourhood array. It need not lie within the confines of the array. There are five types of neighbourhood arrays, specified by the following structure codes:

GL  Generic array
DL  Dither array
IL  Impulse response array
ML  Mask array
SL  Structuring element array
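The following sketch, written independently of PIKS, shows how a 3 × 3 signed integer array divided by S = 16 realizes a fractional smoothing kernel; the array values, the row-major image layout, and the zero-exterior boundary handling are assumptions of the sketch.

/* The integer array below, divided by S = 16, represents the
   fractional kernel (1/16) * [1 2 1; 2 4 2; 1 2 1]. */
static const int H[3][3] = {
    { 1, 2, 1 },
    { 2, 4, 2 },
    { 1, 2, 1 }
};
static const int S = 16;     /* scale factor of Eq. 20.2-6 */

/* Weighted sum about pixel (x, y) of a width-by-height 8-bit image;
   the key pixel is taken as the array centre (1, 1), and exterior
   pixels are treated as zero. */
static int weighted_sum(const unsigned char *img, int width, int height,
                        int x, int y)
{
    long acc = 0;
    for (int k = 0; k < 3; k++) {
        for (int j = 0; j < 3; j++) {
            int xx = x + j - 1;
            int yy = y + k - 1;
            if (xx < 0 || xx >= width || yy < 0 || yy >= height)
                continue;                       /* zero exterior       */
            acc += (long)H[k][j] * img[yy * width + xx];
        }
    }
    return (int)(acc / S);                      /* apply the 1/S scale */
}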


Region-of-interest Object
  ROI virtual array size: 5-tuple public identifier, specification of XR, YR, 1, 1, 1
  ROI structure option: choice of AR, CR, ER, GR, PR, RR
  Polarity option: choice of TRUE or FALSE
  Conceptual ROI data array: private identifier

FIGURE 20.2-6. Region-of-interest object tree structure.

Figure 20.2-6 shows the tree structure of a region-of-interest ROI data object. Conceptually, a PIKS Core ROI data object is a two-dimensional array of Boolean value pixels of width XR and height YR. The actual storage method is implementation dependent. The ROI can be constructed by one of the following representations:

AR  ROI array
CR  ROI coordinate list
ER  ROI elliptical
GR  ROI generic
RR  ROI rectangular

The ROI can be defined to be TRUE or FALSE in its interior. A PIKS Core static array is a two-dimensional array of width XS and height YS as shown in Figure 20.2-7. Following is a list of the types of static arrays supported by PIKS:

GS  Generic static array
PS  Power spectrum
TS  Transfer function
WS  Windowing function

Static Array Object
  Static array size: 5-tuple public identifier, specification of XS, YS, 1, 1, 1
  Semantic label option: choice of GS, PS, TS, WS
  Datatype option: choice of BD, ND, SD, RD or CD
  Static array data array: private identifier

FIGURE 20.2-7. Static array object tree structure.


Value Bounds Collection Object
  Collection size: number of collection members
  Lower amplitude bound: value of lower amplitude bound
  Upper amplitude bound: value of upper amplitude bound
  Pixel data type option: choice of NP, SP, RP
  Value bounds collection data array: private identifier

FIGURE 20.2-8. Value bounds collection object tree structure.

A value bounds collection is a storage mechanism containing the pixel coordinate and pixel values of all pixels whose amplitudes lie within a lower and an upper bound. Figure 20.2-8 is the tree structure of the value bounds collection data object.

20.2.2. PIKS Core Image Data Object

A PIKS image object is a tree structure of image attributes, processing control attributes, and private identifiers to an image data array of pixels and an associated ROI. Figure 20.2-9 illustrates the tree structure of an image object. The image attributes are created when an image object is allocated. When an image is allocated, there will be no private identifier to the image array data. The private identifier is established automatically when raw image data are imported to a PIKS image object or when a destination image is created by an operator. The processing control attributes are created when a ROI is bound to an image. It should be noted that for PIKS Core, all bands must be of the same datatype and pixel precision. The pixel precision specification must be in accord with the choices provided by a particular PIKS implementation.

20.2.3. PIKS Core C Language Binding

The PIKS Functional Specification document (2) establishes the semantic usage of PIKS. The PIKS C language binding document (10) defines the PIKS syntactical usage for the C programming language. At present, there are no other language bindings. Reader familiarity with the C programming language is assumed. The PIKS C binding has adopted the Hungarian prototype naming convention, in which the datatypes of all entities are specified by prefix codes. Table 20.2-1 lists the datatype prefix codes. The entities in courier font are binding names. Table 20.2-2 gives the relationship between the PIKS Core C binding designators and the PIKS Functional Specification datatypes and data objects. The general structure of the C language binding element prototype is


Image Object
  Image attributes
    Representation
      Size: 5-tuple public identifier, specification of X, Y, Z, T, B
      Band datatype: B-tuple public identifier, specification of BD, ND, SD, RD or CD datatype
      Image structure option: MON or COLR
    Channel
      Band precision: B-tuple public identifier, specification of pixel precision per band
    Colour
      White point: specification of X0, Y0, Z0
      Colour space option: 29 choices, e.g. CIE L*a*b* or CMYK
  Control
    ROI: private identifier
    ROI offset: 5-tuple public identifier, specification of xo, yo, zo, to, bo
  Image data array: private identifier

FIGURE 20.2-9. Image object tree structure.

void IvElementName

or

I(prefix)ReturnName I(prefix)ElementName

As an example, the following is the element C binding prototype for two-dimensional convolution of a source image into a destination image:

Idnimage InConvolve2D(          /* OUT destination image identifier   */
Idnimage  nSourceImage,         /* source image identifier            */
Idnimage  nDestImage,           /* destination image identifier       */
Idnnbhood nImpulse,             /* impulse response array identifier  */
Ipint     iOption               /* convolution 2D option              */
);

In this example, the first two components of the prototype are the identifiers to the source and destination images. Next is the identifier to the impulse response neighbourhood array. The last component is the integer option parameter for the convolution boundary option. The following #define convolution options are provided in the piks.h header file:


ICONVOLVE_UPPER_LEFT     1   /* upper left corner justified   */
ICONVOLVE_ENCLOSED       2   /* enclosed array                */
ICONVOLVE_KEY_ZERO       3   /* key pixel, zero exterior      */
ICONVOLVE_KEY_REFLECTED  4   /* key pixel, reflected exterior */

TABLE 20.2-1. PIKS Datatype Prefix Codes

Prefix   Definition
a        Array
b        Boolean
c        Character
d        Internal data type
e        Enumerated data type
f        Function
i        Integer
m        External image data type
n        Identifier
p        Parameter type
r        Real
s        Structure
t        Pointer
u        Unsigned integer
v        Void
z        Zero terminated string
st       Structure or union pointer
tba      Pointer to Boolean array
tia      Pointer to integer array
tf       Pointer to function
tra      Pointer to real array
tua      Pointer to unsigned integer array

As an example, let nSrc and nDst be the identifier names assigned to a source and a destination image, respectively, and let nImpulse be the identifier of an impulse response array. In an application program, the two-dimensional convolution operator can be invoked as

InConvolve2D(nSrc, nDst, nImpulse, ICONVOLVE_ENCLOSED);

or by

nDst = InConvolve2D(nSrc, nDst, nImpulse, ICONVOLVE_ENCLOSED);


TABLE 20.2-2. PIKS Core C Binding Designators and Functional Specification Datatypes and Data Objects

Binding                 Functional Specification   Description
Imbool                  BI                         External Boolean datatype
Imuint                  NI                         External non-negative integer datatype
Imint                   SI                         External signed integer datatype
Imfixed                 TI                         External fixed point integer datatype
Imfloat                 RF                         External floating point datatype
Ipbool                  BP                         Parameter Boolean datatype
Ipuint                  NP                         Parameter non-negative integer datatype
Ipint                   SP                         Parameter signed integer datatype
Ipfloat                 RP                         Parameter real arithmetic datatype
Idnimage                SRC, DST                   Image data object
Idnhist                 HIST                       Histogram data object
Idnlut                  LUT                        Lookup table data object
Idnmatrix               MATRIX                     Matrix data object
Idnnbhood               NBHOOD_ARRAY               Neighbourhood array data object
Idnroi                  ROI                        Region-of-interest data object
Idnstatic               STATIC_ARRAY               Static array data object
Idntuple                TUPLE                      Tuple data object
Idnbounds               VALUE_BOUNDS               Value bounds collection data object
Idnrepository           IP                         External repository identifier
Ipnerror                IP                         External error file identifier
Ipsparameter_basic      IP                         External tuple data array pointer union
Ipsparameter_numeric    IP                         External matrix data array pointer union
Ipsparameter_pixel      IP                         External LUT, neighbourhood, pixel data array pointer union
Ipspiks_pixel_types     IP                         External image data array pointer union

where ICONVOLVE_ENCLOSED is a boundary convolution option. The second formulation is useful for nesting of operator calls. The PIKS C binding provides a number of standardized convenience functions, which are shortcuts for creating tuples, ROIs, and monochrome and colour images. Reference 5 is a complete C programmer’s guide for the PIKS Foundation profile. The compact disk contains a PDF file of a PIKS Core programmer’s reference manual. This manual contains program snippets for each of the PIKS elements that explain their use.
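As a brief illustration of the operator-call nesting mentioned above, the following sketch wraps two calls to the documented InConvolve2D element; it assumes the piks.h header and image and neighbourhood identifiers allocated elsewhere by the application, and uses only the InConvolve2D prototype and the ICONVOLVE_ENCLOSED option shown earlier.

#include "piks.h"

/* Convolve nSrc twice with the same impulse response array by nesting
   the calls: the inner call returns nTmp, which becomes the source of
   the outer call.  nTmp and nDst must be allocated destination images. */
Idnimage convolve_twice(Idnimage nSrc, Idnimage nTmp, Idnimage nDst,
                        Idnnbhood nImpulse)
{
    return InConvolve2D(InConvolve2D(nSrc, nTmp, nImpulse, ICONVOLVE_ENCLOSED),
                        nDst, nImpulse, ICONVOLVE_ENCLOSED);
}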


REFERENCES
1. "Information Technology, Computer Graphics and Image Processing, Image Processing and Interchange, Functional Specification, Part 1: Common Architecture for Imaging," ISO/IEC 12087-1:1995(E).
2. "Information Technology, Computer Graphics and Image Processing, Image Processing and Interchange, Functional Specification, Part 2: Programmer's Imaging Kernel System Application Program Interface," ISO/IEC 12087-2:1994(E).
3. A. F. Clark, "Image Processing and Interchange: The Image Model," Proc. SPIE/IS&T Conference on Image Processing and Interchange: Implementation and Systems, San Jose, CA, February 1992, 1659, SPIE Press, Bellingham, WA, 106–116.
4. W. K. Pratt, "An Overview of the ISO/IEC Programmer's Imaging Kernel System Application Program Interface," Proc. SPIE/IS&T Conference on Image Processing and Interchange: Implementation and Systems, San Jose, CA, February 1992, 1659, SPIE Press, Bellingham, WA, 117–129.
5. W. K. Pratt, PIKS Foundation C Programmer's Guide, Manning Publications, Prentice Hall, Upper Saddle River, NJ, 1995.
6. W. K. Pratt, "Overview of the ISO/IEC Image Processing and Interchange Standard," in Standards for Electronic Imaging Technologies, Devices, and Systems, M. C. Nier, Ed., San Jose, CA, February 1996, CR61, SPIE Press, Bellingham, WA, 29–53.
7. "Information Technology, Computer Graphics and Image Processing, Image Processing and Interchange, Functional Specification, Part 3: Image Interchange Facility," ISO/IEC 12087-3:1995(E).
8. C. Blum and G. R. Hoffman, "ISO/IEC's Image Interchange Facility," Proc. SPIE/IS&T Conf. on Image Processing and Interchange: Implementation and Systems, San Jose, CA, February 1992, 1659, SPIE Press, Bellingham, WA, 130–141.
9. "Information Technology, Computer Graphics and Image Processing, Image Processing and Interchange, Functional Specification, Part 5: Basic Image Interchange Format," ISO/IEC 12087-5:1998(E).
10. "Information Technology, Computer Graphics and Image Processing, Image Processing and Interchange, Application Program Interface Language Bindings, Part 4: C," ISO/IEC 12088-4:1995(E).


21
PIKS IMAGE PROCESSING PROGRAMMING EXERCISES

Digital image processing is best learned by writing and executing software programs that implement image processing algorithms. Toward this end, the compact disk affixed to the back cover of this book provides executable versions of the PIKS Core Application Program Interface C programming language library, which can be used to implement exercises described in this chapter. The compact disk contains the following items:

A Solaris operating system executable version of the PIKS Core API.
A Windows 2000 and Windows NT operating system executable version of the PIKS Core API.
A Windows 2000 and Windows NT operating system executable version of PIKSTool, a graphical user interface method of executing many of the PIKS Core operators without program compilation.
A PDF file format version of the PIKS Core C Programmer's Reference Manual.
PDF file format and Word versions of the PIKSTool User's Manual.
A PDF file format version of the image database directory.
A digital image database of most of the source images used in the book plus many others widely used in the literature. The images are provided in the PIKS file format. A utility program is provided for conversion from the PIKS file format to the TIFF file format.

Digital images of many of the book photographic figures. The images are provided in the TIFF file format. A utility program is provided for conversion from the TIFF file format to the PIKS file format.
C program source demonstration programs.
C program executable programs of the programming exercises.

To install the CD on a Windows computer, insert the CD into the CD drive and follow the screen instructions. To install the CD on a Solaris computer, create a subdirectory called PIKSrelease, and make that your current working directory by executing:

mkdir PIKSrelease
cd PIKSrelease

Insert the PIKS CD in the CD drive and type:

/cdrom/piks_core_1_6/install.sh

See the README text file in the PIKSrelease directory for further installation information. For further information about the PIKS software, please refer to the PixelSoft, Inc. web site: or send email to: pixelsoft@pixelsoft.com

The following sections contain descriptions of programming exercises. All of them can be implemented using the PIKS API. Some can be more easily implemented using PIKSTool. It is, of course, possible to implement the exercises with other APIs or tools that match the functionality of PIKS Core.

21.1 PROGRAM GENERATION EXERCISES 1.1 Develop a program that: (a) Opens a program session. (b) Reads file parameters of a source image stored in a file. (c) Allocates unsigned integer, monochrome source and destination images. (d) Reads an unsigned integer, 8-bit, monochrome source image from a file. (e) Opens an image display window and displays the source image. (f) Creates a destination image, which is the complement of the source image.


(g) Opens a second display window and displays the destination image. (h) Closes the program session. The executable example_complement_monochrome_ND performs this exercise. The utility source program DisplayMonochromeND.c provides a PIKS template for this exercise. Refer to the input_image_file manual page of the PIKS Programmer’s Reference Manual for file reading information. 1.2 Develop a program that: (a) Creates, in application space, an unsigned integer, 8-bit, 512 × 512 pixel array of a source ramp image whose amplitude increases from left-toright from 0 to 255. (b) Imports the source image for display. (c) Creates a destination image by adding value 100 to each pixel (d) Displays the destination image What is the visual effect of the display in step (d)? The monadic_arithmetic operator can be used for the pixel addition. The executable example_import_ramp performs this exercise. See the monadic_arithmetic, and import_image manual pages. 21.2 IMAGE MANIPULATION EXERCISES 2.1 Develop a program that passes a monochrome image through the log part of the monochrome vision model of Figure 2.4-4. Steps: (a) Convert an unsigned integer, 8-bit, monochrome source image to floating point datatype. (b) Scale the source image over the range 1.0 to 100.0. (c) Compute the source image logarithmic lightness function of Eq. 6.3-4. (d) Scale the log source image for display. The executable example_monochrome_vision performs this exercise. Refer to the window-level manual page for image scaling. See the unary_real and monadic_arithmetic manual pages for computation of the logarithmic lightness function. 2.2 Develop a program that passes an unsigned integer, monochrome image through a lookup table with a square root function. Steps: (a) Read an unsigned integer, 8-bit, monochrome source image from a file. (b) Display the source image.


(c) Allocate a 256 level lookup table. (d) Load the lookup table with a square root function. (e) Pass the source image through the lookup table. (f) Display the destination image. The executable example_lookup_monochrome_ND performs this exercise. See the allocate_lookup_table, import_lut, and lookup manual pages. 2.3 Develop a program that passes a signed integer, monochrome image through a lookup table with a square root function. Steps: (a) Read a signed integer, 16-bit, monochrome source image from a file. (b) Linearly scale the source image over its maximum range and display it. (c) Allocate a 32,768 level lookup table. (d) Load the lookup table with a square root function over the source image maximum range. (e) Pass the source image through the lookup table. (f) Linearly scale the destination image over its maximum range and display it. The executable example_lookup_monochrome_SD performs this exercise. See the extrema, window_level, allocate_lookup_table, import_lut, and lookup manual pages.
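For reference, the following is a plain C sketch of the square root point transformation used in exercises 2.2 and 2.3, written without PIKS calls; the 256-entry table size and the normalization of the output to the range 0 to 255 are assumptions of the sketch.

#include <math.h>

/* Load a 256-entry lookup table with a square root function mapping
   input level v to round(255 * sqrt(v / 255)). */
static void build_sqrt_lut(unsigned char lut[256])
{
    for (int v = 0; v < 256; v++)
        lut[v] = (unsigned char)(255.0 * sqrt(v / 255.0) + 0.5);
}

/* Pass an 8-bit monochrome image, held as a plain array, through the table. */
static void apply_lut(const unsigned char *src, unsigned char *dst,
                      long pixels, const unsigned char lut[256])
{
    for (long i = 0; i < pixels; i++)
        dst[i] = lut[src[i]];
}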

21.3 COLOUR SPACE EXERCISES 3.1 Develop a program that converts a linear RGB unsigned integer, 8-bit, colour image to the XYZ colour space and converts the XYZ colour image back to the RGB colour space. Steps: (a) Display the RGB source linear colour image. (b) Display the R, G and B components as monochrome images. (c) Convert the source image to unit range. (d) Convert the RGB source image to XYZ colour space. (e) Display the X, Y and Z components as monochrome images. (f) Convert the XYZ destination image to RGB colour space. (g) Display the RGB destination image.


The executable example_colour_conversion_RGB_XYZ performs this exercise. See the extract_pixel_plane, convert_image_datatype, monadic_ arithmetic, and colour_conversion_linear manual pages. 3.2 Develop a program that converts a linear RGB colour image to the L*a*b* colour space and converts the L*a*b* colour image back to the RGB colour space. Steps: (a) Display the RGB source linear colour image. (b) Display the R, G and B components as monochrome images. (c) Convert the source image to unit range. (d) Convert the RGB source image to L*a*b* colour space. (e) Display the L*, a* and b* components as monochrome images. (f) Convert the L*a*b* destination image to RGB colour space. (g) Display the RGB destination image. The executable example_colour_conversion_RGB_Lab performs this exercise. See the extract_pixel_plane, convert_image_datatype, monadic_ arithmetic, and colour_conversion_linear manual pages. 3.3 Develop a program that converts a linear RGB colour image to a gamma corrected RGB colour image and converts the gamma colour image back to the linear RGB colour space. Steps: (a) Display the RGB source linear colour image. (b) Display the R, G and B components as monochrome images. (c) Convert the source image to unit range. (d) Perform gamma correction on the linear RGB source image. (e) Display the gamma corrected RGB destination image. (f) Display the R, G and B gamma corrected components as monochrome images. (g) Convert the gamma corrected destination image to linear RGB colour space. (h) Display the linear RGB destination image. The executable example_colour_gamma_correction performs this exercise. See the extract_pixel_plane, convert_image_datatype, monadic_arithmetic, and gamma_correction manual pages. 3.4 Develop a program that converts a gamma RGB colour image to the YCbCr colour space and converts the YCbCr colour image back to the gamma RGB colour space. Steps:


(a) Display the RGB source gamma colour image. (b) Display the R, G and B components as monochrome images. (c) Convert the source image to unit range. (d) Convert the RGB source image to YCbCr colour space. (e) Display the Y, Cb and Cr components as monochrome images. (f) Convert the YCbCr destination image to gamma RGB colour space. (g) Display the gamma RGB destination image. The executable example_colour_conversion_RGB_YCbCr performs this exercise. See the extract_pixel_plane, convert_image_datatype, monadic_ arithmetic, and colour_conversion_linear manual pages. 3.5 Develop a program that converts a gamma RGB colour image to the IHS colour space and converts the IHS colour image back to the gamma RGB colour space. Steps: (a) Display the RGB source gamma colour image. (b) Display the R, G and B components as monochrome images. (c) Convert the source image to unit range. (d) Convert the RGB source image to IHS colour space. (e) Display the I, H and S components as monochrome images. (f) Convert the IHS destination image to gamma RGB colour space. (g) Display the gamma RGB destination image. The executable example_colour_conversion_RGB_IHS performs this exercise. See the extract_pixel_plane, convert_image_datatype, monadic_ arithmetic, and colour_conversion_linear manual pages. 21.4 REGION-OF-INTEREST EXERCISES 4.1 Develop a program that forms the complement of an unsigned integer, 8-bit, 512 × 512, monochrome, image under region-of-interest control. Case 1: Full source and destination ROIs. Case 2: Rectangular source ROI, upper left corner at (50, 100), lower right corner at (300, 350) and full destination ROI. Case 3: Full source ROI and rectangular destination ROI, upper left corner at (150, 200), lower right corner at (400, 450). Case 4: Rectangular source ROI, upper left corner at (50, 100), lower right corner at (300, 350) and rectangular destination ROI, upper left corner at (150, 200), lower right corner at (400, 450).


Steps: (a) Display the source monochrome image. (b) Create a constant destination image of value 150. (c) Complement the source image into the destination image. (d) Display the destination image. (e) Create a constant destination image of value 150. (f) Bind the source ROI to the source image. (g) Complement the source image into the destination image. (h) Display the destination image. (i) Create a constant destination image of value 150. (j) Bind the destination ROI to the destination image. (k) Complement the source image into the destination image. (l) Display the destination image. (m) Create a constant destination image of value 150. (n) Bind the source ROI to the source image and bind the destination ROI to the destination image. (o) Complement the source image into the destination image. (p) Display the destination image. The executable example_complement_monochrome_roi performs this exercise. See the image_constant, generate_2d_roi_rectangular, bind_roi, and complement manual pages. 21.5 IMAGE MEASUREMENT EXERCISES 5.1 Develop a program that computes the extrema of the RGB components of an unsigned integer, 8-bit, colour image. Steps: (a) Display the source colour image. (b) Compute extrema of the colour image and print results for all bands. The executable example_extrema_colour performs this exercise. See the extrema manual page. 5.2 Develop a program that computes the mean and standard deviation of an unsigned integer, 8-bit, monochrome image. Steps:


(a) Display the source monochrome image. (b) Compute moments of the monochrome image and print results. The executable example_moments_monochrome performs this exercise. See the moments manual page. 5.3 Develop a program that computes the first-order histogram of an unsigned integer, 8-bit, monochrome image with 16 amplitude bins. Steps: (a) Display the source monochrome image. (b) Allocate the histogram. (c) Compute the histogram of the source image. (d) Export the histogram and print its contents. The executable example_histogram_monochrome performs this exercise. See the allocate_histogram, histogram_1d, and export_histogram manual pages. 21.6 QUANTIZATION EXERCISES 6.1 Develop a program that re-quantizes an unsigned integer, 8-bit, monochrome image linearly to three bits per pixel and reconstructs it to eight bits per pixel. Steps: (a) Display the source image. (b) Perform a right overflow shift by three bits on the source image. (c) Perform a left overflow shift by three bits on the right bit-shifted source image. (d) Scale the reconstruction levels to 3-bit values. (e) Display the destination image. The executable example_linear_quantizer executes this example. See the bit_shift, extrema, and window_level manual pages. 6.2 Develop a program that quantizes an unsigned integer, 8-bit, monochrome image according to the cube root lightness function of Eq. 6.3-4 and reconstructs it to eight bits per pixel. Steps: (a) Display the source image. (b) Scale the source image to unit range. (c) Perform the cube root lightness transformation. (d) Scale the lightness function image to 0 to 255. (e) Perform a right overflow shift by three bits on the source image. (f) Perform a left overflow shift by three bits on the right bit-shifted source image.


(g) Scale the reconstruction levels to 3-bit values. (h) Scale the reconstruction image to the lightness function range. (i) Perform the inverse lightness function. (j) Scale the inverse lightness function to the display range. (k) Display the destination image. The executable example_lightness_quantizer executes this example. See the monadic_arithmetic, unary_integer, window_level, and bit_shift manual pages.
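The core of exercise 6.1 can be sketched in plain C, outside of PIKS; the sketch follows steps (b) and (c) literally, discarding the low-order bits with a right shift and reconstructing with a left shift, and the plain-array image representation is an assumption of the sketch.

/* Right overflow shift by three bits followed by a left overflow shift
   by three bits, applied to an 8-bit monochrome image. */
static void shift_requantize(const unsigned char *src, unsigned char *dst,
                             long pixels)
{
    for (long i = 0; i < pixels; i++)
        dst[i] = (unsigned char)((src[i] >> 3) << 3);
}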

21.7 CONVOLUTION EXERCISES 7.1 Develop a program that convolves a test image with a 3 × 3 uniform impulse response array for three convolution boundary conditions. Steps: (a) Create a 101 × 101 pixel, real datatype test image consisting of a 2 × 2 cluster of amplitude 1.0 pixels in the upper left corner and a single pixel of amplitude 1.0 in the image center. Set all other pixels to 0.0. (b) Create a 3 × 3 uniform impulse response array. (c) Convolve the source image with the impulse response array for the following three boundary conditions: enclosed array, zero exterior, reflected exterior. (d) Print a 5 × 5 pixel image array about the upper left corner and image center for each boundary condition and explain the results. The executable example_convolve_boundary executes this example. See the allocate_neighbourhood_array, impulse_rectangular, image_constant, put_pixel, get_pixel, and convolve_2d manual pages. 7.2 Develop a program that convolves an unsigned integer, 8-bit, colour image with a 5 × 5 uniform impulse response array acquired from the data object repository. Steps: (a) Display the source colour image. (b) Allocate the impulse response array. (c) Fetch the impulse response array from the data object repository. (d) Convolve the source image with the impulse response array. (e) Display the destination image. The executable example_repository_convolve_colour executes this example. See the allocate_neighbourhood_array, return_repository_id, and convolve_2d manual pages.
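As a point of comparison for exercise 7.1, the following is a plain C sketch of direct 2-D convolution of a real image with a 3 × 3 uniform impulse response under the "zero exterior" boundary condition; the row-major float layout and the function name are assumptions of the sketch, and the other boundary options are not shown.

static void convolve_3x3_uniform(const float *src, float *dst,
                                 int width, int height)
{
    const float w = 1.0f / 9.0f;          /* uniform impulse response */
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            float acc = 0.0f;
            for (int k = -1; k <= 1; k++) {
                for (int j = -1; j <= 1; j++) {
                    int xx = x + j;
                    int yy = y + k;
                    if (xx < 0 || xx >= width || yy < 0 || yy >= height)
                        continue;         /* zero exterior pixels     */
                    acc += w * src[yy * width + xx];
                }
            }
            dst[y * width + x] = acc;
        }
    }
}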


21.8 UNITARY TRANSFORM EXERCISES 8.1 Develop a program that generates the Fourier transform log magnitude ordered display of Figure 8.2-4d for the smpte_girl_luma image. Steps: (a) Display the source monochrome image. (b) Scale the source image to unit amplitude. (c) Perform a two-dimensional Fourier transform on the unit amplitude source image with the ordered display option. (d) Scale the log magnitude according to Eq. 8.2-9 where a = 1.0 and b = 100.0. (e) Display the Fourier transformed image. The executable example_fourier_transform_spectrum executes this example. See the convert_image_datatype, monadic_arithmetic, image_constant, complex_composition, transform_fourier, complex_magnitude, window_level, and unary_real manual pages. 8.2 Develop a program that generates the Hartley transform log magnitude ordered display of Figure 8.3-2c for the smpte_girl_luma image by manipulation of the Fourier transform coefficients of the image. Steps: (a) Display the source monochrome image. (b) Scale the source image to unit amplitude. (c) Perform a two-dimensional Fourier transform on the unit amplitude source image with the dc term at the origin option. (d) Extract the Hartley components from the Fourier components. (e) Scale the log magnitude according to Eq. 8.2-9 where a = 1.0 and b = 100.0. (f) Display the Hartley transformed image. The executable example_transform_hartley executes this example. See the convert_image_datatype, monadic_arithmetic, image_constant, complex_composition, transform_fourier, complex_decomposition, dyadic_arithmetic, complex_ magnitude, window_level, and unary_real manual pages.
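Step (d) of exercise 8.2 can be sketched in a few lines of plain C: for a real-valued source image and the usual DFT sign convention, each Hartley coefficient is the real part minus the imaginary part of the corresponding Fourier coefficient. The interleaved real/imaginary layout and the function name are assumptions of the sketch.

/* Extract Hartley coefficients from interleaved (re, im) Fourier
   coefficients of a real source image. */
static void hartley_from_fourier(const float *fourier, float *hartley,
                                 long coefficients)
{
    for (long i = 0; i < coefficients; i++)
        hartley[i] = fourier[2 * i] - fourier[2 * i + 1];
}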

21.9 LINEAR PROCESSING EXERCISES 9.1 Develop a program that performs fast Fourier transform convolution following the steps of Section 9.3. Execute this program using an 11 × 11 uniform impulse response array on an unsigned integer, 8-bit, 512 × 512 monochrome image without zero padding. Steps:


(a) Display the source monochrome image. (b) Scale the source image to unit range. (c) Perform a two-dimensional Fourier transform of the source image. (d) Display the clipped magnitude of the source Fourier transform. (e) Allocate an 11 × 11 impulse response array. (f) Create an 11 × 11 uniform impulse response array. (g) Convert the impulse response array to an image and embed it in a 512 × 512 zero background image. (h) Perform a two-dimensional Fourier transform of the embedded impulse image. (i) Display the clipped magnitude of the embedded impulse Fourier transform. (j) Multiply the source and embedded impulse Fourier transforms. (k) Perform a two-dimensional inverse Fourier transform of the product image. (l) Display the destination image. (m) Printout the erroneous pixels along a mid image row. The executable example_fourier_filtering executes this example. See the monadic_arithmetic, image_constant, complex_composition, transform_fourier, complex_magnitude, allocate_neighbourhood_array, impulse_rectangular, convert_array_to_image, dyadic_complex, and complex_decomposition manual pages.
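The heart of exercise 9.1 is step (j), the pointwise complex multiplication of the source spectrum by the embedded impulse response spectrum; a plain C sketch follows, with the interleaved real/imaginary layout and the function name as assumptions of the sketch.

static void multiply_spectra(const float *a, const float *b, float *prod,
                             long coefficients)
{
    for (long i = 0; i < coefficients; i++) {
        float ar = a[2 * i], ai = a[2 * i + 1];
        float br = b[2 * i], bi = b[2 * i + 1];
        prod[2 * i]     = ar * br - ai * bi;   /* real part      */
        prod[2 * i + 1] = ar * bi + ai * br;   /* imaginary part */
    }
}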

21.10 IMAGE ENHANCEMENT EXERCISES 10.1 Develop a program that displays the Q component of a YIQ colour image over its full dynamic range. Steps: (a) Display the source monochrome RGB image. (b) Scale the RGB image to unit range and convert it to the YIQ space. (c) Extract the Q component image. (d) Compute the amplitude extrema. (e) Use the window_level conversion function to display the Q component.


The executable example_Q_display executes this example. See the monadic_arithmetic, colour_conversion_linear, extrema, extract_pixel_plane, and window_level manual pages. 10.2 Develop a program to histogram equalize an unsigned integer, 8-bit, monochrome image. Steps: (a) Display the source monochrome image. (b) Compute the image histogram. (c) Compute the image cumulative histogram. (d) Load the image cumulative histogram into a lookup table. (e) Pass the image through the lookup table. (f) Display the enhanced destination image. The executable example_histogram_equalization executes this example. See the allocate_histogram, histogram_1d, export_histogram, allocate_lookup_ table, export_lut, and lookup_table manual pages. 10.3 Develop a program to perform outlier noise cleaning of the unsigned integer, 8-bit, monochrome image peppers_replacement_noise following the algorithm of Figure 10.3-9. Steps: (a) Display the source monochrome image. (b) Compute a 3 × 3 neighborhood average image. (c) Display the neighbourhood image. (d) Create a magnitude of the difference image between the source image and the neighbourhood image. (e) Create a Boolean mask image which is TRUE if the magnitude difference image is greater than a specified error tolerance, e.g. 15%. (f) Convert the mask image to a ROI and use it to generate the outlier destination image. (g) Display the destination image. The executable example_outlier executes this example. See the return_repository_id, convolve_2d, dyadic_predicate, allocate_roi, convert_image_ to_roi, bind_roi, and convert_image_datatype manual pages. 10.4 Develop a program that performs linear edge crispening of an unsigned integer, 8-bit, colour image by convolution. Steps: (a) Display the source colour image. (b) Import the Mask 3 impulse response array defined by Eq.10.3-1c.


(c) Convert the ND source image to SD datatype. (d) Convolve the colour image with the impulse response array. (e) Clip the convolved image over the dynamic range of the source image to avoid amplitude undershoot and overshoot. (f) Display the clipped destination image. The executable example_edge_crispening executes this example. See the allocate_neighbourhood_array, import_neighbourhood_array, convolve_2d, extrema, and window_level manual pages. 10.5 Develop a program that performs 7 × 7 plus-shape median filtering of the unsigned integer, 8-bit, monochrome image peppers_replacement _noise. Steps: (a) Display the source monochrome image. (b) Create a 7 × 7 Boolean mask array. (c) Perform median filtering. (d) Display the destination image. The executable example_filtering_median_plus7 executes this example. See the allocate_neighbourhood_array, import_neighbourhood_array, and filtering _median manual pages.
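For orientation, the algorithm of exercise 10.2 can be sketched in plain C, outside of PIKS: the cumulative histogram, scaled to the display range, becomes the equalizing lookup table. The plain-array layout and the 0 to 255 normalization are assumptions of the sketch.

static void histogram_equalize(const unsigned char *src, unsigned char *dst,
                               long pixels)
{
    long hist[256] = { 0 };
    long cum = 0;
    unsigned char lut[256];

    if (pixels <= 0)
        return;

    for (long i = 0; i < pixels; i++)       /* first-order histogram    */
        hist[src[i]]++;

    for (int v = 0; v < 256; v++) {         /* cumulative histogram as  */
        cum += hist[v];                     /* the equalizing LUT       */
        lut[v] = (unsigned char)((255 * cum) / pixels);
    }

    for (long i = 0; i < pixels; i++)       /* point transformation     */
        dst[i] = lut[src[i]];
}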

21.11 IMAGE RESTORATION MODELS EXERCISES 11.1 Develop a program that creates an unsigned integer, 8-bit, monochrome image with zero mean, additive, uniform noise with a signal-to-noise ratio of 10.0. The program should execute for arbitrary size source images. Steps: (a) Display the source monochrome image. (b) In application space, create a unit range noise image array using the C math.h function rand. (c) Import the noise image array. (d) Display the noise image array. (e) Scale the noise image array to produce a noise image array with zero mean and a SNR of 10.0. (f) Compute the mean and standard deviation of the noise image. (g) Read an unsigned integer, 8-bit monochrome image source image file and normalize it to unit range. (h) Add the noise image to the source image and clip to unit range. (i) Display the noisy source image.
The executable example_additive_noise executes this example. See the monadic_arithmetic, import_image, moments, window_level, and dyadic_arithmetic manual pages.

11.2 Develop a program that creates an unsigned integer, 8-bit, monochrome image with replacement impulse noise. The program should execute for arbitrary-size source images. Steps:
(a) Display the source monochrome image.
(b) In application space, create a unit-range noise image array using the C library function rand.
(c) Import the noise image array.
(d) Read a source image file and normalize to unit range.
(e) Replace each source image pixel with 0.0 if the noise pixel is less than 1%, and replace each source image pixel with 1.0 if the noise pixel is greater than 99%. The replacement operation can be implemented by image copying under ROI control.
(f) Display the noisy source image.
The executable example_replacement_noise executes this example. See the monadic_arithmetic, import_image, dyadic_predicate, allocate_roi, bind_roi, convert_image_datatype, and dyadic_arithmetic manual pages.

21.12 IMAGE RESTORATION EXERCISES

12.1 Develop a program that computes a 512 × 512 Wiener filter transfer function for the blur impulse response array of Eq. 10.3-2c and white noise with an SNR of 10.0 (a sketch of step (d) appears after this exercise). Steps:
(a) Fetch the impulse response array from the repository.
(b) Convert the impulse response array to an image and embed it in a 512 × 512 zero background array.
(c) Compute the two-dimensional Fourier transform of the embedded impulse response array.
(d) Form the Wiener filter transfer function according to Eq. 12.2-23.
(e) Display the magnitude of the Wiener filter transfer function.
The executable example_wiener executes this example. See the return_repository_id, transform_fourier, image_constant, complex_conjugate, dyadic_arithmetic, and complex_magnitude manual pages.
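The following plain-C sketch illustrates step (d) of Exercise 12.1. Because Eq. 12.2-23 is not reproduced here, the sketch assumes the standard white-noise Wiener form H_W(u, v) = H*(u, v) / ( |H(u, v)|^2 + 1/SNR ); the array layout and function name are likewise assumptions, and the PIKS operators listed above perform the corresponding computation inside the library.

#include <complex.h>

/* Wiener transfer function for white noise (Exercise 12.1, step (d)).
   H is the 2-D Fourier transform of the embedded blur impulse response,
   stored row-major in an N x N array; snr is the signal-to-noise power
   ratio (10.0 in the exercise). */
void wiener_transfer(const double complex *H, double complex *HW,
                     int N, double snr)
{
    for (int i = 0; i < N * N; i++) {
        double power = creal(H[i]) * creal(H[i]) + cimag(H[i]) * cimag(H[i]);
        HW[i] = conj(H[i]) / (power + 1.0 / snr);
    }
}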

21.13 GEOMETRICAL IMAGE MODIFICATION EXERCISES

13.1 Develop a program that minifies an unsigned integer, 8-bit, monochrome image by a factor of two and rotates the minified image by 45 degrees about its center using bilinear interpolation. Display the geometrically modified image. Steps:
(a) Display the source monochrome image.
(b) Set the global interpolation mode to bilinear.
(c) Set the first work image to zero.
(d) Minify the source image into the first work image.
(e) Set the second work image to zero.
(f) Translate the first work image into the center of the second work image.
(g) Set the destination image to zero.
(h) Rotate the second work image about its center into the destination image.
(i) Display the destination image.
The executable example_minify_rotate executes this example. See the image_constant, resize, translate, rotate, and set_globals manual pages.

13.2 Develop a program that performs shearing of the rows of an unsigned integer, 8-bit, monochrome image using the warp_lut operator such that the last image row is shifted 10% of the row width and all other rows are shifted proportionally. Steps:
(a) Display the source monochrome image.
(b) Set the global interpolation mode to bilinear.
(c) Set the warp polynomial coefficients.
(d) Perform polynomial warping.
(e) Display the destination image.
The executable example_shear executes this example. See the set_globals, image_constant, and warp_lut manual pages.
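The following plain-C sketch illustrates the geometry of the row shear in Exercise 13.2: destination row y samples the source at x - 0.1(W - 1)y/(H - 1), so the last row is shifted by 10% of the row width and intermediate rows proportionally. It is not the warp_lut operator; the buffer layout, the border handling (pixels mapped outside the source are set to zero), and the function name are assumptions made for the sketch.

#include <stdint.h>
#include <math.h>

/* Row shear with linear interpolation along each row (Exercise 13.2). */
void shear_rows(const uint8_t *src, uint8_t *dst, int width, int height)
{
    for (int y = 0; y < height; y++) {
        double shift = 0.1 * (width - 1) * (double)y / (double)(height - 1);
        for (int x = 0; x < width; x++) {
            double xs = (double)x - shift;   /* inverse mapping into the source row */
            int x0 = (int)floor(xs);
            double a = xs - x0;
            if (x0 < 0 || x0 >= width) {
                dst[y * width + x] = 0;
            } else {
                int x1 = (x0 + 1 < width) ? x0 + 1 : x0;
                double v = (1.0 - a) * src[y * width + x0]
                         + a * src[y * width + x1];
                dst[y * width + x] = (uint8_t)(v + 0.5);
            }
        }
    }
}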

21.14 MORPHOLOGICAL IMAGE PROCESSING EXERCISES

14.1 Develop a program that reads the 64 × 64 Boolean test image boolean_test and dilates it by one and two iterations with a 3 × 3 structuring element. Steps:
(a) Read the source image and zoom it by a factor of 8:1.
(b) Create a 3 × 3 structuring element array.
(c) Dilate the source image with one iteration.
(d) Display the zoomed destination image.
(e) Dilate the source image with two iterations.
(f) Display the zoomed destination image.
The executable example_boolean_dilation executes this example. See the allocate_neighbourhood_array, import_neighbourhood_array, erosion_dilation_boolean, zoom, and boolean_display manual pages.

14.2 Develop a program that reads the 64 × 64 Boolean test image boolean_test and erodes it by one and two iterations with a 3 × 3 structuring element. Steps:
(a) Read the source image and zoom it by a factor of 8:1.
(b) Create a 3 × 3 structuring element array.
(c) Erode the source image with one iteration.
(d) Display the zoomed destination image.
(e) Erode the source image with two iterations.
(f) Display the zoomed destination image.
The executable example_boolean_erosion executes this example. See the allocate_neighbourhood_array, import_neighbourhood_array, erosion_dilation_boolean, zoom, and boolean_display manual pages.

14.3 Develop a program that performs gray scale dilation on an unsigned integer, 8-bit, monochrome image with a 5 × 5 zero-value structuring element and a 5 × 5 TRUE state mask (a plain-C sketch of gray scale dilation and erosion appears after Exercise 14.4). Steps:
(a) Display the source image.
(b) Create a 5 × 5 Boolean mask.
(c) Perform gray scale dilation on the source image.
(d) Display the destination image.
The executable example_dilation_grey_ND executes this example. See the allocate_neighbourhood_array, import_neighbourhood_array, and erosion_dilation_grey manual pages.

14.4 Develop a program that performs gray scale erosion on an unsigned integer, 8-bit, monochrome image with a 5 × 5 zero-value structuring element and a 5 × 5 TRUE state mask. Steps:
(a) Display the source image.
(b) Create a 5 × 5 Boolean mask.
(c) Perform gray scale erosion on the source image.
(d) Display the destination image.
The executable example_erosion_gray_ND executes this example. See the allocate_neighbourhood_array, import_neighbourhood_array, and erosion_dilation_gray manual pages.
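The following plain-C sketch illustrates gray scale dilation and erosion with a 5 × 5 zero-value structuring element, as used in Exercises 14.3 and 14.4: dilation takes the local maximum and erosion the local minimum over the window. It is not the PIKS erosion_dilation operator; the buffer layout, the copy-through border handling, and the function name are assumptions made for the sketch.

#include <stdint.h>

/* Gray scale dilation (dilate != 0) or erosion (dilate == 0) with a
   5 x 5 zero-value structuring element. */
void gray_morph_5x5(const uint8_t *src, uint8_t *dst,
                    int width, int height, int dilate)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            /* copy border pixels unchanged */
            if (y < 2 || y >= height - 2 || x < 2 || x >= width - 2) {
                dst[y * width + x] = src[y * width + x];
                continue;
            }
            uint8_t v = src[y * width + x];
            for (int dy = -2; dy <= 2; dy++)
                for (int dx = -2; dx <= 2; dx++) {
                    uint8_t s = src[(y + dy) * width + (x + dx)];
                    if (dilate ? (s > v) : (s < v))
                        v = s;
                }
            dst[y * width + x] = v;
        }
    }
}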

21.15 EDGE DETECTION EXERCISES

15.1 Develop a program that generates the Sobel edge gradient according to Figure 15.2-1 using a square root sum of squares gradient combination (a plain-C sketch appears after Exercise 15.2). Steps:
(a) Display the source image.
(b) Allocate the horizontal and vertical Sobel impulse response arrays.
(c) Fetch the horizontal and vertical Sobel impulse response arrays from the repository.
(d) Convolve the source image with the horizontal Sobel.
(e) Display the Sobel horizontal gradient.
(f) Convolve the source image with the vertical Sobel.
(g) Display the Sobel vertical gradient.
(h) Form the square root sum of squares of the gradients.
(i) Display the Sobel gradient.
The executable example_sobel_gradient executes this example. See the allocate_neighbourhood_array, return_repository_id, convolve_2d, unary_real, and dyadic_arithmetic manual pages.

15.2 Develop a program that generates the Laplacian of Gaussian gradient for an 11 × 11 impulse response array and a standard deviation of 2.0. Steps:
(a) Display the source image.
(b) Allocate the Laplacian of Gaussian impulse response array.
(c) Generate the Laplacian of Gaussian impulse response array.
(d) Convolve the source image with the Laplacian of Gaussian impulse response array.
(e) Display the Laplacian of Gaussian gradient.
The executable example_LoG_gradient executes this example. See the allocate_neighbourhood_array, impulse_laplacian_of_gaussian, and convolve_2d manual pages.
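The following plain-C sketch illustrates the Sobel gradient of Exercise 15.1. The two 3 × 3 kernels below follow one common sign convention; the arrays actually used by the text are those of Figure 15.2-1 and the repository. The buffer layout, zeroed borders, and function name are assumptions made for the sketch.

#include <stdint.h>
#include <math.h>

/* Sobel edge gradient: convolve with horizontal and vertical 3 x 3 kernels
   and combine as the square root of the sum of squares (Exercise 15.1). */
void sobel_gradient(const uint8_t *src, double *grad, int width, int height)
{
    static const int hx[3][3] = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
    static const int hy[3][3] = { {  1, 2, 1 }, {  0, 0, 0 }, { -1, -2, -1 } };

    for (int i = 0; i < width * height; i++)
        grad[i] = 0.0;

    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            int gx = 0, gy = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int s = src[(y + dy) * width + (x + dx)];
                    gx += hx[dy + 1][dx + 1] * s;
                    gy += hy[dy + 1][dx + 1] * s;
                }
            grad[y * width + x] = sqrt((double)(gx * gx + gy * gy));
        }
    }
}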

21.16 IMAGE FEATURE EXTRACTION EXERCISES

16.1 Develop a program that generates the 7 × 7 moving window mean and standard deviation features of an unsigned integer, 8-bit, monochrome image (a plain-C sketch appears after Exercise 16.3). Steps:
(a) Display the source image.
(b) Scale the source image to unit range.
(c) Create a 7 × 7 uniform impulse response array.
(d) Compute the moving window mean with the uniform impulse response array.
(e) Display the moving window mean image.
(f) Compute the moving window standard deviation with the uniform impulse response array.
(g) Display the moving window standard deviation image.
The executable example_amplitude_features executes this example. See the allocate_neighbourhood_array, impulse_rectangular, convolve_2d, dyadic_arithmetic, and unary_real manual pages.

16.2 Develop a program that computes the mean, standard deviation, skewness, kurtosis, energy, and entropy first-order histogram features of an unsigned integer, 8-bit, monochrome image. Steps:
(a) Display the source image.
(b) Compute the histogram of the source image.
(c) Export the histogram and compute the histogram features.
The executable example_histogram_features executes this example. See the allocate_histogram, histogram_1d, and export_histogram manual pages.

16.3 Develop a program that computes the nine Laws texture features of an unsigned integer, 8-bit, monochrome image. Use a 7 × 7 moving window to compute the standard deviation. Steps:
(a) Display the source image.
(b) Allocate nine 3 × 3 impulse response arrays.
(c) Fetch the nine Laws impulse response arrays from the repository.
(d) For each Laws array: convolve the source image with the Laws array; compute the moving window mean of the Laws convolution; compute the moving window standard deviation of the Laws convolution image; display the Laws texture feature.
The executable example_laws_features executes this example. See the allocate_neighbourhood_array, impulse_rectangular, return_repository_id, convolve_2d, dyadic_arithmetic, and unary_real manual pages.
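The following plain-C sketch illustrates the 7 × 7 moving window mean and standard deviation of Exercise 16.1, with the standard deviation formed from the window means of the image and of its square, sd = sqrt(mean(x^2) - mean(x)^2), which mirrors the convolution-based construction in the exercise. The unit-range double buffer, zeroed borders, and function name are assumptions made for the sketch.

#include <math.h>

/* 7 x 7 moving window mean and standard deviation features (Exercise 16.1). */
void window_mean_sd(const double *src, double *mean, double *sd,
                    int width, int height)
{
    const int R = 3;                  /* 7 x 7 window radius */
    const double norm = 1.0 / 49.0;   /* 1 / (window area) */

    for (int i = 0; i < width * height; i++)
        mean[i] = sd[i] = 0.0;

    for (int y = R; y < height - R; y++) {
        for (int x = R; x < width - R; x++) {
            double s = 0.0, s2 = 0.0;
            for (int dy = -R; dy <= R; dy++)
                for (int dx = -R; dx <= R; dx++) {
                    double v = src[(y + dy) * width + (x + dx)];
                    s += v;
                    s2 += v * v;
                }
            double m = s * norm;
            double var = s2 * norm - m * m;
            mean[y * width + x] = m;
            sd[y * width + x] = var > 0.0 ? sqrt(var) : 0.0;
        }
    }
}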

21.17 IMAGE SEGMENTATION EXERCISES

17.1 Develop a program that thresholds the monochrome image parts and displays the thresholded image. Determine the threshold value that provides the best visual segmentation. Steps:
(a) Display the source image.
(b) Threshold the source image into a Boolean destination image.
(c) Display the destination image.
The executable example_threshold executes this example. See the threshold and boolean_display manual pages.

17.2 Develop a program that locates and tags the watershed segmentation local minima in the monochrome image segmentation_test. Steps:
(a) Display the source image.
(b) Generate a 3 × 3 Boolean mask.
(c) Erode the source image into a work image with the Boolean mask.
(d) Compute the local minima of the work image.
(e) Display the local minima image.
The executable example_watershed executes this example. See the erosion_dilation_grey and dyadic_predicate manual pages.
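The following plain-C sketch illustrates the local minimum tagging of Exercise 17.2: a pixel is tagged when its value equals the 3 × 3 neighborhood minimum (the gray scale erosion at that position), which is the comparison performed with dyadic_predicate in the exercise. Plateaus of equal-valued pixels are all tagged by this simple test; the buffer layout and function name are assumptions made for the sketch.

#include <stdint.h>

/* Tag local minima: mask[] receives 1 where the pixel equals its 3 x 3
   neighborhood minimum, 0 elsewhere (Exercise 17.2). */
void tag_local_minima(const uint8_t *src, uint8_t *mask, int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t mn = src[y * width + x];
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= height || xx < 0 || xx >= width)
                        continue;
                    if (src[yy * width + xx] < mn)
                        mn = src[yy * width + xx];
                }
            mask[y * width + x] = (src[y * width + x] == mn) ? 1 : 0;
        }
    }
}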

21.18 SHAPE ANALYSIS EXERCISES

18.1 Develop a program that computes the scaled second-order central moments of the monochrome image ellipse. Steps:
(a) Display the source image.
(b) Normalize the source image to unit range.
(c) Export the source image and perform the computation in application space in double precision.
The executable example_spatial_moments executes this example. See the monadic_arithmetic and export_image manual pages.
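The following plain-C sketch illustrates the application-space computation of Exercise 18.1, step (c): the image centroid is found first, and the raw second-order central moments are then accumulated in double precision. The scaling convention that turns these into the text's scaled moments is not reproduced here and would be applied to the returned values; the buffer layout and function name are assumptions made for the sketch.

/* Raw second-order central moments of a unit-range image f[] (Exercise 18.1). */
void central_moments(const double *f, int width, int height,
                     double *m20, double *m02, double *m11)
{
    double m00 = 0.0, mx = 0.0, my = 0.0;

    *m20 = *m02 = *m11 = 0.0;

    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            double v = f[y * width + x];
            m00 += v;
            mx += x * v;
            my += y * v;
        }
    if (m00 <= 0.0)
        return;                            /* empty image: moments remain zero */

    double xc = mx / m00, yc = my / m00;   /* centroid */

    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            double v = f[y * width + x];
            *m20 += (x - xc) * (x - xc) * v;
            *m02 += (y - yc) * (y - yc) * v;
            *m11 += (x - xc) * (y - yc) * v;
        }
}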

21.19 IMAGE DETECTION AND REGISTRATION EXERCISES

19.1 Develop a program that performs normalized cross-correlation template matching of the monochrome source image L_source and the monochrome template image L_template using the convolution operator as a means of correlation array computation. Steps:
(a) Display the source image.
(b) Display the template image.
(c) Rotate the template image 180 degrees and convert it to an impulse response array.
(d) Convolve the source image with the impulse response array to form the numerator of the cross-correlation array.
(e) Display the numerator image.
(f) Square the source image and compute its moving window average energy by convolution with a rectangular impulse response array to form the denominator of the cross-correlation array.
(g) Display the denominator image.
(h) Form the cross-correlation array image.
(i) Display the cross-correlation array image.
Note that it is necessary to properly scale the source and template images to obtain valid results. The executable example_template executes this example. See the allocate_neighbourhood_array, flip_spin_transpose, convert_image_to_array, impulse_rectangular, convolve_2d, and monadic_arithmetic manual pages.
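The following plain-C sketch illustrates the correlation measure of Exercise 19.1 computed directly in the spatial domain: at each valid offset the correlation of the source window with the template (the numerator) is divided by the square root of the product of the window energy and the template energy. The exercise itself builds the same numerator and denominator by convolution; the exact normalization adopted by the text may differ in detail, and the buffer layout, zeroed border offsets, and function name are assumptions made for the sketch.

#include <math.h>

/* Direct normalized cross-correlation of a template against a source image
   (Exercise 19.1). out[] has the size of the source; offsets where the
   template does not fit are left at zero. */
void normalized_correlation(const double *src, int sw, int sh,
                            const double *tmpl, int tw, int th, double *out)
{
    double te = 0.0;                        /* template energy */
    for (int i = 0; i < tw * th; i++)
        te += tmpl[i] * tmpl[i];

    for (int i = 0; i < sw * sh; i++)
        out[i] = 0.0;

    for (int y = 0; y + th <= sh; y++)
        for (int x = 0; x + tw <= sw; x++) {
            double num = 0.0, we = 0.0;     /* numerator, window energy */
            for (int v = 0; v < th; v++)
                for (int u = 0; u < tw; u++) {
                    double s = src[(y + v) * sw + (x + u)];
                    num += s * tmpl[v * tw + u];
                    we += s * s;
                }
            if (we > 0.0 && te > 0.0)
                out[y * sw + x] = num / sqrt(we * te);
        }
}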

APPENDIX 1 VECTOR-SPACE ALGEBRA CONCEPTS

This appendix contains reference material on vector-space algebra concepts used in the book.

A1.1. VECTOR ALGEBRA

This section provides a summary of vector and matrix algebraic manipulation procedures utilized in the book. References 1 to 5 may be consulted for formal derivations and proofs of the statements of definition presented here.

Vector. An N × 1 column vector f is a one-dimensional vertical arrangement,

f = \begin{bmatrix} f(1) \\ f(2) \\ \vdots \\ f(n) \\ \vdots \\ f(N) \end{bmatrix} \qquad (A1.1-1)
of the elements f(n), where n = 1, 2, ..., N. A 1 × N row vector h is a one-dimensional horizontal arrangement

h = \begin{bmatrix} h(1) & h(2) & \cdots & h(n) & \cdots & h(N) \end{bmatrix} \qquad (A1.1-2)

of the elements h(n), where n = 1, 2, ..., N. In this book, unless otherwise indicated, all boldface lowercase letters denote column vectors. Row vectors are indicated by the transpose relation

f^{T} = \begin{bmatrix} f(1) & f(2) & \cdots & f(n) & \cdots & f(N) \end{bmatrix} \qquad (A1.1-3)

Matrix. An M × N matrix F is a two-dimensional arrangement
F = \begin{bmatrix} F(1,1) & F(1,2) & \cdots & F(1,N) \\ F(2,1) & F(2,2) & \cdots & F(2,N) \\ \vdots & \vdots & & \vdots \\ F(M,1) & F(M,2) & \cdots & F(M,N) \end{bmatrix} \qquad (A1.1-4)

of the elements F(m, n) into rows and columns, where m = 1, 2, ..., M and n = 1, 2, ..., N. The symbol 0 indicates a null matrix whose terms are all zeros. A diagonal matrix is a square matrix, M = N, for which all off-diagonal terms are zero; that is, F(m, n) = 0 if m ≠ n. An identity matrix, denoted by I, is a diagonal matrix whose diagonal terms are unity. The identity symbol is often subscripted to indicate its dimension: I_N is an N × N identity matrix. A submatrix F_{pq} is a matrix partition of a larger matrix F of the form

F = \begin{bmatrix} \mathbf{F}_{1,1} & \mathbf{F}_{1,2} & \cdots & \mathbf{F}_{1,Q} \\ \vdots & \vdots & & \vdots \\ \mathbf{F}_{P,1} & \mathbf{F}_{P,2} & \cdots & \mathbf{F}_{P,Q} \end{bmatrix} \qquad (A1.1-5)

Matrix Addition. The sum C = A + B of two matrices is defined only for matrices of the same size. The sum matrix C is an M × N matrix whose elements are C(m, n) = A(m, n) + B(m, n).

Matrix Multiplication. The product C = AB of two matrices is defined only when the number of columns of A equals the number of rows of B. The M × N product matrix C of the M × P matrix A and the P × N matrix B is a matrix whose general element is given by

C(m, n) = \sum_{p=1}^{P} A(m, p) B(p, n) \qquad (A1.1-6)
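For example, with M = P = N = 2,

AB = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}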

Matrix Inverse. The matrix inverse, denoted by A^{-1}, of a square matrix A has the property that A A^{-1} = I and A^{-1} A = I. If such a matrix A^{-1} exists, the matrix A is said to be nonsingular; otherwise, A is singular. If a matrix possesses an inverse, the inverse is unique. The matrix inverse of a matrix inverse is the original matrix. Thus

[ A^{-1} ]^{-1} = A \qquad (A1.1-7)

If matrices A and B are nonsingular,

[ AB ]^{-1} = B^{-1} A^{-1} \qquad (A1.1-8)
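For example, the 2 × 2 matrix

A = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \qquad \text{has inverse} \qquad A^{-1} = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}

since

A A^{-1} = \begin{bmatrix} 2 \cdot 1 + 1 \cdot (-1) & 2 \cdot (-1) + 1 \cdot 2 \\ 1 \cdot 1 + 1 \cdot (-1) & 1 \cdot (-1) + 1 \cdot 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I_2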

If matrix A is nonsingular, an