
 

SHADOW
REMOVAL USING COLOR SPACE MODEL BASED ON ILLUMINATION

Abstract


 

The new color space has three channels, each with its own physical meaning, such that an image is separated into intrinsic reflectance (the first two channels) and lighting information (the third channel). Consequently, several illumination-related applications can be reached directly, e.g., shadow removal and relighting. Furthermore, the new color space can be transformed to and from other color spaces (such as RGB) directly, and it guarantees a gamut for color representation. The main contribution is deducing the linear relationship between the pixel values of a surface in and out of shadow regions and removing shadows based on shadow identification. The contributions of this work are as follows: (1) a color space based on separating intrinsic and lighting information, which corresponds one-to-one with the RGB color space; (2) a color space model that cooperates with the new color space to estimate intrinsic lighting and shadow intensities; and (3) a decomposition, using this new representation, of an image into reflectance and illumination factors such that illumination-related applications can be achieved directly.

 

               Keywords: Shadow detection, Shadow Removal, Illumination.

 

Introduction

          

            The observed colors of natural scenes are strongly influenced by illumination, which may generate uneven lightness or shadows that greatly degrade the quality of various vision tasks. Therefore, increasing attention has been focused on extracting and representing illumination invariance from natural images, generally defined as an “intrinsic” image. Until now, no illumination-based color space has existed. Such a space should be constructed from the intrinsic and lighting information and be suitable for “direct” lighting processing.

 

            Generally, a color space is designed for a certain purpose. The RGB and CMYK color models are the most well-known color spaces for digital display and printing, respectively, and they are device-dependent color models. The Lab color space was designed to approximate human vision. The YUV color space was developed when engineers wanted color television within a black-and-white infrastructure. Based on different concepts, the HSV and HSL color spaces represent colors by hue, saturation, and value (brightness) or lightness (luminance). They are based more on how colors are organized and conceptualized in human vision in terms of color-making attributes. However, these color attributes do not correspond to the optical spectra in physics.

 

           The aforementioned color spaces are suitable for color representation, display, painting, and so forth. However, they generally lack the ability to intuitively show reflectance (material) and illumination information separately, which may impede related image processing and perception tasks such as shadow removal, relighting, and scene understanding.

 

Figure 1.1
Illumination based color space model

 

          One of the most fundamental tasks for any visual system is separating the changes in an image that are due to a change in the underlying imaged surfaces from changes that are due to the effects of the scene illumination. The interaction between light and surface is complex and introduces many unwanted artifacts into an image. For example, shading, shadows, specularities and inter-reflections, as well as changes due to local variation in the intensity or color of the illumination, all make it more difficult to achieve basic visual tasks such as image segmentation, object recognition and tracking. The importance of being able to separate illumination effects from reflectance has been well understood for a long time.

 

COLOR CONSTANCY

        

           For decades, researchers have tried to solve the problem of color constancy by proposing a number of algorithmic and instrumentation approaches. Nevertheless, no unique solution has been identified; given the wide range of computer vision applications that require color constancy, it is not possible to obtain one. This has led researchers in the field to identify sets of possible approaches that can be applied to particular problems. Imagine light emitted by a lamp and reflected by a red object, causing a color sensation in the brain of the observer. The physical composition of the reflected light depends on the color of the light source. However, this effect is compensated for by the human visual system; hence, regardless of the color of the light source, the observer perceives the true red color of the object. The ability to correct color deviations caused by a difference in illumination, as done by the human visual system, is known as color constancy.

 

         The same process is not trivial for machine vision systems in an unconstrained scene. Therefore, the goal of color constancy research is to achieve an illuminant-invariant description of a scene taken under illumination whose spectral characteristics are unknown (referred to as unknown illumination). It is a two-step process: in the first step, an estimate of the illuminant parameters is obtained, and in the second step, an illuminant-independent surface descriptor is parametrically computed. Often, an illumination-invariant descriptor of the scene is computed under an illumination whose spectral characteristics are known.
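As a concrete illustration of this two-step process, the classic gray-world algorithm (one of many possible approaches, and not the method proposed in this work) estimates the illuminant from the per-channel means and then applies a diagonal (von Kries) correction; a minimal NumPy sketch:

```python
import numpy as np

def gray_world_correct(img):
    """Gray-world color constancy: step 1 estimates the illuminant as the
    mean of each channel; step 2 scales the channels so the corrected
    per-channel means become equal (diagonal / von Kries correction)."""
    img = img.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel illuminant estimate
    gains = means.mean() / means              # diagonal correction factors
    return np.clip(img * gains, 0, 255)

# A synthetic image with a reddish cast: the red channel is doubled.
cast = np.full((4, 4, 3), [200.0, 100.0, 100.0])
balanced = gray_world_correct(cast)
```

After correction, the three channel means coincide, which is exactly the gray-world assumption that the average scene reflectance is achromatic.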

 

       A color image is a function of three variables; therefore, the assumptions are categorized into three classes:

 (i) assumptions based on sensors,

 (ii) assumptions based on surface reflectance, and

 (iii) assumptions based on illumination.

Most cameras automatically perform gamma correction, auto gain, white balancing and other operations that affect image acquisition. Since sensors automatically apply a gamma correction to the image, it is important to invert the gamma correction to obtain the true RGB values of the image.
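This inversion can be sketched as follows, assuming a simple power-law gamma of 2.2; real cameras typically use the sRGB transfer curve or a device-specific tone curve, so the exponent here is an illustrative assumption:

```python
import numpy as np

def inverse_gamma(img8, gamma=2.2):
    """Undo an assumed display gamma to approximate linear sensor RGB.
    Input: 8-bit image; output: linear values in [0, 1]."""
    return (img8.astype(np.float64) / 255.0) ** gamma

linear = inverse_gamma(np.array([[[255, 128, 0]]], dtype=np.uint8))
```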

(a) Image I; (b) I in RGB; (c) Reflectance R; (d) R in RGB

COLOR MODELS

           

           To utilize color as a visual cue in multimedia, image processing, graphics and computer vision applications, an appropriate method for representing the color signal is needed. The different color specification systems, or color models, address this need. Color spaces provide a rational method to specify, order, manipulate and effectively display the object colors under consideration. Thus, the selected color model should be well suited to the problem statement and its solution. Selecting the best color representation involves knowing how color signals are generated and what information is needed from these signals. In particular, color models may be used to define colors, discriminate between colors, judge similarity between colors and identify color categories for a number of applications. Color model literature can be found in the domains of modern science, such as physics, engineering, artificial intelligence, computer science and philosophy.

Device-oriented color models are associated with input, processing and output signal devices. Such spaces are of paramount importance in modern applications where there is a need to specify color in a way that is compatible with the hardware tools used to provide, manipulate or receive the color signals.

              User-oriented color models are utilized as a bridge between human operators and the hardware used to manipulate the color information. Such models allow the user to specify color in terms of perceptual attributes, and they can be considered an experimental approximation of the human perception of color.

Device-independent color models are used to specify color signals independently of the characteristics of a given device or application. Such models are important in applications where color comparisons and the transmission of visual information over networks connecting different hardware platforms are required.

 

RGB AND CMY COLOR MODELS

         

           Accurate representation of phenomena such as interference and color separation generally requires a fine spectral representation of light instead of the commonly used RGB components. The bidirectional reflectance distribution function (BRDF) has proven its efficiency in describing complex light interactions with surfaces. Two implementations of this approach are a Phong-like specular reflection model and a diffuse model. Even though these models are not completely physically based, the implementations show that realistic effects can be achieved by adjusting a small set of intuitive parameters.

The three primary colors (red, green and blue) and their combinations in the visible light spectrum are shown in Fig. 1. With different weights (R, G, B), their combination can produce different colors. After normalizing the values of R, G and B, we obtain the color cube (Figure 1.3.1). The colors on the diagonal of the cube, from the origin to the coordinate (1, 1, 1), represent the gray-level values.

 

                                                 RGB Graph of Primary Colors

    The CMY color model is based on the complementary colors cyan, magenta and yellow. With R, G and B normalized to [0, 1], this color model can be expressed as C = 1 − R, M = 1 − G, Y = 1 − B.

                                                     CMY Color Model

 

SHADOW REMOVAL

 

      A Lambertian model is adopted for image formation, so that if light with a spectral power distribution (SPD) denoted E(λ, x, y) is incident upon a surface whose surface reflectance function is denoted S(λ, x, y), then the response of the k-th camera sensor, with spectral sensitivity Q_k(λ), can be expressed as:

ρ_k(x, y) = ∫ E(λ, x, y) S(λ, x, y) Q_k(λ) dλ,   k ∈ {R, G, B}

It is possible to derive a 1-d illuminant-invariant (and hence shadow-free) representation at a single pixel given the following two assumptions: first, the camera sensors must be exact Dirac delta functions, and second, the illumination must be restricted to be Planckian.
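Under these two assumptions, log-chromaticities shift along a single known direction as the illuminant changes, so projecting perpendicular to that direction (angle theta, assumed known here, e.g. from camera calibration) cancels the shift. A NumPy sketch of such a 1-d invariant image, following this style of derivation rather than any specific implementation:

```python
import numpy as np

def invariant_image(rgb, theta):
    """1-d illuminant-invariant greyscale image (sketch).
    Log-chromaticities log(R/G), log(B/G) are projected onto the
    direction orthogonal to the assumed illuminant-variation direction."""
    rgb = rgb.astype(np.float64) + 1e-6          # avoid log(0)
    log_rg = np.log(rgb[..., 0] / rgb[..., 1])   # log(R/G)
    log_bg = np.log(rgb[..., 2] / rgb[..., 1])   # log(B/G)
    return log_rg * np.cos(theta) + log_bg * np.sin(theta)
```

Because the representation is built from channel ratios, it is already invariant to a uniform intensity change, which is the simplest case of a lighting change.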

 

PROPOSED SYSTEM

 

Problem Statement

 

First, for the given input image, extract the intrinsic features and light-intensity features. Then obtain shadow-free color images by (1) removing shadows from the given images; (2) recovering the “true” colors of the scenes; and (3) showing the shadow intensities of the original images. Finally, apply relighting by adding or subtracting a uniform lighting variation from the original lighting condition; the light intensities of pixels in and out of the shadow areas then change in opposite directions.

 

Objective

 

            To improve the visual quality of a natural image by constructing a color space model based on illumination, and to extract or separate intrinsic information and illumination from natural images, thereby improving the visual quality of the image.

 

Proposed System Methodology

 

         The new color space has three channels, each with its own physical meaning, such that an image is separated into intrinsic reflectance (the first two channels) and lighting information (the third channel). Consequently, several illumination-related applications can be reached directly, e.g., shadow removal and relighting. Furthermore, the new color space can be transformed to and from other color spaces (such as RGB) directly, and it guarantees a gamut for color representation. The main contribution is deducing the linear relationship between the pixel values of a surface in and out of shadow regions and removing shadows based on shadow identification. The contributions of this work are as follows. (1) The first complete color space based on separating intrinsic and lighting information is proposed, corresponding one-to-one with the RGB color space. (2) Via the proposed algorithm cooperating with the new color space, intrinsic lighting and shadow intensities can be estimated. Finally, (3) using this new representation, an image is decomposed into reflectance and illumination factors such that illumination-related applications can be achieved directly.

 

ADVANTAGES OF PROPOSED SYSTEM

 

The proposed system has the following
advantages:

 

·         The proposed method is more accurate.

·         The proposed method produces a new color model based on illumination.

·         The proposed method is faster than existing methods.

 

SYSTEM OVERVIEW

 

The overall system architecture is shown in the block diagram below: the input image is denoised to produce a noise-free image, features are extracted (fast Fourier transform), and intrinsic features and illumination-based features are obtained for shadow removal and relighting.

                                     Overall Block Diagram

    For the given input image, apply a Gaussian filter to remove the noise. Then extract the intrinsic features and light-intensity features to construct the color space model based on illumination. Then remove the shadow and apply relighting.
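The denoising step can be sketched as a separable Gaussian filter in pure NumPy; in practice one would call MATLAB's imgaussfilt or scipy.ndimage.gaussian_filter, so this is only a self-contained illustration:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian smoothing of a 2-D (grayscale) image.
    Rows and then columns are convolved with a normalized 1-D kernel;
    edges use zero padding via numpy.convolve's 'same' mode."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out
```

Because the kernel is normalized, a flat region passes through unchanged while noise and sharp impulses are spread out and attenuated.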

(i) Intrinsic lines

      The light distribution within the same illumination environment is not uniform everywhere because of various degrees of reflection and occlusion by other objects or by the object itself. Consequently, the pixels on the same reflectance should not simply be regarded as either inside or outside shadows but rather as under a continuous variation of light intensities.
                                                                 
Intensity lines

1. Here, I = (I1, I2, I3) is defined as the intrinsic value based on the basic color space model (R, G, B).

2. It provides an invariant for a set of log-RGB values.

3. These log-RGB values belong to the same reflectance but under different lighting conditions.

4. It identifies the intrinsic characteristic of the set of log-RGB values.
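The invariance described in points 1-4 can be demonstrated with a toy descriptor (an assumption for illustration, not the paper's exact definition of I): in log-RGB, a scalar lighting change shifts a pixel along the (1, 1, 1) direction, so the component orthogonal to that direction is unchanged across lighting conditions:

```python
import numpy as np

def intrinsic_value(pixel):
    """Toy intrinsic descriptor (illustrative, not the paper's exact I):
    the component of log-RGB orthogonal to the lighting direction (1,1,1).
    A scalar lighting change multiplies R, G, B equally, so in log space
    it only shifts the point along (1,1,1) and this component is invariant."""
    L = np.log(np.asarray(pixel, dtype=np.float64))
    ones = np.ones(3) / np.sqrt(3.0)
    return L - L.dot(ones) * ones   # remove the lighting component

shade = intrinsic_value([60.0, 40.0, 20.0])
lit = intrinsic_value([120.0, 80.0, 40.0])   # same surface, twice as bright
```

The two pixels have different log-RGB values but identical intrinsic components, which is exactly what places them on the same intrinsic line.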

 

(ii) Lighting level surfaces

Considering one specific reflectance under uneven lighting conditions, the log-RGB values of the pixels on the reflectance may vary because of different light intensities. However, they are distributed on the same intrinsic line. By setting the log-RGB values of all the pixels to one specific value, the entire reflectance is placed under the same lighting intensity. Therefore, by projecting all the pixels onto this plane, each reflectance is under the same light intensity. However, for the intrinsic lines with larger absolute values of the intercept, the brightness of the colors may approach the maximum because max(LR, LG, LB) is close to the defined upper bound of the log-RGB values (measured by the green dotted lines in the figure), whereas for the intrinsic lines with smaller absolute values of the intercept (green cross), the brightness may only reach the middle of the range. In this case, two colors on the same level surface have a large difference in brightness, which deviates from the purpose of the lighting level surface.
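A minimal sketch of this level-surface projection follows; `target` (the lighting level every pixel is projected to) is a hypothetical parameter standing in for the estimated intrinsic lighting level, and the paper's clustering-based estimation is not reproduced here:

```python
import numpy as np

def equalize_lighting(img, target):
    """Project every pixel's log-RGB onto the plane where the lighting
    component (along (1,1,1)) equals `target`, so each reflectance ends
    up under the same light intensity."""
    L = np.log(img.astype(np.float64) + 1e-6)   # log-RGB, avoid log(0)
    ones = np.ones(3) / np.sqrt(3.0)
    lighting = L @ ones                          # per-pixel lighting level
    L_eq = L + (target - lighting)[..., None] * ones
    return np.exp(L_eq)
```

After the projection, two pixels of the same reflectance seen under different intensities map to the same color, which is the shadow-removal effect the level surface is designed for.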

(iii) Shadow Removal

Shadow-free color images are obtained by:

1. removing shadows from the given images;

2. recovering the “true” colors of the scenes;

3. showing the shadow intensities of the original images.

 

(iv) Relighting

 

Each relit image is obtained by adding a uniform lighting variation to, or subtracting one from, the original lighting condition. Therefore, the light intensities of pixels in and out of the shadow areas change in opposite directions.
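The uniform part of this adjustment can be sketched as a global gain on the lighting channel; in the log domain, adding delta is equivalent to multiplying linear RGB by exp(delta). The opposite-direction behavior inside heavy shadows depends on the estimated shadow intensities and is not modeled in this sketch:

```python
import numpy as np

def relight_uniform(img, delta):
    """Add a uniform variation `delta` to the lighting (log) channel:
    multiply linear RGB by exp(delta), then clip to the displayable range."""
    out = img.astype(np.float64) * np.exp(delta)
    return np.clip(out, 0.0, 255.0)

brighter = relight_uniform(np.array([[100.0]]), np.log(2.0))   # doubles intensity
```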


SYSTEM IMPLEMENTATION

 

OVERVIEW

 

This module accepts an input image that contains shadow. It preprocesses the image, then removes the shadow and applies relighting, implemented in MATLAB.

Preprocessing

 

The light distribution within the same illumination environment is not uniform everywhere because of various degrees of reflection and occlusion by other objects or by the object itself. Consequently, the pixels on the same reflectance should not simply be regarded as either inside or outside shadows but rather as under a continuous variation of light intensities.

Here, I = (I1, I2, I3) is defined as the intrinsic value based on the basic color space model (R, G, B). It provides an invariant for a set of log-RGB values. These log-RGB values belong to the same reflectance but under different lighting conditions, and I identifies the intrinsic characteristic of the set of log-RGB values.

 

Feature Extraction

 

Considering one specific reflectance under uneven lighting conditions, the log-RGB values of the pixels on the reflectance may vary because of different light intensities. However, they are distributed on the same intrinsic line. By setting the log-RGB values of all the pixels to one specific value, the entire reflectance is placed under the same lighting intensity. Therefore, by projecting all the pixels onto this plane, each reflectance is under the same light intensity.

However, for the intrinsic lines with larger absolute values of the intercept, the brightness of the colors may approach the maximum because max(LR, LG, LB) is close to the defined upper bound of the log-RGB values, whereas for the intrinsic lines with smaller absolute values of the intercept, the brightness may only reach the middle of the range. In this case, two colors on the same level surface have a large difference in brightness, which deviates from the purpose of the lighting level surface.

 

                                            Construction
of Color Space Model Algorithm

Shadow Removal

 

Although the proposed algorithm is capable of recovering the “true” colors of the scene, when the shadow-free image is compared with its corresponding original image, human visual perception may register minor differences, e.g., the image may feel whitened or dull. The major reasons are two-fold. First, the intrinsic lighting contour surface is estimated from the given scene, and deviations may occur for various reasons, such as improper clustering results or an irregular lighting-level distribution of one reflectance. Second, the perception differences come from the decrease in image contrast or entropy. The shadow-free image is obtained by setting the lighting levels of all the pixels to the intrinsic lighting contour surface. By such processing, not only the shadows but also over-reflecting phenomena are removed. All the darker and brighter areas are pulled to the same brightness, which reduces the contrast of the image. Since a large number of textures are essentially generated by the variation of light intensities (not by mixtures of colors or materials), the disappearance of lighting variations makes the texture areas smoother and lowers the entropy of the image, making the image more cartoon-like. Consequently, the visual perception of the entire scene is affected.
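The entropy drop described above can be quantified with the Shannon entropy of the grey-level histogram; a small sketch, where the two 16×16 arrays are synthetic extremes (fully textured vs. fully flattened), not data from this work:

```python
import numpy as np

def image_entropy(img8):
    """Shannon entropy (bits) of an 8-bit image's grey-level histogram.
    Flattening lighting variation concentrates the histogram and
    therefore lowers this value."""
    hist = np.bincount(img8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

textured = np.arange(256, dtype=np.uint8).reshape(16, 16)   # every grey level once
flattened = np.full((16, 16), 128, dtype=np.uint8)          # a single grey level
```

The textured image attains the maximum of 8 bits, while the flattened one drops to 0 bits, mirroring (in the extreme) how pulling all areas to one brightness reduces entropy.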

 

Relighting

 

Relighting consists of placing various lighting distributions upon the same scene to simulate different illuminations. In the IL representation, the relighting effects can be achieved by directly tuning the lighting channel. Each relit image is obtained by adding a uniform lighting variation to, or subtracting one from, the original lighting condition. For a scene with slight shadows (upper row), the light intensities of all the pixels vary from low to high together. However, for a scene with heavy shadows (lower row), when the illumination environment becomes brighter, the shadow areas should become even darker. This is caused by the stronger direct sunlight and the weaker scattered skylight in such a situation. Therefore, the light intensities of pixels in and out of the shadow areas change in opposite directions.
