Course grade is based on the final project.
Project submission is in 3 stages plus a presentation:
Preliminary submission due April 11, 2011.
Mid-submission due May 16, 2011.
Class presentation on May 30 or June 6, 2011.
Final submission due June 26, 2011.
The project will include a project summary, code, demos, and documentation, as well as a web page explaining your project, its goal, and some of the results (see example).
Preliminary submission includes the web page (outline) plus summaries of the appropriate papers.
Mid-submission includes the web page complete with preliminary results.
Final submission includes everything! (Complete running program and examples, user guide, and web page with final results.)
Project grade distribution can be found here.
Projects can be written in Matlab or as a Java applet. All programs must be stand-alone (e.g. use standard functions supported by Matlab and not toolbox functions; for Java programs, include all required downloads and libraries in the submission).
Your program should be fully interactive and GUI assisted (e.g. using MATLAB's guide); parameters should be controlled by sliders, etc. A minimal slider sketch is shown below.
Part of the grade will be given for usability, i.e. how easy the program is to use (user-friendly).
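For illustration only, here is a minimal sketch of a slider controlling a single parameter, using base MATLAB functions (figure, image, uicontrol) rather than a full GUIDE layout; the image file name and the brightness "gain" parameter are placeholders, not part of any specific project.

function slider_demo
    % Minimal slider sketch using base MATLAB GUI functions.
    % Replace 'myimage.png' with any RGB image of your own.
    img = double(imread('myimage.png')) / 255;   % RGB image scaled to [0,1]
    figure('Name', 'Parameter slider demo');
    h = image(img);                              % display the image
    axis image off;
    % Slider controlling a simple brightness gain in [0, 2];
    % the callback rescales the displayed image whenever the slider moves.
    uicontrol('Style', 'slider', 'Min', 0, 'Max', 2, 'Value', 1, ...
              'Units', 'normalized', 'Position', [0.10 0.02 0.80 0.05], ...
              'Callback', @(src, evt) set(h, 'CData', min(img * get(src, 'Value'), 1)));
end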
Project submission should include 4 directories:
Code – all code files, including libraries, scripts, example runs, etc. Code should be well documented (at the head of each file as well as within the code). Include a readme file describing the layout of the code (modules, functions, etc.).
Docs – a project summary describing the project, its goal, methods, and results. Include a short user guide explaining how to use the program. Supply at least one example with working input, output, and the parameters used.
Data – input data for the program. May include input images, example images, or other input data.
Web – a directory including all files needed for the web page. Should be stand-alone, i.e. include all linked files and images used in the web pages.
Questions can be sent to me by email at hagit@cs.haifa.ac.il.
Project #1: Multi-Image Color Correction
In this project you have to correct the colors of several images that have been mosaicked together. Create 2 options: a) align all image colors globally; b) align colors locally, creating a slowly varying color tone across the mosaic.
The mosaicking code is given; you will have to add to this code.
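As a possible starting point for option (a), one common global baseline is to match each image's per-channel mean and standard deviation to those of a chosen reference image; this is only a sketch, not necessarily the alignment the project should end with.

% Global color alignment baseline: match the per-channel mean and standard
% deviation of img to those of a reference image ref (both double RGB in [0,1]).
function out = match_color_stats(img, ref)
    out = zeros(size(img));
    for c = 1:3
        a = img(:,:,c);
        b = ref(:,:,c);
        out(:,:,c) = (a - mean(a(:))) / (std(a(:)) + eps) * std(b(:)) + mean(b(:));
    end
    out = min(max(out, 0), 1);   % clip back into the valid range
end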
Project #2: Color Matching Application – on Non-Uniform Background
Create an application/applet that performs color matching with a non-uniform background (e.g. with a sinusoidal background of varying frequency). Create a modulation transfer function for colors, dependent on the frequency and color of the background pattern, based on data from the color-matching experiment.
Project #3: Color Correction of Images Based on Faces (and/or Other Objects)
Color Correction from Millions of Images…
Perform color correction (white balance) of an image based on objects in the image, following the paper below.
1) Based on faces: search for faces in the image. Estimate the color deviation from the mean face color (take the mean over many faces).
2) Extend to other objects. Be careful: when looking at the average color of objects, consider that some object classes are highly variable in color (e.g. cars); look at the color variance in the examples. If the variance is large and distributed, the estimate of the deviation will be poor. Try increasing the patch/segment size until you find a region that has lower variance (e.g. for cars it could be that the windshields/tires/bumpers have less deviation and are always in the image!).
Object Based Illumination Classification, Hel-Or and Wandell, PR 2002
http://cs.haifa.ac.il/hagit/papers/PR02-HelOrWandell-IlluminationClassification.pdf
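As an illustration of step (1) only, a minimal diagonal (von Kries style) white-balance sketch: scale each channel by the ratio between a canonical mean face color (estimated over many faces) and the mean color of the detected face region. The face detector itself is not shown; face_mask and canonical_face_rgb are assumed inputs, not names from the paper.

% Diagonal white balance driven by a detected face region.
%   img                - double RGB image in [0,1]
%   face_mask          - logical mask of face pixels (from your face detector)
%   canonical_face_rgb - 1x3 mean face color estimated over many images (assumed given)
function out = face_white_balance(img, face_mask, canonical_face_rgb)
    out = img;
    for c = 1:3
        ch = img(:,:,c);
        observed = mean(ch(face_mask));                  % mean face color in this image
        gain = canonical_face_rgb(c) / (observed + eps); % per-channel correction factor
        out(:,:,c) = min(ch * gain, 1);                  % apply gain and clip
    end
end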
Project #4: Painting Faces Using Color Landmarks
Determine "optimal" painting points on faces. Given a BW image of a face, determine the "magic" points in the face for which color information must be given so that image painting will be as good as possible.
Recently, several methods for image colorization (coloring BW images) have been developed. These typically require as input selected points in the image and their associated colors; the rest of the image is then painted according to these points. In their paper, Huang and Chen studied how to determine the best points to give as input. These are called landmarks.
In this project, try to implement their work on face images. Determine whether there are "magic" points in faces that should always be used as landmarks for coloring. How many landmarks are needed? Does this vary for faces with glasses, makeup, facial hair, etc.?
Image painting code must be written (it may possibly be available). Face alignment must be performed.
"Landmark-Based Sparse Color Representations for Color Transfer",
Huang and Chen, ICCV2009
http://www.cse.wustl.edu/~mgeorg/readPapers/byVenue/iccv2009/huang2009_iccv_landmarkSparseColorForColorTransfer.pdf
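Since painting code may need to be written from scratch, one very naive baseline (much simpler than the sparse-representation method of the paper, but useful for comparison) is to keep the BW image as luminance and copy each landmark's chrominance to the pixels nearest to it. The YCbCr-to-RGB constants below are the standard BT.601 ones; the function and variable names are just placeholders.

% Naive colorization baseline: each pixel takes the chrominance (Cb, Cr)
% of its nearest landmark, while the BW image is kept as luminance (Y).
%   gray    - HxW double luminance image in [0,1]
%   lm_xy   - Nx2 landmark coordinates, one [col row] pair per row
%   lm_cbcr - Nx2 chrominance (Cb, Cr) per landmark, centered at 0
function rgb = colorize_nearest(gray, lm_xy, lm_cbcr)
    [H, W] = size(gray);
    [X, Y] = meshgrid(1:W, 1:H);
    best_d = inf(H, W);
    idx = ones(H, W);
    for k = 1:size(lm_xy, 1)                    % find the nearest landmark per pixel
        d = (X - lm_xy(k,1)).^2 + (Y - lm_xy(k,2)).^2;
        closer = d < best_d;
        best_d(closer) = d(closer);
        idx(closer) = k;
    end
    Cb = reshape(lm_cbcr(idx(:), 1), H, W);
    Cr = reshape(lm_cbcr(idx(:), 2), H, W);
    % YCbCr -> RGB (BT.601, Y in [0,1], Cb/Cr centered at 0)
    R = gray + 1.402 * Cr;
    G = gray - 0.344136 * Cb - 0.714136 * Cr;
    B = gray + 1.772 * Cb;
    rgb = min(max(cat(3, R, G, B), 0), 1);
end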
Project #5: Detecting the Camera Source of an Image
Determine the noise patterns of 2 or more cameras and build an automatic classifier that determines, for a given image, with which camera it was acquired.
Detecting Digital Image Forgeries Using Sensor Pattern Noise, J. Lukas, J. Fridrich and M. Goljan, Proc. of SPIE Electronic Imaging, Photonics West, January 2006
http://www.ws.binghamton.edu/fridrich/Research/LukFriSPIE06_v9.pdf
Digital Camera Identification from Sensor Noise, J. Lukas, J. Fridrich and M. Goljan, IEEE Transactions on Information Forensics and Security, vol. 1(2), pp. 205-214, June 2006
http://www.ws.binghamton.edu/fridrich/Research/double.pdf
You can also use: noise characterization of a digital camera
http://scien.stanford.edu/class/psych221/projects/05/gregng/index.html
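A rough, hedged outline of the pipeline in these papers, with a simple box filter standing in for the wavelet denoiser the authors actually use: extract a noise residual from each image (image minus its denoised version), average the residuals of many images from the same camera to form its fingerprint, then assign a test image to the camera whose fingerprint correlates best with the test residual. All images are assumed to share the same size.

% Simplified camera identification sketch (green channel, box-filter denoiser).
%   test_img     - the image to classify (RGB)
%   fingerprints - cell array, one HxW fingerprint per camera, each obtained
%                  by averaging noise_residual() over many images of that camera
function cam = classify_camera(test_img, fingerprints)
    r = noise_residual(test_img);
    r = r(:) - mean(r(:));
    best = -inf;
    for k = 1:numel(fingerprints)
        f = fingerprints{k}(:);
        f = f - mean(f);
        c = (r' * f) / (norm(r) * norm(f) + eps);   % normalized correlation
        if c > best
            best = c;
            cam = k;
        end
    end
end

function resid = noise_residual(img)
    g = double(img(:,:,2));                         % green channel only
    k = ones(3) / 9;                                % box filter as a crude denoiser
    resid = g - conv2(g, k, 'same');                % residual = image - denoised image
end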
Project #6: Multi-Spectral Imaging
Map a multispectral image to 3D (RGB) such that color distances are preserved as well as possible. Also constrain the mapping so that the colors are as similar to the original as possible.
You must look for multispectral source images (if acquired with a multispectral camera, even better!).
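One hedged starting point for the distance-preservation part is a plain PCA projection of the spectra onto three dimensions (a linear map that is optimal for pairwise distances in the least-squares sense); turning those three coordinates into natural-looking RGB, and balancing the two constraints, is the actual project.

% PCA baseline: map an HxWxB multispectral cube to 3 display channels so that
% Euclidean distances between the 3D points approximate distances between spectra.
function out3 = spectral_to_3d(cube)
    [H, W, B] = size(cube);
    X = reshape(double(cube), H * W, B);            % one spectrum per row
    Xc = X - repmat(mean(X, 1), H * W, 1);          % center the spectra
    [V, D] = eig(Xc' * Xc);                         % principal directions (B x B problem)
    [~, order] = sort(diag(D), 'descend');
    Y = Xc * V(:, order(1:3));                      % top-3 PCA coordinates
    Y = (Y - min(Y(:))) / (max(Y(:)) - min(Y(:)) + eps);   % rescale to [0,1] for display
    out3 = reshape(Y, H, W, 3);
end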
Project #7: Watermarking Color Halftone Images
Watermarking (inserting a secret code in an image) has been applied to halftoned images. Implement watermarking in halftoned color images. Use the Barycentric Screening approach together with the grayscale watermarking technique. Both encoding and decoding must be implemented.
"Barycentric Screening", Nur Arad, Doron Shaked, Zachi Baharav, HP
Laboratories
http://www.hpl.hp.com/techreports/97/HPL-97-103R1.pdf
"Copyright Labeling of Printed Images", H.Z.Hel-Or, ICIP 2000
http://cs.haifa.ac.il/hagit/papers/CONF/ICIP00-HelOr-CopyrightLabelingPrintedImages.pdf
Project #8: Creating Color Anaglyphs Using Red-Green Glasses
An anaglyph is a pair of images superimposed on a single image and typically viewed with special glasses that allow one image to be seen by each eye. The glasses may be color filters or polarizers. Typically, anaglyphs are used to create 3D stereo images.
Problems when creating anaglyphs are: 1) due to filtering, there is significant loss of color; 2) there are colors that map to the same values, causing regions to merge; 3) due to the non-independence of the filters, there is cross-talk (one eye sees what the other is supposed to see).
In this project, you are given 2 color images and the goal is to create an anaglyph that has minimal crossover and maximal colorfulness. This is of course dependent on the filters of the glasses.
The project involves:
1) Designing an interactive mechanism to model the given filters in the glasses and to determine the color subspace spanned by each filter.
2) Determining the optimal mapping of the images to an anaglyph based on these glasses (e.g. using the papers below).
3) Designing a color mapping of the images so that their anaglyph mapping becomes more efficient.
A naive red-green baseline is sketched after the references below.
Methods for computing color anaglyphs, David F. McAllister, Ya Zhou, Sophia Sullivan, EI2010
http://research.csc.ncsu.edu/stereographics/ei10.pdf
A Uniform Metric for Anaglyph Calculation, Zhe Zhang and David F. McAllister, EI2006
http://research.csc.ncsu.edu/stereographics/ei06.pdf
Producing Anaglyphs from Synthetic Images, William Sanders, David F. McAllister, EI2003
http://research.csc.ncsu.edu/stereographics/ei03.pdf
See also http://www.3dtv.at/knowhow/anaglyphcomparison_en.aspx
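For reference, here is the most basic construction the project is meant to improve on, assuming ideal red and green filters: the red-filtered eye sees only the red channel (taken from the left image) and the green-filtered eye only the green channel (taken from the right image). It exhibits exactly the color-loss and cross-talk problems listed above.

% Simplest red-green anaglyph, assuming ideal filters.
% left, right - double RGB images of the same size, in [0,1]
function ana = simple_anaglyph(left, right)
    ana = zeros(size(left));
    ana(:,:,1) = left(:,:,1);     % left view -> red channel
    ana(:,:,2) = right(:,:,2);    % right view -> green channel (blue unused)
end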
Project #9: Make it disappear! – Experiment with Projector and Camera
Projector-camera interaction.
Place a poster with various letterings in view of the camera and under the projector lighting. Control the projector lighting so that one of the objects disappears.
Objects should be flat against the poster (otherwise shadows will interfere). Make sure the objects and the poster are colored so that the projector can actually project enough light to change their appearance.
Project #10: Just Noticeable Difference – Experiment and Perceptual Color Mapping
Create a Just Noticeable Difference experiment applet. Test for non-uniformity of the JND in different color regions and in different color directions. Map the results per person. Since the experiment must run on an RGB machine, it might be more effective to use a rating scale rather than the JND, i.e. define several 'distances' by example pairs and then ask the user to rate distances. Given these distances, map colors to a 2D (3D) color space.
This might require calibration of the monitor.
Project #11: S-CIELAB
Implement S-CIELAB in Java. Ask users to rate perceptual distances between photos/colors. Then compare the ratings with S-CIELAB, CIELAB, and RGB values. Use images such as stripes and a patch on a constant background, as well as other images.
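For the CIELAB side of the comparison, the standard XYZ to L*a*b* conversion and the basic deltaE*ab difference are sketched below (these are the CIE definitions; the D65 white point in the comment is the usual 2-degree observer value). S-CIELAB adds a spatial filtering stage before this conversion, which is the main part you would implement.

% XYZ -> CIELAB and the basic deltaE*ab color difference.
%   xyz1, xyz2 - 1x3 tristimulus vectors
%   white      - reference white, e.g. D65: [95.047 100.0 108.883]
function dE = deltaE_ab(xyz1, xyz2, white)
    dE = norm(xyz_to_lab(xyz1, white) - xyz_to_lab(xyz2, white));
end

function lab = xyz_to_lab(xyz, white)
    t = xyz ./ white;
    f = zeros(1, 3);
    for i = 1:3                                   % CIE f() with its linear toe
        if t(i) > (6/29)^3
            f(i) = t(i)^(1/3);
        else
            f(i) = t(i) / (3 * (6/29)^2) + 4/29;
        end
    end
    lab = [116*f(2) - 16, 500*(f(1) - f(2)), 200*(f(2) - f(3))];
end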
Project #12: LCD and CRT Display Calibration
Build a calibration tool that measures color patches on LCD/CRT displays (using a photometer) and builds the gamma curves of the display as well as the forward and backward transformations between RGB and XYZ space.
You can use modules and code supplied with the EyeOne (photometer).
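A sketch of the gamma-curve part, assuming you have measured the luminance of a single channel at several digital counts with the photometer: fit Y = Ymax * (d/dmax)^gamma by a straight-line fit in log-log space (variable names are placeholders). For the RGB-XYZ part, the forward transform of an additive display is the 3x3 matrix whose columns are the measured XYZ of the full-on red, green, and blue primaries, applied to the linearized RGB values; the backward transform is its inverse.

% Fit a power-law gamma curve Y = Ymax * (d / dmax)^gamma to photometer data.
%   d    - vector of digital counts driven to the display (e.g. 0..255)
%   Y    - measured luminance for each count
%   dmax - maximum digital count (e.g. 255)
function gam = fit_gamma(d, Y, dmax)
    keep = d > 0 & Y > 0;                         % exclude zero input to avoid log(0)
    x = log(d(keep) / dmax);
    y = log(Y(keep) / max(Y(keep)));              % normalize by the brightest patch
    p = polyfit(x(:), y(:), 1);                   % straight line in log-log space
    gam = p(1);                                   % slope is the gamma exponent
end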
NOT OFFERED THIS YEAR
Project #3: Shadow Removal From Video Sequences
In this project you will remove shadows from video sequences. The shadows may move between video frames and may fall on textured surfaces. Implement the Arbel approach on a per-frame basis. Code for shadow removal in single frames is given.
Project #4: Color Lines – Interactive Change of Color of Image Objects
Color Lines – an interactive applet that takes an image, represents it using color lines, then allows the user to click on an object and, with 2 interactive sliders, change its color.
Based on: http://www.cs.huji.ac.il/~werman/Papers/colorLines04.pdf
Project #7: Background Subtraction
The project will detect moving foreground objects by background subtraction and change detection in a sequence of video frames. A background model is created using a Mixture of Gaussians. The background is automatically learnt from a predefined number of frames in the sequence. Once the background is learnt, each video frame is subtracted from the background model and thresholded to obtain the foreground objects. Cleanup of the foreground objects is expected.
Sources:
C. Stauffer and W. Grimson, "Learning Patterns of Activity Using Real-Time Tracking", IEEE TPAMI, 22(8):747–757, 2000.
A. H. Al-Mazeed, M. S. Nixon and S. R. Gunn, "Fusing Complementary Operators to Enhance Foreground/Background Segmentation", British Machine Vision Conference, 2003.
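A stripped-down, single-Gaussian-per-pixel sketch of the idea (the project itself should use the Mixture of Gaussians of Stauffer and Grimson): learn a per-pixel mean and standard deviation over the first N frames, then flag any pixel that deviates from its mean by more than k standard deviations.

% Simplified per-pixel single-Gaussian background subtraction (grayscale frames).
%   frames - HxWxT stack of double grayscale frames in [0,1]
%   N      - number of initial frames used to learn the background
%   k      - threshold in standard deviations (e.g. 2.5)
%   fg     - HxWx(T-N) logical foreground masks for frames N+1..T
function fg = simple_bg_subtract(frames, N, k)
    bgMean = mean(frames(:,:,1:N), 3);            % per-pixel background mean
    bgStd  = std(frames(:,:,1:N), 0, 3) + 1e-3;   % per-pixel std, floored to avoid 0
    T = size(frames, 3);
    fg = false(size(frames, 1), size(frames, 2), T - N);
    for t = N+1:T
        d = abs(frames(:,:,t) - bgMean);          % deviation from the background
        fg(:,:,t-N) = d > k * bgStd;              % threshold in units of std
    end
end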
Project #10: Recoloring of BW Images
Given a BW image, color it. Priors must be learned, for example from example images or by category.
Project #12: Compare Different Demosaicing Techniques
Implement and compare 5-6 different demosaicing techniques. Analyze the quality of demosaicing using a test pattern with varying frequency and orientation.
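One of the 5-6 techniques could be plain bilinear interpolation, which also serves as the baseline the others are compared against; a sketch assuming an RGGB Bayer layout is below (convolution-based, so existing samples are kept and only missing ones are interpolated; borders are left untreated).

% Bilinear demosaicing of an RGGB Bayer mosaic (raw is a double image in [0,1]):
% raw(1,1) = R, raw(1,2) = G, raw(2,1) = G, raw(2,2) = B, repeating.
function rgb = demosaic_bilinear(raw)
    [H, W] = size(raw);
    [X, Y] = meshgrid(1:W, 1:H);
    maskR = mod(Y, 2) == 1 & mod(X, 2) == 1;      % red sample locations
    maskB = mod(Y, 2) == 0 & mod(X, 2) == 0;      % blue sample locations
    maskG = ~(maskR | maskB);                     % green samples (checkerboard)
    kRB = [1 2 1; 2 4 2; 1 2 1] / 4;              % interpolation kernel for R and B
    kG  = [0 1 0; 1 4 1; 0 1 0] / 4;              % interpolation kernel for G
    R = conv2(raw .* maskR, kRB, 'same');
    G = conv2(raw .* maskG, kG,  'same');
    B = conv2(raw .* maskB, kRB, 'same');
    rgb = cat(3, R, G, B);
end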
Project #13: Affective Imaging – Color Balancing
Color balance an image to affect the emotional percept of the image.
Project #14: Transparency and Color Images
Overlaying 2 color images as transparent images often produces a scene that is not separable into the two original scenes. This is due to masking of one image over the other. This project will study transparency masking and suggest a method that can automatically recolor images so that the masking in their transparent combination is minimized.
This will require a reference search, experimentation, and coding.
Project #15: Course Scripting.
Write Matlab scripts (similar to the scripts for chapters 2-4) demonstrating various aspects taught in the course on the topics of displays, printers, and scanners.
Project #16: Color Blind Images
Simulate what color blind people will see in an RGB image. Correct the image (DALTONIZE) so that they see it better.
See VISCHECK http://www.vischeck.com