Project Suggestions
Automatic Scene Painting
Project Goal:
Develop an algorithm that automatically colors a grayscale image using a database of millions of images (from the Web). The basic approach transfers color from a source image to the grayscale image by matching different kinds of information between the images and then fusing the colors. More details: http://webee.technion.ac.il/labs/cgm/Computer-Graphics-Multimedia/Undergraduate-Projects/2011/AutomaticScenePainting/ProjectWeb/
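As a starting point, the per-pixel transfer step can be sketched as follows. This is a minimal luminance-matching scheme in Lab space that assumes a single matched source image already converted to Lab; the real project matches richer statistics across many database images, and all names here are illustrative:

```python
import numpy as np

def transfer_color(gray, source_lab, bins=256):
    """Toy luminance-matching transfer: for each grayscale pixel, borrow
    the average chroma of source pixels with similar luminance.
    gray: HxW array in [0, 1]; source_lab: HxWx3 Lab array (L in [0, 100])."""
    L = source_lab[..., 0].ravel()
    a = source_lab[..., 1].ravel()
    b = source_lab[..., 2].ravel()
    idx = np.clip((L / 100.0 * (bins - 1)).astype(int), 0, bins - 1)
    cnt = np.bincount(idx, minlength=bins).astype(float)
    cnt[cnt == 0] = 1.0  # avoid division by zero in empty luminance bins
    mean_a = np.bincount(idx, weights=a, minlength=bins) / cnt
    mean_b = np.bincount(idx, weights=b, minlength=bins) / cnt
    gidx = np.clip((gray * (bins - 1)).astype(int), 0, bins - 1)
    # Keep the grayscale luminance, attach the matched chroma channels.
    return np.stack([gray * 100.0, mean_a[gidx], mean_b[gidx]], axis=-1)
```

A full solution would also spatially regularize the result, since per-pixel matching alone produces noisy colors.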
Scene Completion
Project Goal:
Implement the following image completion algorithm. The algorithm patches holes in images by finding similar image regions in a very large database that are not only seamless but also semantically valid. The algorithm is entirely data-driven, requiring no annotations or labeling by the user. More details:
http://graphics.cs.cmu.edu/projects/scene-completion/
Image Blending
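The data-driven matching at the heart of the scene-completion project can be sketched as follows. This toy version scores whole candidate images by sum-of-squared-differences over the pixels surrounding the hole; the `candidates` list stands in for the very large database, and all names are illustrative:

```python
import numpy as np

def complete_hole(image, mask, candidates):
    """Fill the masked region with the best-matching candidate.

    image: HxW float array with a hole; mask: HxW bool, True inside the hole;
    candidates: list of HxW arrays standing in for the scene database.
    Match quality is SSD over the known context (pixels outside the hole)."""
    context = ~mask
    best, best_cost = None, np.inf
    for cand in candidates:
        cost = np.sum((image[context] - cand[context]) ** 2)
        if cost < best_cost:
            best, best_cost = cand, cost
    result = image.copy()
    result[mask] = best[mask]  # paste the matched region into the hole
    return result
```

The real algorithm additionally aligns the matched region and blends it seamlessly along the hole boundary rather than pasting it directly.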
Automatic Photo Album
Project Goal: Design and implement an algorithm that automatically chooses a few images, out of hundreds of pictures from a trip or an event, that best represent the whole set.
Project Goal:
Run multi-class categorization on the 1000 categories of ImageNet. The goal of the project is to compare the one-against-all heuristic of SVM to neural-network classification. You should first get familiar with Caffe.
1. Download the 1000 categories from here.
2. Download a pretrained CNN and run the categorization using the CNN (here).
3. Extract deep learning features as explained here.
4. Train one-against-all classifiers using linear SVM on the extracted features (use LIBSVM).
5. Run one-against-all classification using the learned SVM models.
6. Compare the results of the two systems.
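The one-against-all scheme of steps 4 and 5 can be sketched as follows. This sketch uses a batch perceptron purely as a stand-in for the linear SVM (in the project, LIBSVM replaces `train_one_vs_all`, and `X` would hold the extracted CNN features); all names are illustrative:

```python
import numpy as np

def train_one_vs_all(X, y, n_classes, lr=0.1, epochs=100):
    """Train one linear classifier per class against all the rest.

    X: (n, d) features; y: (n,) integer labels in [0, n_classes).
    Returns an (n_classes, d+1) weight matrix (bias in the last column)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias feature
    W = np.zeros((n_classes, Xb.shape[1]))
    for c in range(n_classes):
        t = np.where(y == c, 1.0, -1.0)  # class c vs. the rest
        for _ in range(epochs):
            margins = t * (Xb @ W[c])
            mis = margins <= 0           # misclassified samples
            if not mis.any():
                break
            W[c] += lr * (t[mis, None] * Xb[mis]).sum(axis=0)
    return W

def predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ W.T).argmax(axis=1)  # highest one-vs-all score wins
```

The neural network of step 2, by contrast, produces all 1000 scores in a single forward pass, which is exactly the comparison step 6 asks for.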
Identifying the Validity of an Image in Identification
Project Goal:
Recent methods use face images for identification instead of passwords on mobile devices and other systems. It has been shown that placing a photograph in front of the camera fools the system into believing it is a real person. The goal of this project is to develop a mechanism that can discriminate between a real person and an image of one. You can use video, audio, or any other input available on a mobile device.
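As one illustrative direction (an assumption of this sketch, not the required solution), temporal change over a short video clip can separate a perfectly static printed photo from a moving, blinking face:

```python
import numpy as np

def liveness_score(frames):
    """Mean absolute frame-to-frame pixel change over a clip of grayscale
    frames: a static spoof photo scores near zero, while blinking and
    non-rigid facial motion score higher."""
    frames = np.asarray(frames, dtype=float)
    return float(np.abs(np.diff(frames, axis=0)).mean())

def is_live(frames, threshold=1.0):
    # The threshold is an illustrative guess and would need calibration;
    # a hand-held photo also moves, so a real system needs stronger cues.
    return liveness_score(frames) > threshold
```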
Project Goal:
The goal of this project is interactive recognition of gestures using the Kinect camera. You can get many ideas for projects that use the Kinect from the following sites: OpenKinect or KinectHacks. Note: the camera will NOT be provided. You can choose this project only if you have your own Kinect camera.
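Once the Kinect SDK hands you skeleton joints, even a simple rule yields an interactive gesture. A toy sketch, where the joint names and the y-grows-upward coordinate convention are assumptions of this sketch rather than the identifiers of any particular SDK:

```python
def detect_raised_hand(joints):
    """Toy gesture rule over Kinect-style skeleton joints.

    joints: dict mapping joint names to (x, y, z) tuples, with y upward.
    Returns which hand is raised above the head, or None."""
    head_y = joints["head"][1]
    if joints["left_hand"][1] > head_y:
        return "left"
    if joints["right_hand"][1] > head_y:
        return "right"
    return None
```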
Additional Projects in Collaboration with Robotics Lab
The projects are based on developing algorithms for the following setups and applications.
1. The relevant autonomous hardware:
   A. Cars (Lego or smaller race cars).
   B. Helicopters (possibly ones that shoot arrows or water bubbles).
   C. Mini quadcopters.
   D. Large quadcopters (e.g. DJI Phantom, for outdoor use).
   E. Fixed-wing planes (outdoor).
2. Trackers (that automatically follow the robots or the people):
   A. RGB on board.
   B. Kinect (3-D).
   C. RGB on walls (stereo or mono).
3. Controllers (that tell the robot what to do), using:
   A. EEG (brain).
   B. Eye trackers.
   C. Gestures (e.g. MYO or Kinect).
   D. IR or radio signals.
   E. Voice (speech recognition).
Applications include: war games based on, e.g., face recognition; swarms (operating dozens of robots); guiding robots; searching for objects in a forest; delivering pizza through windows in the Eshkol building or in a hospital; etc.
The challenges are usually to develop fast and accurate algorithms for path planning and visual recognition.
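For a feel of the path-planning side, here is a minimal sketch: breadth-first search on an occupancy grid, a stand-in for the faster planners these projects would actually need (the grid encoding and all names are assumptions of this sketch):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid.

    grid: list of rows, 0 = free cell, 1 = obstacle.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:          # reconstruct the path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None
```

A real robot would rerun such a planner continuously as the tracker updates the robot's position.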
One of the main theoretical tools that we will use is coresets.
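For intuition: a coreset is a small weighted subset of the data that approximates a cost function on the full set. The simplest (and weakest) instance is a uniform weighted subsample, sketched below; real coreset constructions use importance sampling with provable error bounds, which this sketch does not have:

```python
import numpy as np

def uniform_coreset(points, m, seed=0):
    """Uniform weighted subsample of an (n, d) point set.

    Each sampled point gets weight n/m, so weighted sums over the
    sample estimate sums over the full data (e.g. the mean)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), m, replace=False)
    weights = np.full(m, len(points) / m)
    return points[idx], weights
```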
For additional information, contact Dr. Dan Feldman.