CARNEGIE MELLON UNIVERSITY AFRICA

04-800
Applied Computer Vision

Elective

Units: 12

Lecture/Lab/Rep hours/week: 4 hours lectures/week

Semester: Fall

Pre-requisites: programming skills

Students are expected to be proficient in at least one programming language, ideally C/C++.

Course description

This course provides students with a solid foundation in the key elements of computer vision, emphasizing the practical application of the underlying theory. It focusses mainly on the techniques required to build robot vision applications, but the algorithms can also be applied in other domains such as industrial inspection and video surveillance. Particular attention is paid to the effective implementation of solutions to practical computer vision problems in a variety of environments, using both bespoke software authored by the students and standard computer vision libraries.
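
By way of illustration, the short sketch below shows the kind of library-based development involved: reading an image and applying a simple point operation. It assumes OpenCV as the standard library; the filename is a placeholder.

  // Minimal sketch (assuming OpenCV): read an image and convert it to greyscale.
  // The default filename "scene.png" is only a placeholder.
  #include <opencv2/opencv.hpp>
  #include <cstdio>

  int main(int argc, char **argv)
  {
      const char *filename = (argc > 1) ? argv[1] : "scene.png";
      cv::Mat image = cv::imread(filename, cv::IMREAD_COLOR);
      if (image.empty()) {
          std::printf("Could not read %s\n", filename);
          return 1;
      }
      cv::Mat grey;
      cv::cvtColor(image, grey, cv::COLOR_BGR2GRAY);   // point operation: colour to greyscale
      cv::imwrite("grey.png", grey);
      return 0;
  }

Depending on the installed OpenCV version, this can be compiled with a command such as g++ example.cpp $(pkg-config --cflags --libs opencv4).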

Learning objectives

The course covers optics, sensors, image formation, image acquisition & image representation before proceeding to the essentials of image processing and image filtering. This provides the basis for a treatment of image segmentation, including region-based and boundary-based approaches, connected component analysis, edge detection, the Hough transform, and colour-based segmentation.
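
As a concrete illustration of this processing chain, the sketch below smooths an image, detects edges, and extracts straight lines with the Hough transform. It assumes OpenCV; the filename and parameter values are illustrative only.

  // Sketch of a filtering -> edge detection -> Hough transform pipeline (assuming OpenCV).
  #include <opencv2/opencv.hpp>
  #include <vector>

  int main()
  {
      cv::Mat grey = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);  // placeholder filename
      if (grey.empty()) return 1;

      cv::Mat smoothed, edges;
      cv::GaussianBlur(grey, smoothed, cv::Size(5, 5), 1.5);         // neighbourhood operation: Gaussian smoothing
      cv::Canny(smoothed, edges, 50, 150);                           // edge detection

      std::vector<cv::Vec4i> lines;                                  // each line: (x1, y1, x2, y2)
      cv::HoughLinesP(edges, lines, 1, CV_PI / 180.0, 80, 30, 10);   // probabilistic Hough line transform

      cv::Mat display;
      cv::cvtColor(grey, display, cv::COLOR_GRAY2BGR);
      for (const cv::Vec4i &l : lines)
          cv::line(display, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar(0, 0, 255), 2);
      cv::imwrite("lines.png", display);
      return 0;
  }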

Building on this, the course then proceeds to deal with object detection and recognition in 2D, addressing template matching, interest point operators, gradient orientation histograms, the SIFT descriptor, and colour histogram intersection and back-projection.
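
The sketch below illustrates one of these techniques, template matching by normalised cross-correlation, again assuming OpenCV; both filenames are placeholders.

  // Sketch of 2D template matching with normalised cross-correlation (assuming OpenCV).
  #include <opencv2/opencv.hpp>

  int main()
  {
      cv::Mat image = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);     // placeholder filenames
      cv::Mat templ = cv::imread("template.png", cv::IMREAD_GRAYSCALE);
      if (image.empty() || templ.empty()) return 1;

      cv::Mat result;
      cv::matchTemplate(image, templ, result, cv::TM_CCOEFF_NORMED);     // similarity map

      double minVal, maxVal;
      cv::Point minLoc, maxLoc;
      cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);         // best match = maximum similarity

      cv::rectangle(image, maxLoc,
                    cv::Point(maxLoc.x + templ.cols, maxLoc.y + templ.rows),
                    cv::Scalar(255), 2);                                 // mark the detected location
      cv::imwrite("match.png", image);
      return 0;
  }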

The problem of recovering 3D information is then addressed, introducing homogeneous coordinates and transformations, the perspective transformation, the camera model, the inverse perspective transformation, stereo vision, and epipolar geometry.
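
For reference, the sketch below works through the perspective transformation numerically: a 3D point expressed in the camera coordinate frame is projected to pixel coordinates using the pinhole camera model. The focal length and principal point values are illustrative assumptions.

  // Sketch of the perspective (pinhole) camera model. In homogeneous coordinates
  // the projection is  [u v 1]^T ~ K [X Y Z]^T, where K holds the camera intrinsics.
  #include <cstdio>

  int main()
  {
      // Illustrative intrinsic parameters: focal lengths and principal point (pixels).
      const double fx = 800.0, fy = 800.0, cx = 320.0, cy = 240.0;

      // A 3D point in the camera coordinate frame (metres).
      const double X = 0.2, Y = -0.1, Z = 2.0;

      // Perspective projection: divide by depth, then apply the intrinsics.
      const double u = fx * (X / Z) + cx;   // u = 400.0
      const double v = fy * (Y / Z) + cy;   // v = 200.0

      std::printf("Image coordinates: (%.1f, %.1f)\n", u, v);
      return 0;
  }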

The interpretation of visual information in unstructured environments poses many problems. To deal with these, the course then addresses visual attention, clustering, grouping, and segmentation, building on Gestalt principles, before proceeding to deal with object detection, object recognition, and object categorization in both 2D and 3D.
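
As a simple illustration of the clustering algorithms referred to here, the sketch below groups pixels by colour using k-means, assuming OpenCV; the number of clusters and the filename are arbitrary choices for the example.

  // Sketch of clustering-based grouping: k-means on pixel colours (assuming OpenCV).
  #include <opencv2/opencv.hpp>

  int main()
  {
      cv::Mat image = cv::imread("scene.png", cv::IMREAD_COLOR);   // placeholder filename
      if (image.empty()) return 1;

      // Reshape to one row per pixel, three float columns (B, G, R).
      cv::Mat samples;
      image.reshape(1, image.rows * image.cols).convertTo(samples, CV_32F);

      const int K = 4;                                             // illustrative number of clusters
      cv::Mat labels, centres;
      cv::kmeans(samples, K, labels,
                 cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::MAX_ITER, 20, 1.0),
                 3, cv::KMEANS_PP_CENTERS, centres);

      // Replace each pixel by its cluster centre to visualise the grouping.
      cv::Mat clustered(samples.size(), CV_32F);
      for (int i = 0; i < samples.rows; i++)
          centres.row(labels.at<int>(i)).copyTo(clustered.row(i));

      cv::Mat result;
      clustered.reshape(3, image.rows).convertTo(result, CV_8U);
      cv::imwrite("clusters.png", result);
      return 0;
  }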

Outcomes

After completing this course, students should be able to:

  • Apply their knowledge of image acquisition, image processing, and image analysis to extract useful information from visual images.
  • Design, implement, and document appropriate, effective, and efficient software solutions for a variety of real-world computer vision problems.
  • Exploit standard computer vision software libraries in the development of these solutions.

Content details

The course will cover the following topics:

  1. Overview of human and computer vision.
  2. Optics, sensors, image formation, image acquisition & image representation.
  3. Image processing: point & neighbourhood operations, image filtering, convolution, Fourier transform.
  4. Image processing: geometric operations, morphological operations.
  5. Segmentation: region-based approaches, connected component analysis, edge detection, and boundary-based approaches.
  6. Hough transform: line, circle, and generalized transform; extension to codeword features.
  7. Colour-based segmentation.
  8. Object detection and recognition in 2D; template matching.
  9. Interest point operators: Harris and Difference of Gaussian. Gradient orientation histogram - SIFT descriptor.
  10. Colour histogram intersection and back-projection.
  11. 3D vision: Homogeneous coordinates and transformations. Perspective transformation. Camera model and inverse perspective transformation.
  12. Stereo vision. Epipolar geometry.
  13. Structured light & RGB-D cameras.
  14. Optical flow.
  15. Visual attention. Saliency. Bottom-up and top-down attention.
  16. Clustering, grouping, and segmentation revisited. Gestalt principles. Clustering algorithms.
  17. Object recognition in 2D and 3D. Object detection, object recognition, object categorisation. Affordances.
  18. Haar features. Histogram of Oriented Gradients (HOG) feature descriptor.
  19. Point cloud methods.
  20. Computer vision and machine learning.

The detailed content for each of these topics follows.


Optics, sensors, image formation

  • xyz

Lecture Schedule

Refer to the Lecture Schedule for information on course delivery, including lectures, labs, assignments, and exercises.

Faculty

David Vernon

Delivery

Face-to-face.

Student assessment

This course includes several hands-on programming and analysis assignments. Students will program mainly in C/C++. The programming assignments include individual assignments and a capstone project carried out in teams of 2-3 people. In addition to the programming assignments, students will be assigned readings to support the lecture material.

Marks will be awarded as follows:

  • Seven individual assignments: 70%
  • Final examination: 30%

Software tools

Please follow the instructions provided in the Software Development Environment installation guide.

Course text

Szeliski, R. (2010). Computer Vision: Algorithms and Applications, Springer.

Recommended reading

Borji, A. and Itti, L. (2013). "State-of-the-Art in Visual Attention Modeling", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35, No. 1, pp. 185-207.

Dawson-Howe, K. (2014). A Practical Introduction to Computer Vision with OpenCV, Wiley.

Hanbury, A. (2002). "The Taming of the Hue, Saturation, and Brightness Colour Space", Proc. Computer Vision Winter Workshop (CVWW), Austria.

Kragic, D. and Vincze, M. (2010). "Vision for Robotics", Foundations and Trends in Robotics, Vol. 1, No. 1, pp. 1–78.

Vernon, D. (1991). Machine Vision: Automated Visual Inspection and Robot Vision, Prentice-Hall.

Acknowledgments

The syllabus for this course drew inspiration from several sources. These include the following.

  • Course VO 4.0 376.054 Machine Vision and Cognitive Robotics given by Markus Vincze, Michael Zillich, and Daniel Wolf at Technische Universität Wien.
  • Course 4BA10 Computer Vision given by David Vernon at Trinity College Dublin.
  • Course 4BA10 Computer Vision given by Kenneth Dawson-Howe at Trinity College Dublin.
  • Course on Computer Vision at VVV2017 by Francesca Odone, University of Genova.