Biometrics and Machine Learning Group
Latest news
We are pleased to announce that Mateusz Trokielewicz defended (with honors) his doctoral dissertation entitled "Iris Recognition Methods Resistant to Biological Changes in the Eye", supervised by Prof. Czajka and Prof. Pacut, on the 18th of July, 2019.
Iris scanner can distinguish dead eyeballs from living ones: MIT Technology Review reports on our recent developments in the field of presentation attack detection for cadaver irises.
We are pleased to announce that Mateusz Trokielewicz received the EAB European Biometrics Research Award 2016 for his research on iris recognition reliability, including template aging, the influence of eye diseases, and post-mortem recognition.
Is That Eyeball Dead or Alive? Adam Czajka discusses how to prevent iris sensors from accepting a high-resolution photo of an iris or, in a grislier scenario, an actual eyeball. For the full article, please see IEEE Spectrum.
Computer Vision (CSE 40535/60535)
Materials available here were prepared for students of the University of Notre Dame attending the course in Spring 2016. If you find these notes helpful in your work, please provide the following reference:
"Adam Czajka, Computer Vision (CSE 40535/60535), Lecture Notes, Spring 2016, available online http://zbum.ia.pw.edu.pl/EN/node/55"
All lecture notes
For your convenience, here are all slides zipped into a single file (350 MB).
Progress
PART I: Introduction
Wed. 1/13/2016: Course structure and syllabus
Fri. 1/15/2016: Definition of computer vision (slides: L01)
Notion of computer vision, well- and ill-posed problems, inverse optics. Short history. Example applications. General pipeline, image acquisition, processing and analysis. Computer graphics.
Wed. 1/20/2016 and Fri. 1/22/2016: Human vision system (slides: L02)
Structure of the human eye. Receptors, rods and cones. Information processing, retina, neurons. Image formation, optics of the eye. Higher-level processing, optical illusions.
PART II: Digital image analysis
Mon. 1/25/2016 and Wed. 1/27/2016: Image acquisition (slides: L03)
Measurement of the electromagnetic energy, Reflectance Distribution Function. Digital camera, CCD sensors, performance factors. Pixel, spatial and temporal resolution. Pixels and pixel values, lookup table.
Fri. 1/29/2016: Color imaging (slides: L04)
How do we perceive colors? Additive vs. subtractive color mixing. CIE 1931 chromaticity diagram. Tri-chromatic vs. color-opponent theories. Color spaces, RGB, HSV, HSL. Color camera.
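For reference, here is a minimal NumPy sketch of the standard RGB-to-HSV conversion discussed in this lecture (illustrative only, not part of the course materials; it assumes RGB values in [0, 1]):

    import numpy as np

    def rgb_to_hsv(rgb):
        """Convert an (..., 3) array of RGB values in [0, 1] to HSV (H in degrees)."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        v = np.max(rgb, axis=-1)                             # value = max channel
        c = v - np.min(rgb, axis=-1)                         # chroma
        s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)   # saturation
        # Hue depends on which channel is the maximum
        safe_c = np.maximum(c, 1e-12)
        h = np.zeros_like(v)
        h = np.where(v == r, (g - b) / safe_c % 6.0, h)
        h = np.where(v == g, (b - r) / safe_c + 2.0, h)
        h = np.where(v == b, (r - g) / safe_c + 4.0, h)
        h = np.where(c == 0, 0.0, h) * 60.0                  # grey pixels: hue undefined, use 0
        return np.stack([h, s, v], axis=-1)

    print(rgb_to_hsv(np.array([1.0, 0.0, 0.0])))             # pure red -> H=0, S=1, V=1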
Mon. 2/1/2016 and Wed. 2/3/2016: Point operators (slides and Matlab code: L05)
Linear, affine and non-linear operators. Histogram equalization. Image intensity quantization, image thresholding, strategies for threshold selection.
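As a small illustration of the lookup-table idea behind histogram equalization, here is a hedged NumPy sketch (illustrative only, not the course's Matlab code):

    import numpy as np

    def equalize_histogram(img):
        """Histogram equalization of an 8-bit grayscale image."""
        hist = np.bincount(img.ravel(), minlength=256)       # per-level counts
        cdf = hist.cumsum()                                   # cumulative distribution
        cdf_min = cdf[cdf > 0][0]                             # first non-empty bin
        # Map each original level through the normalized CDF (the classic formula)
        lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0).astype(np.uint8)
        return lut[img]                                       # apply as a lookup table

    # Low-contrast test image: values squeezed into [100, 150]
    img = (np.random.rand(64, 64) * 50 + 100).astype(np.uint8)
    out = equalize_histogram(img)
    print(img.min(), img.max(), "->", out.min(), out.max())   # contrast is stretched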
Fri. 2/5/2016: Practical session I
Mon. 2/8/2016: Neighborhood operators (slides: L06)
Linear and non-linear filtering, convolution vs. correlation. Fourier transform, convolution theorem.
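A minimal 1-D illustration of the convolution theorem in NumPy (illustrative only, not part of the course materials):

    import numpy as np

    # Convolution theorem: convolution in the spatial domain equals
    # point-wise multiplication of spectra in the frequency domain.
    x = np.random.rand(64)          # signal
    h = np.array([1.0, 2.0, 1.0])   # small smoothing kernel

    direct = np.convolve(x, h)                        # length 64 + 3 - 1 = 66
    n = len(direct)
    via_fft = np.real(np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)))

    print(np.allclose(direct, via_fft))               # True (up to numerical error)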
Wed. 2/10/2016 and Mon. 2/15/2016: Edge detection (slides and Matlab code: L07)
Detection of discontinuities, polarity and orientation of edges. Importance of zero-crossings, Logan's theorem. Multi-scale analysis, causality. Morphological edge detection. Edge thinning, non-maximum suppression, the Canny edge detector.
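To make the gradient-based part concrete, here is a hedged NumPy sketch of Sobel filtering applied to a step edge (illustrative only; the course provides its own Matlab code for this lecture):

    import numpy as np

    def sobel_gradients(img):
        """Return gradient magnitude and orientation using 3x3 Sobel filters."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T
        p = np.pad(img.astype(float), 1, mode="edge")
        gx = np.zeros_like(img, dtype=float)
        gy = np.zeros_like(img, dtype=float)
        for i in range(3):                       # correlation with the 3x3 kernels
            for j in range(3):
                patch = p[i:i + img.shape[0], j:j + img.shape[1]]
                gx += kx[i, j] * patch
                gy += ky[i, j] * patch
        return np.hypot(gx, gy), np.arctan2(gy, gx)

    # Vertical step edge between columns 3 and 4
    img = np.zeros((8, 8)); img[:, 4:] = 1.0
    mag, theta = sobel_gradients(img)
    print(mag[4])                                # nonzero only at columns 3 and 4, i.e. on the step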
Fri. 2/12/2016: Practical session II
Wed. 2/17/2016 and Fri. 2/19/2016: Image segmentation (slides and Matlab code: L08)
Detection of lines, Hough Transform for lines and circles, Generalized Hough Transform, RANSAC. Snakes, inner and outer energy, contour evolution. Region-based segmentation, split-and-merge, quad-tree representation. Watersheds.
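A minimal NumPy sketch of RANSAC line fitting on toy data (illustrative only; the tolerance and iteration count are arbitrary assumptions, not course settings, and the course provides its own Matlab code):

    import numpy as np

    def ransac_line(points, n_iter=200, inlier_tol=1.0, seed=0):
        """Fit a 2-D line a*x + b*y + c = 0 with RANSAC; return (a, b, c) and the inlier mask."""
        rng = np.random.default_rng(seed)
        best_inliers, best_model = None, None
        for _ in range(n_iter):
            p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
            d = p2 - p1
            a, b = -d[1], d[0]                              # normal to the sampled direction
            norm = np.hypot(a, b)
            if norm == 0:
                continue
            a, b = a / norm, b / norm
            c = -(a * p1[0] + b * p1[1])
            dist = np.abs(points @ np.array([a, b]) + c)    # point-to-line distances
            inliers = dist < inlier_tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_model = inliers, (a, b, c)
        return best_model, best_inliers

    # Noisy points on the line y = 2x + 1, plus a few gross outliers
    x = np.linspace(0, 10, 50)
    pts = np.column_stack([x, 2 * x + 1 + np.random.default_rng(1).normal(0, 0.2, 50)])
    pts = np.vstack([pts, [[5, 30], [2, -20], [8, 40]]])
    model, inliers = ransac_line(pts)
    print(model, inliers.sum())                             # the gross outliers should be rejected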
Mon. 2/22/2016 and Wed. 2/24/2016: Geometric transformations in 2D (slides: L09)
Interpolation and decimation. Euclidean, similarity, affine and projective transformations. Properties of 2D transformations. Polar coordinates. Forward vs. inverse warping.
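A short NumPy sketch of inverse warping with nearest-neighbour sampling (illustrative only, not part of the course materials; the transform is a made-up similarity, a 0.5 scaling plus translation):

    import numpy as np

    def warp_affine_inverse(img, A, out_shape):
        """Inverse warping: for every output pixel, look up the source pixel A^-1 * x
        (nearest-neighbour sampling, zero outside the image)."""
        H, W = out_shape
        ys, xs = np.mgrid[0:H, 0:W]
        ones = np.ones_like(xs)
        dst = np.stack([xs.ravel(), ys.ravel(), ones.ravel()])   # homogeneous (x, y, 1)
        src = np.linalg.inv(A) @ dst                             # map back to the source image
        sx = np.round(src[0]).astype(int)
        sy = np.round(src[1]).astype(int)
        valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
        out = np.zeros(H * W)
        out[valid] = img[sy[valid], sx[valid]]
        return out.reshape(H, W)

    A = np.array([[0.5, 0, 8], [0, 0.5, 8], [0, 0, 1]])          # scale by 0.5, then shift by (8, 8)
    img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0              # 16x16 white square
    print(warp_affine_inverse(img, A, (32, 32)).sum())           # 64.0: the square shrinks to 8x8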
PART III: Pattern recognition
Fri. 2/26/2016, Mon. 2/29/2016 and Wed. 3/2/2016: Image features (slides: L10)
Geometric vs. intensity features. Global shape properties, geometric moments, Hu-moments. Wavelets, Gabor wavelets, uncertainty principle, applications. Scale Invariant Feature Transform (SIFT), detection of keypoints.
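For illustration, one possible construction of a single real-valued Gabor kernel in NumPy (parameter values are arbitrary assumptions, not the course's settings):

    import numpy as np

    def gabor_kernel(size, wavelength, theta, sigma):
        """2-D Gabor filter (real part): a sinusoidal carrier of the given wavelength
        and orientation theta, modulated by a Gaussian envelope of width sigma."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)        # rotate coordinates
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
        carrier = np.cos(2 * np.pi * xr / wavelength)
        return envelope * carrier

    g = gabor_kernel(size=21, wavelength=8.0, theta=np.pi / 4, sigma=4.0)
    print(g.shape, round(g.max(), 3))                     # (21, 21), peak of 1.0 at the centre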
Fri. 3/4/2016: Practical session III
Mon. 3/14/2016, Wed. 3/16/2016 and Fri. 3/18/2016: Image features (slides: L10)
Matching of SIFT keypoint clouds as a point-pattern matching problem, application of the Hough transform. Modification of SIFT: SURF (Speeded-Up Robust Features). Histogram of Oriented Gradients (HOG) as an image descriptor. Local Binary Patterns (LBP), image coding and mapping, uniform and non-uniform patterns. Binarized Statistical Image Features (BSIF). Example applications of LBP and BSIF (recognition of faces and hand thermal images).
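A minimal NumPy sketch of the basic 3x3 LBP operator and its 256-bin histogram (illustrative only, not part of the course materials):

    import numpy as np

    def lbp_histogram(img):
        """Basic 3x3 Local Binary Patterns: threshold the 8 neighbours against the
        centre pixel, read them as an 8-bit code, and histogram the codes."""
        c = img[1:-1, 1:-1]                                   # centre pixels
        # 8 neighbours in a fixed order (clockwise from the top-left)
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros_like(c, dtype=np.int32)
        for bit, (dy, dx) in enumerate(offsets):
            neigh = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
            codes |= (neigh >= c).astype(np.int32) << bit     # set the bit if neighbour >= centre
        return np.bincount(codes.ravel(), minlength=256)      # 256-bin LBP histogram

    img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
    hist = lbp_histogram(img)
    print(hist.shape, hist.sum())                             # (256,) bins, 62*62 codes in total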
Mon. 3/21/2016, Wed. 3/23/2016 and Wed. 3/30/2016: Object classification (slides: L11)
Bayesian inference in vision, Bayes rule, calculating posterior probabilities given prior knowledge and empirical data, Bayesian classifier, optimal classification error. Development of classifiers based on Bayes rule, Minimum Distance classifier (dMIN), k Nearest Neighbors classifier (kNN). Support Vector Machines (SVM).
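A toy NumPy sketch of the kNN classifier mentioned above, applied to synthetic two-class data (illustrative only; the value of k and the data are arbitrary assumptions):

    import numpy as np

    def knn_predict(X_train, y_train, X_test, k=3):
        """k-Nearest-Neighbours classifier: label each test point by a majority
        vote among the k closest training points (Euclidean distance)."""
        # Pairwise distances, shape (n_test, n_train)
        d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=-1)
        nearest = np.argsort(d, axis=1)[:, :k]                 # indices of the k neighbours
        votes = y_train[nearest]                               # their labels
        return np.array([np.bincount(v).argmax() for v in votes])  # majority vote

    # Two well-separated Gaussian blobs as a toy 2-class problem
    rng = np.random.default_rng(0)
    X0 = rng.normal([0, 0], 0.5, size=(50, 2))
    X1 = rng.normal([3, 3], 0.5, size=(50, 2))
    X = np.vstack([X0, X1]); y = np.array([0] * 50 + [1] * 50)
    print(knn_predict(X, y, np.array([[0.2, -0.1], [2.8, 3.1]])))  # expected: [0 1]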
Fri. 4/1/2016 and Mon. 4/4/2016: Performance evaluation (slides: L12)
Point and interval estimation, hypothesis testing. Classification error rates and curves, ROC, DET, CMC. Efficient use of data samples, training, validation and testing sets, cross validation techniques.
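A short NumPy sketch of how an ROC curve can be computed by sweeping a decision threshold over match scores (illustrative only; the score distributions are made up):

    import numpy as np

    def roc_curve(scores, labels):
        """Sweep a threshold over the sorted scores and return (FPR, TPR) pairs;
        labels are 1 for the positive class and 0 for the negative class."""
        order = np.argsort(-scores)                      # descending score order
        labels = labels[order]
        tps = np.cumsum(labels)                          # true positives accepted so far
        fps = np.cumsum(1 - labels)                      # false positives accepted so far
        tpr = np.concatenate([[0.0], tps / labels.sum()])
        fpr = np.concatenate([[0.0], fps / (1 - labels).sum()])
        return fpr, tpr

    rng = np.random.default_rng(0)
    scores = np.concatenate([rng.normal(1.0, 1.0, 100), rng.normal(-1.0, 1.0, 100)])
    labels = np.concatenate([np.ones(100, dtype=int), np.zeros(100, dtype=int)])
    fpr, tpr = roc_curve(scores, labels)
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)   # trapezoidal area under the curve
    print(round(float(auc), 3))                              # well above 0.5 (chance level)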
Wed. 4/6/2016: Image stitching (slides: L13)
Extraction and matching of keypoints. Estimating the transformation, RANSAC. Use of 2D homographies, cylindrical and spherical coordinates.
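A minimal NumPy sketch of homography estimation with the Direct Linear Transform from point correspondences (illustrative only, not part of the course materials; point normalization and RANSAC, which a real stitcher would add, are omitted):

    import numpy as np

    def fit_homography(src, dst):
        """Direct Linear Transform: estimate a 3x3 homography H with dst ~ H @ src
        from >= 4 point correspondences given as (x, y) pairs."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        A = np.array(rows, dtype=float)
        _, _, Vt = np.linalg.svd(A)
        return Vt[-1].reshape(3, 3)           # null vector of A, reshaped to H (up to scale)

    # Check on a known projective transform
    H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, 2.0], [1e-3, 2e-3, 1.0]])
    src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40]], dtype=float)
    src_h = np.column_stack([src, np.ones(len(src))])
    dst_h = src_h @ H_true.T
    dst = dst_h[:, :2] / dst_h[:, 2:]
    H_est = fit_homography(src, dst)
    print(np.allclose(H_est / H_est[2, 2], H_true, atol=1e-6))   # True: H recovered up to scale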
Fri. 4/8/2016: Practical session IV
Mon. 4/11/2016 and Wed. 4/13/2016: Feature selection (slides: L14)
Selection of feature subsets vs. feature space transformations. Sequential Forward and Backward Selection (SFS, SBS), compensating for the lack of feature re-evaluation (LRS and BDS methods). Use of mutual information, minimum Redundancy Maximum Relevance (mRMR). Linear transformations, PCA, LDA.
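A hedged sketch of Sequential Forward Selection with a toy separability criterion (illustrative only; the Fisher-like scoring function is an assumption, not the course's choice):

    import numpy as np

    def sfs(X, y, n_features, score_fn):
        """Sequential Forward Selection: greedily add the single feature that most
        improves score_fn(X[:, subset], y) until n_features are selected.
        (No re-evaluation of earlier choices, which is what LRS/BDS address.)"""
        selected, remaining = [], list(range(X.shape[1]))
        while len(selected) < n_features:
            best_f, best_score = None, -np.inf
            for f in remaining:
                s = score_fn(X[:, selected + [f]], y)
                if s > best_score:
                    best_f, best_score = f, s
            selected.append(best_f)
            remaining.remove(best_f)
        return selected

    def fisher_score(Xs, y):
        """Toy criterion: Fisher-like ratio summed over the candidate subset."""
        m0, m1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
        v = Xs[y == 0].var(axis=0) + Xs[y == 1].var(axis=0) + 1e-9
        return float(((m0 - m1) ** 2 / v).sum())

    rng = np.random.default_rng(0)
    y = np.repeat([0, 1], 100)
    X = rng.normal(size=(200, 6))
    X[:, 2] += y * 3.0                      # only feature 2 actually separates the classes
    print(sfs(X, y, 2, fisher_score))       # feature 2 (the informative one) is selected first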
PART IV: 3D Geometry in Computer Vision
Fri. 4/15/2016: Camera model and camera calibration (slides: L15)
Intrinsic and extrinsic parameters, radial and tangential lens distortions. Homogeneous coordinates, camera matrix. Calibration process.
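A minimal NumPy sketch of the pinhole projection x ~ K [R | t] X using homogeneous coordinates (illustrative only; the intrinsic values are made-up assumptions and lens distortion is omitted):

    import numpy as np

    def project_points(X_world, K, R, t):
        """Pinhole projection: x ~ K [R | t] X, using homogeneous coordinates.
        K holds the intrinsic parameters, (R, t) the extrinsic ones."""
        X_cam = X_world @ R.T + t                # world -> camera coordinates
        x = X_cam @ K.T                          # apply the intrinsic matrix
        return x[:, :2] / x[:, 2:]               # perspective division -> pixel coordinates

    # Hypothetical intrinsics: focal length 800 px, principal point at (320, 240)
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.array([0.0, 0.0, 5.0])  # camera looking down +Z, scene 5 units away
    pts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.0, 0.5, 0.0]])
    print(project_points(pts, K, R, t))
    # The origin lands at the principal point (320, 240); the others are offset by 800*0.5/5 = 80 px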
Fri. 4/22/2016: Practical session V
Mon. 4/18/2016, Wed. 4/20/2016 and Mon. 4/25/2016: 3D reconstruction (slides: L16)
Stereo vision as a method of 3D scene reconstruction. Correspondence detection, epipolar lines, epipolar constraint. Stereo image normalization, base distance, disparity and depth. Solving the correspondence problem, 1D optical flow, pixel block matching, use of dynamic programming for scan-line matching. Other 3D reconstruction methods, laser scanning, time of flight, photogrammetry, use of structured light, shape-from-X (shading, texture, focus).
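A naive NumPy sketch of SAD block matching on a synthetic rectified pair (illustrative only, not part of the course materials; the block size and disparity range are arbitrary assumptions):

    import numpy as np

    def block_matching_disparity(left, right, block=5, max_disp=16):
        """Naive dense stereo: for each pixel of the rectified left image, slide a block
        along the same scan-line of the right image and keep the shift (disparity)
        with the smallest sum of absolute differences (SAD)."""
        H, W = left.shape
        r = block // 2
        disp = np.zeros((H, W), dtype=int)
        for y in range(r, H - r):
            for x in range(r + max_disp, W - r):
                patch = left[y - r:y + r + 1, x - r:x + r + 1]
                costs = [np.abs(patch - right[y - r:y + r + 1, x - d - r:x - d + r + 1]).sum()
                         for d in range(max_disp)]
                disp[y, x] = int(np.argmin(costs))       # best-matching shift
        return disp

    # Synthetic rectified pair: the right image is the left image shifted by 4 pixels
    rng = np.random.default_rng(0)
    left = rng.random((40, 60))
    right = np.roll(left, -4, axis=1)
    d = block_matching_disparity(left, right)
    print(np.bincount(d[10:30, 25:50].ravel()).argmax())   # most pixels recover disparity 4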
Example application of Computer Vision
Wed. 4/27/2016: Iris recognition with liveness detection (slides: L17)
Iris image acquisition. Static and dynamic imitations and selected countermeasures. Image segmentation. Gabor filtering and iris code calculation, Daugman's method. Iris code matching.
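A minimal NumPy sketch of masked iris-code matching with the fractional Hamming distance, in the spirit of Daugman's method (illustrative only; the code length and noise level are arbitrary assumptions):

    import numpy as np

    def iris_code_distance(code_a, code_b, mask_a, mask_b):
        """Fractional Hamming distance between two binary iris codes, counting only
        bits that are valid (unoccluded) in both masks."""
        valid = mask_a & mask_b
        if valid.sum() == 0:
            return 1.0                                    # no comparable bits
        return np.count_nonzero((code_a ^ code_b) & valid) / valid.sum()

    rng = np.random.default_rng(0)
    code = rng.integers(0, 2, 2048).astype(bool)              # a hypothetical 2048-bit code
    mask = np.ones(2048, dtype=bool); mask[:200] = False       # e.g. eyelid occlusion
    noisy = code.copy(); flip = rng.random(2048) < 0.05        # same eye, ~5% bit noise
    noisy[flip] = ~noisy[flip]
    impostor = rng.integers(0, 2, 2048).astype(bool)           # a different eye
    print(iris_code_distance(code, noisy, mask, mask))         # ~0.05 (genuine comparison)
    print(iris_code_distance(code, impostor, mask, mask))      # ~0.5 (impostor comparison)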