Last Updated on October 30, 2021

Recent advances in artificial intelligence have made face recognition a much less challenging problem. But in the past, researchers made various attempts and developed various techniques to make computers capable of identifying people. One of the early attempts with moderate success is **eigenface**, which is based on linear algebra techniques.

In this tutorial, we will see how we can build a primitive face recognition system with some simple linear algebra techniques, such as principal component analysis.

After completing this tutorial, you will know:

- The development of the eigenface technique
- How to use principal component analysis to extract characteristic images from an image dataset
- How to express any image as a weighted sum of the characteristic images
- How to compare the similarity of images from the weights of the principal components

Let’s start.

## Tutorial Overview

This tutorial is divided into three parts; they are:

- Image and Face Recognition
- Overview of Eigenface
- Implementing Eigenface

## Image and Face Recognition

In a computer, images are represented as a matrix of pixels, with each pixel a particular color coded in some numerical values. It is natural to ask whether a computer can read the picture and understand what it is, and if so, whether we can describe the logic using matrix mathematics. To be less ambitious, people tried to limit the scope of this problem to identifying human faces. An early attempt at face recognition was to consider the matrix as high-dimensional data, infer a lower-dimensional information vector from it, and then try to recognize the person in the lower dimension. This was necessary in the old days because computers were not powerful and the amount of memory was very limited. However, by exploring how to **compress** an image to a much smaller size, we developed the ability to compare whether two pictures portray the same human face even if the pictures are not identical.

In 1987, a paper by Sirovich and Kirby considered the idea that all pictures of human faces can be a weighted sum of a few “key pictures.” Sirovich and Kirby called these key pictures the “eigenpictures,” as they are the eigenvectors of the covariance matrix of the mean-subtracted pictures of human faces. In the paper they indeed provided the algorithm of principal component analysis of the face picture dataset in its matrix form. And the weights used in the weighted sum indeed correspond to the projection of the face picture onto each eigenpicture.

In 1991, a paper by Turk and Pentland coined the term “eigenface.” They built on top of the idea of Sirovich and Kirby and used the weights and eigenpictures as characteristic features to recognize faces. The paper by Turk and Pentland laid out a memory-efficient way to compute the eigenpictures. It also proposed an algorithm on how the face recognition system can operate, including how to update the system to include new faces and how to combine it with a video capture system. The same paper also pointed out that the concept of eigenface can help with reconstruction of a partially obstructed picture.

## Overview of Eigenface

Before we jump into the code, let’s outline the steps in using eigenface for face recognition, and point out how some simple linear algebra techniques can help with the task.

Assume we have a bunch of pictures of human faces, all in the same pixel dimension (e.g., all are r×c grayscale images). If we get M different pictures and **vectorize** each picture into L=r×c pixels, we can present the entire dataset as an L×M matrix (let’s call it matrix $A$), where each element in the matrix is the pixel’s grayscale value.

Recall that principal component analysis (PCA) can be applied to any matrix, and the result is a number of vectors called the **principal components**. Each principal component has a length equal to the column length of the matrix. The different principal components from the same matrix are orthogonal to each other, meaning that the vector dot product of any two of them is zero. Therefore the various principal components form a vector space in which each column of the matrix can be represented as a linear combination (i.e., weighted sum) of the principal components.
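As a small check of this orthogonality property, we can fit PCA on a toy random matrix (not the face dataset) and verify that the resulting components have a zero dot product with each other. This is just a minimal sketch; the data here is made up for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

# A toy "dataset": 10 samples of 6-dimensional vectors
rng = np.random.default_rng(42)
X = rng.normal(size=(10, 6))

pca = PCA().fit(X)
components = pca.components_  # each row is one principal component

# Any two distinct components are orthogonal: their dot product is (numerically) zero
dot = components[0] @ components[1]
print(abs(dot) < 1e-10)  # True
```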

The way it is done is to first take $C = A - a$, where $a$ is the mean vector of the matrix $A$. So $C$ is the matrix obtained by subtracting the mean vector $a$ from each column of $A$. Then the covariance matrix is

$$S = C \cdot C^T$$

from which we find its eigenvectors and eigenvalues. The principal components are these eigenvectors in decreasing order of the eigenvalues. Because matrix $S$ is an L×L matrix, we may consider finding the eigenvectors of the M×M matrix $C^T \cdot C$ instead, as the eigenvector $v$ of $C^T \cdot C$ can be transformed into the eigenvector $u$ of $C \cdot C^T$ by $u = C \cdot v$, except we usually prefer to write $u$ as a normalized vector (i.e., the norm of $u$ is 1).
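This trick can be verified numerically. The sketch below uses a random mean-subtracted matrix $C$ (made-up data, not faces) and checks that an eigenvector of the small M×M matrix, once mapped through $C$ and normalized, is indeed an eigenvector of the large L×L matrix with the same eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
L, M = 100, 5                  # L pixels, M images (L >> M)
C = rng.normal(size=(L, M))    # mean-subtracted data, columns are images

# Eigendecomposition of the small MxM matrix instead of the large LxL one
vals, V = np.linalg.eigh(C.T @ C)   # eigenvalues in ascending order
v = V[:, -1]                        # eigenvector with the largest eigenvalue

# Map it to an eigenvector of the large matrix S = C C^T and normalize
u = C @ v
u = u / np.linalg.norm(u)

# Verify: S u = lambda u with the same eigenvalue
S = C @ C.T
print(np.allclose(S @ u, vals[-1] * u))  # True
```

This is why the approach is memory-efficient: for a face dataset, M (number of images) is far smaller than L (number of pixels).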

The physical meaning of the principal component vectors of $A$, or equivalently the eigenvectors of $S = C \cdot C^T$, is that they are the key directions from which we can construct the columns of matrix $A$. The relative importance of the different principal component vectors can be inferred from the corresponding eigenvalues. The greater the eigenvalue, the more useful (i.e., it holds more information about $A$) the principal component vector. Hence we can keep only the first K principal component vectors. If matrix $A$ is the dataset of face pictures, the first K principal component vectors are the top K most important “face pictures.” We call them the **eigenface** pictures.

For any given face picture, we can project its mean-subtracted version onto the eigenface pictures using the vector dot product. The result tells how closely this face picture is related to the eigenface. If the face picture is totally unrelated to the eigenface, we would expect a result of zero. For the K eigenfaces, we can find K dot products for any given face picture. We can present the result as the **weights** of this face picture with respect to the eigenfaces. The weights are usually presented as a vector.

Conversely, if we have a weight vector, we can add up the eigenfaces scaled by the weights and reconstruct a new face. Let’s denote the eigenfaces as matrix $F$, which is an L×K matrix, and the weight vector $w$ is a column vector. Then for any $w$ we can construct the picture of a face as

$$z = F \cdot w$$

in which $z$ results as a column vector of length L. Because we are only using the top K principal component vectors, we should expect the resulting face picture to be distorted but to retain some facial characteristics.

Since the eigenface matrix is constant for the dataset, a varying weight vector $w$ means a varying face picture. Therefore we can expect pictures of the same person to produce similar weight vectors, even if the pictures are not identical. As a result, we may use the distance between two weight vectors (such as the L2-norm) as a metric of how much two pictures resemble each other.
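To make this concrete, here is a minimal sketch with a made-up eigenface matrix `F` and mean vector (the names and sizes are illustrative, not the real dataset): two slightly different versions of the same face vector give nearby weight vectors, while an unrelated face gives a distant one:

```python
import numpy as np

# Hypothetical eigenface matrix F (L x K) and a mean vector, for illustration only
rng = np.random.default_rng(1)
L, K = 64, 4
F = rng.normal(size=(L, K))
mean = rng.normal(size=L)

x1 = rng.normal(size=L)
x2 = x1 + 0.01 * rng.normal(size=L)   # a slightly perturbed "same face"
x3 = rng.normal(size=L)               # an unrelated "different face"

def weight(x):
    # Project the mean-subtracted image onto each eigenface
    return F.T @ (x - mean)

# L2 distance between weight vectors as a similarity metric
d_same = np.linalg.norm(weight(x1) - weight(x2))
d_diff = np.linalg.norm(weight(x1) - weight(x3))
print(d_same < d_diff)  # True
```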

## Implementing Eigenface

Now we attempt to implement the idea of eigenface with numpy and scikit-learn. We will also make use of OpenCV to read picture files. You may need to install the relevant package with the `pip` command:

```shell
pip install opencv-python
```

The dataset we use is the ORL Database of Faces, which is quite dated, but we can download it from Kaggle:

The file is a zip file of around 4MB. It has pictures of 40 people, and each person has 10 pictures, for a total of 400 pictures. In the following, we assume the file is downloaded to the local directory and named `attface.zip`.

We may extract the zip file to get the pictures, or we can make use of the `zipfile` package in Python to read the contents from the zip file directly:

```python
import cv2
import zipfile
import numpy as np

faces = {}
with zipfile.ZipFile("attface.zip") as facezip:
    for filename in facezip.namelist():
        if not filename.endswith(".pgm"):
            continue # not a face picture
        with facezip.open(filename) as image:
            # If we extracted files from zip, we can use cv2.imread(filename) instead
            faces[filename] = cv2.imdecode(np.frombuffer(image.read(), np.uint8), cv2.IMREAD_GRAYSCALE)
```

The above reads every PGM file in the zip. PGM is a grayscale image file format. We read each PGM file into a byte string through `image.read()` and convert it into a numpy array of bytes. Then we use OpenCV to decode the byte string into an array of pixels using `cv2.imdecode()`. The file format will be detected automatically by OpenCV. We save each picture into a Python dictionary `faces` for later use.

Here we can take a look at these pictures of human faces, using matplotlib:

```python
...
import matplotlib.pyplot as plt

fig, axes = plt.subplots(4,4,sharex=True,sharey=True,figsize=(8,10))
faceimages = list(faces.values())[-16:] # take last 16 images
for i in range(16):
    axes[i%4][i//4].imshow(faceimages[i], cmap="gray")
plt.show()
```

We can also find the pixel size of each picture:

```python
...
faceshape = list(faces.values())[0].shape
print("Face image shape:", faceshape)
```

```
Face image shape: (112, 92)
```

The pictures of faces are identified by their filenames in the Python dictionary. We can take a peek at the filenames:

```python
...
print(list(faces.keys())[:5])
```

```
['s1/1.pgm', 's1/10.pgm', 's1/2.pgm', 's1/3.pgm', 's1/4.pgm']
```

and therefore we can put faces of the same person into the same class. There are 40 classes and a total of 400 pictures:

```python
...
classes = set(filename.split("/")[0] for filename in faces.keys())
print("Number of classes:", len(classes))
print("Number of pictures:", len(faces))
```

```
Number of classes: 40
Number of pictures: 400
```

To illustrate the capability of using eigenface for recognition, we want to hold out some of the pictures before we generate our eigenfaces. We hold out all the pictures of one person, as well as one picture of another person, as our test set. The remaining pictures are vectorized and converted into a 2D numpy array:

```python
...
# Take classes 1-39 for eigenfaces, keep entire class 40 and
# image 10 of class 39 as out-of-sample test
facematrix = []
facelabel = []
for key, val in faces.items():
    if key.startswith("s40/"):
        continue # this is our test set
    if key == "s39/10.pgm":
        continue # this is our test set
    facematrix.append(val.flatten())
    facelabel.append(key.split("/")[0])

# Create facematrix as (n_samples, n_pixels) matrix
facematrix = np.array(facematrix)
```

Now we can perform principal component analysis on this dataset matrix. Instead of computing the PCA step by step, we make use of the PCA function in scikit-learn, from which we can easily retrieve all the results we need:

```python
...
# Apply PCA to extract eigenfaces
from sklearn.decomposition import PCA

pca = PCA().fit(facematrix)
```

We can identify how significant each principal component is from the explained variance ratio:

```python
...
print(pca.explained_variance_ratio_)
```

```
[1.77824822e-01 1.29057925e-01 6.67093882e-02 5.63561346e-02
 5.13040312e-02 3.39156477e-02 2.47893586e-02 2.27967054e-02
 1.95632067e-02 1.82678428e-02 1.45655853e-02 1.38626271e-02
 1.13318896e-02 1.07267786e-02 9.68365599e-03 9.17860717e-03
 8.60995215e-03 8.21053028e-03 7.36580634e-03 7.01112888e-03
 6.69450840e-03 6.40327943e-03 5.98295099e-03 5.49298705e-03
 5.36083980e-03 4.99408106e-03 4.84854321e-03 4.77687371e-03
 ...
 1.12203331e-04 1.11102187e-04 1.08901471e-04 1.06750318e-04
 1.05732991e-04 1.01913786e-04 9.98164783e-05 9.85530209e-05
 9.51582720e-05 8.95603083e-05 8.71638147e-05 8.44340263e-05
 7.95894118e-05 7.77912922e-05 7.06467912e-05 6.77447444e-05
 2.21225931e-32]
```

or we can simply pick a moderate number, say, 50, and consider that many principal component vectors as the eigenfaces. For convenience, we extract the eigenfaces from the PCA result and store them as a numpy array. Note that the eigenfaces are stored as rows in the matrix. We can convert one back to 2D if we want to display it. Below, we show some of the eigenfaces to see what they look like:

```python
...
# Take the first K principal components as eigenfaces
n_components = 50
eigenfaces = pca.components_[:n_components]

# Show the first 16 eigenfaces
fig, axes = plt.subplots(4,4,sharex=True,sharey=True,figsize=(8,10))
for i in range(16):
    axes[i%4][i//4].imshow(eigenfaces[i].reshape(faceshape), cmap="gray")
plt.show()
```

From this picture, we can see that the eigenfaces are blurry faces, but indeed each eigenface holds some facial characteristics that can be used to build a picture.

Since our goal is to build a face recognition system, we first calculate the weight vector for each input picture:

```python
...
# Generate weights as a KxN matrix where K is the number of eigenfaces and N the number of samples
weights = eigenfaces @ (facematrix - pca.mean_).T
```

The above code is using matrix multiplication to replace loops. It is roughly equivalent to the following:

```python
...
weights = []
for i in range(facematrix.shape[0]):
    weight = []
    for j in range(n_components):
        w = eigenfaces[j] @ (facematrix[i] - pca.mean_)
        weight.append(w)
    weights.append(weight)
```

At this point, our face recognition system is complete. We used pictures of 39 persons to build our eigenfaces. We use the test picture that belongs to one of these 39 persons (the one held out from the matrix that trained the PCA model) to see if it can successfully recognize the face:

```python
...
# Test on out-of-sample image of existing class
query = faces["s39/10.pgm"].reshape(1,-1)
query_weight = eigenfaces @ (query - pca.mean_).T
euclidean_distance = np.linalg.norm(weights - query_weight, axis=0)
best_match = np.argmin(euclidean_distance)
print("Best match %s with Euclidean distance %f" % (facelabel[best_match], euclidean_distance[best_match]))

# Visualize
fig, axes = plt.subplots(1,2,sharex=True,sharey=True,figsize=(8,6))
axes[0].imshow(query.reshape(faceshape), cmap="gray")
axes[0].set_title("Query")
axes[1].imshow(facematrix[best_match].reshape(faceshape), cmap="gray")
axes[1].set_title("Best match")
plt.show()
```

Above, we first subtract the vectorized image by the average vector retrieved from the PCA result. Then we compute the projection of this mean-subtracted vector onto each eigenface and take that as the weights of this picture. Afterward, we compare the weight vector of the picture in question to that of each existing picture and find the one with the smallest L2 distance as the best match. We can see that it can indeed successfully find the closest match in the same class:

```
Best match s39 with Euclidean distance 1559.997137
```

and we can visualize the result by comparing the closest match side by side:

We can try again with a picture of the 40th person, whom we held out from the PCA. We will never get it correct because this is a new person to our model. However, we want to see how wrong it can be, as well as the value of the distance metric:

```python
...
# Test on out-of-sample image of new class
query = faces["s40/1.pgm"].reshape(1,-1)
query_weight = eigenfaces @ (query - pca.mean_).T
euclidean_distance = np.linalg.norm(weights - query_weight, axis=0)
best_match = np.argmin(euclidean_distance)
print("Best match %s with Euclidean distance %f" % (facelabel[best_match], euclidean_distance[best_match]))

# Visualize
fig, axes = plt.subplots(1,2,sharex=True,sharey=True,figsize=(8,6))
axes[0].imshow(query.reshape(faceshape), cmap="gray")
axes[0].set_title("Query")
axes[1].imshow(facematrix[best_match].reshape(faceshape), cmap="gray")
axes[1].set_title("Best match")
plt.show()
```

We can see that its best match has a greater L2 distance:

```
Best match s5 with Euclidean distance 2690.209330
```

but we can see that the mistaken result has some resemblance to the picture in question:

In the paper by Turk and Pentland, it is suggested that we set up a threshold for the L2 distance. If the best match’s distance is less than the threshold, we consider the face recognized as the same person. If the distance is above the threshold, we claim the picture is someone we have never seen, even if a best match can be found numerically. In that case, we may consider including this person in our model by remembering the new weight vector.
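This threshold logic can be sketched as a small helper function. The function name, the threshold value, and the toy numbers below are made up for illustration; the threshold in practice is a tuning choice for the dataset at hand:

```python
import numpy as np

def recognize(query_weight, weights, labels, threshold):
    """Return the matched label, or None if the query looks like a new person.

    query_weight: (K,) weight vector of the query image
    weights:      (K, N) weight matrix of the known images
    labels:       list of N class labels
    threshold:    maximum L2 distance to accept a match (a tuning choice)
    """
    distances = np.linalg.norm(weights - query_weight.reshape(-1, 1), axis=0)
    best = np.argmin(distances)
    if distances[best] < threshold:
        return labels[best]
    return None  # unseen person; we could enroll the new weight vector here

# Toy example with K=2 eigenfaces and N=3 known images
weights = np.array([[0.0, 1.0, 5.0],
                    [0.0, 1.0, 5.0]])
labels = ["s1", "s1", "s2"]
print(recognize(np.array([0.1, 0.1]), weights, labels, threshold=2.0))   # s1
print(recognize(np.array([10.0, 10.0]), weights, labels, threshold=2.0)) # None
```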

Actually, we can go one step further and generate new faces using eigenfaces, but the result is not very realistic. Below, we generate one using a random weight vector and show it side by side with the “average face”:

```python
...
# Visualize the mean face and random face
fig, axes = plt.subplots(1,2,sharex=True,sharey=True,figsize=(8,6))
axes[0].imshow(pca.mean_.reshape(faceshape), cmap="gray")
axes[0].set_title("Mean face")
random_weights = np.random.randn(n_components) * weights.std()
newface = random_weights @ eigenfaces + pca.mean_
axes[1].imshow(newface.reshape(faceshape), cmap="gray")
axes[1].set_title("Random face")
plt.show()
```

How good is eigenface? It overachieves surprisingly, given the simplicity of the model. However, Turk and Pentland tested it under various conditions. They found that its accuracy was “an average of 96% with light variation, 85% with orientation variation, and 64% with size variation.” Hence it may not be very practical as a face recognition system. After all, the picture as a matrix will be distorted a lot in the principal component domain after zooming in and out. Therefore the modern alternative is to use a convolutional neural network, which is more tolerant of various transformations.

Putting everything together, the following is the complete code:

```python
import zipfile
import cv2
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Read face images from zip file on the fly
faces = {}
with zipfile.ZipFile("attface.zip") as facezip:
    for filename in facezip.namelist():
        if not filename.endswith(".pgm"):
            continue # not a face picture
        with facezip.open(filename) as image:
            # If we extracted files from zip, we can use cv2.imread(filename) instead
            faces[filename] = cv2.imdecode(np.frombuffer(image.read(), np.uint8), cv2.IMREAD_GRAYSCALE)

# Show sample faces using matplotlib
fig, axes = plt.subplots(4,4,sharex=True,sharey=True,figsize=(8,10))
faceimages = list(faces.values())[-16:] # take last 16 images
for i in range(16):
    axes[i%4][i//4].imshow(faceimages[i], cmap="gray")
print("Showing sample faces")
plt.show()

# Print some details
faceshape = list(faces.values())[0].shape
print("Face image shape:", faceshape)

classes = set(filename.split("/")[0] for filename in faces.keys())
print("Number of classes:", len(classes))
print("Number of images:", len(faces))

# Take classes 1-39 for eigenfaces, keep entire class 40 and
# image 10 of class 39 as out-of-sample test
facematrix = []
facelabel = []
for key, val in faces.items():
    if key.startswith("s40/"):
        continue # this is our test set
    if key == "s39/10.pgm":
        continue # this is our test set
    facematrix.append(val.flatten())
    facelabel.append(key.split("/")[0])

# Create an NxM matrix with N images and M pixels per image
facematrix = np.array(facematrix)

# Apply PCA and take the first K principal components as eigenfaces
pca = PCA().fit(facematrix)
n_components = 50
eigenfaces = pca.components_[:n_components]

# Show the first 16 eigenfaces
fig, axes = plt.subplots(4,4,sharex=True,sharey=True,figsize=(8,10))
for i in range(16):
    axes[i%4][i//4].imshow(eigenfaces[i].reshape(faceshape), cmap="gray")
print("Showing the eigenfaces")
plt.show()

# Generate weights as a KxN matrix where K is the number of eigenfaces and N the number of samples
weights = eigenfaces @ (facematrix - pca.mean_).T
print("Shape of the weight matrix:", weights.shape)

# Test on out-of-sample image of existing class
query = faces["s39/10.pgm"].reshape(1,-1)
query_weight = eigenfaces @ (query - pca.mean_).T
euclidean_distance = np.linalg.norm(weights - query_weight, axis=0)
best_match = np.argmin(euclidean_distance)
print("Best match %s with Euclidean distance %f" % (facelabel[best_match], euclidean_distance[best_match]))

# Visualize
fig, axes = plt.subplots(1,2,sharex=True,sharey=True,figsize=(8,6))
axes[0].imshow(query.reshape(faceshape), cmap="gray")
axes[0].set_title("Query")
axes[1].imshow(facematrix[best_match].reshape(faceshape), cmap="gray")
axes[1].set_title("Best match")
plt.show()

# Test on out-of-sample image of new class
query = faces["s40/1.pgm"].reshape(1,-1)
query_weight = eigenfaces @ (query - pca.mean_).T
euclidean_distance = np.linalg.norm(weights - query_weight, axis=0)
best_match = np.argmin(euclidean_distance)

# Visualize
fig, axes = plt.subplots(1,2,sharex=True,sharey=True,figsize=(8,6))
axes[0].imshow(query.reshape(faceshape), cmap="gray")
axes[0].set_title("Query")
axes[1].imshow(facematrix[best_match].reshape(faceshape), cmap="gray")
axes[1].set_title("Best match")
plt.show()
```


## Summary

In this tutorial, you discovered how to build a face recognition system using eigenface, which is derived from principal component analysis.

Specifically, you learned:

- How to extract characteristic images from an image dataset using principal component analysis
- How to use the set of characteristic images to create a weight vector for any seen or unseen image
- How to use the weight vectors of different images to measure their similarity, and apply this technique to face recognition
- How to generate a new random image from the characteristic images
