Benchmarking identification

I now have two methods of face identification. In an attempt to apply more method to my madness, I’ve been compiling images for benchmark tests to find out which one is better, and by how much. I’ve used the Yale Face Database B, which has ten people photographed under lots of lighting conditions; I give each algorithm 4 images of each person in good lighting to train on, and the rest to recognise, to find where it breaks down.
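
To make “better, and by how much” concrete, the score is just accuracy over the held-out images. Here is a minimal sketch of that kind of harness, assuming images arrive as flattened numpy arrays – the function names and data layout are illustrative, not the actual test code:

```python
import numpy as np

def benchmark(identify, train, test):
    """Score one identification method on a train/test split.

    identify: function(train_images, train_labels, image) -> predicted label
    train/test: lists of (label, image) pairs, each image a flat numpy array
    """
    train_labels = [label for label, _ in train]
    train_images = [img for _, img in train]
    hits = sum(1 for label, img in test
               if identify(train_images, train_labels, img) == label)
    return hits / len(test)   # fraction of test images correctly identified
```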

On the left is the faceident program, which uses raw differencing on the face image pixels – basic stuff. On the right is the new faceclassifier program, which uses the eigenfaces approach, a trained appearance model. The subjects are numbered 0-4 from left to right, with 40 images of each one in increasingly difficult lighting.
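
Roughly, the differencing approach is nearest neighbour on raw pixels: compare the unknown face against every training face and pick the closest one. A minimal version, assuming a sum-of-absolute-differences metric (faceident’s actual metric may differ):

```python
import numpy as np

def identify_by_differencing(train_images, train_labels, image):
    """Nearest neighbour on raw pixel values: sum the absolute
    per-pixel differences against each training face and return
    the label of the closest match."""
    distances = [np.abs(t - image).sum() for t in train_images]
    return train_labels[int(np.argmin(distances))]
```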

The difference is not too staggering – the faceclassifier is 10 percentage points better than faceident (56% vs 46% correct). However, faceident is about as good as I can get that approach, while there is lots of room for tuning the faceclassifier. I need to try training on different face databases (currently it’s using Dr Libor Spacek’s one I was playing with earlier), and also try methods of projecting away the things we are not interested in, such as pose, lighting and expression, from the faces we want to recognise. Having benchmarks like this will help immensely with that process too, as I can compare iterations.
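
For reference, the eigenfaces approach boils down to PCA on the flattened training faces, then nearest-neighbour matching in the reduced “face space”. A rough numpy sketch of the core of it (faceclassifier itself is more involved, and the component count here is arbitrary):

```python
import numpy as np

class Eigenfaces:
    """PCA on flattened training faces, nearest neighbour in face space."""

    def __init__(self, n_components=20):
        self.n_components = n_components

    def train(self, images, labels):
        X = np.stack(images).astype(np.float64)   # one row per flattened face
        self.mean = X.mean(axis=0)
        X -= self.mean
        # SVD of the centred data gives the principal components
        # ("eigenfaces") as the rows of vt
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        self.basis = vt[:self.n_components]
        self.coords = X @ self.basis.T            # training faces in face space
        self.labels = list(labels)

    def identify(self, image):
        c = (image - self.mean) @ self.basis.T    # project the unknown face
        d = np.linalg.norm(self.coords - c, axis=1)
        return self.labels[int(np.argmin(d))]
```

This is also where the projecting-away would happen: one way to think about it is that the first few components often soak up illumination changes rather than identity, so dropping or normalising those components is a cheap way to discount lighting.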
