Two recent additions to the faceident project, both attempts at dealing with lighting changes. Lighting is the main area that needs addressing: a change in lighting alters the image far more than the features that distinguish one face from another do, and so causes lots of matching problems.
Firstly, I’ve added support for multiple images per identity/user, so we can at least record images of people under different lighting conditions to match against:
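A minimal sketch of the multiple-images-per-identity idea. The store, the `add_face`/`best_match` names, and the mean-pixel-difference score are all my assumptions for illustration, not the project's actual code; the point is just that matching takes the closest of several stored images per person rather than a single one:

```python
import numpy as np

# Hypothetical store: identity name -> list of face images captured
# under different lighting conditions.
face_store = {}

def add_face(identity, face_image):
    """Record another reference image for an identity."""
    face_store.setdefault(identity, []).append(face_image)

def best_match(face_image):
    """Return the identity whose closest stored image has the smallest
    mean absolute pixel difference to the probe image (a stand-in for
    whatever distance metric the matcher really uses)."""
    best_id, best_score = None, float("inf")
    for identity, images in face_store.items():
        for stored in images:
            score = np.mean(np.abs(stored.astype(float) - face_image.astype(float)))
            if score < best_score:
                best_id, best_score = identity, score
    return best_id, best_score
```

With both a bright and a dark reference image stored for one person, a probe taken in dark conditions still matches them, which is exactly what a single-image store would fail at.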
Secondly (with the same test video), I’ve tried a form of adaptive blending, where the last matched face image is averaged with the current frame’s face. This helps the tracker lock onto a face more robustly:
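The blending step can be sketched as an exponential moving average over the face region. The `blend` function and the `alpha` weight are hypothetical, and 0.5 corresponds to the plain two-image average described above:

```python
import numpy as np

def blend(last_face, current_face, alpha=0.5):
    """Average the previously matched face with the current frame's face.
    alpha (assumed weight) controls how quickly the blended image tracks
    the current frame; 0.5 is a straight average, smaller values smooth
    out frame-to-frame lighting flicker more aggressively."""
    if last_face is None:
        return current_face.astype(float)
    return alpha * current_face.astype(float) + (1.0 - alpha) * last_face
```

Feeding the blended image, rather than the raw frame, into the matcher is what damps sudden lighting changes between consecutive frames.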
The next thing to look at is automatically adding to the store of images, so the system builds up a representative set covering all the conditions it has seen. This is basically a crude way of creating a model of the user’s face. The more advanced way would be to build up a statistical model; the really clever way would be to use all the faces found to learn the features that differentiate them.
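One plausible policy for that automatic growth, sketched under my own assumptions (the function name, the `score` convention where higher means a worse match, and the threshold value are all invented): keep a face as a new reference only when it matched an identity but under conditions unlike any image already stored, so the store grows to cover new lighting without filling up with near-duplicates.

```python
def maybe_extend_store(store, identity, face_image, score, novelty_threshold=30.0):
    """Hypothetical auto-add policy. If the best-match distance score is
    high, the face was seen under conditions unlike any stored reference,
    so keep it as a new reference image for that identity. Low scores
    mean the conditions are already well covered, so nothing is added."""
    if score > novelty_threshold:
        store.setdefault(identity, []).append(face_image)
    return store
```

This is the crude per-image model described above; the statistical-model and discriminative-feature approaches would replace the raw image list with something learned from it.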