It feels like the recent work on the faceident competency for Lirec has spawned far too many avenues of research and not much actual improvement, so I thought I'd spend some time consolidating the new features and running them through test videos. Firstly, I got rid of the adaptive blending: it's too fragile and causes the program to get stuck on the wrong face too easily. I kept the multiple-image idea, and added automatic storage of new images during training, as I mentioned a few days ago.
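To make the idea concrete, here is a minimal sketch of the multiple-image approach with automatic storage during training. This is not the actual faceident code; the `FaceBank` class, the distance measure, and the `add_threshold` parameter are all assumptions for illustration:

```python
import numpy as np

class FaceBank:
    """Hypothetical sketch: keeps several reference images per person
    and only stores a new training image when it differs enough from
    the images already held (this is NOT the real faceident code)."""

    def __init__(self, add_threshold=0.15):
        self.images = {}                  # name -> list of flattened face images
        self.add_threshold = add_threshold

    def distance(self, a, b):
        # L2 distance between brightness-normalised images, so overall
        # lighting level matters less than the pattern of the face
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        return np.linalg.norm(a - b)

    def best_match(self, face):
        # compare against every stored image and keep the closest one
        best_name, best_dist = None, float("inf")
        for name, refs in self.images.items():
            for ref in refs:
                d = self.distance(face, ref)
                if d < best_dist:
                    best_name, best_dist = name, d
        return best_name, best_dist

    def train(self, name, face):
        refs = self.images.setdefault(name, [])
        # automatic storage: only keep the new image if it is
        # sufficiently different from what we already have
        if not refs or min(self.distance(face, r) for r in refs) > self.add_threshold:
            refs.append(face)
```

The point of the threshold is that repeated near-identical frames don't bloat the bank, while frames captured under new lighting conditions do get added and so widen the set of images a face can match against.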
With lots more images saved from this video to match against, the results on the lighting test video are improved, even though the stored images were captured under different lighting conditions:
All these images are also ready to be plugged into a more advanced classification method, which might be the next thing to look at.
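One possible "more advanced" method, sketched here purely as an assumption about where this could go, is a k-nearest-neighbour vote over all the stored images, so a single accidental close match can't decide the identity on its own. The `bank` layout (name to list of flattened images) is hypothetical:

```python
import numpy as np

def knn_identify(face, bank, k=3):
    """Hypothetical k-NN classifier over a bank of stored face images.
    `bank` maps a person's name to a list of flattened images (assumed
    layout, not the real faceident data structure)."""
    face = face / np.linalg.norm(face)
    # score every stored image against the query face
    scored = []
    for name, refs in bank.items():
        for ref in refs:
            ref = ref / np.linalg.norm(ref)
            scored.append((np.linalg.norm(face - ref), name))
    # take the k closest images and let them vote on the identity
    scored.sort(key=lambda pair: pair[0])
    votes = {}
    for _, name in scored[:k]:
        votes[name] = votes.get(name, 0) + 1
    return max(votes, key=votes.get)
```

Having several images per person is exactly what makes this kind of voting meaningful: with only one stored image per face it degenerates back to single-nearest-match.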