In the computer vision business, you have to get quite used to seeing your own face a lot, as it’s generally the easiest subject to find for your experiments :)
This is me in lots of different lighting environments:
The standard face modelling approach is principal component analysis (PCA), which is used to understand the appearance of face images. If you consider images as points in a rather large space of possible images, PCA lets you find a subspace which describes faces (or anything else). You train the algorithm on a number of example images, such as the ones above, and you end up with what are called eigenfaces: a set of basis images describing the higher-level parameters that vary across your training set:
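To make this concrete, here’s a minimal sketch of computing eigenfaces with NumPy. The image data here is random noise standing in for a real face set, and all shapes and names are illustrative assumptions, not the exact pipeline used for the images above:

```python
import numpy as np

# Stand-in training set: N grayscale face images, each flattened to
# h*w pixels (random data here; real faces would go in its place).
rng = np.random.default_rng(0)
N, h, w = 20, 32, 32
images = rng.random((N, h * w))

# Centre the data: PCA works on deviations from the mean face.
mean_face = images.mean(axis=0)
centred = images - mean_face

# SVD of the centred data gives the principal components directly;
# the rows of Vt are the eigenfaces, most significant first.
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = Vt.reshape(-1, h, w)

# Projecting a face onto the top k eigenfaces gives its higher-level
# parameters (weights); projecting back gives an approximation of it.
k = 4
weights = centred @ Vt[:k].T          # shape (N, k)
reconstruction = weights @ Vt[:k] + mean_face
```

With real face images, displaying the rows of `eigenfaces` as pictures produces exactly the kind of gallery shown below, ordered by how much variation each one explains.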
The eigenfaces (or eigendaves in this case) are arranged in order, with the most significant ones first. This isn’t a great training set, but the first one seems to describe the angle of the light from left to right, the second is the overall lightness while the third and fourth seem to be something to do with the rotation.