Showing the expensive PR department I employ (not really), two synchronised articles in the media – one on the BBC website about livecoding, filmed a few weeks back at the Roebuck during PubCode 2, and another on Furtherfield about the Futuresonic festival, which includes a section on the groworld game.
A new version of plant eyes, mac version here.
This version has better controls (just cursors and space to grow), a lot of new pickups and corresponding ornaments which grow on you – and nutrients to increase the distance you can grow. Also new frilly bits:
The last image there is a psycho-galvanic analyzer, based on the work of L. George Lawrence. Unfortunately it seems our attempt to replicate the apparatus was unsuccessful. Perhaps this was due to the lack of signals from Ursa Major, or insufficient lunching (see the article).
In order to display the signals from environmental readings (which did work) such as light, temperature and soil moisture, I added some new ornamentation to the plants in the game.
The horns play and exude nutrients according to soil moisture data, and the inflatoes (inflatable potatoes) inflate according to the light level.
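The mapping from sensor readings to ornament behaviour could be sketched something like this. This is purely illustrative, the function names, value ranges and rates are made up, not the actual game code:

```python
# Hypothetical sketch of mapping raw sensor readings onto ornament
# parameters. Names and ranges (e.g. a 0-1023 ADC range) are
# assumptions, not taken from the real Plant Eyes code.

def normalise(value, lo, hi):
    """Clamp a raw sensor reading into the 0..1 range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def horn_nutrient_rate(soil_moisture, max_rate=5.0):
    """Horns exude nutrients in proportion to soil moisture."""
    return normalise(soil_moisture, 0, 1023) * max_rate

def inflato_size(light_level, min_size=0.2, max_size=1.0):
    """Inflatoes inflate according to the light level."""
    t = normalise(light_level, 0, 1023)
    return min_size + t * (max_size - min_size)
```

Clamping first means a noisy or out-of-range reading can never drive an ornament outside its sensible size or rate.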
This gang of motley characters are the eigenfaces expressed through time, so I can see what kind of changes each vector represents in my eigenface-space. If you look closely, they tend to express lighting changes, expressions, face shape and pose (head rotation), often jumbled together in some strange form. The next target is to separate out these ‘modes of change’ into clean vectors, so each vector shows only expression, or only lighting changes, etc. Then it becomes possible to build an appearance model which understands an incoming image in ways which are useful, e.g. ‘This is a face which looks like bob, he seems to be smiling, and the light is coming from the left’. Well, that’s the theory anyway.
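Under the hood the eigenfaces step is principal component analysis on the face pixels. A rough numpy sketch (with random stand-in ‘images’ rather than real faces):

```python
import numpy as np

# Minimal eigenface sketch: the "images" here are random flattened
# vectors standing in for real face pixels.
rng = np.random.default_rng(0)
faces = rng.random((20, 64))        # 20 faces, 64 pixels each

mean_face = faces.mean(axis=0)
centred = faces - mean_face

# SVD of the centred data: the rows of vt are the eigenfaces, ordered
# by how much of the training set's variation each one explains.
u, s, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt                      # shape (20, 64)

# Project a face into eigenface space, then reconstruct it from the
# first k components only.
k = 10
weights = (faces[0] - mean_face) @ eigenfaces[:k].T
reconstruction = mean_face + weights @ eigenfaces[:k]
```

Animating `mean_face + t * eigenfaces[i]` as t sweeps back and forth is what produces those faces changing through time, one animation per vector.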
I now have two methods of face identification. In an attempt to apply more method to my madness, I’ve been compiling images to use in benchmark tests, to find out which one is better, and by how much. I’ve used the yale face database B, which has ten people in lots of lighting conditions. I give each algorithm 4 images of each person in good lighting to train on, and the rest to recognise – and find where it breaks down.
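The benchmark itself is simple to describe: hold back the first few well-lit images of each subject for training, then count how many of the remaining images get matched to the right person. A hypothetical harness (the data layout and the `recognise` callback are my own invention for illustration):

```python
# Hypothetical benchmark harness: four well-lit training images per
# subject, the rest held out for testing. `recognise` stands in for
# either algorithm under test.

def benchmark(dataset, recognise, n_train=4):
    """dataset: {subject_id: [images, ordered easy -> hard lighting]}.
    Returns the fraction of held-out images identified correctly."""
    train = {s: imgs[:n_train] for s, imgs in dataset.items()}
    correct = total = 0
    for subject, imgs in dataset.items():
        for img in imgs[n_train:]:
            total += 1
            if recognise(train, img) == subject:
                correct += 1
    return correct / total
```

Because the test images are ordered by lighting difficulty, the same loop can also report accuracy per difficulty band to show where each algorithm starts breaking down.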
On the left is the faceident program, which uses raw differencing on the face image pixels – basic stuff. On the right is the new faceclassifier program, which uses the eigenfaces approach: a trained appearance model. The subjects should be numbered 0-4 from left to right; there are 40 images of each one, in increasingly difficult lighting.
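The raw differencing approach amounts to a nearest-neighbour search on pixel values: score the probe image against every known face and pick the closest. A sketch of the idea (not the actual faceident code):

```python
# Sketch of raw-differencing identification: score each known face by
# summed absolute pixel difference and return the closest match.

def identify(gallery, probe):
    """gallery: {name: [flattened pixel lists]}; probe: pixel list.
    Returns the name whose face is nearest in raw pixel space."""
    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    best_name, best_score = None, float("inf")
    for name, faces in gallery.items():
        for face in faces:
            score = distance(face, probe)
            if score < best_score:
                best_name, best_score = name, score
    return best_name
```

This is why it falls over under hard lighting: a shadow across the face changes the raw pixel distances far more than the identity does.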
The difference is not too staggering – the faceclassifier is 10 percentage points better than faceident (56% vs 46% correct). However, faceident is about as good as I can get that approach, while there is lots of room for tuning the faceclassifier. I need to try using different face databases for training (currently it’s using Dr Libor Spacek’s one I was playing with earlier), and also methods of projecting away things we are not interested in from the faces we want to recognise, such as pose, lighting and expression. Having benchmarks like this will help immensely with that process too, as I can compare iterations.
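One common way of ‘projecting away’ uninteresting variation (not necessarily what faceclassifier will end up doing) is to drop the leading eigenface components before matching, since those often soak up most of the lighting variation. A hedged sketch, with `drop` and `keep` purely illustrative, the right values would have to come out of the benchmarks:

```python
import numpy as np

# Sketch: compare faces in a reduced eigenface space that skips the
# first few components, on the assumption that they mostly encode
# lighting rather than identity. The drop/keep counts are guesses.

def project(face, mean_face, eigenfaces, drop=3, keep=20):
    """Weights for components drop..drop+keep, skipping the leaders."""
    return (face - mean_face) @ eigenfaces[drop:drop + keep].T

def match(weights_a, weights_b):
    """Euclidean distance between two faces in the reduced space."""
    return float(np.linalg.norm(weights_a - weights_b))
```

The benchmark harness then makes it cheap to sweep `drop` and `keep` and see which combination actually improves the recognition rate.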