As part of my research for Germination X, I’ve been reading a Lirec deliverable report on facial and body expressions for companions (robots and graphical characters). It covers a lot of non-verbal communication, and is useful for me as it concerns displaying the raw values coming from the FAtiMA AI system in a slightly more research-grounded manner than the ad-hoc animations we are initially using in the game. This is a very different approach to character design/animation for me – but it’s great to see Tex Avery being referenced.
The document starts by explaining the work of Paul Ekman and Wallace V. Friesen in categorising ranges of basic emotional expressions and how they can be combined into blends and more complex expressions. They went on to develop the Facial Action Coding System for encoding expressions.
Some important aspects of the art of animation are discussed, including breaking the rules of physics (easier for a graphical character, perhaps) in order to achieve exaggerated expressions and movements. Animation has already built up a well understood set of rules and techniques which are now deeply rooted in our expectations via puppetry as well as traditional animation. Even the way simple robots move can be thought about carefully, using slow in/out to make motion less rigid.
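The slow in/out idea is simple to sketch in code. Here is a minimal example of my own (not from the report) using a smoothstep easing curve, so a motion – say a servo or a character limb – accelerates from rest and decelerates back to rest instead of moving at a constant, rigid speed:

```python
def ease_in_out(t):
    """Smoothstep easing: slow at the start and end, fastest in the middle.
    t is normalised time in [0, 1]."""
    return t * t * (3.0 - 2.0 * t)

def interpolate(start, end, t):
    """Position between start and end at eased time t."""
    return start + (end - start) * ease_in_out(t)

# Sample a hypothetical head-tilt moving from 0 to 90 degrees over 11 frames;
# note how the steps bunch up near the ends of the motion.
frames = [round(interpolate(0.0, 90.0, i / 10.0), 1) for i in range(11)]
```

Compared with linear interpolation, the only change is passing `t` through the easing function, which is why the technique is so cheap to add even to very simple robots.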
Here is the description of surprise, in text form, along with its corresponding encoding:
Surprise is described as having the eyebrows raised, eyes wide open, the jaw drop open, and the head tilts upward. In terms of timing, it is a fast motion. In our notation, we define it as: IB(1,5)+OB(1,5)+UE(1,5)+LE(1,5)+JA(1,4)+HT(1,3)+SPEED(fast)
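This notation is regular enough to parse mechanically. The sketch below is my own reading of it: I’m assuming each token is an action unit code (IB presumably inner brow, JA jaw, and so on) with a pair of numbers I take to be a range, plus a SPEED qualifier – the report may define these differently:

```python
import re

def parse_expression(encoding):
    """Split an encoding like 'IB(1,5)+...+SPEED(fast)' into a dict mapping
    each unit code to its arguments. Numeric pairs are assumed to be ranges;
    SPEED keeps its label as text."""
    units = {}
    for token in encoding.split("+"):
        m = re.match(r"([A-Z]+)\(([^)]*)\)", token)
        name, args = m.group(1), m.group(2).split(",")
        units[name] = tuple(args) if name == "SPEED" else tuple(int(a) for a in args)
    return units

surprise = parse_expression(
    "IB(1,5)+OB(1,5)+UE(1,5)+LE(1,5)+JA(1,4)+HT(1,3)+SPEED(fast)")
```

Having the expression as data rather than text means the same encoding could drive either a graphical face or a robot’s motors.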
An area I find really interesting is finding more abstract ways of expressing emotion, for companions where facial animation is not possible. Eva Heller’s work in 1989 (see picture above) on linking combinations of colour to emotional meaning is exciting; apparently this work was the result of asking 1,888 people to match colours to abstract feelings. These colours can be expressed easily with lights or simple displays.
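Driving lights from an emotion label only needs a lookup table. The RGB values and labels below are placeholders of my own invention, not Heller’s actual survey results – the point is the shape of the mapping, one emotion to a combination of colours:

```python
# Placeholder colour combinations -- illustrative only, not Heller's data.
EMOTION_COLOURS = {
    "joy":   [(255, 220, 0), (255, 120, 0)],   # yellows and oranges
    "anger": [(200, 0, 0), (30, 30, 30)],      # red paired with near-black
    "calm":  [(0, 90, 180), (0, 160, 120)],    # blues and greens
}

def colours_for(emotion):
    """Return the colour combination to show on a light strip or display,
    falling back to neutral grey for unknown emotions."""
    return EMOTION_COLOURS.get(emotion, [(128, 128, 128)])
```

A companion could cycle or blend these colours as the underlying FAtiMA emotion values change.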
Of course sound also has a big role to play. The table above comes from an experiment by Scherer & Oshinsky in 1977, in which 48 undergraduates were exposed to “sawtooth wave bursts” from a MOOG synthesiser, ranging from simple tones to Beethoven melodies, and then asked to rate the sounds in terms of corresponding emotions.
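For reference, a sawtooth burst like the simplest of those stimuli is trivial to generate. The frequency, duration and sample rate here are my own guesses – the experiment’s actual settings aren’t given in the post:

```python
def sawtooth_burst(freq=440.0, duration=0.25, rate=8000):
    """Generate one sawtooth wave burst as a list of samples in [-1, 1).
    The phase ramps linearly from -1 up to 1 and snaps back, freq times
    per second -- the classic 'buzzy' synthesiser tone."""
    n = int(duration * rate)
    return [2.0 * ((i * freq / rate) % 1.0) - 1.0 for i in range(n)]

burst = sawtooth_burst()
```

Varying parameters like pitch, tempo and envelope across bursts like this is essentially what the emotion-rating experiment manipulated.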