A magazine where the digital world meets the real world.
The computer science of changing faces
The gory plot line of the 1997 movie Face/Off, starring John Travolta and Nicolas Cage, involves what was then the science-fiction medical process of a face transplant. Government agent Sean Archer must find a ticking bomb planted by terrorist Castor Troy. To do this he takes on Troy's identity by having Castor's face surgically transferred onto his own, so he can infiltrate the terrorist's group in prison. Just to complicate things, the real Troy (the baddy) later manages to get hold of Archer's (the goody's) face and, pretending to be Archer, helps his twin brother Pollux Troy escape from prison. The real Archer is left in prison looking like the baddy, while Castor, looking like the goody, goes off to destroy all the documents that would prove the swap ever took place. If he succeeds, Archer will be left with Castor's identity, to rot in jail. Typical John Woo (the director) action follows, with lots of two-handed gun shooting and chases. Eventually it all gets sorted out, but it goes to prove that faces are a key part of our personal identity, and that shifting them around can be very confusing.
Meanwhile, medical science has progressed to the stage where we can actually transplant a face successfully, to help people whose own faces have been disfigured. Computer scientists are also developing ways to digitally create and transfer faces (far less gruesomely), which could open up some fascinating new applications.
The psychology of faces
Humans seem to have a special part of the brain for processing faces. Since we are social animals, it's obviously important that we can recognise friends and family, and tell from their expressions what kind of mood they are in. Psychologists have studied how we use facial information for years, and have discovered that it's far from easy. Your brain is doing a lot of difficult calculations, and using lots of assumptions to make the job easier. For example, when looking at faces we tend to think of them as convex, that is, bulging outwards towards us, which of course they normally do. The brain is so convinced that faces are convex that even when we look at the inside of a mask we see a solid face - the hollow face illusion. The way our faces move is also very important. It's been found that if you use motion tracking to transfer the movements of an individual onto a computer-generated face, observers can tell the gender of the original person from the pattern of movements alone, even though the computerised face itself is genderless (looking as much female as male); women and men have different ways of moving their faces.
This idea that we can build faces from component parts is at the heart of the identikit process used to try to reconstruct the face of a criminal in a police investigation. In the original system the witness was given a book filled with different eyes, ears, noses, mouths, hairlines and so on, and from this they selected the parts that they believed were like those of the criminal. It had some success. The problem, though, is that we tend not to see faces like that. We don't see them as a combination of bits. We see them as whole faces, and so often the identikit pictures created bit by bit didn't really look like the criminal at all. The computer-based E-Fit system tries to overcome this by taking account of some of the psychology of face recognition. By running the process electronically, the face-building elements stored in the database can be blended together to form a realistic-looking face. One problem is that the witness building the face needs to say what isn't quite right with it. That can be difficult. A software system called Evo-Fit, developed in the Psychology Department at the University of Stirling, overcomes this by creating a range of similar faces rather than a single face. The witness selects the ones that are closest to the criminal. That is often easier to do. The system then uses genetic algorithms, a simple model of biological evolution, to breed more similar faces, and, step by step, lets the witness focus in on the best likeness.
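Curious what that breeding step might look like? Here's a minimal sketch (not the actual Evo-Fit code, just an illustration of the idea): each candidate face is a list of numbers, the witness's chosen faces are the "parents", and new faces are bred by mixing the parents' numbers with the occasional random mutation thrown in.

```python
import random

def breed(parents, population_size, mutation_rate=0.1):
    """Breed a new generation of candidate faces from the ones the witness picked.

    Each face is a list of numbers (in a system like Evo-Fit, these would
    be the weights on the Principal Component images).
    """
    children = []
    while len(children) < population_size:
        mum, dad = random.sample(parents, 2)
        # Crossover: each number is inherited from one parent or the other.
        child = [random.choice(pair) for pair in zip(mum, dad)]
        # Mutation: occasionally nudge a number, to keep some variety.
        child = [w + random.gauss(0, 0.5) if random.random() < mutation_rate else w
                 for w in child]
        children.append(child)
    return children

# The witness picked these two faces as closest to the criminal
# (each described here by just three made-up numbers):
picked = [[1.0, -0.5, 2.0], [0.8, -0.3, 1.5]]
next_generation = breed(picked, population_size=10)
```

Repeating this round after round, with the witness picking the best faces each time, is exactly the "step by step" evolution described above.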
The principle of Principal Components
The Evo-Fit system works by having not a database full of different ears, noses, eyes and so on, but a database containing something more cunning. It contains the "Principal Components" of faces.
Principal Components are produced by a statistical technique that helps us represent lots of complicated data in a simplified form. We can think of pictures of faces as being represented as a collection of numbers (after all, that's how they are stored, as the pixels of a computer screen). A single good-quality image can contain many thousands of numbers. If we combine a set of many different face images, the amount of data becomes gigantic, and each new face added brings more muddle to this big set of numbers; all the faces are still in there, it's just that they are all mixed up. We need some way to reduce this muddle so that we have just a few images that capture as much of the useful 'face stuff' in the full set as possible. That's where statistics comes in.
You may already have come across the idea of standard deviation or variance in maths lessons. These values are calculated from the data you are analysing and indicate how spread out the numbers are. A large variance shows that most of the numbers are spread out. A small variance shows that they are close together. We can use similar mathematical tricks on our set of face images. What we want to do is find a few images that account for the most variation in the data; after all, it is the variations in the data that make the faces different from each other.
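If you haven't met variance before, it's quick to calculate: take the average of the squared distances of each number from the mean. A few lines of Python show the idea:

```python
def variance(xs):
    """Variance: the average squared distance of each number from the mean."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

print(variance([5, 5, 5, 5]))   # identical numbers, no spread: prints 0.0
print(variance([1, 5, 9, 13]))  # spread-out numbers: prints 20.0
```

The same idea, applied to thousands of pixel values at once rather than four numbers, is what drives the hunt for Principal Components.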
Once we turn the handle on the Principal Component Calculating Machine what we get out is a set of images. The first image (the first "Principal Component") accounts for most of the variation in the data, the second Principal Component accounts for the next highest level of variation and so on. So rather than having to store all the images in the original set we can just store as many Principal Component images as we need (this is called data reduction). What's more we can add and subtract these Principal Component images to let us recreate a good approximation to any of the original faces that we mangled together in the first place (this is called data reconstruction).
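Here's a toy version of the Principal Component Calculating Machine, turned by hand in Python. Random numbers stand in for real face images (real ones would have many thousands of pixels each), and the handle-turning is done by a standard matrix tool called the singular value decomposition, one common way of computing Principal Components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend data: 50 "face images", each flattened into 400 pixel values.
faces = rng.normal(size=(50, 400))

# Centre the data: Principal Components describe variation around the mean face.
mean_face = faces.mean(axis=0)
centred = faces - mean_face

# The singular value decomposition hands back the Principal Components,
# ordered so the first accounts for the most variation, the second the
# next most, and so on.
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
components = Vt  # each row is one Principal Component "image"

# Data reduction: describe each face by just 10 numbers instead of 400.
k = 10
weights = centred @ components[:k].T

# Data reconstruction: mix the components back together to get a good
# approximation of any original face.
approx = mean_face + weights @ components[:k]
```

Storing 10 weights per face instead of 400 pixels is the data reduction; mixing the components back together with those weights is the data reconstruction.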
If all this maths is a little confusing, think of it this way. You know from art class that you can make any colour by mixing together amounts of red, green and blue paint (the primary colours). Think of the Principal Component images as computer-calculated 'primary colours', which you can mix together to 'paint any colour', or in this case make a face. The fact that the Principal Component images are all faces (of a sort) rather than colours means that they have the standard overall arrangement of a face. That means the Evo-Fit software doesn't have the problem of mistakenly adding in a nose where it shouldn't belong. The system just keeps painting new faces with combinations of the Principal Component images until the best match to the criminal is found.
Let's get moving
We can also apply the Principal Component trick to sets of videos of faces, because videos are just a series of still frames all stacked together. Of course this means the amount of data involved is even bigger, but it can be done. What comes out of the Principal Component Calculating Machine here is not static images but video images. The first Principal Component video accounts for most of the variation in the original set of videos, the second Principal Component video accounts for ... and you know the rest. So we can now 'paint' with these videos to create new video sequences, by combining the appropriate Principal Component videos - and that's exactly what computer science researchers at Queen Mary and psychologists at UCL did!
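Since a video is just its frames stacked together, the still-image trick needs almost no changes: flatten each whole clip into one long vector of numbers, and turn the same handle. A sketch, again with random numbers standing in for real footage:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend data: 20 short clips, each 30 frames of 20x20 pixels.
videos = rng.normal(size=(20, 30, 20, 20))

# Flatten each clip into one long vector (30 * 20 * 20 = 12,000 numbers),
# so a whole video is treated just like one very big still image.
flattened = videos.reshape(20, -1)

mean_video = flattened.mean(axis=0)
U, S, Vt = np.linalg.svd(flattened - mean_video, full_matrices=False)

# Each row of Vt, reshaped back into frames, is a Principal Component VIDEO.
pc_videos = Vt.reshape(-1, 30, 20, 20)
```

The amount of data really is much bigger (12,000 numbers per clip even at this tiny size), but the maths is identical, and what comes out is a set of component videos ordered by how much variation each one accounts for.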
The Digital Face/Off illusion
Suppose we take lots of video of one person, say Castor Troy, talking and laughing in typical baddy style, and put this set of videos through the Principal Component Calculating Machine. What we get are videos that will let us paint new videos of Castor Troy by combining the components. If we combine the right set of components we can make his face do whatever we want, even give us a cheery smile.
Now suppose we do the same with goody agent Sean Archer: take lots of videos of him and put them through the Principal Component Calculating Machine. We could now paint new Archer videos. But suppose instead we take a video of Archer pulling a particularly silly face. We can take this single video and work out the mix of Archer Principal Component videos that recreates it. In effect we have an instruction list for how to add and subtract the Archer components to make Archer look silly.
Now we take this instruction list from Archer but apply it to combine Castor Troy's video components instead. The result: a new video where the baddy pulls exactly the same funny face - a face he never pulled in reality! We have taken Archer's facial movements and transplanted them onto Troy's face without a scalpel. The illusion is complete: Archer can work Troy's face like a puppet, and there is nothing Troy can do about it.
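The whole digital face-off can be sketched in a few lines, continuing the toy setup from earlier (random numbers stand in for real video, and each flattened clip is just a vector of numbers): build a set of component videos for each actor, read off Archer's instruction list, then spend it on Troy's components instead.

```python
import numpy as np

rng = np.random.default_rng(2)
n_numbers = 300  # each flattened clip as a vector (tiny, for illustration)

def build_model(clips):
    """Mean video plus Principal Component videos for one actor."""
    mean = clips.mean(axis=0)
    _, _, Vt = np.linalg.svd(clips - mean, full_matrices=False)
    return mean, Vt

# Lots of footage of each actor.
archer_clips = rng.normal(size=(40, n_numbers))
troy_clips = rng.normal(size=(40, n_numbers))

archer_mean, archer_pcs = build_model(archer_clips)
troy_mean, troy_pcs = build_model(troy_clips)

# A new clip of Archer pulling a particularly silly face.
silly_archer = rng.normal(size=n_numbers)

# The "instruction list": how much of each Archer component the clip uses.
instructions = archer_pcs @ (silly_archer - archer_mean)

# Apply the same instructions to Troy's components instead:
# a new clip where Troy pulls a face he never pulled in reality.
silly_troy = troy_mean + instructions @ troy_pcs
```

The crucial design choice is that the instruction list is just a short vector of weights, so it can be read off from one actor's components and spent on another's.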
Facing up to the future
Apart from applications in espionage, these techniques could be used to allow, for example, actors to impersonate other, possibly dead, actors, or even to let you pretend to be someone else on your mobile video phone. You would download the components for your new face, and then just send the instructions for how to build it. You could even create a face 'graphics equaliser' where, rather than mixing music together with a set of sliders, you mix facial expressions to create subtle performances for computer-generated actors.
With computer science virtually nothing can be taken at face value.
See it yourself as "Isaac Newton" is manipulated on one of our "face-off" clips [very big movie file 22MB]