Turning silicon into gold
The ability to turn things from one material into another is the stuff of legend. Everyone knows of the Midas touch, named after the unfortunate king whose touch turned everything to gold. The Queen of Narnia’s most powerful weapon was her wand. Why? Because it turned people to stone. Alchemists, too, long searched for ways to turn lead into gold.
Whilst in reality such transmutation isn’t possible, in the virtual world there are lots of situations where you might want to do it. Take that Narnia example, for instance. How do you make an object turn to stone on the screen? The ideal is that it could be done in a ‘photoshop’ way: the digital artist indicates a real object in a scene to transmute and the computer instantly changes it to look realistically like stone – or any other material desired, for that matter. That way the film makers can make objects out of whatever is most convenient and use digital wizardry to get the final effect – changing their mind at the touch of a button.
Erik Reinhard of the University of Bristol tells us about a way that Erum Khan, with help from Erik, Roland Fleming and Heinrich Buelthoff, came up with to do it. Here's his explanation.
In computer graphics, we are normally concerned with the generation of images on the basis of three-dimensional (3D) models that are created by the computer. We create a scene full of geometric shapes, then specify where the light is coming from, and also specify how each object in the scene reflects the light based on its material’s properties.
Once we have a scene made of 3D virtual objects, we can then use all this information to simulate how light would travel through it. This is called ‘rendering’ the image. It involves simulating photons (individual particles of light). You take lots of imaginary photons and trace their route as they bounce around the scene to determine which would be seen by a viewer of that scene. This technique is called ‘ray tracing’. It is as if we just place a virtual camera in our virtual environment, and then compute what image that camera would capture.
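The core of that idea fits in surprisingly little code. Here is a toy sketch of a ray tracer – one sphere, one light, and the simplest possible shading rule (Lambert's law: brightness depends on the angle between the surface and the light). All the names and numbers are invented for this example; a real renderer handles many objects, many bounces, and many more photons.

```python
import numpy as np

def trace(origin, direction, centre, radius, light_dir):
    """Trace one camera ray against a single sphere.

    Returns a greyscale brightness: simple Lambertian shading if
    the ray hits the sphere, 0.0 (background) otherwise.
    """
    # Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
    oc = origin - centre
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c            # direction is unit length, so a = 1
    if disc < 0:
        return 0.0                    # ray misses the sphere entirely
    t = (-b - np.sqrt(disc)) / 2.0    # nearest of the two intersections
    if t < 0:
        return 0.0                    # sphere is behind the camera
    hit = origin + t * direction
    normal = (hit - centre) / radius
    # Lambert's law: brightness proportional to the cosine of the
    # angle between the surface normal and the direction to the light.
    return max(float(np.dot(normal, -light_dir)), 0.0)

def render(width=40, height=40):
    """Fire one ray per pixel through a virtual camera at the origin."""
    centre = np.array([0.0, 0.0, 3.0])      # sphere 3 units in front
    light_dir = np.array([0.0, -1.0, 1.0])  # light from above and behind us
    light_dir = light_dir / np.linalg.norm(light_dir)
    img = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            # Map the pixel onto an image plane one unit in front of the eye.
            px = (x + 0.5) / width * 2.0 - 1.0
            py = 1.0 - (y + 0.5) / height * 2.0
            d = np.array([px, py, 1.0])
            d = d / np.linalg.norm(d)
            img[y, x] = trace(np.zeros(3), d, centre, 1.0, light_dir)
    return img
```

Even this toy version shows why rendering is expensive: it fires a ray for every single pixel, and real renderers follow each ray through many bounces.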
The problem with ray tracing is that it’s not always practical to painstakingly model a 3D environment, and simulate the very large number of photons needed. It takes a very large amount of creative effort by skilled artists to define the 3D environment, and then an enormous amount of computer time to render a good image. Of course if we are making a film, it’s not just one image we are after but one for every frame of the film.
For some applications, it may be easier to just photograph a real environment, and then edit the photograph afterwards to achieve the desired effect, rather than digitally create the whole thing from scratch. For instance, in the film industry, we could build a set that is mostly accurate (i.e. it is as the director would like it to be). Only when the director would like objects to be made out of a different material, or we would like lighting that is not practically achievable, do we use special effects, editing the images afterwards. This could be done to cut costs, but also to achieve Narnia-like material transformations that are part of the story.
Examples include not only changing the material, but also applying arbitrary patterns to objects, or even making an object transparent. For example the Fairy Godmother might be supposed to wave a wand to magically switch through lots of different patterns on the dress the Princess is trying on for the ball. It would be a drag if the actress had to be filmed in 20 different dresses all individually made to be identical apart from the pattern. Or maybe the slippers the Princess is wearing need to be glass. Pretty uncomfortable if the actress had to dance in real glass slippers!
On the technical side, the problem is that to change the material appearance of an object based on a single image, we really need to know both the 3D shape of the object and how it is illuminated (the colour of the light, how strong the light is, and from which directions it comes). With this information we can then use the standard computer graphics techniques.
Unfortunately, to compute the 3D shape of an object from a 2D image, we need to invent information that is not actually there in the image. This is similar, though, to a problem our own visual system appears to solve: the image that falls on your retina is 2D, and yet somehow you still perceive the world as 3D. We could, therefore, try to mimic the human visual system and build a computational model of it: create a computer program that does the same thing. In its simplest form, this is exactly what our team at the University of Bristol, working with researchers from the University of Central Florida, has done. We have also developed ways to apply various simple transformations to the image to approximate the way the object is lit.
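To give a flavour of how simple such 'invented information' can be, here is a toy heuristic, offered purely as an illustration rather than as the team's actual method: pretend that brighter pixels are nearer the camera, blur the image so fine texture isn't mistaken for real bumps, and take gradients of the resulting 'depth' to get surface normals that could be used for re-lighting.

```python
import numpy as np

def depth_from_luminance(image, size=5):
    """Guess a rough depth map from a single greyscale image.

    Toy heuristic for illustration only: assume brighter pixels are
    nearer the camera, after blurring away fine texture that would
    otherwise be mistaken for real bumps in the surface.
    """
    # Simple box blur as a stand-in for a proper smoothing filter.
    padded = np.pad(image.astype(float), size // 2, mode='edge')
    smooth = np.empty_like(image, dtype=float)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            smooth[y, x] = padded[y:y + size, x:x + size].mean()
    # Rescale to [0, 1]: 'depth' is just normalised brightness.
    smooth -= smooth.min()
    if smooth.max() > 0:
        smooth /= smooth.max()
    return smooth

def normals_from_depth(depth):
    """Surface normals from depth gradients, ready for re-lighting."""
    dy, dx = np.gradient(depth)
    n = np.dstack((-dx, -dy, np.ones_like(depth)))
    return n / np.linalg.norm(n, axis=2, keepdims=True)
```

The guessed depth is physically wrong, of course – a dark stripe painted on a flat wall would be 'seen' as a groove – but as the rest of the article explains, physically wrong can still look perceptually plausible.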
Once we have constructed a 3D version of an image and can also illuminate it, we can then render a new object with the same geometry and illumination, but with a different material. The reason the results look anywhere near plausible is that humans are bad at predicting exactly how light bounces around complicated scenes. For example, we are very good at detecting material properties like transparency, but actually very bad at predicting the exact way light passes through a real transparent object. That’s one of the reasons we fall for illusions like the stick that seems to bend when we put it into water. Our brain can’t compensate for the bending of the light to work out what the stick is really like.
In our system for changing the materials of objects in images, we actively exploit these limitations of human vision to arrive at results that are physically wrong, but perceptually plausible. If you took a picture of an actual transparent object and placed it side-by-side with a picture of the fake one you would be able to tell the difference, but if you just saw the fake one you would be unlikely to notice it wasn’t real.
At the end of the day, techniques like this, even though they are faking it, could turn computer silicon, at least, into gold. After all, special effects are big business. Techniques that make digital effects more realistic and save money for the producers of a film or computer game are likely to make big money. That’s why computer graphics is such a thriving research area.
Now where's my wand...