Did you ever get the chance to play The Legend of Zelda: The Wind Waker for the Nintendo GameCube? If so, you may recall that its graphics looked rather different from those of other games. The image below, from Wikipedia, gives a pretty good idea of what I mean.
Notice how it looks as though it was hand-animated? This technique is called cel-shading, and although the result resembles a simple hand-drawn image, achieving this kind of rendering on a computer is actually fairly complex.
According to the Wikipedia article on cel-shading, a 3D model is usually used as the starting point for this kind of image. To get the desired look, unconventional lighting is applied to the scene: first, the way an object would appear under normal lighting conditions is calculated; then, each lit pixel's value is discretized into a small number of specific ranges, replacing the smooth transition from dark to light with something more abrupt. Of course, this description is simplified, so have a look at the article to learn more about it.
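That discretization step can be sketched in a few lines. This is just an illustration, not code from any real renderer: `toon_shade` is a hypothetical helper that snaps a continuous light intensity into a handful of flat bands.

```python
def toon_shade(intensity, levels=4):
    """Quantize a continuous light intensity in [0, 1] into `levels`
    flat bands, the way cel-shading replaces a smooth dark-to-light
    gradient with abrupt steps."""
    # Clamp to the valid range, then snap to the band it falls in.
    intensity = max(0.0, min(1.0, intensity))
    band = min(int(intensity * levels), levels - 1)
    # Map the band index back to a displayable value in [0, 1].
    return band / (levels - 1)
```

With four levels, every intensity between 0.25 and 0.5 collapses to the same shade, which is exactly what produces those flat, hand-painted-looking regions.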
Using tutorials like this one found at instructables.com, it's possible for you and me to create our own objects and have them appear cel-shaded. But wouldn't it be cool to take our own photographs and have them automatically converted?
I recently found an article to be published in IEEE Transactions on Visualization and Computer Graphics called Flow-Based Image Abstraction. The authors have been working on improving the automatic conversion of a photograph to a rendering that looks a lot like cel-shading.
They break down the problem into two steps: creating a line drawing, and performing region smoothing.
To get the line drawing, they analyze the direction in which colours are changing across the image to get a sense of where the lines should be; the paper calls this vector field the edge tangent flow. This step is like a fancy edge detector (you may have heard of standard edge detectors, like the Canny edge detector). The next image compares other edge-extraction techniques with the authors' method, seen at the far right (image directly from the paper):
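For context, here is what a standard gradient-based edge detector looks like — the kind of baseline the authors improve on. This is a plain Sobel operator on a grayscale grid, not the paper's flow-based method, and `sobel_edges` is a name I've made up for this sketch.

```python
def sobel_edges(img, threshold=1.0):
    """Mark edge pixels in a 2D grayscale image (a list of lists)
    wherever the Sobel gradient magnitude exceeds `threshold`.
    A classic baseline detector, not the paper's flow-based one."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses from the
            # 3x3 neighbourhood around (x, y).
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[y][x] = 1
    return edges
```

A detector like this responds to every local intensity jump, noise included; the appeal of the authors' flow-guided approach is that it produces coherent, stroke-like lines instead.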
In a separate process, unimportant colour details are removed from the interiors of the regions defined by the detected lines. After this, the two images can be combined.
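A very crude stand-in for that smoothing step is per-channel colour quantization. The paper's actual method is far more sophisticated, but this hypothetical `quantize_colors` sketch shows the basic idea of flattening colour detail inside regions:

```python
def quantize_colors(pixels, levels=4):
    """Snap each 0-255 RGB channel of each pixel to the centre of one
    of `levels` equal-width bins, wiping out small colour variations
    so regions come out flat."""
    step = 256 // levels
    return [tuple((c // step) * step + step // 2 for c in px)
            for px in pixels]
```

Two nearby shades that fall into the same bin become identical, so the interior of a region collapses to a single flat colour, ready to be combined with the line drawing.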
(Once again, things are more complicated than this; if you want the technical details of how this is accomplished, the paper outlines the steps very well.)
The following image summarizes the process (image directly from the paper).
So who knows - thanks to computer science at work, maybe this will be the newest filter in the next version of Photoshop, available for you to make some pretty cool artwork from your own photos!