This is the first in a three-part series of design proposals for augmented reality learning applications. These are from a paper I wrote in my computers and cognition class. I'll be reworking the main ideas in the paper for a future submission, but probably won't include these, so I figured I'd share!
Optics is a subject often included in high school science curricula (see, for example, The Ontario Curriculum, Grades 9 and 10: Science), where concepts like reflection and refraction with lenses and mirrors are taught. Augmented reality visualizations offer many opportunities to explain how light travels through space, but the design proposed here focuses on the optics of a photographic camera.
There is a lot going on in the optics of a standard single lens reflex camera (as discussed in the detailed tutorials on Cambridge in Colour). Many factors affect how an image is formed, including the lens focal length, the aperture, and the distance from the lens to the plane of focus. These determine, for example, how much of the scene in front of the camera is captured and which parts of the image will be sharp. Learning to take the photographs one has in mind can take many hours of trial-and-error practice. Being able to visualize the optics involved would be a huge advantage in this process, and would also make an interesting applied lesson for classrooms studying optics.
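To make the relationship between those factors concrete, here is a minimal sketch of the thin-lens model, the idealization taught in high school optics (real camera lenses are compound, so treat this as an approximation; the function name and example values are my own, not from the proposal):

```python
# Thin-lens relation: 1/f = 1/d_o + 1/d_i, where f is the focal length,
# d_o the distance from the lens to the object, and d_i the distance
# from the lens to the sharp image (all in millimetres here).

def image_distance(focal_length_mm: float, object_distance_mm: float) -> float:
    """Solve the thin-lens equation for the image distance d_i."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

# A 50 mm lens focused on an object 1 m (1000 mm) away forms its sharp
# image slightly behind the lens's focal plane:
print(round(image_distance(50.0, 1000.0), 2))  # 52.63
```

Points in the scene nearer or farther than 1 m form their sharp images at other distances, which is why only part of the scene lands in focus on the sensor.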
An augmented reality application is proposed next that can help students and photographers gain a deeper understanding of how cameras work. It requires a camera that can communicate information about its lens, and this camera must be tracked by a computer vision system so that graphical visualizations can be rendered relative to it. A mobile device using the magic lens paradigm may be sufficient for this application, though a head-mounted display might give the learner a clearer idea of what she is seeing.
The learner will set up her camera to take a photograph. She will then use the augmented reality application to see how her settings will affect the final image. A three-dimensional, translucent shape will emanate from the camera and lens to indicate what portion of the scene will be captured on the camera's film or sensor. This shape will be determined by the camera's sensor size (a constant), and the lens's current focal length (dependent on the lens mounted, and the current focal length chosen in the case of a zoom lens). Using this shape, the learner can see whether a particular object in the scene -- say, a flower -- will be included in the final image. Two planes parallel to the camera's sensor will intersect the three-dimensional shape at locations that will indicate which portions of the scene will be in sharp focus. These are adjusted as the learner changes the aperture setting on her camera. The learner can view these visualizations from different angles as long as the camera remains in her field of view so it can be tracked.
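The geometry behind both overlays can be sketched with standard photographic formulas: the angle of view fixes the spread of the translucent frustum, and the hyperfocal-distance approximation gives the two focus planes. This is a rough sketch under thin-lens assumptions; the sensor size, focal length, f-numbers, and circle-of-confusion value below are illustrative, not taken from any particular camera:

```python
import math

def angle_of_view(sensor_dim_mm: float, focal_length_mm: float) -> float:
    """Angle of view in degrees along one sensor dimension; this sets
    how wide the translucent capture frustum fans out from the lens."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

def focus_limits(f_mm: float, f_number: float, subject_m: float,
                 coc_mm: float = 0.03):
    """Near and far limits of acceptable sharpness in metres -- the two
    planes drawn parallel to the sensor. Uses the common hyperfocal
    approximation; coc_mm is the circle of confusion (0.03 mm is a
    typical full-frame value)."""
    hyperfocal_m = (f_mm ** 2) / (f_number * coc_mm) / 1000.0
    near = hyperfocal_m * subject_m / (hyperfocal_m + subject_m)
    if subject_m >= hyperfocal_m:
        return near, math.inf  # everything beyond the near limit is sharp
    far = hyperfocal_m * subject_m / (hyperfocal_m - subject_m)
    return near, far

# A 50 mm lens on a 36 mm-wide sensor:
print(round(angle_of_view(36.0, 50.0), 1))  # 39.6 (degrees, horizontal)

# Stopping down from f/2 to f/8 widens the sharp band around a subject
# (say, the flower) 3 m away, pushing the two planes apart:
print(focus_limits(50.0, 2.0, 3.0))
print(focus_limits(50.0, 8.0, 3.0))
```

As the learner turns the aperture ring, the application would recompute these two limits and slide the corresponding planes through the frustum in real time.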
What is depicted can be as complicated as is desired, and could incorporate more detail about how rays of light pass through the lens and hit the camera's sensor. The key point here is that this application provides an in-situ visualization of how the camera works that is much easier to understand than a similar visualization on a flat screen. Information is embedded into the environment so the learner does not have to recall how focal length and aperture affect her photograph as she is learning. She is able to see how the camera is working in a real environment in real time, and can examine the situation by moving in three dimensions, just as she is accustomed to doing in daily activity.