The first is that the original panoramas being investigated by Google were actually the 'pushbroom' type: long strips created by slicing frames out of extended video sequences shot with a single camera and stitching the slices together. There were all kinds of problems with generating the panoramas themselves, like the weird distortions that come from the multi-perspective model. Getting a working system inside the vehicle for the very custom setup also posed many challenges, down to simply powering the Windows and Linux boxes set up in the back of the vans!
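To make the pushbroom idea concrete, here's a minimal sketch of the basic technique, not Google's actual pipeline: grab a thin vertical slice from the centre of every video frame and concatenate the slices side by side. It assumes OpenCV and NumPy, and the function name and file names are just placeholders.

```python
import cv2
import numpy as np

def pushbroom_strip(video_path, slice_width=2):
    """Build a crude pushbroom panorama by concatenating a narrow
    vertical slice taken from the centre of every video frame."""
    cap = cv2.VideoCapture(video_path)
    slices = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        mid = w // 2
        # Each frame contributes one thin column; the camera's forward
        # motion sweeps this column sideways along the street.
        slices.append(frame[:, mid:mid + slice_width])
    cap.release()
    # Laying the columns side by side yields one long strip image.
    return np.hstack(slices)

strip = pushbroom_strip("drive.mp4")
cv2.imwrite("pushbroom.png", strip)
```

Because every column is captured from a slightly different camera position, the resulting strip has no single centre of projection, which is presumably exactly where those weird multi-perspective effects come from.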
Then I learned that, when they experimented with spherical panoramas, they did indeed use the Ladybug camera. This is what the panoramas I'm using in my thesis were made with. However, the slide with that rig was up for about 0.3 seconds before Luc quickly and politely expressed how much it sucked before showing the custom rig with real lenses and cameras that Google put together. No wonder their panoramas increased in quality so much! Even though it doesn't actually help me with my thesis, it does make me feel just a little bit better about all the problems I've been having with the panoramas generated with Ladybug data.
Finally, Luc showed us a new interaction tool used in Street View, called the 'pancake', and played a video showing just how awesome it is.
This would not be easy to implement, since it requires a full 3D model of the scene. Luckily, Google captures laser data in addition to the images, which provides the depth information needed. Getting my hands on both the higher-quality panoramas and some laser data would have made my thesis much easier and given me much better results. I heard that some panoramas are going to be released, so maybe someone can pick up where I left off.
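As a rough sketch of why depth makes this kind of tool possible: if each panorama pixel has a laser range attached, the pixel under the mouse cursor can be projected straight into 3D. The snippet below assumes an equirectangular panorama and a per-pixel depth map in metres, which is a simplification of whatever format Google's laser data actually takes; the function and variable names are hypothetical.

```python
import numpy as np

def cursor_to_world(u, v, depth_map, pano_width, pano_height):
    """Project panorama pixel (u, v) into 3D using per-pixel depth.

    Assumes an equirectangular panorama: longitude runs across the
    image, latitude down it, and depth_map[v, u] holds the range in
    metres along the viewing ray from the panorama centre.
    """
    lon = (u / pano_width) * 2.0 * np.pi - np.pi        # -pi .. pi
    lat = np.pi / 2.0 - (v / pano_height) * np.pi       # pi/2 .. -pi/2
    # Unit ray from the panorama centre through the pixel (y is up).
    ray = np.array([
        np.cos(lat) * np.sin(lon),
        np.sin(lat),
        np.cos(lat) * np.cos(lon),
    ])
    return ray * depth_map[v, u]

# Hypothetical 8192x4096 panorama whose every pixel is 10 m away.
depth_map = np.full((4096, 8192), 10.0)
point = cursor_to_world(2048, 900, depth_map, 8192, 4096)
```

Given that 3D point, plus a surface normal estimated from neighbouring depth samples, the pancake can presumably be drawn lying flat against whatever surface sits under the cursor.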