Friday, November 30, 2007

I recently received my Google Summer of Code t-shirt in the mail. Here are a couple of photos showing its awesomeness:
In related news from the Why Google Is So Well Loved department, I also received some swag for my Women in CS event coming up in December. I got enough umbrellas and sticky notepads for everyone who signed up (and for my guest speaker), plus a few really nice mugs that I will probably draw names for. All of it bears the Google women's logo, which I had never even seen before; that rarity makes it all the more special to me. I am certain that everyone who attends our event in a couple of weeks is going to love this stuff!
Wednesday, November 28, 2007
Anita Borg Scholarship
I plan to apply to this scholarship from Google, and if you are a Canadian computer science student, you should too!
Google Expands Anita Borg Memorial Scholarship to Canada!
http://www.google.ca/anitaborg/
Dr. Anita Borg (1949 - 2003) devoted her adult life to revolutionizing the way we think about technology and dismantling barriers that keep women and minorities from entering computing and technology fields. Her combination of technical expertise and fearless vision continues to inspire and motivate countless women to become active participants and leaders in creating technology. In honor of Dr. Borg's passion, for the past four years, we have sponsored a scholarship with the Anita Borg Institute for Women and Technology in the U.S., and more recently, Europe and Australia.
This year, we're excited to announce the expansion of the program to include Canada, where we're very pleased to offer an opportunity to both undergraduate and postgraduate students. Scholarships will be awarded based on the strength of candidates' academic backgrounds and demonstrated leadership. A group of female undergraduate and graduate student finalists will be chosen from the applicant pool. The scholarship recipients will each receive a $5,000 CAD scholarship for the 2008-2009 academic year; the remaining finalists will receive $1,000 CAD each.
All scholarship recipients and finalists will be invited to visit the Google engineering office in New York on May 1-3, 2008, for a networking retreat that will include workshops, speakers, panelists, breakout sessions, and social activities.
We are looking for women who will carry on the legacy of Dr. Borg, with strong academic backgrounds and demonstrated leadership qualities.
Tell your friends, or apply yourself, at http://www.google.ca/anitaborg/
From the Google blog: http://googleblog.blogspot.com/2007/10/anita-borg-scholarships-expand-to.html
Mini-Course Accepted!
I just got the good news about my proposed mini-course "Computer Science and Games: Not Just For Boys!" -- it was accepted! I was a little worried it might not make it, since there were apparently many more proposals than available slots, and some colleagues had proposed a game design course at the same time. I'm really excited to start hashing out the details of my many ideas for this course. Will keep you all updated!
More Tangible User Interfaces
Thanks to the wonders of Reddit (a link submission and ranking site), I found an article from Smashing Magazine that outlines what they consider to be the user interfaces of the future. I find it interesting that almost all of them are very clearly what I would call tangible user interfaces, which I wrote about earlier this month.
First up is the Cheoptics360 XL 3D video screen. Though it's difficult to tell exactly what's happening from the press video as opposed to seeing it in person, it looks like this setup presents a video that can be viewed from any angle as you walk around the structure. It's not clear whether the video will actually look 3D to the viewer. The creators of this technology tout it as a good way to get product information out there, and Smashing Magazine thinks it is the future of gaming. I'd like to add a third possibility: education. Can you imagine how great it would be for a museum to present a life-size video of an artifact rather than having to physically display it? This is especially true for things that can't be brought indoors, or that don't exist anymore; for example, a war museum could present the bombed streets of Germany after WWII, or a natural history museum could offer an interactive view of dinosaurs.
I believe this next example has been around for some time now, but it still impresses me. The Reactable "is a collaborative electronic music instrument with a tabletop tangible multi-touch interface." What an amazing idea for a totally new kind of music making device! It appears that even Icelandic singer Björk is on board with this one, using it in her Volta tour. The fact that this system is easy to learn without any manuals or instructions makes it a perfect way to help children learn about music. After all, no parent loves the sound of their kids learning the recorder!
The example photos from the multi-touch interface research really remind me of the movie Minority Report. Check out this video from Perceptive Pixel, a company spun off from the research Smashing Magazine links to. Now, many of the tangible user interfaces discussed today use the multi-touch paradigm, so this isn't necessarily all that unique; however, the video gives you a good feel for what multi-touch is all about. Notice the many different ways the hands are used to interact with the data on the screen: zooming, panning, drawing, tilting, and so on. These movements are natural in a real-world kind of way.
Finally we have the BumpTop. This interface ditches the usual GUI desktop paradigm, which loses the connection between how documents are laid out on a real desk and how they can be arranged on the screen. With BumpTop, you can pile your files up just like in the real world, organizing your stuff however you want. The claim is that these piles and arrangements themselves convey information about the data.
It is examples like these that continue to convince me that the future of user interfaces lies in the innovative solutions that bridge the gap between the physical and digital world. Long live tangible user interfaces!
Thursday, November 22, 2007
Fun With Accelerometers
Accelerometers: Teeny little chips that can be found in more places than you probably imagined. And trying to actually write code that makes use of them? You either love 'em or you hate 'em... or maybe a bit of both.
As many of you know, the Wii remote has an accelerometer built in as one of several sensors used to enhance game play. Apple's iPhone uses an accelerometer to sense how the device is tilted; that's how it's able to change the orientation of its screen based on how you are holding it. Robots can use them to keep their balance, and Nike's shoes can count your steps with them. And that's just the beginning - there are tons of applications for this simple little device!
I recently had the 'pleasure' (if you can call it that) of working on a course assignment using a Kionix accelerometer encased in a little joystick-like cover. I'm not sure what model it was; it may have even been a demo or evaluation version our teacher was able to obtain. Anyway, this is where the love-hate thing comes in. The ideas behind what you could do with these devices are really cool, but the reality kind of hurt. It's really hard to make them work!
Our assignment had us first code up some basic calculations you can do with an accelerometer. Tilt wasn't so bad to figure out. In fact, you can determine pitch and roll (rotation around the x- and y-axes, respectively) and tilt (the angle the z-axis makes with the vertical). What you cannot compute, however, is yaw - that's why many inertial systems also include gyroscopes.

[Image: the pitch, roll, and yaw axes; from http://en.wikipedia.org/wiki/Flight_dynamics]
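Here's a minimal Python sketch of those tilt calculations, assuming a static device and the axis conventions above (real sensors differ in sign and axis assignment, so treat this as illustrative, not as our assignment code):

```python
import math

def tilt_angles(ax, ay, az):
    """Orientation from a single static accelerometer reading (in g's).

    Assumes gravity is the only acceleration being sensed, with pitch
    about x, roll about y, and tilt measured between the z-axis and
    the vertical. Conventions vary from device to device.
    """
    pitch = math.atan2(ay, math.sqrt(ax**2 + az**2))  # rotation about x
    roll = math.atan2(ax, math.sqrt(ay**2 + az**2))   # rotation about y
    mag = math.sqrt(ax**2 + ay**2 + az**2)
    tilt = math.acos(az / mag)  # angle between z-axis and vertical
    return math.degrees(pitch), math.degrees(roll), math.degrees(tilt)

# Lying flat, the z-axis reads ~1 g and every angle comes out near zero:
print(tilt_angles(0.0, 0.0, 1.0))  # (0.0, 0.0, 0.0)
```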
The second set of calculations to implement used numerical integration to find position from the rate of change of velocity (i.e., acceleration). We tried out several methods: the Euler, Verlet, and Beeman equations. Through an experiment of our own design (I used the known properties of free fall in what I thought was a clever way), we compared their accuracy. Incidentally, I found that the Verlet equation (the velocity version, for those keeping track) performed the best of them all.
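As a rough illustration (my own simplified reconstruction, not the assignment code), here's how explicit Euler and velocity Verlet differ on the free-fall test:

```python
def integrate_euler(samples, dt):
    """Explicit Euler over acceleration samples; returns final position."""
    x = v = 0.0
    for a in samples[:-1]:
        x += v * dt
        v += a * dt
    return x

def integrate_velocity_verlet(samples, dt):
    """Velocity Verlet: averages consecutive accelerations for velocity."""
    x = v = 0.0
    for a_now, a_next in zip(samples, samples[1:]):
        x += v * dt + 0.5 * a_now * dt**2
        v += 0.5 * (a_now + a_next) * dt
    return x

# One second of free fall sampled at 100 Hz; the exact answer is 4.905 m.
g, dt = 9.81, 0.01
samples = [g] * 101
print(integrate_euler(samples, dt))            # ~4.86 m, lags behind
print(integrate_velocity_verlet(samples, dt))  # 4.905 m (exact here,
                                               # since a is constant)
```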
At this point I could tell that there were going to be some real limitations in using the accelerometer in any practical application. The tilt calculations actually worked fairly well for measuring the orientation of the device while it wasn't moving. The two limitations here are not being able to figure out yaw (as mentioned), and the fact that you can't tell the difference between acceleration due to gravity, which is what allows you to compute pitch and roll, and actual movement in some direction.
The former is an annoyance only in the sense that in many of the demo applications I played with, yaw was the most natural direction to move the accelerometer to accomplish what I wanted! When I wanted to move a character in a game, for example, I always tried to use the yaw rotation for left and right movement.
The second limitation, distinguishing between movement and gravity, can't be that bad if all these phone companies are able to implement their screen-orientation features so well. Still, I imagine things get hairier when you want to use the same accelerometer to measure actual movement in addition to tilt orientation. Again, this is probably why so many systems use gyroscopes too.
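I don't know what the phone makers actually do, but I'd guess at something like the following: trust the gravity direction only when the total acceleration is close to 1 g, and otherwise keep the previous orientation. A purely speculative sketch:

```python
import math

def screen_orientation(ax, ay, az, previous="portrait"):
    """Speculative screen-orientation logic (not any vendor's actual
    algorithm): only trust the reading when gravity dominates it."""
    mag = math.sqrt(ax**2 + ay**2 + az**2)
    if not 0.8 < mag < 1.2:        # device is moving; reading unreliable
        return previous
    if abs(ax) > abs(ay):          # gravity mostly along x: held sideways
        return "landscape-left" if ax > 0 else "landscape-right"
    return "portrait" if ay > 0 else "portrait-upside-down"

print(screen_orientation(0.0, 1.0, 0.1))   # portrait
print(screen_orientation(-0.9, 0.1, 0.3))  # landscape-right
```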
Ok, so now onto the fun stuff. The second half of the assignment involved applying these calculations in fun and useful ways (can you sense the joy?). In the first question, we were to implement a simple gesture recognition system. In the second, we would use the accelerometer in some application of our choosing (or of our own creation), and think about how this would affect the users of the application.
I can tell you right now that although position information seems the most natural choice for most applications, the calculations alone are not good enough! Besides the annoyance of inaccuracies, consider this: if I start moving my accelerometer with increasing speed, the acceleration being read is strictly above zero. So far so good. Now I stop moving the device. Maybe there is some deceleration for a short while, but eventually I should get a reading of zero. Fine, makes sense. Except for one thing: because you are relying on acceleration alone, nothing tells you that your velocity is now zero too. A zero acceleration simply means a constant velocity, so unless a reading arrived at exactly the moment your computed velocity happened to be zero (unlikely, as it turns out), your velocity estimate will keep pumping out changes in position. Sigh.
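A toy simulation makes the drift obvious. With made-up noise and bias figures (the numbers here are invented for illustration), ten seconds of a "stationary" device still produces a position estimate that wanders off:

```python
import random

random.seed(1)
dt = 0.01                          # 100 Hz sampling
bias = 0.02                        # tiny calibration error, in m/s^2
v = x = 0.0
for _ in range(1000):              # ten seconds of sitting perfectly still
    a = bias + random.gauss(0.0, 0.1)  # noisy reading of zero motion
    v += a * dt                    # velocity never settles back to zero...
    x += v * dt                    # ...so position drifts without bound
print(f"v = {v:.3f} m/s, x = {x:.3f} m")  # both clearly nonzero
```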
So tilt orientation it is.
For the gesture recognition, I took inspiration from these mouse gestures that can be used in a web browser. My up/down movement came from changes in pitch, and left/right from changes in roll. My very simple system just looked at the last gesture made after a predefined dead time and tried to make sense of it. I was able to use the system with better than 85% accuracy, and my husband was able to perform 50% of the dictated gestures successfully (which actually isn't bad considering he had only a minute of training first).
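The core of the classifier was nothing fancier than thresholding the dominant change in angle. Something along these lines (the threshold value here is invented for illustration, not what the assignment used):

```python
def classify_gesture(pitch_delta, roll_delta, threshold=15.0):
    """Map a change in tilt (degrees) to one of four gestures, or None.

    Pitch drives up/down and roll drives left/right, as in the post;
    the 15-degree threshold is a made-up example value.
    """
    if max(abs(pitch_delta), abs(roll_delta)) < threshold:
        return None                       # too small to be deliberate
    if abs(pitch_delta) >= abs(roll_delta):
        return "up" if pitch_delta > 0 else "down"
    return "right" if roll_delta > 0 else "left"

print(classify_gesture(30.0, 5.0))    # up
print(classify_gesture(-2.0, -20.0))  # left
```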
I implemented my own application for the last question rather than integrating the accelerometer into something that already existed. I created a little matte chooser program: I used to work at Ross Video, and their live video production switchers have a matte chooser interface, so I thought it would be interesting to see how the same type of interface would work when controlled with tilt instead of knobs. Basically, you get a colour wheel that you control by tilting the accelerometer in various ways. This worked reasonably well, but I think it could be a lot better with some more fine-tuning.
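The mapping was roughly this (a reconstruction of the idea, not the original code): the direction of tilt picks a hue around the wheel, and how far you tilt picks the saturation.

```python
import colorsys
import math

def tilt_to_colour(pitch, roll, max_tilt=45.0):
    """Pick an RGB matte colour from tilt: direction -> hue,
    magnitude -> saturation. Angles in degrees; 45 degrees is a
    guess at a comfortable maximum tilt."""
    hue = (math.degrees(math.atan2(pitch, roll)) % 360.0) / 360.0
    sat = min(math.hypot(pitch, roll) / max_tilt, 1.0)
    r, g, b = colorsys.hsv_to_rgb(hue, sat, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

print(tilt_to_colour(0.0, 0.0))   # flat: fully desaturated (white)
print(tilt_to_colour(45.0, 0.0))  # full tilt forward: a saturated hue
```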
Looking back, the assignment was kind of fun, despite the numerous problems I ran into (not all mentioned here). I don't know how well I did yet because I only just handed in the completed assignment.
There are many potential uses for the accelerometer even in just the software I use and write. But to be honest, I think I'll probably avoid the frustration in the future and just stick with my usual mouse or game pad, and wait for the industry to iron out the bugs for me!
Wednesday, November 14, 2007
Tangible User Interfaces
I've stumbled upon a whole new realm of possibilities for an exciting research area: tangible user interfaces!
A tangible user interface is, according to our friends at Wikipedia, nothing more than "a user interface in which a person interacts with digital information through the physical environment." Such a simple little concept at first glance, but one that I think will eventually become a standard paradigm for human-computer interactions. There are so many applications that simply make more sense when a user interacts with something real, something they understand without a second thought.
For example, many of us have heard of Microsoft Surface by now (if not, check out this video for a really good overview of what it can do). The fact that it can be controlled with such tangible media as your own bare hands, everyday store-bought items, and other custom-tagged objects opens a whole new world of possibility for interactivity. It's a shame that for the foreseeable future, systems like this are likely to remain accessible only in commercial settings (the roughly $15k price is a good starting reason). Even once they are available to home users, the software we all know and love would need a lot of reworking to fit this paradigm. But the eventual ease of use should be worth it!
Now here's another cool example of a tangible user interface that you probably haven't heard of: Illuminating Clay. The creators of this system wanted to figure out a way to bridge the physical-digital divide that landscape architects face when they model with real clay but compute information about their models on a computer. Their solution? Let the architects continue to use clay, but have the changes made in the physical world be digitized in real time, with the results of various computations projected back onto the clay. This image from the Illuminating Clay website gives you the idea of how this looks:
This is probably the most novel user interface design solution I have ever seen. It should really help demonstrate why I am so excited about the future of tangible user interfaces! Imagine how many applications there must be for this paradigm. (Post some comments with your ideas!)
Now, some of you may be wondering why I would be so interested in user interfaces when my profile claims that I care about computer vision and geometry and such. The truth is that I have had a pretty keen interest in the effective design of user interfaces, but little time to study it further. Pair this interest with the fact that the examples above, and many others besides, require extensive computer vision and geometric computation techniques to bridge that gap between the real world and the computer, and you've got a pretty attractive research area for a girl like me!
Monday, November 12, 2007
Women in Computer Science (Event)
The second initiative to encourage women in computer science that I mentioned back in October is taking shape now. (Recall from my last post that I submitted a course proposal for the enrichment mini-course program.) I have my room reserved, my guest speaker booked, and Google swag ready to go as soon as the RSVPs come in. Check out the poster I'm using to advertise the event, and if you happen to be a female computer science student at Carleton or elsewhere nearby, feel free to join us!
Tuesday, November 6, 2007
Computer Science and Games: Not Just for Boys!
I mentioned previously that I was planning to submit a proposal for an all-girls course about computer science and games to the Mini-Course Enrichment Program. The theory is that girls may feel more comfortable signing up for a course on technology if they know there won't be any boys (particularly the stereotypical nerdy gamer type). We may soon see, but first the course needs to be approved.
Here is the description:
Computer Science and Games: Not Just for Boys!
Are you a girl who's ever wondered what computer science is all about, but was too afraid to ask? Whether you are geeky or quite the opposite, this is your chance to find out! To learn about computer science, we're going to see how it is involved in the design and development of video games. After taking a quick look at the state of the industry and how women are involved, we will cover topics such as usability and design, graphics, audio, and artificial intelligence. Best of all, you will get to work on making your own game to take home at the end of the week! And don't worry: you won't need to do any programming all week.