I'm pretty bad at answering questions on the spot unless I already know the answer (in which case I am practically no longer conscious of the good things I'm saying). I recently asked for some advice on how to get better at answering questions on the spot, particularly in situations like technical interviews, oral exams and thesis defences. The following list is a compilation of suggestions from friends on Twitter:
Think about the questions you will be asked and research the in-between knowledge as well. Think about the big picture and interconnections between topics.
Practice, practice, practice. Particularly practice answering the questions in various ways.
Since inexperience might make anticipating the questions difficult, try to get your lab mates and supervisor to help. Weekly paper presentations are a good venue; you can practice answering questions on the spot and get critiqued on it.
Toastmasters isn't just about oral skills (as in presenting well). I just learned it can help you with answering questions on the spot, too!
Joining an improv group would give you lots of practice thinking on your feet.
Before speaking, visualize a list of things you want to say, then imagine yourself checking each item off as you speak.
When appropriate, think through your answer out loud. Often those asking are interested in your way of thinking as much as your final answer. Don't be afraid to pause while you think, and try to say things that make them nod.
During a formal presentation, leave some content out so you know your audience will ask about it rather than unexpected aspects. Create some slides at the end to use when the questions come up.
Brush up and be able to use terminology to sound like an expert.
Offer multiple possible answers, explain why each could be correct, and conclude with "but it's a difficult problem."
Link the topic to a classic problem in a different field to demonstrate insight.
Never say "I don't know" right away; instead, rephrase the question until you are sure what they are asking.
Interestingly, two people suggested that women might have a bit of a harder time with this than men, since men may be more likely to make something up if they aren't sure. Or, according to @petitegeek: "I'd say it's social conditioning. Women are more prone to say things like "I might be wrong but" = nice. Here it = fail."
My next opportunity to use this advice will be my thesis proposal defence, and I am definitely glad to have such a great list!
I remember a student telling me the other week that she thought writing stories for games was easy, and wondered if there were jobs doing that. I had been giving a presentation with a professor from Carleton to high school students, and my part was all about connecting your passion to computer science. I've been thinking about that comment, and while it may seem easy to come up with a general story, making it truly interactive is another thing altogether.
Most stories in games are written as a 'string of pearls.' The main structure of the story is fairly linear, but each pearl allows the player to solve puzzles, explore, and generally interact before moving on to the next chunk of the story. There are sometimes branches, but they often come back to the same junction, or aren't overly complex if they stay diverged. After all, as pointed out in this fun video about storytelling in games, designers would have an awful lot of content to create for a story with many different paths!
Despite this common form of narrative, I think games have found some interesting ways to tell their stories that might not be the same in printed word or on film. For example, I've been playing BioShock with my husband and really enjoy its storytelling mechanisms. The game itself is fairly linear, but the insight into the world of Rapture is revealed slowly through recorded audio diaries scattered throughout. The voice acting is really well done, and getting just small tidbits of how the city fell from the mouths of everyday residents is fascinating. Without cut scenes, the emotional accounts of events that have already happened or are currently happening at the time of recording allow you to use your imagination. I quickly found myself getting excited every time I saw a tape recorder lying around, ready for the taking.
As game design progresses, I think we'll start to see new twists on storytelling. (I want to say "just like we saw in film" but since I don't really know the history, I can't know if this is the case - perhaps some film buffs can enlighten me.) One of the ideas I had was a take on the "see the ending first" technique movies and TV shows often use. You would begin playing the game and making choices until suddenly you realize you just played the end. The traditional "x years earlier..." would appear on screen, and you'd start playing from the beginning. The twist would be, however, that how you played at the end would very blatantly affect how the rest of the game progresses. As a player you'd be thinking, "Oh no! Why did I do that?!" but feel helpless to change it. I haven't decided what sort of scenario this would fit well with, but I feel like it could explore some of the darker aspects of human nature, since you aren't just watching how a particular event ends up happening - you're watching your own earlier actions bring it about.
Have you seen any unusual narratives in games? Have any interesting ideas? I'd love to hear about them, so please do leave a comment!
I did my oral defence for my comprehensive exam this morning and wanted to share what I learned, since it might help others. The details aren't important, but suffice it to say I got a conditional pass - I just need to do an implementation for my major topic.
The main thing that I wanted to express was the fact that I seemed to have a different idea of what the comprehensives were for than some of the committee members. This is by no means anyone's fault, but being aware of it might help you if your exams work in a similar way. I wrote in my previous post about the exams:
How these are run seems to differ school to school, and even more between disciplines. For our School of Computer Science, we have to choose three topics - one major and two minor - and know these topics at a fourth year undergraduate level. Then we have a two or three hour written exam on each of them, followed by a one hour oral once they are all graded. The oral is usually used to ask the student questions on areas they didn't do as well on in the written portion, making it a second chance of sorts.
It seems that some expected more depth of knowledge than the broad fourth-year undergrad level. They wanted me to interconnect ideas from the books and go beyond them to draw from other knowledge. To be honest, I didn't clue into this until it was too late. I was thinking in a very structured undergrad exam kind of way, where you answer the question and that's that.
Could I have gone deeper if I had realized I needed to? Perhaps for some questions, but to be honest, I didn't prepare that way and crammed all my studying for the comps into two months, so quite possibly not. I'm also not very good at doing that on the spot - I tend to need to think about something on my own first. I always thought I was good at seeing the big picture and making connections, but maybe I need to rethink my ability to do this and find new ways of digging deeper. (What are your strategies for achieving this level of understanding?)
My advice for anyone who has yet to do their comps is to determine exactly what's expected, and maybe even confirm your impression with all of your committee members before deciding how long you want to spend preparing. That way, if you are expected to have a deeper knowledge, you can leave time to practice with implementations, thinking about connections, and reflecting.
I remember seeing a non-zero amount of grumbling last year when acceptance notifications for CHI came out. In particular, those who were rejected listed all kinds of reasons why the process was broken. Well, I got to join the ranks of the rejected when I got my notice the other day, but contrary to the popular reaction, I'm actually not unhappy about it.
Note: That's not to say there's nothing wrong with the process. I'm just too new to know about it. ;)
You see, after we got our first reviews, I already knew we weren't going to be accepted. I wasn't even going to bother with a rebuttal. The reason was that I took what the reviewers were saying at face value, and just figured I didn't know enough about the field or something. I figured I'd have some work ahead of me to fix it up.
One of the co-authors was more of a CHI veteran than me and knew how to interpret the reviews. Turns out that one of the problems was that we unknowingly chose an inappropriate committee and paper type, so the type of people looking at the paper were really not what we were expecting. It kind of went downhill from there. Luckily this co-author said we should indeed do a rebuttal, not because we believed it would change our chances of getting in, but because it would be a useful way to better understand what the reviewers were saying and see where to go from there.
So we did the rebuttal, and I indeed found it incredibly useful. The most valuable thing I learned was that the content wasn't necessarily even the issue - as I said, the bigger issue seemed to be the lens the reviewers were using to look at it. The resulting confidence in my own work makes me feel good about submitting an edited paper to another conference. Better still, an alt.CHI idea we'd been toying with became more clear after the rebuttal. So really, it was a double-win.
I know it's really easy to have anger be your first reaction to a rejection, or perhaps something else negative - but my advice is to try really hard to see the positive side of it. If nothing else, you will hopefully have some feedback to help make your paper better, so when it eventually does get published, citations will be more likely.
This week I've been very busy with my written comprehensive exams. These exams are one of the last PhD requirements I have to worry about before the thesis proposal defence. After this, I don't think I'll ever have to write an exam again (certainly not as a student, anyway).
How these are run seems to differ school to school, and even more between disciplines. For our School of Computer Science, we have to choose three topics - one major and two minor - and know these topics at a fourth year undergraduate level. Then we have a two or three hour written exam on each of them, followed by a one hour oral once they are all graded. The oral is usually used to ask the student questions on areas they didn't do as well on in the written portion, making it a second chance of sorts.
My topics are human-computer interaction, computer vision, and computer graphics. I chose graphics as my major area even though it's the area I know the least about - I figured I might as well take the opportunity to force myself to learn it! All of these topics should be handy in my research area of educational games and augmented reality.
These exams are a little different than exams in regular courses. It's much easier to prepare for an exam based on lectures you've attended because you get a good sense of what the key information is during class. For the comprehensives (or comps for short), I have to know an entire textbook or two, and guess what's important myself. This takes a lot more work, but even the process of deciding what's important helps you understand the material better, so it's not all bad.
Here's the process I've been following to prepare. I've only had one exam so far, but it went well, so it seems to be a good strategy. The two minor topic exams are open book, and the graphics one is closed book with one cheat sheet allowed.
As I read through the textbook, I make notes on plain white sheets of paper in a binder.
I use colourful pens to write my headings so I will be able to skim them quickly and easily.
I periodically add page numbers to the side to make looking up more detail as easy as possible.
To make the graphics cheat sheet, I am going through my notes and picking out the most important things to remember. During the first pass, I am not worrying about the page limit.
I'm using LaTeX to type out my cheat sheet since there is a lot of math (especially matrices) to capture (a minimal example of what I mean follows this list).
Once the cheat sheet is done, I will see how long it is, and decide what information can be dropped. This will be partially based on what I think I can remember on my own.
I'm planning on having a little "teaching session" with my husband tonight or tomorrow where I will try to explain as many of the basics to him as possible. He's done a graphics course before but doesn't remember much, making him a good candidate for this activity.
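As promised above, here's a minimal sketch of the kind of LaTeX I mean for the cheat sheet. The 2D rotation matrix is just a stand-in example (not necessarily what ends up on my actual sheet), and the cramped margins are a personal choice to squeeze more onto the allowed page:

    \documentclass{article}
    \usepackage{amsmath}                 % for bmatrix and friends
    \usepackage[margin=1cm]{geometry}    % cram as much as possible onto the page
    \begin{document}
    \subsection*{Transforms}
    Rotation about the origin by angle $\theta$:
    \[
      R(\theta) =
      \begin{bmatrix}
        \cos\theta & -\sin\theta \\
        \sin\theta & \cos\theta
      \end{bmatrix},
      \qquad
      \begin{bmatrix} x' \\ y' \end{bmatrix}
      = R(\theta)
      \begin{bmatrix} x \\ y \end{bmatrix}
    \]
    \end{document}

Typing the matrices out this way is slower than handwriting at first, but it forces me to get every entry right, and the result is compact and easy to skim during the exam.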
What do you do to prepare for exams? Is there anything that worked really well, or that didn't?
Dance Central by Harmonix appears to be one of the better Kinect launch titles (some say the best). I'm personally very excited about getting it for my birthday this weekend. I think my mom might even dance with me! ;)
This is what some might call a casual game, "typically distinguished by their simple rules and lack of commitment required in contrast to more complex hardcore games." This is one of my favourite genres because I find it hard to find the time for longer, more involved games (case in point: I started playing Portal months ago and still haven't finished). I also love the social aspect - many games are kind of boring played alone, but are a blast when you have friends over.
This brings me to the main question at hand: Is the Kinect ever going to be used for anything other than casual games?
(My quick answer: I think it's possible if our creativity is up to the task!)
We have a fairly new game development club here at Carleton, and we've been discussing the Kinect for a couple of weeks now. I did a little presentation on augmented reality and how I thought the Kinect could fit in last week, and yesterday I brought in my Xbox and Kinect so we could experience it first hand. Most of those who tried Kinect Adventures seemed to enjoy it - you could almost see the moment they stopped feeling silly and got lost in the game.
There were a couple of guys who refused to try it, though. Their philosophy was that these games were silly, because they tend to be either a repetition of the same movements over and over in slightly varying contexts, as in Kinect Adventures, or mimicry of an activity you may as well just do in real life (like dancing, or tackling the challenge of playing guitar). Why would you want to play a game about dancing when you could just go out dancing with your friends? (I am of the philosophy that games are fun, and that they don't have to replace the real-life versions of these activities - they just provide another way to enjoy them.)
These weren't the only two who didn't see a very interesting future for Kinect, though. In fact I'd say our discussion turned into a somewhat heated (though civil!) debate on the matter. I, with a few others, believed that Kinect could go beyond casual games. We (as in game designers) just aren't necessarily sure how to do it yet.
One person who was not convinced at all explained his stance by pointing out that the Kinect isn't accurate enough to allow for complex game mechanics, and therefore it could never be used for a 'deep' game. I tried to say that while it can't be used for the types of games that need that kind of input precision (like shooters, RTS, etc), that's not what it's meant for. I think we will come up with new types of mechanics that take advantage of the Kinect style of input.
It seemed to me that there was an equating of complex mechanics and inputs with the ability to have a deep game, but I don't think this is the case at all - otherwise, how would you explain the depth found in many board games? Chess and Go, for example, have very simple mechanics but lots of emergent behaviour. I could see myself getting pretty deeply immersed in a game that used some form of Kinect input while engrossing me in a great story. It's possible the result wouldn't please the hardcore gamers, but that's not really a problem from my point of view, since the potential market of 'rest of us' is pretty huge.
An alternative perspective is to consider that a game's inputs don't have to come only from Kinect. For example, a fellow club member suggested having the Kinect get some basic body language information from the player might cause the non-player characters to react appropriately. I figure even voice could be used here, the tone being interpreted as friendly or not. Or customization in a game could come from objects in your playing space or movements you make. A neat idea I had when someone was talking about fighter games was to capture your character's opening pre-fight sequence. Or perhaps the Kinect just gives you a choice - you can either shoot your occasional-use slingshot with the regular controller, or do the aiming with your hands. With so much work going on in the Kinect hacking arena, innovation certainly seems possible.
For a specific example of inspiration, check out the video below. It's not using Kinect, but this kind of input could be done with Kinect and a projector, and perhaps a similar enough game could be created for a traditional screen. The demo shows an open-play type of environment, but I could see creating a game that would be considered deep for a younger audience (or even adults if done right).
While I can't say for sure whether we as game designers are ready to come up with ways to use Kinect for deep games (maybe we need to go through another generation or two of technology first), one thing's for sure: this is definitely a hot topic of debate. Where do you stand on it? Where do you see Kinect going in the future?
In September I submitted my first CHI paper. Since then we've got our reviews back and written a rebuttal, and now must wait until the final decision comes down in December (though we already know what it will most likely be). During this process, I've found an unexpected source of insight into how the CHI community works: the #chi2011 Twitter hashtag.
Cate Huston's Masters research is all about Twitter, and she recently wrote a blog post on exploring conference hash tags. She grabbed data from the Eclipse Conference 2010 hashtag and visualized a few different things, including a Wordle and frequency graph on tweet content and various information about clients used to tweet. She also captured insights into the users participating in the chatter. I immediately thought about what information would be available from the CHI hashtag because of how useful it's been to me in the past few months.
One of the biggest things the hashtag did was make me feel like I was part of the community, even though I'm really completely new to it (I haven't even attended a CHI conference yet). Watching everyone panic together as the submission deadline loomed ever closer was actually kind of thrilling, for example.
The review period was a bit of a roller-coaster ride, with tweets about how great and how horrible the papers were - you never know if it's yours they are talking about! But when those reviews did come back, it was relieving to see how many people fared as well (or, more accurately, not-so-well) as we did. It was even more fun to see the exact complaints people were making about their reviews and how they would position their rebuttals.
And then there's the entertainment value. One of my favourite CHI tweeters is @SottedReviewer, who makes various witty and timely remarks in all-caps. A couple of my favourites:
IF YOU STILL HAVEN'T COME TO TERMS WITH HOW BAD YOUR #CHI2011 PAPER WAS ALL ALONG, SUBMITTING A REBUTTAL IS A GREAT WAY TO STAY IN DENIAL.
IF YOU'RE NOT STUDYING MICRO-BLOGGING TURKERS USING MULTI-TOUCH EYE-TRACKERS FOR SOCIAL GAMING, YOU'RE NOT GETTING INTO #CHI2011
FUTURE VERSIONS OF THIS PROMISING PAPER SHOULD INCLUDE LESS SUCK, MORE FLATTERING OF MY EGO, AND USE OF NETWORKED TABLETOP TURKERS. #CHI2011
Interestingly enough, I get the feeling this humour also gives me insight into some of the inside jokes of CHI, also making me feel more part of the community. Like that whole turkers thing. What's up with that?
It sounds like Cate's going to be doing some analysis on the CHI hashtag. I'm looking forward to seeing if any of her data gives me more insight into my reflections here. For example, am I getting only a small part of the picture because only some small cliques do most of the tweeting? How many more people use the hashtag close to the submission deadline, review release date, and rebuttal deadline?
What has your experience been following conference hashtags before, during, and after the event?
I just joined the Anita Borg Institute for Women in Technology (ABI)'s Board of Advisors, and while I'm obviously pretty excited about this on a personal level, I wanted to share a bit about how I got here. It really goes to show that with a little effort and commitment, great things can happen.
I suppose you could say it all started with this blog. After all, by the time my first Grace Hopper Celebration of Women in Computing in 2008 rolled around, I was pretty confident in my writing. So I volunteered to be a community blogger, and delivered what I promised to do. Leading up to 2009, I offered more and more to help with community-related tasks (mostly blogging still), and was eventually made Lead Blogger. I organized all the blogging and note taking volunteers and made sure as many key sessions were covered as possible. This past year, I was on the newly formed conference committee for Online Communities, where I was again Lead Blogger among other things.
Thanks to my involvement with ABI and the conference, I became pretty visible in the community. This past year, I was also a Hopper volunteer for Grace Hopper, which means I did 8 hours of work in exchange for free registration. Through a little bit of good luck, one of my assignments ended up being the Board of Advisors meeting. This was pretty funny, because it's not like they needed me there. But I thoroughly enjoyed meeting everyone, and even sitting beside (and explaining the Poken devices to) the legendary Fran Allen. (She's awesome, by the way.) I joked that it was like fate, because if they were looking for more members and wanted another student and/or someone from Canada, I would totally be interested.
Lo and behold, less than two months later, I was invited to the Board. Who knows whether being at that meeting helped, or whether my visibility through my active involvement with ABI was enough - either way, going above and beyond and offering yourself to help where you can clearly pays off. Don't be afraid to offer to help with something you're good at, because you just never know where it could lead you.
I received a comment on Is a Software Architect Worthy of the Name? via email from Lourens Veen, a software architect for a project at the University of Amsterdam, where he is helping build a biodiversity information system. He wrote about how much more similar software and traditional architects are than what I originally laid out. I really liked his email, and he gave me permission to repost his comment here.
My job is to look at how the system we build fits into its environment. In my case, the lead engineers are similar to the structural engineers that you have in building architecture. Their job is to ensure that the things we design can actually be built and maintained by our programmers. What I do is figure out how it interacts with other systems, and with its users, system-wide and in the long term. My job is to look ahead and think about what the users are likely to want in a few years time. Of course I have no hope of doing so in detail (typically, users don't even know what they want for themselves, right now), but I can hopefully see well enough where things are going to make sure that the core of the system won't need to be recreated from scratch anytime soon. I think building architects have a similar role: they design a building for a client, but also need to take into account that the building will still be there decades into the future, and still needs to earn its keep then.
So, I think that building architects and software architects have even more in common than what you described. I even think there is an equivalent to that other job of a building architect: making it beautiful. Of course software has no physical embodiment, so it has no physical beauty (artful syntax highlighting excepted :-)), but its design can still be just _right_. It's hard to define what makes a design right, but a good software architect recognises when that is the case, just like a building architect recognises a good-looking building. Linus Torvalds (original author and architect of the Linux kernel) even calls it "taste".
As an added bonus, Lourens ended his email with a little bit about why he loves computer science so much in general. I wanted to include it here because the point about puzzles is so similar to what I love, too.
The attraction of computer science to me has always been the solving-a-puzzle aspect, and while software architecture is certainly not the most technical area of computer science, getting current user requirements, potential future requirements, current technology and future technological developments all covered in a single design is usually a very interesting puzzle. And a rewarding one too, if you manage to give the users a system that satisfies their needs now and into the future.
Really makes me want to be a software architect myself one day! I suppose if I do end up in industry at some point (and I hope I do, at least during internships), this is something to explore. Thanks again for letting me share your perspective, Lourens!
We all know there aren't enough women in computer science. There have been several different approaches to encouraging them to consider it as a career, from fun webisodes to a variety of outreach activities. But has anyone ever considered using games? Not just putting on courses on making them, but creating a real video game that puts players into the role of a computer scientist. One that's still fun to play but that conveys how great computer science really is!
Enter Imagine Cup, Microsoft's challenge for students to solve the world's toughest problems through games, software, and other digital media. While the UN's Millennium Goals are outlined as part of the competition's theme, entrants are not restricted to only those issues:
It might sound lofty. And perhaps a little ambitious. But when it comes down to it, this year's theme couldn't be more relevant. And while it's not a requirement to base your entry on one of these ambitious goals, you may want to use them as inspiration to promote change around the globe.
I decided that I wanted to create a game designed to get young women to see computer science as an interesting and attractive option. Imagine Cup is the perfect place to get started. When I first pitched the idea to some potential team mates, they weren't entirely convinced, but once I had a more concrete game concept, they got more and more excited.
The main idea of the game is to create a scenario that has a strong emotional tie for the player, where she learns computer science concepts in an effort to solve the problem presented. I was really inspired by the movie Up, and in particular, the opening sequence showing the lives of the early childhood friends as they grow up, get married, attempt to have children, and grow old together. It's hard not to cry at the end of it, yet not a single word is spoken. I wanted a similar emotional tie for the players in my game.
Thus the story is that Grandma is at risk of having to leave her beloved home, similar to how Mr Fredrickson is at risk of losing his house in Up. Social workers come and want to take her away to a group home, but you know how awful this would be for her, so you make an offer. If you can equip Grandma's house with the necessary technology to make her independent according to the social workers' satisfaction, then she can stay.
You spend the game searching for the hardware you need, and each time you bring something back, you must solve a puzzle to activate it (loosely correlated to "programming" it). These puzzles will actually centre on computer science topics that make sense for the technology (such as figuring out a sorting algorithm for arranging bottles of medication properly). The player puts herself into the role of a computer scientist who is doing social good and helping someone dear to them.
James Paul Gee talks in his work about how important roles are for learners, and explains how games can help put them in more positive roles. Based on this, girls should have a much more positive outlook on the field of computer science. Even if they don't end up choosing it for themselves, getting more people to see it as something other than nerdy will help prevent social barriers going up for girls who are interested in choosing it.
The best part? This isn't a girls-only game. It's an everyone game that happens to be carefully designed to appeal to girls as well. I can't wait to get started.
I have a friend who is an architect (the kind that designs buildings instead of software). I was reminded recently of a conversation my husband and I had with him a while back that was basically about whether you could really be called an architect when you designed software. It didn't really make any sense to him, possibly in part because of the intangible nature of the end product.
I was reminded because of a sentence I read in a text book I'm reading called Interaction Design: Beyond Human-Computer Interaction. It was comparing architects with engineers, pointing out that architects care more about the user experience (what layouts of building are conducive to certain activities, etc), while engineers are concerned with the technical details (like calculations and numbers). While I'm sure this isn't the whole picture, it does give a bit of basis for arguing why software architects can be called architects.
Software architects are generally responsible for the overall design of code. They care about the user experience of software developers (who in this comparison may be seen as the engineers) for the end goal of making it easier for them to create high quality software. They do this by considering design patterns, enforcing coding standards, and making high-level decisions. In a sense, they create a layout of the software architecture in much the same way that architects do for buildings. Even though the 'user' in the considered user experience isn't as much an end user as a regular architect would be designing for, I still think the philosophies of the two roles align.
What do you think? Are there more similarities, or do you see the two types of architect being pretty distinct?
It took me a whole two days after launch to finally get it, but I now not only have a Kinect but my first Xbox console. After playing an evening with my husband, my brother and his friends, then again last night, I have to rate it very highly.
We played Kinect Adventures, which came bundled with the console and Kinect sensor. It was really easy to get started with the obvious controls, and thanks to the competitive nature of our guests, we had fun playing the same level many, many times. Our only issue was that we had barely enough space. The taller folks of the group definitely hit their head a few times on the heavy metal chandelier. (The image below gives you a pretty good idea of what it looked like, except that we only had room for one player at a time.)
I got Kinect largely because I see potential for some augmented reality games that I could work on for my thesis. There's clearly a camera in there based on the photos Kinect took of us while we played Kinect Adventures, but how good it is remains to be seen. I'm guessing the photos taken in this game are purposely low-res so they don't clog up the storage space. Some demos show people video chatting and such, and the picture looked a lot better for those images. After I learn XNA for Imagine Cup this year, I definitely want to look into how one can go about developing for Kinect.
The sensor itself is actually quite incredible. As the video below shows, it throws out a bunch of little IR dots into the room and uses these to measure the full body positions of players. The great thing about this technique is that it seems to work remarkably well in low light. Not only were we able to play games with just one small lamp on in the corner of the room, but after calibration, Kinect ID was able to tell exactly who was standing in front of the sensor every time! Whether they do their face recognition with the IR sensors or with the cameras, I'm impressed and optimistic about the aforementioned possibilities for AR.
I'm looking forward to seeing what kinds of Kinect games come out over the next while. The current offerings are all fairly similar - dancing, jumping and ducking, and fitness. Some have criticized that these are probably the only kinds of games that would even work well with Kinect anyway, but I disagree specifically because I don't feel an entire game has to be played via body movement. Instead, the main mechanics can still be controller-based with various occasions to use Kinect instead. Some games could require Kinect, and for others it could be a bonus alternative way of completing certain tasks.
There's a new theme over at Comp Sci Woman about what we love, and I wrote the first post for it:
There are two things I love the most about computer science: the ability to connect it with whatever your passion is, and the type of problem solving involved.
Go check out the rest of the post to find out what it is about problem solving and passion that I love, and be sure to share the love by contributing your own post to the site!
I've been studying for my upcoming PhD comprehensive exams, and my major topic is computer graphics. Even though I haven't actually had the opportunity to take a class on graphics, I'm finding I've seen a lot of the material in the book I'm reading (Fundamentals of Computer Graphics). One thing that struck me recently is the book's alternate approach to explaining perspective projection.
I learned projection based on the pinhole camera model while learning computer vision. The way to think of it is that light hits an object at point P, and some of it travels through the centre of the camera at point O. Inside the camera is an image plane (Y1), which might be, for instance, film or a digital sensor. The particular bit of light from P will hit the image plane at point Q. When light is traced from each point on the object back to the image plane, an image of the object will be formed upside down. The math behind figuring out exactly what the image will look like and where it will be involves similar triangles and the like.
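To make the similar-triangles step concrete (with my own symbols, since I'm not reproducing the diagram): suppose the image plane sits a distance d behind the pinhole O, and P lies at coordinates (X, Y, Z), with Z measured along the axis pointing out of the camera. The triangle formed by P, O, and the axis is similar to the one formed by Q, O, and the axis, which gives the image coordinates of Q directly:

    \[
      \frac{|y|}{d} = \frac{|Y|}{Z}
      \quad\Longrightarrow\quad
      x = -\,d\,\frac{X}{Z}, \qquad
      y = -\,d\,\frac{Y}{Z}
    \]

The minus signs capture the upside-down flip, and the division by Z is exactly why farther objects look smaller - the perspective effect contrasted with orthographic projection below.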
This same concept, known as perspective projection because the final image will have perspective (parallel lines that don't look parallel), is still true in the Fundamentals of Computer Graphics explanation. But in this case, we want to be able to express the projection in terms of an orthographic projection, something that was already established in the book mathematically. Orthographic projection is when an image is created, but parallel lines stay parallel. Architectural and model drawings are often drawn this way.
Objects further away look smaller with perspective projection (left), but not with orthographic (right)
An orthographic projection works by drawing a straight line from P to the image plane, where that line is perpendicular to the image plane. It turns out that in the pinhole setup described above, any point along the line that passes through O and P will appear at the same place on the image plane in a perspective projection. So if we transform the scene so that these lines become perpendicular to the image plane, we can do an orthographic projection and get the same result (since all points on a line perpendicular to the image plane land on the same spot in an orthographic projection).
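For the curious, here is roughly how that warp looks in homogeneous coordinates. This is my own reconstruction of the idea, close to the form used in Fundamentals of Computer Graphics, with n and f the distances to the near and far planes:

    \[
      M =
      \begin{pmatrix}
        n & 0 & 0   & 0 \\
        0 & n & 0   & 0 \\
        0 & 0 & n+f & -fn \\
        0 & 0 & 1   & 0
      \end{pmatrix},
      \qquad
      M
      \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
      =
      \begin{pmatrix} nx \\ ny \\ (n+f)z - fn \\ z \end{pmatrix}
      \sim
      \begin{pmatrix} nx/z \\ ny/z \\ (n+f) - fn/z \\ 1 \end{pmatrix}
    \]

After dividing through by the last coordinate, x and y get the familiar divide-by-depth that produces perspective, while z is warped in a way that leaves the near and far planes where they were (plug in z = n or z = f to check). A plain orthographic projection applied afterwards then produces the perspective image.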
I enjoyed seeing perspective projection from this point of view; it actually helped me understand the geometry behind it all a bit more deeply. It makes you wonder what other topics we could explain in two or more simple ways, and how many students would benefit from doing so.
Miniaturization: Less memory and disk space, more interdisciplinary opportunities.
Consolidation: Standards, complexity, integration, and information sharing.
Data explosion: Classification, privacy, security, control, monitoring.
Outsourcing: Deciding where workforce should come from, where and how the workforce will work, trusting the provider of services and products, privacy.
There's an app for that: Trusting people to write important apps, governance and quality control in app stores.
Accountability: Enforcing age limits (upper limits for children's sites, lower limits for adult sites), identity confusion, logging, analysis, balancing privacy/anonymity.
Customization: Increased complexity, trusting users to make the right choices (particularly with security settings), issue of it being good for users and bad for developers.
Communication / Social computing: Governing what goes up online, expectations of use and access in organizations, combining with other applications.
Context: Better help systems, have to be careful with personal information.
Convergence: Inter-application communication.
One cool thing I noticed about this list, especially near the end, is that many of these trends fit into the realm of augmented reality. Context is pretty obvious, given that the very nature of AR indicates its importance, but a lot of these other ideas are still at least a bit related. An example given of convergence was MIT's Sixth Sense for the fact that it provides information in a way that makes it 'always available' - AR in general can make information always available, since, at least conceptually, it doesn't require that you switch context between what you are doing in the real world and the information related to that task. Customization of one's own world is possible because of AR's inclusion of virtual objects. AR applications can definitely be social in a built-in kind of way, but also in a talking-to-people-in-the-real-world kind of way. There are even more ways to fit the remaining trends into AR but I'll leave that up to the imagination of the reader.
Are there other up and coming interfaces that speak to this list of trends?
I submitted a paper to this year's SIGCSE (ACM's Special Interest Group on Computer Science Education) conference that didn't get in. The reviews were actually fairly positive overall; I got the impression that even though it was an experience report and not a research paper, it needed to be more like the latter (so I'll know where to improve for next time). Thanks to the power of the Internet, luckily, I can share the paper and hope that those who would find it useful will stumble upon it.
You can find the paper, called "Adding Computer Science to an Introductory Computing Class for Non-Majors," on my portfolio page about the course. My main purpose for the paper was to show that arts students are capable of learning more difficult computer science topics if they are taught in the right way, and that they actually enjoy gaining insight into how computing works. My hope is that other departments that have "using computers" courses for non-majors rather than "computing" courses will consider trying something new.
Just in case I wasn't attending enough Women in Computing events (see posts on this year's Grace Hopper), I registered for the Ontario Celebration of Women in Computing (ONCWIC) coming up this weekend in Kingston, Ontario. Not only that, I even ended up being involved in two posters and one presentation. Not bad for a 24 hour event!
Kingston at night (a photo by me last year)
One reason I decided to attend this event is to force myself to make a research poster for my recent work on how cognitive theories help explain the value of augmented reality. (If nothing else, our lab needs more posters to hide the dirty white walls we aren't allowed to paint.) I'm pretty happy with the results of the poster, and will definitely post it on my portfolio a little later this year with more info about the research (just want to wait until the related paper has been reviewed).
The other poster and presentation are based on a presentation that the CU-WISE co-founders have given before at NCWIE and at Grace Hopper. Our poster is going to be pretty simple with some photos as discussion points and room to pin up our promotional and outreach supplies. The talk is only ten minutes and is going to go over some of our keys to success. I'm particularly excited about these because Barb and Natalia will be coming to present! Better yet, Natalia is bringing her new baby. Yay! :D
Finally, I'm looking forward to a program that's a little different from other conferences I've attended. Being much smaller than Grace Hopper and the like, ONCWIC is able to be more intimate. We're actually going to be able to meet everyone personally if we want to (musical appetizers at 6:15!), and they're all going to be local! There's an evening social event followed by games and desserts (which counts as social to me, so it's like a double header).
Last but not least, I'm hoping to check out Fort Fright at Old Fort Henry before heading home Saturday night. This is going to be an amazing weekend.
I love it when something so simple is so effective. Tom Moher's 2006 paper [ACM, CiteSeer] describing his work on what he calls Embedded Phenomena was a case of "why didn't I think of that?" for me for sure. He offers an affordable way to integrate digital information into standard classroom practice, and while he doesn't use the term augmented reality, I think the systems created definitely are.
The abstract of the paper goes like this:
‘Embedded phenomena’ is a learning technology framework in which simulated scientific phenomena are mapped onto the physical space of classrooms. Students monitor and control the local state of the simulation through distributed media positioned around the room, gathering and aggregating evidence to solve problems or answer questions related to those phenomena. Embedded phenomena are persistent, running continuously over weeks and months, creating information channels that are temporally and physically interleaved with, but asynchronous with respect to, the regular flow of instruction. In this paper, we describe the motivations for the framework, describe classroom experiences with three embedded phenomena in the domains of seismology, insect ecology, and astronomy, and situate embedded phenomena within the context of human-computer interaction research in co-located group interfaces and learning technologies.
As mentioned in the abstract, the paper reports on three different projects. In each, simple tablet computers act as windows into another world. Their placement in the classroom matters. For example, the solar system project, HelioRoom, has the tablets positioned so that the centre of the classroom becomes the sun, and planets orbit around it in a proportionally correct small scale. As the planets orbit around, they appear in the tablet windows at exactly the time they would had they actually been travelling around the entire room. This makes the digital information location-dependent, and this is what makes it an instance of augmented reality.
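To make that idea concrete, here's a rough sketch of how such location-dependent timing might be computed. This is my own invention rather than code from the paper, and the orbital periods and tablet window arcs are made-up placeholder values: each planet sweeps around the room at its own scaled rate, and a tablet simply checks whether a planet's current angle falls within the arc of wall that tablet covers.

    import math
    import time

    # Orbital periods scaled down so the whole system is watchable over a class
    # (hypothetical numbers, roughly keeping the planets' relative speeds).
    PERIODS_SECONDS = {
        "Mercury": 60,
        "Venus": 154,
        "Earth": 250,
        "Mars": 470,
    }

    # Each tablet covers an arc of the classroom wall, in degrees measured
    # from the room's centre (the "sun"). These values are made up.
    TABLET_ARCS = {
        "tablet_north": (80.0, 100.0),
        "tablet_east": (350.0, 10.0),   # an arc that wraps past 0 degrees
    }

    def planet_angle(name, t):
        """Angle of a planet around the room, in degrees, at time t seconds."""
        period = PERIODS_SECONDS[name]
        return (360.0 * t / period) % 360.0

    def in_arc(angle, arc):
        """True if angle falls within the arc, handling wrap-around at 0/360."""
        lo, hi = arc
        if lo <= hi:
            return lo <= angle <= hi
        return angle >= lo or angle <= hi

    def visible_planets(tablet, t):
        """Planets that should currently be drawn in this tablet's window."""
        arc = TABLET_ARCS[tablet]
        return [p for p in PERIODS_SECONDS if in_arc(planet_angle(p, t), arc)]

    if __name__ == "__main__":
        now = time.time()
        for tablet in TABLET_ARCS:
            print(tablet, visible_planets(tablet, now))

Because the simulation only depends on elapsed time and each window's fixed position, it can keep running for weeks with no attention from the teacher, which is exactly the "persistent, asynchronous" quality the abstract describes.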
One of the things that struck me about this use of technology in the classroom is how easily the teacher could continue working how he or she always has. I remember another educational games author pointing out that we can't bring all kinds of new and exciting technology to the classroom and expect teachers to be able to learn how to teach in a whole new way as well as learn the new technology. Instead, we need to first bring technology that supports the way the classroom already works, and in the future begin slowly transitioning to new ways of teaching. If you look at the pictures included in the paper, you'll see students working on charts, in groups, with teacher direction -- heck, you'll even see those traditional Styrofoam model planets hanging from the ceiling! Everything teachers did before they still do; they just have a new way to visualize things in a spatially and temporally aware way.
I'd really like to see more projects that use simple technology like this in education. Sure, it'll be great when we all have our own augmented reality glasses and can recreate detailed simulations right in front of our eyes, but those days are a long way away. Let's use what we have now to create engaging learning environments without having to drastically shift our way of teaching quite yet.
The Grace Hopper Celebration of Women in Computing had two special technical tracks added to the program this year: open source and human-computer interaction. While I was definitely happy to see the open source track, it was the HCI talks that really got me excited. I'm just getting into HCI myself, choosing it as one of my topics for my PhD comprehensive exams and submitting my first CHI paper. There was so much to learn from a variety of great speakers!
When I tell someone about the Grace Hopper Celebration of Women in Computing, I start by explaining the dance parties. I tell them, “You wouldn’t think that an all-female dance would be fun… but you’d be wrong. There’s nothing like dancing with hundreds of technical women who let loose because there’s nobody around to feel stupid in front of.”
One of the sessions I attended on Wednesday at GHC was a PhD forum. In this special type of session, three PhD students present their research in an hour, and the audience fills in feedback forms to give them suggestions and/or praise. It's a great opportunity.
The first presentation in this particular session was given by Laurian Vega, studying HCI at Virginia Tech. Her research is all about usable security, with a focus on day cares and doctor's offices. Although I'm not a security person by any stretch of the imagination, I found the topic quite interesting. (My friend Terri is also looking at usable security in her PhD research.)
Laurian is doing a qualitative study of security in the aforementioned settings by being an active observer of their everyday practices. One of the keys here in terms of security is that the users are members of communities, not individuals. And while it has been traditionally held that humans are the weakest link in security technology, neither Laurian nor Terri buys it. Instead, they say that security is just not designed with users' mental models in mind.
One of the most interesting findings from the study was the reliance the practitioners have on paper records. They like the fact that the information is physically nearby. Some like that they can put more sensitive information near the back of a file where it's unlikely anyone else would look. The files can be closed and shredded. The downsides, however, include the fact that, according to some research whose source I can't remember, 41% of the time someone who is distracted from a task doesn't return to it. This makes files left open vulnerable when whoever is reading them is interrupted.
Laurian's work will end before a concrete design is actually proposed. I am very interested in seeing what kind of technology would work well in these kinds of settings yet still be secure. I hope more security researchers become more willing to consider the human side of the security equation. Terri also wrote about this session.
This year's edition of the Grace Hopper Celebration of Women in Computing is officially underway, and you can almost taste the excitement. Last night I had the opportunity to speak with an external evaluator about my experiences with the conference in an effort to determine what kind of impact it really has. I quite enjoyed the opportunity to reflect on my role this year and the previous two years I’ve attended.
I arrived in Atlanta on Monday for this year's Grace Hopper Celebration of Women in Computing. The conference officially started on Tuesday night, so we had a bit of time to explore before registering. There was lots of great stuff we wanted to see, but we settled on the Atlanta History Center.
On the grounds are two old buildings that you can tour. The first we visited was the old farm house.
I found this tour pretty fascinating because my husband Andrew and I own an old farm house as well, and it turns out that the style of house found in rural Georgia in the mid 1800's is a lot like ours. The clapboard exterior matched exactly, and they had pine floors, just like us. One of the noticeable differences, however, was that their windows had a grid pattern and were thus likely made out of smaller panes of glass put together. Our windows (still original!) have just one large pane of glass for the whole upper or lower part.
We had fun posing in the kitchen outbuilding.
After that tour, we headed to the Swan House. It's a fancy house built in 1928 in the style of an old English country home. It's massive with columns! It's very ornate inside too, where photography was not allowed. A little too ornate in some cases. Neat to see the original showers from the 20's though - apparently they thought that washing your dirt away was more sanitary than sitting in it. Makes sense to me.
Other than that, we took a quick look at the Abraham Lincoln special exhibit that showcased a collection of original documents related to his life and presidency, and learned about the Civil War.
A great way to gear up for the Grace Hopper Celebration of Women in Computing is to decide what your goals are for when you get there. This year, for the first time, I actually have some specific types of people I want to meet, so this is what I'm going to focus on.
I recently got some business cards printed. I designed them to look like my website, and I gave myself the tagline "computer scientist, educator, blogger." Right before they arrived in the mail, I heard that I was eligible to get a free Poken to use at the conference, since I'm a student. I'm pretty excited about the Pokens (I already set up an account and installed their iPhone app), but I'm actually still excited about my business cards. I think I will try to use both together. The Poken will be a convenient way to collect online profiles of people I meet, but the business card should help the people I give it to remember me a little better.
Now that I've started the second year of my PhD, I know what my main thesis topic will be: educational games and augmented reality. I don't know what my exact project will be. I have a few ideas, and a research project that I've been working on lately in the realm of AR should really help ground my final choice. I figure that if I get the chance to meet some others in games, AR, and HCI in general, I might get some cool new ideas! So that's who I'll be looking for.
If you're interested in education, games, and/or augmented reality, and want to meet up, I'd love to set up a time to chat with you! I'll be in Atlanta from Monday until Sunday. Contact me.
I've been working on a paper for CHI2011, one of the (or the?) top conferences in human computer interaction. I'm aiming really high with this and know full well that it's a competitive conference that I can't expect to get into on my first attempt. The way I see it is that I have a 100% chance of not getting in if I don't even try, and if I do get rejected, I'll hopefully receive feedback useful for the next iteration of the paper no matter where I plan to submit it next. Plus, this goal encourages a much better paper than I might have written otherwise, because we're not going to submit something we know isn't good enough for CHI.
To make things even harder on myself, it's the first time I've ever written a paper like this one. I'm proposing that designers use a certain set of useful cognitive theories when creating augmented reality (AR) systems. These theories are also useful for explaining why AR is good and to influence the design of user studies, but for this paper I'm concentrating just on design. It's a theoretical paper, and I don't know how well received it will be by CHI reviewers. But more interestingly, I only learned about cognitive science in a class I took in the fall. After all, I'm a computer scientist and we don't usually talk about these things.
Because I am somewhat out of my element on this paper, I have been noticing a few things that I didn't really think about when writing previous papers. For instance, going through a few iterations has been key. I always get a little stressed before the next meeting to go over issues, but I'm usually relieved by the fact that a lot of the missing elements are things I've had inside my head but not managed to get out onto paper yet.
One of the things I was constructively criticized for was not being assertive enough in my statements. Especially in this type of paper where the contribution is not experimental results, I need to be less afraid to say with confidence that "this is the way it is." At least, that's how I'm interpreting the advice; we'll see how well I can incorporate it as I start my next iteration.
A related issue I've been struggling with is how much I can say without citing something to support it. For example, I want to just describe what I think AR is, but I have been limiting myself to saying it in a way that others have said it. It's kind of stifling, so for my next iteration I'm going to try to allow myself more freedom, and see what the others think. I can always backtrack.
With just a week left before the submission deadline, I'd welcome any advice on such a paper for CHI. With very open arms. Please and thanks. ;)
Have you seen the new CompSci Woman blog yet? No? Well get over there and check it out! And better yet, if you happen to be female and have any kind of computer science background, consider contributing to the blog as well.
I just wrote up my piece for this month's theme on "how I got into computer science." It's called Behind the Screen:
I once considered attending a local specialized high school called Canterbury. It’s an arts school, and I wanted to attend for creative writing. After all, I had won a writing contest or two in my day, so I thought I was pretty good at it.
Unfortunately, the bus ride was far too long from my rural home, so I never went. Fortunately, I never let go of my creative side, which also included a love for drama, music, and now photography.
You'll have to read the rest of the story over at the blog.
Cate Huston is one of the two creators of CompSci Woman (Maggie Zhou is the other). Cate shared some of the "why" behind it all:
What brought it home so strongly, how hard it had been to be a minority, is that at the time I wasn’t. Extreme Blue Canada had an amazing number of women in the program this year. There was a girl on every team - two on some, including the team I was on. It was noticeable compared to the US teams at expo - Canada had exceeded the magic ratio, at which the women were not minorities, but normal.
It was different for Maggie, who was one of two women in her building. We talked about this - we had very different coping strategies. Towards the end of the summer, I floated the idea of a blog to her - the natural next step from the many conversations we had that summer. We thought that whilst you might not want to brand yourself as a woman in CS (every woman in CS I know is so much more than that, perhaps it’s like evolution, only the most awesome/stubborn/motivated/interesting survive), you could brand a platform, provide a forum for women who don’t have the time, or inclination to run their own blog. Maggie was excited by the idea as well, and we started to sketch out a vision and pitch (EB gave us a lot of practise in that) our idea to people. They were interested. They promised to blog for us. CompSci Woman was born, although unnamed.
With your help, we can build a platform, and a community. Because more people means more mentors, and more role models, and more inspiration. And that - well, I hope it’s just the start.
Inspired? I hope so! Now get out there and write your piece! I'll look out for it in the next few weeks. ;)
Today is the first official day back to school, even though classes don't start until Thursday. I used to get very excited about this new beginning, but lately I haven't had the opportunity to enjoy it. That's because I never really left.
Still, I get to enjoy the hustle and bustle that is Frosh Week thanks to my involvement with CU-WISE. Tonight we have a booth at the clubs and societies fair, Carleton Expo. We'll be setting up our booth with some nice swag to give away and getting a bunch of new students to sign up for our mailing list. Tomorrow, a few of us will speak at the Faculty of Science orientation, where we will reach both males and females. A few others have already been involved with the special engineering events happening on campus.
It's a weird feeling, still being in school. Grad school is very different from undergrad, but it's still school. I still have at least three or four years of it, too. It's difficult to imagine leaving school for good simply because it's been so long. The longest I've been away was for my eight month co-op terms, and I can't even remember what that was like anymore. If I end up staying in academia when I graduate, I might never remember!
Anyway, I just wanted to wish everyone a happy back to school. If any of you are doing anything exciting or want to share how you feel about that first day back, I'd love to hear about it in the comments.
Some people scoff at those who have too many friends on Facebook. "They probably just friend a bunch of people they don't know to look popular," they say. While I'm not interested in having lots of 'friends' for that reason, I do find that Facebook can be incredibly valuable for networking. That's why I tend to have 400-500+ connections at any given time.
You know how it used to go when you met someone at a conference you wanted to connect with? The standard practice was to exchange business cards. I have no idea how people kept in touch before email, but before social media, email was the main choice. I remember emailing people after some conferences in my early undergrad years. We'd exchange one -- maximum two -- emails and then forget about each other.
One day in early 2007, someone I met at a conference finally convinced me to join Facebook. I had been avoiding it because the concept seemed dumb at the time (shows how much I knew), but finally relented so I could keep in touch with this person and a few other conference attendees.
This kind of networking is still one of my most valuable reasons to have Facebook. Now if I want to keep in touch with someone, I find them on Facebook instead of telling myself I'll actually email them more than once. I can have a passive connection with them where neither of us has to put any extra work into keeping in touch, but we don't forget about each other. Plus, if I see an update from one of these people that I think I can help out with, I jump on the opportunity. Most do the same for me. I've definitely seen many of them again thanks to this!
(Note: This goes for Twitter or any other social network that you and the other person you are connecting with use often. Take advantage of the places you hang out anyway!)
Grace Hopper is fast approaching, so I find myself, once again, madly going through my usual list of tasks to do before heading to a conference. Here's my process:
Get funding. This comes from different sources depending on the conference, ranging for me from our CU-WISE budget to Carleton's Student Activity Fund to my supervisor.
Make a Google Map. I create a new map for each conference or event I travel to. I start by plotting the main conference hotel. If applicable, I then add other hotel options. Finally, I add potential sight-seeing opportunities and restaurants I want to visit.
Book flights and hotel. This one's pretty obvious. Best to get it sorted out early.
Plan schedule. This certainly won't be set in stone, but I like to look through the conference program and decide what sessions are "can't miss," and add these to my Google Calendar. I also try to plan out the sight-seeing portions of the trip (I always make sure to have some extra time for looking around!).
Gather documentation and currency. I print out all my flight and hotel info, my schedule, and make a packing list. I make sure I have my passport and enough money in the appropriate currency. I also bring my marriage certificate because I changed my name on a lot of my documentation, but not my passport. This time I'm also going to order custom business cards to hand out at the conference.
Prepare camera. Fresh batteries, clean memory cards, and in my case, clean lenses. I also have to decide what equipment I'll take (lenses? flash? carrying cases?).
Prepare laptop/phone. Again, want to make sure they are well charged for the plane, and that I have all the charging cables. I also try to make sure all the software I might want is installed on the computer I'm bringing.
Pack. It really is best not to leave this until the last minute. Use the packing list you made earlier and cross stuff off once it's ready in your "packing pile." Be sure to be strategic in what you put in your carry-on if you are checking your main bag. Just pretend your main bag will get lost and put your essentials in the carry-on.
Triple check flight times. I've been wrong before.
Check in online. Even though it doesn't save much time, it makes me feel more comfortable because I think it gives me a little more leeway to get to the airport (the cut-off for boarding is later than the cut-off for checking in). I hope I'm not wrong on this!
I think that more or less covers it. Anything else you do to get ready?
If you ever need to write up a paper in LaTeX and you aren't terribly interested in doing it by hand, I'd like to recommend the open source software LyX. It's not a WYSIWYG editor, but when writing in LaTeX that's probably not what you want, anyway. Rather, it abstracts away just enough of the annoying bits by generating the LaTeX code for you, while still making it quick and easy to do things like write equations and label figures.
(Not sure what LaTeX is or why you'd bother learning to use it? Check out this crash course on the subject to find out. I avoided it for a while because it was just "one more thing to learn," but the investment I finally made was totally worth it.)
If you're a Windows user, I wanted to share a few tips on LyX. Even though I've been using it for a couple of years now, I recently ran into some headaches and had to look all this stuff up, so hopefully I can save you the trouble.
When installing LyX, I like to have LaTeX set up first. I recommend MiKTeX.
Once that's installed, go ahead and install LyX. Get the standard installer here. This version assumes you already have LaTeX installed, and will automatically find where you put MiKTeX. In the past, trying to get everything all at once (i.e. using the bundle install) did not work well at all. Here's more info about LyX for Windows.
Once you start using LyX, you may find that your spell checker doesn't work (the most common checker used is called Aspell). This happened to me recently. Because of some changes to more recent editions of LyX, I couldn't get Aspell to work after installing the newest version of LyX. There are various possible reasons for this, such as needing to install the Aspell dictionaries.
In my case, the registry keys left over from old installs of LyX never got updated, so LyX couldn't find the dictionaries. If you get error messages when you try to start spell checking, try searching your registry for "aspell" and double checking that the paths stored in the keys are actually correct. Your copy of Aspell might have been installed right on C:\, or potentially in Program Files. You can even move the installed files to your LyX folder if you want to keep things together, just as long as you update those paths in the registry.
Next, if you have a document class file (*.cls) for a conference or something, you need to make sure both LaTeX and LyX know about it. This is pretty easy once you know how. This post tells you exactly how to do it.
One of the greatest things about LaTeX is how easy it is to create a bibliography using BibTeX. You technically don't need to install anything extra to use BibTeX, but the source file (*.bib) is not so fun to write by hand (and after all, we are using LyX to avoid this sort of hand-coding in the first place!).
Instead, try a reference management tool like JabRef. It makes it easy to format your references and creates a BibTeX file for you. You can even make adjustments by hand when that's easier.
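Just to give you a sense of what JabRef is saving you from typing, here's roughly what one entry in the *.bib file ends up looking like (the authors, title, and key below are completely made up):

@inproceedings{doe2010example,
  author    = {Jane Doe and John Smith},
  title     = {A Made-Up Paper Title},
  booktitle = {Proceedings of Some Example Conference},
  year      = {2010},
  pages     = {1--10}
}

The key on the first line (doe2010example here) is what you'll pick from later when it comes time to cite.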
Once you have a *.bib file, you just tell LyX to "Insert > List / TOC > BibTeX Bibliography" and point to it. Then when you "Insert > Citation" all the references in your *.bib file will show up there for you to choose from.
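(For the curious: behind the scenes this corresponds more or less to the usual raw LaTeX, something like the snippet below. The file name myrefs.bib and the plain style are just placeholders:

% somewhere in your text -- what Insert > Citation produces
\cite{doe2010example}

% at the end of the document -- the BibTeX bibliography inset
\bibliographystyle{plain}
\bibliography{myrefs}

LyX takes care of generating all of this for you, which is rather the point.)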
If you get errors when trying to compile your LyX file into a PDF (or whatever), check that the codepage for your BibTeX database supports the characters in your file, but isn't UTF-8 (it's not supported by BibTeX even if JabRef allows it). You will also need to double check that the font you are using in LyX actually supports the characters you are trying to use. For instance, I recently had to switch from Times to Times Roman to accommodate some of the accented characters that were appearing in authors' names.
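One trick that has saved me some codepage headaches: write the accents using plain LaTeX escapes right in the *.bib file, so the file itself stays plain ASCII. The name below is invented, but it shows the idea:

author = {Ren{\'e} M{\"u}ller and Jos{\'e} Garc{\'\i}a}

BibTeX passes those escapes straight through to LaTeX, which renders the accented characters for you regardless of the file's encoding.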
That's it for now - I hope you enjoy LyX and don't run into too many problems!
The Online Communities Committee for this year's Grace Hopper Celebration of Women in Computing has been working to bring you a series of how-to's that are intended to help you get the most out of the conference this year (before, during, and after).
With the advent of digital cameras, we can all consider ourselves photographers. But what happens to the hundreds of photos you'll inevitably take at this year's Grace Hopper? Instead of letting them sit unopened on your hard drive, why not share them with fellow attendees and those who couldn't make it? The best place to do this is on Flickr!
While Grace Hopper is a great technical conference, it is also a wonderful place to network and find jobs. When you're preparing for the conference, you should consider creating a LinkedIn profile or updating your existing one. LinkedIn is a great professional networking site, ripe with opportunities to reconnect to past colleagues and find new employment.
Those of you who have been to GHC know what a great opportunity it presents for networking - and those of you who haven't been before are soon to find out! Facebook is a great tool to help you make new connections and maintain them after the conference, so I wanted to share a few tips on using it to help you get the most out of this year's GHC.
Watch the Grace Hopper blog for upcoming posts on Twitter and YouTube.
Conference goers have rated the usefulness of participating in any online community very highly, and I can vouch for the fact that getting involved in any way really does enhance the experience. So don't delay! Go upload last year's photos or introduce yourself in a community today!
I've been leading an effort for our Women in Science and Engineering group, CU-WISE, to create a set of promotional items intended to be used during outreach events. We want giveaways that will be fun and that will leave the recipients with a positive image of studying science and engineering.
One of our Officers, Judy, offered to help design some buttons for our campaign. She came up with these images which are super cute!
As you can see the main theme is that "Smart Girls Rock!"
The next step is to produce a solo banner to put up at events. It should have some of these cartoons, possibly some photos taken at past outreach events, and a slightly longer slogan to go with the one on the buttons. I'd also take this design into a postcard which could have more information on what kinds of careers there are in science and engineering, or perhaps an explanation of the myths about these fields.
For the slogan, I'm thinking something like this, since there's often too much focus on the tools themselves (something males seem to prefer more than females do) rather than what you can do with them:
It's not about the lab coats, keyboards, or [something engineering related] - it's about making a difference in the world! Smart girls rock!
... but I'm not sure what to put for the engineering bit. I figure lab coats cover a lot of science and some engineering, and keyboards cover computing, but how to represent the rest?
If you have any ideas for the slogan or for the kind of material we should include in our postcard handouts, please do share your thoughts in the comments!
Some friends finally got me to download the iPhone version of the addicting game Carcassonne, and I've quite enjoyed it so far. But thanks to my recent desire to study games academically, I couldn't just enjoy it - I had to analyze it, too.
The first observation I made was about how my understanding of the game progressed. The first time I opened the app was when a friend requested that we play an Internet game. I tried to open the rules and read them, but didn't really understand them (this has a lot to do with the telling vs. discovery principle I discussed earlier). So I started placing tiles and Meeple (what they call the set of seven "followers" you place on the board to earn points) pretty much randomly.
After not having a clue what I was doing for a few moves, I switched to a local game against the built-in AI (in an Internet game you can end up waiting hours for the other person to move again). I noticed the AI opponent scored points any time it finished a city with its Meeple on it, and then even got the Meeple back, so I did that for a while, too. Same deal for scoring points for roads.
Eventually I needed to open the rules again to figure out how the fields and clusters worked. Just as one would expect, the explanations within suddenly made sense. I finally knew how all the scoring worked and could move on to coming up with strategies. I got a bit of a sense of good and bad moves with the AI games, but I learned the most playing with my friend in that first human-to-human game (which, by the way, he didn't win by all that much, considering I was clueless at the beginning).
After a day or two of playing, I started thinking about whether I thought this game would be better with real tiles or in its digital form (at that point, I didn't even realize there was a physical version - ha!). I figured that placing real tiles and the little followers would be much nicer than sliding digital tiles into place, but I also wondered how well I'd be able to count up the points. That part might be kind of annoying.
On the other hand, maybe being forced to analyze the field boundaries would have made me understand the rules more quickly. This is an interesting point. When I play board games during our bi-weekly meeting, we always have someone familiar with the game explain it to us and give us advice for the first few moves. Contrast this with my feeling of being in the dark in the digital version. It's pretty clear that I learn how to play physical games very differently from digital ones. Which way is more effective? Is one way better than the other at all?
It would be nice to try the physical version of Carcassonne and then play the digital version of a game I've already played in real life. I'd like to compare how I learned to play in each case and see if there is any advantage to one or the other.
Ok, obviously this potential proof for P != NP is causing quite a stir. I really wasn't planning on blogging about it, but today I had the opportunity to share the buzz with a group of grade 6-8 girls, so I thought I'd write about it from that perspective, since it's likely nobody else has.
I have a more or less preset workshop design that I use for most of my outreach with girls of this age. I start by introducing myself and explaining how university works before moving into what computer science actually is. I usually let them guess first, then give them a "big fancy definition" from Wikipedia. I break that down into a simple statement: Computer science is about figuring stuff out. This is followed by a list of questions like "What can be figured out automatically?" and "How hard is it to figure out?" I end with a selection of CS Unplugged activities.
I normally use the Travelling Salesman Problem as an example of a problem for the question "How hard is it to figure out?" that seems easy to solve, but that can take a really long time even with modern computers if you give it enough data. The idea that computers can't solve everything blows their minds.
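If you're curious just how quickly the brute-force approach blows up (I don't inflict this on the girls, but it's a fun back-of-the-envelope calculation): naively checking every possible round trip through n cities means looking at roughly

\frac{(n-1)!}{2} \text{ possible routes for } n \text{ cities}
\frac{9!}{2} = 181{,}440 \text{ routes for } 10 \text{ cities}
\frac{19!}{2} \approx 6.1 \times 10^{16} \text{ routes for } 20 \text{ cities}

Even checking a million routes per second, the 20-city case would take on the order of a couple of thousand years.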
Usually, I leave it at that, but today I threw in a little aside for them. I mentioned that the Travelling Salesman problem was part of a big group of similar problems that take a long time to solve, and that we aren't sure if there's some way to transform them into easier problems. I explained that if we can show how to do that, all these problems we couldn't really solve before suddenly become much easier. That also seemed to blow their minds.
The coolest part? When I mentioned "P vs. NP" some of them actually said "Oooh yeah!" or looked like they had heard the term before. I told them that someone put out a proof in the last couple of days that seems to show that P does not equal NP, and that the whole computer science community has been alight about this. They seemed to like being clued into what was making us all excited these last few days. ;)