Monday, July 25, 2011

It's not what you say...

This is a distillation of a talk Sadie Crabtree gave at TAM 9 in Las Vegas.  The ideas are (hopefully) as she presented them - I'm not qualified to know if they're original or if she (just) did a great job of presenting them.

It's not what you say, it's what they hear.

That's the #1 sound-bite to take away.  There are some details, but the heart of the matter is captured in that message.

Classic communication theory has the entire process broken down into a few steps:

  1. You have an idea.
  2. You translate the idea into words.
  3. You speak/write/sing/or otherwise encode those words into a medium.
  4. The medium transmits the encoded message.
  5. Someone hears/reads/otherwise receives the message.
  6. They parse the phrases and words.
  7. They form an idea of what you're saying based on their interpretation of your words.

If any step is missing, what you thought is not received by the person on the other end.  If the message changes at any step, their perception of your idea changes as well.

You have control over your side of things - what words you choose, what form you choose.

From there, the message goes out.  You have no control over how it is transmitted.  You have no control over how someone hears it.  You have no control over how they parse it, or what ideas they form.

The only control you have over how the receiver interprets your message is in the words you choose.

Pro-Choice.  Pro-Life.

Who could be against either of those?  Those are very carefully chosen words, because they create an uncomfortable mental state: thinking of yourself as against Choice or against Life.  The mental dodge we all use is to say "I'm not Anti-Life, I'm Pro-Choice", or "I'm not Anti-Choice, I'm Pro-Life."  But it's just that - a dodge.  We know "the other side" picked their phrase to make it hard to refute.  We refute it anyway, claiming it's bunk.  But ours... Ours is the real interpretation.  I'm not Anti-Life, I'm Pro-Choice.

Labels.

Next time you're talking to someone, try to be aware of the labels you use.  We all use them - communication is impossible without them.  Just try to think of how someone might have a slightly different definition of a label than you.  What does "Republican" mean to a Republican?  What about to a Democrat?  What about "Taxes"?  Or "Welfare"?

Labels have power, in their ability to condense an idea to a simple, easy-to-repeat sound bite.  But that power can be (and often is) subverted by people having different definitions for those labels.

You may be trying to say, "Let's not burden the poorest people with extra expenses" when you say "Tax the rich", but many will hear, "I think the most successful people should be punished for their success by having to pay the way for the lazier members of society."  "Rich", "Taxes" and "Poor" are labels with different definitions for different groups of people.

---

Next time we find ourselves in a situation where someone is just not understanding us, the right thing to do is to step back, learn what we can about their perspective, and try rephrasing what we're saying so they'll understand it better.


Communication is difficult.  But it's also the most powerful invention of the human species.  Learning that our perspective, our words, and our definitions for those words are not the only ones out there is the first step to effectively utilizing this most powerful of inventions.

Sunday, July 3, 2011

What happened to the Food Network?

I have a confession to make.  It's a dark secret I've been carrying with me for many years.  I love cooking shows.  Not just cooking shows... any show where they show you how to make something.  But cooking shows... there's something much more immediately attainable about them.  I may not have every gadget they use, but it's easy to get by and start making a dish that catches my eye.

And the beauty of those shows - the formula that Julia Child came up with - is in the de-mystifying of the art of cooking.  Showing that while great cooks are artists, you too can create some pretty great food.  You might not ever run a kitchen at a world-class restaurant, but there's no reason you can't try something new in your own kitchen, and no mystery to how the chefs do what they do.  Just years of practice with stuff you have around your own house.

The rise of cable television and the proliferation of 24-hour channels dedicated to a given hobby or interest were a great boon for a while - wannabe cooks got the Food Network, science geeks got the Discovery Channel, alien-abduction fans and paranormal woo-ists got The History Channel.

But something has changed.  I tuned in to the Food Network the other night and up came a show - Top Chef.  Recent exposure had already shown me other shows like Chopped and Hell's Kitchen.  These are reality contest shows.  Not cooking shows.  They don't show you anything of the cooking involved - they quickly toss out who is doing what, but there are so many people, and so much time spent showing off their personalities, that there's no time to talk about the how or why.  It's 20 seconds of "Bob is making a souffle!  Jody is making creme brulee!"  Then on to the tasting and judging, with a lot of shots of the contestants as they're being judged.

The cooking has been almost completely jettisoned from the cooking show, in favour of overhyped drama and jump cuts to fires flaring and pans being banged.  What the food is, how it's made, why the cook is doing what they're doing... all of that is gone.  Most of the time you don't even get to see what the cook is doing.  The "mystery" is back - some people do some stuff, food you couldn't think of yourself is presented, and then judges weigh in.

That's not a cooking show.  That's a game show, based on a skill we can't judge for ourselves.  I see no reason it should even be on the Food Network.  And definitely no reason to watch it myself.

And while we're ranting, what's with these kids, with their music?  Get off my Lawn!

Bah.  Maybe I am getting old after all.

Wednesday, June 15, 2011

It is everyone's business to be downtown during a riot.

I want to start by saying I disapprove entirely of the violence and destruction going on across the bridges from me.  I also think it's entirely likely that it is just a core group of people who are causing the bulk of the damage.  And I understand what the police are saying about the damage being against the law.  All of those are quite obviously correct.

What I disagree with is the sentiment that the people standing around taking pictures and videos have no business being there.

The police very much want to keep things under control - that's fine.  That's their job.  And it's a good thing to do - without the police, society would collapse pretty fast.

But with the advent of portable, cheap cameras and YouTube, I don't think anyone should be at all surprised that people want to video what the police are doing.  Go to YouTube.  Right now.  Put "Police Violence" into the search box and see what comes up.  How many of those videos are from people "standing around, taking pictures, with no business being there"?  How many of those situations would have had one-sided reports from the officers involved, prior to the advent of handheld video and easy, widespread dissemination of that video?  It's hard to say, but it's easy to understand why most people would think things are more one-sided when only the police can be around and only they can file reports.

I think that most of the lookie-loos downtown are doing us all a service.  Their presence, with the recording equipment they most likely have on them, is helping to keep the police in check.  If not during this incident, any video captured of police crossing the line will serve as a warning for future incidents.

I've also seen footage of the police with cameras - one officer confronting a person, another filming it.  And a group of people all around filming it all.  Bravo!  That is the solution to Big Brother.  Give everyone a camera, and who is watching the watchers?  Everyone.  The threat and power of Big Brother comes from the lopsided nature of the equation - it's even in the name - BIG brother.  If everyone is equal, if only through their ability to film the goings-on, then Big Brother isn't so big any more.

Yes, I have sympathy for the police - it's a tough situation and bad things are happening.
Yes, they need to get in there and clean things up.
But No - people do have business downtown.  As long as you, as public servants acting in public, are around, the public has a right to watch.
Yes, they can do it from outside of any areas you designate as hotspots - specific blocks with specific problems.

Tonight hasn't been the best night for this city.  I'd rather have seen an orderly, if disappointed, crowd leaving after the game.  But I'm actually rather pleased that so many people are staying around, watching and recording.  The media is portraying things one way, and they have a lot of sway.  But the deluge of video that is likely going to hit the 'net in the coming hours is going to make an impression.  Some good will come of this, even if it seems pretty rough for a while to come.

Saturday, June 11, 2011

Redefining Computer Science

People have been complaining about the lack of students in science and tech degrees here in North America for some time now.  The fewer students studying these topics, the fewer people going into the workforce in those fields, and the weaker our position in those fields becomes relative to other countries.  I haven't come to a conclusion on whether that's necessarily a bad thing, but I suspect most people will assume it is.  Or that there are plenty of other reasons it's good to have a decent proportion of our students studying these fields.

Regardless, the slipping enrolment numbers for science and tech degrees are being treated as a problem by universities.  Not surprising - they're in the business of selling degree programs.  And I don't really have a problem with this - it's in their interest to decry the numbers going down in what is basically their revenue stream.  And it's probably in our interest to have more students studying those things.  So it's likely a win-win, if it works.

What I find a little disturbing, though, is the possible dilution of the field of Computer Science, as hinted at in this article in the New York Times.

On the surface, the changes the article talks about seem like a good idea - tinker with the curricula to draw potential students in with a focus on the applications they use every day.  Want to know how your iPhone works?  Come study Comp. Sci. and we'll show you!  On the way, you'll learn the fundamentals, and off you go to make your fortune.

The problem, though, is that the assumption seems to be that the theories of computer science, and the focus on those theories, are what's driving students away.  Therefore the way to fix the numbers problem is to ... adjust ... the balance of the curriculum to not hit the theory so hard.


What worries me is that my personal experience on the interviewer-side of the table these last few years has been disheartening.  Too many recent grads with C.S. degrees just don't seem to have a solid understanding of the underlying theories of computation or the mental tools used in solving computational problems. 
Curricula are already too light on the theory!

Talk of tinkering with curricula to make them less overbearing, less heavy on all that Theory... it makes me worry that they're re-branding Computer Science.  It sounds like they're shifting it to be what was called Computer Information Systems at my university.  Basically: application programming, with only enough theory that you could ask something resembling an intelligent question of a Comp. Sci. person when you run up against the edges of your knowledge.


Don't get me wrong - I think there is plenty of room for that kind of a degree.  The software field has more than enough work to go around, and the vast majority of it really doesn't need a deep understanding of formal languages or set theory to be done.

Just don't call it Computer Science!  It's dishonest and confuses the matter.

Wednesday, June 8, 2011

Essentialism and Evolution

I've been thinking a bit about Essentialism lately.  People seem to be wired up as Essentialists.  We love to, maybe even need to, classify things - that's a cat, this is a dog.  That over there is a tree.  Or is it a shrub?  And is that a Beagle or a Basset?  It's a hybrid?  What breeds?

Classifications and labels are very useful things, for sure.  Beagles are good hunting dogs.  Bassets... not so much.  Knowing which is which is pretty helpful when you've got a fox on the loose.

But it's easy to lose sight of the forest for the trees.  Richard Dawkins makes this point in his most recent book when talking about why some people seem to have a hard time grasping Evolutionary Theory.  Thinking of something as Essentially a dog appears to be a detriment when trying to understand how the species evolved and continues to evolve.

Take Beagles.  A "Beagle" is just a member of a population of beings similar enough to interbreed and produce something we'd still lump in with the Beagles.  Any one beagle is a member of that population, and will have characteristics that are within the normal range for that population.  That population is "All Current Beagles", lumped together based on our arbitrary description of the characteristics of the breed.  But within that group there is a lot of variation.  If you took the extremes of height from within the group - the tallest and shortest individual dogs - and stood them next to each other, it's less apparent that they're the same breed.

Now imagine splitting the population in two - half stay here, the other half are sent to Russia.  You breed them for 100 generations, selecting only the shortest.  The Russians do the same, taking only the tallest.

At the end, you bring them back together - the 100th generation tallest of the tall and shortest of the short would probably be considered different breeds by then - Great Beagles and Miniature Beagles, perhaps.

At no point along the way could you identify one individual in the group and say "your parent is a normal Beagle, and you're the first Great Beagle".  There's no clean line across the family tree - there's just a gradual shift from some individuals at the start that are clearly Beagle to an indistinct group in the middle to some individuals that are clearly Great Beagles at the other end(*).
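You can see the same gradual shift in a toy simulation.  This is purely illustrative - a made-up, polygenic "height" model with hypothetical numbers, not real Beagle genetics - but it shows the point: select opposite extremes in two separated populations and the averages drift apart smoothly, with no single generation where a "new breed" suddenly appears.

```python
import random

def average(population):
    return sum(population) / len(population)

def breed(population, pick_tallest, survivors=50, litter=4):
    """One generation: keep one extreme of the population, then breed the
    survivors with a little random variation (height treated as polygenic)."""
    parents = sorted(population, reverse=pick_tallest)[:survivors]
    return [height + random.gauss(0, 0.5)
            for height in parents
            for _ in range(litter)]

random.seed(1)
# One starting population of "Beagles", heights in cm (made-up numbers).
here = russia = [random.gauss(38.0, 2.0) for _ in range(200)]

for generation in range(1, 101):
    here = breed(here, pick_tallest=False)     # we keep only the shortest
    russia = breed(russia, pick_tallest=True)  # they keep only the tallest
    if generation % 25 == 0:
        print(generation, round(average(here), 1), round(average(russia), 1))
```

Every generation looks like its parents; only the endpoints look like different breeds.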

Once we start thinking about things as individuals in a population, and the population as only roughly defined by a set of somewhat arbitrary characteristics, our view of things changes.  No longer are there individual species... there's a massive population of living beings, some of which can interbreed and produce remarkably similar offspring.  Some very few of which can interbreed in fascinating hybrid ways.  And most of which can't interbreed.  Over many generations, the characteristics of the various populations will shift and morph to fit their environment, driven by the environment's relentless selection of only those individuals that are least-worst suited to it (IE: that can produce the most offspring).

Looking at it that way kinda makes the whole Evolution thing seem a little more clear.  At least, I find it so.


(*) - I picked height because it's generally a feature driven by many genes.  The case of dwarfism, though, would be an exception.  As I understand it, there is a single mutation that causes dwarfism in humans, and interestingly, I understand it's a very similar mutation that causes it in dogs.  Miniature breeds with short, stubby legs have that mutation, and it's quite conceivable that you could draw a line in the tree where that first shows up.  Aside from that, though, height is a good example for the point I'm trying to make.

Wednesday, June 1, 2011

Robo-Ethics

I went to a thing called Cafe Scientifique last night.  I happened upon one a month or so ago when it overlapped with the Skeptics In The Pub event that I was attending (only the second of those that I'd been to, as well!).  It struck me as rather a nifty idea - a person of a scientific persuasion shows up and talks about their specialty.  "Talks" being the active verb - it's something of a conversation with the group, not so much a lecture.

This week was a rather lively discussion on the ethics of Robotics, led by AJung Moon from UBC.  The conversation wandered a fair bit, and I'm afraid she didn't get a chance to talk about everything she'd planned to, but it was still fascinating and I'm glad she came out to talk with all of us.

There was a lot of discussion on defining what a Robot actually is.  Which I suppose shouldn't have been too surprising - a scientifically minded audience is going to want you to be precise with your definitions.  Perhaps a couple of people got overly into that sub-topic.

AJung did provide a definition - an entity that is capable of interacting with the world, through sensors and some sort of mechanism (IE: a camera and robot arm), and has some level of autonomous decision making capabilities.  It's a broad definition, but the topic of ethics should cover a wide range of situations (if not all!), so it seemed suitable to me.

The very breadth of the topic is a little daunting.  After a little bit of introduction she jumped in with a number of more-or-less disturbing examples of robots that are out there now, some on the market, some in more of a research capacity.

For me, the most interesting, and most immediately pressing from an ethical point of view, is the robot teacher from Korea.  It's not fully autonomous - it has a human operator.  But the operator isn't controlling every minute motion of the robot.

The ethical questions she brought up around this particular example were on two lines:

1) The changing of the teacher-student relationship - normally the teacher is dominant and the student submissive.  The person-robot relationship works the same way (person dominant, robot submissive), so when you make the teacher a robot, suddenly you have a dominant role filled by a submissive entity...

2) Uncertainty around such young children being exposed to and interacting with a telepresent person - will they be able to tell the difference between a telepresent person, a robot, and a person who is physically there?

Both are interesting questions.  My gut feeling is that the second is less of a problem than some people might think - kids are pretty good at sorting out complex rules of interaction with different people.  I suspect they'll handle this new telepresent person-entity just fine.

But there's definitely room for research into both - how do kids bond and interact with such artificial entities?  How do people behave when you have contradictory role and entity relationship modes?  Science and research can provide a lot of data to inform ethical decisions like that, and it's good to have people like AJung bringing them up for consideration.

I think I'll be going to more of these events.

Saturday, May 28, 2011

Another Look at the Cost of Working in Games

After talking with a couple of friends I decided to take another look at the overtime data I posted on the 26th.  Specifically, I decided it would be interesting to look at a weekly breakdown.  People are more used to talking about hours-per-week and it makes the data a lot more approachable.  I'll still do it in minutes, to keep the integer accuracy.

I went through and sorted each day's OT number into one of 5 categories:

  1. Slacking off (working a short day)
  2. Comp Time (specifically given time off in lieu of a holiday worked)
  3. Regular OT (extra time spent working on a regular work day)
  4. Holidays Worked
  5. Weekends Worked

I then summed the daily totals into weekly totals in each category and generated the following chart.
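For anyone who wants to reproduce that kind of breakdown, the bucketing and weekly summing are easy to script.  Here's a rough sketch with pandas - the column names and sample rows are hypothetical, not from my actual spreadsheet - before we get to the charts themselves.

```python
import pandas as pd

# Hypothetical daily log: one row per day, minutes of OT (negative = slack/comp),
# and one of the five categories from the list above.
log = pd.DataFrame({
    "date":     pd.to_datetime(["2011-05-02", "2011-05-03", "2011-05-07", "2011-05-09"]),
    "minutes":  [-60, 150, 240, -450],
    "category": ["Slacking", "Regular OT", "Weekend Worked", "Comp Time"],
})

# Sum the daily minutes into weekly totals, per category...
weekly = (log.set_index("date")
             .groupby([pd.Grouper(freq="W"), "category"])["minutes"]
             .sum()
             .unstack(fill_value=0))

# ...and add the net weekly balance across all categories.
weekly["Net"] = weekly.sum(axis=1)
print(weekly)
```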


And for completeness, the net sum as well:



While doing that, I ran into a few days early on where I wasn't counting holidays worked/comped in the same way I was later.  I corrected those for this set of charts.  I may go back and adjust my previous data if I get the time.

Anyway, on with the cool stuff!

A couple of trends are pretty apparent.  After weeks of OT, and especially when I worked a weekend or holiday (or both), there is a bit of relaxation.  Mostly that came in the form of comp time, but I did slack off a bit after a run of weeks of OT.

You can also see the amount of slacking build up just prior to my post-project comp time and vacation.  That won't surprise anyone working in the industry - things slow down as the project is wrapping up.  There's really not a lot you can do at the office if you don't have bugs to fix.  So you come in a little later and maybe leave a little earlier.  As long as you stick around long enough to offer moral support and make sure those that do have work won't need help from you, you're free to go.

Finally, the overall trend of the project is a little more apparent.  Things start out quiet, with maybe a spike early on for a deadline.  Then a slowly building pattern of pulses - a push for a deadline followed by a little down time, then another push.  Toward the end it looks like I added more weekend time, but cut back a little on the regular-day OT.

Some stats

Weeks tracked: 132
Weeks with a 0 balance: 18
Weeks of holiday or vacation: 16
Working weeks with a 0 balance: 2

(Apparently I'm pretty bad at hitting a 0 balance.  Heh.)

Across all weeks, all numbers in minutes.
S & C = Slack & Comp time

           S & C       OT    Total
Count        132      132      132
Avg       -159.8    246.5     86.7
Median     -60.0    108.5     25.0
Min      -2250.0      0.0  -2250.0
Max          0.0   1941.0   1941.0
Stdev      334.6    344.1    517.9


Across weeks with non-zero values, again in minutes:

           S & C       OT    Total
Count         98      104      114
Avg       -215.3    312.9    100.4
Median     -94.0    169.5     53.0
Min      -2250.0      2.0  -2250.0
Max         -5.0   1941.0   1941.0
Stdev      373.1    360.1    556.4


The weeks where I had a lot of comp time are skewing things quite a bit, making it harder to see how much slacking I did.  Subtracting those weeks I get:


           S & C
Count         95
Avg       -148.8
Median     -93.0
Min       -784.0
Max         -5.0
Stdev      151.7

That still has the occasional comp day mixed in, but takes out the worst of the effects, so should be easier to compare against the OT weeks.  Since the OT still contains the holidays worked and those are the reason for the comp days, it seems only fair to leave them in.


Conclusions

Again, this is by no means a complete picture.  These numbers don't capture things like the effects on my health and well being, relationships with friends, etc.  But it's a start.  Now to see if I can figure out a good way to start tracking something quantifiable for those other aspects of my life and start including them on my new spreadsheet...

Friday, May 27, 2011

A Funny Thing Happened on my way to the Blog

Ars Technica posted an article today by Andrew Groen about death marches and the culture of overtime in games. It's not a new story. In fact, it covers a lot of the rumours and stories I rather blithely brushed over when I said "I've seen examples of most every rumour you hear out there, and heard 1st hand from people who've lived the rest."

As with previous versions of this story, Mr. Groen paints a pretty grim picture. Flagrant disregard for years of research on overtime and worker productivity. Exploitation of employees, forcing them to work long hours to make horrible games they don't care about. Early burnout and retirement by industry stars.

But I don't think that's a complete or accurate picture. Mainly because it lacks solid numbers to back it up. Every single assertion Mr. Groen makes is, at its heart, an anecdote. I can provide counter anecdotes for each and every point he makes from my own personal experience. I can also find anecdotes that back up most of what he says and know people who can back up the rest.

The problem is, there's no comprehensive survey to tie it all together, to put those anecdotes in their proper context. He's not talking about a real study or doing quantitative analysis himself. He's just retelling an old story about overtime and exploitation, employees buying into the machismo and visions of creating cool games.

That doesn't mean he's necessarily wrong, though!

It's just that he's not necessarily right, either.

We just don't know, because there aren't enough people gathering real numbers to say for sure.

Do I think the industry has problems? Definitely.

Does the industry need to address the OT issue? Mmm... that's more complicated.

Yes, chronic OT and death marches are bad. But some OT, carefully focused and used wisely, can result in pretty amazing things. Both in the end product, and in the team chemistry. Ask anyone who's gone through it - the guy that leaves before the crunch is rarely missed as much as the one that leaves after.

Will these problems be fixed any time soon? Doubtful.

Until the industry grows up a bit... probably even unionizes, I don't think we'll be able to really address these problems. And I don't think unions will happen as long as most of us are salaried. It allows them to ... encourage ... us to work OT, but it also means they're competing with other salaried positions for similar skills. And for us programmers, that means not a bad living. It's hard to organize a union when most of the potential members have a pretty high standard of living and even sympathize with management and buy into a bit of the mythos of OT.

So yeah. There are issues in the industry. But writing won't-someone-please-think-of-the-games-worker articles won't solve them. Those of us in the industry now are the only ones who can start to solve them. And the only way we can do that is by gathering data to understand what's really going on. Only then, only once we know where we are, can we figure out the road to get us to where we want to be.

Thursday, May 26, 2011

The Cost Of Working In Games

I've worked as a programmer in the games industry for some years now. The details of where and when aren't important right now so I won't go into them. What I'd like to talk about is the general perception of what it's like to work in the industry vs. the reality.

I run into a couple of common views from people. The most common is "That must be awesome!"

Mostly this comes from people who hear "Video Games" and think we must sit around the office and play games all the time.

Which, well, we do. But not *all* the time, and we do try to keep it off the clock.

And there are certainly some other pretty cool aspects to the job.

But there are downsides as well, and all the pluses don't make the minuses any less bad.

The other view I get a lot is "Wow, I hear it takes over your life."

That one is pretty widespread among people in the industry and the people close to them. The EA Spouse article is the most prominent example of this.

What I find most interesting about this view is that I've never seen any hard numbers to back those claims up. Anecdotes, regardless of how many you have, don't add up to evidence.

Don't get me wrong - I think there is definitely something going on. I know I've felt like the job was taking over my life. People in the industry do work a lot of hours and sacrifice a lot of holidays and weekends.

But we also get comp time. And there's a lot more flexibility for start times, especially if you just worked until 2am.

I've seen examples of most every rumour you hear out there, and heard 1st hand from people who've lived the rest.

But how much of that is confirmation bias and/or selection bias?

How does it all play out, when you add up the comp days, the late nights, and sometimes equally late mornings? Certainly, every person's experience is different, but you just don't see real numbers.

I've always been salaried, which, as far as I know, makes it illegal for them to track my hours in detail. But that doesn't mean I can't track them.

So I did, for 2.5 years.

The Good Bits

I started a spreadsheet late in pre-production on a particular project. I tracked my time all the way through to finalling, took some time off, and when I got back, continued tracking into pre-production on the next project. I was laid off after ~30 months, late in pre-pro on the next title, so this covers basically a full project cycle.

I decided the best way to post the data is in chart form. The raw data has a lot of both personal info (it turned into a very handy calendar) as well as company-private info (deadlines, etc). The latter I clearly can't release (it's just wrong, never mind the lawsuit I'd probably face). The former could be interesting if I was better about being complete with it - joining my personal schedule to the time worked would be fascinating, but probably socially awkward if released.



(Click the images for a larger version.)

This chart is my daily minutes of overtime across the entire project. I counted any time worked past the 7.5 hours mandated by law here in Canada as the maximum working day's length. I also counted any time spent working on a weekend or a holiday. I opted for minutes to avoid decimal hours, days, etc.
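For the curious, the day-to-day bookkeeping is simple enough to capture in a few lines. This is just a sketch of the counting rule above (7.5-hour threshold on regular days, weekends and holidays counted in full, comp days as a full negative day); the function and field names are mine, not from my actual spreadsheet.

```python
from datetime import date

REGULAR_DAY = int(7.5 * 60)  # 450 minutes: the regular working day

def daily_ot_minutes(minutes_worked, day, worked_holiday=False, comp_day=False):
    """Overtime balance for one day, in minutes (negative = slacking/comp time)."""
    if comp_day:
        return -REGULAR_DAY                     # a full day off in lieu
    if day.weekday() >= 5 or worked_holiday:    # Saturday, Sunday, or a worked holiday
        return minutes_worked                   # every minute counts as OT
    return minutes_worked - REGULAR_DAY         # regular day: anything past 7.5 hours

# Example: a 10-hour Tuesday and a 4-hour Saturday.
print(daily_ot_minutes(600, date(2011, 5, 3)))   # 150
print(daily_ot_minutes(240, date(2011, 5, 7)))   # 240
```

The cumulative chart further down is then just a running sum of those daily numbers.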

The negative spikes are times I left early, came in late, took a long lunch, or was given a comp day in lieu of a holiday. The -450 minute days are the comp days. The cluster toward the end, followed by the long stretch of 0's is my post-project comp time followed by vacation.

This chart alone is interesting, but it's a little noisy - there's a lot of jitter, and it's hard to see the overall trend - sure, towards the middle there seems to be a bit more of working late and less of leaving early, but it's hard to see the accumulation.


That's the cumulative story. There are a few flat bits after spikes - those tend to be when we got comp time. The down-slope of the tallest peak is my post-project comp time. The plateau is my vacation - I took a few months off after this project, so I wasn't logging any hours either way. The cliff is me returning from vacation and deciding it was best to just reset the counter. It was a new project, a less-stressful environment during pre-pro. I felt a mountain of "uncompensated OT" looming over me would just lead me to slack off to try and balance it out. Which wouldn't have served the project or myself well at all. So I reset it to 0 and started counting again.


Some stats, all in minutes:

Total OT worked: 31644
Total Comp Time: 9675
Total Slacking: 10521
Balance: +11448 OT

Peak running OT balance: 17750
Balance when reset: 10585

Single longest day: 813 (OT: 363)
Longest whole-day OT: 600


Total Days tracked: 921

Total Comp days: 21.5
Sick Days taken: 5 *
Vacation Days taken: 70 **

OT Days ***: 41
5 were holidays, moved and compensated for.
36 were weekends, mostly **** uncompensated.

Streaks (10+ days in a row):

19 days (10 office, 2 home, 7 office)
12 days (5 office, 2 home, 5 office)
12 days (5 office, 2 home, 5 office)
18 days (16 office, 1 home, 1 office)
11 days (5 office, 2 home, 1 sick, 5 office)
13 days (8 office, 1 sick, 4 office)
19 days (6 office, 1 home, 12 office)
14 days (all office - followed by 1 day off, then another 5 days in the office)

(home == working from home)


* - I'm not 100% sure on this figure, but I'm generally healthy and 5 days over 2.5 years is normal for me.

** - 70 days of vacation?! Yeah. One of the (apparently fading?) perks of working in games is a "sabbatical" after some years of service. Basically it's a bunch of extra time off. Mine came up and I took it.

*** - Weekend/Holiday working days - to be fair, it was rare for me to work a full day on these days, though my longest OT day of 600 minutes certainly goes against that.

**** - mostly uncompensated weekend days - we were given a few extra days here and there, but it's hard to attribute them to specific weekend days worked.


So, there you go. Some numbers for one person on one project. I don't want anyone to think this is typical in any way - it's a data set of 1, so take it as just an example.

Conclusions

1) It's pretty clear that, for this one project, I worked a fair amount of OT.

However, I'm by no means bitter about this. Anyone that knows me should know that already! And this is by no means an accusation against any former employer.

Making a game is a creative process, and like all creative processes, it requires some experimentation. "Finding the fun" is a very touchy-feely process, and no amount of planning will ever be able to predict how long it will take.

That means adding features late, cutting a feature after it's built (wasting time), and a fair amount of backtracking - many a time have I torn out code when the design changed, only to have the design change back a week later and put the code right back in.

Overtime is just a reality for any creative process that has marketing involved - IE: any creative product you want to sell. That's just the way it is.

I happen to like the environment. But I also think it's interesting to get some detailed numbers and see how big of an effect it's actually having on me.

2) Interestingly, I found that the act of tracking my hours also helped me hit my estimates for task length. Estimating how long something will take to write is one of the unteachable skills of being a professional software engineer. The only way to learn it is by writing a lot of code and making a lot of bad estimates. Over time, you get a little better at it.

However, I found that shortly after I started tracking my time, my estimates got a lot more accurate. I don't feel like I changed my estimating habits any - rather, it was the heightened scrutiny of time worked that I think made the difference.

Trying to keep track of hours worked in my head is messy, and at 4pm on a Friday, it's hard to start a big task. Even if I felt I should make up for that long lunch the other day.

Having a big -120 in front of me on the spreadsheet made it clear, though. I should stick around for a couple extra hours. So instead of leaving at 6pm, I'll stay 'till 8 and balance things out. That works out to a 4 hour block of time to code, mostly uninterrupted, if I start something at 4pm on a Friday. Not something I'd have thought of before.

3) So on the whole, I'm quite pleased with this experiment. I've learned a lot about the actual impact of working in games on my life. Sure, this data doesn't show the whole picture, but it's a start. I'd love to see numbers from other people in the industry.

Sunday, May 22, 2011

Debugging Software and The Scientific Method

I find it odd that debugging software, sorting out the errors in a program's source code, is simultaneously the most frustrating and exhilarating aspect of writing software. I've actually banged my head against a wall when trying to track down a bug. Conversely, the feeling of satisfaction when I finally pin a difficult bug down and dissect it is indescribable. Being able to point at a line of code and say "that's the culprit, and this is how we fix it" is a joy like no other. Almost as good as the first time you watch a program you designed and wrote run correctly.

I've described the process to various non-programmers in the past and the analogy I've always used is that of a detective - sleuthing out the underlying problems and pointing a finger at a particular area of code to say "book 'em Danno." And it's not a bad analogy, really. But I think both processes are actually better described as using the Scientific Method.

When trying to sort out what's misbehaving in a piece of software we have to:

Observe what the software is doing - gather as much data as possible on how it should behave given various inputs, and then observe how it actually behaves when given those inputs. How does the output differ from the expected output?

Hypothesize - in order for the program to behave the way it is, the code must be doing X instead of Y. At this point, for simpler bugs, we can often just look at the source code to see. For more complex or subtle bugs, though, we often need to go further.

Predict - If that's the case and it is doing X, then we should see a particular behaviour A, given some input not in our initial observation.

Test - try that input!

Collect data - Does it do A, as predicted? Or did it do something else? If it did something else, back to step 1! Going through here too frequently can lead to that frustration I mentioned...

However, if it did follow our prediction, we need to make a judgment call - is that enough evidence? What other possible hypotheses are out there? How can we test those, to see if they're the case or not? Building a hypothesis and a set of tests that can falsify (or not) that hypothesis, while simultaneously falsifying other likely hypotheses, is the art of debugging. And the core of what Science is all about - subjecting a hypothesis to rigorous testing and possible falsification, building up converging lines of evidence to raise it to a proper Theory.
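To make that concrete with a made-up example (not any particular bug I've chased): say a function that averages a list of scores keeps coming back a little low. Observe the wrong output, hypothesize an off-by-one in the divisor, predict what a crafted input should produce if that's true, then test it.

```python
def average(scores):
    """A deliberately buggy example function."""
    total = 0
    for score in scores:
        total += score
    return total / (len(scores) + 1)   # the suspected off-by-one in the divisor

# 1. Observe:     average([2, 4, 6]) returns 3.0; the expected output is 4.0.
# 2. Hypothesize: the code divides by len(scores) + 1 instead of len(scores).
# 3. Predict:     if so, a single-element list should come back halved.
# 4. Test:
assert average([10]) == 5.0   # prediction holds; the hypothesis survives this test

# 5. Judgment call: is that enough evidence, or could another hypothesis (say, an
#    error accumulating in the loop) explain both observations just as well?
```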

Well-run software companies carry the process even further by adding in peer review - all changes should be put before at least one of your peers to see if they can find faults in your solution or new errors you might introduce with your change. It's not (usually) as widespread as publishing in a peer-reviewed scientific journal, but it can be just as rigorous. A good reviewer will ask penetrating questions about your understanding of the situation and propose alternative hypotheses as to what might be going on, to see how well your solution works.

Certainly, the analogy only goes so far - we're building software, which is an Engineering task. We're not adding to the sum of human understanding about the functioning of the universe. But the methodology is fundamentally the same and good engineers need to have a solid grasp on it.

Wednesday, May 18, 2011

Crushing your head... crush crush

"It suddenly struck me that that tiny pea, pretty and blue, was the Earth. I put up my thumb and shut one eye, and my thumb blotted out the planet Earth. I didn't feel like a giant. I felt very, very small."

— Neil Armstrong

After a long day at the office, and a busy few weeks, I went out for my monthly Skeptics in the Pub. A fine evening of odd, interesting and fun banter, with people of the same characteristics. I was introduced to a game called "Marry, Shag, Kill", in which one participant proposes 3 people to whom another participant has to assign one of each of those verbs. A fascinating game with many and fine permutations. The more one has imbibed, the finer the game seems, it seems.

Perhaps the most memorable event of the evening, though, occurred on the ride home. A few minutes of relative solitude on the bus provided me with the chance to think. Disdaining such activity for the waste of time it is, I chose instead to distract myself with that odd, moon-like object hanging in the sky.

Which, it turns out, was the moon.

Now, this is an odd thing here in Vancouver, seeing the moon. Rare is the night when one can see past the perpetually looming cloud deck, drizzling its grey gloom.

Remembering Armstrong and the basics of geometry, and glancing nervously around at my fellow late-night commuters, I tentatively raised my thumb to see how easy it was to abolish a world. And easy it was. The moon disappeared behind my thumb, along with an astronomical number of galaxies, quasars, super-clusters and super-massive black holes that happened to be roughly in the light cone extending from my retina and through the most opposable of my appendages.

And oddly, it did make me feel a giant.

For a moment.

Until I realized that the sun is roughly the same size, from my perspective.

And that blew my mind.

The sun and the full moon are (almost) the same size.

How else could we have a total solar eclipse?

Looking at the moon, it was hard to picture the sun as that big in the sky. It's a blinding, but ultimately small point of light, right?

Step outside the next time you can. Look at a full moon. And then take a look up during the day. Those two things are the same size in the sky.
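If you want to put rough numbers to it - a back-of-the-envelope sketch using commonly quoted average figures - the Moon is about 3,474 km across at roughly 384,400 km away, and the Sun about 1,391,000 km across at roughly 149,600,000 km away.  Both come out to about half a degree of sky:

```python
import math

def angular_size_degrees(diameter_km, distance_km):
    """Apparent angular diameter, small-angle approximation."""
    return math.degrees(diameter_km / distance_km)

print(round(angular_size_degrees(3474, 384400), 2))        # Moon: ~0.52 degrees
print(round(angular_size_degrees(1391000, 149600000), 2))  # Sun:  ~0.53 degrees
```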

And most interesting of all, this wasn't always so! And won't always be so! Though other, more interesting things will probably occur before we need to worry about losing our total eclipses.

Still, see how easy it is to cover either with your thumb.

And think about what a fascinating, insignificant, pale, blue dot we live on.

G'night.

Sunday, May 15, 2011

Particles and Waves

Light behaves as both a particle and a wave. This is a core part of quantum physics and is interesting for a number of reasons. Two things in particular are on my mind of late.

First, I spent a long time trying to wrap my head around how light can be both at the same time. If it behaves as both, it must somehow be both. I've seen a number of people hit that wall and just give up on ever understanding physics or anything quantum.

But that doesn't have to be. Light is neither a particle, nor a wave, nor a duality. It is something else. Light is its own, unique thing. It happens to sometimes behave like our model of a particle. And at other times it behaves like our model of a wave. Thinking it's somehow both is getting hung up on the labels of our models. The fact that light acts kind of like one model and kind of like another just means that neither model by itself is correct.

And that means that neither label is correct, either. Light is light. The fact that it doesn't fit under the intuitive sounding labels Particle or Wave is irrelevant. Even if it behaves like them sometimes. Especially if it behaves like both at some time or another. Light is light, and we need to think of it as its own thing.

I can't recall who said that first... perhaps Hawking? Regardless, when I read that the apparent paradox vanished and my thinking became a lot clearer. Hopefully more people can come to understand that and move past this particular hurdle to their understanding of physics.

--

Second, I'm fascinated by the double slit experiment that so aptly shows that the nature of light is neither a particle nor a wave. For those not familiar with the experiment, google it up. It's amazing.

I recently read Programming The Universe, in which the author discusses how the photon could interact with itself as it passes through both slits to form the interference pattern we see even with single photons shot through the apparatus. What I'm curious about, and haven't been able to find in an admittedly cursory glance around the web, is how far apart the two slits can be to still get the effect.

I'm thinking of an electron's cloud/orbit around the nucleus of an atom - it's not really an orbit, the cloud is more of a probability volume, where the electron is very likely to be at a given time. If a photon has a similar probability volume, it seems likely that the slits couldn't be any further apart than the diameter of the probability volume. After a certain distance apart, the two slits essentially become two separate, individual slits.

The question is - does a photon behave like an electron, IE: does it have a probability distribution for its location at any given time? And if so, is that related to the wavelength or amplitude of the wave, or is it a separate variable? And if so, what is it called?
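I don't know the answer, but for a sense of scale, the textbook small-angle relation for the double-slit pattern ties the fringe spacing on the screen to the slit separation: spacing = wavelength x distance-to-screen / slit separation. It doesn't settle the probability-volume question, but it does show how quickly the pattern tightens as the slits move apart. A quick calculation with typical classroom values:

```python
def fringe_spacing_m(wavelength_m, screen_distance_m, slit_separation_m):
    """Small-angle double-slit fringe spacing: wavelength * L / d."""
    return wavelength_m * screen_distance_m / slit_separation_m

# Green light (500 nm), screen 1 m away, slits 0.1 mm apart -> fringes 5 mm apart.
print(fringe_spacing_m(500e-9, 1.0, 0.1e-3))
# Move the slits to 1 mm apart and the fringes shrink to 0.5 mm.
print(fringe_spacing_m(500e-9, 1.0, 1.0e-3))
```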