New: Jotting notes on Scott Blomquist's GC Summit 2009 Lecture "An Analytic Framework for Estimating Puzzle Quality"

[I re-watched another 2009 GC Summit lecture. In this one, Scott Blomquist of Team Sharkbait talks about measuring puzzle quality. It's kinda a measure of puzzle simplicity--avoiding putting stuff in the puzzle that suggests red herrings. As with my previous set of notes, I paraphrase the speaker except that I have my own comments in [square brackets]]

Has anyone tried mapping out everything that teams could try on a puzzle and turning that into a help system? That seems hard when I think about it, but maybe no harder than coming up with a help system in general.
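For concreteness, here's a toy sketch of what such a map might look like: each state a team might reach (including wrong turns) points at a canned hint toward the next step. All the state names and hints below are made up for illustration, not from the lecture.

    # Toy "help map" for one puzzle: each state a team might report,
    # including wrong turns, points at a canned hint toward the next step.
    # State names and hints are invented for illustration.
    HELP_MAP = {
        "just_opened":         "Look at the first letters of each clue.",
        "has_first_letters":   "The letters anagram to a band name.",
        "chasing_red_herring": "The colors are a coincidence; ignore them.",
        "has_band_name":       "Index the track lengths into the song titles.",
    }

    def hint_for(team_progress):
        """Return the hint for the most advanced state the team reports."""
        # Check states from most to least advanced so we hint at the next
        # step rather than something the team has already done.
        order = ["has_band_name", "chasing_red_herring",
                 "has_first_letters", "just_opened"]
        for state in order:
            if state in team_progress:
                return HELP_MAP[state]
        return "Tell us what you've tried so far."

    print(hint_for({"has_first_letters"}))  # -> "The letters anagram to a band name."

The hard part, of course, is filling in that dictionary: knowing ahead of time which states teams will actually reach.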


Posted 2009-05-16

 Dan said...

Help systems are hard in general. The only reasonable way I know to make one is to start with everything you can think of that could go wrong, and then playtest the puzzle a lot, at which point you'll discover lots of new things that go wrong. Then you'll iterate on the puzzle in response to the playtest, which will invalidate a bunch of the help system data. And then you'll run the game and hope for the best.

Which is why, I think, most games (Shinteki aside) are converging on live talk-to-GC hinting, despite the subjective unfairness of it all.

Sure, some formalism for noting data streams and possible transformation steps could help you think about how teams could go wrong, but I don't think that's the big problem. I think the big problem is actually figuring out what steps exist in the first place -- i.e. given some presentation, what will teams think to do?

16 May, 2009 12:06
 lahosken said...

Yeah, you're right. If there was just some way to prevent teams from being so darned creative.

17 May, 2009 09:30