MITMH 2020: 2019 Feb

Careful: I'm pretty blasé about spoilers for this year-old hunt. If you haven't already solved the puzzles you wanted to solve, maybe don't keep reading.

It was February. It was time to get organized.

As usual, we had two mailing lists: one for folks participating and one for folks saying "maybe next year." Not as usual, we had two Slacks: our usual Slack, plus a new one just for hunt-writing.

Every so often, fearless leader Corey sent out a "State of the Hunt" email. Here are some excerpts from two such February emails to let you know what the writing team was doing.

Natalie, Linda, and I got a briefing from Setec’s core 2019 Hunt planning team. Lots of good advice but not a lot of surprises. They gave us some leads for getting student involvement and some advice for what’s important to MIT wrt Hunt organizers.

I sent two surveys for Hunt 2019 impressions and for contentious and priority topics for discussion. If you’ve not already, please respond! The 2019 impressions will be used in the coming weeks to shape…

We’ve compiled the team’s opinion about important characteristics for our theme and structure. These are coming into play as we discuss theme this week.

Theme and structure selection has begun! The first batch of Round 1 proposals are being discussed now, and the second batch will be released tomorrow night.

We’re hiring! I’m collecting interest in many open roles within the team; e-mail me if you’d like to sign up. If interested in editor-in-chief or director, please contact me ASAP.

We have some important decisions about Hunt to make (eg, what are we going to do about answer verification!) but these haven’t yet started. I’ve asked for these to wait ‘til after Round One [story/theme] voting (so as not to overload discussion across the team) but we might start this soon(er).

There was a flurry of research happening: Looking over notes from past hunt-runners; looking at contents of past hunts; internal surveys to find out what Left Out folks wanted to [put into/get out of] this hunt.

Also: Bootstrapping the people-organization: We'd previously listed a set of probably-useful "lead" roles: editor-in-chief, producer (a.k.a. "story nerd"), treasurer… Now we needed folks to step up for these roles.

Also: The start of design: The call had gone out for "one-pagers" with story/theme ideas.


That survey of "…contentious and priority topics for discussion" and those "important decisions about Hunt to make (e.g., what are we going to do about answer verification!)" were an important part of internal research: finding out from Left Out folks what kind of hunt we wanted to run. How did this work?

This worked out pretty well in a lot of ways; not-so-well in a few others. Overall, I think it was a good approach.

What didn't work well?
It took a while; this probably blocked off some options. E.g., if you wonder: Why did we have a relatively plain-vanilla overall hunt structure? Settling on a structure ran into plenty of "contentious" topics along the lines of: how big should Hunt be / how well should most solvers understand the overall structure? Maybe we could have come up with something more complex that fit our goals… but it took us a while to figure out what our goals were.
It put a lot of pressure on the triumvirate. Someone had to figure out the choices for our internal multiple-choice survey; i.e., someone had to fairly and respectfully phrase a few sides of some genuinely contentious issues. That ain't easy. It also fell to the triumvirate to decide the most-contentious issues: those issues for which there was no clear winner. This wasn't fun.

What worked well? A lot.
The survey made it painless to ping potentially-contentious topics that weren't contentious after all. If there was a topic that you thought might be contentious, you could bring it up. If the survey revealed that the topic wasn't so contentious, you could stop worrying. (If you'd jumped straight to discussion, someone could have jumped in and poured their heart into an argument even though there was no real hope of swaying the team.)
"Time bomb" issues got defused. We wanted to stay friends; so why start the year with a pile of arguments? When we wrangled in online-chat over these issues, we did so under good conditions. Since we wrangled over issues survey-shown to be contentious, you went into a discussion knowing that a sizeable fraction of the team disagreed with you—and you couldn't convince yourself that many folks were being stupid. Because we'd pre-announced times for these discussions, cool-headed folks made a point of hanging around even for discussions they didn't have a personal stake in—just to keep things friendly.
There were some interesting conversations. Some folks have thought pretty hard about Hunt over the years. They had interesting things to say.


Also, folks noodled around with story ideas. I noodled around with story ideas.

We chose a story before we chose a structure to go with that story. I dunno if that's the approach we'll use if we do this again, but I think it worked out pretty well for a first-time-running-the-hunt team.

We started by encouraging everyone to write up "one-pager" versions of their story ideas. We'd vote on these; a handful of top vote-getters would get expanded into more-thought-out proposals. And we'd vote to choose one of those.

We also encouraged everyone to read each other's ideas. Thus: we encouraged everyone to get engaged and excited about our Hunt early on.

As I said, folks wrote up story ideas. I wrote up some story ideas. I'm not going to tell you about my ideas. (We didn't use them; maybe I'll want to use them for something else someday.) There was an outpouring of creativity.

Some folks are shy about putting their names on such rough-draft creations. Thus we allowed anonymous submissions. Our low-tech way of achieving anonymity: If anyone sent a draft to me, I'd copy-paste its contents into a Google Doc. Some folks did that. (I wish more folks had done that. Or, rather, I wish certain other folks had done that. Not-anonymous folks could create their own Google Docs for other folks to read and vote on…but Google Docs permissions are tricky, and so there were ideas which nobody had permission to read for the first few minutes and…anyhow, eventually it all worked out.)

One of the theme ideas: a love-themed hunt. As part of the hunt, two new team members, Mark Gottlieb and Gaby Weidling, would get married. In the end, we didn't choose that theme idea, but were pretty excited to incorporate a wedding into whatever theme we did use.

Another theme idea: an amusement park. I voted against this idea. I don't enjoy amusement parks! Still, it did pretty well in the initial vote, so along with some other ideas, folks started working it up into a multi-page proposal.


Some puzzle ideas didn't need to wait for story and meta-structure and such. I got an early start on designing some puzzles.

I looked over my MIT campus travel pictures to figure out which sites I wanted to puzzle-ify.

E.g., I put together a puzzle I called "Six". This puzzle started out with a gimmick crossword: all of the entries had embedded direction words. Like "I was, I was, I was, I was out here playing games" or "Some folks and Dan Egnor thought of a better use for this gimmick." This would clue people to go to the Eastman Lobby in building 6.

If you went through the puzzles I wrote for this hunt, that last paragraph might seem strange: in the end, I did write a puzzle which used the Eastman lobby, but there wasn't a direction-word-gimmick crossword.

Of the places I'd snapped pictures of, there were four I felt I could turn into puzzles. I made proof-of-concept drafts for four puzzles. Editor-in-chief Yar pointed out that each of these was kind of skimpy on its own. Maybe I could combine these four small-puzzles into two medium-sized puzzles?

That was a good idea. I did that.

I was nervous about having committed to these puzzles that depended on campus not changing. In past hunts, I'd written puzzles based on city locations that were later broken by construction.

Thus, around this time I also wrote an emergency backup puzzle. This puzzle idea was adaptable: it was pretty easy to change the puzzle-answer as needed. So if the Eastman Lobby was under construction, I could swap in this puzzle for that one. Or if the Stata Center time capsule was opened up early, I could swap in this puzzle for that one instead. I wrote a little computer program so I could quickly re-work the puzzle for a new answer.
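How might such a program work? I'm not going to spoil Hindsight H here, so here's a minimal sketch of the general idea under a made-up mechanism: suppose each clue word contributes one letter at a stated position, and those letters spell the puzzle's answer. The word pool, the pick_words function, and the "HINDSIGHT" answer below are all hypothetical illustrations, not the actual Hindsight H code.

    # Minimal sketch of an "answer-swappable" puzzle generator (hypothetical;
    # not the real Hindsight H). Assumed mechanism, for illustration only:
    # each clue word contributes one letter at a stated position, and those
    # letters spell the puzzle's answer. Re-running with a new answer
    # re-picks the words.

    WORDS = [  # small made-up word pool; a real puzzle would use a themed list
        "ANAGRAM", "BRAILLE", "CIPHER", "DIAGRAM", "EASTMAN", "FOGHORN",
        "GIMMICK", "HUNTER", "INDEX", "JIGSAW", "KEYWORD", "LOBBY",
        "MYSTERY", "NONOGRAM", "ORACLE", "PUZZLE", "QUIRK", "REBUS",
        "STATA", "TRIVIA", "UNLOCK", "VERIFY", "WORDPLAY", "ZIGZAG",
    ]

    def pick_words(answer):
        """For each letter of the answer, pick an unused word containing that
        letter and record the 1-based position where it appears."""
        picks = []
        for letter in answer.upper():
            for word in WORDS:
                if letter in word and all(word != w for w, _ in picks):
                    picks.append((word, word.index(letter) + 1))
                    break
            else:
                raise ValueError("no unused word contains " + repr(letter))
        return picks

    if __name__ == "__main__":
        # Swap in whatever answer the editors need this week.
        for word, position in pick_words("HINDSIGHT"):
            print(word, "- take letter", position)

The point of a sketch like this is just that the answer is a parameter rather than something baked into hand-built clues, so swapping the puzzle into a different slot is a one-line change.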

I didn't need the emergency backup puzzle: the sites I chose endured all year, unchanged! Still, this emergency puzzle will show up in this story a couple of times, so let's give it a name: "Hindsight H."


I helped out on some research. Folks wanted to know: what kind of puzzles go into an MIT Mystery Hunt? Darned few Left Out folks had experience running the MIT Mystery Hunt, but plenty of us had experience running other puzzle-y event-y things. (E.g., my most recent puzzle-writing was a set for Puzzled Pint, meant to be quickly solved by tipsy folks. Could we write 200 Puzzled Pint sets, wrap them up in a metapuzzle, and call it done?) What should we do differently? What should we do the same?

Ian Tullis and I teamed up to look this over. I looked over some recent past hunts to see what categories (word, logic, trivia…) puzzles tended to fall into. (My (incorrect) hypothesis: MIT's engineering nerds would skew very logic-puzzle-y.) Ian put together a list of "mainstays": more-specific puzzle types that fill in the blank "Each year, Hunt has a ____ puzzle." E.g., a Duck Konundrum (albeit not called that if the author's not Dan Katz), a knitting puzzle, a scavenger hunt, …

I didn't want to categorize years' worth of hunt-puzzles myself, but I knew that Joe DeVincentis' excellent MIT Mystery Hunt Puzzle Index had all that information for puzzles up to 2018. Getting that info was pretty easy. It seemed only fair for me to volunteer to categorize the 2019 puzzles and hand that work over to the Index. So I got in touch with Joe and got him to write up instructions for his labeling process. Then I couldn't figure those instructions out, and I backed away. So in the end, I put Joe to a lot of trouble and did nothing for him. D'oh. Sorry, Joe.

What did I find out? The "mix" of puzzles changes a fair amount from year to year. If we made the kinds of puzzles we were accustomed to making, we wouldn't end up with something "out of bounds"—the bounds were pretty loose.

I also looked over some old survey data: After Hunt, the running team can send out a survey for solvers to fill in. A recent survey had asked solvers what categories of puzzles they preferred. Overall, solvers (narrowly) preferred word puzzles; but MIT undergraduates (narrowly) preferred logic puzzles. So my hypothesis wasn't crazy, just wrong. (Darned few MIT undergrads filled in that survey, by the way. So if you're a running team hoping to encourage more MIT undergraduate involvement, maybe get a statistician to look over that survey data before you swap out all your crosswords for sudoku. I'm not a statistician, but I bet you a nickel that a statistician would tell you "don't make any changes to your approach based upon this paltry data sample".)

While researching, I stumbled onto some unwanted information: My Eastman-Lobby puzzle used six mini-plaques arranged like ⠿ to make Braille. A hunt 12 years before had used those same mini-plaques to make Braille. What was the "statute of limitations" for re-using this? I shrugged and decided to keep the gimmick.

Continued: Mar [>]
