MITMH 2020: 2019 Mar

Careful: I'm pretty blasé about spoilers for this year-old hunt. If you haven't already solved the puzzles you wanted to solve, maybe don't keep reading.

Every so often, fearless leader Corey sent out a "State of the Hunt" email. Here are some excerpts from one such email, sent in March, to let you know what the writing team was doing. (And to remind me, as I write this, what we were doing.)

We’ve filled all our core czar roles. Other roles remain open (and, of course, there’s ample opportunity to author or test solve puzzles!); please contact me, Todd, or Yar for interest.

Round Two [theme] proposals were edited; Round Two voting commenced; and Round Two voting is over. We have a theme: Puzzle World! Big thanks to the entire team for all the creativity in revising the final four proposals. Puzzle World was a top pick for nearly half the team, and while that means it was not the top pick for nearly the other half, we hope everyone will soon rally around our new theme.

Yar ran the first Puzzle Creator’s Workshop this evening. More to come!

More planning! Triumvirate shared their guiding principles for Hunt 2020, and Justin is facilitating a discussion about Hunt requirements and features (including proposals to do away with answer callbacks and to install a leaderboard). Shelly summarized the ideas for storyline from #theme [Slack channel] and Dan summarized the current thinking from #structure.

For our longer-lived and larger, more complex topics, Slack has facilitated communication but not necessarily clarity or progress. The discussions around storyline, structure, and answer callbacks have been good examples of this pattern. Creating companion docs to maintain the state of the discussion (as Shelly, Dan, and Rich have done) seems to be a good pattern to follow. We should be on the lookout for more instances where Slack, alone, isn’t leading to progress.


Back in February, fearless leader Corey was matching people to leader-roles. He sent around a document with a list of jobs and asked folks to self-nominate for jobs they wanted.

I'd looked over the job list; I'd thought about our team roster. For each job, there was at least one Left Out team member who was an obvious choice. For each job, that obvious-choice person wasn't me. (On a less-powerhouse team, maybe I would have been the obvious choice for something. On this team of champions, world record holders, captains of industry, etc., I wasn't.) On the other hand, probably some of those obvious-choice folks were pretty busy and couldn't volunteer. I was a fine choice for a few roles and had plenty of free time to volunteer this year.

So I'd written to Corey, and a few weeks later he wrote back and asked if I'd like to be "Production Czar." I'd organize some volunteers to convert puzzles from napkin-sketches-or-whatever to things we could show on a web site. That sounded good to me.

Soon enough, I was tinkering with ideas of what information we'd need for each puzzle.


Meanwhile, we were getting ready to turn many many puzzle ideas into puzzles.

Tech Czar Doug Zongker got Puzzletron up and running. Puzzletron is a program useful for testing many many puzzles. Using Puzzletron, puzzle authors can upload draft puzzles; testsolvers can test those puzzles and submit feedback; puzzle authors can revise their puzzles; editors can approve puzzles and keep track of those approvals.

(In 2021, the excellent ✈✈✈ Galactic Trendsetters ✈✈✈ folks revealed puzzlord, their replacement for Puzzletron. If you're reading this report because you want to run a hunt, use puzzlord. Don't use Puzzletron just because you saw it mentioned in this here write-up and think it must therefore be SOP.)

With Puzzletron up and running, folks who'd been tinkering with puzzles at home could upload those puzzles for other folks to test. There weren't many of these early puzzles, but there were enough that we could start figuring out the Puzzletron workflow.

We held some classes about puzzle-writing for newbie puzzlewriters. Experienced puzzlewriters speechified on various topics. Someone set up some big videoconferences; presenters presented. Students sat at home and watched. There were exercises. Students brainstormed ideas. Mentors chose one brainstormed-idea per student. Students developed chosen-ideas into puzzles to be tested…

I don't know how much of that actually happened. I mentored three students. How much did I help them?

That's cool and all, but if you ask me "Which parts of your new-puzzlecrafter-training program worked and which didn't?"…maybe nothing worked? Well, something worked: many many folks on the team wrote puzzles, including some who hadn't written puzzles before. I don't know if the workshops and mentoring helped directly, but they did demonstrate that there were folks on the team who were eager to support new writers. Maybe that was encouraging?

Anyhow, there weren't many puzzles piling up in Puzzletron, but we still got organized for shepherding them along. Editor-in-Chief Yar Woo found five experienced Left Out puzzle writers who were willing to be editors: kibbitz on new puzzles; approve them to start testing; approve them to finish testing; keep writer-tester discussion civil. Our starting task: look over new puzzles showing up in Puzzletron, occasionally claim some, and start kibbitzing.

There weren't many puzzles coming in yet, though. It was pretty easy to forget to look at Puzzletron for days at a time. But there were some. It's just as well that things were slow at first; we didn't really know how Puzzletron permissions worked yet, and there were some early stumbles. (I'm not sure how many of us ever figured out Puzzletron permissions.)


There was other puzzle stuff going on this month.

The year before, I'd turned in a set of puzzles for Puzzled Pint. I'd worked with Neal Tibrewala and thus gotten the puzzles into pretty good shape. This month, a bunch of Puzzled Pint organizers testsolved the puzzles; they had some suggestions. So I blew the dust off those puzzles and carried out the suggestions. There was last-minute formatting stuff: adding copyright notices and such.

The Miskatonic Game announced their application process; Team Mystic Golems scrambled to apply. Actually, I didn't do much scrambling here. But other folks on Mystic Golems did.


We voted on and discussed "contentious topics"; we talked about which aspects of our Hunt we wanted to be amazing… and which aspects would be "good enough is good enough" quality.

When we could, we informed our decisions with data. E.g., we were figuring out whether we'd continue the Hunt's phone-answer-callback tradition. We suspected that for a winning team of our size, calling back teams for each answer-guess would strain us. But "suspected" isn't such a rigorous word. We were darned curious to know: How many answers do teams submit? Depending on that number, maybe we'd need to keep one nerd on the phone or maybe we'd need six. Some past Hunt-runners made server logs available; that helped a lot. Thank you, past Hunt-runners.
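To make that arithmetic concrete, here's a back-of-envelope sketch. The guess rate and call length here are invented placeholders, not numbers from the real server logs:

```python
from math import ceil

# Hypothetical numbers, for illustration only.
peak_guesses_per_hour = 120  # answer submissions at the busiest hour (assumed)
minutes_per_callback = 3     # dialing, confirming, maybe chatting (assumed)

callbacks_per_person_hour = 60 / minutes_per_callback  # 20 calls/hour each
phones_needed = ceil(peak_guesses_per_hour / callbacks_per_person_hour)
print(phones_needed)  # 6 -- with these made-up numbers, six nerds on phones
```

Swap in the real peak rate from the logs and the staffing answer pops out.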

I looked over some survey data. A couple of recent hunts had surveyed players and asked them what they liked/didn't like. That survey data was part of a big pile of data that SETEC (the previous year's running team) handed over to us.

One thing we were curious about: What's a normal team size? The Hunt doesn't have a set team size, a maximum team size… We knew that we didn't want to write enough puzzles to keep a hundred-person team busy for 72 hours if that meant a normal-sized team would only get 10% of the way through a hunt, but we didn't know what "normal" size was. So we looked at team sizes.

I had an insight that I liked: We weren't necessarily looking for a plain ol' median team size. If you had five teams sized 10, 15, 20, 30, & 50, you might say the median team size was 20. But if you asked the people as individuals, you'd say the "middle" team size was 30: line up all 125 people by their team's size, and the halfway person (number 63) is on the 30-person team, because that 50-person team holds about as many individuals as the 10-, 15-, and 20-person teams combined.
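In code, that person-weighted median looks something like this minimal Python sketch (using the toy team sizes above, not real survey numbers):

```python
def person_weighted_median(team_sizes):
    """Median team size as experienced by individuals, not teams.

    Each team contributes one "vote" per member, so a 50-person team
    counts five times as much as a 10-person team.
    """
    sizes = sorted(team_sizes)
    total_people = sum(sizes)
    running = 0
    for size in sizes:
        running += size
        if running * 2 >= total_people:  # we've passed the halfway person
            return size

teams = [10, 15, 20, 30, 50]
print(sorted(teams)[len(teams) // 2])  # plain median: 20
print(person_weighted_median(teams))   # person-weighted "middle": 30
```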

I didn't have an insight at the time that, in hindsight, I wish I'd had: when figuring out a "normal" team size, we should have looked especially closely at the sizes of teams with MIT undergrad members. We had a goal: fun for MIT undergrads. (It wasn't our only goal, but it was an important one.)

I looked at responses from all poll-responders, which was good. But I dunno whether MIT undergrads tend to play on big teams (big teams need "fresh blood" to stay big) or on small teams (the legendary "small gang of nerds in a dorm lounge") or whether they're distributed similarly to generic puzzlers or…

And I dug through survey data to find various other things. How many puzzles should be unlocked at a time? Folks have opinions. How many hunts have folks done? (Unsurprisingly, MIT undergrad participants have done fewer than four.) How important are the opening skit and story to solvers' enjoyment? (Meh.) How important are HQ room-visits to solvers' enjoyment? (Better.) How important is the phone callback system to solvers' enjoyment? (Quite a bit.)
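The digging itself was mostly simple tallying, roughly like this sketch. The column names and rows are invented stand-ins, not the real survey schema:

```python
from statistics import mean

# Made-up rows standing in for the real survey export.
responses = [
    {"undergrad": True,  "hunts_done": 2, "callback_importance": 4},
    {"undergrad": False, "hunts_done": 9, "callback_importance": 5},
    {"undergrad": False, "hunts_done": 6, "callback_importance": 3},
]

undergrads = [r for r in responses if r["undergrad"]]
for label, rows in [("undergrads", undergrads), ("everyone", responses)]:
    print(label,
          "mean hunts done:", mean(r["hunts_done"] for r in rows),
          "callback importance:", mean(r["callback_importance"] for r in rows))
```

Nothing fancy: filter, average, compare groups.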

While researching the phone-callback system, I learned something that was news to me (though not to some other folks on Left Out): Mystery Hunt phone callbacks were also an unofficial hint system. Until this point, I'd ignored folks who were talking about whether our hunt should have a hint system; I "knew" that the MIT Mystery Hunt didn't have hints. But I was wrong: there were hints hiding in the phone callbacks.


So yeah: The month of March was a lot of research and discussion informing decisions. Figuring out which way we were going.

Continued: Apr [>]
