I follow many feeds; I used to follow the tech news site PandoDaily. I ignored most of it; modern tech reporting has problems, and PandoDaily illustrated plenty of them. But for a while, there was something interesting: a reporter named Hamish McKenzie wrote about the Chinese tech industry. While he was writing articles, he was also writing a book about his experiences; it has snippets from interviews, some compare-and-contrast, some insights that might turn out to be right or wrong. It was a quick read, and cheap besides, a snapshot of China's tech scene.
An Adam Segal quote on how corrupt local officials, going where the money is, hamper China's tech:
American C.E.O.s love to quote China’s rising investment in R. & D., but they never follow with Chinese reporting that suggests sixty per cent of state R. & D. funds are lost to corruption. High-profile plagiarism and academic malfeasance cases continue to surface.
It was also a bit scary to read about the Chinese view that Google+'s UI is a rip-off of Sina Weibo. So maybe there's not much point in innovating if you think some big American company can copy your design. (And even if that didn't happen… if everyone around you is telling you it happened, it sure makes you reluctant to put a lot of time into wild new ideas.)
Max Alper tried to fool San Francisco yesterday. He posed as a Google employee and screamed condescending crap at people protesting high San Francisco rents. Apparently, the folks staging the protest have previously used a fake Google shuttle as a prop. This could only make sense in a crazy alternate universe in which Google employees like paying high rents.
So this guy lied, trying to smear Google; that's the kind of thing that makes you wonder about motive. Who benefits from framing San Francisco's rental woes as tech-workers-versus-the-world? I'm guessing it's the landlords who are evicting folks. They've caught some heat for the evictions; they'd be glad to deflect that heat.
The Noun Project sponsored an iconathon to create and compile a collection of little drawings to represent educational topics. Many icons were designed, two of which caught my eye.
Games for Learning, Problem-Based Learning
Since I'm trying to teach folks how to play puzzlehunt games, I want another icon, one for Game-Based Problems for Learning Problem-Based Games. C'mon designers, get on it. OK, I don't really want that icon. I'd rather imagine it.
It's a science fiction novel. How do financial transactions work in an interstellar civilization which has that nasty speed of light to contend with? How would you defraud such a system?
I added a puzzle to Octothorpean this weekend. It took longer to figure out where to put the puzzle than it did to write the puzzle itself. I wanted to add a puzzle close to the beginning of Octothorpean, but didn't want to make the longest "arc" even longer. I didn't know which was longest, so I measured.
The plurality of Octothorpean is a set of eight "arcs" or trails of puzzles that come together at a metapuzzle. Some arcs take longer to finish than others. Ideally, they'd all take about the same amount of time. In practice, I'd settle for the slowest path taking less than e times as long as the quickest path.
I wanted to add a puzzle. If I added it to the slowest arc, I'd make that arc even slower. Really, I wanted to add it to the quickest arc. But which was quickest?
I'd already measured this somewhat: I'd originally "balanced" the arcs based on playtesters' solving times. But that timing data was… noisy. Some puzzles had only been seen by two teams; a few had only been seen by one team. In some cases, that one team was a Small Subset of the Demonic Robot Tyrannosaurs, and they were very fast indeed.
But now more than 100 teams have played. So I had more data. I wrote a little program to crunch the logs: for each of the eight arcs, for each team that's completed that arc, figure out how long that team took. From this, for each arc, we can find the median time to complete the arc, easy-peasy.
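That log-crunching could look something like this: a minimal Python sketch, assuming the logs boil down to (team, arc, event, timestamp-in-hours) records with hypothetical "start"/"finish" events. The field names are made up for illustration; the real Octothorpean logs surely look different.

```python
from statistics import median

def median_arc_times(events):
    """For each arc, the median wall-clock time among teams that finished it.

    events: iterable of (team, arc, event, t) tuples, where event is
    a hypothetical "start" or "finish" marker and t is hours elapsed.
    """
    starts, finishes = {}, {}
    for team, arc, event, t in events:
        if event == "start":
            starts[(team, arc)] = t
        elif event == "finish":
            finishes[(team, arc)] = t
    per_arc = {}
    for (team, arc), t_end in finishes.items():
        if (team, arc) in starts:  # skip finishes with no recorded start
            per_arc.setdefault(arc, []).append(t_end - starts[(team, arc)])
    return {arc: median(times) for arc, times in per_arc.items()}
```

A team that starts an arc but never finishes it simply contributes nothing, which is what you want here: the median is over finishers only.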
The results surprised me: the longest arc took more than five times as long to finish as the shortest.
Now I wasn't trying to figure out where to insert a new puzzle: suddenly I was doing damage control, trying to figure out why this one arc was so difficult to solve.
I wrote another little program, this one to crunch the logs another way, going for finer granularity: instead of computing the median time to finish an arc, I computed the median time for each puzzle. Then added up the puzzle-times to figure out arc times. The big multiplier… disappeared. Using this computation, another arc was the longest arc (but not by a factor of five, thank goodness).
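The finer-grained version, again with made-up data shapes: per-puzzle solve durations keyed by puzzle name, and arcs as lists of puzzle names. Summing the per-puzzle medians means a team's overnight break on any one puzzle only skews that one puzzle's numbers, not the whole arc:

```python
from statistics import median

def arc_times_from_puzzles(solve_times, arcs):
    """Arc time = sum of per-puzzle median solve times.

    solve_times: {puzzle: [durations, one per team that solved it]}
    arcs: {arc: [puzzle names in that arc]}
    """
    puzzle_median = {p: median(ts) for p, ts in solve_times.items()}
    return {arc: sum(puzzle_median[p] for p in puzzles)
            for arc, puzzles in arcs.items()}
```

Note how one team's 10-hour outlier on a single puzzle barely moves that puzzle's median, whereas in the whole-arc measurement it would have dragged that team's entire arc time up.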
What was going on? Why did teams take so long to get through that arc, but not if you looked puzzle-by-puzzle?
More than half of the teams took a break partway through that arc. They didn't all do so on the same puzzle. But somewhere in that arc, they took a break. When I looked more closely at the median time to finish the arc, the absolute number turned out to be more interesting than its multiple-of-the-quickest-arc figure: the slow arc took 11 hours. Partway through, teams took a break, went away to sleep, and came back. Some went away for a week, some for a day. The median time to go away was about enough for a good night's sleep and a big breakfast.
Since they took their breaks on different puzzles, when I looked at median puzzle-solving times, the breaks disappeared. For each puzzle, the median-time team solved without a break.
In theory, the median's a good thing to measure for stuff like this: avoiding skewed timing data from fast Tyrannosaurs and slow folks who decided to get some rest. But I'm glad I looked more closely at that data, or else I'd still be trying to cure that slow arc.
Anyhow, now I'm looking at "slow" puzzles. And using that "funnel" analysis I wanted on each arc, figuring out which puzzles caused the most folks to quit that arc. I looked at stats from 100 teams. As a rule of thumb, for each arc 50 teams started the arc and 30 finished. That's not terrible, but I bet those quit-inspiring puzzles could each use a closer look.
(Oh, and I moved away from using the median to something I've been calling the "twedian": instead of the middle value, I take the value that's 2/3 of the way through the sorted list. It's an attempt to counteract the effect of so many Tyrannosaur-like teams playing this game when I'm trying to optimize the fun for not-so-experienced teams.)
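For what it's worth, a twedian is nearly a one-liner; the exact index for "2/3 of the way through" below is my guess at a reasonable convention, not necessarily the one the site uses:

```python
def twedian(values):
    """The value 2/3 of the way through the sorted list (vs. the
    1/2-way median). With lots of fast teams in the data, this leans
    the statistic toward the slower, less experienced teams."""
    ordered = sorted(values)
    return ordered[(2 * len(ordered)) // 3]
```

With three solve times, the twedian is the slowest one; with six, it's the fifth-fastest.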
The author, @choonpiaw, is a computer nerd who made good and didn't squander all his money. Maybe you're a computer nerd who just got lucky with an IPO… and in a few years you would like to be another computer nerd who made good and didn't squander all your money. Maybe you've heard enough about the odds on startups to realize your chances of being similarly lucky in the future are not great. Maybe you want to sock the loot away for future lean times. You started to learn about investing, but backed off when you recognized it as a flawed interface. Now you're thinking of using an advisor… but you've seen some of those people acting like irresponsible jerks. How to pick a good one? You'll probably need to talk to them. How to Interview a Financial Advisor is a book for nerds who want to find a financial advisor. It has questions to ask; more importantly, it's got things to look for in the answers, positive and negative.
Careful: this book has a not-so-hidden agenda. It turns out that you can come up with a decent financial plan without a financial advisor. Along the way of telling you how to interview an advisor, this book will teach you enough to come up with such a plan. You'll probably overlook the next big fad in investment… probably wouldn't understand it even if you saw it… but most of those turn out to be not-so-great, anyhow.
Disclosure: I've known Piaw for, wow, decades. I read an early draft of this book. Whenever there was something in it I didn't follow, I asked for more explanation. The final draft is clear to me, but your mileage might vary. Oh yeah, and I haven't actually tried interviewing a financial advisor based on the advice in this book. Hmm, maybe if things get slow around the table at Thanksgiving dinner I can try out some of these questions on my cousin. Hmm, actually they probably wouldn't liven up the discussion so much, never mind.
As usual, these "hits" aren't a measure of humans visiting pages; that count would be much lower. It's just requests to the website: every time a robot visits some page, the count goes up. If a human views a page that contains a dozen graphics, those graphics cause another dozen hits. So it's not as impressive as it sounds, but it's easy to measure, so that's what I measure. The 24 millionth hit…
22.214.171.124 - - [25/Nov/2013:08:47:04 -0400] "GET /manual/sc/jp/index.html HTTP/1.1" 200 5546 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"
…looks like the Bing web search engine's web crawler robot just came by to confirm that I haven't changed the Japanese translation of the GEOS-SC documentation's top page. Bingbot only checked 170 pages that day. You may recall that it used to crawl absurdly more pages; I kinda made fun of it, crawling my site so assiduously though I rarely change the site. This raises the question: how often do I change it? I tried counting the unique dates on which I changed files on this site. That's not a perfect way to count this. The Daily Nonsense page, of course, updates each day. But my quick-and-dirty counting method doesn't notice that: it only notices that the page changed today. Anyhow: from late 1999 to late 2012, I see 1113 file modification dates. That's about 85 days per year, once or twice a week. That's not so often, by computer speeds.
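The quick-and-dirty counting could be done like this (a hypothetical Python version; it only sees each file's latest modification date, which is exactly the blind spot described above):

```python
import datetime
import os

def modification_dates(root):
    """The set of distinct calendar dates on which any file under
    `root` was last modified. No history: a page that changes daily
    contributes only today's date."""
    dates = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            mtime = os.path.getmtime(os.path.join(dirpath, name))
            dates.add(datetime.date.fromtimestamp(mtime))
    return dates
```

Run over thirteen years of a site, `len(modification_dates(root))` is the 1113-ish number above; divide by 13 and you get that once-or-twice-a-week figure.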
I think we can all agree that Bingbot needs to relax.
Ha it is so amusing: in my apartment, two smoke alarms ended up right next to each other ha ha. Rental laws ha ha so droll.
Oh wait, how can I tell which of them was occasionally low-battery-chirping all last night? Did I say amusing? I meant "vexing". Yes, that's it.
This morning, I had the spirit to look up. Above the usual eye-level was a crudely taped-up laminated message, trivia-lly cluing a certain location. (Not my location; I was in San Francisco, USA, of course.)
A loose bit of trivia, out in the world. Maybe not a puzzlehunt… but probably some game or other. I can imagine the game. Perhaps clues send you to locations; at each location you find the paper with the trivia question; you note down the answer on your answer sheet? Maybe. It's a lot to extrapolate from one taped-up paper, but it doesn't feel like too much of a stretch. Should I have taken down the question, checked the back to see if it had the organizer's contact information? Maybe. But I didn't. Hey, give me a break, I had other things to do. I found out that construction no longer obscured the [redacted] 2-Tone puzzle near Union Square. I checked on the [redacted] Octothorpean puzzle. I captured a Munzee. Y'know. Things. Anyhow.
This was the opening weekend for The Octothorpean Order game. It felt good! 60 teams solved the start-puzzle; 28 already made it as far as solving the big scary meta. Wow, that was fast; I think it's safe to say some experienced teams have now gone through and shaken a lot of the bugs out. Perhaps I mean big, experienced teams; it's hard to tell how many people are on a team, but most of those who've finished seemed to be solving puzzles on a few "arcs" at the same time. And folks who mail email@example.com with trouble reports (thank you for those!) refer to themselves as "we" so far. So they're either on multi-person teams or royalty, in which case I apologize for omitting your title, your highness!
Still more to do, of course. I've got notes for more LA puzzles to uhm, reify. Some folks were muttering about writing puzzles earlier; I should nudge them.
And now that there's data, I want to measure the "funnel." Sales folks look at a user's path through a web site as a funnel, perhaps because sales folks like funnel metaphors. They notice "OK, 1000 people came to our 'landing page' by clicking on an ad. Of those, 11 clicked through to the Product Details page, and 10 purchased something." They look at the fraction of users who made it through each step (11/1000 ain't so great; 10/11 is pretty darned good), trying to figure out where users run away from becoming customers. I'm not selling anything, but I bet a similar analysis would point out puzzles that are too hard; in an "arc" of puzzles, if 40 teams solved #2, but only 20 solved #3, probably someone should tinker with #3.
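A funnel boils down to pairwise ratios between consecutive steps. A small sketch, using the made-up sales numbers above:

```python
def funnel(step_counts):
    """step_counts: ordered list of (label, count) pairs along a path.
    Returns (label, fraction) for each step after the first: the share
    of people from the previous step who made it to this one. A sharp
    drop flags the step (or puzzle) where people bail."""
    return [(label, count / prev_count)
            for (_prev_label, prev_count), (label, count)
            in zip(step_counts, step_counts[1:])]
```

Applied to an arc, feed it [("#1", 50), ("#2", 40), ("#3", 20)] and the 0.5 next to "#3" is the puzzle someone should tinker with.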
That tinkering can lead to changes. When playtesters went through, they struggled with some puzzles. It's OK if they struggle some; they gotta learn this stuff. But some puzzles they struggled with weren't teaching them anything they'd really use in other puzzlehunts. So I kicked those puzzles into a pile of bonus-puzzles at the end; they weren't broken per se but they broke the flow.
I'm giving myself the Iron Ass Award for this weekend. I was basically just sitting at the computer and looking at the guess-logs, trying to spot new partials to add. (À la "Everyone keeps guessing 'blark' on the such-and-such puzzle… oh, I think I can figure out why they're guessing that," then I tweak the website to give a "nudge" to future teams who guess "blark".) I was pretty glad that things calmed down enough on Sunday that I granted myself permission to go out and procure sandwiches for lunch and dinner.