E.g., by the time I graduated from high school, I had more computer programming experience than this guy had when he was hired to write software that astronauts' lives depended on. That's crazy. But I grew up in the 1980s, when personal computers were coming in; I didn't have to wait around for hours for someone to process my punchcards. He was programming in the 1960s; there weren't any personal computers; programming back then involved a lot of waiting and hoping.
E.g., the Apollo 11 mission hit a pretty big computer problem, the 1202 error. This was because the lander had just one computer, responsible for many tasks. When everything went smoothly, this computer was powerful enough for these many tasks—a sliver of time for this task, a sliver of time for that task, and so on. But when things got gnarly and some tasks took longer, there weren't enough slivers to finish everything. You thus ended up with the tense situation of the lander's computer horking up a 1202 error, going blank, and restarting in the middle of, y'know, trying to land some humans safely on the moon. I, reading about this, had the naive thought: Why didn't they just set up a dedicated processor for this task? That would be a lot simpler than juggling all of those slivers. But it was the 1960s. You couldn't just walk down the street to a hobby electronics shop and buy a processor. Integrated circuits were a newfangled invention. When designing a computer, you had to second-guess your chip supplier: Were you the only customer for this chip? Could you buy enough to justify that chip's continued manufacture? It was crazy to use one computer for all that… except that it was the only choice.
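If you want a feel for the sliver-juggling problem, here's a toy sketch in Python. To be clear: this is nothing like the AGC's actual executive; the task names, costs, and cycle budget are all made up. It just shows the shape of the failure: tasks that fit the budget on a good day, and an overload that blows it.

```python
# Toy sketch of a time-sliced "executive", loosely inspired by the
# situation above. NOT the Apollo Guidance Computer's real scheduler;
# names and numbers are invented for illustration.

CYCLE_BUDGET = 100  # compute "slivers" available per cycle (made-up units)

class ExecutiveOverflow(Exception):
    """Raised when scheduled tasks need more time than one cycle provides."""

def run_cycle(tasks):
    """Run each (name, cost) task; bail out if the budget runs dry."""
    used = 0
    for name, cost in tasks:
        used += cost
        if used > CYCLE_BUDGET:
            # Analogous to a 1202 alarm: too much work, drop everything, restart.
            raise ExecutiveOverflow(f"no slivers left for {name!r}")

# Nominal load: everything fits in one cycle.
run_cycle([("guidance", 40), ("display", 20), ("telemetry", 30)])

# Gnarly day: an extra task blows the budget mid-cycle.
try:
    run_cycle([("guidance", 40), ("display", 20), ("telemetry", 30), ("radar", 25)])
except ExecutiveOverflow as e:
    print("restarting:", e)  # prints: restarting: no slivers left for 'radar'
```

The real story is hairier—the AGC's executive was priority-based, so a restart shed low-priority work rather than killing the landing—but the budget-overflow shape is the same.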
So, yeah, if you're used to modern-day comforts of quality assurance, like plentiful processors for running all those unit tests or a computer display that can show more than three numbers at a time… this book is kind of a white-knuckle reading experience. Also, there are snippets of bohemian lifestyle around MIT in the 1960s and 1970s.