Part of the answer is simple: If we as leaders can talk about our mistakes and our part in them, then we make it safe for others. You don't run from failure or pretend it doesn't exist. That is why I make a point of being open about our meltdowns inside Pixar, because I believe they teach us something important: Being open about problems is the first step toward learning from them. My goal is not to drive fear out completely, because fear is inevitable in high-stakes situations. What I want to do is loosen its grip on us. While we don't want too many failures, we must think of the cost of failure as an investment in the future.
It's not clear how well Pixar's learn-from-failure approach translates to other organizations. At Pixar, some movie-story smarties hear how projects are going and suggest changes. If you're trying to engineer better software, you might get more oomph by measuring and improving… then again, you might still need some smart folks to tell you what's worth measuring in your situation.
"You can't manage what you can't measure" is a maxim that is taught and believed by many in both the business and education sectors. But in fact, the phrase is ridiculous—something said by people who are unaware of how much is hidden. A large portion of what we manage can't be measured, and not realizing this has unintended consequences. The problem comes when people think that data paints a full picture, leading them to ignore what they can't see. Here's my approach: Measure what you can, evaluate what you measure, and appreciate that you cannot measure the vast majority of what you do. And at least once every once in a while, make time to take a step back and think about what you are doing.
I've thought a lot about technical documentation for software developers: writing things down that, when read, help programmers do their jobs. Alas, its value is tricky to measure. Nobody's come up with a way to measure engineer "output". (Well, nobody's come up with a good way to measure engineer "output".) So we don't really know whether a given document is improving that output or making it worse. Thanks to web analytics, we can measure how many times a page is read. And it's worth measuring that: if you have time to polish one of two pages, you want to prioritize the one that many folks read. But don't fall into the common trap of treating number-of-times-page-read as a measure of how much the documentation is helping an engineering organization. You're measuring a cost, not a benefit: it's a little sad that engineers have to spend time reading documentation at all; you hope the documentation is good enough to justify that time.
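To make that prioritization concrete, here's a minimal Python sketch of the ranking step. The page names and read counts are made up; in practice you'd export these numbers from whatever analytics tool you use.

```python
# A minimal sketch, assuming you've exported per-page read counts
# from your analytics tool. Page names and counts are hypothetical.
page_reads = {
    "getting-started": 4200,
    "api-reference": 1900,
    "release-notes": 600,
    "style-guide": 150,
}

# Rank pages by readership: if you only have time to polish one page,
# start with the most-read one.
polish_order = sorted(page_reads, key=page_reads.get, reverse=True)
for page in polish_order:
    print(f"{page}: {page_reads[page]} reads")
```

Note that a ranking like this only tells you where reading time goes, not whether that time was well spent.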