The traditional development cycle for training courses, technical or otherwise, is a little different from the software development you may be used to. Think of all the artifacts in a typical training course. You probably get manuals to take home. There might be a few videos to watch. Instructors must be trained on the material. Labs and exercises must be designed and tested and tested again. Marketing assets must be generated and distributed. Facilities must be booked, along with any network requirements. If it sounds exhausting, that's because it is. That's why most training courses have a measured development cycle. A course is developed, proofread, tested, debugged, and then released. Put a fork in it, because it's done. Time to start again on the next course--scheduled for release in 6-8 months if you're lucky.
My Puppet Labs training is a little different. We move at a different cadence and might release three times in a week, if needed. Read on to find out why and how we manage this without losing our sanity.
Kari and I had dinner last night with some good friends who have just opened their new dojo in John's Landing, Portland. Along with the beer and wine, we had lots of philosophical discussions, because that's how Tony is. He's fully immersed in this philosophy and is happiest sharing this love with people he cares about. It's part of what makes him such a great teacher.
People often ask me for advice on designing or delivering training material. One of the most common questions is how to determine the appropriate pace for a training course or its lab exercises. Unsurprisingly, that's a tough question. The answer, though, is surprisingly straightforward.
Something that's been missing from my training classes for a while is a quantitative method for monitoring how well each class is keeping up. Instead, we've relied on each instructor's ability to read the class and adjust the pace appropriately. That does work, but it's less consistent than I'd like, and it doesn't let me systematically gather metrics about knowledge retention across the different training sections.
Clearly, I needed something better.
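To give a sense of what "quantitative" could look like in practice, here's a minimal sketch, purely illustrative rather than a description of what we actually built: record how long students take on each lab exercise and summarize the results per section, so pacing decisions rest on numbers as well as instructor intuition. The section names and timings below are made up for the example.

```python
# Illustrative only: per-section lab completion times, one entry per student,
# summarized so slow sections stand out across deliveries of the course.
from statistics import mean, median

# section name -> list of completion times in minutes (hypothetical data)
lab_times = {
    "resources-and-the-ral": [12, 15, 11, 22, 14],
    "classes-and-modules":   [25, 31, 28, 40, 27],
}

for section, times in lab_times.items():
    print(f"{section}: median {median(times)} min, "
          f"mean {mean(times):.1f} min, slowest {max(times)} min")
```

Even a summary this simple makes it obvious which exercises consistently run long, which is exactly the kind of signal an instructor's gut feeling can't reliably capture across many classes.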