A study that I would really like to see
Recently I have been helping a client establish standards for software development and configuration management. The developers are all very excited about instituting practices like unit testing, code reviews, and continuous integration, but there is concern about whether the pressure and pace of development will allow time to do things right. Even though management brought me in to establish these practices, there is doubt as to whether the business can resist setting unrealistic expectations that force hurried work.
So the big question is: does doing things the "right" way take longer and jeopardize short-term deadlines? I think the conventional wisdom is that process investment starts to pay off toward the end of a project or in subsequent releases of the system. But when is the break-even point? I have heard lots of claims but very little hard data, mainly because no two projects are alike enough to compare directly. I think the only way to get evidence that is even close to conclusive is to do a study like the following.
Take a large group of undergraduate computer science majors and randomly assign them to three independent classes taught by the same professor. The coursework for the semester is a series of software development projects to build an application, where each project builds on the previous one. Each class gets the same projects. No class knows what the next project will be or what the other classes are doing. All students track the time they spend on the projects. The three classes are instructed as follows:
- Class A is given very aggressive deadlines and is graded solely on how well the application meets the requirements.
- Class B is given less aggressive deadlines and is graded on development standards (unit tests, configuration management, and code structure and style) as well as the application's satisfaction of the requirements.
- Class C is given the same deadlines as Class B but is graded only on the same criteria as Class A.
At the end of the semester, you compare the time spent by the three classes and the overall quality of the three applications (defect rate, adherence to the requirements, etc.).
If the widely accepted best practices in software development are as effective as believed, Class B will end the semester with the best software and the least stress. Class A will be totally stressed, and their application will be barely hanging together. My guess is that Class C breezes through the start of the semester and enjoys the free time, but has to do a bit of cramming at the end.
Has anyone seen or done a study like this?