Showing posts with label Planning. Show all posts

Thursday, November 19, 2009

The Planning Race

Step 1. Figure out which features to implement.

Step 2. Specify the tasks that are required to complete the desired features.

Step 3. Peer review the specified tasks.

Step 4. Calculate team members' velocities based on the previous two iterations of accomplished work.

Step 5. Start the planning race.

The planning race is where team members attempt to fill their task queues (the upper limit of which is determined by the velocity) as quickly as possible. The faster you grab tasks, the more likely you'll get to do the things you want to do. The race should take place in a large conference room where loud talking is allowed. Bargaining, bartering, calling dibs, and talking smack are all highly encouraged. If you're doing it right it should almost sound like the commodities market where teammates are buying and selling tasks at fixed hourly rates. As punishment for taking more tasks than allowed by your velocity, other team members are allowed to cherry pick tasks from your queue.
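The queue-filling rule can be sketched in a few lines of code. The names, tasks, and hour figures below are all hypothetical; the only rule taken from the text is that a claim fails once it would push a member past their velocity:

```python
# Sketch of the planning-race queue rule: each member grabs tasks
# until the next task would push them past their velocity cap.
# All names and hour values below are made up for illustration.

velocity = {"alice": 14, "bob": 10}          # hours, from the last two iterations
backlog = [("write parser", 6), ("fix login", 4),
           ("style report", 5), ("update docs", 3)]

queues = {member: [] for member in velocity}

def claim(member, task):
    """Claim a task if it fits under the member's velocity cap."""
    name, hours = task
    used = sum(h for _, h in queues[member])
    if used + hours > velocity[member]:
        return False                         # over-claiming: cherry-picking time
    backlog.remove(task)
    queues[member].append(task)
    return True

claim("alice", ("write parser", 6))
claim("alice", ("fix login", 4))
claim("alice", ("style report", 5))          # 6 + 4 + 5 = 15 > 14: rejected
claim("bob", ("style report", 5))
```

The bargaining, bartering, and smack talk are, of course, out of scope for the sketch.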

Advantage: Dialing the planning knob to 11 means less time spent planning. I know how much you love meetings, but less time spent planning means more time spent getting things done.

Disadvantage: The Planning Race requires a healthy backlog of tasks to pull off. There have to be at least enough tasks for everyone to fill their queues, and ideally a few more left on the backlog. Tasks also have to be well specified, meaning everyone understands and agrees on what needs to be completed.

The moment of Zen for the Square Root team was when we incorporated Step 3. Peer reviewing new tasks in the backlog streamlined our entire planning process and allowed us to plan faster and better than we had ever planned before. The result: not only were we spending less time planning but the quality of our plan increased dramatically. Some of this may be due to increased maturity and practice, but I stand by the Planning Race. It's super fun.

Before tasking peer reviews:

After tasking peer reviews:

Friday, November 13, 2009

Planning Process Communication

Background

During the first half of the spring semester, our team followed a continuation of our fall planning process. Milestones were assigned to milestone owners at the planning meeting, along with budgeted time, and it was then the milestone owners' responsibility to make sure the milestones were completed by the end of the iteration.

There were several issues with this process:


  1. The milestone owners were supposed to hand out the tasks for their milestones, but they felt responsible for the tasks, and so the owners tried to finish the milestones by themselves.
  2. Since there was no team review of tasks, the milestone owners often did not specify tasks in adequate detail. So even when other team members wanted to help out, they did not have enough detail about the tasks.
  3. The lack of detail contributed to bad estimates on the milestones, so each iteration most of the milestones would not get done. As a result, the team was getting demoralized (finishing makes people happy).

At that time, right before Spring break, Michael suggested that we try something Scrum-like.


Creating the process


With Michael's suggestion and a Scrum book from Marco, I wrote up our Spring planning and tracking process, which galvanized the team, organized its work, and brought happiness to the world. No, actually, the process raised another issue, but I'll talk about that in the next section.
As you may see from this process, its key points were these:
  1. Milestones still had owners, but the owner's responsibility was to adequately specify tasks before the planning meeting, not after. These tasks formed the product backlog.
  2. The team sat down together, decided on the milestone and task priorities, and then took tasks from the backlog according to those priorities.
  3. The team lead's job was to make sure overhead tasks were also included in the individual backlogs. At the end of the planning meeting, no team member was supposed to have more than 12 hours of tasks.
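The queue-filling part of the process can be sketched as a few lines of code. The milestone names, priorities, and hour figures below are made up; the only rules taken from the text are that overhead tasks are seeded in first, tasks are taken in milestone-priority order, and each member stops at 12 hours:

```python
# Sketch of the spring planning rule: seed overhead tasks, then take
# backlog tasks in milestone-priority order, stopping at the 12-hour
# cap. All task names and hour figures are hypothetical.

CAP = 12  # maximum hours of tasks per member per iteration

backlog = [  # (milestone_priority, task, hours); lower number = higher priority
    (1, "design review form", 5),
    (1, "implement review form", 6),
    (2, "export report", 4),
    (3, "polish UI", 3),
]
overhead = [("status meeting", 1), ("iteration report", 2)]

def fill_queue(backlog, overhead):
    """Build one member's queue: overhead first, then prioritized tasks."""
    queue = list(overhead)
    hours = sum(h for _, h in queue)
    for _, task, h in sorted(backlog):       # highest priority first
        if hours + h <= CAP:
            queue.append((task, h))
            hours += h
    return queue, hours

queue, hours = fill_queue(backlog, overhead)
```

In this sketch a task that doesn't fit is simply skipped in favor of smaller, lower-priority ones; in practice a too-big task is usually a sign it needs to be split.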

After writing down the process, I emailed it out to the team, and then reviewed it at our weekly status meeting. The team seemed to generally like it.

Implementation

However, after Spring break, when we came back to implement the process, we found significant gaps in the team's understanding of it. It was clear that we all had different interpretations of the process, even though it was written down and crystal clear (to me, at least).

That was when we had to forget our previous reviews of the process and just work it out face to face with the team. We tried out each stage of the process and adjusted it according to our needs. We prioritized milestones, then went off to detail tasks, and then came back and took those tasks according to milestone priorities.

At the end of that 5-hour-long exercise, we had our first product backlog and individual tasks that each of us was responsible for. Since everyone had tasks they had voluntarily taken, the milestone owners were no longer burdened with distributing tasks. And everyone knew the milestone priorities, so the most important milestones were guaranteed to be finished.

I learned a big lesson from that meeting. It does not matter how eloquently you write something down. It does not matter how emphatically your team nods their heads when you ask if they understood it. Only when the team can work through a process together and feel confident about it can you say that the team understands it.

Conclusion

The result of all this effort was a happy ending after all. In the summer, the team added some additional checkpoints to make sure tasks were being specified correctly and completely, and added a client-prioritization step. The underlying process, however, stayed the same.
The key takeaways from this were:

  1. Critical processes such as planning and tracking need to be communicated. And sometimes you have to sit with your team in a 3-hour meeting for that communication to happen.
  2. Prioritizing milestones together with the team really helps to get team buy-in on the importance of those milestones.
  3. Since the team members were taking tasks themselves, they felt more responsible for them than before.


Tuesday, November 10, 2009

Using Use Case Points

Early on we decided to record our functional requirements as use cases. Estimating how much effort was required to complete the project based on the use cases we collected turned out to be a much more challenging problem. In creating our estimates I turned to use case points estimation (PDF), a technique described by Mike Cohn of Mountain Goat Software.

The basic premise of use case points is that by counting the number of transactions in a use case and then applying some general rules of thumb to that number (possibly taking into account adjustment factors based on quality attributes and environmental factors), you can get a rough order-of-magnitude sense of how big a use case is. This is the same basic premise used for all estimation (count, optionally adjust, project based on data) that McConnell describes in his excellent estimation handbook, Software Estimation: Demystifying the Black Art.

Counting transactions is easy and straightforward. For the SQUARE Tool, we calculated approximately 255 unadjusted use case points and 244 adjusted use case points. [A complete breakdown of use cases and their corresponding point estimates is available on our archived project website.] The use case point estimates gave us a rough idea of how big and complex each use case would be compared to the others. The tricky part for us was projecting effort and schedule from the use case point number. Being a new team, we didn't have historical data. To further complicate matters, we were conducting this estimate in parallel with our architectural design work, much later in the life of the project than Cohn's paper implies the technique should be used.

Not having a means of projection, I turned to some interesting sources. Keep in mind that the total effort allocated to the studio project is only about 5,300 person hours (over a 16-month period) and time must be split among all stakeholders including faculty (e.g. end-of-semester presentations). At the time these estimates were created, about 4,000 person hours of work remained in the project.

Assuming 20 - 28 hours per point means we would need between 4,800 and 6,800 person hours of effort to complete the project.

Converting use case points to function points to Java lines of code (result is approximately 8,000 LOC) and then running these numbers through COCOMO II (default settings) gives an estimate of 3,700 - 5,900 person hours of effort to complete the project.

Surely MSE students work faster than the default COCOMO II assumptions. Given that MSE teams typically produce code at a rate of 4.48 LOC/hour, the Square Root team would need only 1,819 person hours to complete the project.
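The first and third projections above reduce to one line of arithmetic each; this sketch redoes them with the post's figures (the COCOMO II range came from the tool itself, so it is only quoted, not derived):

```python
# Back-of-the-envelope projections using the post's figures:
# 244 adjusted use case points, roughly 8,000 LOC, and the historical
# MSE studio rate of 4.48 LOC per person hour.

points = 244
point_range = (points * 20, points * 28)   # 20-28 hours per use case point
# exact values: (4880, 6832) person hours, which the post rounds

loc = 8000
mse_hours = loc / 4.48                     # historical MSE studio rate
# about 1,786 person hours; the post's 1,819 suggests its LOC
# estimate was a bit above the round 8,000 used here
```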

According to the estimates, none of which corroborate one another, the project will take between 2,000 and 7,000 person hours of effort! So we'll either finish way under time or blow the whole project. Not very useful.

To overcome the variation in our estimates and hopefully come up with something a little more useful, we conducted a Wide-band Delphi estimation session, sampling a set of use cases to determine an approximate value for a use case point. Following the session, we determined that a use case point for our team was worth between 8 and 12 hours. This gives us an estimated range of 1,800 to 2,300 person hours of effort, a much more manageable range and certainly (hopefully) a more realistic one.

We used the average use case point value of 10 hours for the purposes of planning. Tracking the team's progress over time using earned value, it became clear that we should have chosen the lower, 8 hour point value.

Conclusions

Use case point estimation worked out OK for the team. Realistically, any reasonable proxy would have done. We wasted a lot of time trying to find reliable industry sources for point values when the most accurate estimation of all was a simple Wide-band Delphi estimate done by the team.

The most important thing about the estimates was that, for the first time it gave us the ability to see beyond the next iteration and allowed us to project through to the end of the summer. That we were able to produce these estimates, in my mind, marked the end of the "Period of Uncertainty." From this day forward we had a plan, knew how long it would take, and could definitively determine where within that plan we currently were and whether we were where we needed to be to successfully complete the project.

Use case points were unsatisfying because use cases were generally unsatisfying as a means of recording requirements. While the nature of use case points would have allowed us to create an estimate earlier, the Wide-band Delphi session was so successful only because we had enough detail to have meaningful conversations about what was in each use case. Had we attempted this too early, the estimate would naturally have been less accurate (though perhaps still useful if you could figure out a way to track architectural progress within the created estimates).

Thursday, October 8, 2009

Deciding to use Pair Programming

During the construction phase of our project, the team decided to use XP. Of the 12 XP activities advocated in the first edition of the XP book we used 9, choosing to skip the metaphor (since we already had a solid architecture), simple design (not that we advocated complex design, just forward thinking), and having an onsite customer (since our customer was...not on site). Of all the XP practices, the most controversial for the team (and probably everyone else in the world) was pair programming.

According to our estimates, the schedule was extremely tight. We weren’t sure whether we would have enough time to complete all the features, let alone complete them working in pairs. Luckily I was introduced to some interesting data through a class I was taking, Significant Papers in Software Engineering taught by Mary Shaw. One of the papers we read was an article by Laurie Williams, Robert Kessler, Ward Cunningham, and Ron Jeffries titled Strengthening the Case for Pair Programming (PDF) and published in mid-2000. The data was a little old, but the essence of programming in pairs hasn’t changed that much.

The paper outlined two interesting findings that had a direct impact on how we might work during the summer. The first was that pair programming requires only a little more effort than programming alone: the study found that pairs need between 15% and 80% more programming hours to complete a task, not the 100% more you might expect. In the data presented, a pair would usually start out requiring more effort and become more efficient over time, approaching but never quite reaching the efficiency of a single programmer. The second finding complements the first: pair programming gets work done faster. The study found that pairs complete tasks between 20% and 40% faster than individuals. And on top of all this, the code quality was found to be superior to code written alone!

There were some problems with the data presented that made me skeptical. The first was that the projects used in the experiment were really just toy programs: small, self-contained applications that could be started and finished in a few hours. The second was that the teams were made up of undergraduate students with little to no industrial experience. The most novice person on the Square Root team had just over two years of experience, and the team average was over three and a half years. That's three and a half years of coding habits (both good and bad) that we would have to reconcile to make pair programming work, something the undergraduate teams didn't have to deal with as much.

With these things in mind, I created some projections to see if we could even afford to use pair programming, assuming the Square Root team performed similarly to the teams from the experiment. As you can see from the graphs, the findings were interesting from a planning perspective. The team decided that we were going to use some kind of code review, either pair programming or Fagan inspection.


In terms of effort, if the team operated in the 15% efficiency range, it was conceivable that pair programming would require less overall effort than Fagan inspection.


In terms of completion time, it appeared that we would be able to write more lines of code faster with pair programming than working alone (individual LOC/hour is derived from previous MSE studio rates in Java).
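The shape of these projections can be sketched with the paper's multipliers (15-80% extra effort, 20-40% faster completion). The LOC figure and solo rate below are assumptions for illustration, not the team's actual planning inputs:

```python
# Project pair vs. solo effort and elapsed time for a fixed chunk of
# code. The 1.15-1.80 effort multipliers and the 20%-40% speed-up are
# from the Williams et al. data; the LOC figure and solo rate (the
# historical MSE number) are illustrative assumptions only.

loc = 8000
solo_rate = 4.48                     # LOC per person hour, working alone

solo_hours = loc / solo_rate         # total person hours, solo

# Pairing costs 15%-80% more total effort...
pair_effort = (solo_hours * 1.15, solo_hours * 1.80)

# ...but a pair finishes the same work 20%-40% sooner in elapsed time.
pair_elapsed = (solo_hours * 0.60, solo_hours * 0.80)
```

Whether pairing beats something like Fagan inspection then hinges on where in the 15-80% effort band the team actually lands, which is exactly what the follow-up experiment set out to measure.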

With these projections in hand, we decided to commit to pair programming for the first two iterations. Personally, I was extremely curious to see how we compared to the numbers presented in the paper. So, following the first two iterations, I pushed hard to conduct an experiment. Our experiment combined informal replication and lessons learned approaches to assess the efficiency of the Square Root team compared to the previous experiment. I’ll talk more about how the experiment was set up, the specific results, and what we learned in future posts.

Creating some projections on how pair programming would affect the team's plans turned out to be a great way to create buy-in for the process and instilled the team with more confidence. I don’t think pair programming would have been used as much had we not created these early projections to show that it wouldn’t completely explode in our faces and prevent us from finishing the project by the end of summer. On top of that, these projections and the data presented laid the groundwork for a series of interesting experiments that further solidified the team’s processes and use of XP.