Tuesday, November 24, 2009

The Square Root Poster

For our final reflection poster (pdf) the team decided to try something a little...different.



It's a supergraphic:
Supergraphics are interpreted by viewers on their own terms and allow an audience to absorb the information at their own rate. Sure, you may wish to call attention to certain details (that's why you're in front of them), but let the audience come to their own conclusions; this can generate fruitful discussion during or following your talk.

The best example of a supergraphic is Charles Joseph Minard's map of Napoleon's 1812 march to Moscow. It's great for a number of reasons: it's high resolution, multivariate, indicates causality between variables, and uses great graphic design. With this map as our guide, and advice from Tufte gleaned during one of his excellent seminars (students are only $200!), the Square Root team attempted to create our own, as shown above.

Reading the Poster

The X-axis shows time, logarithmically scaled to reflect the relative effort spent in each semester.

The Y-axis shows team morale. Morale is really trust, as measured using our regular team dysfunction surveys. These surveys were taken from Patrick Lencioni's The Five Dysfunctions of a Team, in which trust forms the basis of a well-functioning team.

The thickness of the line shows team maturity. Maturity was measured through our team reflection process, in which we asked three simple questions: "Is the process documented?" "Is the team following the documented process?" "Is the process working for the team?" Quantitative answers to these questions gave us a notional idea of maturity, and qualitative responses helped us improve as a team.
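As a toy illustration, here is a minimal Java sketch of how answers to those three questions could be rolled up into a notional per-process score. The 0-3 scheme and the names are invented for this post; this is not our actual survey tooling.

// Toy sketch: score one process on the three reflection questions.
public class MaturityScore {
    static int score(boolean documented, boolean followed, boolean working) {
        return (documented ? 1 : 0) + (followed ? 1 : 0) + (working ? 1 : 0);
    }

    public static void main(String[] args) {
        // e.g. a process that is documented and followed but not yet working well
        System.out.println("planning process maturity: " + score(true, true, false) + "/3");
    }
}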

The branching tributaries, or excursions, leaving the main trunk of the graph show processes that the team was able to execute independently of other processes. This is another way of thinking about maturity. For example, by the end of the summer the team had matured such that we could tweak individual processes without affecting the others.

The annotations on the graph show what the team decided were the most critical moments of our project. Critical moments are events which had a significant and immediate impact on the team in some way. You can read about many of the stories behind the critical moments on this blog.

Analyzing our data as a supergraphic allowed the team to see things that we would not have seen otherwise, to think about and reflect on the project in a way that no one else has thought about it. Some interesting things that can be seen in the graphic:
  • The forming, storming, norming, and performing team stages are clearly visible
  • The effects of better visibility on morale (we were blissfully ignorant in the fall)
  • Even negative events can become positives, as was the case in our planning breakdown
  • Commitment to process can lead to big pay-offs
  • Small changes can have a huge impact on a team, so make them.

In addition, it just plain looks awesome.

Our Message

There were two big messages that we wanted people who read our poster to take away.

First, there is no single right answer except "it depends." We designed our poster so you can take away messages that are meaningful to you. As you can see on this blog, every member of the team has taken away different ideas from the studio experience. The poster was meant to reflect this by making it easier to share advice on a wide range of topics, all of which will be interesting to someone but not all to the same person. Tufte puts it best: create an image which allows readers to explore the data using their own cognitive style.

Second, since there is no single right answer and no best way of doing things, experimentation is the key to success. The studio environment is an ideal time for experimentation. Success or failure is not nearly as important as understanding why you succeeded or failed.

Enjoy exploring the data. Please feel free to read through our blog or any other data in our project archive. If you have questions, don't hesitate to get in touch.

[Edit: We've got video of the presentation too!]

Thursday, November 19, 2009

AUP Chosen as Guiding Process




One of the best decisions made during our Studio program was adopting AUP as our overall process guidance. This was a critical decision because, early in the semester, we were wandering, unsure of what types of activities to focus on.

The AUP's phase-based approach mapped really well to the MSE program's semester structure. At the high level of the project, it helped the team plan better: it provided the go/no-go criteria for moving from one phase to another. These exit criteria became milestones in our plan, from which all the project tasks were derived. By building our plan around these milestones, we streamlined the team's effort in a single direction, and we followed the AUP process by enforcing its activities in our plan.

AUP also allowed us to embed iterations within phases, which let us use a Scrum-like iterative process for our planning and Extreme Programming for our Construction phase. We could follow the AUP over the long term of the project while using other processes within each phase.

The Planning Race

Step 1. Figure out which features to implement.

Step 2. Specify the tasks that are required to complete the desired features.

Step 3. Peer review the specified tasks.

Step 4. Calculate team members' velocities based on the previous two iterations of accomplished work.

Step 5. Start the planning race.

The planning race is where team members attempt to fill their task queues (the upper limit of which is determined by the velocity) as quickly as possible. The faster you grab tasks, the more likely you'll get to do the things you want to do. The race should take place in a large conference room where loud talking is allowed. Bargaining, bartering, calling dibs, and talking smack are all highly encouraged. If you're doing it right it should almost sound like the commodities market where teammates are buying and selling tasks at fixed hourly rates. As punishment for taking more tasks than allowed by your velocity, other team members are allowed to cherry pick tasks from your queue.
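To make the core rule concrete, here is a minimal Java sketch of the velocity-capped queue filling. All names are invented, and the real race happened out loud in a conference room, not in code.

// Sketch of the race's one hard rule: you may grab a task only while its
// hours still fit under your velocity cap (average of the last two iterations).
import java.util.ArrayList;
import java.util.List;

class Task {
    final String name;
    final double hours; // the task's agreed estimate
    Task(String name, double hours) { this.name = name; this.hours = hours; }
}

class Member {
    final String name;
    final double velocity; // queue cap: average hours accomplished over the last two iterations
    final List<Task> queue = new ArrayList<>();
    double queued = 0;

    Member(String name, double lastIteration, double previousIteration) {
        this.name = name;
        this.velocity = (lastIteration + previousIteration) / 2.0;
    }

    // Returns false when the grab would overflow the cap. (In the real race,
    // over-grabbing is punished by teammates cherry-picking your queue.)
    boolean grab(Task task) {
        if (queued + task.hours > velocity) return false;
        queue.add(task);
        queued += task.hours;
        return true;
    }
}

public class PlanningRace {
    public static void main(String[] args) {
        Member marco = new Member("Marco", 20, 16); // velocity = 18 hours
        marco.grab(new Task("Review use case UC-12", 6));
        marco.grab(new Task("Persistence layer spike", 10));
        System.out.println(marco.name + " queued " + marco.queued + "/" + marco.velocity + " hours");
    }
}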

Advantages: Dialing the planning knob to 11 means less time spent planning. I know how much you love meetings, but less time spent planning means more time spent getting things done.

Disadvantages: The Planning Race requires a healthy backlog of tasks to pull off. There have to be at least enough tasks for everyone to fill their queues, and ideally a few more left on the backlog. Tasks also have to be well specified, meaning everyone understands and agrees on what needs to be completed.

The moment of Zen for the Square Root team was when we incorporated Step 3. Peer reviewing new tasks in the backlog streamlined our entire planning process and allowed us to plan faster and better than we had ever planned before. The result: not only were we spending less time planning but the quality of our plan increased dramatically. Some of this may be due to increased maturity and practice, but I stand by the Planning Race. It's super fun.

Before tasking peer reviews:

After tasking peer reviews:

Sunday, November 15, 2009

Wicked Requirements

Rittel and Webber coined the term wicked problems (pdf) to describe a certain class of problems that were particularly tricky to deal with. Such wicked problems exhibit most of 10 specific characteristics. I propose that planning software within a fixed budget is a wicked problem.

At the start of our studio project we knew we had to gather some kind of requirements. Requirements are the first step in nearly every software lifecycle model but few processes focus any effort on describing how those requirements are supposed to be gathered. We were basically left to our own devices when building a process for requirements elicitation. Sure we had techniques we could apply, but those don't help us address the wickedness of planning the project.

There is no definitive formulation of a wicked problem. Depending on the methods we chose to elicit requirements, the solutions would change. Different techniques might prompt different reactions from the client and in turn give us a different view of the solution space. This was certainly true, as the information we got from ARM, use cases, prototypes, and problem frames was always different.

Wicked problems have no stopping rule. When do you have enough requirements? Agilists subscribe to the pay-as-you-go model to deal with this issue. Knowing that we wanted to spend more time on design and didn't have a firm grasp of the problem domain, we felt we needed more information. Any requirements engineering process we built would need to provide guidance for stopping.

Solutions to wicked problems are not true-or-false, but better or worse. Any plan we create for the project based on our requirements will never be The Plan. Our requirements can only ever provide more or less information which allows us to make better or worse plans. Of course if the requirements are incorrect...

There is no immediate and no ultimate test of a solution to a wicked problem. How do you know you gathered the right requirements? The funny thing about requirements is that, though they may be testable, unambiguous, and precise, that doesn't mean they are right. The best part is that even if the client and team think a requirement is right, once it's implemented (the eventual, partial test) everything changes. "That's great but can it do this other thing instead?"

Every solution to a wicked problem is a "one-shot operation"; because there is no opportunity to learn by trial-and-error, every attempt counts significantly. Normally this wouldn't be too big of a problem, but given that our project is so short lived, we really only get one shot at gathering our requirements or we'll be so far behind that we might be unable to catch up. Like it or not, our product ships in August.

Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan. What do requirements look like? We chose to record our requirements as use cases. Eventually we needed to add more information in the form of paper prototypes, work flow diagrams, and additional team communication.

Every wicked problem is essentially unique. Sure, folks have built web applications before, but no one has ever built a SQUARE Tool like what we've built.

Every wicked problem can be considered to be a symptom of another problem. It was not uncommon for our requirements elicitation discussions to uncover other ideas about the SQUARE Process or other ways the tool should operate (other than the specific feature we were currently discussing).

The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem's resolution. Our solution took the form of use cases. This worked well in some ways but was awkward in others. In particular, it was difficult to articulate, plan, and implement system-wide features such as quality attributes. We knew this going in and tried to compensate but our ideas didn't always work out the way we thought they would.

The planner has no right to be wrong (planners are liable for the consequences of the actions they generate). Ultimate responsibility for each requirement lay with the team. If we were unable to gather the right requirements, it would impact our grades and our relationship with our client.


10 for 10. Requirements engineering and software project planning are absolutely wicked.

Requirements Engineering
We chose to focus effort in the fall on gaining domain knowledge and gathering requirements. Whether this was the right thing to do or not I leave to another discussion on another day. Instead I'm going to discuss what we did and how it worked out.

At first we started with a depth-first approach to collecting our requirements. I've spoken on my personal reflection blog about the importance of having proper affordances in software process, and this is a case where the affordances were all wrong. Our original requirements process required that the team completely record requirements for the first three steps in the SQUARE process before moving on to the other steps.

Given that requirements engineering is a wicked problem, this process was doomed to failure. There were two main issues with it. First, the team had to wait for answers to questions from the client, thus blocking the completion of other use cases. Second, not enough was known about some use cases to continue pursuing them until further research or experimentation could be completed. According to the process, we would need to conduct those experiments before moving on to other requirements. This is obviously not satisfactory.

Almost a year ago to the day (November 14), as the team lead at the time, I called an end to the original requirements engineering process and suggested a breadth-first approach in which a known set of use cases would be specified to an agreed-upon level of quality (defined by a checklist).

The new breadth-first approach worked as planned. As a team we were able to gather information on a variety of use cases simultaneously and achieve a minimum level of understanding about the system.

Having the guts to make the change as a team allowed us to avert near disaster and let us have a complete (if not finished) set of requirements that were good enough for starting design work, prototyping, and experimentation. We nearly failed because we tried to solve a wicked problem with a tame process.

Sharepoint Installation

This blog post is probably a figment of the writer's imagination. Read at your own risk.

Background

In the fall semester, the Square Root team's tracking was inadequate. The team was tracking a bunch of stuff, but it was not getting value for it. Tracking was done in Excel sheets in a shared folder and combined using scripts. Therefore, when team members wanted to put time on their timecards, they had to sign into the VPN, remote desktop into the server, and then enter the time. This was tedious and boring. Also, the team was not accustomed to tracking. As a result, aggregating the data at the end of iterations was difficult, and the team could not draw any conclusions from it.


A New Hope

At the beginning of Spring, I installed SharePoint to tackle the tracking problem. SharePoint provides a central repository for tracking data. Its user interface is similar to Microsoft Excel, so the team did not have to learn anything new.

There was still some skepticism regarding SharePoint's abilities, probably because it was not clear at first what would be documented in the wiki versus what would live in SharePoint. However, we resolved these questions pretty quickly.

We were able to trace from the milestones set at the planning meetings to the individual tasks completed by the team. Therefore, we were also able to see the team's progress on a week to week basis.


The Empire Strikes Back

However, towards mid-spring the team realized that most of the milestones were not being completed, and at each status meeting we had action items to clean up tasks on SharePoint. One particular week it appeared that only 20% of the architecture refinement milestone was done, when in reality it was 80% done. This was an issue given that the team's planning, forward progress, and overall morale depended on the tracking data from SharePoint.

Return of the Jedi

At that time, we changed our planning process to a Scrum-like process. Team members took tasks from the backlog in a meeting, and so buy-in for those tasks increased. Since team members each took only 48 hours' worth of work (the number of hours available in an iteration), they also felt more responsible for finishing and tracking those tasks. This gradually improved our tracking process, and by the end of Spring we could rely on our tracking data.

This helped us in Summer, when we used the tracking data to measure team velocity and planned iterations based on that velocity. With the building blocks of tracking fixed in Spring, we were able to make enhancements such as using earned value and trends from previous iterations' burn-downs.

The key takeaways from this were:
  1. A good collaboration tool is essential in making tracking data available for analysis and decision making.

  2. No matter how good the tracking tool is, the team has to buy into it for it to be useful to the team.

Friday, November 13, 2009

Planning Process Communication

Background

During the first half of the spring semester, our team was following a continuation of our fall planning process. Milestones were assigned to milestone owners at the planning meeting, along with budgeted time, and then it was the milestone owners' responsibility to make sure the milestone was completed by the end of the iteration.

There were several issues with this process:


  1. The milestone owners were supposed to give out the tasks for their milestones, but they felt responsible for the tasks, and so owners tried to finish up milestones by themselves.
  2. Since there was no team review of tasks, the milestone owners often did not specify tasks in adequate detail. So even when other team members wanted to help out, they did not have enough detail about the tasks.
  3. Lack of detail contributed to bad estimates on the milestones, and so each iteration most of the milestones would not get done. As a result, the team was getting demoralized (finishing makes people happy).
At that time, right before Spring break, Michael suggested that we should try something scrum-like.


Creating the process


With Michael's suggestion and a Scrum book from Marco, I wrote up our Spring planning and tracking process that galvanized the team, organized the team's work, and brought happiness to the world. No, actually, this process raised another issue, but I'll talk about that in the next section.
As you can see, the key points of the process were these:
  1. Milestones still had owners, but the owner's responsibility was to adequately specify tasks before the planning meeting, not afterwards. These tasks formed the product backlog.
  2. The team sat down together and decided on the milestone and task priorities, and then took tasks from the backlog according to those priorities.
  3. The team lead's job was to make sure overhead tasks were also included in the individual backlogs. At the end of the planning meeting, no team member was supposed to have more than 12 hours of tasks.

After writing down the process, I emailed it out to the team, and then reviewed it at our weekly status meeting. The team seemed to generally like it.

Implementation

However, after Spring break, when we came back to implement the process, we found there were significant gaps in the team's understanding of it. It was clear that we all had different interpretations of the process, even though it was written down and crystal clear (to me, at least).

That was when we had to forget our previous reviews of the process and just work it out face to face as a team. We tried out each stage of the process and adjusted it according to our needs. We prioritized milestones, then went off to detail tasks, and then came back to take those tasks according to milestone priorities.

At the end of that five-hour exercise, we had our first product backlog and individual tasks that we were each responsible for. Since everyone had tasks they had voluntarily taken, the milestone owners were not burdened with distributing tasks. And everyone knew the milestone priorities, so the most important milestones were guaranteed to finish.

I learned a big lesson from that meeting. It does not matter how eloquently you write something down. It does not matter how emphatically your team nods their heads when you ask if they understood it. Only when a team can work through a process together and feel confident about it can it say that it understands the process.

Conclusion

The result of all this effort was a happy ending after all. In summer, the team added some additional checkpoints to make sure tasks were being specified correctly and completely, and added a client-prioritization step. However, the underlying process stayed the same.
The key takeaways from this were:

  1. Critical processes such as planning and tracking need to be communicated. And sometimes you have to sit with your team in a three-hour meeting for that communication to happen.

  2. Prioritizing milestones together with the team really helps to get team buy-in on the importance of those milestones.

  3. Since team members were taking tasks themselves, they felt more responsible for them than before.


Tuesday, November 10, 2009

Using Use Case Points

Early on we decided to record our functional requirements as use cases. Estimating how much effort was required to complete the project based on the use cases collected turned out to be a much more challenging problem. In creating our estimates I turned to use case points estimation (pdf), a technique popularized by Mike Cohn of Mountain Goat Software.

The basic premise of use case points is that by counting the number of transactions in a use case and then applying some general rules of thumb to that number (possibly adjusting for quality attributes and environmental factors), you can get a rough order-of-magnitude sense of how big a use case is. This is the same basic premise used for all estimation (count, optionally adjust, project based on data) that McConnell describes in his excellent estimation handbook, Software Estimation: Demystifying the Black Art.

Counting transactions is easy and straightforward. For the SQUARE Tool, we calculated approximately 255 unadjusted use case points and 244 adjusted use case points. [A complete breakdown of use cases and their corresponding point estimates is available on our archived project website.] The use case point estimates gave us a rough idea of how big and complex each use case would be compared to the others. The tricky part for us was projecting effort and schedule from the use case point number. Being a new team, we didn't have historical data. To further complicate matters, we were conducting this estimate in parallel with our architectural design work, much later in the life of the project than Cohn's paper implies the technique should be used.

Not having a means of projection I turned to some interesting sources. Keep in mind that the total effort allocated to the studio project is only about 5,300 person hours (over a 16 month period) and time must be split among all stakeholders including faculty (e.g. end of semester presentations). At the time these estimates were created about 4,000 person hours of work remained in the project.

Assuming 20 - 28 hours per point means we would need between 4,800 and 6,800 person hours of effort to complete the project.

Converting use case points to function points to Java lines of code (result is approximately 8,000 LOC) and then running these numbers through COCOMO II (default settings) gives an estimate of 3,700 - 5,900 person hours of effort to complete the project.

Surely MSE students work faster than the default COCOMO II assumptions. Given that MSE teams typically produce code at a rate of 4.48 LOC/hour, the Square Root team would need only 1,819 person hours to complete the project.
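The arithmetic behind the first and third of these estimates is easy to reproduce. The sketch below only echoes numbers already quoted in this post; the function-point conversion and the COCOMO II run are elided because the real model has far more parameters.

// Back-of-the-envelope reproduction of the estimate ranges quoted above.
public class UcpEstimates {
    public static void main(String[] args) {
        int points = 244;       // adjusted use case points for the SQUARE Tool
        // Industry rule of thumb: 20 - 28 hours per use case point.
        System.out.printf("rule of thumb: %,d - %,d person hours%n",
                points * 20, points * 28);      // roughly 4,800 - 6,800
        // Historical MSE rate applied to the converted LOC count.
        double loc = 8_000;     // approximate Java LOC via function points
        double rate = 4.48;     // Java LOC per hour for past MSE teams
        System.out.printf("MSE historical rate: ~%,.0f person hours%n",
                loc / rate);    // ~1,786; the 1,819 above implies slightly more LOC
    }
}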

According to these estimates, none of which corroborate one another, the project will take between 2,000 and 7,000 person hours of effort! So we'll either finish under time or blow the whole project. Not very useful.

To overcome the variation in our estimates and hopefully come up with something a little more useful, we conducted a Wide-band Delphi estimation session, sampling a set of use cases to determine an approximate value for a use case point. Following the session, we determined that use case points for our team were worth between 8 and 12 hours. This gave us an estimated range of 1,800 to 2,300 person hours of effort, a much more manageable range and certainly (hopefully) a more realistic one.

We used the average use case point value of 10 hours for the purposes of planning. Tracking the team's progress over time using earned value, it became clear that we should have chosen the lower, 8 hour point value.

Conclusions

Use case point estimation worked out OK for the team. Realistically, any reasonable proxy would have done. We wasted a lot of time trying to find reliable industry sources for point values when the most accurate estimation of all was a simple Wide-band Delphi estimate done by the team.

The most important thing about the estimates was that, for the first time it gave us the ability to see beyond the next iteration and allowed us to project through to the end of the summer. That we were able to produce these estimates, in my mind, marked the end of the "Period of Uncertainty." From this day forward we had a plan, knew how long it would take, and could definitively determine where within that plan we currently were and whether we were where we needed to be to successfully complete the project.

Use case points were unsatisfying because use cases were generally unsatisfying as a means of recording requirements. While the nature of use case points would have allowed us to create an estimate earlier, the Wide-band Delphi session was as successful as it was only because we had enough detail to have meaningful conversations about what was in each use case. Had we attempted this too early, the estimate would naturally have been less accurate (though perhaps still useful if you could figure out a way to track architectural progress within the created estimates).

Friday, October 9, 2009

Power of Presentations

No, I am not talking about Powerpoint today.

I am talking about the presentations that are less obvious—the ones we do every day. We are constantly doing presentations, although we are unaware of most of them.

You are writing an MSD paper---that’s presentation. No matter how well thought-out your content is, if you cannot spell correctly, you’re dead. Your readers will think, “s/he can’t even spell—s/he obviously cannot say anything intelligent!”

You are serving food on the table---that’s presentation. If the noodles look like worms—no matter how tasty they are, no one may touch them.

You are sending out a wiki page to your teammates---that's presentation. You may want feedback on the content and the format, but if the pictures are awry, you may get a different kind of feedback.

My grandma told me a story when I was a kid.

A great king once had a dream. He saw himself waking up one morning to find that all his teeth had fallen out, except for one. Terrified, the king woke from the dream (and made sure all his teeth were still there).

The next morning, he called the royal wizard and asked him for the meaning of this dream. The great wizard thought for a while, and his face darkened. He said with a sad face, “Your Majesty, this dream means that all your relatives would die in front of you. You’d be the last one remaining, all alone.”

The king was so angry at this bad news that he took out his sword and killed the wizard instantly.

The following day, the king summoned the wisest man in the land and asked him the meaning of the dream. The wise old man thought for a bit, and his face brightened up. "Your Majesty, this is great news," he exclaimed. "You will be the longest living among all your relatives!"

Call it euphemism or call it polishing; the wise man survived because of the way he presented the news.

So lastly, when you are communicating with others---that’s presentation also. Be very aware of what you say, as you may be presenting something you don’t mean to present.

Why is it so hard to talk about Architecture?

Through my experience in the program I have reviewed a variety of software literature about architecture. I have noticed tremendous advancement in the field of software architecture and a huge effort to unify concepts. However, I have also perceived that software architecture is still a young field of study, one that has not yet spread to many parts of the world.

My opinion is that the majority of the ambiguities in software architecture are caused by the following factors: vocabulary, representations, abstraction levels, and unawareness of the factors that impact architecture.

Vocabulary

Vocabulary is important for communicating in any area of study. However, in this abstract world an architectural term can be worth a thousand words; just think of an architectural pattern name and how many words you could write about it. Now imagine an architectural discussion with several colleagues exchanging hundreds of these loaded terms. What is the chance that, at the end of the meeting, the participants are on the same page? The point I want to make is that we need to be careful with the terminology we use. Architectural flaws have proven to be really expensive when they are caught at advanced stages of software development [Watts Humphrey]. So it is worthwhile to agree on terms and definitions, as the SQUARE process does in its first step [Nancy Mead].

Here are a few examples of terms that I find cause confusion:

o What is the difference between the terms software design, detailed design, and software architecture? Doesn't the size of the system affect the meaning of these terms?

o MVC pattern – Does this mean the same thing to everyone? If you were going to express this pattern, would it be from the static perspective or the dynamic perspective?

o Layers vs. tiers – Are these the same? Why do people use them interchangeably?

It seems that in different parts of the world, in both industry and academia, there are continuous efforts to come up with the most popular architectural dialect. As future software engineers we should be aware of this reality, and be careful how we communicate and introduce terms, because as with any change there will be resistance. Note that more important than a term itself is the meaning behind it. We see this clearly with quality attribute names like reliability and maintainability, which could mean anything if not defined.

Representations

As seen before, a term in the context of software architecture can be worth a thousand words. So what can we expect from pictures or graphics trying to represent aspects of a system? My answer: even more implicit words than terms. As Tony Lattanze mentions in his book Architecting Software Intensive Systems, a picture may be worth a thousand words, but it will not be the same thousand words for everyone. That is why I consider representations a huge source of ambiguity when communicating architectural topics.

To help resolve these ambiguities, graphical architectural representations should be accompanied by good legends specifying the different elements in the picture and their relations. However, this is not enough: we should always write explanatory prose to complement the diagram and explain the underlying assumptions and responsibilities of the elements. Another important part of presenting documentation is to state the perspective of each diagram and make sure perspectives are used consistently.

It should be easy to determine whether a diagram is from the static, dynamic, or physical perspective. Knowing the perspective of the depicted structure is critical to knowing which of its properties to assess when reviewing the diagram. For instance, knowing that a diagram is in the dynamic perspective helps with reasoning about attributes such as performance and availability, which is not possible from the static perspective.

Abstractions levels

Software is already abstract by nature, and making an abstraction of something that is already abstract is challenging. This shows us that software architecture is not easy, and it is even harder if we do not speak the same language.

It is also important to know that, in addition to perspectives for reasoning about structures, we have different levels of abstraction within each perspective. These levels of abstraction are used to remove unnecessary detail, which facilitates reasoning about certain properties of the structure in context.

For instance, think of Google Earth: we can see the planet at different levels, such as continent, country, city, and even street level. As you zoom in, you get more and more detail. These are abstraction levels for representing the structure of the system. The problem in the architecture world is that we do not have the "altitude" that helps Google Earth maintain consistent levels of abstraction. Knowing the "altitude" of our models is one of the biggest challenges in keeping a consistent level of abstraction. Mixed perspectives and mixed levels of abstraction are a huge source of ambiguity.

Factors that can impact architecture

I consider this point very important because, even before a project is conceived, many decisions are made that will have a considerable impact on the final architecture and therefore a big impact on the project. If upper management knew a bit more about how business decisions such as deadlines, resources, and team collocation affect architecture, they would include an architect from the creation of the business case. In the majority of cases there is not much to negotiate with the business (that is why these are treated as constraints), but at least raising red flags at the beginning helps avoid big surprises later. I believe good management will appreciate knowing these things upfront.

Another type of decision with huge impact is the technical constraint, generally seen when there is a need to integrate with existing systems. It also happens a lot that companies constrain themselves through technology selection. It is important to be aware that this decision will have an impact on the final structure. We should be critical about which quality attributes the selected technology will inhibit or promote, and check whether this is aligned with the system's goals.

In short, it is obvious that requirements and quality attributes will affect the architecture, but it is not always obvious that business and technical constraints will.

As our architecture professor Tony Lattanze mentions, these are like load-bearing walls: they are constraints, they are not negotiable, and they have a huge impact on the architecture. We should be aware of them and communicate their importance at the early stages of the project.

In conclusion, we know the critical role of software architecture in software projects, and we know the high probability of miscommunication due to the factors mentioned above: vocabulary, representations, abstraction levels, and unawareness of constraints. As Abin mentioned in his post, it is crucial to be proactive and try to surface ambiguities. I recommend focusing on these areas because they are a huge source of misunderstandings.

Thursday, October 8, 2009

Deciding to use Pair Programming

During the construction phase of our project, the team decided to use XP. Of the 12 XP activities advocated in the first edition of the XP book we used 9, choosing to skip the metaphor (since we already had a solid architecture), simple design (not that we advocated complex design, just forward thinking), and having an onsite customer (since our customer was...not on site). Of all the XP practices, the most controversial for the team (and probably everyone else in the world) was pair programming.

According to our estimates, the schedule was extremely tight. We weren’t sure whether we would have enough time to complete all the features, let alone complete them working in pairs. Luckily I was introduced to some interesting data through a class I was taking, Significant Papers in Software Engineering taught by Mary Shaw. One of the papers we read was an article by Laurie Williams, Robert Kessler, Ward Cunningham, and Ron Jeffries titled Strengthening the Case for Pair Programming (PDF) and published in mid-2000. The data was a little old, but the essence of programming in pairs hasn’t changed that much.

The paper outlined two interesting findings which had a direct impact on how we might work during the summer. The first finding was that pair programming requires only a little more effort than programming alone. The study found that pairs require between 15% and 80% more programming hours to complete a task, not 100% more, than programming alone. In the data presented, a pair would usually start out requiring more effort and over time become more efficient, approaching but never quite reaching the cost of a single programmer. The second finding complements the first: pair programming gets work done faster. The study found that pairs are between 20% and 40% faster than a programmer working alone. And on top of all this, the code quality was found to be superior compared to programming on your own!

There were some problems with the data presented that made me skeptical. The first was that the projects used in the experiment were really just toy programs: small, self-contained applications that could be started and finished in a few hours. The second problem was that the teams were made up of undergraduate students with little to no industrial experience. The most novice person on the Square Root team has just over two years of experience, and the team average is over three and a half years. That's three and a half years of coding habits (both good and bad) that we would have to reconcile to make pair programming work, something the undergraduate teams didn't have to deal with as much.

With these things in mind, I created some projections to see if we could even afford to use pair programming, assuming the Square Root team performed similarly to the teams from the experiment. As you can see from the graphs, the findings were interesting from a planning perspective. The team decided that we were going to use some kind of code review, either pair programming or Fagan inspection.


In terms of effort, if the team operated in the 15% efficiency range, it was conceivable that pair programming would require less overall effort than Fagan inspection.


In terms of completion time, it appeared that we would be able to write more lines of code faster with pair programming than working alone (individual LOC/hour is derived from previous MSE studio rates in Java).
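The projections themselves were simple arithmetic. Here is a hedged sketch of both sides, using the paper's 15%-80% pair effort premium and 20%-40% speedup; the 8,000 LOC and 4.48 LOC/hour figures are borrowed from our estimation work as assumptions, and the real spreadsheets also modeled the Fagan inspection alternative.

// Rough projection: total effort and elapsed time for pairs vs. solo,
// under the Williams et al. ranges. All inputs are assumptions.
public class PairProjection {
    public static void main(String[] args) {
        double loc = 8_000;       // assumed Java LOC for the project
        double soloRate = 4.48;   // assumed solo rate, LOC per hour
        double soloHours = loc / soloRate;

        // Effort: pairs spend 15% - 80% more total programming hours.
        for (double premium : new double[] {0.15, 0.80}) {
            System.out.printf("pair effort at +%.0f%%: %,.0f person hours%n",
                    premium * 100, soloHours * (1 + premium));
        }
        // Calendar time: a pair finishes a task 20% - 40% sooner than one person.
        for (double speedup : new double[] {0.20, 0.40}) {
            System.out.printf("pair elapsed time: %.0f%% of solo%n",
                    (1 - speedup) * 100);
        }
    }
}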

With these projections in hand, we decided to commit to pair programming for the first two iterations. Personally, I was extremely curious to see how we compared to the numbers presented in the paper. So, following the first two iterations, I pushed hard to conduct an experiment. Our experiment combined informal replication and lessons learned approaches to assess the efficiency of the Square Root team compared to the previous experiment. I’ll talk more about how the experiment was set up, the specific results, and what we learned in future posts.

Creating some projections on how pair programming would affect the team's plans turned out to be a great way to create buy-in for the process and instilled the team with more confidence. I don’t think pair programming would have been used as much had we not created these early projections to show that it wouldn’t completely explode in our faces and prevent us from finishing the project by the end of summer. On top of that, these projections and the data presented laid the groundwork for a series of interesting experiments that further solidified the team’s processes and use of XP.

Saturday, September 19, 2009

Architecture and Communication

My post today is going to talk about how software architecture can be a vehicle of communication.

What is Architecture?

Wikipedia defines architecture as the structure or structures of a system, which comprise software components, the externally visible properties of those components, and the relationships among them.

When software engineers think of architecture, they primarily think of documentation and diagrams of the above-mentioned structures. And the popular "architecture" sites support such notions.

For example, Wikipedia lists the Common Language Infrastructure architecture as this:






The documentation suggests that code in all of the different languages gets compiled to the Common Intermediate Language. However, it does not say some significant things: Does the compilation support each full-fledged language, or does it place restrictions on the languages? Can the languages communicate with one another? And most importantly, what quality attributes does this architecture support, and why?


Any such diagram can raise these questions, and it is important that the development team is knowledgeable about the answers. Without architectural knowledge, the team may not have a good big picture view of the system.


This "lack of big-picture view" happens in both small and big companies. Roni Burd, a program manager at Microsoft, discovered this issue with his Bing development team. He had to take action to raise the architectural awareness of the team. I have noticed this also at my new job, where the software is too large to comprehend without proper architectural perspectives. Without good architecture documents, the team members each get familiar with parts of the system, but no one can really talk about the system as a whole.


It is the architect's job to ensure that the development team is adequately knowledgeable about the answers to these questions. S/he can share this knowledge through documentation or architecture workshops or through any other means.


We used the following activities in Spring and Summer to raise architectural awareness:


ACDM architecture reviews:

We conducted architecture reviews throughout Spring. In these reviews, the architect would present a partitioning of the system, and explain the rationale for that partitioning. Then the team would raise and record issues with that partitioning.


What we could have done better:

We should have been more upfront about giving feedback on the architecture. We all had different ideas and impressions of the architecture, and we could have done a better job of harmonizing those differences. We also could not relate the pictures and diagrams of the architecture to the actual product. If we had implemented parts of the application, that would have given us a concrete mental model of how the architecture and the actual software were connected. With that mental model, we could have been better at communicating our differences of opinion about the architecture.


Architectural documentation:

Architecture documentation was our key way to communicate in Spring and Summer. These were up-to-date diagrams of the system, with design rationale relating them to the quality attributes.

What we could have done better:

Since we were implementing the product in summer, we could have improved architectural communication by printing out the architectural diagrams and posting them in the cubicle. This would have allowed the team to question and verify the architecture as they were implementing the system.


Topic of the week:

We did have some verification of the architecture at our 'topic of the week' sessions. 'Topic of the week' was our way of addressing misconceptions. Each week, at the status meetings, we budgeted some time to talk about a topic that we needed to communicate across the team.


At the architecture sessions, we addressed the following questions: how did the architectural decomposition relate to the implementation, and how were the quality attributes supported?


What we could have done better:
We could have addressed more architectural topics. There were questions about how our package structure related to our static architecture and why the packages were decomposed the way they were. These could have been addressed earlier in the project to clarify the responsibilities of each package. That would have reduced some architectural refactoring work.

Friday, September 18, 2009

What!? When’s the decision made?



Making a decision in a team is not easy, especially when working in a peer group. MSE studio teams are peer groups: although there is a team lead, it doesn't mean he or she is superior to the other team members. So, while working in this kind of environment, one big question is "Who gets to make the decision?"


A Story
I remember the first conflict in the Square Root team happened in the first semester. One day, after a class about choosing a process for a project, our team was excitedly discussing in our cubicle what process we were going to use. Everyone offered an opinion about which processes we should use each semester. I remember standing there and questioning, "Why? Why should we use this process instead of that one? Do you have any analysis to show this process will work for our project?" We spent a lot of time on the discussion, but we didn't make any concrete decision.


Sometimes it was just so difficult to get everyone on the team aligned on questions like:
When should meeting agendas be sent out?
When should task time be entered into SharePoint?
What data are we going to track for estimation?
How do we do planning?
Who should take responsibility for communicating with the client?
These kinds of consensus problems happened in a team of only five; imagine what would happen in a larger team.


Finally, we learned to use "proposals" to help the team communicate more effectively and to drive the team toward defensible decisions.


What’s inside a proposal?
First, we wrote down the objective of the decision. This sounds trivial, but sometimes people really do forget why they are doing something.
Second, we wrote down the approach or process we were going to follow. Any detailed procedure for how we do things was recorded here.
Last but not least, we also wrote down what metrics and data we wanted to get from the process. This supports future analysis of the process.


How does “proposal” change us?
After we started using the proposal approach to make decisions, the team created a lot more proposals than we expected! For example, we had a meeting proposal, a process proposal, a planning proposal, a tracking proposal, and even a proposal for proposals!
It became very clear which role was responsible for making which decision. We spent less time quarrelling about the decisions we needed to make; instead, team members would review a proposal and provide very specific suggestions.
Later on, we even invented a proposal survey mechanism that used quantitative data to show how well the proposals were working.


Writing a proposal is never a fun thing, but it is one of the approaches that kept the team aligned and held the team together!

Monday, September 14, 2009

Avoid Project Assumptions

Assumptions are the root of all evil on a project.

Why wasn’t the branch release made today?
The quality assurance manager assumed someone else was going to do it.

Why is this code so sloppy? It’s barely readable!
The lead developer assumed that wasn’t important.

Why didn’t anyone mention this problem earlier?
The team lead assumed folks would raise concerns before they became problems.

Why are we using this communication mechanism between the client and the server?
The architect assumed the modifiability quality attribute was not important.

Why was I the last to hear about the new quality assurance process?
The process manager assumed someone else would let you know.


The fastest way to project failure is to make the wrong assumptions. At the beginning of any project, especially when there’s a brand new team involved, there are going to be assumptions. People come with biases – judgment is one of the things humans are really good at. The more assumptions you can avoid as a team, the better off you’ll be. Write down your thoughts, talk about your history and experience with everyone on the team, write down the decisions you’ve made throughout the project, and record the justifications you used to make those decisions. But whatever you do, don’t assume.

Sunday, September 6, 2009

Are you indecisive? If so, play a game!




Decisions! Decisions!! Decisions!!! They are key ingredients in the colorful life of any engineer (not just for us, the reclusive software hermits!). No longer can we secure sanctuary at the confines of our desks, hoping to have our decisions deferred indefinitely or delegated to some unwilling soul. The conclusions that we arrive at often have far-reaching consequences, some of which we can barely fathom. Wouldn't it then be wise to employ the collective wisdom of our peers to arrive at conclusions that have their benediction? But wait, does that mean more boring meetings - constructs devised to drain the energy and enthusiasm of their participants? Well, not exactly. Sure, meetings are unavoidable, but they need not be long - and they definitely can be fun! How, you may ask, do you do that? By playing games! Intrigued? The strategies below illustrate how we employed games to make the decision-making process a bit more entertaining.


The Planning Poker

We had an estimate of the time that was available. We also had a list of milestones that needed to be accomplished within this time frame. The quandary placed before us was: could we accomplish all these milestones? If not, which ones do we prioritize? Sure, there were a lot of slick estimation tools available; however, most of them rely on historical data - something that was not available for our benefit in the Fall semester when we embarked on this project. The viable solution seemed to be employing the past experiences of team members to arrive at reasonable estimates - estimates that would prevent the plan from being entirely quixotic. To avoid inconclusive discussions and foster team enthusiasm, we decided to employ the game of planning poker. As http://www.planningpoker.com/ puts it: "The idea behind Planning Poker is simple. Individual stories are presented for estimation. After a period of discussion, each participant chooses from his own deck the numbered card that represents his estimate of how much work is involved in the story under discussion. All estimates are kept private until each participant has chosen a card. At that time, all estimates are revealed and discussion can begin again." In our case, the stories were milestones, and each member rationalized his estimates by discussing the tasks he believed were inherent in the milestone. Not only did this environment alleviate peer pressure, but it also brought amusement to an otherwise mundane process.
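For flavor, here is a toy Java sketch of a single round. The names and values are illustrative (the deck is the common modified-Fibonacci one), and the essential rule is simply that no estimate is revealed until everyone has committed.

// Toy sketch of one planning poker round.
import java.util.LinkedHashMap;
import java.util.Map;

public class PlanningPokerRound {
    static final int[] DECK = {1, 2, 3, 5, 8, 13, 20, 40, 100}; // allowed cards

    public static void main(String[] args) {
        // Cards stay hidden until every participant has chosen one.
        Map<String, Integer> committed = new LinkedHashMap<>();
        committed.put("Abin", 5);
        committed.put("Marco", 8);
        committed.put("Loomi", 5);

        // Simultaneous reveal.
        committed.forEach((who, card) -> System.out.println(who + " -> " + card));
        int min = committed.values().stream().min(Integer::compare).orElse(0);
        int max = committed.values().stream().max(Integer::compare).orElse(0);
        if (min != max) {
            System.out.println("Estimates differ (" + min + " vs " + max
                    + "): the low and high estimators explain, then everyone re-votes.");
        }
    }
}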

The Task Selection Race

Based on historical performance, each team member had an upper limit on the tasks that they could place on their respective backlog. This task selection process was spiced up by incorporating race flags and lights on a central progress monitor. Its principle was simple - as each team member filled up their backlog queue, the indicators associated with their queue changed from green to amber. The goal was for each team member to select tasks until the indicator changed to red. The concomitant effect of this strategy was that team members tried to race each other to the finish line. Individuals who would otherwise deliberate indecisively on which tasks to select to their queue, were now subconsciously motivated to speed up their decisions and select the tasks that they found interesting. The overall effect was a drastic reduction in meeting duration and a dramatic improvement in excitement in what would otherwise be a mundane, repetitive process.
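The light logic itself is trivial; something like this hypothetical sketch, where the 75% amber threshold is invented for illustration:

// Hypothetical queue indicator: green while capacity remains, amber as the
// queue nears the cap, red once the backlog queue is full.
public class QueueIndicator {
    enum Light { GREEN, AMBER, RED }

    static Light light(double queuedHours, double capHours) {
        double fill = queuedHours / capHours;
        if (fill >= 1.0) return Light.RED;
        if (fill >= 0.75) return Light.AMBER; // threshold is illustrative
        return Light.GREEN;
    }

    public static void main(String[] args) {
        System.out.println(light(6, 12));  // GREEN
        System.out.println(light(10, 12)); // AMBER
        System.out.println(light(12, 12)); // RED: stop selecting tasks
    }
}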

Rock, Paper, Scissors!

The simplest games, too, can come to your rescue! When all else failed, we decided to base our decisions on chance. Note that this was used only in situations where all paths were feasible and only a selection had to be made in terms of individual ownership. To cite an example: for determining a presenter for an event, this technique would come in handy to eliminate options and quickly arrive at a winner (some would say a loser!) who would take ownership of the task at hand.

Have fun!
The whole idea behind this article (if you haven't figured it out already) is that the workplace need not be boring! I have anecdotal evidence to support this, and believe that even the most mundane tasks can be transformed into something interesting. Arriving at decisions that are in conformance with multiple members is a daunting task, and it can be quite demoralizing when it degenerates into long, inconclusive meetings. Why not, then, speed up the process and have fun while at it?

Saturday, September 5, 2009

Nodding Is Not Enough

Using nodding heads as criteria to determine team understanding, alignment, and commitment is kind of risky.

Wikipedia defines it: "A nod of the head is a gesture in which the head is tilted in alternating up and down arcs. In many cultures, it is most commonly, but not universally, used to indicate agreement, acceptance, or acknowledgment." [http://en.wikipedia.org/wiki/Nod_(gesture)]

I will define nodding as a binary output from an individual, with a "true" value in a given context: yes, I am nodding, or no, I am not.

But the million-dollar question is: what are you nodding to? Which component in the diagram? Which part of the plan? Remember, it could also mean: "Yeah, right! And I'm Chuck Norris!"

Having said that, would you rely on nodding heads as a way to determine people's buy-in and understanding on your project?

Well, this post is meant to encourage software engineers to make their peers "produce content" in their team interactions, because nodding heads are not enough.

The reason it is important to make people generate content is that this is the only way ambiguities will surface. Many times it has happened to me that I thought I knew something until I tried to do it. The same thing happens in a team. There are many misconceptions that need to be revealed, preferably as soon as possible depending on the criticality of the topic. It is your job to help catch them.

Now I am going to cover some of the activities that helped our team members identify misalignment and improve communication.

Daily Status Meeting (15 min)

We started every day with a standup meeting lasting no more than 15 minutes, in which people "produced content" by answering three questions: What did you do yesterday? What are you doing today? Do you have any issues? We also addressed any important reminders.

Common time & Cubicle Discussions:

Having the whole team available at the same time in the same place is the best you can get. Having team members talk in the cubicle opened opportunities for people to produce content by asking questions, writing on the board, or even thinking out loud.

All this availability of information led us to uncover many issues and solve them right away.

Status Meetings (45 minutes):

These helped the team stay aligned on our plan, our tracking data, and quality.

In these meetings, different team members had the opportunity to produce content (planning, tracking, quality). An important lesson from these meetings was that systematically presenting content in a similar format kept people focused on the data.

Topic of the week and Sanity Checks:

This is an activity we did to proactively make people "produce content" on important topics such as architecture, processes, and planning. It helped considerably in surfacing team misalignments.

As you can see, during our studio project we invested a lot in team communication.

However, that was not all. Some software engineering practices provided additional communication channels: Fagan inspection and pair programming. These practices complemented communication areas that were not covered by the previous techniques; specifically, they improved knowledge transfer of coding practices and technology-specific concepts. Fagan inspection and pair programming also required the continuous participation of different members to identify issues that could impact quality.

All this investment paid off with team alignment, a good environment, and a good client survey result. So my recommendation is to have several different ways to assess team alignment instead of assuming too much.

Monday, August 31, 2009

XP and Earned Value

Iterative planning is great. In fact, I think it's one of the best parts of XP. As a team, we struggled when planning in the large. Looking ahead further than about a month was difficult for us, mostly because the world changed drastically as soon as we started working in it. Once the rubber hit the road, our idealized plans were rendered useless within days, sometimes within hours. Why bother looking past what you can't reasonably anticipate?

Seeing far into the future with great detail can be tough but XP’s "build only what is needed today" attitude keeps plans from getting too out of hand. With stable enough requirements and a solid architecture in place it’s relatively easy to figure out what needs to be done within a short (in our case, two weeks) iteration to meet the team’s commitments. Much more difficult is knowing how the project is progressing within the total scope of everything that needs to get done.

XP works well with pay-as-you-go style contracts. It’s a natural fit. From the development team’s perspective, the idea is to provide as much value as possible for the customer as early as it makes sense for the project. The Studio Project is very much a fixed time/cost project - we graduate in December and additional students can’t be "hired" for our project. Scope (and quality) is the only remaining negotiable software element. Though scope can be negotiated, the customer still has an idea of success in terms of the sorts of features that she needs in her software. Since we can’t extend our deadline or hire additional programmers, understanding exactly where we are in the project and how much work remains is critical.

This is where earned value analysis comes into play.

As it turns out, earned value analysis and XP can work together. The trick is understanding how the scope can change, what needs to be fixed and by when, and then modifying the standard earned value graph to show current progress in a more volatile planning environment. To use earned value within the XP environment, we made two changes, one to XP and one to the earned value graph.
  • Make commitments for two iterations at a time but only plan the next iteration in detail. For this to work, the team has to be willing to change commitments for the following iteration based on how the next iteration was executed.

  • Add a new line, the "Projected Planned Value," which is based on the known amount of work remaining. Treat this line as an extension of the standard Planned Value line.
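Concretely, the projected extension is simple to compute. Here is a minimal sketch with made-up numbers; it assumes the simplest possible projection, spreading the remaining known work evenly across the iterations left before the fixed end date.

// Sketch of extending the planned value line with a "Projected Planned Value".
// Value is measured in task hours; all figures below are invented.
public class ProjectedPlannedValue {
    public static void main(String[] args) {
        double totalKnownScope = 1_900;               // all known work, in task hours
        double[] earnedPerIteration = {90, 110, 130}; // hours of tasks completed so far
        int iterationsRemaining = 11;                 // fixed end date

        double earned = 0;
        for (double e : earnedPerIteration) earned += e;

        // Spread the remaining known work evenly over what is left of the calendar.
        double remaining = totalKnownScope - earned;
        double projectedPerIteration = remaining / iterationsRemaining;
        System.out.printf("earned value to date: %.0f of %.0f hours%n",
                earned, totalKnownScope);
        System.out.printf("projected planned value: +%.0f hours per iteration "
                + "for %d more iterations%n", projectedPerIteration, iterationsRemaining);
    }
}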

The result looks something like this (click on the image for an animated demo!):



We found that XP and earned value complemented one another quite effectively. We used burndown and the planning game for iteration planning-in-the-small and earned value to get a better feel of where we were in the larger context, when (and whether) we would actually finish the project.

Tuesday, August 25, 2009

About Square Root

Square Root was one of five studio teams from the 2009 class of Master of Software Engineering students at Carnegie Mellon University. Throughout our project we experimented with various software engineering methods and practices while working to complete a web-based security requirements elicitation and analysis tool based on the SQUARE process.

This blog is a comprehensive reflection of the time we spent working on the project - what we learned, what we thought, what we did, what we liked, what we didn't like, what "software engineering" means to us. Please ask questions or leave comments and don't hesitate to link freely to our posts. A complete archive of our project has been created as well.

SQUARE stands for Security QUAlity Requirements Engineering. There are nine steps in the SQUARE process:
  1. Agree on definition of terms

  2. Identify safety, security, and privacy goals

  3. Develop artifacts

  4. Perform risk assessment

  5. Select requirements elicitation technique

  6. Elicit security requirements

  7. Categorize security requirements

  8. Prioritize security requirements

  9. Inspect security requirements

Nancy Mead is the principal investigator for SQUARE. You can read more about SQUARE on the SEI website.

The Square Root team consists of Loomi, Abin, Sneader, Michael, and Marco.

Our studio mentors throughout the project were Dave Root, John Robert, Licinio Roque, and Paulo Rupino.

The team, from left to right: Loomi, Abin, Sneader, Michael, Marco