Ballot Debris

Thoughts on Agile Management, Leadership and Software Engineering

Corporate Greyhound Racing

September 4, 2010 04:26 by Chad Albrecht

Betting on a greyhound race can be tons of fun.  You find a cool-sounding greyhound, check the stats, then lay down $20 to win (or some other bet variation).  We are ecstatic when our greyhound wins, collecting our money and touting the prowess of our keen eye for the stats.  When we lose, we attribute it to the dog having a bad day, the weather, the law of averages, etc.  If we are avid bettors, we may begin to use complex formulas that take into account the dog’s past performances, current weather conditions, time of day, and so on.  In fact, my wife’s uncle has written a book on such formulas for greyhound racing.  Yet even after an exhaustive amount of work, we are still left with a game of chance each time we place a bet.  As hard as we try, predicting the future is hard, improving our chances takes significant effort, and in the end we may be left with nothing more than a 1 in 10 chance of being right.  We’ll come back to this in a second…

Let me now turn the focus to software project estimation.  I’ve been spending time lately reviewing the COCOMO II (B. Boehm et al.) material along with concepts of variable covariance. (as I talked about in this blog post)  What I am realizing is that the formulas used by COCOMO II bear significant similarities to those used in gambling, e.g. greyhound racing.  We look at the empirical data (if we have it) from the team’s past projects and try to determine (guess) certain characteristics for our new project.  For example, in COCOMO II’s early design model, Person-Months (PM) is estimated using the following formula:

PM = A x Size^B x M

Where A is a local coefficient that depends on things like organizational practices and the type of software.  B is an exponent that reflects the disproportionate growth in effort as project size and novelty increase, with Size measured in thousands of lines of source code.  Finally, M is a multiplier based on seven project-specific attributes, defined as follows:

M = PERS x RCPX x RUSE x PDIF x PREX x FCIL x SCED

I’m not going to go into each of these, as information on this formula is readily available.  The thing to understand is that each attribute is a value from 1 to 6 that represents a guess (albeit an educated one).
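
To make the mechanics concrete, here is a minimal Python sketch of the early design calculation.  The default values for A and B are illustrative placeholders, not calibrated COCOMO II constants, and the multipliers are assumed to be already converted from ratings to effort multipliers:

```python
def person_months(size_ksloc, A=2.94, B=1.10, multipliers=None):
    # PM = A x Size^B x M, where M is the product of the seven
    # effort multipliers (PERS, RCPX, RUSE, PDIF, PREX, FCIL, SCED).
    # A and B here are illustrative placeholders, not calibrated values.
    M = 1.0
    for m in (multipliers or []):
        M *= m
    return A * size_ksloc ** B * M

# A 50 KSLOC project with all seven multipliers nominal (1.0):
effort = person_months(50, multipliers=[1.0] * 7)
```

Note how a small change in B, the scale exponent, swings the result far more than a small change in any single multiplier, which is exactly why guessing these values feels like handicapping a race.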

What you will find in any traditional estimation system is something very similar to COCOMO II: tons of factors looking to measure this or that, complex formulas and a good mix of probability theory.  In many cases today, this type of method simply won’t work.  Why?

If we look at the challenges in quantifying each factor above we can begin to understand the reasons. 

A – Organizations change practices on a regular basis, fail to keep the necessary empirical data, and face frequent, significant technology changes.

B – Team membership changes (which may impact team cohesion), varying degrees of market understanding, and regular changes to the architecture driven by market demands.

M – Platform difficulty is not understood early on, team changes affect personnel capabilities and experience, and the reuse model is poorly understood.

To wrap this all up, I thought it would be fun to consider our greyhound race being run as a modern software development project.  First, we are betting not on who wins the race, but on how long it takes each dog to finish.  Further, we are going to change the track during the race (but we’ll announce it to the audience) and swap out dogs here and there to keep it interesting. (The dogs are just running, right?)  :)  Anyone want to go to the track?

While I’m by no means advocating that companies stop estimating, I am advocating simpler, less risky and more cost-effective models such as Planning Poker tied to Release and Sprint Planning, coupled with a Product Roadmap.  These tools, used with a good Agile approach, are showing us a more sensible way forward.  I’ll try to explain some of the ideas around why we “bet on the dogs,” along with some tools to help us move away from this, in future posts.



Estimate Histograms in TFS

March 10, 2010 12:53 by Chad Albrecht

Last July I posted an article on how to use a histogram to gauge how accurately you or your team members estimate.  I’ve had a few people ask me about this recently, so I thought I’d post on how to create these histograms in TFS 2010.  For a quick recap of what we want to accomplish with these histograms, take a look at my July article.  You will need a process template that allows you to capture Original Estimate and Completed Work values. (Such as the MSF for Agile v5.0 template)  Assuming you have Excel 2007 and Team Explorer 2010 installed, go ahead and open Excel and follow the steps below:

Step 1:

Click on the Data ribbon and select Existing Connections.

[screenshot]

Step 2:

You should see TfsOlapReport, which is a data connection to the Tfs_Analysis cube.  Select it and click Open. (If you don’t see the connection, go here.)

[screenshot]

Step 3:

You should see the Import Data dialog.  Change the location of the data to $A$2 as shown below and click OK.

[screenshot]

Step 4:

Drag the field “Completed Work” into the Values box as shown below:

[screenshot]

Step 5:

Drag the fields “Assigned To” and “State” into the Filter box as shown below:

[screenshot]

Step 6:

Drag the field “ID” into the Row Labels box as shown below:

[screenshot]

Step 7:

Select the team member you want to look at (in this case Bob Smith) and set the State filter to Closed in the filter list above the pivot table, as shown here:

[screenshot]

Step 8:

Click the dropdown next to Row Labels, select Value Filter and click on Equals.

[screenshot]

Step 9:

Set up the Original Estimate value you want to examine, as shown below. (In this case we will look at an original estimate of 16 hours.)  Click OK.

[screenshot]

Step 10:

At this point you are ready to build your histogram.  You can use Excel’s Data Analysis pack to build one for you, or you can build your own.  I like to build my own since the Data Analysis pack charts are kinda crappy, so that is the method I will show.  Start by clicking the top-left corner of the worksheet to select the entire worksheet.  Press Ctrl-C to copy it.

Step 11:

Select Sheet 2 and press Ctrl-V to paste a copy of the pivot table into the new worksheet.

Step 12:

Select the cell directly to the right of the “Completed Work” column header.

[screenshot]

Step 13:

Select the Data ribbon and click the Advanced Filter button.

[screenshot]

Step 14:

Set the List Range to all the data in the Completed Work column and select the “Unique records only” check box.  Click OK.

[screenshot]

Step 15:

You should have a list that resembles the following:

[screenshot]

Step 16:

Copy the values from the filtered Completed Work column and paste them back into Sheet 1.  This should resemble the following:

[screenshot]

Step 17:

Label the column of data you just pasted in as “Bin” and label the column to the right of it “Count”.

[screenshot]

Step 18:

In the first data cell of the Count column add the following formula:  =COUNTIFS($B$5:$B$100,"=" &D5)

[screenshot]

Step 19:

You will need to modify the first argument of the formula added in Step 18 to cover the full range of the Completed Work column, and the second argument to point to the value in the Bin column.

Step 20:

Copy this formula down into the empty cells.

[screenshot]

Step 21:

Total your Count column.

[screenshot]

Step 22:

Select your count column and click a bar chart on the Insert ribbon.

[screenshot]

Step 23:

Right-click on your chart and click Select Data.

[screenshot]

Step 24:

Select Edit under the Horizontal (Category) Axis Labels text.

[screenshot]

Step 25:

Select all the values in your Bin column for the Axis label range. Click OK all the way back to the worksheet.

[screenshot]

Step 26:

Select your Bin/Count table and then click the Sort Smallest to Largest button in the Data ribbon.

[screenshot]

You’re Done!
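
If you’d rather script the counting, the bin/count mechanics from the steps above reduce to a few lines.  A minimal Python sketch, assuming you’ve exported the Completed Work values for one person’s closed tasks at a given Original Estimate (the sample numbers are hypothetical):

```python
from collections import Counter

def histogram(completed_hours):
    # Bin identical Completed Work values and count them; this is what
    # the unique-values filter plus the COUNTIFS formula accomplishes.
    counts = Counter(completed_hours)
    return sorted(counts.items())  # (bin, count) pairs, smallest bin first

# Completed Work for tasks originally estimated at 16 hours:
bins = histogram([2.5, 4, 4, 16, 16, 16, 32, 2.5, 4])
```

Feed the resulting pairs to any charting tool and you have the same histogram.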

 

This data should be used to help you and your team get better at estimating.  As the goal of this type of exercise is to increase our skills, I would advise against using it as a means of rating individual performance.  That can backfire by creating resistance to entering real data, significantly skewing the results.

Good luck and enjoy!



Locking a Task’s Original Estimate in TFS 2010

February 26, 2010 09:04 by Chad Albrecht

During my presentation at the WI .NET User Group this past week, I had a number of questions on customizing the TFS Process Template.  I’m going to answer some of these questions in detail on my blog.  The first one I’ll cover is “Can I lock the original estimate in TFS so I can see how good we are at estimating?”  The answer is yes.  But before we begin, a couple of notes.

First make sure you have TFS Power Tools installed.  For the TFS 2010 RC, download them here.

Second, I’m going to show you how to do it using the MSF for Agile v5.0 Task Work Item Type (WIT).  While other templates are going to be slightly different, you can use this same method as a general guideline on how to do this.

Finally, what we are looking to do here is lock (make read only) our Original Estimate once we start booking time against the task.  Here’s how:

Step 1:

Open the “Task” WIT from the Server.

[screenshot]

Step 2:

Go to the Workflow tab.

[screenshot]

Step 3:

Open the initiating transition.  (The one that sets up the Active state on the far left)

[screenshot]

Step 4:

Select the Fields tab.

[screenshot]

Step 5:

Click New.

[screenshot]

Step 6:

Select the Microsoft.VSTS.Scheduling.CompletedWork field.

[screenshot]

Step 7:

Go to the Rules tab.

[screenshot]

Step 8:

Click New, select the COPY rule type, and click OK.

[screenshot]

Step 9:

Set the fields as follows:

[screenshot]

Step 10:

OK all the way back to the Workflow tab.

Step 11:

Open the Active state, either by double-clicking or by selecting “Open Details” from the context menu.

[screenshot]

Step 12:

Click New.

[screenshot]

Step 13:

Select the field Microsoft.VSTS.Scheduling.OriginalEstimate.

[screenshot]

Step 14:

Go to the Rules tab.

[screenshot]

Step 15:

Click New, select WHENNOT and click OK.

[screenshot]

Step 16:

Select Microsoft.VSTS.Scheduling.CompletedWork for the Field and 0 for the Value.

[screenshot]

Step 17:

Go to the Rules tab.

[screenshot]

Step 18:

Click New and select READONLY.

[screenshot]

Step 19:

OK all the way back to the Workflow tab.

Step 20:

Click Save.

You’re Done!
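
Under the covers, these designer steps just add rules to the Task WIT definition XML.  Here is a rough sketch of the resulting markup.  This is a reconstruction from the rule names above, so treat the exact element placement as approximate; the field refnames are from the MSF for Agile v5.0 template:

```xml
<!-- On the transition into Active: seed Completed Work with 0 -->
<FIELD refname="Microsoft.VSTS.Scheduling.CompletedWork">
  <COPY from="value" value="0" />
</FIELD>

<!-- On the Active state: lock Original Estimate once work is booked -->
<FIELD refname="Microsoft.VSTS.Scheduling.OriginalEstimate">
  <WHENNOT field="Microsoft.VSTS.Scheduling.CompletedWork" value="0">
    <READONLY />
  </WHENNOT>
</FIELD>
```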

 

Now, as long as you don’t enter any work against a task, you can continue to modify the Original Estimate field.

[screenshot]

Once hours are booked against the task, the Original Estimate field becomes read-only.

[screenshot]

Good luck and Enjoy!



Prioritizing SaaS Features – More on Ideal Hours

July 30, 2009 09:05 by Chad Albrecht

I received a couple of comments on my post about Prioritizing SaaS Features.  One was focused on the use of EV and AC, which I will address later; the other was on the use of Ideal Hours.  The comment on Ideal Hours was from Rob Park, an Agile coach from Colorado.  Here is an excerpt from Rob’s comment:

The one thing I feel is optimistic about your calculation is the ideal hours though. Ideal hours are not actual hours, so I think you need another factor in there to convert to actual hours based on actual velocity trends.

Great point, Rob!  While Ideal Hours are not representative of Man Hours, there are a few tools I use to try to get them as close as possible, i.e. to get the conversion factor as close to 1 as possible.  First, team members sign up for tasks and estimate as part of the iteration planning and decomposition process, as discussed here.  Second, estimates for specific team members are tracked against actual time, as discussed here.  This gives us a margin of error for each team member that we can use to size our buffers.  Third, we use the process described here to evenly load the work onto the team for the iteration.  Finally, if tasks change hands during the iteration, estimates and cost curves are adjusted accordingly.  This may cause one or more items to get descoped during the iteration and technical debt to increase, but that will hopefully be addressed as part of the next iteration.

So why not just use Man Hours?  I like the concept of Ideal Hours because it gives you more flexibility on estimates.  The term Man Hours has rigid foundations in project management that do not work well in the Agile world.  Think of it this way: with Ideal Hours we try to approach real Man Hours and use a factor to determine how close we are.  With Man Hours we consider estimate accuracy a measure of how wrong we were.  Ideal Hours just fit better into Agile thinking.
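
One way to track that conversion factor per team member, sketched in Python.  This assumes you log an ideal-hours estimate and the actual hours for each task; the sample history is made up:

```python
def conversion_factor(history):
    # history: (ideal_hours_estimate, actual_hours) pairs for one member.
    # A factor of 1.0 means Ideal Hours are matching real hours.
    ideal = sum(e for e, _ in history)
    spent = sum(a for _, a in history)
    return spent / ideal

def buffered_estimate(ideal_hours, history):
    # Scale a new ideal-hours estimate by the member's track record.
    return ideal_hours * conversion_factor(history)

factor = conversion_factor([(4, 5), (8, 10), (16, 21)])  # 36/28, about 1.29
```

The goal of the tools listed above is to drive this factor toward 1 so the buffers can stay small.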



Prioritizing SaaS Features

July 29, 2009 06:14 by Chad Albrecht

I’ve talked about using your Value Proposition (VP) to rank features by dollar value here, and using Ideal Days to estimate duration here.  Now let’s combine the two and look at total value-based prioritization of features or PBIs.  In essence, we are giving the highest rank to features that will yield the most value in the least amount of time (cost).  To calculate cost we will need some measure of cost per hour.  If you have the actual numbers, use them; if not, here are some rules of thumb: there are about 2,080 work hours in a year, so a $50K annual salary is about $24/hour and $100K is about $48/hour, and overhead adds a margin of 20-30%.  With these tools, let’s assume that our team makes on average $90K annually.  This gives us $43 an hour; adding 25% we get about $54 an hour.  From here we can simply multiply our cost per hour by our Ideal Hours to get the total cost per feature.  Subtract the total cost from our value (sales) and we get the profit per feature.  We then prioritize (rank) by profit.

 

Feature  Rank  % of total  Value      Estimate (Ideal Hours)  Cost      Profit
8        1     5%          $ 4,902     2                      $ 108     $ 4,794
9        1     5%          $ 4,902     2                      $ 108     $ 4,794
2        1     5%          $ 4,902     4                      $ 216     $ 4,686
1        1     5%          $ 4,902     8                      $ 432     $ 4,470
3        1     5%          $ 4,902     8                      $ 432     $ 4,470
6        1     5%          $ 4,902     8                      $ 432     $ 4,470
7        1     5%          $ 4,902     8                      $ 432     $ 4,470
11       1     5%          $ 4,902     8                      $ 432     $ 4,470
4        1     5%          $ 4,902    16                      $ 864     $ 4,038
5        1     5%          $ 4,902    16                      $ 864     $ 4,038
10       1     5%          $ 4,902    16                      $ 864     $ 4,038
12       1     5%          $ 4,902    16                      $ 864     $ 4,038
16       2     4%          $ 3,922     4                      $ 216     $ 3,706
17       2     4%          $ 3,922     4                      $ 216     $ 3,706
13       2     4%          $ 3,922     8                      $ 432     $ 3,490
14       2     4%          $ 3,922     8                      $ 432     $ 3,490
15       2     4%          $ 3,922     8                      $ 432     $ 3,490
21       3     3%          $ 2,941     2                      $ 108     $ 2,833
22       3     3%          $ 2,941     2                      $ 108     $ 2,833
18       3     3%          $ 2,941     4                      $ 216     $ 2,725
23       3     3%          $ 2,941     4                      $ 216     $ 2,725
19       3     3%          $ 2,941    16                      $ 864     $ 2,077
20       3     3%          $ 2,941    16                      $ 864     $ 2,077
24       4     2%          $ 1,961     8                      $ 432     $ 1,529
25       4     2%          $ 1,961     8                      $ 432     $ 1,529
Total                      $ 100,000  204                     $ 11,016  $ 88,984
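
Given each feature’s dollar value and Ideal Hours, the ranking itself is mechanical.  A minimal Python sketch; the $54/hour rate comes from the salary assumptions above, and the sample features are taken from the table:

```python
RATE = 54  # $/hour: ~$90K average salary over 2,080 hours plus ~25% overhead

def prioritize(features):
    # features: (feature_id, value_dollars, ideal_hours) tuples.
    # Profit = value minus labor cost; rank highest profit first.
    rows = [(fid, value - hours * RATE) for fid, value, hours in features]
    return sorted(rows, key=lambda row: row[1], reverse=True)

ranked = prioritize([(8, 4902, 2), (1, 4902, 8), (24, 1961, 8)])
# ranked == [(8, 4794), (1, 4470), (24, 1529)]
```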

 

As you can see, this moves our prioritization around a little bit.  The nice thing about the above is that we can automate it.  In your Agile management system, each feature should accept a VP rank and a duration estimate in Ideal Hours.  You will also want a mechanism to enter the value of the release ($100K in the example above).  Using these inputs, your system should be able to auto-prioritize your features for you.  Now that you have cost, you can track EV, PV and AC on your CVE vs. CVP graph.  I don’t like the PMI term “value” in the EV and PV metrics.  Their old names were Budgeted Cost of Work Performed (BCWP) and Budgeted Cost of Work Scheduled (BCWS), which I think are more fitting.  Either way, don’t confuse the term “value” in these metrics with revenue.  The value they represent is the cost, i.e. the value of the work.  For this example, we will assume that the estimates were correct and AC and EV are equal, that is, there is no cost variance during the release.  The graph would be similar to that of Figure 1.

[screenshot]

Figure 1 – CVP, CVE, PV, EV & AC

I know there are some hard-core Agile managers out there who believe that we should not concern ourselves with revenue estimates.  Some think that simply managing costs while we continue to iterate is enough.  If you are of this mindset, let me ask a few questions.  How do you know when your costs are nearing your sales?  How do you determine whether your project will be profitable?  If given multiple projects to work on, how do you choose the most important one?

I would also like to remind you that I have not taken risk into account yet, which will again change the priority of the features.  I will try to cover this in future posts.

As always, I would love to hear your thoughts on this.



SLKK - A New Agile Toolset

July 19, 2009 11:06 by Chad Albrecht

In much the same way Michael Kunze coined the acronym LAMP, I propose a new industry acronym for Agile: Scrum, Lean, Kanban & Kaizen (SLKK, pronounced “slick”).  I have successfully used a process that incorporates techniques from all of these areas and felt it needed a name.  I also feel it will be very useful to many of you who are already using, or just beginning to use, Agile.

 

Scrum – Describes the artifacts, meetings, backlog and iteration terms, etc. A good intro here. (Hat tip Ken Schwaber and Jeff Sutherland)

Lean – Describes mechanisms for the elimination of waste, systems thinking and pull concepts. Wikipedia description here. (Hat tip Tom and Mary Poppendieck)

Kanban – Introduces the card wall, reduction of work-in-progress(WIP) and resource workflows.  Personal Kanban also useful. A great description by Kenji Hiranabe here. (Hat tip Taiichi Ohno)

Kaizen – Company and culture. Also speaks to elimination of waste using the scientific method. Using actionable items found in Scrum retrospectives, teams perform “kaizen blitzes” to improve processes and performance. Personal Kaizen also useful. (Hat tip TPS, W. Edwards Deming and Joseph M. Juran)

 

I have read numerous articles and blog posts discussing one technique vs. another and have found it amazing that very few discuss how they complement one another.  This is what I hope to begin to do in this post.  If you would like to learn more about any of the four pieces, click on some of the links above or check out the books below.  First, a picture to graphically show my thoughts.

 

[screenshot]

Figure 1 – Scrum, Lean, Kanban, Kaizen Relationship

In a nutshell, here is what the process looks like:

  1. Initial Backlog prioritized by business value.  Value drawn from market (client) needs.  This is the pull in Lean.
  2. Map your value stream.  Make sure you understand how you will measure partially done work, task switching, defects, etc.
  3. High level estimates.
  4. Build release roadmap based on value and estimates.  Refactor if necessary.
  5. Release planning session for release.  Make sure everyone is clear on the vision of the release.
  6. Sprint planning session.  Limit work-in-progress (WIP) as much as possible.  Prioritize by business value.  Make sure work is buffered and balanced, with SBIs of 16 hours or less.  Make sure everyone is clear on the vision of the Sprint.
  7. Commitment on Sprint Backlog.
  8. If not already done, automated build and test environment setup.
  9. Kanban card wall goes up for Sprint.  An electronic version of this on a 60” LCD works well!  :)
  10. Daily Scrums review the card wall deltas.  If electronic, task cards added to personal Kanban walls.  If manual, use resource swim lanes.  How have we improved on our Kaizen item?
  11. Sprint Review.  Demo new value features. What was the Throughput in dollars? Review PBI’s contained in the next sprint and verify correct priority order.  Sprint Review minutes published via email to all stakeholders.
  12. Sprint Retrospective.  What are we doing well?  What could we do better?  Pick an actionable item for improvement during the next Sprint.  How will we measure our improvement?  How did we do on our last item?
  13. Ready for Release?  If yes, go to #14, if not go back to #6.
  14. Software released to production.  Release notes sent to all stakeholders including version number, release CVE, de-scoped features, added features, impediments and any open discussion items.
  15. Is the dollar value (revenue potential) for the next release greater than the cost?  If not, the project is ended or put on hold.
  16. Does this project have the highest IRR of all the potential projects for this team?  If not, change projects.
  17. Begin next release, go to #5.
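
The card wall and WIP limits from steps 6 and 9 can be modeled in a few lines.  A toy Python sketch of a WIP-limited pull system; the column names and limits are illustrative:

```python
class KanbanBoard:
    # Toy WIP-limited card wall: cards are pulled, never pushed, and a
    # column refuses new work once it reaches its WIP limit.
    def __init__(self, limits):
        self.limits = dict(limits)                    # column -> WIP limit
        self.columns = {name: [] for name in limits}  # column -> cards

    def pull(self, card, column):
        if len(self.columns[column]) >= self.limits[column]:
            return False              # over the WIP limit; the card waits
        for cards in self.columns.values():
            if card in cards:         # pulling removes it from its old column
                cards.remove(card)
        self.columns[column].append(card)
        return True

board = KanbanBoard({"To Do": 10, "In Progress": 3, "Done": 100})
```

Refusing the pull, rather than queueing it, is the point: the limit is what exposes bottlenecks.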

 

I hope to go into more detail on this technique in future posts, but for now, here are some great primer books to get you started:

Agile Software Development with Scrum (Series in Agile Software Development)
Agile Estimating and Planning (Robert C. Martin Series)
Lean Software Development: An Agile Toolkit (Agile Software Development Series)
Scrumban - Essays on Kanban Systems for Lean Software Development
Agile Retrospectives: Making Good Teams Great
Kaizen and the Art of Creative Thinking - The Scientific Thinking Mechanism
The Kaizen Pocket Handbook


The Dollar Value of SaaS Features

July 9, 2009 06:27 by Chad Albrecht

I had a discussion with a colleague yesterday on how to determine the priority of features for a given service. We quickly arrived at the topic of assessing the business need, i.e. the value, of the features. This is a conversation I've had many times, with many clients, and I thought it might be worthwhile to document some of my thoughts on it.

If you are building "shrink-wrap" software you can estimate the number of units sold and multiply by the price to get annual revenue. From there you can work backwards to establish your financial metrics (ROI, NPV, IRR, PV, EV, AC, T, I, OE, etc.). But what about when you are building Software as a Service? This is not a simple question to answer. Regardless of the difficulty, it is one that many of us will need to answer in order to effectively grow our organizations. While I don't have a one-size-fits-all answer, I would like to toss out some tools, ideas and links that may help each of us answer the question for ourselves.

Know Your Value

For starters, we should have a good understanding of the value proposition (VP) for the service. The VP speaks to why clients will choose your service over the competition. It will look something like:

"For sales teams seeking to reduce time-on-sale by as much as 25% our CRM application will provide full sales process management at half the price of the competition"

Or

"Our internal CRM application will allow our sales team to reduce time-on-sale by 25% and cost only 50% of an off-the-shelf package."

Here is some additional reading:

Developing a Compelling Value Proposition

Stop Coding, Start Marketing! Getting Your Positioning Right

Powerful Value Propositions

Bringing the Value Back Into Value Propositions

In Search of a Value Proposition

Understand the Features That Support Your Value

We add features to attract new clients, keep existing ones, or generate new sources of revenue. We should be doing this in alignment with our VP, not adding features simply because we've conceived of the idea. Given that, we should start by asking ourselves, "How important is this feature relative to our VP?" Let's use the following 5-point scale for each feature:

  1. Direct VP feature.
  2. Component part of a VP feature.
  3. Complements a VP feature.
  4. Nice to have.
  5. Optional.

When analyzing the features that support your VP, remember that only 6% of the work on software projects is value added and 64% of all software features are rarely used. (Thanks for the numbers Ryan!)

Price your Value

After you have the VP and the features to support it, you will need to work with your sales and marketing organization to understand what they feel the market will bear, in terms of price, for each unique service. They should also have an estimate on the number of users that will subscribe to the service. Jason Rothbart talks more about this here. Combine this with an innovative monetization model and you have some annual revenue numbers you can work with.

You may also be in a project that has the role of supporting the business. If this is the case, an estimate of the value to the business should still be generated. You may choose to adopt the model that if the service saves the organization 1 hour during a 40 hour week then 2.5% of the annual revenue is attributable to the project. For a $40M company this is $1M annually!!!

If there are no answers to these questions, you may be in the "build it and they will come" mode which is very difficult to generate value metrics from. Try using one of the above models to create an estimate and see how it compares to reality when everything is said and done.

Finally, you should consider what monetization models will be used by your service. Will you only generate revenue from paid subscriptions or will you adopt a freemium model? Will there be paid advertising from within the UI of your service? Are there client stakeholders that are willing to help fund the service? All these are examples of monetization models that will have an impact on revenue and need to be considered as part of the analysis.

Market Lifetime

Given the speed of today's market, including the market lifetime in your analysis is also important. Every service has a limited lifetime due to advances in technology and usefulness to the client base. Typically the market lifetime curves are bell shaped (Figure 1) since full market adoption is not gained immediately and decays slowly due to technology and market pressures.

Figure 1 - Typical Revenue Cycle

How long will your service be able to generate revenue in its current state? 6 months, 1 year, 2 years? If you assume the lifetime to be 2 years and the revenue curve to be bell-shaped with the maximum subscribers in the middle, you can make some fairly good estimates. The answer to this question will play an important part in developing the Value Schedule.
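
As a rough sketch of that assumption, you can spread total revenue over the lifetime with a curve that peaks at the midpoint.  The shape parameters below are illustrative, not derived from any real adoption data:

```python
import math

def monthly_revenue(total, months=24):
    # Bell-shaped weights peaking at mid-lifetime; a sigma of a quarter
    # of the lifetime is an arbitrary choice for illustration.
    mid, sigma = (months - 1) / 2, months / 4
    weights = [math.exp(-((m - mid) ** 2) / (2 * sigma ** 2))
               for m in range(months)]
    scale = total / sum(weights)
    return [w * scale for w in weights]

curve = monthly_revenue(1_000_000)  # 2-year lifetime, $1M total
```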

Bringing It All Together

Let's assume that we've determined we will generate $1M over the next 2 years on our service. In order to generate this revenue, we will need to add continuous value and bear continuous cost over the next 20 months. Let's further assume that we will be on 2 month release cycles over these 20 months giving us a total of 10 releases.

Using our 10 releases and our $1M in revenue we can simply calculate the value per release to be $100K. (Ignoring any discounting)

Now if we look at the first release and assume that we have 25 features, all of which we have ranked using our 5-point scale from above, we can estimate the dollar value per feature. The trick is to normalize the value based on rank. The formula is shown in Equation 1.

Value(i) = Release Value x (6 - Rank(i)) / Σj (6 - Rank(j))

Equation 1 - Feature Value Equation

Using Equation 1 in our example we can now generate a Value Schedule which will give us the dollar value of each feature.

Feature  Rank  Value
1        1     $ 4,902
2        1     $ 4,902
3        1     $ 4,902
4        1     $ 4,902
5        1     $ 4,902
6        1     $ 4,902
7        1     $ 4,902
8        1     $ 4,902
9        1     $ 4,902
10       1     $ 4,902
11       1     $ 4,902
12       1     $ 4,902
13       2     $ 3,922
14       2     $ 3,922
15       2     $ 3,922
16       2     $ 3,922
17       2     $ 3,922
18       3     $ 2,941
19       3     $ 2,941
20       3     $ 2,941
21       3     $ 2,941
22       3     $ 2,941
23       3     $ 2,941
24       4     $ 1,961
25       4     $ 1,961
Total          $ 100,000
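
The normalization is straightforward to implement.  A Python sketch, assuming the 5-point scale above (lower rank = more aligned with the VP):

```python
def value_schedule(ranks, release_value):
    # Weight each feature by (6 - rank), so rank 1 carries weight 5 and
    # rank 5 carries weight 1, then normalize to the release value.
    weights = [6 - rank for rank in ranks]
    scale = release_value / sum(weights)
    return [round(weight * scale) for weight in weights]

# The 25 features above: 12 rank-1, 5 rank-2, 6 rank-3, 2 rank-4.
values = value_schedule([1] * 12 + [2] * 5 + [3] * 6 + [4] * 2, 100_000)
# values[0] == 4902, values[12] == 3922, values[17] == 2941, values[23] == 1961
```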

Conclusion

While this is not the answer for everyone, and I realize I've left out some detail, I hope the tools I've presented here will be useful to you while you are analyzing your next project. I would love to hear what you think!



PMBOK, Agile & TOC: Planning the Project – Part 3, More on Estimates

July 1, 2009 11:46 by Chad Albrecht

Before I talk about schedule development, I want to touch on estimates a bit more, since I've received some questions on this. For those of you who are new to Agile estimation, I strongly suggest reading Mike Cohn's book Agile Estimating and Planning. It covers many of the techniques I discuss in this series.

The primary question I received was "How can you be sure your estimates are right?" The first thing we need is an expert opinion. According to Cohn, if you want to know how long something will take, ask an expert. In the PMBOK, this is called Expert Judgment (6.4.2.1 in the Third Edition). While this is a good start, I've found that some experts (read: developers) are talented estimators while others are horrible; most are somewhere in between. Because of this, we need a few tools to help assess our experts' ability to estimate.

The first tool is historical data; without it, estimating becomes an educated guess. If you don't have historical data, start collecting it today. At a minimum, you will need to collect the initial estimate and the actual time taken to complete each item. Once you have some historical data you can produce the deviations I discussed in my previous article and the histograms shown below.

The second tool is an estimate histogram. For features (vs. bugs) you can compile histograms for each expert on tasks they have estimated to be 1, 2, 4, 8, 12 and 16 hour efforts. Remember that any task over 16 hours should be broken down into smaller chunks. What you are looking for is each expert's ability to estimate accurately.

Figure 1 – 4 Hour Estimation Histogram

As we can see from Figure 1, Joe (our hypothetical expert) tends to overestimate the amount of time it takes to complete a 4-hour task. While 32 of the tasks Joe estimated at 4 hours actually took 4 hours, 72 of them took only 2.5 hours. In this case, we can leave the estimate as-is, since the median of 2.5 is only 1.5 hours off our expert's estimate. This in effect gives us a 1.5-hour buffer which, as you will see, is what we want. If we produce histograms for all other effort sizes, we may find that Joe tends to underestimate larger efforts; this would look something like Figure 2.

Figure 2 - 16 Hour Estimation Histogram

Knowing that Joe has a tendency to underestimate larger tasks, it may be in our best interest to get him to break larger tasks down further. Alternatively, as Mike Cohn suggests, we can provide feature buffers to reduce the risk of poor estimates. Both methods are valid and have a basis in the Theory of Constraints: we are assuming that Joe's underestimation of longer tasks will create a constraint during the iteration. My experience has shown that this is typically true, and the use of buffers can help protect the constraint. This is also referred to as Critical Chain Buffering, on which there is ample work available online.

To calculate buffer times, you can use the simple "just add 25%" method, which may work. In the case shown in Figure 2, adding 25% to a 16-hour estimate would only give us 20 hours, still under the 32 hours seen most often. In fact, there is only about a 40% chance that Joe will finish a 16-hour task within 32 hours, because 60% of the area of the 16-hour histogram falls between 4 and 31 hours. A better approach is to use two standard deviations which, in this case, is about 12 hours, giving us a buffered task time of about 28 hours. Mike Cohn discusses this in more detail in Chapter 17 of Agile Estimating and Planning. Eliyahu M. Goldratt suggests in his book Critical Chain simply using the median value for a given task, which in this case is about 30 hours, so we would use a buffer of 14 hours.

Whatever method works for you, create the buffer as a separate task that is dependent on the primary task; you will want to be able to track estimates independently of the buffer. The goal of this process is to center the curve over the estimated time.
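
Both buffer rules reduce to one line each given the historical actuals.  A Python sketch; the sample data is hypothetical, chosen so the median lands at 30 hours as in the example above:

```python
import statistics

def buffers(estimate, actuals):
    # Two candidate buffers for tasks of a given estimated size:
    # two standard deviations (Cohn) or median minus estimate (Goldratt).
    two_sigma = 2 * statistics.stdev(actuals)
    to_median = max(statistics.median(actuals) - estimate, 0)
    return two_sigma, to_median

# Hypothetical actuals for tasks estimated at 16 hours:
sigma_buffer, median_buffer = buffers(16, [12, 16, 24, 30, 32, 32, 40])
# median_buffer == 14
```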

Now on to Schedule Development...



PMBOK, Agile & TOC: Planning the Project – Part 2, Estimates

clock June 25, 2009 03:23 by author Chad Albrecht

In my post Planning the Project – Part 1, I talked about the use of financial metrics to determine whether or not we should execute a project. In my last post I empirically showed that slight variations in a group of dependent processes can have very dramatic effects on the outcome. So the question is: how do we use these two bits of knowledge to help us estimate? In simplest terms, we want estimation techniques that take statistical variation into account, and we want to use NPV and IRR to gauge financial impact. We can use a number of techniques to estimate time, and therefore cost. A few of the more popular ones are:

  1. Proxy-based estimation (PROBE)
  2. Parametric estimation
  3. The Planning Game or Planning Poker
  4. Putnam model-based estimation (SLIM)
  5. Evidence-based Scheduling (EBS)
  6. Other algorithm-based models (COCOMO, SEER-SEM, etc.)

Before you start using any of these techniques, it's good to have a couple of releases under your belt to gauge the velocity of your team. Depending on the Agile methodology, we may want to estimate features, Product Backlog Items, Scenarios, etc. We will want to start by having the teams make a guess at how long it will take to complete these items. I recommend breaking down any item that is estimated at more than 2 days into smaller chunks. You will also need to monitor the actual time needed to complete each item. Over a couple of releases you should get a better understanding of how good certain individuals are at estimating and how quickly they can actually get things done. Given that information, I like to build a "risk factor," or deviation, for each developer who provides estimates. Using this risk factor I can produce min/max estimates more accurately.
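A minimal sketch of what such a risk factor might look like, assuming we record (estimate, actual) pairs per developer. The developer names, numbers, and helper function names here are all hypothetical:

```python
import statistics

# Hypothetical (estimated hours, actual hours) pairs from past releases.
history = {
    "joe":  [(8, 9), (16, 28), (4, 5), (16, 30), (8, 10)],
    "anna": [(8, 8), (16, 17), (4, 4), (12, 13), (8, 9)],
}

def risk_factor(pairs):
    """Mean and standard deviation of the actual/estimate ratio."""
    ratios = [actual / estimated for estimated, actual in pairs]
    return statistics.mean(ratios), statistics.stdev(ratios)

def min_max_estimate(pairs, raw_estimate):
    """Turn a raw estimate into a (min, max) range: mean ratio +/- 1 std dev."""
    mean, sd = risk_factor(pairs)
    return raw_estimate * (mean - sd), raw_estimate * (mean + sd)

for dev, pairs in history.items():
    low, high = min_max_estimate(pairs, 16)
    print(f"{dev}: a 16 h estimate really means {low:.0f}-{high:.0f} h")
```

A developer with a wide spread of ratios gets a wide min/max range, which is exactly the signal you want when judging confidence in a sprint plan.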

Now that you have some data in hand, it's time to generate some estimates. I like to use a combination of Evidence-based Scheduling (EBS) and Proxy-based estimation. Joel Spolsky has an EBS primer here. Proxy-based estimation is simply the act of using similar pieces of completed code as the basis for current estimates, but here again we need data. Let's return to the widget example from Part 1 and look at the specifics. A 6 man-month project is a two-month project for a team of three developers. Assuming we use Scrum, we are looking at three development sprints followed by a release sprint. The two-week release sprint will be used to stabilize the software by eliminating all the bugs that are deemed unacceptable for release. Each Product Backlog Item (PBI) will be broken into one or more Sprint Backlog Items (SBIs) and used to generate the estimate. For purposes of discussing estimation, I'm just going to use the first PBI, which will take the first week of the first Sprint. All the other PBIs will be estimated the same way.
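At its core, EBS is a Monte Carlo simulation over a developer's historical velocities (estimate divided by actual). A simplified sketch of the idea follows, with made-up velocity history and task estimates; see Joel's primer for the full method:

```python
import random

# Hypothetical historical velocities (estimate / actual) for one developer.
velocities = [1.0, 0.8, 0.6, 0.9, 0.7, 0.5]

# Remaining SBI estimates for the PBI, in hours.
estimates = [8, 16, 4, 12]

def simulate_totals(estimates, velocities, rounds=10_000, seed=42):
    """For each round, divide every estimate by a randomly drawn historical
    velocity and sum; the result is a distribution of possible totals."""
    rng = random.Random(seed)  # fixed seed so the sketch is repeatable
    totals = [sum(e / rng.choice(velocities) for e in estimates)
              for _ in range(rounds)]
    return sorted(totals)

totals = simulate_totals(estimates, velocities)
print(f"50% confidence: {totals[len(totals) // 2]:.0f} h")
print(f"90% confidence: {totals[int(len(totals) * 0.9)]:.0f} h")
```

Instead of a single number, you get "we are 90% confident this fits in N hours," which is a much more honest answer to give the business.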

Figure 1 - PBI/SBI Estimation Example

Using the historical estimation data, you can produce a deviation for each developer that will give you a pretty good range. You can see from Figure 1 that we have a margin of error of 6.6 days, which is 5.5% of a 120 man-day project. It would be nice if we could reduce this, but let's review what we are trying to do. We are building an ESTIMATE, not an EXACTAMATE! What these numbers really tell you is how much confidence you can have in completing the sprints and the release on time.

I can hear people screaming now, "But Chad! You shouldn't do this in Agile!" Really? I disagree. Agile is not an open checkbook that allows development to keep working on projects indefinitely; it is a mechanism to "deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale."1 We still need to create estimates that allow the business to plan on a software release by a certain date. Remember, we can choose to add or remove scope as necessary, but we MUST meet our release dates. We estimate to gain a level of confidence that we can, in fact, meet those dates. As we move forward in the release and learn more, we will continue to plan and execute as we monitor the project during the execution phase. This is the "Plan-Do-Check-Act" cycle prescribed by the PMBOK.

If you haven't already figured it out, I'm working my way through the Planning Process Group section of the PMBOK. (3.2.2 in the Third Edition) This article covers .3 through .8 and a bit of .9. I'm going to go into a bit more detail on Schedule Development in my next post.

 

  1. Principles behind the Agile Manifesto - http://agilemanifesto.org/principles.html


PMBOK, Agile & TOC: Planning the Project – Part 1, NPV & IRR

clock June 10, 2009 12:50 by author Chad Albrecht

This is typically where things start to break down in many organizations. For companies that use Agile as a means of avoiding the tough questions, this part is usually skipped. There is a mentality that Agile will provide a "Build it and they will come" utopia where the product is magically produced and the revenue begins flowing just as quickly. Let me say now that this is NOT the case. Before an organization invests any time or money into a product or service, a number of questions should ALWAYS be answered. They are:

  1. What is the Net Present Value (NPV) for the life of the product or service?
  2. Is the NPV greater than zero?
  3. If NPV is greater than zero, what is the payback period?
  4. What is the Internal Rate of Return? (IRR)
  5. How does the IRR compare to other projects being considered?

While these may be difficult questions to answer in some cases, they must nonetheless be answered. Let's look at some basic examples of how to answer these questions using Cost Accounting. In one of the next articles I will answer these questions a bit differently using Throughput Accounting.

What is the Net Present Value (NPV) for the life of the product or service?

For this example, we will take the case of a company that builds software widgets. This company has been building these widgets for a number of years and knows the market well. Sales and marketing estimates the market price for the new widget to be $199 and believes it can sell 1,000 a year for 5 years. Let's also assume we can build the widget in 6 man-months with resources costing $100K annually, which gives us a one-time cost of $50K. We will also assume that the product won't be sold until development is complete (in 6 months).

If we define NPV as follows:

NPV = -C0 + Σ Ct/(1+r)^t, summed from t = 1 to T

Where:

T = Number of periods (years in this case)

C0 = Starting cash flow (initial investment)

Ct = Cash flow in period t

r = Interest rate (taking into account inflation and risk)

This rolls our Present Value and Benefit/Cost Ratio analysis up into one simple calculation. Using this definition with an interest rate of 10% and the data from our example we see:

NPV = -$50,000 + $199,000/1.1 + $199,000/1.1^2 + $199,000/1.1^3 + $199,000/1.1^4 + $199,000/1.1^5 ≈ $704,367
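The same calculation is easy to script. Here is a plain-Python sketch (the helper name is my own, not a standard library function) using the widget numbers from the example:

```python
def npv(c0, cash_flows, r):
    """NPV = -C0 + sum of Ct / (1 + r)^t for t = 1..T."""
    return -c0 + sum(ct / (1 + r) ** t
                     for t, ct in enumerate(cash_flows, start=1))

# $50K up front, then $199K a year (1,000 units at $199) for 5 years at 10%.
result = npv(50_000, [199_000] * 5, r=0.10)
print(f"NPV = ${result:,.0f}")  # prints "NPV = $704,367"
```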

Is the NPV greater than zero?

So we can see from the above that a $50K investment will yield $704K over the next 5 years…probably worth doing. Probably? Read on…

If NPV is greater than zero, what is the payback period?

This is a pretty simple calculation that determines when the investment is returned in the form of revenue:

Payback Period = Initial Investment ÷ Annual Cash Inflow

So in our example: $50K ÷ $199K per year ≈ 0.25 years, or about 3 months of sales.

What is the Internal Rate of Return? (IRR)

Let's look at another complementary metric, Internal Rate of Return (IRR), or discounted cash flow rate of return. IRR is a good measure of the quality of the investment. IRR is defined as the rate r where:

0 = -C0 + Σ Ct/(1+r)^t, summed from t = 1 to T

It is the interest rate that makes the NPV of the future cash flows equal to zero. This is an easy calculation using Microsoft Excel's IRR function. If we plug in our cash flows and use the formula =IRR(B1:B6), we see from Figure 1 that the IRR for our example is 398%.
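If you don't have Excel handy, the same rate can be found in a few lines of Python. This is a bisection sketch of my own; Excel's IRR function solves the same equation with its own iterative method:

```python
def npv(c0, cash_flows, r):
    return -c0 + sum(ct / (1 + r) ** t
                     for t, ct in enumerate(cash_flows, start=1))

def irr(c0, cash_flows, lo=0.0, hi=10.0, tol=1e-6):
    """Bisect for the rate r where NPV = 0 (assumes the root lies in [lo, hi])."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(c0, cash_flows, mid) > 0:
            lo = mid  # NPV still positive: the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

rate = irr(50_000, [199_000] * 5)
print(f"IRR = {rate:.0%}")  # prints "IRR = 398%", matching the Excel result
```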

Figure 1 - Microsoft Excel IRR calculation

How does the IRR compare to other projects being considered?

This is one question that often gets overlooked. If we are resource constrained, it behooves us to use the resources optimally. Let's assume that our sales and marketing team discusses another widget with us, one we estimate can be developed in 5 man-months (a one-time cost of about $41.7K at the same $100K annual resource cost), sold for $180, and selling 1,100 a year for the same 5 years. Which project should we choose? If we look at the raw revenue numbers, 1,100 units a year at $180 is $198,000 vs. the $199,000 from the first example. Using the numbers from widget number two, we calculate our NPV and get $708,909 with an IRR of 475%. It is clear from the numbers that we should build widget number two.
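Put side by side in a quick sketch, deriving each one-time cost from man-months at the $100K annual resource cost (the helper follows the NPV definition used earlier in the post):

```python
def npv(c0, cash_flows, r=0.10):
    """NPV at a 10% rate, per the definition earlier in the post."""
    return -c0 + sum(ct / (1 + r) ** t
                     for t, ct in enumerate(cash_flows, start=1))

# One-time cost = man-months x ($100K / 12); yearly revenue = units x price.
widget_one = (6 * 100_000 / 12, [1_000 * 199] * 5)  # $50K up front, $199K/yr
widget_two = (5 * 100_000 / 12, [1_100 * 180] * 5)  # ~$41.7K up front, $198K/yr

for name, (c0, flows) in [("widget one", widget_one),
                          ("widget two", widget_two)]:
    print(f"{name}: NPV = ${npv(c0, flows):,.0f}")
```

Despite its slightly lower raw revenue, widget two's smaller up-front cost gives it the higher NPV, which is the point of the comparison.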

Summary

NPV and IRR are some handy tools for evaluating and comparing projects. I know that I've skipped over a bunch of things and made some assumptions, but I wanted to introduce NPV and IRR. In future articles I will cover other tools that can be used to estimate and evaluate a project as well as go into more detail.



About me...


I am a leader, entrepreneur, software engineer, husband, father, pilot and athlete. Over the last 17 years of my career I have built numerous successful companies and software development teams. This amazing journey has taken me all over the world and allowed me to work in a number of diverse industries. I have had the privilege to meet and work with thousands of unique and talented people. As you will see from my blog I am a strong believer in Agile techniques and the Kaizen corporate culture. I am always looking to grow myself, my teams and the companies I am partnered with.



