
Project Task Duration Estimation

•    Key to schedule success
•    Historical and repeatable tasks
•    Learn why schedules fail
•    Find out how to eliminate the target date tango
•    Build a schedule defense that manages the risks
Our experience suggests that insufficient time spent on schedule development is a key risk to project success. A little research in printed material or on the internet should confirm this hypothesis.

Is it always guess work?

Project estimation does not have to be new or novel to the executing or responsible parties on the project. Ultimately, estimation is, and always will be, an educated guess supported by prior knowledge in the form of experience and tempered with some risk mitigation. As the team marches through the project, the uncertainty shrinks along with the remaining duration, just as the arrival estimate of a GPS system becomes more accurate as we approach the target location.

Steps in estimating

Some information is essential to creating a meaningful estimate, for example:
•    A statement of scope or scope document that defines what the project is and is not
•    A task list in the form of a work breakdown structure (WBS)
•    The task details defined (not simply a list of task names)
•    Duration estimations provided by the team
•    Task dependencies (schedule and risks) clarified
•    Schedule risks identified, such as
–    Critical path (the longest consecutive, slack-less path)
–    Task variations
•    Planned schedule risk mitigation

All of the previous items should be in at least preliminary form before attempting an estimate of project schedule.

What are we doing?

The project scope really defines the constraints on the project. Without this definition and agreement with the customer (internal or external) on the boundaries, the team has no way to re-estimate after a scope change, not to mention the risks assumed when the scope does, in fact, change. The very heart of scope is an activity called the work breakdown structure, which can take several formats.
Our scope definition should also permit us to readily quantify meaningful project success by providing the boundaries with which the team will work. In short, the scope defines the restrictions on the problem space.

Work breakdown structure

Work breakdown structures are often hierarchically decomposed as cost centers particularly if the organization is following the dicta of MIL-STD-881C, the standard that defines the format and content of the WBS for the U.S. Department of Defense. Cost center or task names can originate from:
–    Organizational processes
–    Known and proven best practices
–    Expert and experienced opinion
–    Major deliverables (MIL-STD-881C)

Just as the overall project has a scope, each task objective will have a task scope. The Department of Defense expects these individual scopes to be defined in a WBS dictionary, which typically will provide a textual definition of the WBS line item. Again, these scopes also help to delineate conditions for success for each task.

Duration estimation à la PERT

Program Evaluation and Review Technique (PERT) originated in the late 1950s during the U.S. Navy's Polaris fleet ballistic missile submarine program. PERT has terminology and concepts all its own:
•    Optimistic = O
•    Most likely = ML
•    Pessimistic = P
•    Task variance
•    Normal distribution
•    Task duration as a continuum of possibilities (probability)
•    PERT equation = [(O + 4 x ML + P) / 6]

The result of the PERT equation is a weighted average that attempts to represent the conflation of the three varieties of estimate. It is possible to downplay the most likely estimate by reducing its multiplier; for example, most-likely estimates of software development duration tend to be optimistic across the whole team. Additionally, the approach assumes that all components of the model are sensible estimates in themselves.
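The weighted average is easy to sketch in Python. The durations below are hypothetical, and the adjustable weight illustrates down-weighting a suspect most-likely estimate:

```python
def pert_estimate(optimistic, most_likely, pessimistic, ml_weight=4):
    """Weighted-average duration: (O + w * ML + P) / (w + 2)."""
    return (optimistic + ml_weight * most_likely + pessimistic) / (ml_weight + 2)

# Standard PERT weighting (w = 4) for a hypothetical task, in days
print(pert_estimate(3, 5, 10))                 # (3 + 20 + 10) / 6 = 5.5
# Down-weight a habitually optimistic most-likely estimate (w = 2)
print(pert_estimate(3, 5, 10, ml_weight=2))    # (3 + 10 + 10) / 4 = 5.75
```

Reducing the weight pulls the result toward the optimistic and pessimistic tails, which is one way to compensate for a team that consistently underestimates.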

PERT example à la Excel

Here is an example of PERT using a Microsoft Excel spreadsheet. The division by six reflects PERT's assumption of a normal distribution (±3 standard deviations, six sigma in total, covers 99.73% of the possible variation), which may not be warranted by the data.
[Figure: PERT duration calculation in an Excel spreadsheet]

(Editor's note: an .xls file provided by the authors is available in the downloads section as a free download.)

Task variance

The task variance, whose square root is the standard deviation (assuming a normal distribution), derives in its roughest form from the delta between the pessimistic and optimistic durations: the PERT standard deviation is (P − O) / 6 (statisticians may observe that P − O itself is the range, a coarse measure of dispersion). The larger the variance, the higher the degree of uncertainty assumed by those doing the estimating.

Task dependencies

Task dependencies are another part of this stew. These are composed of task sequences where one task cannot start until another task is complete. For example, a facetious set of dependencies would follow the sequence egg → chick → hen → fryer. If any task other than the project kickoff (first task) or the project closing meeting (last task) has no dependencies, then the task can presumably be executed immediately if resources are available.
Large task variation usually has significant impacts on the schedule, analogous to tolerance stack-up in the mechanical world (another form of variation). Each of these variances is a schedule risk. If the project has been properly baselined in the project management software, the variance can be measured. When the team experiences large variation on dependent tasks, the project will see a “ripple effect” on schedule risk and an increase in overall project variance. If this ripple occurs on the critical path, the risk to the estimated schedule is immediate and recovery is potentially unattainable.
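Because variances, not standard deviations, add along a chain of dependent tasks, the ripple effect can be quantified. A minimal sketch, using the PERT standard deviation (P − O) / 6 and hypothetical task estimates:

```python
import math

def task_sigma(optimistic, pessimistic):
    """PERT standard deviation for a single task: (P - O) / 6."""
    return (pessimistic - optimistic) / 6

# Hypothetical dependent tasks on the critical path: (optimistic, pessimistic) days
chain = [(2, 8), (4, 10), (5, 17)]

# Variances add along a dependency chain; standard deviations do not
project_variance = sum(task_sigma(o, p) ** 2 for o, p in chain)
project_sigma = math.sqrt(project_variance)
print(round(project_sigma, 2))   # sqrt(1 + 1 + 4) ≈ 2.45
```

Note that the chain's combined sigma (about 2.45 days) is smaller than the sum of the individual sigmas (4 days), yet one high-variance task still dominates the total.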

Network diagrams & Gantt charts

We use network diagrams to understand dependencies and schedule impacts. The network is a graph-theoretical representation of dependencies. Commonly, the nodes in the graph will show a variety of information such as resources, start and finish dates, and budgetary information. The mathematical object itself is known as a directed graph (digraph).
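A digraph of this kind is easy to sketch in code. The following Python fragment represents a hypothetical task network as a dictionary and computes the project duration (the longest path) with a forward pass; it assumes the network is acyclic, as a true dependency digraph must be:

```python
# Hypothetical task network: name -> (duration in days, predecessor names)
tasks = {
    "kickoff": (1, []),
    "design":  (5, ["kickoff"]),
    "build":   (8, ["design"]),
    "docs":    (3, ["design"]),
    "test":    (4, ["build", "docs"]),
}

def earliest_finish(tasks):
    """Forward pass over the digraph: earliest finish time per task.
    Assumes the network is acyclic."""
    finish = {}
    def ef(name):
        if name not in finish:
            duration, preds = tasks[name]
            finish[name] = duration + max((ef(p) for p in preds), default=0)
        return finish[name]
    for name in tasks:
        ef(name)
    return finish

finish = earliest_finish(tasks)
print(max(finish.values()))   # longest path: kickoff -> design -> build -> test = 18
```

This is exactly the calculation a Gantt chart cannot express directly, which is why the network form is the one amenable to graph-theoretical analysis.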

Gantt charts are the best known graphical representation of projects, but this approach has some significant limitations:

•    Dependency impacts not so easily visible
•    Not a mathematical entity amenable to graph theoretical calculations

Estimation and probability

Any time an upstream manager requests single, ‘hard’ dates—which imply 100% likelihood—they request an absurdity. A more rational response would be to use a span of dates. The span of dates may indicate use of PERT, with the estimated mean plus standard deviations indicating the ‘normal’ distribution assumption. An alternative would be to use a Rayleigh distribution.
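One way to produce such a span of dates is to combine the PERT mean and standard deviation with the normal assumption and report a central band of finish dates. The numbers and start date below are hypothetical:

```python
from datetime import date, timedelta
from statistics import NormalDist

# Hypothetical PERT results for the whole project, in working days
mean_days, sigma_days = 40.0, 4.0
start = date(2024, 3, 1)   # hypothetical start date

# Report a span of finish dates at a chosen confidence rather than one 'hard' date
dist = NormalDist(mean_days, sigma_days)
lo, hi = dist.inv_cdf(0.10), dist.inv_cdf(0.90)   # central 80% span
print(start + timedelta(days=round(lo)), "to", start + timedelta(days=round(hi)))
```

Swapping the normal distribution for a Rayleigh, as discussed next, changes only the distribution object, not the idea of reporting a span.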


The Rayleigh distribution is a Weibull distribution with a shape factor whose value is the integer two.
The Minitab 15 diagram that follows shows a Rayleigh (Weibull) distribution compared with a normal distribution:
[Figure: Rayleigh (Weibull) distribution compared with a normal distribution]
The Rayleigh mean and variance are not the same as those of the normal distribution. For a Weibull distribution with shape two, scale λ, and location (position) θ, they are:

mean = θ + λ√π / 2
variance = λ²(1 − π / 4)

where the position can be negative. The scale is the point at which we have 63.2% completion of the specific task.
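These moments follow directly from the Weibull form with shape two; a short sketch (the scale and position values are hypothetical):

```python
import math

def rayleigh_mean_var(scale, position=0.0):
    """Mean and variance of a Weibull with shape 2 (the Rayleigh case),
    scale lambda and location (position) theta:
        mean     = theta + lambda * sqrt(pi) / 2
        variance = lambda**2 * (1 - pi / 4)
    """
    mean = position + scale * math.sqrt(math.pi) / 2
    variance = scale ** 2 * (1 - math.pi / 4)
    return mean, variance

mean, variance = rayleigh_mean_var(scale=10.0, position=2.0)
print(round(mean, 2), round(variance, 2))

# The scale really is the 63.2% point: the Weibull CDF at position + scale
# is 1 - e**-1, regardless of the scale chosen
print(round(1 - math.exp(-1.0), 3))   # 0.632
```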

Why is probability difficult?

Probabilistic approaches may have some difficulties, for example:
•    Lack of project history
•    Failure to baseline previous projects
•    Failure to scrutinize previous projects
•    Tendency to be optimistic (can-do attitude) or pessimistic
•    Assumption of distribution can be totally wrong (especially for tasks with no history)
•    Incorrect dependencies (joint probabilities)
•    Existence of a critical path

Each of these issues can be overcome and some factors may not be that important if team member estimates are solid. In some cases, errors are unavoidable—some of the errors we have seen are:
•    Insufficient up front time generating estimates (point source)
•    Underestimates of test time
•    Estimates provided by personnel with no experience or responsibility for the task
•    Underestimates of the impact of lateness on chain of dependencies
•    Overestimates of benefit of ‘crashing’ schedule
•    Underestimates of the human cost of overtime
We might represent the problems with an Ishikawa diagram:
[Figure: Ishikawa diagram of estimation problem causes]

Can the target ever be met?

Yes. Look at the Empire State Building:
•    Completed 1.5 months ahead of schedule
•    At less than 90% of budget!
•    And NO project software or electronic spreadsheets!!!

Expanding the interval

Expanding the confidence interval is another action that, up to a point, increases the probability of a meaningful estimate. As we expand the interval in which our estimate falls, we increase the confidence in the result: 80% confidence has a narrower interval than 90% confidence. Taken to an extreme, the interval becomes meaningless, for example:
•    Beginning of universe → end of time = 100% confidence!
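The trade-off between interval width and confidence is easy to demonstrate with Python's statistics module (the mean and sigma below are hypothetical):

```python
from statistics import NormalDist

# Hypothetical project-duration estimate: mean 40 days, sigma 4 days
dist = NormalDist(mu=40, sigma=4)

def central_interval(dist, confidence):
    """Central interval containing `confidence` of the probability mass."""
    tail = (1 - confidence) / 2
    return dist.inv_cdf(tail), dist.inv_cdf(1 - tail)

widths = {}
for c in (0.80, 0.90, 0.99):
    low, high = central_interval(dist, c)
    widths[c] = high - low
    print(f"{c:.0%} confidence: interval {widths[c]:.1f} days wide")
```

Each step up in confidence widens the interval, and the growth accelerates toward the tails, which is why very high confidence produces estimates too broad to be useful.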

Cone of uncertainty

Project estimates have a cone or triangle of uncertainty:
[Figure: cone of uncertainty narrowing as the project progresses]
The corollary to the cone of uncertainty is the impact on the target date estimate:
[Figure: target date estimate converging toward the actual date]
An additional factor is measurement uncertainty, which is equal to the estimate variance.
The following equation represents one model for estimate uncertainty:
[equation not reproduced in the source]

Managing slack

Managing the slack, or ostensibly idle time, may be the single most important factor in project success. When there is no slack, the team may move into the ‘death march’ phase, followed by project doom. Managing the slack through control mechanisms (feedback) and monitoring leads to project success: the team identifies key task (critical path) metrics, tracks them to predict task conclusion, and sounds the alarm when slack time vanishes.
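Slack itself comes from the classic forward/backward pass over the task network, and tracking it per task is the control mechanism. A sketch with a hypothetical four-task network (tasks listed in dependency order):

```python
# Hypothetical network in dependency order: name -> (duration, predecessors)
tasks = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

# Forward pass: earliest start (es) and earliest finish (ef)
es, ef = {}, {}
for t in tasks:   # relies on tasks being listed in dependency order
    duration, preds = tasks[t]
    es[t] = max((ef[p] for p in preds), default=0)
    ef[t] = es[t] + duration

# Backward pass: latest finish (lf), walking the list in reverse
project_end = max(ef.values())
lf = {t: project_end for t in tasks}
for t in reversed(list(tasks)):
    duration, preds = tasks[t]
    for p in preds:
        lf[p] = min(lf[p], lf[t] - duration)

# Slack = latest start - earliest start; zero-slack tasks form the critical path
slack = {t: lf[t] - tasks[t][0] - es[t] for t in tasks}
print(slack)   # {'A': 0, 'B': 0, 'C': 3, 'D': 0} -> critical path A, B, D
```

Any task whose slack trends toward zero during execution is migrating onto the critical path; that trend is the alarm worth sounding.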

Managing the risk

Without some kind of risk mitigation, estimates are likely to fail. We suggest the following:
•    Perform a project failure mode and effects analysis
•    Activate contingency plans to keep on track
•    Pay attention to the details

Managing deliverables

The project manager might consider managing the deliverables rather than directly managing the tasks. After all, what the customer receives is a deliverable they can see, hold, or use. Delivering a product is only part of the story: documentation and support work are deliverables as well. We introduce error when we fail to account for these items; hence it is best to develop the schedule cross-functionally.


Re-estimation

If all else fails, we can re-estimate the course of the project. Re-estimation should be routine for the following:
•    Any change in scope
–    Schedule
–    Budget
–    Feature set/quality
•    “Noise”
–    Floods, power outages, hurricanes, etc.
–    Strikes
–    Key players leave

Project complexity

Project complexity is a difficult concept to grasp quantitatively, but it is there. Additionally, we anticipate that few projects scale linearly; that is, increasing scope usually increases complexity disproportionately. Re-estimation should be treated as linear only when the scope increase is very small. Without history from other projects, nonlinear adjustments are difficult.
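One common way to hedge a nonlinear re-estimate is a power-law scope adjustment; the exponent below is a COCOMO-style assumption, not a value derived from any real project history:

```python
def reestimate(base_duration, scope_ratio, exponent=1.2):
    """Nonlinear re-estimate: duration scales as scope ** exponent.
    An exponent > 1 models complexity growing faster than scope;
    the 1.2 here is a COCOMO-style assumption, not project history."""
    return base_duration * scope_ratio ** exponent

print(round(reestimate(100, 1.10), 1))   # 10% scope growth -> about 12% longer
print(round(reestimate(100, 1.02), 1))   # very small change behaves almost linearly
```

The small-change case shows why linear re-estimation is tolerable for minor scope growth: the nonlinearity only bites as the scope ratio moves well away from one.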

What about reviews?

Reviews are a primary feedback mechanism. For reviews to really work:
•    Consider frequent in-process reviews
•    No more than 30 days apart
•    Update client after each review
•    Eliminate surprises (this way the project won’t be more than 30 days out of whack!)

We can also activate a set of prophylactic schedule responses:
•    Alter the task sequence / dependencies where possible
•    Better control the method of achieving the specific task (eliminate the risks that cause the high variation – where possible)
•    Account for the task variation in the project delivery schedule
•    Use a capacity resource planning approach (the critical chain approach)

Crashing schedules as a ‘solution’

Do NOT build schedule crashes into the estimates. For a crash to work, the team will need to maintain some slack and have a convenient management reserve (spare capacity). Once the project is unavoidably and completely on the critical path, the target date is most likely unachievable. Crashing may sound great to an upstream manager, but crashing often means the project has already exhausted its contingencies and the schedule itself is now out of control.


We say that the best schedule is the one that is accurate enough it doesn’t have to be changed. Barring that, contingency plans are the order of the day and they must be exhaustive; in fact, it is not unreasonable to have multiple or layered contingency plans. We suggest that project managers express targets in probabilistic terms and that neither they nor their managers expect the “old school try” will save a fiasco. In short, comprehensive preparation is the surest way to moderate the effects of accidents, stupid decisions, and irrational expectations.


About the Authors

Kim H. Pries (APICS CPIM; ASQ CQA, CQE, CSSBB, CRE) is responsible for all test and evaluation activities and automated test equipment at Stonebridge Electronics-North America. He holds a master's degree and presently teaches the ASQ Six Sigma Black Belt certification. Kim wrote Six Sigma for the Next Millennium: A CSSBB Guidebook, published by the American Society for Quality, and co-authored with Jon Quigley the book Project Management of Complex and Embedded Systems: Ensuring Product Integrity and Program Quality, published by Auerbach Publications.

Project Management of Complex and Embedded Systems: Ensuring Product Integrity and Program Quality




Jon Quigley, MBA, M.Sc.PM, PMP, is Manager of the Electrical/Electronic Systems and Verification group for Volvo 3P - Product Development North America. He holds two master's degrees and is a certified PMP.
Jon has secured four US patents. In 2005 he was part of the team awarded the prestigious Volvo-3P Technical Award, and he went on to win the 2006 Volvo Technology Award.

Six Sigma for the Next Millennium: A CSSBB Guidebook
