Introduction: true story (with pictures)

Greetings (and, by special request, a Hoo-ya’ to ya’ SoCal),

Many of you are here because I gave you this link while facilitating a VIP (Value Improving Process), and some of you may have gotten the link as part of my Project Manager mentoring program. I’m also getting a few hits via Google, however, so let me introduce myself…

scott01

There are lots of Scott Smiths; this one is me.

At Exxon, Project Management teams are assisted by experts, referred to as NPQC (non-process quality control). While resident at a contractor’s offices in Milan, I worked with a fascinating NPQC for Instrumentation and Computerization named Albert, who flew in regularly from Amsterdam.

Albert had a great spiel for simplification, going something like this: the ultimate computer would have only a single button, and it would be labeled: “You Know What I Mean”.

I suspect that often the first screen to flash after pushing the button on such a machine might start with: “Let me tell you a story”.

So, let’s begin such a story…

Which story? I am going to select one from the 1980s, mainly because it is illustrative of a well-executed, schedule-driven key phase of a mega-project, but also because I have lots of data, photos and even video. My follow-up assignment running the contractor’s home office activities (while the action was in the field) also gave me lots of time to put together my close-out report: two bound volumes on which I have received quite good feedback (Lou, especially: thanks!). While putting these pages together, I also reluctantly admire some youthful pluck – I was just seven years into “becoming a pro” (see my post on that), optimistic beyond all reason, and I am somewhat charmed by the voice I hear in my reportage, which I have not tempered below… My role was not Commanding Officer (Project Manager), but his very active XO, filling in as top dog for a few weeks and formally in charge of Schedule, Costs and Contract Administration throughout.

My temptation is to start with what, if this were a movie, would be called Plot Point I, on pages 25-27 of the script. Here, after some previous foreshadowing during the visit by iconic Exxon head-of-all-projects Manny Peralta, prepping for the following week’s visit of Saudi Royalty to our site, Manny expresses “interest” that a similar project in the same facility has hired Henry Kissinger to lobby top management and is offering $1 million per month in unilateral incentive bonuses to meet schedule, while my incentive plan is monthly beers at the local pub. Manny sends out Exxon’s top scheduling expert and the top scheduling expert from Fluor (our managing contractor) to perform a joint audit on my work. Conducted five months before the first of our five scheduled module-shipment dates, they report that at best I will be three months late – at $50,000 per day demurrage on the ships, plus hugely more in other disruptions – and at worst six to nine months late! Fade to “bad day” music as the sun sets over the module fabrication yard…


Positive Note: this will be resolved in Plot Point II, pages 85-90, after I have convinced my management to stick with me. Shipment 1 goes on time, even though we waited an extra four days so the president of Korea could attend the shipping-out ceremony, as do all of the other shipments, including the last shipment, two weeks early. The time frames achieved compare favorably to Exxon internal “Effective Construction Span” data based on direct construction manhours (13 months actual vs. 18 predicted), and also compare well to the next best bidder for this module work, an experienced Japanese yard (a first-pass contractual schedule six to nine months longer, which might have been negotiated down to, say at best, three months longer than was achieved). But at what price? 9.1% below the control budget; 21.2% below the estimate at the time of contract award. The modules were insured for $125 million, as part of a roughly $500 million total project.

As I said, I was tempted to start at Plot Point I. But if you will indulge me, I’ll start at the beginning (of course, feel free to jump down to page 6 to get to the meat, if you wish). By beginning, I am talking Michener-like, way beginning…

PM101: Scope and Costs

cost estimate cartoon 1

A cost estimate is, to me, a useless number without a fit for purpose, clearly defined and documented (a) scope, (b) methodology and (c) range. Each is discussed below.

History seems to hold with three major categories of cost estimates:

  • Screening;
  • Budget;
  • Definitive.

Among these, there are five specific estimates:

Screening:

  • Class 5: business planning; and
  • Class 4: facility planning.

Budget:

  • Class 3: project planning / bid evaluation; and sometimes
  • Class 2: (in-house, factored) execution / cost control.

Definitive:

  • Class 2: (in-house, factored) execution / cost control; and
  • Class 1: (contractor, detailed) execution / cost control.

Estimates are linked, generally, to decision gates. For the most part, the scope definition and methods used for each estimate will be based on what’s reliable and available for the decision gates they inform.

One of many ways of visualizing these decision gates is presented below (gates are the five blue diamonds, the three major ones identified as Gate 1, Gate 2, and Gate 3). (source: IPA)

IPA Phases

Another, more detailed visualization, also with five gates (gates are the five red diamonds – three major ones: DG1, DG2 and DG3). (source: StatOil)

StatOil Gates


Screening Estimate(s) => Gate 1 (IPA) or Gate DG-1 (StatOil)

Budget Estimate(s) => Gate 2 (IPA) or Gate DG-2 (StatOil)

Definitive Estimate(s) => Gate 3 (IPA) or Gate DG-3 (StatOil)

Another way of looking at these is by Front End Loading (FEL) Phases (source: IPA’s founder and president Merrow, DOE report)

FEL123 from IPA DOE


I’ll try to harmonize a classic (in-house) and a contemporary (contractor-detail-reliant) presentation of cost estimates, relating these to decision gates (recognizing diversity in approach and terminology, apparent even in the two images above).

The classic approach is based on the 1978 first edition of Exxon chroniclers Forrest D. Clark and A. B. Lorenzoni’s “Applied Cost Engineering”, selected because I believe it is the finest approach ever developed. (I add some detail on scope documentation.) The contemporary selection is from “Estimate Class Definitions”, published by the Cascade Section of AACE (Association for the Advancement of Cost Engineering) International (selected because it googled up first).

The main emphasis here is the difference between two interpretations of the Budget Estimate: an early, in-house version using a factored approach (Class 2), and the typical, much later, contractor-detailed approach (Class 1). The meat of this starts at page 3; some basics for perspective are presented first.

The Classic Approach

Forrest Clark

Class 0

For fun, let’s start at the end: a Class 0 Estimate. These are typically done at the completion of a project. In terms of the big two: (1) what are the quantities and (2) what are the unit rates – both are known with certainty and in detail.

If you have a methods group, they can collect a number of projects, and from these determine specific correlations. And if you have a data group, they can track historical and current factors to apply to these correlations.

Oh, there’s the one rub – often these Class 0′s are NOT done. That’s a problem, because these can become the basis for producing reliable estimates on subsequent projects.

Reality: if your project is 9.7% over budget (or worse: 20.7% over) and the key players are already champing at the bit for their new assignments, only the more disciplined and foresighted companies will collect and maintain good methods and data in-house. Others may need to look to reliable third parties, or just go with the flow (or “bottom desk-drawer” stuff). At the outset, take a moment to consider which situation you find yourself in.

Note: some systems number the earliest estimates at 1 and move forward to more detail at higher numbers. My preference is with those who start with screening estimates at 5 and 4 and move to more detail and more accuracy at 3, 2, and 1.

Screening Estimates


Business Decision: (1) Is this opportunity likely to be profitable, within acceptable risk, and worth pursuing? And if so, (2) which of several technologies / locations might be best?

Hurdle: Commit resources to pursue Basic Design.

Timeframe: Business Planning, Facilities Planning

Scope Basis: May be a page or several pages, with a feed slate and product specifications, locations, general timeframe, rough economics and technologies to achieve it. May also include alternate cases to be evaluated. Assumes a typical Project Execution Plan.

Methodology: Varies. To answer the first business decision: gross proration (Class 5); to answer the second: curves or rough semi-detailed estimate (Class 4).

Effort: With appropriate curves available, a single estimator or small team in days or weeks.

Range: “Blue Skies” estimates from R&D groups have proven to overrun 100-300%; gross prorations from a centralized estimating group are generally within +/- 40%; rough semi-detailed estimates may be within +/- 30%.

Class 5

The most common methodology is (1) Gross Prorations

(1) Gross Prorations


Clas 5 Pro-ration

The above is most applicable under the following:

Clas 5 Conditions
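For readers who want to try the proration idea directly, here is a minimal sketch assuming a capacity-exponent model (the classic “six-tenths rule”). The cost, capacity and exponent figures below are hypothetical illustrations, not values from the tables above; real exponents should come from your methods group’s data.

```python
def prorated_cost(known_cost, known_capacity, new_capacity, exponent=0.6):
    """Scale a known plant cost to a new capacity using a capacity exponent.

    The classic 'six-tenths rule' uses exponent ~0.6; actual exponents vary
    by process type and should be taken from historical correlations.
    """
    return known_cost * (new_capacity / known_capacity) ** exponent

# Hypothetical: a 100 kbd unit cost $200M; screen a 150 kbd unit.
estimate = prorated_cost(200e6, 100, 150)   # roughly $255 million
```

Note the nonlinearity: a 50% capacity increase raises the screened cost by only about 28%, which is exactly the economy-of-scale effect the proration method is meant to capture.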


Class 4

(2) Curves


Clas 4 Curves

Clas 4 Curve Basis

(3) Semi-detailed Curves

Clas 4 Semi 01

Clas 4 Semi 02

Note that these more detailed curves generate only Materials and Labor for the direct associated costs, unlike the more general curves above, which were based on Total Erected Costs (TEC). Therefore, the more detailed curves may be better suited for studies comparing equipment / design alternatives. To generate a TEC from them requires additional detail on indirect costs, per the budget estimates (Class 3 and Class 2, following).

Screening Estimates – What’s at risk: The decision to allocate resources to develop the Basic Design; not necessarily a lot of money, but there may be lots of potentially attractive other projects competing for the same scarce resources.

Screening Estimates – Reconciliation: Tracking the path from one estimate to the next is important for credibility. If the Class 3 is significantly different from the screening estimates, reconciliation may focus first on the proper application of site specifics. If these align, then focus may shift to the basic correlations used in the Class 5 and Class 4 estimates versus the reliability of the Class 3 basis.


Screening Estimates – Help! Okay, maybe you don’t have curves. Your company may not have built a lot of similar projects, or such data is inaccurate or just not available. Here are some options:

1. Get third-party expertise. Picking randomly from Google: Larkspur Associates has experienced cost engineers advertising conceptual estimating expertise, and uses the latest conceptual estimating software (lineage going back to Icarus, Icarus 2000, Questimate): Kbase (now owned by AspenTech). I have been aligned with Pathfinder, LLC, which of course I rate as excellent in this area as well. And there is a rich, high-quality data store in Independent Project Analysis’s (IPA’s) benchmarking databases; IPA is expanding the cost analysis services provided to its clients.

2. Partner with suppliers. I built a lube oil additives facility in Singapore, where a specialized machine is used to “masticate” an elastomer / oil combination to produce the multi-viscosity additives (e.g. 10W30, instead of just 10W or 30W). There are only two suppliers in the world for these machines, so it made sense to involve them very early. Likewise, in the area of instrumentation and computer control, partnering with a supplier can make their expertise available very early in the screening stages.

3. Use your contractors and subcontractors. This can be tricky, especially well ahead of contract bidding, much less award; I know of no success stories here. But major Engineering-Procurement-Construction (EPC) firms have methods and data – especially those they use for lump sum bidding. If you are doing, say, a lot of refractory work in Fluid Catalytic Cracking Unit (FCCU) reactors and regenerators, you may have a preferred refractory supplier / installer who would benefit from your accurate assessment of his scope of work. Ditto with crane subcontractors and staging subcontractors, if these become significant factors. It is now not unusual for such firms to attend your Constructability Value Improving Process (VIP) sessions; who knows how willing some might be to also share at the screening stages of an estimate?

Bottom Line (Foreshadowing):

Question: How important are these “screening” estimates?

Emerson Influence

Answer: uh… pretty important!

And… you won’t get the costs to go along with business planning and pre-project planning and analysis if you are waiting for bid award / early engineering / contractor’s take-offs… The pros will be timely in getting this right, and benefit!


Probability, more or less

Give Me a Number, please!

Every executive wants to know the completion date for his / her project.

But suppose you developed a curve for one of the following, all with the same “most likely” completion span (e.g. 15 months). When asked by the exec, how would you differentiate between them to respond properly?

Chart Narrow SYM1

Chart Wide SYM1

chart unsym high1

Chart unsym low1

One response might be: “Please give me a few minutes to talk about probability (risk) curves, and how we derived them for this project”.


Practical Background



Excerpts from an article by Dr. Sam Savage, of Stanford:

(Using probability curves and Monte Carlo simulations) can enhance our view of uncertainty and risk as dramatically as X-rays enhance our view and treatment of broken bones. Pioneering efforts to evaluate uncertainty using computer simulation were used in the study of physics during the Manhattan Project and then moved to Wall Street in the 1970s and ’80s. By the early ’90s, these techniques found acceptance among technical personnel on desktop computers in industries as diverse as finance, energy, and the environment. The way we think about uncertainty is being reshaped, along with the prospects for business success.

There’s a common misconception that business projections based on average assumptions are, on average, correct. I refer to this phenomenon as the flaw of averages, and it gums up well-intentioned plans with alarming regularity in all lines of work.

An analyst at a petroleum company once described how he got his boss to understand the implications of uncertainty on their plans. He took the boss’ own spreadsheet, replaced several cells with uncertainties, and ran a Monte Carlo simulation on it. In this way, his boss was able to gain a better understanding of his own model.
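Savage’s “flaw of averages” is easy to reproduce on a toy model, in the spirit of the analyst’s spreadsheet. A minimal sketch, standard library only, with made-up plant numbers: profit is capped by capacity, so plugging the *average* demand into the model overstates the *average* profit.

```python
import random
import statistics

random.seed(42)

capacity, margin = 100, 10          # hypothetical plant: 100 units/period, $10 margin/unit
mean_demand, sd_demand = 100, 30    # demand is uncertain, not a fixed number

# "Flaw of averages": plug in the average demand and you get the maximum profit...
profit_at_avg_demand = min(mean_demand, capacity) * margin   # = 1000

# ...but simulating the demand distribution gives a lower expected profit,
# because demand above capacity cannot be sold, while low demand still hurts.
trials = [min(random.gauss(mean_demand, sd_demand), capacity) * margin
          for _ in range(20_000)]
expected_profit = statistics.mean(trials)   # noticeably below 1000
```

The asymmetry (upside capped, downside open) is exactly what a single-number plan hides and what a Monte Carlo run makes visible.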

We will run sample Monte Carlo simulations on spreadsheets, as suggested above, in the Practical Applications section below. But those studies require that we input distributions, so a little on those first…

1. The Normal (Gaussian) Distribution


The most famous distribution is the Normal, or Gaussian. It’s very close to what you will find when you flip lots of coins, or collect people’s IQs or heights or weights. In other words, it fits lots of well-dispersed and totally independent events.

Part of its beauty: you can describe the curve with only two parameters, its mean and standard deviation, and from those get lots of information. The downside: it fits reality (especially as regards independence) much less often than its widespread use would suggest.


In fine print, in the curve to the left of Gauss on the Ten-Deutsche Mark note above, is the following formula:

The general formula for the probability density function of the normal distribution is:

f(x) = 1 / (σ√(2π)) · exp( −(x − μ)² / (2σ²) )

where μ (mu) is the location parameter (mean) and σ (sigma) is the scale parameter (standard deviation). (You can see how these parameters affect the curve by clicking on the link to an applet, the blue curve on black next to Gauss, above.)

Also, here (from the ten deutsche mark curve):

Gauss Detail

The 68-95-99.7% Rule
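The rule is easy to verify numerically. A small sketch using only the standard library (`math.erf` gives the normal CDF in closed form):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Probability density of the normal distribution (the density formula above)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative probability, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Probability mass within k standard deviations of the mean:
within = {k: normal_cdf(k) - normal_cdf(-k) for k in (1, 2, 3)}
# within[1] is about 0.683, within[2] about 0.954, within[3] about 0.997:
# the 68-95-99.7% rule.
```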

2. Curve Fit your own Distribution based on Data

If you have applicable data, that’s best. There are plenty of good curve-fitting programs out there; I like John Gilmore’s software, CurveFit, and have been using it.


(from an Excel spreadsheet with 20,000 trials)

Model graph1

model eq

model data
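For readers without a commercial package, the core idea behind such a fit can be sketched with a log-log least-squares regression. This is a generic stand-in, not the CurveFit algorithm; the capacity/cost history below is invented for illustration.

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of y = a * x**b via linear regression in log-log space.

    A stand-in for a commercial curve-fitting package; assumes positive data.
    """
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical history: capacities vs. total erected costs of past units ($M).
caps  = [50, 80, 120, 200]
costs = [120, 160, 210, 290]
a, b = fit_power_law(caps, costs)
# For this made-up data, b lands in the 0.6-0.65 range, near the classic
# capacity exponent of the screening curves.
```

The fitted exponent b then feeds directly back into the proration and curve methods discussed earlier.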


3. Use a “Most likely, Minimum, Maximum” Distribution (by Experts)

Unfortunately, there is not always data for a curve fit, nor enough independence to assume a Gaussian distribution.

That means, typically, we overlay the “best” schedule or estimate with an expert analysis of the probabilities for each independent segment. The easiest, most intuitive method is based on an assessment, for each segment, of the “most likely”, “minimum” and “maximum” values.

How to distribute these?

(a) The easiest, but least sophisticated, approach is to assume that every value within the min-max range is equally likely (a uniform distribution). This is not often used.

uniform 0

(b) The more common, yet still easy to calculate, distribution is the triangular distribution.

uniform 1 uniform 1-2

It has been published that the problem with the triangular distribution is that it may not properly model the “tails”.
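Python’s standard library happens to sample this distribution directly, which makes it easy to see the effect of an asymmetric min / most-likely / max assessment. The 12 / 15 / 24 month figures below are invented for illustration:

```python
import random
import statistics

random.seed(7)

# Expert assessment for one schedule segment (months) - hypothetical values:
minimum, most_likely, maximum = 12, 15, 24

# random.triangular(low, high, mode) samples the triangular distribution.
samples = [random.triangular(minimum, maximum, most_likely) for _ in range(50_000)]

mean_span = statistics.mean(samples)
# The triangular mean is (min + mode + max) / 3 = 17 months here: well above
# the "most likely" 15, because the distribution is skewed to the right.
```

That gap between the mode (what the expert quotes) and the mean (what you should plan around) is precisely why the exec’s single number can mislead.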


(c) While more complicated, the “PERT” distribution typically better reflects real situations, and is often my distribution of choice.

uniform 2 uniform 2-1

However, the PERT is more difficult to calculate (see the equations below). I have recently been trying out the software RiskAmp, which performs these distributions and runs Monte Carlo analyses on them. Crystal Ball can also be configured this way, and I suspect that @Risk can as well.

beta density

beta density note
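For those who want to experiment before buying software, a PERT draw can be sketched on top of the standard library’s beta sampler. The construction below (shape parameters from min / mode / max, with the usual λ = 4) is the common “modified PERT” form; the 12 / 15 / 24 values are the same invented segment as before:

```python
import random
import statistics

random.seed(7)

def pert_sample(minimum, mode, maximum, lamb=4.0):
    """One draw from a (modified) PERT distribution, built on the beta distribution.

    The standard PERT uses lamb=4; larger values concentrate mass near the mode.
    """
    a = 1 + lamb * (mode - minimum) / (maximum - minimum)
    b = 1 + lamb * (maximum - mode) / (maximum - minimum)
    return minimum + random.betavariate(a, b) * (maximum - minimum)

# Same hypothetical segment as before: 12 / 15 / 24 months.
samples = [pert_sample(12, 15, 24) for _ in range(50_000)]
mean_span = statistics.mean(samples)
# PERT mean = (min + 4*mode + max) / 6 = 16 months here: closer to the
# "most likely" than the triangular's 17, with less weight in the tails.
```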

4. Concluding Remarks on Distribution Functions

Sam Savage says it well: “You may have heard that a simulation is only as accurate as the distributions fed into it. I disagree. Before you climb on a ladder to paint your house, you shake it to test its stability. The random forces imparted when you shake the ladder are quite different from the random forces imparted when you climb on it. So are you going to stop shaking ladders because you discover that you’ve been using the wrong distribution all these years? Monte Carlo simulation provides a way to “shake” your plan to test its stability.”


Quality Decisions

Big Ones and little ones

What’s most important: getting the big decisions right, or getting the many little ones right? Answer: both.

Most of the literature, and emphasis in project management, is on getting the big ones right. Justly so. Before we get into that, though, a moment on the little ones.




Remember this, and only this: the decisions we make in a split second (are what count).

-The novel “Split Second” by Alex Kava

The integral over the project life of all the individual decisions made and actions taken by each of the project team members is what makes or breaks the project. These are based on the quality and skills of each person, daily circumstance, and the degree to which each person has “signed up” and aligned with the overall project objectives. Getting the big decisions right obviously helps with the alignment component, but to use a kitchen analogy: no matter how good the sharpening system, you need good steel in each knife.

Okay, on to the big stuff…..

Decision Quality

My thanks to the good people at Chevron, who tipped me on to this and who have incorporated this matrix into their project milestone decision-making process.

From the book “The Smart Organization” by Matheson and Matheson (broadly applicable, though primarily focused on R&D)

I will reproduce the chain, then a sample spider diagram of two decision-making processes, with the authors’ tabular explanation.

The overall point seems to be this: identify the basis upon which decisions are made. Then, pay particular attention to the weakest areas, and either (a) bolster these aspects, or (b) accommodate these realities into the perceived reliability and future robustness of the decisions taken.

DQ Chain


DQ Spider

DQS Table

A Beautiful Thing

beautiful china

For a cost geek, there may be nothing more beautiful than a Class II Estimate from Exxon’s Golden Years.

How did this system come about? I don’t really know, but perhaps some commenters to this site may provide insight (hint, hint…).

I suspect it was something like this. A prescient executive foresaw the sort of environment that Exxon would describe to me in 1980: “we have enough profitable projects to engage every engineer in the world.” They knew that methods would be needed to select the best from among all of the opportunities. And the breakthrough idea was that the selection process would also provide the backbone to ensure successful delivery of the project.

There must have been some sort of “virtuoso team” involved (Exxon paid the best to hire the best, a story for another time). And this team must have been given quite a favorable time frame and budget.

In the end, these estimates didn’t come cheap. I recall being told that 2-3% of the entire project budget might be spent on the Class II (this seems ridiculously high, even if the costs included funding a share of the data collection, quarterly vendor reviews and the cost methods team; perhaps I misremember…).

But oh, my – what a useful baseline! It was said that you could order bulk materials based on the Class II, and that some fast-track projects did (e.g. piping: sizes, materials and lengths; structural steel; foundation materials; electrical conduit and switchgear, maybe even control valves and instruments). It also included an estimate of all the engineering and direct construction manhours, with productivity per the likely schedule and at location. All this prior to contractor selection.

That’s right, best of all: the propitious timing of its results. The Class II was available before the engineering contractor had the time to do all of the design drawings and specifications and take-offs that usually precede such an estimate (and therefore, performs an otherwise unavailable check on those efforts). So, months ahead, you get a reliable baseline look.

So… how does it work?

Forrest Clark

Let me use Clark and Lorenzoni’s 1978 First Edition of “Applied Cost Engineering” for some background. (Note: I am still searching for a good copy of the 1985 Second Edition, but I also used the 1993 Third Edition in my post “PM 101: Scope and Cost”.)

………………………………more to follow

The Wisdom of Crowds and Virtuoso Teams

The Many and the Few…

crowdsVirtuoso team2

There is a healthy creative tension between two ways of providing guidance and boosting your chances of success as you plan and execute your projects.

One is heavy on the involvement of those actually doing the work, characterized by the Value Improving Processes (VIP’s). Get those directly involved on board early, in a structured format based on statistically demonstrated success. Develop the top actions likely to benefit your project, and follow up their implementation. Let’s explore this with reference to the influential bestseller “The Wisdom of Crowds”.

Another utilizes the collective wisdom of a select group of experts, typically not directly involved with the ongoing work, but with a track record in similar projects. These are typically referred to as “Independent Project Reviews“. A seminal Harvard Business Review article on “Virtuoso Teams” provided my phraseology.

I think that the two complement each other, and suggest most projects coordinate their complementary use. Three or four of the most relevant VIP’s up front, and IPR’s at the 30% and 70% completion points of engineering should, in my opinion, be considered.

The Wisdom of Crowds

of course, a story:

Quote (excerpts)

One day in the fall of 1906, the British scientist Francis Galton headed for the Country Fair… As he walked through the exhibition that day, Galton came across a weight-judging competition. A fat ox had been selected and members of a gathering crowd were lining up to place wagers on the (slaughtered and dressed) weight of the ox.

Eight hundred people tried their luck. They were a diverse lot. Many of them were butchers and farmers, but there were also quite a few who had no insider knowledge of cattle. “Many non-experts competed”, Galton wrote later in the scientific journal Nature.

Galton was interested in figuring out what the “average voter” was capable of because he wanted to prove that the average voter was capable of very little. When the contest was over and the prizes had been awarded, Galton borrowed the tickets from the organizers and ran a series of statistical tests on them, including the mean of the group’s guesses.

Galton undoubtedly thought that the average guess of the group would be way off the mark. After all, mix a few very smart people with some mediocre people and a lot of dumb people, and it seems like you’d end up with a dumb answer. But Galton was wrong – the crowd guessed 1,197 pounds; after it had been slaughtered and dressed the ox weighed 1,198 pounds.

Galton wrote later: “The result seems more creditable to the trustworthiness of a democratic judgment than might have been expected.” That was, to say the least, an understatement.
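Galton’s result is easy to reproduce in a toy simulation: give each of 800 hypothetical fairgoers an independent, unbiased error, and the mean of their rough guesses lands very close to the truth. The ±75 lb error spread below is an assumption for illustration, not Galton’s data:

```python
import random
import statistics

random.seed(1906)

true_weight = 1198  # pounds: the slaughtered and dressed ox

# Hypothetical crowd: each guess is the truth plus a personal, independent error.
guesses = [true_weight + random.gauss(0, 75) for _ in range(800)]

crowd_estimate = statistics.mean(guesses)
typical_individual_error = statistics.mean(abs(g - true_weight) for g in guesses)
# The crowd's mean lands within a few pounds of the true weight, while the
# typical individual is off by roughly 60 pounds.
```

Note the caveat baked into the model: the errors must be independent and unbiased. Correlated errors (herding) are exactly the failure mode the book warns about.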


There are three types of problems described in this book:

1. Cognition Problems – problems that have, or will have, definitive answers. Which crane is best for this heavy lift, and how many direct construction manhours will be expended on this project, are cognition problems.

2. Coordination Problems – require members of a group (process design and procurement, construction and safety) to figure out how to coordinate their behaviors with one another, knowing that everyone else is trying to do the same.

3. Cooperation Problems – involve the challenge of getting self-interested, possibly distrustful people to work together. Owners and contractors; construction firms and unions; facility employees and the community are examples.


The conditions necessary for a group to be wise: (a) independence, (b) diversity and (c) a particular kind of decentralization.

(Note: groups make bad decisions as well as good ones: think riots and stock market bubbles. There are un-wise groups, the way the world works, as well…)


Example: learning from the bees, a twofold process. First, uncover the possible alternatives. Then decide among them.


Independence is important to intelligent decision making for two reasons. First, it keeps the mistakes of people from becoming correlated. Second, independent individuals are more likely to have new information.

(In the bee hive example cited, the scout bees need to be independent in order to seek out every available source of nectar. Track record: searching within three miles of the hive, there is a greater than 50/50 chance that they will find any flower patch within a mile)


Diversity helps because it actually adds perspectives, and it takes away, or at least weakens, some of the destructive characteristics of group decision making. It’s a familiar truism that governments can’t, and therefore shouldn’t try to, “pick winners”…no system seems that good at picking winners in advance. What makes a system successful is its ability to recognize losers and kill them quickly.

(In the bee hive example, the forager bees follow those scout bees who bring in the most nectar and/or have the shortest path to the source and who show it in their “waggle and dance”. Other forager bees follow, preferentially, their most successful “waggling and dancing” brethren. Note: if you’ve got the waggle, you’d better have the nectar, baby!)


In terms of decision making and problem solving, there are a couple of things about decentralization that really matter. It fosters, and in turn is fed by, specialization, which tends to make people more productive and efficient. Also, the closer a person is to a problem, the more likely he or she is to have a solution for it.

Decentralization’s great strength is that it encourages independence and specialization. Its great weakness is that there is no guarantee that information uncovered in one part of the system will find its way to the rest.

Random Quotes:

A survey found that physicians, nurses, lawyers, engineers, entrepreneurs, and investment bankers all believed they knew more than they really did. It wasn’t just that they were wrong; they didn’t have any idea how wrong they were. The only forecasters whose judgments were routinely well calibrated were expert bridge players and weathermen.

Herders may think they want to be right, and perhaps they do. But for the most part, they are following the herd because that’s where it’s safest. John Maynard Keynes wrote in The General Theory of Employment, Interest and Money: “Worldly wisdom teaches that it is better for reputation to fail conventionally than to succeed unconventionally”.

Virtuoso Teams

I have worked (on and off) for several years for Pathfinder, LLC – attracted there in large part because of the ways its founder and president, Lou Cabano, fosters and gets the most out of virtuoso teams. Something “magic” happens when small groups of really smart people, relying on well-honed “chunks” of knowledge, wisdom and experience, “go at it” to solve the tough ones.

Virtuoso Teams Cover

The HBR article popularized the phrase “virtuoso teams”, which I like, but the full article is not 100% applicable to Independent Project Review teams.

Here is a sample, however, of a “virtuoso attitude” to consider adopting during such a study period:

Traditional teams: Focus on tasks; complete critical tasks on time; complete the project on time.

Virtuoso Teams: Focus on Ideas; a frequent and rich flow of ideas between team members; they strive to find and express the breakthrough idea on time.

Related Thought

Consider the entrepreneurial small-business start-up (in a way, every new project begins sort of like that). If the hypothetical small-business team is really good at 46 of the 50 key essentials to success, they may eventually struggle in those deficient areas. The “independent” in Independent Project Review team implies freedom from some of the self-selection / interest and skills bias that the base project (perhaps the company) may display. Therefore, the IPR team may be especially well suited to spot such potential deficiencies, and to have some ideas about how to bolster the project team to overcome them.

Going Pro…

These articles point to some keys to becoming a pro: (a) work done in a challenging environment, (b) in real time, with (c) real rewards on the line, (d) enjoyably, (e) committed, (f) with an expert attitude (e.g. keeping the lid of one’s mind open all the time, to inspect, criticize and augment its contents) and (g) supported by the experience and wisdom of professional, expert, successful others.

A decade is probably a necessary, but not a sufficient (lots of novices put in the time!) condition for becoming a pro.

Three related viewpoints are offered:

I. “Teach Yourself Programming in Ten Years”.


Learn Pascal in Three Days

(It differentiates itself from books such as “Learn Pascal in Three Days”.)




“Researchers (Hayes, Bloom) have shown it takes about ten years to develop expertise in any of a wide variety of areas, including chess playing, music composition, painting, piano playing, swimming, tennis, and research in neuropsychology and topology. There appear to be no real shortcuts: even Mozart, who was a musical prodigy at age 4, took 13 more years before he began to produce world-class music”

Recommendations (some paraphrased):

1. Get interested… and do some because it is fun. Make sure that it keeps being enough fun so that you will be willing to put in ten years.

2. Talk to others; study the work others are doing well in the field.

3. Do the work. The best kind of learning is learning by doing. To put it more technically, “the maximal level of performance for individuals in a given domain is not attained automatically as a function of extended experience, but the level of performance can be increased even by highly experienced individuals as a result of deliberate efforts to improve.” (p. 366) and “the most effective learning requires a well-defined task with an appropriate difficulty level for the particular individual, informative feedback, and opportunities for repetition and corrections of errors.”

Some may refer to number three as “get x years of experience, not one year of experience x times”.


II. Secrets of the Expert Mind


The cover story of the August 2006 issue of Scientific American is a thoughtful essay by Philip E. Ross, “The Expert Mind” (p. 46).



Quotes (four excerpts):

A man walks along the inside of a circle of chess tables, glancing at each for two to three seconds before making his move. Dozens of amateurs sit pondering his replies. The year is 1909, the man is Jose Raul Capablanca, and the result is 28 wins in as many games, part of a tour in which Capablanca won 168 games in a row.

How did he play so well so quickly? How far ahead can he calculate in this time? “I see only one move ahead,” Capablanca said, “but it is the right one.”

Overview / Lessons from Chess

  1. Because skill at chess can be easily measured and subjected to laboratory experiments, the game has become an important test bed for theories in cognitive science.
  2. Researchers have found evidence that chess grandmasters rely on a vast store of knowledge of game positions. Some scientists have theorized that grandmasters organize the information in chunks, which can quickly be retrieved from long-term memory and manipulated in working memory.
  3. To accumulate this body of structured knowledge, grandmasters typically engage in years of effortful study, continually tackling challenges that lie just beyond their competence. The top performers in music, mathematics and sports appear to gain their expertise in the same way, motivated by competition and the joy of victory.

The 10-year rule states that it takes approximately a decade of heavy labor to master any field.

(He) argues that what matters is not experience per se but “effortful study”. That’s why enthusiasts can spend tens of thousands of hours playing chess or golf or a musical instrument without ever advancing beyond the amateur level… Even the novice engages in effortful study at first, improving rapidly… but having reached an acceptable level – most people relax. In contrast, experts-in-training keep the lid of their mind’s box open all the time, so that they can inspect, criticize and augment its contents, and thereby approach the standard set by the leaders in their fields.


III. Going Pro


Steven Pressfield’s The War of Art




It is one thing to STUDY WAR,

and another to LIVE THE WARRIOR’s LIFE.

-Telamon of Arcadia, fifth century B.C. mercenary

Professionals and Amateurs

Aspiring artists defeated by Resistance share one trait. They all think like amateurs. They have not yet turned pro.

The moment an artist turns pro is as epochal as the birth of his first child. With one stroke, everything changes.

The amateur plays for fun. The professional plays for keeps.

To the amateur, the game is his avocation. To the pro it’s his vocation.

The amateur plays part-time. The professional full-time.

The word amateur comes from the Latin root meaning “to love”. The conventional interpretation is that the amateur pursues his calling out of love, while the pro does it for money. Not the way I see it. In my view, the amateur does not love the game enough. If he did, he would not pursue it as a sideline from his “real” vocation.

The professional loves it so much he dedicates his life to it. He commits full-time.

That’s what I mean when I say turning pro.

Resistance hates it when we turn pro.


Q4 Take-Away (Fire in one’s Belly)



At the top of this page, we summarized: the above articles point to some keys to becoming a pro: (a) work done in a challenging environment, (b) in real time, with (c) real rewards on the line, (d) enjoyably, (e) committed, (f) with an expert attitude (e.g., keeping the lid of one’s mind’s box open all the time, to inspect, criticize and augment its contents) and (g) supported by the experience and wisdom of professional, expert, successful others.

In other words – take on the tough ones, the big ones; not recklessly, yet with failure a real possibility.

That’s certainly neither the easiest nor the safest path. However, if you can brook no other option, you will know that it’s the necessary path for you.

Rainer Maria Rilke had the following to say about this “fire in the belly” for one’s profession; in this case it was in Letters to a Young Poet:


There is only one thing you should do. Go into yourself. Find out the reason that commands you to write; see whether it has spread its roots into the very depths of your heart; confess to yourself whether you would have to die if you were forbidden to write. This most of all: ask yourself in the most silent hour of your night: must I write? Dig into yourself for a deep answer. And if this answer rings out in assent, if you meet this solemn question with a strong, simple “I must,” then build your life in accordance with this necessity; your whole life, even into its humblest and most indifferent hour, must become a sign and witness to this impulse.

In my post called “Quadrant Four Project Management,” I propose that for some of us – maybe 15% of a typical population – there is a particular context toward life, a world-view, that is characterized phenomenologically. Part of the Quad Four characterization is work toward “the greater good.” And while your profession is never all-encompassing, it should align. The work of the above three authors appears to do so.