descriptive than prescriptive. But if you don’t give yourself a target (even if based on incomplete information), then it’s difficult to know whether you’re making any progress.
Make sure to rebalance regularly as your financial goals change over time. In our case, as Core programs become more efficient and generate more profits that we can reinvest, we increase the weights of the other segments, enabling us to develop a pipeline of future Core programs.
Step 3: Categorize Programs and Projects into Segments
Categorizing programs and projects into the appropriate segment is the most difficult part of the process because the boundaries can be fuzzy. For example, there’s a gray area when adapting an existing technology for a new market—is it new or existing technology? While it’s important to have clear definitions, you also have to make judgment calls based on the intent behind the return/risk categorizations.
Since the asset classes or segments correspond to varying degrees of technical, market, and execution risk, you want to define “existing” relative to your company. PARC defined:
- Existing technology as one where we have a prototype suitable for engagement with commercial customers—otherwise, it’s still considered new technology;
- Existing market as one where we have at least one commercial reference client (which greatly increases the probability of future commercial success)—otherwise, it’s a new market for us, even if the market is already out there.
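To make those judgment calls more repeatable, the two definitions above reduce to a simple decision rule over a two-by-two grid of technology and market. Here is a minimal sketch in Python; the function and field names are illustrative, not part of any actual PARC tooling:

```python
from dataclasses import dataclass

@dataclass
class Program:
    name: str
    has_commercial_prototype: bool  # prototype suitable for commercial engagement?
    has_reference_client: bool      # at least one commercial reference client?

def classify(program: Program) -> str:
    """Place a program into a technology/market quadrant using the
    'existing vs. new' definitions above (illustrative only)."""
    tech = "existing" if program.has_commercial_prototype else "new"
    market = "existing" if program.has_reference_client else "new"
    return f"{tech} technology / {market} market"

# Example: adapting an existing technology for a new market
print(classify(Program("Adapted sensor platform", True, False)))
# -> "existing technology / new market"
```

Encoding the definitions this way does not remove the gray areas, but it forces the judgment call to show up as an explicit input (does a commercial-grade prototype really exist?) rather than a vague label.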
Step 4: Evaluate Programs and Projects Within Each Segment
The goal of evaluating and ranking programs and projects is to provide insightful data to foster productive management discussion—NOT to do management by spreadsheet.
To promote the intended behavior and avoid unintended consequences from people gaming the system, each company will need to think carefully about the specific factors, weights, and definitions used for scoring. You also don't want to make scoring too cumbersome, especially if you don't have teams of analysts and instead rely on your line managers to do the scoring.
For us, I created a two-part scoring system with a weighted return score that is then discounted by a risk score. We used ranges of values to normalize all scores between 0 and 3, instead of asking for absolute values. I suggest you do the same, because in any model, a critical mistake is to get a false sense of precision from data calculated from assumptions.
Return factors. After considering many different factors, I ended up with: 1) Projected profit with a five-year outlook, 2) Time to revenue, 3) Value of IP, and 4) Business model. Additional factors could include profit margin and alignment with strategy.
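As an illustration of the return-score mechanics, here is a minimal sketch. The weights, thresholds, and example values are hypothetical (the actual numbers aren't disclosed here); raw estimates are first binned into value ranges, per the 0-to-3 normalization described above, and then combined into a weighted score:

```python
# Hypothetical weights for the four return factors (must sum to 1).
RETURN_WEIGHTS = {
    "projected_profit": 0.4,   # five-year outlook
    "time_to_revenue": 0.25,
    "ip_value": 0.2,
    "business_model": 0.15,
}

def bin_score(value: float, thresholds: tuple[float, float, float]) -> int:
    """Map a raw estimate onto a 0-3 score using value ranges, so the
    model never trades on false precision from assumption-driven numbers."""
    t1, t2, t3 = thresholds
    if value >= t3:
        return 3
    if value >= t2:
        return 2
    if value >= t1:
        return 1
    return 0

def return_score(factor_scores: dict[str, float]) -> float:
    """Weighted return score on a 0-3 scale."""
    return sum(w * factor_scores[f] for f, w in RETURN_WEIGHTS.items())

# Example: bin projected profit (in $M) into a 0-3 score, then weight.
profit_score = bin_score(12.0, (1.0, 5.0, 20.0))  # -> 2
scores = {"projected_profit": profit_score, "time_to_revenue": 1.0,
          "ip_value": 3.0, "business_model": 2.0}
print(round(return_score(scores), 2))  # -> 1.95
```

The binning step is what keeps line managers from agonizing over exact dollar forecasts: they only need to decide which range an estimate falls into.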
Risk factors. To reduce scoring variance, I developed a questionnaire for each type of risk and, based on the responses, calculated a score between 0 and 1 for each, corresponding roughly to its probability of success (so a 0.9 technical risk score means a 90 percent probability of the project achieving its technical goals); a sketch of these mechanics follows the list below:
- Technical risk focused on our ability to achieve the technical breakthroughs or maturity level that was necessary for commercialization.
- Market risk focused on size and growth characteristics of the market, PARC’s degree of competitive advantage, and whether we had just one-shot or multiple opportunities.
- Execution risk focused on our understanding of the market, value chain, and partner landscape, and whether we had gotten market validation yet.
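Putting the pieces together, here is a minimal sketch (hypothetical values throughout) of how the questionnaire-derived risk scores combine: each dimension's 0-to-1 score is multiplied into an overall risk score, which then discounts the weighted return score from the earlier sketch. This is the multiplication and independence assumption described in the paragraph that follows:

```python
def overall_risk(technical: float, market: float, execution: float) -> float:
    """Multiply per-dimension scores (each 0-1; low score = high risk) into
    an overall probability of success, treating the dimensions as independent."""
    return technical * market * execution

def risk_adjusted_score(return_score: float, technical: float,
                        market: float, execution: float) -> float:
    """Discount the 0-3 weighted return score by the overall risk score."""
    return return_score * overall_risk(technical, market, execution)

# Hypothetical program: solid return outlook, moderate risk on each dimension.
print(round(overall_risk(0.9, 0.7, 0.8), 3))               # -> 0.504
print(round(risk_adjusted_score(1.95, 0.9, 0.7, 0.8), 2))  # -> 0.98
```

Note how the product compounds: three individually moderate risks (0.9, 0.7, 0.8) cut the expected return roughly in half, which is exactly the behavior the discounting is meant to surface in management discussion.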
We then multiply the different risk scores to arrive at an overall risk score. In our system, a low score means a high risk. We treat each type of risk as independent, because