Introduction: Why Decision-Making Needs a Better Framework
The Analytic Hierarchy Process (AHP) is a structured framework for making important and sometimes complex decisions based on multiple criteria.
Every organization makes big decisions—what projects to fund, which strategic initiatives to prioritize, and how to allocate scarce resources. But when stakeholders have different priorities, when data is incomplete, or when dozens of competing options exist, decision-making can quickly become messy, political, and inefficient.
The result? Misaligned projects, wasted budgets, frustrated teams, and strategic goals that remain out of reach.
This is where the Analytic Hierarchy Process (AHP) comes in. AHP is a proven, best-practice decision-making framework that helps organizations make complex choices with clarity, transparency, and consensus. By breaking problems into structured criteria, capturing stakeholder input, and providing one clear score for each option, AHP transforms decision-making from guesswork into a measurable, repeatable process.
At TransparentChoice, we’ve helped hundreds of organizations worldwide—from government agencies to Fortune 500 companies—apply AHP to prioritize portfolios, align stakeholders, and deliver projects that truly move the needle.
👉 Pro Tip: Book a Demo to see how TransparentChoice can help your organization reduce politics and bias in project selection.
What Is the Analytic Hierarchy Process (AHP)?
The Analytic Hierarchy Process (AHP) is a structured decision-making methodology developed by Professor Thomas Saaty in the 1970s. It’s designed to help decision-makers evaluate multiple criteria, balance trade-offs, and reach consensus on complex problems.
Here are the 5 key steps to building an AHP model:
- Build a model – Define your decision goal, then break it down into underlying criteria that you will use to score competing alternatives.
- Weight the criteria – Stakeholders compare two criteria at a time to establish relative importance, thereby building a weight set for scoring alternatives.
- Score alternatives – Work with subject matter experts to score potential alternatives against the criteria. Each option ends up with a 0-100 score.
- Select the best alternatives – Pick the best alternative(s). This step often involves overlaying other data, for example project cost.
- Embed best practice – To maximize impact, good decision making should be part of a process: a component of doing things smarter time and time again.
As well as building an analytic framework, AHP is designed to build buy-in to decisions by broadening engagement in the process:
- Collaboration is central with broad participation enabled by breaking down complex decisions into smaller addressable questions.
- Voting is done as a team with two-step data collection and structured reviews to minimize common issues, such as anchoring, bias and “loudest voice”.
- Results are transparent and fair, helping build acceptance and trust, even when people don’t like the results.
There are two broad practical applications we will explore in this guide:
- Pick One decisions identify a single “winner” from a range of mutually exclusive options. Typical applications include vendor selection, product design, site location and “build vs. buy” reviews.
- Pick Many decisions are used to cut down the number of potential alternatives in line with defined constraints. This is mainly used for Project Prioritization and strategic planning, to reduce the drag of “Too Many Projects”.
👉 Pro Tip: In plain terms, AHP addresses complex, multi-dimensional problems and makes them simple, structured, and transparent through enabling teams to work together. It works equally well for both portfolio level prioritization and in-project decision making.
A Simple Explanation of AHP for Beginners
If you’re new to AHP, imagine you’re choosing a new car. You care about cost, safety, fuel efficiency, and comfort.
- Instead of just debating endlessly with your family about “which car is best,” AHP helps you structure the conversation.
- You first decide which factors matter most (e.g., maybe safety is more important than comfort).
- Then, you score each car against each factor (e.g., Car A is safer, Car B is cheaper).
- AHP combines these scores into one clear ranking of the cars, reflecting both the facts and the agreed priorities.
Now scale that same method up to multi-million-dollar project portfolios, national policy decisions, or vendor selections—that’s the power of AHP.
👉 Pro Tip: Document poor decisions made with “gut feel”. What was the cost of poor decision making? This will be the basis for investing time in a rigorous analytical review.
Origins & Development of AHP
Foundations of AHP
The Analytic Hierarchy Process (AHP) was developed in the 1970s by Professor Thomas L. Saaty, a mathematician and operations researcher. His goal was to create a method that combined:
- Mathematical rigor – models that best translate preferences into scores
- Human psychology – built around people, and how to turn their judgements into data
- Practical usability – a tool that non-experts could apply in real-world settings.
At the core of this approach is pairwise comparison. Here’s how it works.
What is “Pairwise”?
Put in plain English, a pairwise comparison means looking at two things at a time instead of trying to weigh everything at once.
First, the psychology of right-sizing complex questions for the human brain:
- You simply ask: “Which of these two criteria is more important, and by how much?”
- Repeating this across all pairs builds a clear picture of preference
- Because it breaks decisions into bite-sized judgments, people find it easier and more reliable than scoring everything in one go.
Then the math that turns preferences into a weighted model:
- These comparisons form a matrix, where each cell shows how much more important one item is over another (using Saaty’s 1–9 scale).
- The matrix is reciprocal: if A is 3× more important than B, then B is 1/3 as important as A.
- From this matrix, the priority weights are calculated by finding the principal eigenvector — essentially extracting the pattern of relative importance across all judgments.
- AHP then checks consistency: are your judgments logically aligned? (e.g., if A > B and B > C, then A should > C). A consistency ratio (CR) ≤ 0.1 is considered acceptable.
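The matrix math above can be sketched in a few lines. This is a minimal illustration with made-up judgments for three criteria, using the geometric-mean approximation of the principal eigenvector (a standard shortcut that gives very close results) and Saaty’s random index values to compute the consistency ratio:

```python
# A minimal sketch with hypothetical judgments: derive priority weights from
# a pairwise matrix and check Saaty's consistency ratio (CR).
import math

# Saaty 1-9 scale judgments for three criteria, e.g. cost, safety, comfort.
# A[i][j] = how many times more important criterion i is than criterion j.
A = [
    [1,   3,   5  ],
    [1/3, 1,   2  ],
    [1/5, 1/2, 1  ],
]
n = len(A)

# The geometric mean of each row approximates the principal eigenvector.
row_gm = [math.prod(row) ** (1 / n) for row in A]
weights = [g / sum(row_gm) for g in row_gm]

# Estimate lambda_max, then the consistency index and consistency ratio.
Aw = [sum(A[i][j] * weights[j] for j in range(n)) for i in range(n)]
lambda_max = sum(Aw[i] / weights[i] for i in range(n)) / n
ci = (lambda_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12}      # Saaty's random index by matrix size
cr = ci / RI[n]

print([round(w, 3) for w in weights])  # roughly [0.648, 0.23, 0.122]
print(cr <= 0.1)                       # True -> judgments are consistent
```

Note how the reciprocal entries (3 and 1/3, 5 and 1/5) mirror each other, exactly as described above.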
👉 Pro Tip: Download this free Excel spreadsheet with a worked example to see for yourself, then work with your AI of choice to build your own
Why Human Judgement Needs AHP
While Saaty’s original methodology was conceived as suitable for either individual or group decision making, it has proved most powerful as a tool for collective decision making.
From a psychology perspective, it’s about reducing the noise that comes from an individual human’s judgement. As people, we are all subject to bias and blind spots. Later research from Daniel Kahneman would go on to show that judgements can vary from one day to the next based on how we feel, or even from one hour to the next based on the weather.
Therefore, putting one person’s data directly into a model is flawed, which is why AHP performs best when multiple voters participate in parallel, and their preferences are then aggregated to form a more balanced perspective.
Complete this process with a facilitated review (and the chance to iterate votes) to ensure that people can learn from one another, while avoiding the risk of anchoring that comes from regular round-table debates.
AHP’s Mathematical Foundations
Back to the math (briefly). The process of aggregation is performed using a geometric mean.
Let’s consider 3 voters’ views when comparing Criteria A & B:
- A vs B = 3
- A vs B = 5
- A vs B = 1/2
The aggregated judgment is:
(3 × 5 × 0.5)^(1/3) = 7.5^(1/3) ≈ 1.96
In other words, multiply the judgments together, then take the nth root, where n is the number of voters.
Why use the geometric mean instead of a regular average?
- It reduces the impact of outliers, so one extreme judgment doesn’t dominate.
- It preserves reciprocity (if A is 3× B, then B must be 1/3 A).
- It’s mathematically consistent with AHP’s use of ratios rather than absolute scores.
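The worked example above can be reproduced in a couple of lines. This sketch also demonstrates the reciprocity property: aggregating the inverse (B vs. A) judgments gives exactly the reciprocal of the aggregated A vs. B judgment.

```python
# A minimal sketch: aggregate three voters' pairwise judgments with the
# geometric mean, as in the worked example above.
import math

votes = [3, 5, 1/2]                              # each voter's "A vs B" ratio
aggregated = math.prod(votes) ** (1 / len(votes))
print(round(aggregated, 2))                      # 7.5^(1/3), about 1.96

# Reciprocity is preserved: aggregating the B-vs-A judgments (1/3, 1/5, 2)
# yields exactly 1 / aggregated.
reciprocal = math.prod(1 / v for v in votes) ** (1 / len(votes))
```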
How to Score Alternatives with AHP
A Pairwise framework is the basis for objectively rating competing alternatives. There are two main approaches for this next level of the review process:
Firstly, you can use Pairwise again. So just like you compare criteria pair by pair, you can also compare alternatives (projects, vendors, or policies) two at a time. This is the classic approach in the original version of AHP.
- Instead of giving each alternative a raw score, you ask: “Between Project A and Project B, which is stronger on this criterion, and by how much?”
- By repeating this across pairs, AHP builds a preference profile for all alternatives.
- The result is a set of scores that reflect relative performance, not just isolated ratings.
Alternatively, you can apply a scale. Rather than comparing options two at a time, you can score each alternative directly using a predefined scale:
- For example, on a 0–5 scale, 0 means “no contribution” and 5 means “very strong contribution”.
- Each alternative is scored against each criterion using this same scale.
- Because all options are rated on the same consistent scale, it’s easy to compare and aggregate results.
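To make the scale approach concrete, here is a minimal sketch with made-up criteria, weights, and scores showing how 0–5 ratings combine with criteria weights into a single 0–100 value score per alternative:

```python
# A minimal sketch with hypothetical numbers: combine 0-5 scale scores with
# criteria weights to produce each alternative's 0-100 value score.
weights = {"revenue": 0.5, "risk": 0.3, "strategic_fit": 0.2}  # sum to 1
scores = {  # 0-5 scale: 0 = no contribution, 5 = very strong contribution
    "Project A": {"revenue": 4, "risk": 2, "strategic_fit": 5},
    "Project B": {"revenue": 3, "risk": 5, "strategic_fit": 1},
}

def value_score(alt):
    # Each criterion contributes (score / 5) * weight; scale the total to 0-100.
    return 100 * sum(weights[c] * scores[alt][c] / 5 for c in weights)

for alt in scores:
    print(alt, round(value_score(alt), 1))
```

Because both projects are rated on the same scale, the resulting scores are directly comparable.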
Which approach is better? It depends – we’ll explore this in more detail below.
👉 Pro Tip: Read more about why AHP is Great for Prioritization in our in-depth blog, or learn more about “Pick One” decisions in our Collaborative Decision Making Guide
AHP Variants: Development Over Time
AHP has formed the basis of further research in the Decision Science community over the years. Here are some of the main developments:
- Analytic Network Process (ANP) - A generalization of AHP that handles interdependencies and feedback between criteria and alternatives through a network (instead of a strict hierarchy).
- Fuzzy AHP (FAHP) - Incorporates fuzzy logic to allow stakeholders to express preference judgments as ranges (e.g., “2–4 times more important”) instead of fixed numerical values.
- Interval AHP (IAHP) - Extends AHP by using interval judgments, acknowledging uncertainty by allowing decision-makers to specify ranges for comparisons rather than single values.
- AHP with Hybrid MCDM Methods - Combines AHP weighting with other multi-criteria decision-making techniques (e.g., TOPSIS, VIKOR, PROMETHEE, goal programming) to enhance ranking or optimization under constraints.
However, in this guide we’ll focus on the practical application, as none of these developments has the proven track record or practical application of ‘core’ AHP.
AHP’s Practical Applications
Here are examples of AHP, and its practical application:
- The University of New South Wales analyzed different ways to prioritize project portfolios and found that AHP (and DEA, a rather more complex approach) were the only effective frameworks.
- The UK Civil Service recommends AHP as a practical method for transparent decision-making in policy development, particularly for long-list assessment ahead of more detailed cost-benefit analysis.
- NASA has used AHP in complex engineering and mission planning decisions, demonstrating its ability to handle high-stakes, high-complexity environments.
- Studies in the Project Management Institute (PMI) library show AHP is particularly effective for prioritizing projects in complex portfolios.
- Research continues to evolve AHP, combining it with AI and optimization techniques for modern applications.
👉 Pro Tip: Check out this Webinar with Dr James Brown to learn more about AHP and its role in NASA
When should I use AHP?
AHP should be a go-to for any significant decisions where there is no obvious choice, but here are four common triggers that make it the smart choice for successful leaders:
Multiple criteria compete
For example, balancing cost, risk, and strategic fit, or choosing between short and long-term goals.
For commercial organizations the benefit is about finding a balance between different financial levers. Short term gains matter but need to be viewed relative to long term growth. Revenue growth is key, but so is revenue protection. As is margin. The list goes on – the point is AHP is a framework to balance these factors systematically.
For governments and non-profits, balance is typically even more complex, with factors such as public service, internal cost control and public confidence to weigh.
Stakeholders are not aligned
AHP builds consensus by giving everyone a structured voice and encouraging empathy to valid counter-opinions.
Structured pairwise comparisons reduce political battles by asking people to explain why different criteria matter in relative terms. This is somewhat abstract – it’s not about competing to get people to buy into your idea; it’s a more reflective alignment on what you are there to collectively achieve. This higher-level conversation offers far greater scope for compromise, and in doing so helps create leadership alignment.
This is key for three reasons:
- You get a better model, because it’s built on collective judgements with less noise.
- You build buy-in to the decisions that follow, because everyone has had a chance to be heard and knows that the process was fair and rational.
- By reducing ambiguity about what you want, you make it easier for the rest of the team to follow guidelines.
Scoring is subjective, data is complicated
AHP is about building quality data points, and that often means turning human judgement into quantifiable scores. Sounds easy, but often it’s not. Let’s revisit our earlier goal of buying a new car as a case in point.
Firstly, we care about “how it looks”. This is entirely subjective, so scoring needs to balance potentially conflicting tastes. Next, we care about safety. This is quantifiable, but how can we get the array of available data into a scale that differentiates our choices?
We could go on, but the point is clear: we need a mechanism to structure data to make it possible to compare different views and different types of data.
Poor decisions keep happening
The biggest single rationale for using AHP is that the way you make key decisions today isn’t working. This can manifest in many ways, but here are the most common signals:
- Loss of confidence in the process – the way decisions are made keeps getting changed, while those waiting for outcomes become cynical
- U-turns are commonplace – disruptive changes are leading to wasted effort
- Decisions get delayed – big calls get fudged as leadership lack confidence to eliminate options
- Outcomes are disappointing – poor decision making typically manifests in missed benefits, overspend and delayed delivery
👉 Pro Tip: AHP shines when “gut feel” is no longer good enough. This is often reflected in protracted decision making, costly U-Turns, and a high rate of project failure.
Who Uses AHP? PMOs, Strategy Teams, Policy Makers and More
AHP is not just an academic framework; it’s been applied to some of the world’s most complex and high-stakes decisions and is a vital tool for any data-driven leader.
PMOs and Portfolio Management
Organizations use AHP to prioritize projects, ensuring limited budgets and resources are directed toward initiatives with the highest impact. This means building AHP into the Demand Management funnel so that Too Many Projects can be right sized to an achievable portfolio:
- Optimize the portfolio to take account of resource limits
- Build a balanced portfolio that reflects preferences in the model
- Stagger start dates to put high value projects first
👉 Example: Harbor Foods, a major U.S. distributor, faced challenges in aligning executives around which projects to fund. By adopting AHP with TransparentChoice, they eliminated pet projects, improved executive buy-in, and focused resources on the initiatives with the highest strategic impact.
Government & Public Policy
AHP is ideal for helping to make challenging choices in the area of public policy, where competing stakeholder interests are often in direct conflict:
- Use collaborative participation at scale, for example getting a room full of real people to vote on a topic (we’ve done this)
- Create defensible decisions. Reduce the risk of challenge with a clearly explicable framework that is easy to explain and justifiable as fulfilling public duties.
- Use AHP to join complicated models and experiments that might otherwise create “analysis paralysis”.
- De-politicize long term initiatives. Rational frameworks are more effective in an environment where leadership can change every few years, but investment lifecycles run in decades.
👉 Example: The World Bank has used AHP to prioritize infrastructure and development projects across countries and regions. AHP provided a transparent, evidence-based way to allocate funding where it could deliver the greatest societal impact, while maintaining accountability to stakeholders.
Project Managers / Engineers
If you are building a solution there are often tricky “one-way” decisions where you must commit to a choice.
- Vendor Selection
- Picking a design solution
- Site selection for new facilities
- Go-No Go milestones for major investments
👉 Example: While planning a new ticketing solution for the Stockholm Metro, the delivery team had a critical go-no go decision to make, and chose to replace the normal 200-page review with an AHP model with a clear “winner”.
Corporate Strategy / Finance
Global companies use AHP to align initiatives with strategic goals as part of annual planning and budget setting.
- Financial and non-financial factors can be integrated into one AHP model, thereby enabling finance and strategy to join up their planning.
- Resource constraints are built into the bottom-up planning process, thereby enabling the PMO to have an achievable base case for delivery.
- AHP provides an objective measure to pinpointing value, thereby helping the organization to move funding to where it has the greatest impact
👉 Example: The American Planning Association (APA) used AHP to bring structure to its strategic planning. By breaking objectives into clear criteria and using pairwise comparisons, APA was able to align its leadership team and prioritize initiatives that truly advanced the organization’s mission.
How to Get Started with AHP: Your 5 Next Steps
Adopting AHP doesn’t have to be complex. This next section covers the key actions you’ll need to take to build an AHP model for your organization.
We’ll break it into five stages: building your model, setting the weights, scoring alternatives, selecting alternatives, and integrating your decisions.
1. Build a Criteria Model
Engage Stakeholders
Your criteria are the backbone of the process. Keep them linked to strategic goals and make sure they’re clear and distinct. Defining the criteria is classic stakeholder engagement: digest documentation, listen to stakeholders, apply AHP best practice, then iterate a strawman to get sign-off.
The Importance of Hierarchy (the “H”)
Start by deciding on the level of complexity you need. Simple models (4-6 criteria, with no sub-criteria) mean less work scoring, but limit precision. A regular AHP model (4-6 criteria, each with 3-4 sub-criteria) is more thorough and suited to higher value projects.
This will be critical for when you do the pairwise review. If you have 15 criteria in a flat model it will take 105 questions to establish relative preference between them. Not fun. But if you have 5 criteria, each with 3 sub-criteria, it’s just 25 questions. That’s more time for the all-important debate.
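The question counts quoted above follow directly from the pairwise formula: comparing n items takes n × (n − 1) / 2 questions. A quick sketch confirms the arithmetic:

```python
# Quick check of the question counts quoted above: pairwise comparison of
# n items requires n * (n - 1) / 2 questions.
def pairwise_questions(n):
    return n * (n - 1) // 2

flat_15 = pairwise_questions(15)                                   # 105
hierarchical = pairwise_questions(5) + 5 * pairwise_questions(3)   # 10 + 15
print(flat_15, hierarchical)                                       # 105 25
```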
What’s NOT a Criterion
The most common mistake with criteria building is to include everything that matters to selection. However, there are critical points which should not be in an AHP model:
- Cost. The value of an alternative does not change based on how much it costs. There are exceptions, but as a rule, use cost as a constraint when reviewing the output of the model rather than baking it into the model itself.
  Consider our car-buying example above. If “Price” is in your model, your “winner” might be strong on every other criterion but actually be beyond your budget. Far better to eliminate the options you cannot afford up front, then use smaller variations in price as a final selection criterion.
- Gating Factors. If there is something that is simply non-negotiable, then it is not a criterion – it’s a Gating Factor. Adding it into a model will skew it.
  Consider plans for our car again. If I do not have a license for a manual, then I cannot buy a stick shift. It’s not a factor in my model – it’s a deal breaker to use to thin out the field before I start scoring.
- Everything. A core feature of AHP is focusing on what matters. Eliminate peripheral factors that are “nice to have” if they are too marginal to have an impact on the final selection.
  Let’s go back to our car buying. The kids say they want heated seats in the back. Do I care? Nope, this is not going in my model. If it happens to be in the winning choice, that’s nice, but it’s simply less important than safety, economy and brand.
👉 Pro tip: Download our E-book for our guide on this process.
2. Agree a Weight Set
The Power of Pairwise
Stakeholders are asked to express a preference e.g. Revenue vs. Risk Mitigation. They also apply strength in the form of a ratio, for example Revenue is 3x more important than Risk. This is better than asking people to make up a weight – because relative preference is better suited to the human brain, especially with complex decisions.
These ratios create a mathematical relationship between all criteria, which in turn generates a weight for each criterion via an algorithm at the core of AHP. There is often some inconsistency (we’re human, after all), which is the extent to which these ratios cannot be reconciled. Under 20% is generally considered fine; above this, it tends to mean the person taking the survey disagrees with themselves, which happens surprisingly often and suggests a quick “QA” is in order.
Stakeholders should do this review separately at first to generate independent points of view, therefore reducing group think, anchoring and follow the boss tendencies; all proven flaws in traditional round table discussions.
Generate a Weight Set
Each criterion in the model is now weighted, such that the weights total 100. This determines high-level preference (e.g. revenue vs. risk).
The weight from each branch is then split between underlying sub-criteria (e.g. short-, medium- and long-term revenue) by repeating the pairwise exercise within each branch. When scoring alternatives, these weights are multiplied by sub-criteria-level scores (e.g. how strong is this project on short-term revenue?).
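The roll-up from top-level weights to sub-criteria can be sketched as follows, using illustrative numbers (the criteria names and splits here are hypothetical):

```python
# A minimal sketch with illustrative numbers: each criterion's global weight
# is its top-level weight multiplied by its share within the branch.
top = {"revenue": 0.6, "risk": 0.4}   # from the top-level pairwise exercise
subs = {                              # from pairwise within each branch
    "revenue": {"short_term": 0.5, "medium_term": 0.3, "long_term": 0.2},
    "risk":    {"delivery": 0.7, "reputation": 0.3},
}

global_weights = {
    f"{c}/{s}": top[c] * share
    for c, branch in subs.items()
    for s, share in branch.items()
}
print(global_weights)   # the global weights still sum to 1
```

This also illustrates the pro tip below: a low top-level weight shrinks every sub-criterion beneath it.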
Once you have this model take time to reflect. Is it a good representation of our goals? If not, what ratios should we re-examine?
Forge leadership alignment
Building this weight set is both a step in building an AHP model and an opportunity for leadership to listen to each other and improve levels of mutual understanding.
A well-facilitated session will mean everyone with an opinion gets to be heard, but that there is always a resolution to a debate, with an average preference generated from underlying scores. This leaves no scope for disagreements being left indefinitely “open” to block progress.
This experience isn’t just a nice to have – it’s the basis for the cultural acceptance of the AHP model. Put simply, if leadership don’t accept this as their model then it’s useless, new rules for decision making must start at the top.
👉 Pro tip: if you rate a top-level criterion as very low importance, you make its sub-criteria almost meaningless in terms of impact, so be sure to understand what sits beneath a top-level criterion when rating it.
3. Score Alternatives
Scoring Alternatives - Create evidence-based performance profiles for each alternative vs. each criterion.
Put simply, you’re deciding whether an alternative is a “best in class” exemplar against the criterion (in which case it scores the full weight of that criterion), whether it scores nothing, or whether it’s somewhere in between and therefore earns a portion of the available criterion weight.
Scales: Practical Solutions to Scoring
Start with best practice. Use 0–5 scales where 0 = no contribution and 5 = very strong contribution. This best-practice approach avoids inflating weak options and makes it clear when an initiative delivers no value at all.
Each step in the scale should have a clear description which minimizes ambiguity. If bands can be quantified, then do so. The goal is to minimize the scope for misinterpretation, so everyone has the same understanding of what they mean.
If this isn’t right then pick a different approach. Scales can have more or fewer steps. Scales can be non-linear. The key is to have enough levels to split your candidate projects apart, while not over-complicating scoring.
“Hard Data” and Normalization
Normalize quantitative data (e.g., ROI, cost, emissions) so it fits seamlessly into the model.
The basic principle is that you don’t want to waste time collecting opinions if a data point is already available. However, that data point must be made to fit a scoring framework so it can be built into the criteria model. This is where we apply a Normalization Cap, which defines the value needed to score the full weight of the criterion. Values above this level also score full weight, but no more.
Think back to our car. We want legroom, but a spacious saloon is ample. A stretch limo adds no extra value, so we would cap this criterion in line with the former.
Applied to an ROI model, we might have a hurdle rate for a great project (e.g. 200%). It’s fine to have more, but we don’t want an outlier to define “best in class”, as that would effectively reduce the scores of all the other rates of return.
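A Normalization Cap can be sketched in one function. The 200% hurdle below is illustrative:

```python
# A minimal sketch of a Normalization Cap: hard data is mapped to a 0-1
# score, with anything at or above the cap earning full criterion weight.
def normalized_score(value, cap):
    return min(value, cap) / cap

cap = 2.0   # e.g. a 200% ROI hurdle defines "best in class"
print(normalized_score(1.0, cap))   # 0.5 -> half the criterion weight
print(normalized_score(3.5, cap))   # 1.0 -> the outlier is capped at full weight
```

Without the cap, the 350% outlier would stretch the scale and shrink every other project’s score.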
Pairwise (and when to use it)
While pairwise is always right for weighting criteria, it’s only occasionally the best way to score alternatives. Scoring with pairwise means determining a ratio of preference for each criterion, then applying matrix math to get a score. It’s used instead of scoring with a scale or hard data, but only makes sense when the following are true:
- There is a fixed field of alternatives. This makes it a poor fit for project prioritization, where there is a constant flow of new projects.
- There are no more than 3-4 alternatives. Using pairwise on a large field of candidates quickly turns into a lot of questions vs. using a scale.
- Criteria are highly subjective. Relative preference can be hard to quantify (on a scale) for very “soft” factors. Take our car again. Picking a favorite brand is a feeling – which one do you like more (and by how much)?
As such, pairwise scoring can be right for “Pick One” reviews, but rarely for “Pick Many”.
Wisdom of the Small Crowd
Reducing the effect of “noise” is a key benefit of AHP. That’s why scoring is a team sport. But asking groups of people to commit time to a new step in a process can be challenging (we’re busy people here!) so we recommend a number of proactive steps to consider:
- Divide and conquer. Split surveys into small groups of criteria so you’re not asking people to make judgements about things they don’t really understand
- Focus on disagreements. Have people score alternatives before the meeting, then ignore areas where there is already good alignment
- Show “What’s In It For Me”. This is more than another task – it’s a chance to be heard, to influence important choices and to reduce effects of poor decision making.
- Build muscle memory. The first couple of reviews do feel “weird”. Power through, they will become normal quickly, and time taken will drop significantly.
👉 Pro tip: Use the scoring process to start documenting benefits, with a clear line between (high) scores and the key outcomes of the project.
4. Select Winning Alternative(s)
Once you have scored all the alternatives vs. the criteria you get a final score, a 0-100 rating for how well your alternative meets your criteria. At this point our two modelling approaches start to diverge so let’s look at each separately:
Pick Many – how to select your portfolio
Build a Benchmark. There is no standard “good score” for a model, but for a portfolio analysis you should develop a sense of what good looks like over time.
Add Cost. AHP has quantified the value of your projects. Compare this to their cost and you can analyze them using value for money as a KPI. If you don’t have detailed costs, that’s normal; work out how to get a sensible estimate.
Rank your portfolio. Using either Value or Value for Money, you can rank your portfolio from best to worst. For an agile-style backlog you can then simply pick from the top until your teams are at capacity.
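The rank-and-pick step can be sketched as follows, with made-up projects and a hypothetical budget standing in for team capacity:

```python
# A minimal sketch with made-up projects: rank by value for money
# (value score / cost) and pick from the top until the budget runs out.
projects = [
    {"name": "CRM Upgrade",   "value": 82, "cost": 400},
    {"name": "Data Platform", "value": 74, "cost": 900},
    {"name": "Office Move",   "value": 35, "cost": 300},
]
ranked = sorted(projects, key=lambda p: p["value"] / p["cost"], reverse=True)

budget, selected = 800, []
for p in ranked:
    if p["cost"] <= budget:       # greedily pick while the project still fits
        selected.append(p["name"])
        budget -= p["cost"]
print(selected)
```

Note how ranking by value for money (rather than raw value) promotes the cheap, decent-value project over the expensive, high-value one, which is exactly the trade-off an efficient frontier visualizes.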
Visualize the data. Don’t forget a key goal with AHP is building buy-in to decisions, so it’s important to make your results transparent and simple. There are many ways to cut the results, especially if you join it to other project data, but there are four which we recommend as core:
- Ranking Criteria: Show the breakdown of the total Value Score so it’s clear which criteria are driving the results.
- Prioritization Matrix shows cost vs. value in a simple 2x2 view of a prospective portfolio. Low value / high-cost projects may simply stop at this point.
- Efficient Frontier uses the same cost-value data as above, but is a ranking sorted by value for money, with cumulative data for the value and cost of the portfolio. Great for quickly showing where key projects stop and the long tail begins. While not normally as stark as 80-20, this is effectively a portfolio Pareto curve.
- Value vs. Risk adds a new dimension, assuming risk isn’t built into your AHP model.
Build Scenarios. For more complex planning exercises you need to apply constraints, which will enable you to undertake a more thorough review:
- Which people are needed to complete the work?
- What are the funding limits?
- How can I stagger the projects to boost throughput?
- Does my recommended portfolio align to the weights in my AHP model, i.e. am I achieving a good strategic fit?
- What “What If” versions can I create to give leadership choice?
👉 Pro tip: We go way deeper on this in our Ultimate Guide to Project Prioritization.
Pick One – how to complete a selection review
Our start point is the same as above, a 0-100 score for all alternatives. However, the steps to complete the review are different:
- Present the data. As above, buy-in is key. Use the data to “tell the story” and build confidence in the results of the review.
- Narrow the field. If you’re using AHP to get down to a shortlist, agree on a group of high-scoring alternatives to take forward to detailed cost-benefit analysis.
- Sensitivity Analysis. Flex your model assumptions. What happens to the ranking if you dial up a specific criterion weight? Does this flip the ranking, or does your “winner” remain clearly ahead?
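A sensitivity check like the one described above can be sketched with made-up vendors and weights: nudge one criterion’s weight, renormalize, and see whether the winner changes.

```python
# A minimal sensitivity-analysis sketch with hypothetical numbers: flex a
# criterion weight and check whether the ranking flips.
def rank(weights, scores):
    totals = {alt: sum(weights[c] * s[c] for c in weights)
              for alt, s in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

weights = {"cost": 0.5, "quality": 0.5}
scores = {
    "Vendor A": {"cost": 0.9, "quality": 0.4},
    "Vendor B": {"cost": 0.3, "quality": 0.9},
}

baseline = rank(weights, scores)                 # Vendor A leads at 50/50
flexed = rank({"cost": 0.3, "quality": 0.7}, scores)  # quality dialed up
print(baseline, flexed)
```

Here the ranking flips when quality dominates, which tells you the decision is sensitive to that weight and worth a second look before committing.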
👉 Pro Tip: Learn more about “Pick One” decisions in our Collaborative Decision Making Guide
5. Integrate AHP into your operating model
The main goal of the AHP model is to support selection. However, its application does not stop once an initial decision is made. For example:
- Portfolio Management and the governance process at its core is a great place to bring AHP scores, providing clear recommendations for new project proposals
- Business Cases often require Project Managers to make recommendations on key choices inherent in delivery: picking a vendor or a design solution, for example. Use your model to support your choice and clarify where there are alternatives.
- Design Process: Developing engineering solutions or R&D innovations is usually a multi-step process with a blueprint for each stage gate. Work out where AHP fits and instigate it as best practice.
- Benefits Management means taking Value Scores and relating them to specific measurable outcomes. Put simply, if we commit to a project because it promises revenue, be sure to track the realization of that revenue through the delivery cycle.
- Change Control means creating an anchor. You know the benefits which have justified the investment, so can validate how shifts in the scope are impacting value.
- Transparency means documenting your decision as fair and logical, with explicit reasoning that makes it easy to audit.
- Lessons Learned is a capability you can evolve over time. Review delivery vs. value in the AHP model. Is anyone consistently wrong? Do we have a problem with optimism bias?
What Tools Can You Use for AHP?
Free and Spreadsheet-Based AHP Tools
Between AI and Google, it’s easy to put together a basic AHP model for free. Try it – it’s a great way to test the methodology.
- Good for learning the basics.
- Not scalable for real-world portfolios or corporate planning in larger organizations without significant work and workarounds.
- Lacks features like consistency checks, collaboration, and visualization.
👉 Pro Tip: Download this Free Excel Template to capture your criteria and scales.
Specialist AHP Software: TransparentChoice
TransparentChoice has been designed specifically for project prioritization and Pick One decision-making. It streamlines AHP with easy pairwise comparisons, built-in consistency checks, collaborative workshops, and data visualization (e.g., prioritization matrices, efficient frontiers).
Latest features include AI-generated scenarios with built-in staggering, and personalized dashboards for project owners.
👉 Pro Tip: TransparentChoice differentiates itself by focusing on stakeholder alignment, usability, and integration into PMO processes.
AHP in PPM Platforms: Integration & Automation
Some portfolio and project management (PPM) platforms have begun to incorporate AHP-style weighting and scoring. However, these are often lightweight implementations that lack the rigor of true AHP and don’t support stakeholder workshops or consistency validation.
For organizations serious about decision quality, a dedicated AHP tool integrated with PPM systems is the best of both worlds.
👉 Pro Tip: If you see the value of AHP but also need a PPM tool we can provide a bundled service via one of our trusted partners.
Summary: Why AHP Improves Strategic Decisions
The Analytic Hierarchy Process (AHP) is more than just another framework—it’s a proven way to transform how organizations make decisions:
- It breaks down complex choices into structured models.
- It aligns strategy with execution.
- It builds stakeholder buy-in and consensus.
- It reduces noise and bias in human judgment.
- It delivers transparent, defensible outcomes.
- And it moves prioritization beyond ranking—into true portfolio optimization.
For PMOs, strategy teams, delivery groups, and policy makers, AHP provides the clarity and structure needed to make better decisions, faster.
👉 Pro Tip: Check out our Software demo.
Frequently Asked Questions About AHP
Q: How many criteria should I have in my model?
Keep it simple. Most effective AHP models use 5–9 criteria. Too few, and you won’t capture strategy; too many, and stakeholders lose focus.
Q: Do I really need to ask my executives to commit to a workshop?
Yes—and it’s worth it. The process doesn’t just create weights; it builds alignment and ownership. When executives help define priorities, they’re far more likely to stand behind the results.
Q: How much detail do I need to add to my project descriptions?
Enough to make an informed judgment—but don’t drown people in detail. A concise summary that explains the project’s purpose, benefits, risks, and rough costs is usually enough.
Q: How can I create a scale when I don’t have data for measurement?
Not every decision has perfect data—and that’s okay. Use practical, qualitative scales like 0–5 or “none/low/medium/high.” The “0” option is important: it lets stakeholders indicate that an option adds no value under a given criterion.
Q: How do I get people to make time to score projects?
This is key—and it comes down to communication.
- Explain the payoff: Time spent scoring is time saved later. A structured scoring session prevents wasted months on the wrong projects.
- “Measure twice, cut once”: A few hours of scoring avoids costly missteps.
- Show the impact: Demonstrate how prioritization reduces politics and speeds approvals.
- Make it easy: Tools like TransparentChoice simplify scoring, add consistency checks, and make it engaging.
👉 Pro Tip: Scoring is not a time cost—it’s a time saver.
Q: Can I score an AHP model with existing data / models?
Yes. If you already have data like ROI, NPV, or risk assessments, you can normalize it into an AHP scale.
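As a minimal sketch, min–max scaling is one common way to map existing financial figures onto a 0–100 value scale. The NPV figures below are made up, and the sketch assumes a higher value is always better (a “cost”-type criterion would need the scale inverted).

```python
# Illustrative normalization of existing data (e.g. NPV) onto a 0-100
# AHP value scale using min-max scaling. Assumes higher is better.

def to_ahp_scale(values):
    """Map a list of raw figures onto 0-100, preserving relative spacing."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [100.0] * len(values)  # all alternatives identical
    return [100.0 * (v - lo) / (hi - lo) for v in values]

npv = [1.0, 5.0, 3.0]  # made-up NPV figures ($m) for three projects
print(to_ahp_scale(npv))  # -> [0.0, 100.0, 50.0]
```

Note that min–max scaling forces the worst alternative to 0; if even the lowest-NPV project adds real value, an absolute scale (e.g. anchored at NPV = 0) may represent your judgment better.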
Q: Can AHP handle subjective judgements?
Yes, that’s a key feature. This can be done either with a pairwise review of alternatives for a Pick One decision, or through a well-defined scale designed to support a Pick Many use case such as project prioritization.
Q: How can I estimate resource requirements without detailed scoping?
You don’t need perfect data upfront. Use rough-order estimates (e.g., T-Shirt Sizing). The goal is prioritization, not detailed scoping. Start with something and improve it over time.
Q: How can I add AHP into my existing planning processes?
Easily. AHP is designed to fit alongside existing governance frameworks and tooling. TransparentChoice integrates with popular PPM tools, so you can embed structured prioritization without reinventing your processes.
Q: Is AHP better than weighted scoring?
Yes. Weighted models lack mathematical rigor and consistency checks. AHP provides structured, validated decisions backed by decades of research.
Q: How long does it take to run a weighting workshop?
Usually 1–2 hours. A small investment that saves months of wasted work on the wrong projects.
Q: How many questions will my Pairwise Review generate?
This depends on the size of your model. Use this formula to work it out:
n*(n-1)/2 where n = number of criteria.
For example, a model with 5 criteria would be:
5*(5-1)/2 = 10 questions
Note that using a hierarchy reduces questions because you do not have to compare sub-criteria between branches of the model.
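The formula and the hierarchy saving can both be checked in a couple of lines. The grouping below (9 criteria split into 3 branches of 3) is just an example layout.

```python
# Count pairwise questions for a flat model vs. a two-level hierarchy.

def pairwise_questions(n):
    """Number of pairwise comparisons among n items: n*(n-1)/2."""
    return n * (n - 1) // 2

# Flat model with 9 criteria:
flat = pairwise_questions(9)  # 36 questions

# Same 9 criteria as 3 top-level branches of 3 sub-criteria each:
# compare the 3 branches, then the 3 sub-criteria within each branch.
hierarchical = pairwise_questions(3) + 3 * pairwise_questions(3)  # 12 questions

print(flat, hierarchical)
```

Dropping from 36 to 12 questions is why a hierarchy makes larger models practical for a workshop setting.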
Q: What are common mistakes with AHP?
- Too many criteria.
- Vague/overlapping criteria.
- Skipping stakeholder engagement.
- Ignoring consistency checks.
- Relying on spreadsheets instead of proper tools.
Q: How much does TransparentChoice Software cost?
See our website for pricing or book a call to discuss.