
Measuring Success

From activity metrics to business outcomes

Measuring ITIL v5 adoption success requires connecting operational activities to business outcomes through a structured approach.

The metrics hierarchy

Effective measurement operates at three levels:

| Level | What it measures | Who uses it | Example |
|---|---|---|---|
| Strategic | Business outcomes and value realized | Board, CIO, CFO | Revenue protected by incident prevention |
| Tactical | Process effectiveness and efficiency | IT Director, practice owners | First-call resolution rate, change success rate |
| Operational | Activity performance and compliance | Team leads, practitioners | Ticket queue depth, response time |
⚠️ Organizations typically measure only at the operational level, reporting ticket counts to boards. This misses the mark: leadership cares about business impact, not raw ticket volumes. Build a metrics hierarchy that translates operational data into strategic insight.

Strategic metrics framework

Critical Success Factors (CSFs) and Key Performance Indicators (KPIs)

For each strategic objective, define CSFs (conditions enabling success) and KPIs (measurable success indicators):

| Strategic Objective | CSF | KPI | Target |
|---|---|---|---|
| Reduce business disruption | Incidents are prevented or resolved quickly | Mean Time to Restore Service (MTRS) | under 30 min (P1), under 4 hours (P2) |
| Improve customer experience | Users are satisfied with IT services | Customer Satisfaction Score (CSAT) | > 4.2/5.0 |
| Accelerate innovation | Changes are deployed frequently and safely | Deployment frequency, change failure rate | Weekly releases, under 5% failure |
| Optimize costs | IT spending is aligned with business value | Cost per transaction, IT spend as % of revenue | Benchmark-aligned |
| Ensure compliance | Regulations and standards are met | Audit findings, compliance coverage | Zero critical findings |

OKR (Objectives and Key Results) alignment

For organizations using OKR frameworks, ITIL metrics can be structured as Key Results:

Objective: Deliver world-class digital services

| Key Result | Metric | Current | Target | Deadline |
|---|---|---|---|---|
| Reduce major incidents by 40% | P1 incident count (monthly) | 12 | 7 | Q4 2026 |
| Achieve 95% customer satisfaction | CSAT score | 3.8 | 4.75 | Q2 2027 |
| Deploy to production daily | Deployment frequency | Weekly | Daily | Q3 2026 |
| Reduce resolution time by 50% | P2 MTRS | 8 hours | 4 hours | Q1 2027 |
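Progress toward numeric key results like the ones above can be tracked as the fraction of the distance from baseline to target covered so far. This is an illustrative sketch, not part of any OKR or ITIL standard; the helper name and the baseline values are assumptions.

```python
# Hypothetical sketch: expressing numeric key results as data and computing
# progress toward each target. The same formula works whether the target is
# above or below the baseline (e.g. reducing incident counts).

def kr_progress(baseline, current, target):
    """Fraction of the baseline-to-target distance covered, clamped to [0, 1]."""
    if target == baseline:
        return 1.0
    return max(0.0, min(1.0, (current - baseline) / (target - baseline)))

# Numeric key results from the example objective: (name, baseline, current, target)
key_results = [
    ("P1 incident count (monthly)", 12, 10, 7),
    ("CSAT score", 3.8, 4.1, 4.75),
    ("P2 MTRS (hours)", 8, 6, 4),
]

for name, baseline, current, target in key_results:
    pct = kr_progress(baseline, current, target) * 100
    print(f"{name}: {pct:.0f}% of the way to target")
```

Because progress is normalized, "reduce incidents from 12 to 7" and "raise CSAT from 3.8 to 4.75" report on the same 0-100% scale.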

Experience Level Agreements (XLAs)

ITIL v5 introduces XLAs as a complement to traditional SLAs. While SLAs measure technical performance (uptime, response time), XLAs measure user experience (satisfaction, productivity, sentiment).

SLA vs XLA comparison

| Aspect | SLA | XLA |
|---|---|---|
| Focus | Technical compliance | User experience |
| Measures | Uptime, response time, resolution time | Satisfaction, productivity, sentiment |
| Perspective | Provider-centric | Consumer-centric |
| Risk | Can be "green" while users are unhappy | Reveals the reality of service consumption |
| Example | "System availability: 99.9%" | "85% of users rate their IT experience as good or excellent" |

Implementing XLAs

  1. Define experience indicators: Identify aspects of user experience that matter most (e.g., "Can I complete my work without IT interruptions?")
  2. Establish measurement methods: Use surveys, in-app feedback, sentiment analysis, productivity metrics
  3. Set experience targets: Base these on baseline data and user expectations
  4. Report alongside SLAs: XLAs complement rather than replace SLAs; report both for comprehensive visibility
  5. Act on experience data: Use XLA data to drive improvement; treat it as actionable insight, not just reporting
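The survey-based measurement in step 2 can be sketched in a few lines. The 1-10 scale and the 7.5 target mirror the example dashboard in this section; the function names and the sample responses are illustrative assumptions, not a standard schema.

```python
# Illustrative sketch: turning raw 1-10 survey responses into an XLA
# "happiness score" indicator and checking it against an experience target.

def happiness_score(responses):
    """Average of 1-10 survey ratings, rounded to one decimal place."""
    if not responses:
        return None  # no responses yet: no score rather than a misleading zero
    return round(sum(responses) / len(responses), 1)

def meets_target(score, target=7.5):
    """True only when a score exists and strictly exceeds the target."""
    return score is not None and score > target

survey = [8, 9, 6, 7, 10, 8, 7]  # invented quarterly responses
score = happiness_score(survey)
print(f"IT happiness score: {score} (target > 7.5 met: {meets_target(score)})")
```

Returning `None` for an empty survey (rather than 0) keeps a missing measurement from being reported as a failing one, which matters when XLAs sit next to SLAs on the same dashboard.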

Example XLA dashboard metrics

| Experience Indicator | Measurement Method | Target |
|---|---|---|
| IT happiness score | Quarterly survey (1-10 scale) | > 7.5 |
| Productive hours lost to IT issues | Monthly self-report + incident data | under 2 hours/employee/month |
| Self-service success rate | Portal analytics | > 75% of requests resolved via self-service |
| First-contact resolution experience | Post-interaction survey | > 90% satisfied |
| Digital tool adoption | Usage analytics | > 80% of target users actively using |

Metrics catalogue by practice area

Incident Management

| Metric | Definition | Industry Benchmark |
|---|---|---|
| Mean Time to Detect (MTTD) | Average time from incident occurrence to detection | under 5 min (automated), under 30 min (manual) |
| Mean Time to Respond (MTTR-respond) | Average time from detection to first response | under 15 min (P1), under 1 hour (P2) |
| Mean Time to Restore Service (MTRS) | Average time from detection to service restoration | under 30 min (P1), under 4 hours (P2) |
| First-call resolution rate | % of incidents resolved at first contact | 65-75% |
| Incident backlog age | Average age of open incidents | under 3 days |
| Repeat incident rate | % of incidents that are repeats of known issues | under 10% |
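The time-based metrics above reduce to averaging gaps between per-incident timestamps. A minimal sketch, assuming each incident record carries occurrence, detection, and restoration times; the sample records are invented, and real data would come from the ITSM tool.

```python
# Hedged sketch: computing MTTD and MTRS from per-incident timestamps.
from datetime import datetime as dt

# (occurred, detected, restored) — illustrative records, not real data
incidents = [
    (dt(2026, 3, 1, 9, 0), dt(2026, 3, 1, 9, 2), dt(2026, 3, 1, 9, 25)),
    (dt(2026, 3, 2, 14, 0), dt(2026, 3, 2, 14, 6), dt(2026, 3, 2, 14, 40)),
]

def mean_minutes(pairs):
    """Average gap between (earlier, later) timestamp pairs, in minutes."""
    pairs = list(pairs)
    gaps = [(later - earlier).total_seconds() / 60 for earlier, later in pairs]
    return sum(gaps) / len(gaps)

# MTTD: occurrence -> detection; MTRS: detection -> restoration
mttd = mean_minutes((occ, det) for occ, det, _ in incidents)
mtrs = mean_minutes((det, res) for _, det, res in incidents)
print(f"MTTD: {mttd:.0f} min, MTRS: {mtrs:.1f} min")
```

Computing each metric from the same timestamp records keeps MTTD and MTRS consistent with each other, rather than sourcing them from separate reports.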

Change Enablement

| Metric | Definition | Industry Benchmark |
|---|---|---|
| Change success rate | % of changes that do not cause incidents | > 95% |
| Change lead time | Time from change request to deployment | under 1 week (standard), under 1 day (pre-approved) |
| Emergency change ratio | % of changes classified as emergency | under 5% |
| Change backlog | Number of pending change requests | Context-dependent |
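The two percentage metrics above are straightforward to compute from change records. A sketch under assumed field names (`"type"`, `"caused_incident"` are not a standard schema), with invented sample data.

```python
# Illustrative calculation of change success rate and emergency change ratio.

changes = [
    {"type": "standard", "caused_incident": False},
    {"type": "standard", "caused_incident": False},
    {"type": "emergency", "caused_incident": True},
    {"type": "normal", "caused_incident": False},
]

def change_success_rate(changes):
    """% of changes that did not cause an incident."""
    ok = sum(1 for c in changes if not c["caused_incident"])
    return 100 * ok / len(changes)

def emergency_ratio(changes):
    """% of changes classified as emergency."""
    emergencies = sum(1 for c in changes if c["type"] == "emergency")
    return 100 * emergencies / len(changes)

print(f"Change success rate: {change_success_rate(changes):.0f}%")
print(f"Emergency change ratio: {emergency_ratio(changes):.0f}%")
```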

Service Level Management

| Metric | Definition | Industry Benchmark |
|---|---|---|
| SLA compliance | % of SLA targets met | > 95% |
| SLA breach trend | Month-over-month SLA breach count | Declining |
| Service review completion | % of scheduled service reviews completed | 100% |

Continual Improvement

| Metric | Definition | Industry Benchmark |
|---|---|---|
| Improvement backlog size | Number of improvement opportunities in register | Context-dependent |
| Improvement completion rate | % of improvements completed on schedule | > 70% |
| Improvement ROI | Value delivered / investment in improvement | > 3:1 |
| Practice maturity trend | Year-over-year maturity score change | Positive trend |
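The ROI and completion-rate figures above are simple ratios; this sketch makes the arithmetic explicit. All monetary figures and counts are invented for illustration.

```python
# Illustrative calculation of improvement ROI (value delivered / investment)
# and improvement completion rate, matching the definitions above.

def improvement_roi(value_delivered, investment):
    """Ratio of value delivered to money invested; 3.0 means 3:1."""
    return value_delivered / investment

def completion_rate(completed_on_schedule, total_scheduled):
    """% of scheduled improvements completed on time."""
    return 100 * completed_on_schedule / total_scheduled

roi = improvement_roi(value_delivered=240_000, investment=60_000)
rate = completion_rate(completed_on_schedule=18, total_scheduled=24)
print(f"Improvement ROI: {roi:.0f}:1 (target > 3:1 met: {roi > 3})")
print(f"Completion rate: {rate:.0f}% (target > 70% met: {rate > 70})")
```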

Executive dashboard design

What to include

An effective executive dashboard answers four questions:

  1. Are we delivering value? (Strategic metrics: customer satisfaction, revenue impact)
  2. Are we operating effectively? (Tactical metrics: SLA compliance, incident trends)
  3. Are we improving? (Improvement metrics: maturity trend, CI completion)
  4. Are we managing risk? (Risk metrics: security incidents, compliance status)
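The four questions above can double as a structural check on the dashboard itself: every metric is tagged with the question it answers, and the dashboard flags any question left uncovered. The metric names, values, and owners below are illustrative assumptions.

```python
# Sketch: an executive dashboard as data, where each metric answers one of
# the four questions and carries a target and an owner.

dashboard = [
    {"question": "value", "metric": "CSAT", "value": 4.3, "target": 4.2, "owner": "CX lead"},
    {"question": "operating", "metric": "SLA compliance %", "value": 96.0, "target": 95.0, "owner": "SLM owner"},
    {"question": "improving", "metric": "CI completion %", "value": 72.0, "target": 70.0, "owner": "CI manager"},
    {"question": "risk", "metric": "Critical audit findings", "value": 0, "target": 0, "owner": "CISO"},
]

def unanswered(dashboard):
    """Return which of the four questions no metric on the dashboard answers."""
    needed = {"value", "operating", "improving", "risk"}
    return needed - {m["question"] for m in dashboard}

for m in dashboard:
    print(f"{m['metric']:<24} {m['value']} (target {m['target']}, owner: {m['owner']})")
print("Uncovered questions:", unanswered(dashboard) or "none")
```

Requiring a target and an owner on every record also enforces two of the exclusion rules below: a metric with no target or no owner simply cannot be added to the structure.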

What to exclude

  • Raw ticket counts (meaningless without context)
  • Too many metrics (maximum 12-15 per dashboard)
  • Metrics without targets (data without context is noise)
  • Metrics without owners (unowned metrics are ignored)
  • Vanity metrics (metrics that always look good but do not drive action)

Reporting cadence

| Audience | Frequency | Content |
|---|---|---|
| Board | Quarterly | Strategic KPIs, major risks, investment outcomes |
| CIO/CTO | Monthly | Practice performance, improvement progress, resource allocation |
| IT Director | Weekly | Operational metrics, team performance, escalations |
| Practice owners | Daily | Operational dashboards, queue management, trend alerts |

The balanced scorecard approach

For organizations using balanced scorecard frameworks, ITIL metrics map naturally onto the four perspectives:

| Perspective | ITIL Metrics |
|---|---|
| Financial | Cost per service, IT spend optimization, improvement ROI |
| Customer | CSAT, XLA scores, SLA compliance, NPS |
| Internal Process | Change success rate, incident trends, deployment frequency |
| Learning and Growth | Practice maturity, training completion, knowledge base coverage |

Last updated on April 2, 2026

ITIL® is a registered trademark of PeopleCert. © 2026 ITIL v5 Compass