Measuring Success
From activity metrics to business outcomes
Measuring ITIL v5 adoption success requires a structured approach that connects operational activities to business outcomes.
The metrics hierarchy
Effective measurement operates at three levels:
| Level | What it measures | Who uses it | Example |
|---|---|---|---|
| Strategic | Business outcomes and value realized | Board, CIO, CFO | Revenue protected by incident prevention |
| Tactical | Process effectiveness and efficiency | IT Director, practice owners | First-call resolution rate, change success rate |
| Operational | Activity performance and compliance | Team leads, practitioners | Ticket queue depth, response time |
Many organizations measure only at the operational level, reporting raw ticket counts to their boards. This misses the mark: leadership cares about business impact, not ticket volumes. Build a metrics hierarchy that translates operational data into strategic insight, as the sketch below illustrates.
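As a minimal sketch of that translation, the snippet below rolls a list of incident records up into one figure per level. The record fields and the revenue-per-minute assumption are illustrative, not taken from any particular ITSM tool.

```python
from dataclasses import dataclass

# Hypothetical incident record; field names are illustrative.
@dataclass
class Incident:
    priority: str                 # "P1", "P2", ...
    resolved_first_contact: bool
    downtime_minutes: float

def rollup(incidents: list[Incident], revenue_per_minute: float) -> dict:
    """Translate operational records into tactical and strategic figures."""
    total = len(incidents)
    # Tactical: process effectiveness (first-contact resolution rate).
    fcr_rate = sum(i.resolved_first_contact for i in incidents) / total
    # Strategic: business impact, assuming P1 downtime maps linearly to revenue.
    p1_downtime = sum(i.downtime_minutes for i in incidents if i.priority == "P1")
    return {
        "operational.ticket_count": total,
        "tactical.first_contact_resolution": round(fcr_rate, 2),
        "strategic.revenue_at_risk": p1_downtime * revenue_per_minute,
    }

print(rollup([Incident("P1", False, 45), Incident("P2", True, 0)],
             revenue_per_minute=500.0))
```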
Strategic metrics framework
Critical Success Factors (CSFs) and Key Performance Indicators (KPIs)
For each strategic objective, define CSFs (conditions enabling success) and KPIs (measurable success indicators):
| Strategic Objective | CSF | KPI | Target |
|---|---|---|---|
| Reduce business disruption | Incidents are prevented or resolved quickly | Mean Time to Restore Service (MTRS) | under 30 min (P1), under 4 hours (P2) |
| Improve customer experience | Users are satisfied with IT services | Customer Satisfaction Score (CSAT) | > 4.2/5.0 |
| Accelerate innovation | Changes are deployed frequently and safely | Deployment frequency, Change failure rate | Weekly releases, under 5% failure |
| Optimize costs | IT spending is aligned with business value | Cost per transaction, IT spend as % of revenue | Benchmark-aligned |
| Ensure compliance | Regulations and standards are met | Audit findings, Compliance coverage | Zero critical findings |
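One lightweight way to operationalize this table is to pair each KPI with its target test and derive a red/green status. A sketch, using the thresholds above and an illustrative `KPI` structure:

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative KPI structure; thresholds come from the table above.
@dataclass
class KPI:
    name: str
    value: float
    meets_target: Callable[[float], bool]

    def status(self) -> str:
        return "green" if self.meets_target(self.value) else "red"

kpis = [
    KPI("P1 MTRS (minutes)", 24.0, lambda v: v < 30),
    KPI("CSAT (1-5)", 4.1, lambda v: v > 4.2),
    KPI("Change failure rate", 0.03, lambda v: v < 0.05),
]
for kpi in kpis:
    print(f"{kpi.name}: {kpi.value} -> {kpi.status()}")
```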
OKR (Objectives and Key Results) alignment
For organizations using OKR frameworks, ITIL metrics can be structured as Key Results:
Objective: Deliver world-class digital services
| Key Result | Metric | Current | Target | Deadline |
|---|---|---|---|---|
| Reduce major incidents by 40% | P1 incident count (monthly) | 12 | 7 | Q4 2026 |
| Achieve 95% customer satisfaction | CSAT score | 3.8 | 4.75 | Q2 2027 |
| Deploy to production daily | Deployment frequency | Weekly | Daily | Q3 2026 |
| Reduce resolution time by 50% | P2 MTRS | 8 hours | 4 hours | Q1 2027 |
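Key Result progress is typically reported as the fraction of the distance covered from a baseline to the target. A small sketch, assuming the baseline is the "Current" value recorded when the OKR was set and the mid-period readings are hypothetical:

```python
def kr_progress(baseline: float, current: float, target: float) -> float:
    """Fraction of the way from baseline to target (works in either direction)."""
    if baseline == target:
        return 1.0
    return max(0.0, min(1.0, (current - baseline) / (target - baseline)))

# Assumed baselines: the "Current" column at the time the OKR was set.
print(kr_progress(baseline=12, current=9, target=7))        # P1 incidents: 0.6
print(kr_progress(baseline=3.8, current=4.2, target=4.75))  # CSAT: ~0.42
```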
Experience Level Agreements (XLAs)
ITIL v5 introduces XLAs as a complement to traditional SLAs. While SLAs measure technical performance (uptime, response time), XLAs measure user experience (satisfaction, productivity, sentiment).
SLA vs XLA comparison
| Aspect | SLA | XLA |
|---|---|---|
| Focus | Technical compliance | User experience |
| Measures | Uptime, response time, resolution time | Satisfaction, productivity, sentiment |
| Perspective | Provider-centric | Consumer-centric |
| Risk | Can be "green" while users are unhappy | Reveals the reality of service consumption |
| Example | "System availability: 99.9%" | "85% of users rate their IT experience as good or excellent" |
Implementing XLAs
- Define experience indicators: Identify aspects of user experience that matter most (e.g., "Can I complete my work without IT interruptions?")
- Establish measurement methods: Use surveys, in-app feedback, sentiment analysis, and productivity metrics (see the scoring sketch after this list)
- Set experience targets: Base these on baseline data and user expectations
- Report alongside SLAs: XLAs complement rather than replace SLAs; report both for comprehensive visibility
- Act on experience data: Use XLA data to drive improvement; treat it as actionable insight, not just reporting
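A minimal scoring sketch for a survey-based experience indicator, assuming a 1-10 response scale and an illustrative "good" threshold of 8; neither is prescribed by ITIL:

```python
def happiness_score(responses: list[int]) -> float:
    """Mean of 1-10 survey responses."""
    return sum(responses) / len(responses)

def pct_good_or_excellent(responses: list[int], threshold: int = 8) -> float:
    """Share of respondents at or above an assumed 'good' threshold."""
    return sum(r >= threshold for r in responses) / len(responses)

responses = [9, 7, 8, 6, 10, 8, 5, 9]
print(f"IT happiness score: {happiness_score(responses):.1f} (target > 7.5)")
print(f"Good or excellent: {pct_good_or_excellent(responses):.0%}")
```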
Example XLA dashboard metrics
| Experience Indicator | Measurement Method | Target |
|---|---|---|
| IT happiness score | Quarterly survey (1-10 scale) | > 7.5 |
| Productive hours lost to IT issues | Monthly self-report + incident data | under 2 hours/employee/month |
| Self-service success rate | Portal analytics | > 75% of requests resolved via self-service |
| First-contact resolution experience | Post-interaction survey | > 90% satisfied |
| Digital tool adoption | Usage analytics | > 80% of target users actively using |
Metrics catalogue by practice area
Incident Management
| Metric | Definition | Industry Benchmark |
|---|---|---|
| Mean Time to Detect (MTTD) | Average time from incident occurrence to detection | under 5 min (automated), under 30 min (manual) |
| Mean Time to Respond (MTTR-respond) | Average time from detection to first response | under 15 min (P1), under 1 hour (P2) |
| Mean Time to Restore Service (MTRS) | Average time from detection to service restoration | under 30 min (P1), under 4 hours (P2) |
| First-call resolution rate | % of incidents resolved at first contact | 65-75% |
| Incident backlog age | Average age of open incidents | under 3 days |
| Repeat incident rate | % of incidents that are repeats of known issues | under 10% |
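These timing metrics fall out directly from incident lifecycle timestamps. A sketch, assuming each record carries occurrence, detection, and restoration times (field names are illustrative):

```python
from datetime import datetime, timedelta

# Illustrative incident lifecycle timestamps; field names are assumptions.
incidents = [
    {"occurred": datetime(2026, 3, 1, 9, 0),
     "detected": datetime(2026, 3, 1, 9, 3),
     "restored": datetime(2026, 3, 1, 9, 28)},
    {"occurred": datetime(2026, 3, 2, 14, 0),
     "detected": datetime(2026, 3, 2, 14, 20),
     "restored": datetime(2026, 3, 2, 16, 5)},
]

def mean_minutes(deltas: list[timedelta]) -> float:
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

mttd = mean_minutes([i["detected"] - i["occurred"] for i in incidents])
mtrs = mean_minutes([i["restored"] - i["detected"] for i in incidents])
print(f"MTTD: {mttd:.1f} min, MTRS: {mtrs:.1f} min")  # MTTD: 11.5, MTRS: 65.0
```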
Change Enablement
| Metric | Definition | Industry Benchmark |
|---|---|---|
| Change success rate | % of changes that do not cause incidents | > 95% |
| Change lead time | Time from change request to deployment | under 1 week (standard), under 1 day (pre-approved) |
| Emergency change ratio | % of changes classified as emergency | under 5% |
| Change backlog | Number of pending change requests | Context-dependent |
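Both ratio metrics reduce to simple counts over change records. A sketch with assumed record fields:

```python
# Illustrative change records; the fields are assumptions.
changes = [
    {"type": "standard", "caused_incident": False},
    {"type": "emergency", "caused_incident": True},
    {"type": "normal", "caused_incident": False},
    {"type": "standard", "caused_incident": False},
]

total = len(changes)
success_rate = sum(not c["caused_incident"] for c in changes) / total
emergency_ratio = sum(c["type"] == "emergency" for c in changes) / total
print(f"Change success rate: {success_rate:.0%} (target > 95%)")
print(f"Emergency change ratio: {emergency_ratio:.0%} (target < 5%)")
```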
Service Level Management
| Metric | Definition | Industry Benchmark |
|---|---|---|
| SLA compliance | % of SLA targets met | > 95% |
| SLA breach trend | Month-over-month SLA breach count | Declining |
| Service review completion | % of scheduled service reviews completed | 100% |
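The breach trend is just monthly breach counts compared period over period. A sketch, assuming each SLA measurement is tagged with its month and a breach flag:

```python
from collections import Counter

# Assumed list of (month, breached) tuples derived from SLA measurements.
measurements = [
    ("2026-01", True), ("2026-01", False), ("2026-01", True),
    ("2026-02", True), ("2026-02", False),
    ("2026-03", False), ("2026-03", False),
]

breaches = Counter(month for month, breached in measurements if breached)
months = sorted({m for m, _ in measurements})
counts = [breaches.get(m, 0) for m in months]
trend = "declining" if all(a >= b for a, b in zip(counts, counts[1:])) else "not declining"
print(dict(zip(months, counts)), "->", trend)
# {'2026-01': 2, '2026-02': 1, '2026-03': 0} -> declining
```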
Continual Improvement
| Metric | Definition | Industry Benchmark |
|---|---|---|
| Improvement backlog size | Number of improvement opportunities in register | Context-dependent |
| Improvement completion rate | % of improvements completed on schedule | > 70% |
| Improvement ROI | Value delivered / investment in improvement | > 3:1 |
| Practice maturity trend | Year-over-year maturity score change | Positive trend |
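A worked example of the ROI target, with illustrative figures:

```python
# Illustrative figures; "value delivered" would typically combine cost
# avoided and productivity gained, per the organization's value model.
value_delivered = 180_000
investment = 50_000

roi_ratio = value_delivered / investment
print(f"Improvement ROI: {roi_ratio:.1f}:1 (target > 3:1)")  # 3.6:1 -> on target
```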
Executive dashboard design
What to include
An effective executive dashboard answers four questions:
- Are we delivering value? (Strategic metrics: customer satisfaction, revenue impact)
- Are we operating effectively? (Tactical metrics: SLA compliance, incident trends)
- Are we improving? (Improvement metrics: maturity trend, CI completion)
- Are we managing risk? (Risk metrics: security incidents, compliance status)
What to exclude
- Raw ticket counts (meaningless without context)
- Too many metrics (maximum 12-15 per dashboard)
- Metrics without targets (data without context is noise)
- Metrics without owners (unowned metrics are ignored; see the validation sketch after this list)
- Vanity metrics (metrics that always look good but do not drive action)
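A minimal sketch that enforces these exclusion rules programmatically; the `Metric` structure and the 15-metric cap are assumptions drawn from the list above:

```python
from dataclasses import dataclass

# Illustrative dashboard metric; not a standard schema.
@dataclass
class Metric:
    name: str
    target: float | None
    owner: str | None

def validate_dashboard(metrics: list[Metric], max_metrics: int = 15) -> list[str]:
    """Flag dashboard entries that violate the exclusion rules."""
    problems = []
    if len(metrics) > max_metrics:
        problems.append(f"{len(metrics)} metrics exceeds the {max_metrics} cap")
    for m in metrics:
        if m.target is None:
            problems.append(f"'{m.name}' has no target (noise without context)")
        if m.owner is None:
            problems.append(f"'{m.name}' has no owner (will be ignored)")
    return problems

print(validate_dashboard([Metric("CSAT", 4.2, "Service Desk Lead"),
                          Metric("Ticket count", None, None)]))
```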
Reporting cadence
| Audience | Frequency | Content |
|---|---|---|
| Board | Quarterly | Strategic KPIs, major risks, investment outcomes |
| CIO/CTO | Monthly | Practice performance, improvement progress, resource allocation |
| IT Director | Weekly | Operational metrics, team performance, escalations |
| Practice owners | Daily | Operational dashboards, queue management, trend alerts |
The balanced scorecard approach
For organizations using balanced scorecard frameworks, ITIL metrics map naturally:
| Perspective | ITIL Metrics |
|---|---|
| Financial | Cost per service, IT spend optimization, improvement ROI |
| Customer | CSAT, XLA scores, SLA compliance, NPS |
| Internal Process | Change success rate, incident trends, deployment frequency |
| Learning and Growth | Practice maturity, training completion, knowledge base coverage |
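A sketch of that mapping as a lookup table used to group metric values by perspective; the metric keys are illustrative:

```python
from collections import defaultdict

# Assumed metric-to-perspective mapping drawn from the table above.
perspective_of = {
    "improvement_roi": "Financial",
    "csat": "Customer",
    "change_success_rate": "Internal Process",
    "practice_maturity": "Learning and Growth",
}

def group_by_perspective(scores: dict[str, float]) -> dict[str, dict[str, float]]:
    grouped: dict[str, dict[str, float]] = defaultdict(dict)
    for metric, value in scores.items():
        grouped[perspective_of.get(metric, "Unmapped")][metric] = value
    return dict(grouped)

print(group_by_perspective({"csat": 4.3, "change_success_rate": 0.97}))
```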
Related pages
- Implementation Roadmap (where measurement fits in adoption)
- Maturity Assessment Guide (baseline measurement)
- Business Case & ROI (financial metrics and justification)
- Service Level Management (SLA management practice)