ITIL v5 Compass

Case Study: AI Chatbot Deployment

Scenario

Organization: Regional bank (3,000 employees, 200,000 retail customers) with traditional ITSM operation running on ITIL practices.

Initiative: Deploy an AI-powered chatbot for the customer service desk to reduce call volume, improve response times, and provide 24/7 support for common banking queries (account balance, transaction history, card activation, branch information).

Budget: $500,000 for the initial phase. Expected ROI within 18 months.

Challenge: The bank operates under strict financial regulations (data privacy, consumer protection, audit requirements). Any AI system that interacts with customer financial data must meet compliance standards.


Phase 1: Discover

Business case

| Metric | Current State | Target State |
|---|---|---|
| Average wait time (phone) | 8 minutes | 2 minutes |
| Calls handled per day | 1,200 | 1,200 (400 deflected to chatbot) |
| 24/7 availability | No (office hours only) | Yes (chatbot) |
| Cost per interaction (phone) | $12 | $12 (phone) / $0.50 (chatbot) |
| Customer satisfaction (phone) | 72% | 85% (blended) |
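
The table implies a simple payback calculation: each deflected call saves the difference between the phone and chatbot cost per interaction. A rough back-of-envelope sketch using only the figures above (variable names are illustrative; a real ROI model would also include the chatbot platform's run costs):

```python
# Rough payback estimate from the business-case figures above.
CALLS_DEFLECTED_PER_DAY = 400
COST_PHONE = 12.00    # $ per phone interaction
COST_CHATBOT = 0.50   # $ per chatbot interaction
BUDGET = 500_000      # $ initial phase

def daily_savings(deflected: int = CALLS_DEFLECTED_PER_DAY) -> float:
    """Savings from handling a call via chatbot instead of phone."""
    return deflected * (COST_PHONE - COST_CHATBOT)

def payback_days(budget: float = BUDGET) -> float:
    """Days of operation needed to recover the initial budget."""
    return budget / daily_savings()

print(f"Daily savings: ${daily_savings():,.0f}")  # $4,600
print(f"Payback: {payback_days():.0f} days")      # ~109 days, well inside 18 months
```

Even this crude estimate shows why the 18-month ROI target is conservative: at $4,600 saved per day, the $500,000 budget is recovered in roughly four months of operation.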

ITIL v5 analysis: 6C Model assessment

Before selecting a solution, the IT team assessed which AI capabilities (from the ITIL AI Capability Model) the chatbot needed:

| 6C Capability | Required? | Application |
|---|---|---|
| Creation | Yes | Generate personalized responses to customer queries |
| Curation | No | Not needed for initial phase |
| Clarification | Yes | Help customers understand account statements and transaction descriptions |
| Cognition | Yes | Detect patterns in customer queries to identify service issues early |
| Communication | Yes (core) | Act as the primary conversational interface |
| Coordination | Yes | Route complex queries to human agents automatically |

Complexity context: Complex

This deployment sits firmly in the complex context: no one can predict exactly how customers will interact with the chatbot, what edge cases will emerge, or how regulatory expectations will evolve. The approach must be probe-sense-respond: deploy incrementally, observe, and adapt.
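
Probe-sense-respond can be operationalized as a percentage-based rollout: route a small, deterministic slice of customers to the chatbot, observe the results, and widen the slice as confidence grows. A minimal sketch (the hashing scheme and function names are illustrative, not the bank's actual mechanism):

```python
import hashlib

def in_rollout(customer_id: str, rollout_percent: int) -> bool:
    """Deterministically assign a customer to the chatbot cohort.

    Hashing keeps the assignment stable across sessions, so the same
    customer always gets the same experience while the percentage is
    gradually dialed up (probe -> sense -> respond).
    """
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Probe with 5% of customers first; widen only after observing results.
pilot_cohort = [cid for cid in ("C0001", "C0002", "C0003")
                if in_rollout(cid, 5)]
```

Deterministic bucketing also makes the later rollback plan cheap: setting the percentage to zero routes everyone back to phone agents without any per-customer state.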


Phase 2: Design

Human-centred design approach

The design team followed ITIL v5's emphasis on human-centred design:

User research

Interviewed 50 customers to understand their most common queries and frustrations

Journey mapping

Mapped the customer service journey from the customer's perspective

Persona development

Created three customer personas (tech-savvy, tech-averse, business customer)

Prototype testing

Tested conversation flows with real customers before development

Key design decisions

| Decision | Choice | ITIL Reasoning |
|---|---|---|
| Data residency | On-premises LLM, not cloud | Legal factor (PESTLE): banking regulations require customer data to remain within national borders |
| Escalation path | Chatbot can transfer to human agent at any time | Guiding principle: "Focus on value" (the chatbot must never trap a customer in a loop) |
| Transparency | Chatbot identifies itself as AI, not human | Responsible AI: customers have the right to know they are interacting with an AI |
| Scope limitation | Phase 1: informational queries only (no transactions) | Risk management: limit scope until the system is proven reliable |
| Feedback loop | Every interaction ends with a satisfaction survey | Continual improvement: data-driven enhancement |
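
The escalation-path decision (never trap a customer in a loop) reduces to a small set of triggers, any one of which hands the conversation to a human. A sketch with assumed trigger names and thresholds:

```python
from dataclasses import dataclass

@dataclass
class TurnState:
    user_requested_human: bool  # explicit "talk to an agent"
    confidence: float           # model confidence in its last answer
    failed_turns: int           # consecutive turns the bot could not resolve

def should_escalate(state: TurnState,
                    min_confidence: float = 0.6,
                    max_failed_turns: int = 2) -> bool:
    """Escalate on explicit request, low confidence, or a repeated-failure loop."""
    return (state.user_requested_human
            or state.confidence < min_confidence
            or state.failed_turns >= max_failed_turns)
```

Making escalation an OR over independent triggers, rather than a single score, is what guarantees the "at any time" property: an explicit customer request always wins, regardless of how confident the model is.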

Phase 3: Build and Transition

Change Enablement

The chatbot deployment was classified as a normal change (not standard, not emergency) because:

  • It introduces a new customer-facing capability
  • It processes regulated financial data
  • It requires regulatory approval
  • Risk assessment identified potential customer harm scenarios

Change Component Details

| Component | Detail |
|---|---|
| Change authority | IT Director (approved by Compliance Committee) |
| Risk assessment | Medium (customer-facing, regulated data, new technology) |
| Rollback plan | Disable chatbot and route all queries to phone agents |
| Testing | 4-week pilot with 500 customers (opt-in) |
| Success criteria | 80%+ satisfaction, 30%+ deflection rate, zero data breaches |

Transition activities

| Activity | ITIL Practice |
|---|---|
| Knowledge base prepared for chatbot training | Knowledge Management |
| Monitoring dashboards configured for chatbot performance | Monitoring and Event Management |
| Service desk agents trained on chatbot escalation process | Workforce and Talent Management |
| SLAs updated to include chatbot-specific metrics | Service Level Management |
| Security assessment completed | Information Security Management |
| Data protection impact assessment filed with regulator | Risk Management |
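
The monitoring dashboards in the table imply threshold-based alerting on chatbot metrics. A minimal sketch of how a breach might raise an event (metric names and thresholds are illustrative, not the bank's actual configuration):

```python
def check_thresholds(metrics: dict[str, float],
                     thresholds: dict[str, float]) -> list[str]:
    """Return one alert for every metric that breaches its threshold."""
    return [f"ALERT: {name}={metrics[name]} exceeds {limit}"
            for name, limit in thresholds.items()
            if metrics.get(name, 0.0) > limit]

alerts = check_thresholds(
    {"error_rate": 0.04, "avg_latency_s": 1.2},
    {"error_rate": 0.02, "avg_latency_s": 3.0},
)
# error_rate breaches its threshold; avg_latency_s does not
```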

Phase 4: Operate and Monitor

Pilot results (4 weeks)

| Metric | Target | Actual | Status |
|---|---|---|---|
| Customer satisfaction | 80% | 76% | Below target |
| Query deflection rate | 30% | 42% | Above target |
| Escalation to human | Under 40% | 35% | On target |
| Data incidents | 0 | 0 | On target |
| False information provided | 0 | 7 instances | Requires attention |
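
The pilot actuals can be checked mechanically against the success criteria recorded in the change (80%+ satisfaction, 30%+ deflection, zero data breaches). A sketch of such a gate check:

```python
def evaluate_pilot(satisfaction: float, deflection: float,
                   data_incidents: int) -> dict[str, bool]:
    """Compare pilot actuals with the success criteria from the change record."""
    return {
        "satisfaction >= 80%": satisfaction >= 0.80,
        "deflection >= 30%": deflection >= 0.30,
        "zero data incidents": data_incidents == 0,
    }

results = evaluate_pilot(satisfaction=0.76, deflection=0.42, data_incidents=0)
# Satisfaction misses its target; the other two criteria pass.
```

Encoding the criteria as a checklist rather than a single pass/fail flag matters here: the mixed result is exactly what feeds the "progress iteratively with feedback" discussion at the end of this case study.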

Problem identification

The satisfaction score (76%) was below the 80% target. Problem management investigation revealed:

| Issue | Count | Root Cause |
|---|---|---|
| Chatbot could not understand regional dialect variations | 23 queries | Training data was standard English only |
| Chatbot gave incorrect branch opening hours | 7 queries | Knowledge base had outdated branch information |
| Transfer to human agent felt abrupt | 12 queries | No context was passed to the human agent |
| Customers wanted to complete transactions, not just get information | 31 queries | Scope limitation (Phase 1 is informational only) |

Improvement actions

| Action | Practice | Impact |
|---|---|---|
| Retrain model with regional dialect samples | Knowledge Management + AI governance | High |
| Automate branch information sync from source system | Service Configuration Management | Medium |
| Pass conversation context to human agent on escalation | Service Desk | High |
| Plan Phase 2: transactional capabilities | Product roadmap (Discover) | Future |
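
The highest-impact fix, passing conversation context to the human agent, amounts to serializing a handoff payload at escalation time so the agent does not start from zero. A sketch with assumed field names:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class Handoff:
    """Context passed to the human agent so the transfer is not abrupt."""
    customer_id: str
    intent: str                                      # what the bot believes the customer wants
    transcript: list = field(default_factory=list)   # full conversation so far
    attempted_answers: list = field(default_factory=list)

def build_handoff(customer_id: str, intent: str,
                  transcript: list, attempted: list) -> dict:
    """Serialize the handoff so it can travel over the agent-desk API."""
    return asdict(Handoff(customer_id, intent, list(transcript), list(attempted)))

payload = build_handoff("C0042", "card_activation",
                        ["Hi, my new card isn't working"], [])
```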

Governance throughout

AI governance applied

| Governance Concern | How It Was Addressed |
|---|---|
| Bias | Tested chatbot responses across customer demographics; no significant bias detected |
| Transparency | Chatbot clearly identifies as AI; provides confidence scores on answers |
| Data privacy | On-premises deployment; no customer data leaves national jurisdiction |
| Accountability | AI Product Owner designated as responsible for chatbot decisions |
| Audit trail | All conversations logged with timestamps and model version |
| Human override | Customers can request human agent at any point; agents can override chatbot suggestions |
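
The audit-trail requirement (every conversation logged with timestamps and model version) is typically met with a structured, append-only record per turn. A sketch with illustrative field names:

```python
import json
from datetime import datetime, timezone

def audit_record(conversation_id: str, model_version: str,
                 user_text: str, bot_text: str, confidence: float) -> str:
    """One JSON line per turn, timestamped and tied to the model version
    that produced the answer (so auditors can reproduce any response)."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "conversation_id": conversation_id,
        "model_version": model_version,
        "user": user_text,
        "bot": bot_text,
        "confidence": confidence,
    })

line = audit_record("conv-42", "chatbot-v1.3",
                    "What are the branch opening hours?", "9am to 5pm", 0.92)
```

Recording the model version per turn, not per deployment, is the detail that makes retraining (such as the dialect fix above) auditable: regulators can see exactly which model produced which answer.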

Governance pattern used

The bank used a Compliance-based governance pattern (high authority, high assurance), appropriate for:

  • Regulated industry
  • Customer-facing AI
  • Financial data processing
  • First AI deployment in the organization

ITIL v5 concepts demonstrated

| Concept | Application |
|---|---|
| 6C Model | Systematic assessment of which AI capabilities were needed |
| Complexity context | Complex: probe-sense-respond through pilot deployment |
| Human-centred design | Customer research, journey mapping, persona development |
| PESTLE | Legal and regulatory factors drove technology and deployment decisions |
| Governance patterns | Compliance-based for regulated financial environment |
| Continual Improvement | Pilot results drove immediate improvements to knowledge base and model |
| Value co-creation | Customer feedback surveys enabled joint optimization of the service |

Discussion questions

  1. The chatbot satisfaction score was 76% vs the 80% target. Should the project have been halted? What does ITIL's "progress iteratively with feedback" principle suggest?

  2. Seven instances of false information were provided. Using the ITIL risk management practice, how should this risk be categorized and mitigated?

  3. The bank chose an on-premises LLM due to data residency requirements. How does this decision map to the PESTLE analysis (Legal factor)?

  4. If the bank's complexity context changes from "complex" to "ordered" after 12 months of successful operation, how should the governance pattern change?

