RAG-Powered Customer Support for a Leading Fintech Platform
A Series C fintech company processing $2B+ in annual transactions across 6 MENA countries, serving 3M+ active users.
Ticket Automation: 73%
Resolution Time: 8 min
CSAT Score: +22 pts
Support Cost Reduction: 40%
When Scale Breaks Human Support
The client was adding 200K new users per quarter. Their support team of 45 agents was already maxed out, and the rule-based chatbot — built on rigid decision trees — could only handle the simplest "what's my balance" type queries. Anything involving nuance, regulatory context, or multi-step troubleshooting went straight to a human. The breaking point came during Ramadan 2025, when ticket volume spiked 3x and average wait times hit 11 hours. That's when they called us.
Architecture: RAG with Financial Guardrails
We studied how Klarna and Stripe approach AI-assisted support and built a system purpose-built for MENA fintech. The core is a retrieval-augmented generation pipeline: user queries hit Qdrant (our vector database) to find the top-K most relevant knowledge articles, which are then fed as context to Claude 3.5 Sonnet for response generation. What makes this different from a generic chatbot is the guardrail layer. For any response involving account balances, transaction amounts, or regulatory information, the system cross-references the generated response against the actual database record. If there's a discrepancy, the response is flagged for human review instead of being sent. This prevents the hallucination problem that plagues most LLM-powered support bots in financial services.
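To make the guardrail idea concrete, here is a minimal sketch of the cross-referencing step: numeric claims in a generated response are checked against the authoritative account record before the reply goes out. All names here (`check_response`, `extract_amounts`, the record shape) are illustrative, not the client's actual code.

```python
import re
from dataclasses import dataclass

@dataclass
class GuardrailResult:
    approved: bool
    reason: str

def extract_amounts(text: str) -> list[float]:
    """Pull monetary figures like '1,250.00' or '300' out of a response."""
    return [float(m.replace(",", "")) for m in re.findall(r"\d[\d,]*\.?\d*", text)]

def check_response(response: str, db_record: dict) -> GuardrailResult:
    """Flag the response for human review if any stated amount
    does not match a value in the authoritative database record."""
    trusted = {round(v, 2) for v in db_record.values() if isinstance(v, (int, float))}
    for amount in extract_amounts(response):
        if round(amount, 2) not in trusted:
            return GuardrailResult(False, f"amount {amount} not grounded in record")
    return GuardrailResult(True, "all amounts grounded")
```

A real deployment would also verify dates, account identifiers, and regulatory citations, but the pattern is the same: never trust a generated number that cannot be matched to a source-of-truth field.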
[Chart: Automated Resolution Rate (%) Over Time]
[Chart: Ticket Category Breakdown]
n8n as the Orchestration Brain
n8n is the backbone of the entire support pipeline. When a ticket arrives, an n8n workflow classifies it by category and urgency, determines whether it's suitable for AI resolution, triggers the RAG pipeline, evaluates the confidence score, and either sends the response or routes to a human agent — all in under 3 seconds. We also built n8n workflows for continuous improvement: every human-resolved ticket that was initially attempted by the AI gets fed back into the training pipeline. This feedback loop is why automation rates kept climbing month over month without manual intervention.
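The routing decision the workflow makes can be sketched as follows. This is expressed in Python for readability; in production the same branching lives in n8n nodes, and the category names and 0.7 threshold here are assumptions for illustration.

```python
from typing import Callable

# Assumed set of categories the AI is allowed to resolve autonomously.
AI_ELIGIBLE = {"balance_inquiry", "card_issue", "kyc_status"}

def route_ticket(category: str,
                 urgency: str,
                 run_rag: Callable[[], tuple[str, float]],
                 threshold: float = 0.7) -> dict:
    """Decide whether a ticket is answered automatically or escalated.

    run_rag performs retrieval + generation and returns (draft, confidence).
    """
    if urgency == "critical" or category not in AI_ELIGIBLE:
        return {"action": "human", "draft": None}
    draft, confidence = run_rag()
    if confidence >= threshold:
        return {"action": "auto_reply", "draft": draft}
    # Low confidence: escalate, but hand the agent the AI draft as a head start.
    return {"action": "human", "draft": draft}
```

Note that the escalation path still carries the draft forward; this is what later lets agents avoid starting from scratch on the tickets the AI could not close.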
[Chart: Average Resolution Time (minutes)]
The Confidence Scoring System
One of the key innovations was our confidence scoring system. Every AI-generated response gets a score from 0 to 1 based on: retrieval relevance (how well the source docs match the query), response coherence (internal consistency of the generated answer), and factual grounding (whether claims are traceable to source documents). Responses scoring below 0.7 get routed to human agents with the AI's draft response attached — so agents don't start from scratch. This hybrid approach means the AI handles the easy 73% autonomously while giving humans a head start on the remaining 27%.
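A simple way to combine the three sub-scores into a single 0-to-1 confidence value is a weighted average; the weights below are an assumption for this sketch, only the three axes and the 0.7 cutoff come from the description above.

```python
def confidence_score(retrieval_relevance: float,
                     coherence: float,
                     grounding: float,
                     weights: tuple[float, float, float] = (0.4, 0.2, 0.4)) -> float:
    """Weighted average of the three sub-scores, each expected in [0, 1]."""
    scores = (retrieval_relevance, coherence, grounding)
    assert all(0.0 <= s <= 1.0 for s in scores), "sub-scores must be in [0, 1]"
    return sum(w * s for w, s in zip(weights, scores))

def needs_human(score: float, threshold: float = 0.7) -> bool:
    """Route to a human agent (with the AI draft attached) below the cutoff."""
    return score < threshold
```

Weighting grounding and retrieval relevance more heavily than coherence reflects the failure mode that matters most in financial support: a fluent answer that is not traceable to source documents.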
Key Results
Automated ticket resolution jumped from 12% to 73%. Average resolution time dropped from 4.2 hours to 8 minutes for automated tickets. Customer satisfaction scores increased by 22 points. The client reduced support headcount costs by 40% while handling 2x the ticket volume.
Technology Stack
Want similar results for your business?
Book a free 30-minute consultation — no pitch deck, just a conversation.
Get in Touch →