TalkWise AI

Enterprise Knowledge Access & Support Assistant

LLM · RAG · Qdrant · Conversational AI · Python

A conversational AI platform developed for a major retail chain to address customer service and internal knowledge access needs. The company's call center handled 180,000+ monthly customer inquiries; average wait time exceeded 8 minutes and first-contact resolution stood at just 52%. Internal teams also struggled to access 12,000+ documents scattered across 4 different knowledge bases.

Our system, built on a RAG (Retrieval-Augmented Generation) architecture, vectorizes all of the company's knowledge sources (product catalogs, return policies, technical documentation, FAQs) and indexes them in a Qdrant vector database. When a user query arrives, the most relevant document chunks are retrieved via semantic search and supplied as context to the LLM. Source references are appended automatically during response generation, so every answer can be traced back to a document. The system serves both customer channels (web, mobile) and an internal Slack integration.
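The retrieve-then-generate flow can be sketched as follows. This is a minimal, self-contained illustration: the toy in-memory index stands in for the Qdrant collection, and the example vectors, document names, and prompt template are assumptions, not the production implementation.

```python
import math

# Toy index standing in for the Qdrant collection; vectors, chunk texts,
# and source filenames below are illustrative assumptions.
INDEX = [
    {"vector": [0.9, 0.1, 0.0], "text": "Returns are accepted within 30 days.", "source": "return-policy.pdf"},
    {"vector": [0.1, 0.8, 0.1], "text": "Model X-200 supports fast charging.", "source": "product-catalog.pdf"},
    {"vector": [0.0, 0.2, 0.9], "text": "Reset the router by holding the button 10s.", "source": "tech-docs.pdf"},
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vector, top_k=2):
    """Semantic search: rank chunks by similarity to the query embedding,
    as the vector database would, and return the top-k."""
    ranked = sorted(INDEX, key=lambda d: cosine(query_vector, d["vector"]), reverse=True)
    return ranked[:top_k]

def build_prompt(question, chunks):
    """Assemble the LLM context; each chunk carries its source reference
    so the generated answer can cite it."""
    context = "\n".join(f"[{c['source']}] {c['text']}" for c in chunks)
    return (
        "Answer using only the context below and cite your sources.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

chunks = retrieve([0.85, 0.15, 0.0])  # embedding of a returns-related question
prompt = build_prompt("What is the return window?", chunks)
```

In production the `retrieve` step would be a query against the Qdrant collection and the query vector would come from an embedding model, but the ranking and source-tagged context assembly follow the same shape.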

System Architecture

[Architecture diagram] Layers: Interface → Understanding → Dialog → Knowledge. Flow: Chat Widget → WebSocket API → NLU Engine → Intent Router → Dialog Manager → RAG Pipeline → Response Gen, backed by the Knowledge Base, with responses streamed to the client and conversations logged to Chat History.
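The Intent Router step in the diagram can be sketched as a simple dispatch table. The intent names and handlers here are illustrative assumptions, not the production taxonomy.

```python
# Hypothetical dialog handlers; in the real system these would invoke
# the Dialog Manager's flows rather than return strings.
def handle_faq(query: str) -> str:
    return f"faq:{query}"

def handle_order(query: str) -> str:
    return f"order:{query}"

def handle_escalation(query: str) -> str:
    return f"escalate:{query}"

# Map classified intents to handlers; unknown intents fall back to the
# RAG-backed FAQ path (an assumed default, for illustration).
ROUTES = {
    "faq": handle_faq,
    "order_status": handle_order,
    "human_agent": handle_escalation,
}

def route(intent: str, query: str) -> str:
    """Dispatch a classified intent to its dialog handler."""
    handler = ROUTES.get(intent, handle_faq)
    return handler(query)
```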

Highlights

  • RAG architecture: Qdrant vector DB + semantic search
  • Automated knowledge retrieval from 12,000+ documents
  • Source-referenced response generation (hallucination control)
  • Multi-channel support (web widget, mobile SDK, Slack)
  • Automatic support ticket creation (Zendesk integration)
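For the ticket-creation highlight, escalated conversations are turned into Zendesk tickets. A sketch of the payload construction is below; the field names follow Zendesk's documented Tickets API schema (`POST /api/v2/tickets.json`), while the subject template and tags are illustrative assumptions.

```python
import json

def build_zendesk_ticket(question: str, transcript: str, priority: str = "normal") -> dict:
    """Build the request body for Zendesk's Tickets API.

    The "ticket", "subject", "comment"/"body", "priority", and "tags"
    keys match Zendesk's schema; subject format and tag names here are
    illustrative assumptions.
    """
    return {
        "ticket": {
            "subject": f"Escalated chat: {question[:60]}",
            "comment": {"body": transcript},
            "priority": priority,
            "tags": ["talkwise", "auto-escalation"],
        }
    }

payload = build_zendesk_ticket("Where is my refund?", "User: Where is my refund?\nBot: ...")
body = json.dumps(payload)  # would be POSTed with an API token over HTTPS
```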

Results

First-contact resolution improved from 52% to 78%
Call center volume reduced by 41% via self-service
Average response time of 2.3 seconds
Internal team knowledge access time cut by 68%