Context
A regional property and casualty carrier processing 45,000 claims per year. Claims intake was entirely manual: adjusters spent 40% of their time extracting data from documents before they could begin actual claims work.
The operations team needed to reduce intake time without adding headcount. Previous RPA attempts had failed because document formats varied too widely for rule-based extraction.
What we built
An LLM-powered claims intake pipeline:
- Document extraction agent: processes first notice of loss (FNOL) submissions from web forms, emails, faxed documents, and call transcripts
- Policy verification module: cross-references extracted data against policy records in Guidewire
- Severity scoring: initial triage based on damage type, estimated amount, and claim history
- Smart routing: simple claims to auto-adjudication queue, complex claims to specialist adjusters with pre-populated files
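The routing step at the end of the pipeline can be sketched roughly as below. This is an illustrative sketch, not the production logic: the names (`IntakeResult`, `route`), the severity scale, and the thresholds are assumptions chosen for the example, and the real values were tuned during the pilot.

```python
from dataclasses import dataclass

# Illustrative thresholds -- assumed values, not the production configuration.
SEVERITY_AUTO_MAX = 3   # severity scores at or below this may auto-adjudicate
CONFIDENCE_MIN = 0.90   # extraction confidence required to skip human review

@dataclass
class IntakeResult:
    claim_id: str
    line_of_business: str        # "auto" or "property" in phase 1
    severity: int                # 1 (minor) .. 5 (catastrophic), assumed scale
    extraction_confidence: float
    policy_verified: bool        # result of the read-only Guidewire lookup

def route(result: IntakeResult) -> str:
    """Decide which queue a newly extracted claim lands in."""
    if not result.policy_verified:
        return "manual-review"       # no policy match: a human must look
    if result.extraction_confidence < CONFIDENCE_MIN:
        return "manual-review"       # low-confidence extraction gets reviewed
    if result.severity <= SEVERITY_AUTO_MAX:
        return "auto-adjudication"   # simple claim, straight through
    return "specialist"              # complex claim, routed with a pre-populated file

claim = IntakeResult("CLM-1042", "auto", severity=2,
                     extraction_confidence=0.97, policy_verified=True)
print(route(claim))  # auto-adjudication
```

The design choice worth noting is that verification and confidence gates run before the severity check, so nothing reaches the auto-adjudication queue without a confirmed policy and a high-confidence extraction.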
What we traded off
- Accuracy vs. speed: We tuned for 95% extraction accuracy with human review on low-confidence fields, rather than aiming for 99%, which would have required roughly 3x the development time
- Scope: Started with auto and property claims only; workers' comp and specialty lines were added in phase 2
- Integration depth: Read-only Guidewire integration first. Write-back (auto-updating claim records) came in month 4
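The accuracy-vs.-speed trade-off above hinges on field-level confidence: instead of forcing every field to 99% accuracy, fields the extractor is unsure about are queued for a human. A minimal sketch of that gate follows; the function name, data shape, and the 0.90 threshold are assumptions for illustration.

```python
# Assumed review threshold -- the production cutoff was tuned against
# pilot accuracy data, not this value.
REVIEW_THRESHOLD = 0.90

def fields_needing_review(extracted: dict[str, tuple[str, float]]) -> list[str]:
    """Return names of extracted fields whose confidence falls below threshold.

    `extracted` maps a field name to an (extracted value, confidence) pair.
    """
    return [name for name, (_value, conf) in extracted.items()
            if conf < REVIEW_THRESHOLD]

extraction = {
    "policy_number": ("PA-88412", 0.99),
    "loss_date":     ("2023-06-14", 0.97),
    "damage_amount": ("$4,250", 0.72),   # e.g. a smudged fax -> low confidence
}
print(fields_needing_review(extraction))  # ['damage_amount']
```

A reviewer then corrects only the flagged fields rather than re-keying the whole document, which is what keeps the human-in-the-loop cost far below fully manual intake.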
Results
- Claims intake time: 45 minutes → 12 minutes (a 73% reduction)
- Adjuster productivity: up 35% (starting with complete files instead of raw documents)
- Processing cost per claim: down 40%
- Zero additional headcount required despite a 15% volume increase
Timeline
- Weeks 1-3: Data audit, pipeline architecture, Guidewire API integration
- Weeks 4-8: Agent development, extraction tuning, routing logic
- Weeks 9-12: Pilot with 2 adjusters, accuracy monitoring, edge case handling
- Month 4: Full rollout across all auto and property claims