AI Business Analyst Agents for BFSI Software Firm
Project Overview
A global financial technology provider serving FCA-regulated
institutions approached us to augment their business analysis function
with AI. Their analysts were overwhelmed with repeatable, high-stakes
tasks: regulatory mapping against PSD2 and Open Banking requirements,
compliance diff checks for ISO 20022 migration, and vendor migration
documentation requiring full audit trails. These tasks consumed up to
40% of their time, were prone to human error, and were spread across
siloed tools such as Excel, JIRA, and Confluence. Our mandate was to
design self-hosted / on-prem AI agents that could streamline
analytical workflows while maintaining complete data sovereignty and
the traceability required by financial regulators.
Key Challenges
- Regulatory Accountability: Every AI-assisted decision needed documented reasoning for potential regulatory review, with clear audit trails linking outputs to source documents and human approvals.
- Data Classification Complexity: Analysts worked with mixed-sensitivity data including PCI-scoped card scheme specifications, proprietary trading logic, and client PII requiring granular access controls.
- Fragmented Toolchain: Critical knowledge was scattered across JIRA, Confluence, internal wikis, and legacy documentation systems with inconsistent metadata and access patterns.
- Explainability Requirements: FCA SYSC 8 outsourcing rules meant any AI system needed clear human oversight, explainable outputs, and documented escalation procedures.
Our Solution
- Domain-Specialised Agentic Crew: Developed five specialised AI agents covering payment workflows (SWIFT, SEPA), card scheme compliance (Visa/Mastercard mandate tracking), ISO 20022 migration analysis, regulatory change management, and documentation generation. Each agent operates within defined boundaries with explicit capability declarations.
- State Machine Orchestration: Built a structured, graph-based agent workflow with explicit decision checkpoints that record reasoning, enable human-in-the-loop approval for sensitive actions, and provide complete traceability for audits.
- Intelligent Model Routing: Developed an adaptive model-routing layer that dynamically chooses the best LLM runtime for each task. High-sensitivity queries are confined to self-hosted / on-prem deployments, and critical decisions are verified using cross-model output triangulation.
- Domain Fine-Tuning with Provenance: Fine-tuned models using 50,000+ annotated examples from internal documentation, regulatory filings, and expert-labelled project histories. Training data lineage is fully documented under version control.
- Native Tooling Integration: Agents operate within analysts' existing workflows via JIRA (ticket creation/updates with approval chains), Confluence (documentation generation with diff tracking), Slack (real-time assistance with conversation logging), and internal IDE extensions.
- Zero-Trust Security Architecture: Workspace-level data isolation ensures project boundaries are never crossed: private vector stores with namespace separation, API-level RBAC, encrypted embeddings (AES-256), and TLS 1.3 for all inter-service communication.
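The checkpointing and approval-gate pattern behind the orchestration layer can be sketched in a few lines. This is a minimal illustration under assumed names (`AgentGraph`, `WorkflowState` are invented for the example), not the production orchestrator:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Checkpoint:
    node: str
    reasoning: str
    output: str
    approved_by: Optional[str] = None

@dataclass
class WorkflowState:
    data: dict
    checkpoints: list = field(default_factory=list)

class AgentGraph:
    """Minimal graph workflow: every node records a checkpoint with its
    reasoning, and sensitive nodes block unless a human approver is named."""

    def __init__(self):
        self.nodes: dict = {}   # name -> (fn, requires_approval)
        self.edges: dict = {}   # name -> next node name (or None)

    def add_node(self, name: str, fn: Callable, requires_approval: bool = False):
        self.nodes[name] = (fn, requires_approval)

    def add_edge(self, src: str, dst: str):
        self.edges[src] = dst

    def run(self, start: str, state: WorkflowState, approver: Optional[str] = None):
        current = start
        while current:
            fn, needs_approval = self.nodes[current]
            reasoning, output = fn(state)
            approved_by = None
            if needs_approval:
                # Human-in-the-loop gate: refuse to proceed without an approver.
                if approver is None:
                    raise PermissionError(f"{current} requires human approval")
                approved_by = approver
            state.checkpoints.append(Checkpoint(current, reasoning, output, approved_by))
            current = self.edges.get(current)
        return state
```

Replaying `state.checkpoints` is what makes audit reconstruction possible: each entry pairs an output with the reasoning and approval that produced it.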
Security & Compliance Architecture
Self-hosted / on-prem deployment provides inherent compliance
advantages for regulated industries, eliminating cross-border transfer
risks and ensuring complete data sovereignty. Given the regulated
nature of the client's operations, security and compliance were
foundational requirements, not afterthoughts:
- Audit Trail Implementation: Every agent action generates an immutable log entry including timestamp, user context, input data hash, model used, reasoning chain, output, and any human approvals. Logs are stored with tamper-evident checksums and 7-year retention aligned to FCA record-keeping requirements.
- Data Classification Enforcement: Automatic classification of ingested documents (Public, Internal, Confidential, Restricted) with policy-driven access controls. PII detection pipelines flag and optionally redact sensitive data before model processing.
- Human-in-the-Loop Governance: High-stakes decisions (regulatory submissions, client-facing documentation, compliance attestations) require explicit human approval via workflow gates. Approval chains are configurable by document type and sensitivity level.
- Model Governance Framework: Version-controlled model deployments with staged rollouts (canary → 10% → 50% → 100%). Automated evaluation suites run on each deployment measuring accuracy, hallucination rate, and latency. Rollback is automated on regression detection.
- Access Control: OIDC integration with the client's Azure AD. Role-based permissions (Analyst, Senior Analyst, Compliance Officer, Admin) with least-privilege defaults. All access logged and reviewable.
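A tamper-evident audit trail of this shape can be illustrated with a simple hash chain: each entry's checksum commits to its own fields plus the previous entry's checksum, so an edit anywhere breaks every checksum after it. The field names follow the list above; the class itself is a sketch, not the deployed implementation:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only audit trail. Each entry stores the previous entry's
    checksum, forming a chain that any retroactive edit will break."""

    def __init__(self):
        self.entries = []

    @staticmethod
    def _checksum(body: dict) -> str:
        # Canonical serialisation so the hash is stable across runs.
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def record(self, user, input_data, model, reasoning, output, approved_by=None):
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "input_hash": hashlib.sha256(input_data.encode()).hexdigest(),
            "model": model,
            "reasoning": reasoning,
            "output": output,
            "approved_by": approved_by,
            "prev_checksum": self.entries[-1]["checksum"] if self.entries else "genesis",
        }
        entry = dict(body, checksum=self._checksum(body))
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every checksum and chain link; False on any tampering."""
        prev = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "checksum"}
            if entry["prev_checksum"] != prev or entry["checksum"] != self._checksum(body):
                return False
            prev = entry["checksum"]
        return True
```

In production the same idea is typically backed by write-once storage so the chain itself cannot be silently regenerated.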
Technologies Used
The solution stack was designed for security, scalability, and
regulatory compliance:
- Agentic Framework: Lightweight orchestration layer with minimal overhead and OpenAI-compatible APIs, enabling seamless future integration with different tools and services. It provides persistent state management, conditional routing, and built-in checkpointing for audit reconstruction, with custom nodes handling approval workflows and human-in-the-loop escalation.
- FastAPI + Pydantic: Type-safe API layer with automatic OpenAPI documentation, request validation, and structured error handling. JWT authentication with OIDC integration.
- PostgreSQL 17 + pgvectorscale: Hybrid retrieval combining structured metadata queries with vector similarity search. Row-level security policies enforce workspace isolation at the database level.
- Observability: OpenTelemetry instrumentation with Jaeger for distributed tracing. Custom metrics for model latency (p50: 1.2s, p99: 4.8s), token usage, and approval queue depth. Alerting via PagerDuty integration.
- vLLM: High-performance inference engine for self-hosted LLM deployment, optimised for low-latency serving with continuous batching and efficient memory management across GPU resources.
- Infrastructure: Kubernetes with namespace isolation per environment. Secrets management via Azure Key Vault. All traffic encrypted with mutual TLS within the cluster.
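Because every runtime in the stack speaks an OpenAI-compatible API, the model-routing layer reduces to a sensitivity gate in front of interchangeable endpoints. The endpoint names and URLs below are illustrative assumptions, not the client's actual topology:

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Hypothetical endpoints; any OpenAI-compatible runtime (e.g. vLLM) fits here.
ENDPOINTS = {
    "onprem-vllm": {
        "max_sensitivity": Sensitivity.RESTRICTED,
        "base_url": "https://llm.internal:8000/v1",
    },
    "cloud-general": {
        "max_sensitivity": Sensitivity.PUBLIC,
        "base_url": "https://api.example.com/v1",
    },
}

def route(sensitivity: Sensitivity, prefer_cheap: bool = True) -> str:
    """Pick an endpoint cleared for this data classification. Anything above
    Public can only land on the self-hosted runtime."""
    allowed = [name for name, cfg in ENDPOINTS.items()
               if sensitivity.value <= cfg["max_sensitivity"].value]
    # Spill public-classified work to the cheaper endpoint when allowed;
    # everything else stays on-prem.
    if prefer_cheap and "cloud-general" in allowed:
        return "cloud-general"
    return allowed[0]
```

The gate runs after document classification, so a misrouted request fails closed: data with no cleared endpoint simply has an empty `allowed` list.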
Self-hosted / On-prem vs Cloud AI
While cloud AI providers offer rapid deployment, self-hosted / on-prem
infrastructure provides critical advantages for regulated enterprises
where compliance and data sovereignty are paramount:
- Complete Data Sovereignty: Your data never leaves your infrastructure, eliminating cross-border transfer risks, exposure to foreign government data requests, and vendor lock-in, guarantees that cloud providers cannot match.
- Regulatory Compliance by Design: FCA SYSC 8, GDPR Article 28, and other financial regulations require documented control over AI systems. Self-hosted / on-prem deployment provides inherent compliance advantages over cloud alternatives.
- Cost Optimization at Scale: For high-volume AI users processing 1-2 billion tokens monthly, self-hosted infrastructure reaches payback in 3-6 months versus ongoing cloud subscription costs.
- Enterprise-Grade Security: Direct control over encryption keys, network isolation, and access controls, without reliance on third-party security assertions.
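The payback claim is simple break-even arithmetic: capex divided by monthly savings. All dollar figures below are illustrative assumptions for a high-volume deployment, not client numbers:

```python
def payback_months(cloud_monthly_usd: float,
                   hardware_capex_usd: float,
                   selfhost_monthly_opex_usd: float):
    """Months until cumulative self-hosted cost undercuts cumulative cloud
    spend. Returns None if self-hosting never pays back."""
    monthly_saving = cloud_monthly_usd - selfhost_monthly_opex_usd
    if monthly_saving <= 0:
        return None
    return hardware_capex_usd / monthly_saving

# Illustrative: $300K/month cloud spend, $1.2M GPU cluster capex,
# $80K/month power and operations for self-hosting -> roughly 5.5 months.
months = payback_months(300_000, 1_200_000, 80_000)
```

The sensitivity is mostly to the cloud bill: at $150K/month with the same assumptions, payback stretches to over a year, which is why the 3-6 month figure applies to the 1-2 billion tokens/month tier.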
Results
The self-hosted / on-prem agentic system delivered measurable
improvements across productivity, compliance, and operational
efficiency. For enterprises spending $100K-$500K monthly on cloud AI,
the solution reaches payback within 3-6 months through reduced
manual work and compliance automation:
- 60% reduction in repeatable manual work for analysts, measured via time-tracking integration against a 3-month baseline.
- Compliance documentation preparation cut from 3 days to 4 hours, with AI-generated first drafts achieving an 80% acceptance rate after human review.
- 100% audit coverage: every AI-assisted decision now has documented reasoning, source citations, and an approval chain — satisfying internal audit requirements.
- Junior analyst onboarding time reduced by 40% through AI-guided knowledge navigation and contextual documentation retrieval.
- Model accuracy maintained at >94% task completion rate via continuous evaluation against golden datasets, with automated rollback on regression.
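A golden-dataset evaluation gate of the kind described here is a small function. The 94% floor comes from the figure above; the hallucination ceiling and result schema are assumptions for the sketch:

```python
def should_rollback(golden_results: list,
                    accuracy_floor: float = 0.94,
                    hallucination_ceiling: float = 0.02) -> bool:
    """Decide whether a new deployment regresses against the golden dataset:
    roll back if task completion drops below the floor or the hallucination
    rate exceeds the ceiling. Each result is a dict with a 'correct' flag
    and an optional 'hallucinated' flag (assumed schema)."""
    total = len(golden_results)
    accuracy = sum(1 for r in golden_results if r["correct"]) / total
    hallucination_rate = sum(1 for r in golden_results if r.get("hallucinated")) / total
    return accuracy < accuracy_floor or hallucination_rate > hallucination_ceiling
```

Wired into the staged rollout, the gate runs at each canary step, so a regression is caught on a fraction of traffic before full promotion.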
Why NodeNova
What distinguishes NodeNova from other AI engineering vendors is our
structured approach to overcoming the industry's 88% POC failure rate.
We don't deploy generic chatbots — we build self-hosted / on-prem
agent systems that reflect real operational workflows, accountability
structures, and compliance obligations, with clear pathways to
production success.
- Pilot-to-Production Success: Structured 60-90 day methodology addresses the industry's 88% POC failure rate with evaluation harness, security review, and production planning from day one. Clear success metrics and implementation support ensure ROI.
- Self-hosted / On-prem Compliance by Design: Complete data control eliminates cross-border transfer risks and vendor lock-in. ISO 27001-aligned practices ensure your data sovereignty for FCA-regulated operations, healthcare, and other sensitive environments.
- Domain Expertise in Regulated Industries: Our team includes engineers with direct experience in payments infrastructure, card scheme compliance, and financial services operations. We speak the language of your business and regulatory requirements.
- Production-Grade Engineering: Enterprise-grade observability, automated evaluation pipelines, model governance frameworks, and documented incident response procedures. Not prototypes — production systems that scale and maintain compliance in operation.
Long-term Impact & Evolution
We continue to support the client through an ongoing partnership as
regulatory requirements and business needs evolve:
- Regulatory Adaptation: When new PSD3 draft requirements emerged, we updated the regulatory mapping agent within 2 weeks, including new compliance checklists and diff analysis capabilities.
- Department Expansion: The AI crew framework is now being piloted in risk management (scenario analysis) and customer onboarding (KYC document processing), leveraging the same security and audit infrastructure.
- Continuous Improvement: Monthly model performance reviews, quarterly security assessments, and ongoing fine-tuning based on analyst feedback ensure the system improves over time.
- Knowledge Compounding: Each interaction enriches the domain knowledge base. The system now contains 200,000+ indexed documents with expert-validated annotations, creating a durable institutional asset.