How to Build a Policy Q&A Agent Using CrewAI in TypeScript for Fintech

By Cyprian Aarons
Updated 2026-04-21
Tags: policy-q-a, crewai, typescript, fintech, policy-qanda

A policy Q&A agent for fintech answers employee or customer questions against approved policy sources: KYC, AML, card disputes, fee schedules, lending rules, and internal SOPs. It matters because bad answers create compliance risk, inconsistent customer treatment, and audit gaps. The agent should not “chat”; it should retrieve grounded policy text, answer with citations, and refuse when the policy is missing or ambiguous.

Architecture

  • Policy corpus

    • Source documents from approved internal systems: PDFs, Confluence exports, SharePoint, or a versioned object store.
    • Split by domain: onboarding, fraud, disputes, lending, collections, privacy.
  • Retriever tool

    • A search layer over embeddings or keyword index.
    • Returns only policy snippets with document IDs, effective dates, and jurisdiction tags.
  • CrewAI agent

    • A single policy-answering agent with strict instructions.
    • Uses tools only; no free-form guessing.
  • Task orchestration

    • One task to answer the user question.
    • Optional second task to verify citations and compliance constraints before responding.
  • Audit logger

    • Stores question, retrieved sources, final answer, model version, and timestamp.
    • Required for financial services review and incident response.
  • Guardrail layer

    • Blocks requests involving legal advice, regulated decisions, or missing policy coverage.
    • Escalates to human review when confidence is low.
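The guardrail layer can start as a simple pre-check that runs before the agent is ever invoked. A minimal sketch in TypeScript, where the blocked-pattern list and the `GuardrailDecision` shape are illustrative assumptions, not part of any CrewAI API:

```typescript
// Hypothetical guardrail pre-check: runs before the agent is invoked.
type GuardrailDecision =
  | { allowed: true }
  | { allowed: false; reason: string; escalate: boolean };

// Illustrative patterns; a real deployment would use a reviewed, versioned list.
const BLOCKED_PATTERNS = [
  /legal advice/i,
  /should we deny this loan/i, // regulated credit decision
  /regulatory interpretation/i,
];

export function checkGuardrails(question: string): GuardrailDecision {
  for (const pattern of BLOCKED_PATTERNS) {
    if (pattern.test(question)) {
      return {
        allowed: false,
        reason: `Blocked: matched ${pattern}`,
        escalate: true,
      };
    }
  }
  return { allowed: true };
}
```

Running this check outside the model keeps the block deterministic and auditable, instead of hoping the prompt holds.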

Implementation

1) Install CrewAI and define your data contract

For TypeScript projects, keep the agent thin and push policy retrieval into a typed service. Your response shape should include citations so downstream systems can log exactly what was used.

export type PolicySnippet = {
  id: string;
  title: string;
  jurisdiction: string;
  effectiveDate: string;
  text: string;
};

export type PolicyAnswer = {
  answer: string;
  citations: Array<{
    id: string;
    title: string;
    effectiveDate: string;
  }>;
  needsHumanReview: boolean;
};
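Because the model ultimately returns free text, it helps to validate the parsed JSON against the `PolicyAnswer` contract at runtime before logging or serving it. A minimal hand-rolled type guard, sketched here with the type inlined for self-containment (a schema library such as Zod would also work):

```typescript
// Mirrors the PolicyAnswer contract defined above.
type PolicyAnswer = {
  answer: string;
  citations: Array<{ id: string; title: string; effectiveDate: string }>;
  needsHumanReview: boolean;
};

// Runtime check so malformed model output never reaches downstream systems.
export function isPolicyAnswer(value: unknown): value is PolicyAnswer {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.answer === "string" &&
    typeof v.needsHumanReview === "boolean" &&
    Array.isArray(v.citations) &&
    v.citations.every(
      (c) =>
        typeof c === "object" &&
        c !== null &&
        typeof (c as { id?: unknown }).id === "string" &&
        typeof (c as { title?: unknown }).title === "string" &&
        typeof (c as { effectiveDate?: unknown }).effectiveDate === "string",
    )
  );
}
```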

2) Build a retriever tool that returns grounded snippets

CrewAI agents work best when you give them a small number of deterministic tools. In fintech, that means the tool should return policy excerpts with metadata instead of raw document blobs.

import { Tool } from "crewai";
import type { PolicySnippet } from "./types.js";

const POLICY_INDEX = [
  {
    id: "aml-001",
    title: "AML Customer Due Diligence Standard",
    jurisdiction: "US",
    effectiveDate: "2025-01-10",
    text: "Enhanced due diligence is required for high-risk customers...",
  },
] satisfies PolicySnippet[];

export const searchPolicyTool = new Tool({
  name: "search_policy",
  description:
    "Search approved fintech policy sources and return relevant snippets with citations.",
  func: async (query: string) => {
    const q = query.toLowerCase();
    const matches = POLICY_INDEX.filter(
      (doc) =>
        doc.title.toLowerCase().includes(q) || doc.text.toLowerCase().includes(q),
    ).slice(0, 3);

    return JSON.stringify(matches);
  },
});
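Since jurisdiction tags are already part of the snippet metadata, retrieval can filter on them before matching. A sketch of that filter as a plain function, independent of any CrewAI types; the in-memory index mirrors the `POLICY_INDEX` example above, while a real deployment would query a vector or keyword store:

```typescript
type PolicySnippet = {
  id: string;
  title: string;
  jurisdiction: string;
  effectiveDate: string;
  text: string;
};

// Filter by jurisdiction first so an EU query never sees US-only policy.
export function searchPolicy(
  index: PolicySnippet[],
  query: string,
  jurisdiction: string,
  limit = 3,
): PolicySnippet[] {
  const q = query.toLowerCase();
  return index
    .filter((doc) => doc.jurisdiction === jurisdiction)
    .filter(
      (doc) =>
        doc.title.toLowerCase().includes(q) ||
        doc.text.toLowerCase().includes(q),
    )
    .slice(0, limit);
}
```

Passing jurisdiction as an explicit parameter, rather than letting the model choose, keeps the boundary enforceable in code.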

3) Create the agent and task with strict instructions

The important part is the system behavior. The agent must answer only from retrieved snippets and must flag anything outside policy scope.

import { Agent } from "crewai";
import { Task } from "crewai";
import { searchPolicyTool } from "./tools.js";

export const policyAgent = new Agent({
  role: "Fintech Policy Analyst",
  goal:
    "Answer questions using approved fintech policies only. Cite sources and escalate uncertainty.",
  backstory:
    "You work in a regulated financial institution. You never invent policy. You prefer exact quotes over summaries.",
  tools: [searchPolicyTool],
});

export function buildPolicyTask(question: string) {
  return new Task({
    description: `
      Answer this policy question using only approved sources:
      "${question}"

      Requirements:
      - Use retrieved snippets only
      - Include citations
      - If the source does not clearly answer the question, say so and set needsHumanReview=true
      - Do not provide legal advice
    `,
    expectedOutput:
      '{"answer":"string","citations":[{"id":"string","title":"string","effectiveDate":"string"}],"needsHumanReview":true}',
    agent: policyAgent,
  });
}

4) Run the crew and normalize the output for your API

Use Crew to execute the task. In production you would wrap this behind an HTTP endpoint and persist the result for audit.

import { Crew } from "crewai";
import { buildPolicyTask } from "./task.js";
import { policyAgent } from "./agent.js";

export async function answerPolicyQuestion(question: string) {
  const crew = new Crew({
    agents: [policyAgent],
    tasks: [buildPolicyTask(question)],
    verbose: true,
  });

  const result = await crew.kickoff();
  
  return {
    raw: String(result),
    // Parse this in your app layer after validating JSON shape
  };
}

A practical pattern is to force structured output in your application layer after kickoff(). If parsing fails or citations are missing, route to human review instead of retrying blindly.
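That fallback can be a small pure function: parse the raw crew output, and if it is not valid JSON or carries no citations, return a fixed escalation answer instead of retrying. A sketch, where the `PolicyAnswer` shape matches the data contract above and the escalation wording is illustrative:

```typescript
type PolicyAnswer = {
  answer: string;
  citations: Array<{ id: string; title: string; effectiveDate: string }>;
  needsHumanReview: boolean;
};

const ESCALATION: PolicyAnswer = {
  answer:
    "I could not verify this against an approved policy source. Routing to human review.",
  citations: [],
  needsHumanReview: true,
};

// Parse the raw model output; escalate on any malformed or uncited answer.
export function normalizeAnswer(raw: string): PolicyAnswer {
  try {
    const parsed = JSON.parse(raw);
    if (
      typeof parsed.answer === "string" &&
      Array.isArray(parsed.citations) &&
      parsed.citations.length > 0
    ) {
      return {
        answer: parsed.answer,
        citations: parsed.citations,
        needsHumanReview: Boolean(parsed.needsHumanReview),
      };
    }
  } catch {
    // Invalid JSON: fall through to escalation.
  }
  return ESCALATION;
}
```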

Production Considerations

  • Deployment

    • Keep the retrieval index in-region if you have data residency requirements.
    • Separate EU/UK/US corpora so a request never crosses jurisdiction boundaries accidentally.
  • Monitoring

    • Log every question with source document IDs and model version.
    • Track refusal rate, escalation rate, and citation coverage as core SLOs.
  • Guardrails

    • Block questions that ask for legal interpretation or regulatory advice beyond internal policy.
    • Require exact citation output before sending any answer to users or staff.
  • Access control

    • Restrict sensitive policies by role.
    • A support agent should not see underwriting exceptions if they only need dispute handling guidance.
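Role scoping is easiest to enforce in the retrieval layer, so restricted snippets never reach the model at all. A minimal sketch; the role-to-domain map and the `domain` field are illustrative assumptions:

```typescript
type ScopedSnippet = { id: string; domain: string; text: string };

// Hypothetical mapping from staff role to the policy domains they may read.
const ROLE_DOMAINS: Record<string, string[]> = {
  support: ["disputes", "fees"],
  underwriting: ["lending", "disputes", "fees"],
};

// Drop any snippet outside the caller's allowed domains before prompting.
export function filterByRole(
  snippets: ScopedSnippet[],
  role: string,
): ScopedSnippet[] {
  const allowed = new Set(ROLE_DOMAINS[role] ?? []);
  return snippets.filter((s) => allowed.has(s.domain));
}
```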

Common Pitfalls

  • Letting the model answer from memory

    This is how you get hallucinated policies. Force retrieval first and reject answers without citations.

  • Mixing jurisdictions in one index

    A US AML rule can be wrong for an EU branch. Partition by country or regulatory regime and pass jurisdiction into retrieval.

  • Skipping audit fields

    If you cannot reconstruct which policy text produced an answer, you do not have a fintech-ready system. Store query text, snippets returned, final response, timestamps, and reviewer actions.
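The audit fields above map naturally onto a single record written at answer time. A sketch of the shape and a builder; field names are illustrative, not a prescribed schema:

```typescript
type AuditRecord = {
  question: string;
  snippetIds: string[]; // IDs of the policy snippets that were retrieved
  answer: string;
  modelVersion: string;
  needsHumanReview: boolean;
  timestamp: string; // ISO 8601
};

// Build an immutable audit record at the moment an answer is produced.
export function buildAuditRecord(params: {
  question: string;
  snippetIds: string[];
  answer: string;
  modelVersion: string;
  needsHumanReview: boolean;
}): AuditRecord {
  return { ...params, timestamp: new Date().toISOString() };
}
```

Persisting this record alongside the response is what makes the answer reconstructable during a compliance review.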


By Cyprian Aarons, AI Consultant at Topiax.