How to Build a Policy Q&A Agent for Payments Using CrewAI in TypeScript
A policy Q&A agent for payments answers operational questions like “Can we refund this card after settlement?” or “Is this merchant category allowed in region X?” It matters because payment teams need fast, consistent answers that stay inside compliance boundaries, reduce escalations, and leave an audit trail for every decision.
Architecture
- Policy knowledge source
  - Your source of truth: internal policy docs, scheme rules, SOPs, and exception playbooks.
  - Keep it versioned so answers can cite the exact policy revision.
- Retriever tool
  - Pulls only the relevant policy snippets for a question.
  - For payments, this should filter by region, product line, and transaction type.
- Policy analyst agent
  - Interprets the question against retrieved policy text.
  - Produces a concise answer with citations and a confidence level.
- Compliance reviewer agent
  - Checks whether the draft answer violates restricted language or exposes unsupported advice.
  - Useful for high-risk topics like chargebacks, AML/KYC, refunds, and sanctions.
- Orchestrator
  - Coordinates the agents in sequence using CrewAI.
  - Enforces a deterministic flow: retrieve → analyze → review → answer.
- Audit logger
  - Stores the user question, retrieved docs, final answer, model version, and timestamps.
  - This is non-negotiable in payments.
Implementation
1) Install dependencies and define your policy document shape
Use crewai plus a lightweight retriever layer. In production you’ll likely back this with Postgres + pgvector or a managed search service.
npm install crewai zod dotenv
Define a strict payload shape so the agent cannot drift into free-form output.
// src/types.ts
import { z } from "zod";

export const PolicyQuestionSchema = z.object({
  question: z.string().min(5),
  region: z.string().min(2),
  product: z.string().min(2),
});

export type PolicyQuestion = z.infer<typeof PolicyQuestionSchema>;

export const PolicyAnswerSchema = z.object({
  answer: z.string(),
  citations: z.array(z.string()),
  riskLevel: z.enum(["low", "medium", "high"]),
});

export type PolicyAnswer = z.infer<typeof PolicyAnswerSchema>;
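As a quick sanity check (not part of the project files), a malformed payload fails before any model call:

// PolicyQuestionSchema.parse throws a ZodError on bad input,
// so malformed payloads never reach the crew.
PolicyQuestionSchema.parse({
  question: "Can we refund a settled transaction?",
  region: "US",
  product: "card_present",
}); // ok
PolicyQuestionSchema.parse({ question: "Can we refund?" }); // throws: region and product missing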
2) Create the CrewAI agents and tasks
CrewAI’s TypeScript port mirrors the Python library’s pattern: define an Agent, a Task, and a Crew, then kick off the crew. The key is to keep each agent’s remit narrow.
// src/crew.ts
import { Agent, Task, Crew } from "crewai";
import { PolicyQuestionSchema } from "./types";

// Two narrow agents: one drafts from policy context, one reviews for compliance.
const policyAnalyst = new Agent({
  name: "Policy Analyst",
  role: "Payments Policy Analyst",
  goal: "Answer payment policy questions using only approved internal policy context.",
  backstory:
    "You work on payment operations and compliance. You never invent policy and you always cite sources.",
});

const complianceReviewer = new Agent({
  name: "Compliance Reviewer",
  role: "Payments Compliance Reviewer",
  goal: "Reject answers that are vague, unsupported, or risky for regulated payment operations.",
  backstory:
    "You review responses for PCI DSS, AML/KYC, sanctions, refund rules, chargeback handling, and auditability.",
});

export async function runPolicyQa(questionInput: unknown) {
  // Validate the payload before any model call.
  const input = PolicyQuestionSchema.parse(questionInput);

  const analysisTask = new Task({
    description: `
      Answer this payments policy question:
      Question: ${input.question}
      Region: ${input.region}
      Product: ${input.product}

      Use only the provided policy context. Return:
      - direct answer
      - citations
      - risk level
    `,
    expectedOutput: "A structured answer with citations and a risk level.",
    agent: policyAnalyst,
    outputKey: "draftAnswer",
  });

  const reviewTask = new Task({
    description: `
      Review the draft answer for compliance issues.
      Check for unsupported claims, missing citations, ambiguity,
      and any language that could be unsafe in payments operations.
      If needed, rewrite it to be safer.
    `,
    expectedOutput: "A compliant final answer with citations and risk level.",
    agent: complianceReviewer,
    context: [analysisTask], // the reviewer sees the analyst's draft
    outputKey: "finalAnswer",
  });

  const crew = new Crew({
    agents: [policyAnalyst, complianceReviewer],
    tasks: [analysisTask, reviewTask], // runs sequentially: analyze, then review
    verbose: true,
  });

  return await crew.kickoff();
}
3) Add retrieval before the crew runs
The actual value comes from grounding the model in your policies. Don’t let the LLM answer from memory; pass only approved snippets into the task context.
// src/retriever.ts
type PolicyDoc = {
  id: string;
  title: string;
  region: string;
  product: string;
  text: string;
};

// Toy in-memory corpus; production retrieval would query an indexed store.
const POLICY_DOCS: PolicyDoc[] = [
  {
    id: "refunds-us-card-present-v3",
    title: "Refunds for Card-Present Payments",
    region: "US",
    product: "card_present",
    text:
      "Refunds after settlement are allowed up to T+90 days if original authorization exists. Partial refunds require merchant approval.",
  },
];

export function retrievePolicyContext(question: string, region?: string, product?: string) {
  // Keyword matching is a stand-in for real semantic search; it keeps the
  // example self-contained while preserving the region/product filters.
  return POLICY_DOCS.filter((doc) => {
    const matchesRegion = !region || doc.region === region;
    const matchesProduct = !product || doc.product === product;
    return matchesRegion && matchesProduct && question.toLowerCase().includes("refund");
  });
}
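If you back retrieval with Postgres + pgvector, as suggested in step 1, the filter becomes a vector query. Below is a minimal sketch under assumptions: a policy_docs table with an embedding vector column, and an embedding function you supply. The table, column, and helper names are illustrative, not a fixed schema.

// src/retriever-pg.ts
// Hypothetical pgvector-backed retriever. Assumes a policy_docs table
// (id, title, text, region, product, embedding vector(1536)) — adjust to
// your actual schema and embedding dimension.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function retrievePolicyContextPg(
  question: string,
  region: string,
  product: string,
  embedQuery: (text: string) => Promise<number[]>, // your embedding provider
  limit = 5
) {
  const queryEmbedding = await embedQuery(question);
  // <=> is pgvector's cosine-distance operator; closest snippets come first.
  const { rows } = await pool.query(
    `SELECT id, title, text
       FROM policy_docs
      WHERE region = $1 AND product = $2
      ORDER BY embedding <=> $3::vector
      LIMIT $4`,
    [region, product, JSON.stringify(queryEmbedding), limit]
  );
  return rows as { id: string; title: string; text: string }[];
}

Either way, the retriever returns the same shape of snippets.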
Then inject those snippets into your task description:
// src/index.ts
import { runPolicyQa } from "./crew";
import { retrievePolicyContext } from "./retriever";

async function main() {
  const questionInput = {
    question: "Can we refund a settled card-present transaction?",
    region: "US",
    product: "card_present",
  };

  const docs = retrievePolicyContext(
    questionInput.question,
    questionInput.region,
    questionInput.product
  );

  // Label each snippet with its document ID so the agent can cite it.
  const contextText = docs.map((d) => `[${d.id}] ${d.title}: ${d.text}`).join("\n");

  console.log(
    await runPolicyQa({
      ...questionInput,
      question: `${questionInput.question}\n\nApproved policy context:\n${contextText}`,
    })
  );
}

main().catch(console.error);
4) Return structured output and persist an audit record
For payments workflows, plain text is not enough. Persist who asked what, which policies were used, what model answered it, and whether human review was required.
// src/audit.ts
export async function writeAuditRecord(record: {
  requestId: string;
  question: string;
  responseText: string;
  policyIds: string[];
  modelVersion: string;
  createdAt: string;
}) {
  // Stub: stdout stands in for a durable audit store.
  console.log(JSON.stringify(record));
}
In production you would write that to an immutable store or append-only log.
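As a minimal stand-in, an append-only JSONL file captures the idea; the path and helper name below are assumptions for illustration:

// src/audit-store.ts
// Append-only JSONL sketch: each record is one independently parseable line.
// Swap the file for WORM object storage or an append-only table in production.
import { appendFile } from "node:fs/promises";

const AUDIT_LOG_PATH = process.env.AUDIT_LOG_PATH ?? "./audit.jsonl";

export async function appendAuditRecord(record: object): Promise<void> {
  await appendFile(AUDIT_LOG_PATH, JSON.stringify(record) + "\n", "utf8");
}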
Production Considerations
- Compliance controls
  - Block answers on regulated topics unless relevant policies are present.
  - Require escalation paths for AML/KYC exceptions, sanctions hits, PCI scope questions, and disputes above threshold amounts.
- Auditability
  - Store prompt inputs, retrieved document IDs, final response text, model name/version, latency, and reviewer decisions.
  - If you cannot reconstruct an answer later, you do not have a payment-grade system.
- Data residency
  - Keep retrieval indexes and logs in-region when policies or customer data are subject to residency requirements.
  - Avoid sending PANs, bank account numbers, or full cardholder data into prompts.
- Guardrails
  - Use allowlisted tools only.
  - Enforce structured outputs with schema validation before returning any answer to users (see the sketch after this list).
  - Route high-risk questions to human ops when confidence is low or citations are missing.
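Here is a minimal sketch of that validation gate, reusing PolicyAnswerSchema from step 1; the topic list and return shape are illustrative assumptions, not a fixed API:

// src/guardrails.ts
// Validation gate: schema-check the candidate answer, require citations,
// and escalate high-risk topics to human ops. The topic list is illustrative.
import { PolicyAnswerSchema } from "./types";

const HIGH_RISK_TOPICS = ["sanctions", "aml", "kyc", "chargeback"];

export function gateAnswer(candidate: unknown, question: string) {
  const parsed = PolicyAnswerSchema.safeParse(candidate);

  // Reject anything that fails the schema or ships without citations.
  if (!parsed.success || parsed.data.citations.length === 0) {
    return { status: "escalate" as const, reason: "unvalidated or uncited answer" };
  }

  // High-risk topics and high-risk answers go to a human regardless.
  const q = question.toLowerCase();
  if (HIGH_RISK_TOPICS.some((t) => q.includes(t)) || parsed.data.riskLevel === "high") {
    return { status: "escalate" as const, reason: "high-risk topic or rating" };
  }

  return { status: "ok" as const, answer: parsed.data };
}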
Common Pitfalls
- Letting the model answer without retrieval
  - This creates hallucinated policy guidance.
  - Fix it by requiring approved document snippets in every task context.
- Mixing policy advice with customer-specific data
  - Payment policies are not customer case management notes.
  - Keep PII out of prompts unless absolutely required and masked first (see the redaction sketch after this list).
- Skipping versioning on policies
  - Teams will argue over which rule was active when the answer was generated.
  - Fix it by storing document IDs and revision numbers in every audit record.
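For the PII rule above, a deliberately blunt masking sketch; the helper name is illustrative, and production systems would pair this with Luhn checks or format-preserving tokenization:

// src/redact.ts
// Blunt PAN masking: replaces 13–19 digit runs (spaces/dashes allowed)
// before any text reaches a prompt.
const PAN_PATTERN = /\b\d(?:[ -]?\d){12,18}\b/g;

export function redactPans(text: string): string {
  return text.replace(PAN_PATTERN, "[REDACTED_PAN]");
}

// redactPans("Customer card 4242 4242 4242 4242 was declined")
// → "Customer card [REDACTED_PAN] was declined"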
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit: architecture templates, compliance checklists, and a 7-email deep-dive course.