How to Build a Policy Q&A Agent Using AutoGen in TypeScript for Lending
A policy Q&A agent for lending answers questions like “Can we approve this borrower under our SME policy?” or “What documents are required for a refinance above this threshold?” It matters because loan officers, underwriters, and support teams need fast, consistent answers grounded in policy text, not memory or guesswork.
Architecture
- **Policy corpus loader**
  - Pulls lending policies from approved sources: PDF manuals, SharePoint exports, Confluence pages, or versioned markdown.
  - Normalizes them into chunks with source metadata, effective dates, and jurisdiction tags.
- **Retrieval layer**
  - Uses embeddings + vector search to fetch the most relevant policy sections.
  - Filters by product type, country, and policy version before anything reaches the model.
- **AutoGen assistant agent**
  - The main `AssistantAgent` that reads retrieved policy snippets and drafts the answer.
  - Must be constrained to cite sources and refuse unsupported claims.
- **User proxy / orchestrator**
  - A `UserProxyAgent` or app-side controller that receives the question, injects retrieved context, and manages the conversation turn.
  - Handles escalation when confidence is low or the request is outside policy scope.
- **Audit logger**
  - Stores question, retrieved passages, model output, source IDs, timestamps, and user identity.
  - Required for lending compliance reviews and post-decision traceability.
- **Guardrail layer**
  - Blocks requests involving prohibited advice, missing jurisdiction context, or stale policy versions.
  - Enforces data residency and redaction rules before sending content to the model.
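The guardrail layer can be sketched as a plain pre-check that runs before any retrieval or model call. This is an illustrative outline, not AutoGen API: the `GuardrailInput` shape, the prohibited-topic patterns, and the one-year staleness window are all assumptions you would replace with your own compliance rules.

```typescript
type GuardrailInput = {
  question: string;
  jurisdiction?: string;        // must be supplied by the caller
  policyEffectiveDate?: string; // ISO date of the matched policy version
};

type GuardrailResult = { allowed: boolean; reason?: string };

// Assumption: your compliance team defines the real prohibited categories.
const PROHIBITED = [/legal advice/i, /tax advice/i];
const MAX_POLICY_AGE_DAYS = 365; // assumption: older versions count as stale

function checkGuardrails(input: GuardrailInput, now = new Date()): GuardrailResult {
  if (!input.jurisdiction) {
    return { allowed: false, reason: "missing jurisdiction context" };
  }
  if (PROHIBITED.some((p) => p.test(input.question))) {
    return { allowed: false, reason: "prohibited advice category" };
  }
  if (input.policyEffectiveDate) {
    const ageDays =
      (now.getTime() - new Date(input.policyEffectiveDate).getTime()) / 86_400_000;
    if (ageDays > MAX_POLICY_AGE_DAYS) {
      return { allowed: false, reason: "stale policy version" };
    }
  }
  return { allowed: true };
}
```

Running this before inference means a blocked request never touches the model, which keeps the refusal deterministic and auditable.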
Implementation
1) Install AutoGen and set up a TypeScript project
Use the AutoGen TypeScript package and a real LLM endpoint. For production lending workflows, keep the model behind your own API gateway so you can log requests and enforce residency controls.
npm install @autogen-ai/autogen openai dotenv
npm install -D typescript tsx @types/node
Create a .env file:
OPENAI_API_KEY=your_key
2) Build a policy retrieval function with source metadata
This example uses a simple in-memory retriever so the pattern is clear. In production, replace it with your vector store and strict filters for product line, geography, and effective date.
import "dotenv/config";
import { AssistantAgent } from "@autogen-ai/autogen";
import OpenAI from "openai";
type PolicyChunk = {
id: string;
text: string;
source: string;
jurisdiction: string;
product: string;
effectiveDate: string;
};
const policyStore: PolicyChunk[] = [
{
id: "mortgage-uk-001",
text: "For UK residential mortgages above 80% LTV, two years of verified income history are required unless an approved exception exists.",
source: "Mortgage Policy v4.2",
jurisdiction: "UK",
product: "mortgage",
effectiveDate: "2025-01-01",
},
{
id: "sme-ke-014",
text: "For SME loans in Kenya above KES 10M, audited financial statements for the last two fiscal years are mandatory.",
source: "SME Lending Policy v3.1",
jurisdiction: "KE",
product: "sme",
effectiveDate: "2025-02-15",
},
];
function retrievePolicy(question: string): PolicyChunk[] {
const q = question.toLowerCase();
return policyStore.filter((chunk) => {
return (
q.includes(chunk.product) ||
q.includes(chunk.jurisdiction.toLowerCase()) ||
q.includes("ltv") ||
q.includes("financial statements")
);
});
}
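In production, the keyword matching above should be preceded by hard metadata filters, so out-of-scope or not-yet-effective policy text can never reach the model regardless of embedding similarity. A minimal sketch of that pre-filter, repeating the `PolicyChunk` type so the snippet stands alone; the `PolicyQuery` shape is an assumption:

```typescript
// Same shape as the PolicyChunk type defined earlier in the article.
type PolicyChunk = {
  id: string;
  text: string;
  source: string;
  jurisdiction: string;
  product: string;
  effectiveDate: string;
};

type PolicyQuery = {
  product: string;
  jurisdiction: string;
  asOf: string; // answer as of this ISO date
};

// Hard filter on metadata BEFORE any similarity search runs.
function preFilter(chunks: PolicyChunk[], query: PolicyQuery): PolicyChunk[] {
  return chunks.filter(
    (c) =>
      c.product === query.product &&
      c.jurisdiction === query.jurisdiction &&
      c.effectiveDate <= query.asOf // ISO dates compare lexicographically
  );
}
```

Vector search then runs only over the surviving chunks, which is both safer and cheaper than filtering after retrieval.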
3) Create an AutoGen AssistantAgent that answers only from retrieved context
The key pattern is to pass retrieved snippets into the prompt and force citation behavior. Use AssistantAgent for generation and keep orchestration in your app code so you can add audit logging and compliance checks around it.
const policyQaAgent = new AssistantAgent({
name: "policy_qa_agent",
llmConfig: {
model: "gpt-4o-mini",
apiKey: process.env.OPENAI_API_KEY,
temperature: 0,
},
});
async function answerPolicyQuestion(question: string) {
const chunks = retrievePolicy(question);
if (chunks.length === 0) {
return {
answer:
"I could not find a matching policy section. Escalate to underwriting or compliance.",
citations: [],
confidence: "low",
};
}
const context = chunks
.map(
(c) =>
`[${c.id}] ${c.text}\nSource: ${c.source}\nJurisdiction: ${c.jurisdiction}\nProduct: ${c.product}\nEffective date: ${c.effectiveDate}`
)
.join("\n\n");
const prompt = `
You are a lending policy Q&A agent.
Answer only using the provided policy context.
If the context does not support an answer, say you cannot determine it.
Always cite chunk IDs in square brackets.
Do not give legal advice.
Question:
${question}
Policy context:
${context}
`;
const result = await policyQaAgent.generateReply([{ role: "user", content: prompt }]);
return {
answer: result.content,
citations: chunks.map((c) => c.id),
confidence: "medium",
};
}
(async () => {
  const response = await answerPolicyQuestion(
    "Can we approve an SME loan in Kenya for KES 12M without audited financial statements?"
  );
  console.log(response.answer);
  console.log("Citations:", response.citations);
})();
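Because the prompt forces chunk IDs in square brackets, the orchestrator can verify citations mechanically before returning an answer. A sketch of that check, assuming the `[chunk-id]` convention from the prompt; `extractCitations` and `citationsValid` are illustrative helpers, not part of AutoGen:

```typescript
// Pull [chunk-id] tokens out of the model's answer text.
function extractCitations(answer: string): string[] {
  return [...answer.matchAll(/\[([a-z0-9-]+)\]/gi)].map((m) => m[1]);
}

// Trust an answer only if it cites at least one chunk and every cited
// ID was actually retrieved for this question.
function citationsValid(answer: string, retrievedIds: string[]): boolean {
  const cited = extractCitations(answer);
  return cited.length > 0 && cited.every((id) => retrievedIds.includes(id));
}
```

If the check fails, treat the response like a low-confidence case and escalate rather than returning an uncited claim to a loan officer.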
4) Add audit logging and escalation hooks
For lending use cases, every answer needs a trace. Log the input question, retrieved sources, output text, user ID, request timestamp, and whether escalation was triggered.
type AuditRecord = {
  requestId: string;
  userId: string;
  question: string;
  retrievedChunkIds: string[];
  answer: string;
  timestamp: string;
  escalated: boolean;
};

async function handleRequest(userId: string, question: string): Promise<AuditRecord> {
  const result = await answerPolicyQuestion(question);
  const record: AuditRecord = {
    requestId: crypto.randomUUID(),
    userId,
    question,
    retrievedChunkIds: result.citations,
    answer: result.answer,
    timestamp: new Date().toISOString(),
    escalated: result.confidence === "low",
  };
  // Persist the record to your immutable audit store before returning.
  return record;
}
In practice, keep this layer outside the agent so you can:
- redact PII before inference,
- persist logs to an immutable store,
- block responses when policies conflict across jurisdictions,
- route low-confidence cases to underwriting ops.
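The redaction step can be sketched as a masking pass that runs before the question reaches retrieval or the model. The two patterns below are illustrative placeholders only; a real deployment would use a vetted PII detection service tuned to the name and account formats of each market it serves.

```typescript
// Minimal pattern-based redaction: masks email addresses and long digit
// runs (account/ID numbers) before the text is sent anywhere.
const REDACTIONS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"],
  [/\b\d{8,}\b/g, "[ACCOUNT_NUMBER]"],
];

function redactPII(text: string): string {
  return REDACTIONS.reduce((t, [pattern, mask]) => t.replace(pattern, mask), text);
}
```

Run this on the question before `retrievePolicy` and on any free-text context before it is embedded, so PII never enters logs or prompts.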
Production Considerations
- **Deployment**
  - Keep retrieval services and audit storage in-region to satisfy data residency requirements.
  - Pin model versions and store prompt templates in Git so policy behavior is reproducible during audits.
- **Monitoring**
  - Track escalation rate, retrieval misses, and citation coverage so silent quality drops surface quickly.
  - Alert when answers cite policy versions that have since been superseded.
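Pinning can be made verifiable by recording a hash of the exact prompt template alongside the model version, so an auditor can confirm which template produced a given answer. A sketch using Node's built-in crypto module; the `PinnedConfig` shape is an assumption:

```typescript
import { createHash } from "node:crypto";

// Reproducibility record stored with every audit log entry.
type PinnedConfig = {
  modelVersion: string;         // exact model snapshot, never "latest"
  promptTemplateSha256: string; // hash of the template text checked into Git
};

function pinConfig(modelVersion: string, promptTemplate: string): PinnedConfig {
  return {
    modelVersion,
    promptTemplateSha256: createHash("sha256").update(promptTemplate).digest("hex"),
  };
}
```

During an audit, re-hashing the template at the recorded Git commit and comparing digests proves the prompt was not changed after the fact.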
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.