How to Build a Policy Q&A Agent Using CrewAI in TypeScript for Investment Banking

By Cyprian Aarons · Updated 2026-04-21
Tags: policy-q-a · crewai · typescript · investment-banking · policy-qanda

A policy Q&A agent for investment banking answers questions about internal policies, procedures, and controls with traceable citations. It matters because bankers need fast answers on things like restricted lists, MNPI handling, approvals, and client communication rules without exposing the firm to compliance drift.

Architecture

  • Policy corpus loader

    • Pulls approved PDFs, SharePoint docs, Confluence pages, or policy DB records.
    • Normalizes content into chunks with document IDs, version numbers, and effective dates.
  • Retrieval layer

    • Uses embeddings or keyword search to fetch only relevant policy passages.
    • Must support citation metadata so every answer can point back to source text.
  • Q&A agent

    • A CrewAI Agent that reads retrieved context and produces a concise answer.
    • Should be constrained to policy interpretation, not legal advice or trading advice.
  • Compliance guardrail layer

    • Blocks answers when context is insufficient.
    • Detects requests involving MNPI, client confidentiality, or prohibited conduct.
  • Audit logging

    • Stores question, retrieved sources, final answer, model version, and timestamps.
    • Required for review by compliance and model risk teams.
  • Deployment boundary

    • Runs in a private network or approved cloud region.
    • Keeps policy data inside the firm’s data residency requirements.
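
The layers above share one answer contract: retrieval supplies citation metadata, the guardrail layer can block a response, and audit logging persists what was retrieved. A minimal sketch of that contract (the field names are illustrative, not a firm standard):

```typescript
// Illustrative answer contract tying the layers together: retrieval
// supplies citation metadata, the guardrail layer can mark a response
// as blocked, and audit logging persists the retrieved document IDs.
type Citation = { docId: string; section: string; effectiveDate: string };

type PolicyAnswer = {
  answer: string;
  citations: Citation[];
  blocked: boolean;          // set by the compliance guardrail layer
  retrievedDocIds: string[]; // persisted by audit logging
};

const example: PolicyAnswer = {
  answer: "MNPI may only be discussed on approved channels. [IB-MNPI-001]",
  citations: [{ docId: "IB-MNPI-001", section: "4.2", effectiveDate: "2025-01-15" }],
  blocked: false,
  retrievedDocIds: ["IB-MNPI-001"],
};
console.log(example.citations.length); // 1
```

Keeping this shape stable makes it easy to log every field of every answer without parsing free text.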

Implementation

  1. Install CrewAI for TypeScript and define your policy tools

    You want retrieval as an explicit tool, not hidden inside the prompt. That gives you control over source filtering, logging, and which repositories are allowed for a given business line.

import { Agent, Task, Crew } from "crewai";

type PolicyDoc = {
  id: string;
  title: string;
  section: string;
  effectiveDate: string;
  text: string;
};

const POLICY_CORPUS: PolicyDoc[] = [
  {
    id: "IB-MNPI-001",
    title: "MNPI Handling Policy",
    section: "4.2",
    effectiveDate: "2025-01-15",
    text: "Employees must not discuss material non-public information outside approved channels.",
  },
  {
    id: "IB-COMMS-014",
    title: "External Communications Policy",
    section: "2.1",
    effectiveDate: "2025-02-01",
    text: "Client-facing statements must be approved by Compliance where required.",
  },
];

function retrievePolicy(query: string): string {
  const q = query.toLowerCase();
  // Ignore short/common words so a query like "Can I ..." does not
  // match every document in the corpus.
  const terms = q.split(/\W+/).filter((term) => term.length > 3);
  const matches = POLICY_CORPUS.filter(
    (doc) =>
      `${doc.title} ${doc.section} ${doc.text}`.toLowerCase().includes(q) ||
      terms.some((term) => doc.text.toLowerCase().includes(term))
  );

  if (matches.length === 0) return "NO_RELEVANT_POLICY_FOUND";

  return matches
    .map(
      (doc) =>
        `[${doc.id}] ${doc.title} §${doc.section} (${doc.effectiveDate})\n${doc.text}`
    )
    .join("\n\n");
}
  2. Create a policy analyst agent with hard constraints

    The agent should answer only from provided context and refuse when the evidence is weak. In investment banking, that is the difference between a useful assistant and a liability.

const policyAgent = new Agent({
  role: "Investment Banking Policy Analyst",
  goal:
    "Answer internal policy questions using only approved policy context with citations.",
  backstory:
    "You support bankers and control functions. You never speculate, never invent policy language, and always cite sources.",
  verbose: true,
});

const answerTask = new Task({
  description:
    "Answer the user's policy question using only the retrieved policy excerpts. If no relevant excerpt exists, say you cannot determine the answer from current policy sources.",
  expectedOutput:
    "A cited answer drawn only from the retrieved excerpts, or an explicit statement that the sources are insufficient.",
  agent: policyAgent,
});
  3. Wire retrieval into execution and enforce citation output

    This pattern keeps the model honest. The tool returns only approved snippets; the task prompt forces a structured response with source references.

async function askPolicyQuestion(question: string) {
  const context = retrievePolicy(question);

  const task = new Task({
    description: `
You are answering an investment banking policy question.

Question:
${question}

Approved context:
${context}

Rules:
- Use only the approved context.
- If context is insufficient, say so explicitly.
- Include source IDs in your answer.
- Do not provide legal advice or speculate.
Return a short answer followed by bullet-point citations.
`,
    expectedOutput:
      "A concise policy answer with cited source IDs or an explicit inability to determine.",
    agent: policyAgent,
  });

  const crew = new Crew({
    agents: [policyAgent],
    tasks: [task],
    verbose: true,
  });

  const result = await crew.kickoff();
  return result;
}

// Example
askPolicyQuestion("Can I discuss client pipeline details on a personal device?")
  .then(console.log)
  .catch(console.error);
  4. Add guardrails before returning answers

    In banking workflows, you need refusal logic for high-risk prompts like MNPI disclosure or requests to bypass controls. Keep this outside the LLM so it stays deterministic.

const highRiskPatterns = [
  /mnpi/i,
  /inside(r)? information/i,
  /bypass compliance/i,
  /hide from audit/i,
];

function shouldRefuse(question: string): boolean {
  return highRiskPatterns.some((pattern) => pattern.test(question));
}

async function safeAsk(question: string) {
  if (shouldRefuse(question)) {
    return {
      answer:
        "Cannot assist with that request. Please route through Compliance or your line manager.",
      citations: [],
    };
  }

  // Source IDs are embedded in the answer text; keep the return shape
  // consistent with the refusal branch.
  const response = await askPolicyQuestion(question);
  return { answer: response, citations: [] };
}
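
The deterministic check can be exercised in isolation. A standalone sketch (identifiers renamed so it does not clash with the example above):

```typescript
// Standalone refusal check mirroring the guardrail patterns above.
const riskPatterns = [
  /mnpi/i,
  /inside(r)? information/i,
  /bypass compliance/i,
  /hide from audit/i,
];

const isHighRisk = (question: string): boolean =>
  riskPatterns.some((pattern) => pattern.test(question));

console.log(isHighRisk("How do I handle MNPI on a deal team?")); // true
console.log(isHighRisk("What is the gift policy limit?")); // false
```

Because this is plain pattern matching, the same inputs always refuse the same way, which is exactly what a compliance reviewer wants to see.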

Production Considerations

  • Deploy in-region

    • Keep prompts, embeddings, logs, and document stores inside approved regions to satisfy data residency rules.
    • For global banks, separate EU/UK/US corpora if policies differ by jurisdiction.
  • Log for auditability

    • Store question, retrieved document IDs, answer text, timestamp, user ID, and model version.
  • Add compliance review queues

    • Escalate unanswered or ambiguous queries to Compliance rather than letting the agent guess.
  • Lock down access

    • Use SSO/RBAC so front-office users only see policies relevant to their desk or entity.
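
The audit fields above can be captured as one structured record per question. A sketch assuming a JSON-lines sink (field names are illustrative, not a firm standard):

```typescript
// Illustrative audit record: one JSON line per question, covering the
// fields compliance and model risk teams typically review.
type AuditRecord = {
  timestamp: string;
  userId: string;
  question: string;
  retrievedDocIds: string[];
  answer: string;
  modelVersion: string;
};

function toAuditLine(record: AuditRecord): string {
  // JSON lines keep the log append-only and trivially parseable downstream.
  return JSON.stringify(record);
}

const line = toAuditLine({
  timestamp: new Date("2025-03-01T09:30:00Z").toISOString(),
  userId: "u-1234",
  question: "Who approves external client statements?",
  retrievedDocIds: ["IB-COMMS-014"],
  answer: "Compliance approval is required where the policy applies. [IB-COMMS-014]",
  modelVersion: "model-2025-02",
});
console.log(JSON.parse(line).retrievedDocIds[0]); // "IB-COMMS-014"
```

Writing one line per question makes it easy to replay any answer with the exact sources the model saw.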

Common Pitfalls

  1. Letting the model answer without retrieval

    • This turns your Q&A bot into a hallucination engine.
    • Fix it by requiring retrieved excerpts before every response.
  2. Mixing public and internal sources

    • A public web result can conflict with firm policy and create bad guidance.
    • Fix it by whitelisting only approved repositories and tagging each source with provenance.
  3. Skipping version control on policies

    • Bank policies change often; old guidance is usually wrong guidance.
    • Fix it by storing effective dates and filtering out superseded documents at retrieval time.
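
Superseded-version filtering can be as simple as keeping, per document ID, only the newest version whose effective date is not in the future. A sketch using the same `id`/`effectiveDate` fields as the corpus above:

```typescript
// Keep only the latest effective version of each policy document,
// skipping versions dated after the as-of date (not yet in force).
// ISO date strings compare correctly with plain string comparison.
type VersionedDoc = { id: string; effectiveDate: string; text: string };

function currentVersions(docs: VersionedDoc[], asOf: string): VersionedDoc[] {
  const latest = new Map<string, VersionedDoc>();
  for (const doc of docs) {
    if (doc.effectiveDate > asOf) continue; // not yet effective
    const seen = latest.get(doc.id);
    if (!seen || doc.effectiveDate > seen.effectiveDate) latest.set(doc.id, doc);
  }
  return [...latest.values()];
}

const versions = currentVersions(
  [
    { id: "IB-MNPI-001", effectiveDate: "2024-06-01", text: "old wording" },
    { id: "IB-MNPI-001", effectiveDate: "2025-01-15", text: "current wording" },
    { id: "IB-MNPI-001", effectiveDate: "2026-09-01", text: "future wording" },
  ],
  "2025-06-01"
);
console.log(versions[0].text); // "current wording"
```

Run this at retrieval time (or when building the index) so superseded text never reaches the agent's context.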

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
