How to Build a Policy Q&A Agent Using CrewAI in TypeScript for Pension Funds

By Cyprian Aarons · Updated 2026-04-21
Tags: policy-q-a, crewai, typescript, pension-funds, policy-qanda

A policy Q&A agent for pension funds answers member and staff questions against approved policy documents, fund rules, benefit guides, and operational procedures. It matters because pension operations are full of regulated edge cases: eligibility, contribution limits, transfer rules, retirement options, and complaint handling all need consistent answers with an audit trail.

Architecture

  • Policy document ingestion layer

    • Loads pension scheme rules, trustee minutes, member booklets, and internal SOPs.
    • Splits documents into retrievable chunks with source metadata like document name, version, effective date, and jurisdiction.
  • Retrieval tool

    • Searches only approved policy sources.
    • Returns top passages with citations so the agent can ground every answer in fund-approved text.
  • Policy Q&A agent

    • Uses a strict system prompt that says: answer only from retrieved policy content.
    • Refuses to guess when the policy is missing or ambiguous.
  • Audit logging layer

    • Stores user question, retrieved sources, final answer, confidence flags, and timestamp.
    • Keeps evidence for compliance reviews and complaint investigations.
  • Guardrails and escalation path

    • Detects high-risk topics like death benefits, divorce orders, transfers out, tax treatment, and complaints.
    • Routes those cases to a human pensions administrator instead of generating a definitive answer.
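
The guardrail layer above can start as a deterministic keyword check that runs before any model call. A minimal sketch (the topic list and the `detectHighRiskTopic` helper are illustrative assumptions, not part of any SDK):

```typescript
// guardrails.ts — deterministic pre-LLM check (illustrative topic list)
const HIGH_RISK_TOPICS: Record<string, RegExp> = {
  death_benefits: /death benefit|beneficiar|nominee/i,
  divorce: /divorce|pension sharing order/i,
  transfers_out: /transfer (out|my pot|overseas)/i,
  tax: /tax treatment|tax[- ]free/i,
  complaints: /complain|ombudsman/i,
};

// Returns the matched topic so the escalation reason can be logged,
// or null when the question can proceed to the agent.
export function detectHighRiskTopic(question: string): string | null {
  for (const [topic, pattern] of Object.entries(HIGH_RISK_TOPICS)) {
    if (pattern.test(question)) return topic;
  }
  return null;
}
```

Running this check before the LLM means the escalation path cannot be bypassed by prompt wording.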

Implementation

1) Install CrewAI for TypeScript and define your policy data model

CrewAI’s TypeScript SDK gives you Agent, Task, and Crew. For a pension fund use case, keep your policy metadata explicit so you can trace every answer back to a controlled source.

npm install @crew-ai/crew-ai zod

// policy-types.ts
export type PolicyChunk = {
  id: string;
  title: string;
  sourceUrl: string;
  version: string;
  effectiveDate: string;
  jurisdiction: "UK" | "EU" | "US";
  text: string;
};
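
The ingestion layer then splits each document into chunks carrying that metadata. A naive paragraph-based splitter is enough to illustrate the shape (the `chunkPolicyDoc` helper is an assumption; production ingestion would use token-aware chunking with overlap):

```typescript
// chunking.ts — naive paragraph splitter (illustrative only)
type ChunkMeta = {
  title: string;
  sourceUrl: string;
  version: string;
  effectiveDate: string;
  jurisdiction: "UK" | "EU" | "US";
};

export function chunkPolicyDoc(meta: ChunkMeta, body: string) {
  return body
    .split(/\n\s*\n/)                 // split on blank lines
    .map((p) => p.trim())
    .filter((p) => p.length > 0)
    .map((text, i) => ({
      ...meta,
      id: `${meta.title}#${meta.version}#${i}`, // stable, version-scoped id
      text,
    }));
}
```

Version-scoped ids mean a re-ingested policy never silently overwrites chunks cited in old audit records.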

2) Create a retrieval tool that only searches approved pension policies

Use a tool class so the agent can query your indexed policy store. The important part is that the tool returns structured results with citations; do not hand the model raw blobs without provenance.

// tools/policySearchTool.ts
import { Tool } from "@crew-ai/crew-ai";
import { z } from "zod";

const inputSchema = z.object({
  query: z.string().min(3),
});

export class PolicySearchTool extends Tool {
  name = "policy_search";
  description = "Search approved pension fund policies and return cited passages.";
  schema = inputSchema;

  async execute(input: z.infer<typeof inputSchema>) {
    // Replace with vector DB / keyword hybrid search over approved docs.
    const results = [
      {
        title: "Member Transfers Policy",
        sourceUrl: "https://intranet/policies/transfers-v4.pdf",
        version: "4.0",
        effectiveDate: "2025-01-01",
        snippet:
          "Transfers out require identity verification, receiving scheme checks, and trustee approval where flagged by compliance.",
      },
    ];

    return {
      query: input.query,
      results,
    };
  }
}
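
Inside the real search implementation, candidates should be filtered by jurisdiction and effective date before they ever reach the model. A sketch of that pre-filter (the `filterCandidates` helper and `Candidate` shape are assumptions mirroring the PolicyChunk metadata):

```typescript
// retrievalFilter.ts — narrow candidates before they reach the model
type Candidate = {
  jurisdiction: "UK" | "EU" | "US";
  effectiveDate: string; // ISO date, e.g. "2025-01-01"
  text: string;
};

export function filterCandidates(
  candidates: Candidate[],
  jurisdiction: Candidate["jurisdiction"],
  asOf: string
): Candidate[] {
  // ISO date strings compare correctly as plain strings.
  return candidates.filter(
    (c) => c.jurisdiction === jurisdiction && c.effectiveDate <= asOf
  );
}
```

Filtering at retrieval time, rather than asking the model to ignore out-of-scope passages, removes a whole class of cross-jurisdiction errors.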

3) Define the agent with hard constraints for compliance and escalation

This is where you keep the model honest. The system instructions should say it must cite sources, avoid legal advice language, and escalate anything that touches regulated decisions or ambiguous policy interpretation.

// agent.ts
import { Agent } from "@crew-ai/crew-ai";
import { PolicySearchTool } from "./tools/policySearchTool";

export const pensionPolicyAgent = new Agent({
  name: "Pension Policy Q&A Agent",
  role: "Answer pension fund policy questions using approved internal documents.",
  goal:
    "Provide concise answers grounded in cited policy text and escalate risky or ambiguous cases.",
  backstory:
    "You work for a regulated pension fund. You never invent policy details.",
  tools: [new PolicySearchTool()],
  allowDelegation: false,
});

4) Run a task through a crew and return a cited answer

For production use, wrap the agent in a crew so you can add more tasks later, like compliance review or response formatting. Keep the output contract tight so downstream systems can store it in an audit log.

// index.ts
import { Crew, Task } from "@crew-ai/crew-ai";
import { pensionPolicyAgent } from "./agent";

async function main() {
  const task = new Task({
    description:
      "Answer this member question using only approved pension policies: 'Can I transfer my pot overseas before retirement?'",
    expectedOutput:
      "A short answer with citations to the relevant policy passages. If unclear or restricted, recommend escalation.",
    agent: pensionPolicyAgent,
  });

  const crew = new Crew({
    agents: [pensionPolicyAgent],
    tasks: [task],
    verbose: true,
  });

  const result = await crew.kickoff();
  console.log(JSON.stringify(result));
}

main().catch(console.error);
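
The "tight output contract" mentioned above can be enforced with a type guard before anything is persisted (the `CitedAnswer` field names are assumptions for illustration, not a CrewAI API; the same check could be expressed as a zod schema):

```typescript
// outputContract.ts — validate the agent's answer before persisting it
export type CitedAnswer = {
  answer: string;
  citations: { title: string; sourceUrl: string }[];
  escalate: boolean;
  escalationReason?: string;
};

export function isCitedAnswer(value: unknown): value is CitedAnswer {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.answer === "string" &&
    typeof v.escalate === "boolean" &&
    Array.isArray(v.citations) &&
    // Non-escalated answers must carry at least one citation.
    (v.escalate === true || v.citations.length > 0)
  );
}
```

Anything that fails the guard is rejected rather than stored, so the audit log only ever contains well-formed, cited answers.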

Production Considerations

  • Data residency

    • Keep embeddings, logs, and model traffic inside the required region if you serve UK or EU members.
    • Pension data often includes personal data plus special handling requirements around beneficiaries and dependants.
  • Auditability

    • Persist the user question, retrieved chunks, final answer, model version, prompt version, and timestamps.
    • If a trustee or regulator asks why an answer was given, you need evidence in minutes.
  • Guardrails

    • Block or escalate topics like transfer values, divorce settlements, death benefits, tax treatment, protected rights, complaints handling deadlines, and discretionary decisions.
    • Add deterministic rules before the LLM runs; don’t rely on prompt wording alone.
  • Monitoring

    • Track citation coverage rate, escalation rate by topic, hallucination reports from ops teams, and unanswered questions by document gap.
    • If certain questions repeatedly trigger escalation because policy is missing or stale, update the source material instead of tuning prompts.
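
Citation coverage, the first metric listed, is cheap to compute over the audit log. A sketch, assuming each record stores a citation count and an escalation flag (the `AuditRecord` shape and `citationCoverage` helper are assumptions):

```typescript
// metrics.ts — citation coverage over a window of logged answers
type AuditRecord = { citations: number; escalated: boolean };

export function citationCoverage(records: AuditRecord[]): number {
  // Escalated cases never produced an answer, so they are excluded.
  const answered = records.filter((r) => !r.escalated);
  if (answered.length === 0) return 1;
  const cited = answered.filter((r) => r.citations > 0).length;
  return cited / answered.length;
}
```

A coverage rate below 1.0 in production means uncited answers are slipping past the output contract and should page someone.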

Common Pitfalls

  • Letting the model answer without citations

    • This is how bad advice gets into production.
    • Fix it by rejecting any response that does not include source references from your retrieval tool.
  • Using generic web search instead of controlled policy sources

    • Pension answers must come from approved documents only.
    • Fix it by indexing only trustee-approved content with versioning and effective dates.
  • Ignoring jurisdiction differences

    • UK auto-enrolment rules are not the same as EU occupational scheme rules or US plan administration rules.
    • Fix it by tagging every chunk with jurisdiction and filtering retrieval before generation.
  • Treating high-risk questions like normal FAQs

    • A question about transferring out after age thresholds is not the same as “how do I reset my password.”
    • Fix it by routing sensitive topics to human review with a clear escalation reason in the audit log.
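
The first pitfall's fix can go one step further than checking that citations exist: verify that every cited source was actually among the passages the retrieval tool returned for this question. A sketch (the `citationsAreGrounded` helper is an assumption):

```typescript
// citationCheck.ts — reject answers whose citations don't come from the
// retrieval results actually returned for this question
export function citationsAreGrounded(
  citedUrls: string[],
  retrievedUrls: string[]
): boolean {
  if (citedUrls.length === 0) return false; // no citations at all → reject
  const allowed = new Set(retrievedUrls);
  return citedUrls.every((u) => allowed.has(u));
}
```

This catches the subtler failure where the model cites a plausible-sounding document it was never shown.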


By Cyprian Aarons, AI Consultant at Topiax.