How to Build a Fraud Detection Agent Using LangChain in TypeScript for Pension Funds

By Cyprian Aarons · Updated 2026-04-21
fraud-detection · langchain · typescript · pension-funds

A fraud detection agent for pension funds reviews member activity, contribution patterns, withdrawals, address changes, beneficiary updates, and advisor interactions to flag suspicious behavior before money moves. It matters because pension fraud is usually low-frequency but high-impact: one bad withdrawal or identity takeover can trigger regulatory issues, member harm, and a long audit trail you need to explain.

Architecture

  • Event intake layer

    • Receives pension-specific events from CRM, core admin systems, KYC/AML tools, and transaction logs.
    • Normalizes them into a single schema before the agent sees them.
  • Risk signal retriever

    • Pulls policy rules, historical fraud cases, and member profile context.
    • Uses VectorStoreRetriever so the agent can ground decisions in internal evidence.
  • LangChain reasoning chain

    • Uses ChatOpenAI plus RunnableSequence to classify risk and produce a structured explanation.
    • Keeps the output machine-readable for downstream case management.
  • Guardrail and policy layer

    • Enforces pension-fund-specific constraints like residency rules, consent requirements, and escalation thresholds.
    • Prevents the model from making autonomous financial decisions.
  • Case creation / alerting

    • Sends high-risk cases to investigators in ServiceNow, Jira, or an internal queue.
    • Stores the model’s rationale for audit review.
  • Audit store

    • Persists input event hashes, retrieved evidence IDs, model version, prompt version, and final risk score.
    • This is non-negotiable in regulated environments.
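The intake layer's normalization step can be sketched as a plain mapping from one source system's payload into the shared event shape. The payload fields below are hypothetical stand-ins for a core admin system's export; adjust the names and the action map to whatever your CRM and admin systems actually emit.

```typescript
// Hypothetical core-admin payload; field names are illustrative.
type CoreAdminPayload = {
  member_ref: string;
  action: "WDR" | "BEN_UPD" | "ADDR_UPD";
  amount_cents?: number;
  country_iso: string;
  occurred_at: string; // ISO 8601
};

// Map source-system action codes onto the unified eventType values.
const EVENT_TYPE_MAP: Record<CoreAdminPayload["action"], string> = {
  WDR: "withdrawal_request",
  BEN_UPD: "beneficiary_update",
  ADDR_UPD: "address_change",
};

export function normalizeCoreAdminEvent(raw: CoreAdminPayload) {
  return {
    memberId: raw.member_ref,
    eventType: EVENT_TYPE_MAP[raw.action],
    amount: raw.amount_cents !== undefined ? raw.amount_cents / 100 : undefined,
    country: raw.country_iso,
    timestamp: raw.occurred_at,
    metadata: {},
  };
}
```

One normalizer per source system keeps source-specific quirks out of the agent; the chain only ever sees the unified shape.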

Implementation

1. Define the event schema and risk output

Start with a strict contract. In fraud workflows, free-form text is how you end up with unreviewable alerts.

import { z } from "zod";

export const PensionEventSchema = z.object({
  memberId: z.string(),
  eventType: z.enum([
    "contribution_change",
    "withdrawal_request",
    "beneficiary_update",
    "address_change",
    "advisor_change",
    "login_anomaly",
  ]),
  amount: z.number().optional(),
  country: z.string(),
  timestamp: z.string(),
  metadata: z.record(z.any()).default({}),
});

export const FraudAssessmentSchema = z.object({
  riskLevel: z.enum(["low", "medium", "high"]),
  score: z.number().min(0).max(100),
  reasons: z.array(z.string()),
  recommendedAction: z.enum(["allow", "review", "escalate"]),
});

2. Build the LangChain chain with retrieval + structured output

This pattern uses ChatOpenAI, RunnablePassthrough, RunnableSequence, and a retriever backed by your internal knowledge base. The agent should not “invent” pension policy; it should retrieve it.

import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "@langchain/core/documents";
import { PensionEventSchema, FraudAssessmentSchema } from "./schemas";

const llm = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  temperature: 0,
});

const docs = [
  new Document({
    pageContent:
      "Escalate withdrawal requests if bank account details changed within the last 7 days.",
    metadata: { sourceId: "policy-001" },
  }),
  new Document({
    pageContent:
      "High-risk pattern: beneficiary update followed by address change and login anomaly within 48 hours.",
    metadata: { sourceId: "case-044" },
  }),
];

const vectorStore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

const retriever = vectorStore.asRetriever(4);

const prompt = PromptTemplate.fromTemplate(`
You are a fraud detection assistant for a pension fund.
Use only the provided policy context and event data.

Policy context:
{context}

Event:
{event}

Return only a JSON object (no prose, no code fences) with exactly these keys:
riskLevel, score, reasons, recommendedAction
`);

const chain = RunnableSequence.from([
  {
    // The chain is invoked with the serialized event, so both branches
    // receive the same string: one retrieves policy context for it,
    // the other passes it through to the prompt's {event} slot.
    context: async (eventJson: string) => {
      const relevantDocs = await retriever.invoke(eventJson);
      return relevantDocs.map((d) => d.pageContent).join("\n");
    },
    event: new RunnablePassthrough(),
  },
  prompt,
  llm,
  new StringOutputParser(),
]);

export async function assessFraud(rawEvent: unknown) {
  const event = PensionEventSchema.parse(rawEvent);

  // Serialize once; the chain derives both {event} and {context} from it.
  const result = await chain.invoke(JSON.stringify(event));

  // Strip Markdown fences defensively in case the model wraps its JSON.
  const json = result.replace(/```json|```/g, "").trim();
  return FraudAssessmentSchema.parse(JSON.parse(json));
}

3. Wrap it in an API handler with audit logging

The model output should never be your system of record. Persist the full decision trail separately so investigators can reconstruct what happened later.

import { createHash } from "node:crypto";
// Adjust these import paths to your project layout.
import { assessFraud } from "./assess";
import { PensionEventSchema } from "./schemas";

type AuditRecord = {
  memberIdHash: string;
  eventType: string;
  riskLevel: string;
  score: number;
  modelName: string;
  promptVersion: string;
  timestamp: string;
};

// Stubs: wire these to your audit store and case-management system.
async function writeAuditLog(record: AuditRecord) {}

async function createCase(payload: { memberIdHash: string; reasons: string[] }) {}

export async function handlePensionFraudEvent(rawEvent: unknown) {
  const event = PensionEventSchema.parse(rawEvent);
  const assessment = await assessFraud(event);

  // Never log the raw member ID; hash it for correlation.
  const memberIdHash = createHash("sha256").update(event.memberId).digest("hex");

  // Write the audit record for every assessment, not just escalations.
  await writeAuditLog({
    memberIdHash,
    eventType: event.eventType,
    riskLevel: assessment.riskLevel,
    score: assessment.score,
    modelName: "gpt-4o-mini",
    promptVersion: "v1",
    timestamp: new Date().toISOString(),
  });

  if (assessment.recommendedAction === "escalate") {
    await createCase({ memberIdHash, reasons: assessment.reasons });
  }
  return assessment;
}

Use this flow in your service:

  1. Validate input with PensionEventSchema.
  2. Run retrieval against internal policy/case documents.
  3. Parse output with FraudAssessmentSchema.
  4. Write an audit record for every assessment, and if recommendedAction === "escalate", also create a case. The audit record should contain:
    • model name
    • prompt version
    • document source IDs
    • risk score
    • timestamp
    • hashed member identifier
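The hashed member identifier can be produced with Node's built-in crypto module. A minimal sketch, assuming a salt loaded from configuration (the salt parameter here is hypothetical) so that identical member IDs correlate across audit records without the raw ID ever landing in logs:

```typescript
import { createHash } from "node:crypto";

// Salted SHA-256 over the member ID. The salt is a secret from your
// config store (hypothetical here); without it, common IDs could be
// reversed by brute force.
export function hashMemberId(memberId: string, salt: string): string {
  return createHash("sha256").update(`${salt}:${memberId}`).digest("hex");
}
```

The same salt must be used everywhere audit records are written, or investigators lose the ability to correlate events for one member.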

Production Considerations

  • Data residency

    • Keep embeddings, vector stores, logs, and LLM traffic inside approved regions.
    • For pension funds operating under local retirement legislation, do not send member PII to unmanaged third-party endpoints.
  • Auditability

    • Store every retrieved document ID and every model version used for an alert.
    • Investigators need to answer why a case was escalated six months later.
  • Guardrails

    • Never let the agent auto-block withdrawals or change member records.
    • Limit it to triage and recommendation; humans approve any financial action.
  • Monitoring

    • Track false positive rate by event type: withdrawals will behave differently from address changes.
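The monitoring point above can be sketched as a per-event-type outcome tally, assuming investigators mark each alert as confirmed fraud or a false positive after review. In production this would feed your metrics store rather than an in-memory map; the shape here is illustrative.

```typescript
type Outcome = "confirmed" | "false_positive";

// In-memory tally keyed by eventType; swap for your metrics backend.
const counts = new Map<string, { confirmed: number; falsePositive: number }>();

export function recordOutcome(eventType: string, outcome: Outcome): void {
  const c = counts.get(eventType) ?? { confirmed: 0, falsePositive: 0 };
  if (outcome === "confirmed") c.confirmed += 1;
  else c.falsePositive += 1;
  counts.set(eventType, c);
}

// Returns undefined when there is no data for the event type yet.
export function falsePositiveRate(eventType: string): number | undefined {
  const c = counts.get(eventType);
  if (!c) return undefined;
  const total = c.confirmed + c.falsePositive;
  return total === 0 ? undefined : c.falsePositive / total;
}
```

Tracking the rate per event type is the point: a 40% false positive rate might be acceptable for address changes but would swamp investigators if withdrawals behaved the same way.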

Common Pitfalls

  1. Using the LLM as the decision engine

    • Don’t ask the model to “decide fraud” without rules or retrieval.
    • Use deterministic thresholds around model scores plus policy checks.
  2. Ignoring pension-specific edge cases

    • A beneficiary change before retirement age is not automatically fraud.
    • Encode fund rules, vesting rules, jurisdiction rules, and consent flows into retrieval content or explicit validators.
  3. Weak audit trails

    • Logging only the final score is useless in regulated investigations.
    • Persist input hashes, retrieved evidence IDs, prompt versioning, model name, and reviewer outcomes.
  4. Letting PII leak into prompts unnecessarily

    • Replace direct identifiers with internal tokens where possible.
    • Keep names, ID numbers, and bank details out of prompts unless they are required for a specific rule check.
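Pitfall 1 is worth making concrete. A minimal sketch of a deterministic decision layer that bounds the model's recommendation with fixed score bands and one hard policy rule; the thresholds and the bankDetailsChangedRecently flag are illustrative, not recommended values.

```typescript
type Assessment = {
  score: number;
  recommendedAction: "allow" | "review" | "escalate";
};

export function finalAction(
  assessment: Assessment,
  policyFlags: { bankDetailsChangedRecently: boolean }
): "allow" | "review" | "escalate" {
  // Hard policy rule wins regardless of what the model said.
  if (policyFlags.bankDetailsChangedRecently && assessment.score >= 40) {
    return "escalate";
  }
  // Deterministic score bands bound the model's recommendation.
  if (assessment.score >= 80) return "escalate";
  if (assessment.score >= 50) return "review";
  // Below the review band, the model can still only soften to "review",
  // never silently escalate or block.
  return assessment.recommendedAction === "allow" ? "allow" : "review";
}
```

The model contributes the score and the reasons; the thresholds and policy checks stay in reviewable, version-controlled code.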

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

