How to Build a Transaction Monitoring Agent Using LangChain in Python for Pension Funds

By Cyprian Aarons · Updated 2026-04-21
transaction-monitoring · langchain · python · pension-funds

A transaction monitoring agent for pension funds scans contribution, withdrawal, transfer, and beneficiary-change activity, then flags patterns that look inconsistent with policy, regulation, or member behavior. It matters because pension data is sensitive, regulated, and high-volume; the agent helps compliance teams catch suspicious activity early without turning every alert into a manual review.

Architecture

  • Transaction ingestion layer

    • Pulls records from core pension admin systems, batch files, or event streams.
    • Normalizes fields like member_id, transaction_type, amount, timestamp, country, and channel.
  • Rules and policy engine

    • Encodes pension-specific controls:
      • early withdrawal checks
      • unusual transfer spikes
      • repeated beneficiary edits
      • contribution reversals
    • Produces deterministic alerts before any LLM reasoning.
  • LangChain analysis agent

    • Uses ChatOpenAI with structured output to classify risk and explain why a transaction is suspicious.
    • Summarizes evidence for compliance analysts instead of making final decisions.
  • Evidence retrieval layer

    • Pulls member history, prior alerts, policy text, and case notes.
    • Uses FAISS or another vector store, queried through a retriever (or the legacy RetrievalQA chain), for context.
  • Case management output

    • Writes alerts to a queue, database, or ticketing system.
    • Stores full reasoning traces for audit and regulator review.
  • Audit and observability

    • Logs prompts, model outputs, rule hits, timestamps, and human dispositions.
    • Keeps immutable records for retention and inspection.
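The layers above can be sketched as one pipeline function. This is a minimal, hypothetical skeleton: `Alert`, `monitor`, and the injected `rules`, `assess`, and `write_case` callables are illustrative stand-ins for the real ingestion, rules-engine, LLM, and case-management components built in the sections below.

```python
from dataclasses import dataclass

# Hypothetical sketch of the end-to-end flow: ingestion -> rules -> LLM -> case output.
@dataclass
class Alert:
    transaction_id: str
    rule_flags: list      # deterministic triggers, set before any LLM call
    assessment: str = ""  # LLM explanation, filled in afterwards

def monitor(transactions, rules, assess, write_case):
    """Run deterministic rules first; only flagged transactions reach the LLM."""
    alerts = []
    for tx in transactions:
        flags = [name for name, check in rules.items() if check(tx)]
        if not flags:
            continue  # no rule hit -> no alert, no model call
        alert = Alert(transaction_id=tx["id"], rule_flags=flags)
        alert.assessment = assess(tx, flags)  # the LLM explains, it never decides
        write_case(alert)
        alerts.append(alert)
    return alerts
```

The key design choice this encodes: the model call sits strictly behind the rules engine, so every alert has a deterministic trigger before any LLM reasoning happens.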

Implementation

  1. Install the core dependencies

Use LangChain’s current split packages. For a minimal production-shaped setup:

pip install langchain langchain-openai langchain-community faiss-cpu pydantic

Set your model key in the environment:

export OPENAI_API_KEY="your-key"
  2. Define the transaction schema and risk rules

Start with deterministic checks. Pension funds need explainable triggers before any model call.

from datetime import datetime
from pydantic import BaseModel, Field

class Transaction(BaseModel):
    transaction_id: str
    member_id: str
    transaction_type: str  # contribution | withdrawal | transfer | beneficiary_change
    amount: float
    currency: str = "USD"
    timestamp: datetime
    country: str
    channel: str

def rule_flags(tx: Transaction) -> list[str]:
    flags = []

    if tx.transaction_type == "withdrawal" and tx.amount > 50000:
        flags.append("High-value withdrawal")

    if tx.transaction_type == "transfer" and tx.country not in {"US", "CA", "GB"}:
        flags.append("Cross-border transfer to higher-risk jurisdiction")

    if tx.transaction_type == "beneficiary_change" and tx.channel == "call_center":
        flags.append("Beneficiary change via non-digital channel")

    if tx.transaction_type == "contribution" and tx.amount > 100_000:
        flags.append("Large contribution spike")

    return flags
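Per-transaction thresholds like these miss velocity patterns such as the "unusual transfer spikes" listed in the architecture. One hedged way to cover that, assuming you can fetch a member's recent amounts from the admin system, is a history-aware check; the `factor` knob below is an illustrative tuning value, not a regulatory one.

```python
from statistics import mean

def spike_flag(amount: float, prior_amounts: list[float], factor: float = 5.0) -> bool:
    """Flag when the current amount dwarfs the member's recent average.

    `prior_amounts` is assumed to come from member history; `factor` is an
    illustrative multiplier a compliance team would tune, not a fixed rule.
    """
    if not prior_amounts:
        return False  # no baseline yet; fall back to per-transaction thresholds
    return amount > factor * mean(prior_amounts)
```

A check like this would run alongside `rule_flags`, contributing an extra flag such as "Amount spike vs. member history" when it fires.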
  3. Build the LangChain agent that explains the alert

Use ChatOpenAI plus structured output so the response is machine-readable. That makes downstream case creation much cleaner than parsing free text.

from typing import Literal
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

class RiskAssessment(BaseModel):
    risk_level: Literal["low", "medium", "high"] = Field(description="Overall compliance risk")
    summary: str = Field(description="One-paragraph explanation for the analyst")
    reasons: list[str] = Field(default_factory=list, description="Specific evidence behind the rating")
    recommended_action: Literal["auto_close", "queue_for_review", "escalate"] = Field(
        description="Suggested disposition; the final decision stays with humans"
    )

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def assess_transaction(tx: Transaction, flags: list[str], member_context: str) -> RiskAssessment:
    prompt = f"""
You are monitoring pension fund transactions for compliance risk.

Transaction:
{tx.model_dump()}

Rule flags:
{flags}

Member context:
{member_context}

Return a concise assessment focused on compliance risk,
auditability, and explainability. Do not invent facts.
"""
    structured_llm = llm.with_structured_output(RiskAssessment)
    return structured_llm.invoke(prompt)

if __name__ == "__main__":
    tx = Transaction(
        transaction_id="txn_1001",
        member_id="mem_42",
        transaction_type="withdrawal",
        amount=75000,
        currency="USD",
        timestamp=datetime.utcnow(),
        country="US",
        channel="branch"
    )

    flags = rule_flags(tx)
    context = (
        "Member has three prior withdrawals under $5k in the last 18 months. "
        "No prior sanctions hits. Recent address change was verified."
    )

    result = assess_transaction(tx, flags, context)
    print(result.model_dump())
  4. Add retrieval for policy and case history

For pension funds, the model should cite policy text rather than guess. Use a retriever over approved documents only.

from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

docs_texts = [
    "Withdrawals above $50,000 require enhanced review.",
    "Beneficiary changes made through call center must be verified by callback.",
]
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_texts(docs_texts, embedding=embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})

query = "What policy applies to a $75k withdrawal?"
docs = retriever.invoke(query)
print([d.page_content for d in docs])
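The retrieved snippets then need to reach the model as part of the `member_context` passed to `assess_transaction`. A minimal sketch of that wiring, as a pure helper (the function name and layout here are an assumption, not a LangChain API):

```python
def build_member_context(history_summary: str, policy_docs: list[str]) -> str:
    """Combine member history with retrieved, approved policy text so the
    model can cite policy rather than guess. Layout is illustrative."""
    lines = [history_summary, "", "Relevant policy excerpts:"]
    lines += [f"- {doc}" for doc in policy_docs]
    return "\n".join(lines)
```

For example, `build_member_context(history, [d.page_content for d in docs])` produces the context string for `assess_transaction`, keeping the prompt grounded in documents your compliance team has actually approved.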

Production Considerations

  • Keep data residency explicit

    • Pension records often cannot leave approved regions.
    • Pin model endpoints, vector stores, and logs to compliant regions only.
  • Make audit trails immutable

    • Store prompt inputs, retrieved documents, model outputs, rule hits, and analyst decisions.
    • Regulators will ask why an alert was raised; you need a reproducible chain of evidence.
  • Use guardrails before LLM calls

    • Redact national IDs, bank account numbers, and medical or beneficiary notes unless strictly required.
    • The agent should summarize risk using minimum necessary data.
  • Separate recommendation from decision

    • The LLM should recommend queue_for_review or escalate, not freeze accounts automatically.
    • Final action stays with policy engines or authorized compliance staff.
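The redaction guardrail above can start as a regex pass over free-text fields before any prompt is built. The patterns below are illustrative only; a real deployment needs a vetted PII catalogue and, ideally, a dedicated redaction service.

```python
import re

# Illustrative patterns only -- not a complete PII catalogue.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),      # US SSN-style IDs
    (re.compile(r"\b\d{8,17}\b"), "[REDACTED_ACCOUNT]"),           # bank account numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),  # email addresses
]

def redact(text: str) -> str:
    """Apply every redaction pattern before text reaches a prompt or a log."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text
```

Running `redact()` over `member_context` and case notes before calling `assess_transaction` keeps the "minimum necessary data" principle enforceable in code rather than by convention.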

Common Pitfalls

  • Letting the LLM decide without rules

    • Mistake: sending raw transactions straight to the model.
    • Fix: run deterministic rules first so every alert has a clear trigger.
  • Using free-form text outputs

    • Mistake: parsing paragraphs into downstream systems.
    • Fix: use with_structured_output() with Pydantic models so outputs are stable.
  • Ignoring pension-specific context

    • Mistake: treating all financial transactions the same.
    • Fix: encode fund rules like withdrawal thresholds, beneficiary verification steps, transfer restrictions, retention requirements, and residency constraints.
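One way to keep those fund-specific rules reviewable by compliance staff is to express them as data rather than hard-coded conditions. A hedged sketch, with made-up threshold values:

```python
# Illustrative fund policy expressed as data; every value here is an example,
# not a regulatory threshold.
FUND_POLICY = {
    "withdrawal_review_threshold": 50_000,
    "contribution_spike_threshold": 100_000,
    "approved_transfer_countries": {"US", "CA", "GB"},
    "callback_required_channels": {"call_center"},
}

def policy_flags(tx_type: str, amount: float, country: str, channel: str,
                 policy: dict = FUND_POLICY) -> list[str]:
    """Evaluate a transaction against a data-driven policy table."""
    flags = []
    if tx_type == "withdrawal" and amount > policy["withdrawal_review_threshold"]:
        flags.append("withdrawal_over_threshold")
    if tx_type == "contribution" and amount > policy["contribution_spike_threshold"]:
        flags.append("contribution_spike")
    if tx_type == "transfer" and country not in policy["approved_transfer_countries"]:
        flags.append("transfer_outside_approved_countries")
    if tx_type == "beneficiary_change" and channel in policy["callback_required_channels"]:
        flags.append("beneficiary_change_needs_callback")
    return flags
```

Because the thresholds live in one dictionary, a policy change becomes a reviewed config edit rather than a code change, which is easier to audit and easier to show a regulator.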

If you build it this way, you get something compliance teams can actually use: deterministic where it must be strict, LLM-assisted where judgment helps, and auditable end to end.


Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
