How to Build an Underwriting Agent Using LangChain in Python for Wealth Management

By Cyprian Aarons · Updated 2026-04-21
underwriting · langchain · python · wealth-management

An underwriting agent for wealth management takes client data, portfolio context, risk appetite, and policy constraints, then turns that into a structured recommendation: approve, reject, or route to human review. It matters because wealth firms need faster decisions without losing control over suitability, compliance, and auditability.

Architecture

  • Input normalization layer
    • Converts raw client intake forms, KYC fields, portfolio summaries, and notes into a consistent schema.
  • Policy retrieval layer
    • Pulls underwriting rules from approved internal documents using langchain_community.vectorstores and a retriever.
  • Reasoning and decision layer
    • Uses an LLM chain to classify the case and generate a decision rationale grounded in retrieved policy.
  • Guardrail layer
    • Enforces hard checks for missing KYC data, sanctions flags, concentration limits, or unsuitable product exposure.
  • Audit logging layer
    • Persists inputs, retrieved policy snippets, model output, and final decision for compliance review.
  • Human escalation layer
    • Routes borderline or high-risk cases to an advisor or compliance officer.

Implementation

1. Define the underwriting schema and policy retrieval

Start with a strict input model. In wealth management, loose JSON is how you end up with broken suitability checks and bad audit trails.

from pydantic import BaseModel, Field
from typing import Literal, List

class UnderwritingCase(BaseModel):
    client_id: str
    jurisdiction: str
    age: int
    liquid_net_worth_usd: float
    annual_income_usd: float
    risk_profile: Literal["conservative", "moderate", "aggressive"]
    requested_product: str
    existing_exposure_usd: float = Field(ge=0)
    kyc_complete: bool
    sanctions_cleared: bool
    notes: str

class UnderwritingDecision(BaseModel):
    decision: Literal["approve", "reject", "review"]
    rationale: str
    policy_refs: List[str]
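A strict schema pays off immediately at the boundary: malformed input fails before it reaches retrieval or the model. A quick sanity check, assuming pydantic v2 is installed, might look like this (the sample field values are illustrative):

```python
from typing import Literal

from pydantic import BaseModel, Field, ValidationError

class UnderwritingCase(BaseModel):
    client_id: str
    jurisdiction: str
    age: int
    liquid_net_worth_usd: float
    annual_income_usd: float
    risk_profile: Literal["conservative", "moderate", "aggressive"]
    requested_product: str
    existing_exposure_usd: float = Field(ge=0)
    kyc_complete: bool
    sanctions_cleared: bool
    notes: str

# A negative exposure should be rejected at intake, not deep in the pipeline.
rejected = False
try:
    UnderwritingCase(
        client_id="C1", jurisdiction="UK", age=40,
        liquid_net_worth_usd=1_000_000, annual_income_usd=200_000,
        risk_profile="moderate", requested_product="bond fund",
        existing_exposure_usd=-500,  # violates Field(ge=0)
        kyc_complete=True, sanctions_cleared=True, notes="",
    )
except ValidationError:
    rejected = True
print("rejected:", rejected)
```

The same `ValidationError` path is where you log and bounce bad intake forms rather than silently coercing them.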

For policy retrieval, use a vector store over approved underwriting docs. This keeps the agent grounded in firm-approved language instead of inventing rules.

from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

# Assume `texts` is a list of approved policy chunks and `metadatas` contains source refs.
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = FAISS.from_texts(texts=texts, embedding=embeddings, metadatas=metadatas)
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

2. Build the LangChain prompt and structured output chain

Use ChatPromptTemplate plus with_structured_output() so the model returns a predictable object. For underwriting, that is non-negotiable.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are an underwriting assistant for wealth management. "
     "Use only the provided policy context. "
     "If KYC is incomplete or sanctions are not cleared, return review or reject. "
     "Do not invent policy rules."),
    ("human",
     "Case:\n{case}\n\nPolicy context:\n{context}")
])

decision_chain = prompt | llm.with_structured_output(UnderwritingDecision)

Now wrap retrieval around it:

def build_context(query: str) -> str:
    docs = retriever.invoke(query)
    return "\n\n".join(
        f"[{doc.metadata.get('source', 'policy')}] {doc.page_content}"
        for doc in docs
    )
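The joining logic is easy to exercise without a live vector store. Here a minimal dataclass stands in for retrieved LangChain documents (a sketch only; the real `retriever.invoke()` returns `Document` objects with the same `page_content`/`metadata` shape):

```python
from dataclasses import dataclass, field

# Minimal stand-in for a retrieved LangChain Document (illustration only).
@dataclass
class Doc:
    page_content: str
    metadata: dict = field(default_factory=dict)

def build_context(docs) -> str:
    # Prefix each chunk with its source ref so the model can cite it.
    return "\n\n".join(
        f"[{doc.metadata.get('source', 'policy')}] {doc.page_content}"
        for doc in docs
    )

docs = [
    Doc("Structured notes require a completed suitability questionnaire.",
        {"source": "POL-114"}),
    Doc("UK clients must pass enhanced product-knowledge checks."),  # no source ref
]
context = build_context(docs)
print(context)
```

Chunks without a source ref fall back to a generic `[policy]` tag, which is itself a useful signal: if that tag shows up often, your index metadata is incomplete.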

3. Add deterministic guardrails before the LLM runs

Wealth management underwriting should fail closed on obvious issues. Do not send bad cases to the model hoping it will “figure it out.”

def precheck(case: UnderwritingCase) -> tuple[bool, str]:
    if not case.kyc_complete:
        return False, "KYC incomplete"
    if not case.sanctions_cleared:
        return False, "Sanctions not cleared"
    if case.age < 18:
        return False, "Client under minimum age"
    return True, ""
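Because the precheck is deterministic, it is trivially unit-testable, which is exactly what you want for mandatory controls. A sketch using a lightweight stand-in carrying only the fields the precheck reads:

```python
from dataclasses import dataclass

# Stand-in with only the fields precheck reads (illustration only).
@dataclass
class Case:
    kyc_complete: bool
    sanctions_cleared: bool
    age: int

def precheck(case) -> tuple[bool, str]:
    # Fail closed: any missing mandatory control short-circuits the pipeline.
    if not case.kyc_complete:
        return False, "KYC incomplete"
    if not case.sanctions_cleared:
        return False, "Sanctions not cleared"
    if case.age < 18:
        return False, "Client under minimum age"
    return True, ""

assert precheck(Case(kyc_complete=False, sanctions_cleared=True, age=40)) == (False, "KYC incomplete")
assert precheck(Case(kyc_complete=True, sanctions_cleared=False, age=40)) == (False, "Sanctions not cleared")
assert precheck(Case(kyc_complete=True, sanctions_cleared=True, age=40)) == (True, "")
```

Checks run in order, so the returned reason is the first failing control; if compliance wants every failing control reported, collect them into a list instead of returning early.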

def underwrite(case: UnderwritingCase) -> UnderwritingDecision:
    ok, reason = precheck(case)
    if not ok:
        return UnderwritingDecision(
            decision="review",
            rationale=reason,
            policy_refs=["internal-control-precheck"]
        )

    query = f"{case.jurisdiction} {case.requested_product} {case.risk_profile} underwriting"
    context = build_context(query)
    result = decision_chain.invoke({
        "case": case.model_dump_json(indent=2),
        "context": context,
    })
    return result

4. Run the agent with traceable inputs and outputs

Keep every decision explainable. Compliance teams will ask who approved what, based on which rule set.

sample_case = UnderwritingCase(
    client_id="C12345",
    jurisdiction="UK",
    age=42,
    liquid_net_worth_usd=2500000,
    annual_income_usd=380000,
    risk_profile="moderate",
    requested_product="structured note",
    existing_exposure_usd=150000,
    kyc_complete=True,
    sanctions_cleared=True,
    notes="Client wants income-focused exposure with capital preservation."
)

decision = underwrite(sample_case)
print(decision.model_dump())

If you want richer tracing in production, wrap this with LangSmith so every prompt, retrieval hit, and output is auditable.

Production Considerations

  • Compliance logging

    • Persist the full decision bundle: input case, retrieved policy chunks, model output, final action.
    • Store immutable logs with retention aligned to your regulatory requirements.
  • Data residency

    • Keep client PII inside approved regions.
    • If you use hosted LLMs or embeddings APIs, confirm region pinning and contractual controls before sending any sensitive data.
  • Guardrails

    • Hard-code disqualifiers like incomplete KYC, sanctions hits, missing suitability data.
    • Use human review for edge cases such as high-concentration exposure or unusual jurisdiction/product combinations.
  • Monitoring

    • Track approval rates by product type, false approvals overturned by compliance, and retrieval quality.
    • Alert when the agent starts returning too many “approve” decisions without strong policy citations.
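For the compliance-logging bullet above, one stdlib-only approach is an append-only JSON Lines file with a content hash per record, so tampering is detectable on replay. This is a sketch, not a full immutability story (file paths and field names are illustrative; production systems would use WORM storage or a database with retention policies):

```python
import datetime
import hashlib
import json
import tempfile

def log_decision(path: str, case: dict, policy_chunks: list[str], decision: dict) -> str:
    """Append one audit record as a JSON line; return its SHA-256 content hash."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "case": case,
        "policy_chunks": policy_chunks,
        "decision": decision,
    }
    # Hash a canonical (sorted-key) serialization so the digest is reproducible.
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"record": record, "sha256": digest}) + "\n")
    return digest

path = tempfile.NamedTemporaryFile(suffix=".jsonl", delete=False).name
digest = log_decision(
    path,
    case={"client_id": "C12345", "requested_product": "structured note"},
    policy_chunks=["[POL-114] Structured notes require a suitability questionnaire."],
    decision={"decision": "review", "rationale": "KYC incomplete"},
)
```

An auditor (or a monitoring job) can then re-hash each stored record and flag any line whose digest no longer matches.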

Common Pitfalls

  1. Letting the model make final decisions without deterministic checks

    • Fix this by running prechecks before any LLM call.
    • The model should recommend within constraints; it should not override mandatory controls.
  2. Using unapproved documents as retrieval sources

    • Fix this by indexing only versioned policy documents from legal/compliance.
    • Tag every chunk with source ID and effective date.
  3. Ignoring explainability requirements

    • Fix this by forcing structured outputs with UnderwritingDecision.
    • Store policy_refs so auditors can trace each recommendation back to source material.
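Tagging chunks at index time, per pitfall 2, can be as simple as carrying a metadata dict alongside each text; the resulting `texts` and `metadatas` lists are exactly what `FAISS.from_texts` expects in step 1. The policy IDs, dates, and version strings here are illustrative:

```python
# Build parallel (text, metadata) lists so every indexed chunk is traceable
# back to a versioned, compliance-approved source document.
policy_chunks = [
    ("Structured notes require a completed suitability questionnaire.", "POL-114"),
    ("Single-issuer concentration must not exceed 10% of liquid net worth.", "POL-201"),
]

texts = [text for text, _ in policy_chunks]
metadatas = [
    {"source": ref, "effective_date": "2026-01-01", "version": "v3"}
    for _, ref in policy_chunks
]
print(metadatas[0])
```

With `source` and `effective_date` on every chunk, a retrieval hit can always answer "which rule, as of when?", and stale policy versions can be filtered out before indexing.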

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

