How to Build an Underwriting Agent Using CrewAI in TypeScript for Lending

By Cyprian Aarons · Updated 2026-04-21
underwriting · crewai · typescript · lending

An underwriting agent for lending takes borrower data, pulls in policy rules, evaluates risk signals, and produces a decision package that a human underwriter or downstream system can approve or reject. It matters because lending decisions need to be fast, consistent, auditable, and compliant with credit policy and regulatory constraints.

Architecture

  • Input normalization layer
    • Converts application payloads, bank statements, bureau data, and KYC fields into a consistent internal schema.
  • Policy retrieval layer
    • Loads lending rules: minimum DTI, max LTV, prohibited geographies, income verification thresholds, and exceptions.
  • CrewAI orchestration layer
    • Uses Agent, Task, and Crew to split work across a policy analyst agent and a risk summarizer agent.
  • Decision engine
    • Produces an approval, manual review, or decline recommendation with explicit reasons (a sketch of this output shape follows the list).
  • Audit logging layer
    • Stores prompts, model outputs, policy version, timestamps, and final recommendation for examiners.
  • Guardrail layer
    • Blocks sensitive attributes from influencing the decision and enforces human review on edge cases.
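
The decision engine's output is the contract between the agent layer and everything downstream. Here is a minimal sketch of that decision package; the field names are illustrative rather than a fixed spec:

    // Illustrative shape of the decision engine's output.
    export type Recommendation = "approve" | "manual_review" | "decline";

    export interface DecisionPackage {
      applicantId: string;
      recommendation: Recommendation;
      reasons: string[];            // explicit, policy-grounded reasons
      policyVersion: string;        // which rulebook produced this decision
      manualReviewRequired: boolean;
      producedAt: string;           // ISO-8601 timestamp for the audit trail
    }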

Implementation

  1. Install CrewAI for TypeScript and define your lending schema

    Keep the application payload narrow. Do not feed raw PII into the model unless you have a documented reason and a retention policy.

    npm install @crew-ai/crew-ai zod
    
    import { z } from "zod";
    
    export const BorrowerSchema = z.object({
      applicantId: z.string(),
      annualIncome: z.number(),
      monthlyDebt: z.number(),
      requestedAmount: z.number(),
      collateralValue: z.number().optional(),
      creditScore: z.number(),
      state: z.string(),
      employmentYears: z.number(),
    });
    
    export type Borrower = z.infer<typeof BorrowerSchema>;
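
    At an API boundary you may prefer zod's safeParse, which returns a result object instead of throwing, so malformed applications can be rejected cleanly before any model call:

    import { BorrowerSchema, type Borrower } from "./schema.js";

    export function parseApplication(rawInput: unknown): Borrower {
      const parsed = BorrowerSchema.safeParse(rawInput);
      if (!parsed.success) {
        // Surface the validation issues to the caller; nothing reaches the model.
        throw new Error(`Invalid application: ${parsed.error.message}`);
      }
      return parsed.data;
    }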
    
  2. Create agents for policy analysis and risk summarization

    In lending, one agent should not be both the rule interpreter and the final decider. Split those responsibilities so you can audit each step.

    import { Agent } from "@crew-ai/crew-ai";
    
    export const policyAgent = new Agent({
      name: "Policy Analyst",
      role: "Interprets lending policy against borrower data",
      goal: "Identify whether the application meets written credit policy",
      backstory:
        "You are a credit policy analyst who applies documented underwriting rules only.",
      verbose: true,
    });
    
    export const riskAgent = new Agent({
      name: "Risk Summarizer",
      role: "Summarizes credit risk factors for underwriting review",
      goal: "Produce a concise risk memo with reasons and required follow-ups",
      backstory:
        "You write underwriting memos for regulated lending operations.",
      verbose: true,
    });
    
  3. Define tasks that force structured output

    Use tasks to separate policy evaluation from the final underwriting memo. The output should be structured enough for downstream systems to parse and store.

    import { Task } from "@crew-ai/crew-ai";
    import { policyAgent, riskAgent } from "./agents.js";
    
    export const assessPolicyTask = new Task({
      description:
        "Evaluate this borrower against lending policy. Return JSON with fields: decisionHint, reasons[], missingDocs[], policyFlags[]",
      expectedOutput:
        "Strict JSON only. No markdown. Include only facts derived from provided inputs.",
      agent: policyAgent,
    });
    
    export const summarizeRiskTask = new Task({
      description:
        "Create an underwriting memo based on the policy assessment and borrower profile. Return JSON with fields: recommendation, rationale[], manualReviewRequired boolean",
      expectedOutput:
        "Strict JSON only. No markdown.",
      agent: riskAgent,
      context: [assessPolicyTask],
    });
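
    The tasks promise strict JSON, so validate that promise in code rather than trusting it. Below is a sketch of zod schemas mirroring the fields named in the task descriptions, placed in a hypothetical outputSchemas.ts module; the recommendation values are an assumption based on the decision engine's three outcomes:

    import { z } from "zod";

    // Mirrors the fields requested in assessPolicyTask's description.
    export const PolicyAssessmentSchema = z.object({
      decisionHint: z.string(),
      reasons: z.array(z.string()),
      missingDocs: z.array(z.string()),
      policyFlags: z.array(z.string()),
    });

    // Mirrors the fields requested in summarizeRiskTask's description.
    // The enum values assume the three outcomes named in the architecture.
    export const RiskMemoSchema = z.object({
      recommendation: z.enum(["approve", "manual_review", "decline"]),
      rationale: z.array(z.string()),
      manualReviewRequired: z.boolean(),
    });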
    
  4. Run the crew and wrap it in a service function

    This is the pattern you want in production: validate input first, execute the crew second, persist outputs third.

    import { Crew } from "@crew-ai/crew-ai";
    import { policyAgent, riskAgent } from "./agents.js";
    import { BorrowerSchema } from "./schema.js";
    import { assessPolicyTask, summarizeRiskTask } from "./tasks.js";
    
    export async function underwriteApplication(rawInput: unknown) {
      const borrower = BorrowerSchema.parse(rawInput);
    
      const crew = new Crew({
        agents: [policyAgent, riskAgent],
        tasks: [assessPolicyTask, summarizeRiskTask],
        verbose: true,
      });
    
      const result = await crew.kickoff({
        borrower,
        underwritingPolicyVersion: "2026-01",
        jurisdiction: borrower.state,
        notes:
          "Do not use race, religion, sex, age beyond legal eligibility, or other protected attributes.",
      });
    
      return {
        applicantId: borrower.applicantId,
        result,
        policyVersion: "2026-01",
        reviewedAt: new Date().toISOString(),
      };
    }
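
    A minimal caller, assuming this file is saved as underwrite.ts and that the crew's final output is the risk memo JSON as a string; the exact shape of result depends on your CrewAI version, so adjust the parsing accordingly:

    import { underwriteApplication } from "./underwrite.js";
    import { RiskMemoSchema } from "./outputSchemas.js";

    const outcome = await underwriteApplication({
      applicantId: "app-1042",
      annualIncome: 88000,
      monthlyDebt: 1900,
      requestedAmount: 25000,
      creditScore: 712,
      state: "OH",
      employmentYears: 4,
    });

    // Validate the memo before it reaches any downstream system.
    const memo = RiskMemoSchema.parse(JSON.parse(String(outcome.result)));
    console.log(memo.recommendation, memo.manualReviewRequired);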
    

Production Considerations

  • Deployment

    • Run the underwriting service behind an API gateway with mTLS or private networking.
    • Keep model credentials in a secrets manager.
    • Pin your CrewAI package version so behavior does not drift during regulatory reviews.
  • Monitoring

    • Log every kickoff() with correlation IDs, model version, prompt version, policy version, and final decision (a sketch of the audit record follows these bullets).
    • Track approval rates by product line and geography to catch silent bias or rule drift.
    • Alert on spikes in manual-review outcomes, because that often means upstream data quality has degraded.
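
    A sketch of the audit record to persist per kickoff(); the field names are illustrative, but each one answers a question an examiner will ask:

    // Illustrative audit record; persist to an append-only store.
    interface UnderwritingAuditRecord {
      correlationId: string;      // ties logs across services for one application
      applicantId: string;
      modelVersion: string;       // exact model identifier used for this run
      promptVersion: string;      // version of the agent and task prompt text
      policyVersion: string;      // credit policy in force at decision time
      recommendation: string;     // approve | manual_review | decline
      manualReviewRequired: boolean;
      rawOutput: string;          // verbatim model output for replay
      decidedAt: string;          // ISO-8601 timestamp
    }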
  • Guardrails

    • Strip protected-class attributes before they reach the agent.
    • Add deterministic checks outside the LLM for hard rules like minimum age of majority or prohibited states (sketched after these bullets).
    • Force manual review when confidence is low or when required documents are missing.
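
    A sketch of deterministic pre-checks that run before any agent is invoked; the thresholds and prohibited-state list are placeholders for your actual policy:

    import type { Borrower } from "./schema.js";

    // Placeholder values; source these from your versioned policy store.
    const PROHIBITED_STATES = new Set(["XX"]);
    const MAX_DTI = 0.43;

    export function hardRuleViolations(b: Borrower): string[] {
      const violations: string[] = [];
      if (PROHIBITED_STATES.has(b.state)) {
        violations.push(`Lending not permitted in state ${b.state}`);
      }
      const monthlyIncome = b.annualIncome / 12;
      if (monthlyIncome <= 0 || b.monthlyDebt / monthlyIncome > MAX_DTI) {
        violations.push("Debt-to-income ratio exceeds policy maximum");
      }
      return violations; // non-empty => decline or route to manual review in code
    }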
  • Data residency

    • Keep borrower data in-region if your lending book requires it.
    • If you use an external model endpoint, confirm subprocessor terms and retention controls before sending any application data.

Common Pitfalls

  • Using the LLM as the final credit decision maker

    • Don’t let the model approve or decline without deterministic rule checks.
    • Use it to summarize evidence; keep final authority in code or human review.
  • Passing raw PII into every task

    • This increases privacy exposure without improving underwriting quality.
    • Normalize inputs and send only the fields needed for each task (one projection pattern is sketched below).
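
    One way to enforce this is a per-task projection that whitelists fields, so each task sees only what it needs; the field selection here is illustrative:

    import type { Borrower } from "./schema.js";

    // Illustrative projection: the policy task needs financials, not identity.
    export function policyTaskView(b: Borrower) {
      const { annualIncome, monthlyDebt, requestedAmount, creditScore, state } = b;
      return { annualIncome, monthlyDebt, requestedAmount, creditScore, state };
    }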
  • Skipping audit artifacts

    • If you cannot explain why an application was reviewed or declined, you will have problems in a regulatory examination.
    • Persist prompt text, output JSON, policy version, timestamps, and reviewer overrides.

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.

Want the complete 8-step roadmap?

Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.

