How to Integrate FastAPI for Lending with LangChain for Startups
Combining FastAPI for lending with LangChain gives you a clean way to expose lending workflows as APIs while adding an LLM layer for triage, document extraction, and decision support. For startups, this is useful when you need a borrower-facing service that can answer questions, pre-screen applications, summarize documents, and route cases to the right internal workflow.
Prerequisites
- Python 3.10+
- A FastAPI app already running for your lending service
- A LangChain-compatible LLM provider configured with an API key
- pip-installed packages:
  - fastapi
  - uvicorn
  - pydantic
  - langchain
  - langchain-openai (or another LangChain model integration)
  - httpx
- A lending domain model ready for:
  - applicant data
  - loan amount
  - income
  - credit score
  - document metadata
Integration Steps
1. Create the lending API contract in FastAPI

Start by defining the request and response models your agent will call. Keep the API strict; the LLM should not invent fields.

```python
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI(title="Startup Lending API")

class LoanApplication(BaseModel):
    applicant_name: str = Field(..., min_length=2)
    loan_amount: float = Field(..., gt=0)
    annual_income: float = Field(..., gt=0)
    credit_score: int = Field(..., ge=300, le=850)

class LoanDecision(BaseModel):
    approved: bool
    risk_band: str
    reason: str

@app.post("/lend/decision", response_model=LoanDecision)
async def decide_loan(application: LoanApplication):
    debt_to_income = application.loan_amount / application.annual_income
    if application.credit_score >= 720 and debt_to_income < 0.4:
        return LoanDecision(
            approved=True,
            risk_band="low",
            reason="Strong credit profile and acceptable exposure.",
        )
    return LoanDecision(
        approved=False,
        risk_band="medium",
        reason="Application needs manual review based on risk thresholds.",
    )
```
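The threshold logic inside the endpoint is easy to sanity-check as a pure function before any HTTP is involved. A minimal sketch, where `risk_decision` is an illustrative helper rather than part of the API above:

```python
def risk_decision(loan_amount: float, annual_income: float, credit_score: int) -> tuple[bool, str]:
    """Mirror of the /lend/decision rule, kept pure for quick unit checks."""
    debt_to_income = loan_amount / annual_income
    if credit_score >= 720 and debt_to_income < 0.4:
        return True, "low"
    return False, "medium"

# Jane's sample case: 25000 / 120000 is about 0.21, score 760
print(risk_decision(25000, 120000, 760))  # (True, 'low')
```

Keeping the rule in one pure function like this also makes it trivial to unit-test threshold changes without standing up the server.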
2. Build a LangChain prompt that prepares structured lending input

Use LangChain to normalize messy borrower text into structured JSON your FastAPI endpoint can consume. In production, this is where you turn chat intake into a validated application payload.

```python
import os

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Extract lending application data as JSON with fields: "
               "applicant_name, loan_amount, annual_income, credit_score."),
    ("user", "{message}"),
])

chain = prompt | llm
```
3. Call the FastAPI lending endpoint from a LangChain-powered agent flow

The agent can take free-form user input, extract fields with the LLM, then send a real HTTP request to your lending API using httpx.

```python
import json

import httpx

async def submit_lending_case(message: str):
    # Use the async invocation so the LLM call does not block the event loop.
    result = await chain.ainvoke({"message": message})
    payload = json.loads(result.content)
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        response = await client.post("/lend/decision", json=payload)
        response.raise_for_status()
        return response.json()
```
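Note that calling `json.loads` on raw model output is brittle: models sometimes wrap their answer in markdown fences even when asked for plain JSON. One defensive option is a small parser using only the standard library; `parse_llm_json` is our illustrative helper, not a LangChain API:

```python
import json
import re

def parse_llm_json(text: str) -> dict:
    """Strip optional ```json fences and parse the first JSON object found."""
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in model output")
    return json.loads(match.group(0))

fenced = '```json\n{"loan_amount": 25000, "credit_score": 760}\n```'
print(parse_llm_json(fenced))  # {'loan_amount': 25000, 'credit_score': 760}
```

Swapping this in for the bare `json.loads(result.content)` call makes the intake step tolerant of fenced or chatty model responses while still failing loudly when no JSON is present.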
4. Wrap the workflow in a simple agent-facing function

This gives your startup one callable service that handles intake plus decisioning.

```python
import asyncio

async def lending_agent(message: str):
    decision = await submit_lending_case(message)
    if decision["approved"]:
        return {
            "status": "approved",
            "message": f"Loan approved under {decision['risk_band']} risk.",
            "details": decision["reason"],
        }
    return {
        "status": "review",
        "message": f"Manual review required under {decision['risk_band']} risk.",
        "details": decision["reason"],
    }

if __name__ == "__main__":
    sample = (
        "Jane Doe wants a $25000 loan. "
        "She earns $120000 yearly and has a credit score of 760."
    )
    print(asyncio.run(lending_agent(sample)))
```
5. Add guardrails before sending anything to production

Do not let the model post directly to underwriting logic without validation. Validate every field before calling the lending endpoint.

```python
from pydantic import BaseModel, ValidationError

class ExtractedApplication(BaseModel):
    applicant_name: str
    loan_amount: float
    annual_income: float
    credit_score: int

def validate_payload(payload: dict) -> ExtractedApplication:
    try:
        return ExtractedApplication(**payload)
    except ValidationError as e:
        raise ValueError(f"Invalid extracted application: {e}") from e

# Example usage inside submit_lending_case:
# validated = validate_payload(payload)
# response = await client.post("/lend/decision", json=validated.model_dump())
```
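Schema validation catches missing fields and type errors, but it will not catch a confidently extracted yet absurd value. A complementary hard-bounds check is a cheap second layer; the `LIMITS` values below are placeholder assumptions to replace with your own underwriting policy:

```python
# Placeholder business-rule bounds; tune these to your underwriting policy.
LIMITS = {
    "loan_amount": (500.0, 5_000_000.0),
    "annual_income": (1_000.0, 100_000_000.0),
    "credit_score": (300, 850),
}

def within_limits(payload: dict) -> bool:
    """Reject payloads whose numeric fields fall outside hard bounds."""
    for field, (low, high) in LIMITS.items():
        value = payload.get(field)
        if not isinstance(value, (int, float)) or not (low <= value <= high):
            return False
    return True

print(within_limits({"loan_amount": 25000, "annual_income": 120000, "credit_score": 760}))  # True
```

Running this before the HTTP call means a hallucinated negative loan amount or impossible credit score is rejected at intake instead of reaching your decision endpoint.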
Testing the Integration
Run your FastAPI app first:
```shell
uvicorn main:app --reload --port 8000
```
Then test the full flow with a sample borrower message:
```python
import asyncio

async def test_flow():
    result = await lending_agent(
        "Alex Kim is applying for a $15000 loan. "
        "Annual income is $90000 and credit score is 735."
    )
    print(result)

asyncio.run(test_flow())
```
Expected output:
```
{
    'status': 'approved',
    'message': 'Loan approved under low risk.',
    'details': 'Strong credit profile and acceptable exposure.',
}
```
If you get a manual review response instead, the request still passed through both layers correctly; your rule thresholds simply routed the case to review rather than approval.
Real-World Use Cases
- •
Borrower intake assistant
- •Let users describe their situation in plain English.
- •Use LangChain to extract structured fields.
- •Send validated data to FastAPI for loan decisioning.
- •
Document triage for startup lenders
- •Summarize bank statements, payslips, or incorporation docs with LangChain.
- •Push extracted metadata into FastAPI endpoints for underwriting review.
- •
Internal underwriting copilot
- •Use LangChain to explain why an application was flagged.
- •Use FastAPI to fetch real loan status, risk band, and case history from your backend.
This pattern works because each layer does one job well. LangChain handles language-heavy interaction; FastAPI handles deterministic lending logic and API boundaries.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.