How to Integrate LangChain with PostgreSQL for Multi-Agent Fintech Systems
If you’re building multi-agent systems for fintech, you need two things: a reliable orchestration layer and durable shared state. LangChain gives you the agent and tooling side, while PostgreSQL gives you transaction-safe persistence for balances, audit logs, customer context, and inter-agent handoffs.
This combo is useful when one agent needs to retrieve account data, another validates risk or compliance, and a third writes the final decision back to storage. Without PostgreSQL, your agents end up passing state around in memory and falling apart the moment you scale past one process.
Prerequisites
- Python 3.10+
- PostgreSQL 14+ running locally or in your cloud environment
- A database user with `CREATE TABLE`, `SELECT`, `INSERT`, and `UPDATE` permissions
- LangChain installed, with your fintech-specific package or integrations enabled
- `psycopg` or `psycopg2-binary` installed for PostgreSQL access
- An LLM provider key configured in environment variables
- A `.env` file or secrets manager for:
  - `DATABASE_URL`
  - `OPENAI_API_KEY` or equivalent model key
Install the core packages:
```bash
pip install langchain langchain-openai "psycopg[binary]" sqlalchemy python-dotenv
```
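Before wiring anything up, it helps to fail fast on missing configuration. A minimal sketch (the `load_settings` helper and `REQUIRED_VARS` tuple are my own names, not a LangChain API) that surfaces a missing variable at startup rather than at the first query:

```python
import os

# Assumed set of required settings; extend to match your secrets manager.
REQUIRED_VARS = ("DATABASE_URL", "OPENAI_API_KEY")


def load_settings() -> dict:
    """Raise at process start if configuration is missing, not at the first DB call."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_VARS}
```

If you use `python-dotenv`, call `load_dotenv()` before `load_settings()` so values from your `.env` file are visible in the environment.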
Integration Steps
Step 1: Create a PostgreSQL schema for agent state
For multi-agent systems, don’t store everything in one JSON blob. Split operational state into tables that match your workflow: sessions, messages, decisions, and audit events.
```python
from sqlalchemy import create_engine, text

DATABASE_URL = "postgresql+psycopg://agent_user:agent_pass@localhost:5432/fintech_agents"
engine = create_engine(DATABASE_URL)

schema_sql = """
CREATE TABLE IF NOT EXISTS agent_sessions (
    session_id TEXT PRIMARY KEY,
    customer_id TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'active',
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS agent_messages (
    id BIGSERIAL PRIMARY KEY,
    session_id TEXT NOT NULL REFERENCES agent_sessions(session_id),
    agent_name TEXT NOT NULL,
    role TEXT NOT NULL,
    content TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS agent_decisions (
    id BIGSERIAL PRIMARY KEY,
    session_id TEXT NOT NULL REFERENCES agent_sessions(session_id),
    decision_type TEXT NOT NULL,
    payload JSONB NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
"""

with engine.begin() as conn:
    conn.execute(text(schema_sql))
```
Step 2: Set up LangChain tools that read and write PostgreSQL
LangChain agents work best when database access is wrapped as tools. Use `StructuredTool` so the agent gets typed inputs instead of free-form SQL strings.
```python
import json

from sqlalchemy import text
from langchain_core.tools import StructuredTool
from pydantic import BaseModel, Field


class SaveMessageInput(BaseModel):
    session_id: str = Field(..., description="Agent session ID")
    agent_name: str = Field(..., description="Name of the agent")
    role: str = Field(..., description="system|user|assistant|tool")
    content: str = Field(..., description="Message content")


def save_message(session_id: str, agent_name: str, role: str, content: str) -> str:
    with engine.begin() as conn:
        conn.execute(
            text("""
                INSERT INTO agent_messages (session_id, agent_name, role, content)
                VALUES (:session_id, :agent_name, :role, :content)
            """),
            {
                "session_id": session_id,
                "agent_name": agent_name,
                "role": role,
                "content": content,
            },
        )
    return "saved"


save_message_tool = StructuredTool.from_function(
    func=save_message,
    name="save_message",
    description="Persist an agent message to PostgreSQL",
    args_schema=SaveMessageInput,
)
```
Step 3: Build a LangChain fintech agent with PostgreSQL-backed memory
For production systems, keep the prompt small and move durable context into Postgres. The agent should fetch only what it needs for the current decision.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain.agents import create_tool_calling_agent, AgentExecutor

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a fintech operations agent. Persist every important decision."),
    ("human", "{input}"),
    # create_tool_calling_agent requires an agent_scratchpad placeholder in the prompt
    ("placeholder", "{agent_scratchpad}"),
])

tools = [save_message_tool]
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
Step 4: Add a second agent and coordinate through PostgreSQL
This is where multi-agent systems become useful. One agent can classify a request; another can approve or escalate based on stored context.
```python
class SaveDecisionInput(BaseModel):
    session_id: str
    decision_type: str
    payload: dict


def save_decision(session_id: str, decision_type: str, payload: dict) -> str:
    with engine.begin() as conn:
        conn.execute(
            text("""
                INSERT INTO agent_decisions (session_id, decision_type, payload)
                VALUES (:session_id, :decision_type, :payload::jsonb)
            """),
            {
                "session_id": session_id,
                "decision_type": decision_type,
                "payload": json.dumps(payload),
            },
        )
    return "decision_saved"


save_decision_tool = StructuredTool.from_function(
    func=save_decision,
    name="save_decision",
    description="Persist an underwriting or compliance decision",
    args_schema=SaveDecisionInput,
)
```
```python
risk_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a risk review agent for fintech workflows."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

risk_agent = create_tool_calling_agent(llm=llm, tools=[save_decision_tool], prompt=risk_prompt)
risk_executor = AgentExecutor(agent=risk_agent, tools=[save_decision_tool], verbose=True)
```
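One detail worth calling out in `save_decision`: the `payload` dict is serialized with `json.dumps` before binding, and the driver decodes the `jsonb` column back into Python values on read, so a JSON `false` comes back as Python `False`. A quick standalone round-trip check:

```python
import json

payload = {"approved": False, "reason": "needs manual review"}

# What save_decision binds to the :payload parameter.
encoded = json.dumps(payload)
assert encoded == '{"approved": false, "reason": "needs manual review"}'

# What you get back once the jsonb column is read and decoded.
decoded = json.loads(encoded)
assert decoded == payload
```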
Step 5: Run an orchestrated workflow
Use one coordinator process to invoke each specialist agent and write shared state into Postgres after each step.
```python
session_id = "sess_1001"

with engine.begin() as conn:
    conn.execute(
        text("""
            INSERT INTO agent_sessions (session_id, customer_id)
            VALUES (:session_id, :customer_id)
            ON CONFLICT (session_id) DO NOTHING
        """),
        {"session_id": session_id, "customer_id": "cust_7788"},
    )

result_1 = executor.invoke({
    "input": f"Record this customer note for session {session_id}: requested limit increase from $5k to $15k."
})

result_2 = risk_executor.invoke({
    "input": f"Review this request for session {session_id}. Approve only if low risk."
})

print(result_1["output"])
print(result_2["output"])
```
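To see the coordination pattern itself without live LLM calls, the same handoff can be sketched with plain functions standing in for the agents and stdlib `sqlite3` standing in for Postgres. Everything here is illustrative: the stand-in "agents" are trivial rules, and the point is that the second agent reads durable state rather than in-process variables.

```python
import sqlite3


def intake_agent(request: str) -> str:
    # Stand-in for the LangChain operations agent.
    return f"note: {request}"


def risk_agent(note: str) -> str:
    # Stand-in for the risk review agent: escalate anything mentioning large limits.
    return "escalate" if "$15k" in note else "approve"


def run_workflow(conn, session_id: str, request: str) -> str:
    note = intake_agent(request)
    conn.execute(
        "INSERT INTO agent_messages (session_id, agent_name, role, content) VALUES (?, ?, ?, ?)",
        (session_id, "ops", "assistant", note),
    )
    # The handoff: the second agent loads the persisted note from the database.
    (stored_note,) = conn.execute(
        "SELECT content FROM agent_messages WHERE session_id = ? ORDER BY id DESC LIMIT 1",
        (session_id,),
    ).fetchone()
    decision = risk_agent(stored_note)
    conn.execute(
        "INSERT INTO agent_decisions (session_id, decision_type, payload) VALUES (?, ?, ?)",
        (session_id, "risk_review", decision),
    )
    return decision


# In-memory stand-in schema so the sketch runs without Postgres.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE agent_messages (id INTEGER PRIMARY KEY, session_id TEXT, "
    "agent_name TEXT, role TEXT, content TEXT)"
)
conn.execute(
    "CREATE TABLE agent_decisions (id INTEGER PRIMARY KEY, session_id TEXT, "
    "decision_type TEXT, payload TEXT)"
)
print(run_workflow(conn, "sess_1001", "limit increase from $5k to $15k"))  # escalate
```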
Testing the Integration
Run a direct database check after invoking both agents. You want to verify that messages and decisions landed in the expected tables.
```python
from sqlalchemy import text

with engine.connect() as conn:
    messages = conn.execute(
        text("SELECT agent_name, role, content FROM agent_messages WHERE session_id=:sid ORDER BY id DESC LIMIT 5"),
        {"sid": "sess_1001"},
    ).fetchall()
    decisions = conn.execute(
        text("SELECT decision_type, payload FROM agent_decisions WHERE session_id=:sid ORDER BY id DESC LIMIT 5"),
        {"sid": "sess_1001"},
    ).fetchall()

print(messages)
print(decisions)
```
Expected output (abridged; exact rows depend on what the agents persisted):
```text
[('assistant', 'tool', 'saved'), ...]
[('underwriting', {'approved': False, 'reason': 'needs manual review'})]
```
If those rows are present, your LangChain agents are writing durable state into PostgreSQL correctly.
Real-World Use Cases
- Loan origination workflows
  - One agent gathers applicant data.
  - Another checks policy rules.
  - A third stores approval or escalation decisions in PostgreSQL for auditability.
- Fraud triage pipelines
  - An intake agent scores suspicious activity.
  - A review agent pulls historical context from Postgres.
  - Final outcomes are persisted for compliance reporting.
- Customer servicing copilots
  - A support agent reads account history.
  - A policy agent validates whether an action is allowed.
  - Every interaction is written to Postgres so human reviewers can reconstruct the full chain of reasoning.
The pattern here is simple: let LangChain handle reasoning and tool use; let PostgreSQL handle truth and history. In fintech systems with multiple agents acting on the same case file, this separation keeps your architecture auditable and survivable under load.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.