How to Integrate LangChain for payments with PostgreSQL for production AI

By Cyprian Aarons · Updated 2026-04-21

Combining LangChain for payments with PostgreSQL gives you a clean pattern for agentic commerce: the model can decide when to charge, refund, or look up billing state, while PostgreSQL keeps the source of truth for customers, invoices, and transaction history. In production AI systems, that split matters because the LLM should orchestrate payment actions, not own them.

This setup is useful when you need an agent that can answer billing questions, trigger payment flows, and persist every action for auditability. You get structured tool calls from LangChain and durable relational storage from PostgreSQL.

Prerequisites

  • Python 3.10+
  • A PostgreSQL instance running locally or in your cloud environment
  • A valid LangChain-compatible payments provider account and API key
  • OPENAI_API_KEY or another model provider key if your agent uses an LLM
  • psycopg2-binary or psycopg installed
  • langchain, langchain-openai, and the payments integration package you use in your stack
  • A PostgreSQL database with a dedicated schema for agent transactions

Install the core packages:

pip install langchain langchain-openai psycopg2-binary sqlalchemy

Integration Steps

  1. Set up PostgreSQL tables for payment state and audit logs.

You want a normalized schema that can track intent, execution, and final status. Keep the payment provider response separate from your business record.

import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="payments_db",
    user="postgres",
    password="postgres"
)

with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS payment_requests (
            id SERIAL PRIMARY KEY,
            customer_id TEXT NOT NULL,
            amount_cents INTEGER NOT NULL,
            currency TEXT NOT NULL DEFAULT 'USD',
            status TEXT NOT NULL DEFAULT 'pending',
            provider_payment_id TEXT,
            created_at TIMESTAMP DEFAULT NOW(),
            updated_at TIMESTAMP DEFAULT NOW()
        )
    """)
    cur.execute("""
        CREATE TABLE IF NOT EXISTS payment_audit_log (
            id SERIAL PRIMARY KEY,
            request_id INTEGER REFERENCES payment_requests(id),
            event_type TEXT NOT NULL,
            payload JSONB NOT NULL,
            created_at TIMESTAMP DEFAULT NOW()
        )
    """)
    conn.commit()
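The reconciliation queries later in this guide look up by customer_id and sort by created_at, so an index pays off once the table grows; a CHECK constraint also keeps agent-written statuses to a known set. A sketch (the status list is an assumption — adjust it to your provider's actual states):

```python
# Hypothetical hardening for the schema above.
INDEX_SQL = """
    CREATE INDEX IF NOT EXISTS idx_payment_requests_customer_created
    ON payment_requests (customer_id, created_at DESC)
"""

STATUS_CHECK_SQL = """
    ALTER TABLE payment_requests
    ADD CONSTRAINT payment_status_check
    CHECK (status IN ('pending', 'requires_confirmation', 'confirmed', 'failed', 'refunded'))
"""

def harden_schema(conn):
    # Apply both statements and commit; note ADD CONSTRAINT errors if the
    # constraint already exists, so guard this in a real migration tool.
    with conn.cursor() as cur:
        cur.execute(INDEX_SQL)
        cur.execute(STATUS_CHECK_SQL)
    conn.commit()
```

If you create the tables fresh, consider TIMESTAMPTZ over TIMESTAMP so created_at is unambiguous across regions.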
  2. Create a LangChain tool that wraps your payments API call.

In production, the LLM should call a tool that performs one job: create or confirm a payment. The exact SDK depends on your provider, but the integration pattern is the same: expose a deterministic Python function as a LangChain tool.

from langchain_core.tools import tool

@tool
def create_payment(customer_id: str, amount_cents: int, currency: str = "USD") -> dict:
    """Create a payment for a customer. Amounts are in cents."""
    # The docstring above is required: @tool uses it as the tool
    # description the model sees when deciding whether to call it.
    # Replace this with your actual payments SDK call.
    # Example shape only:
    # result = payments_client.payment_intents.create(...)
    provider_payment_id = f"pi_{customer_id}_{amount_cents}"

    return {
        "provider_payment_id": provider_payment_id,
        "status": "requires_confirmation",
        "customer_id": customer_id,
        "amount_cents": amount_cents,
        "currency": currency,
    }

If your payments SDK supports explicit methods like payment_intents.create, charges.create, or refunds.create, call those inside this tool. Keep side effects inside tools, not inside prompt logic.
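One way to make that side effect safe to retry is a deterministic idempotency key tied to your payment_requests row. The sketch below is a plain function you could decorate with @tool exactly like create_payment above; the provider call and its idempotency_key parameter are assumptions — check your SDK's documentation for the exact mechanism:

```python
def create_payment_idempotent(customer_id: str, amount_cents: int,
                              request_id: int, currency: str = "USD") -> dict:
    # Deterministic key: retries of the same payment_requests row produce
    # the same key, so a provider that honors idempotency keys returns the
    # original charge instead of creating a duplicate.
    idempotency_key = f"req-{request_id}-{customer_id}-{amount_cents}"

    # Hypothetical provider call; parameter names vary by SDK:
    # result = payments_client.payment_intents.create(
    #     amount=amount_cents, currency=currency,
    #     idempotency_key=idempotency_key,
    # )
    return {
        "provider_payment_id": f"pi_{customer_id}_{amount_cents}",
        "idempotency_key": idempotency_key,
        "status": "requires_confirmation",
        "currency": currency,
    }
```

The request_id comes from the payment_requests row you insert before calling the provider, which is also what ties the provider's response back to your audit trail.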

  3. Wire the tool into a LangChain agent and persist every action to PostgreSQL.

This is where orchestration happens. The agent decides whether to call the payment tool; your app stores the result in Postgres so you can reconcile later.

import json
import psycopg2
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [create_payment]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a billing assistant. Only call tools when payment action is required."),
    ("human", "{input}"),
    # Required by create_tool_calling_agent; holds intermediate tool calls.
    ("placeholder", "{agent_scratchpad}")
])

agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    # Expose tool observations so we can persist structured results below.
    return_intermediate_steps=True
)

response = executor.invoke({
    "input": "Charge customer cust_123 for $49.99 USD"
})

# "output" is a plain string summary from the model; the structured payment
# data lives in the tool observations, so take the last one for persistence.
steps = response.get("intermediate_steps", [])
tool_result = steps[-1][1] if steps else {}

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="payments_db",
    user="postgres",
    password="postgres"
)

with conn.cursor() as cur:
    cur.execute(
        """
        INSERT INTO payment_requests (customer_id, amount_cents, currency, status, provider_payment_id)
        VALUES (%s, %s, %s, %s, %s)
        RETURNING id
        """,
        (
            tool_result.get("customer_id", "cust_123"),
            tool_result.get("amount_cents", 4999),
            tool_result.get("currency", "USD"),
            tool_result.get("status", "pending"),
            tool_result.get("provider_payment_id"),
        )
    )
    request_id = cur.fetchone()[0]

    cur.execute(
        """
        INSERT INTO payment_audit_log (request_id, event_type, payload)
        VALUES (%s, %s, %s::jsonb)
        """,
        # intermediate_steps contain AgentAction objects that json.dumps
        # cannot serialize, so log only the serializable pieces.
        (request_id, "agent_response", json.dumps({
            "output": response["output"],
            "tool_result": tool_result,
        }))
    )
    conn.commit()
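Note that in the example request the model itself turns "$49.99" into amount_cents. If you'd rather not trust an LLM with money arithmetic, parse amounts deterministically in your application layer before they reach the tool. A small sketch:

```python
from decimal import Decimal

def dollars_to_cents(amount: str) -> int:
    # "49.99", "$49.99", or "1,000.50" -> integer cents, using Decimal
    # to avoid binary float rounding on currency values.
    cleaned = amount.strip().lstrip("$").replace(",", "")
    return int(Decimal(cleaned) * 100)

dollars_to_cents("$49.99")  # -> 4999
```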
  4. Add a reconciliation path that reads PostgreSQL before taking action.

Before creating duplicate charges or refunds, query Postgres first. This prevents double-spend behavior when an agent retries after timeout or partial failure.

def get_latest_payment_status(customer_id: str):
    conn = psycopg2.connect(
        host="localhost",
        port=5432,
        dbname="payments_db",
        user="postgres",
        password="postgres"
    )
    with conn.cursor() as cur:
        cur.execute("""
            SELECT id, status, provider_payment_id
            FROM payment_requests
            WHERE customer_id = %s
            ORDER BY created_at DESC
            LIMIT 1
        """, (customer_id,))
        row = cur.fetchone()

    return row

latest = get_latest_payment_status("cust_123")
print(latest)
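You can then gate the payment tool on that lookup. A minimal guard, assuming the (id, status, provider_payment_id) row shape returned above; the in-flight status names are assumptions to match against your own state machine:

```python
def should_create_payment(latest_row) -> bool:
    # latest_row is the (id, status, provider_payment_id) tuple from
    # get_latest_payment_status, or None if the customer has no history.
    if latest_row is None:
        return True
    _, status, _ = latest_row
    # Skip creation while a prior request is in flight or already settled.
    return status not in ("pending", "requires_confirmation", "confirmed")
```

Call this before invoking the agent (or inside the tool itself) so a retry after a timeout reads the durable record instead of charging twice.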
  5. Store tool results as structured events for observability and replay.

Don’t just store “success” or “failed”. Save enough context to reconstruct why the agent made a decision and what the provider returned.

def log_event(request_id: int, event_type: str, payload: dict):
    conn = psycopg2.connect(
        host="localhost",
        port=5432,
        dbname="payments_db",
        user="postgres",
        password="postgres"
    )
    with conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO payment_audit_log (request_id, event_type, payload)
            VALUES (%s, %s, %s::jsonb)
            """,
            (request_id, event_type, json.dumps(payload))
        )
        conn.commit()

log_event(1, "payment_confirmed", {
    "provider_payment_id": "pi_cust_123_4999",
    "status": "confirmed"
})
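To close the loop, you can read the trail back in order when investigating an incident. A small reader sketch against the same payment_audit_log table (the function name is mine):

```python
AUDIT_TRAIL_SQL = """
    SELECT event_type, payload, created_at
    FROM payment_audit_log
    WHERE request_id = %s
    ORDER BY created_at ASC
"""

def get_audit_trail(conn, request_id: int) -> list:
    # Returns every logged event for a request, oldest first, so you can
    # replay the agent's decision path during a review.
    with conn.cursor() as cur:
        cur.execute(AUDIT_TRAIL_SQL, (request_id,))
        return cur.fetchall()
```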

Testing the Integration

Run a simple end-to-end check by invoking the agent and then reading back from Postgres.

result = executor.invoke({
    "input": "Create a payment for customer cust_456 for $12.00 USD"
})

print("Agent output:", result["output"])

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="payments_db",
    user="postgres",
    password="postgres"
)

with conn.cursor() as cur:
    cur.execute("""
        SELECT customer_id, amount_cents, currency, status
        FROM payment_requests
        WHERE customer_id = %s
        ORDER BY created_at DESC
        LIMIT 1
    """, ("cust_456",))
    row = cur.fetchone()

print("DB row:", row)

Expected output (the agent's exact wording will vary, since "output" is a model-generated string; the DB row appears only once your application has persisted the request as in step 3):

Agent output: Created a $12.00 USD payment for cust_456 with status requires_confirmation.
DB row: ('cust_456', 1200, 'USD', 'requires_confirmation')

Real-World Use Cases

  • Billing assistants that create charges after verifying account balance and subscription state in PostgreSQL.
  • Refund workflows where the agent checks invoice history before calling the payments API.
  • Dispute resolution agents that pull transaction logs from PostgreSQL and generate structured summaries for support teams.

The production pattern here is simple: LangChain handles decisioning and tool routing; PostgreSQL handles durability and audit trails. Keep those responsibilities separate and your AI billing system stays debuggable under load.



By Cyprian Aarons, AI Consultant at Topiax.
