Platform

Data Modeling

Design data models together in the cloud and share them with your team in different formats, based on your subscription plan, without any coding or conversion.

SqlDBM AI Copilot

Use natural language for data modeling tasks

Model Governance

Create and manage business metadata using a dedicated project role

Snowflake Schema Monitoring

Track and get notified of schema changes in live database environments

Strategic advisors

Kent Graziano

The Data Warrior, Strategic Advisor, Data Vault Master, Author, Speaker, and Tae Kwon Do Grandmaster

Gordon Wong

Leading organizations through analytics transformations, preference for social missions, healthcare, energy, education, and civic engagement

For cloud data platforms

SqlDBM offers secure native connectors to leading data platforms like Snowflake, Databricks, and BigQuery so you can reverse engineer and begin modeling in seconds.

Try modeling now

Automating DDL refreshes from any database with the SqlDBM API

The problem

Keeping a data model in sync with what’s actually running in production is easy when your modeling tool can plug straight into the database. It gets ugly fast when the database is on-prem, sits behind a VPN, lives in a customer environment you can only reach through a jumphost, or simply isn’t internet-reachable for compliance reasons. The workflow defaults to something like:

  1. SSH in, run a manual export
  2. Copy the file out somehow
  3. Drag-and-drop it into the modeling tool
  4. Repeat every time anyone touches the schema

That cycle decays the moment people get busy. The model goes stale, conventions drift, and downstream consumers stop trusting it.

The pattern: extract locally, push remotely

The two halves of the problem, extraction and delivery, don’t need a continuous connection — they don’t even need to run on the same machine. Three pieces make the pattern work:

  1. An extraction script that produces real DDL from inside the network (or from a backup).
  2. The SqlDBM REST API to receive the DDL, version it, and land it into a project.
  3. A scheduler (cron, Task Scheduler, CI job, a .bat on a workstation) to run this on a cadence.
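In Python, the glue between those pieces can be sketched in a few lines; `extract` and `push` here are stand-ins for whatever extraction script and API client you end up with:

```python
def run_sync(extract, push):
    """One sync cycle: extract() returns a DDL string from inside the
    network; push() delivers it to the modeling tool's API. A scheduler
    (cron, Task Scheduler, a CI job) calls this on whatever cadence fits."""
    ddl = extract()
    if ddl.strip():  # never push an empty export
        push(ddl)
```

In practice the two callables can even be separated by a file drop when extraction and internet access live on different machines.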

Claude can write the extraction script for whatever database you point it at — it knows the system catalogs and DDL-emission functions for Postgres, MySQL, SQL Server, Snowflake, BigQuery, Oracle, and the rest. You describe the schema you want extracted; it produces a parameterized SQL query that walks the catalogs and assembles CREATE TABLE / CREATE INDEX / ALTER TABLE statements matching the database’s own canonical form.

Postgres DDL extraction

Here’s the heart of a Postgres extractor. It takes a target schema name as a bind parameter and returns one DDL statement per row, ordered so that imports never fail on forward references. It uses pg_get_constraintdef, pg_get_indexdef, and pg_get_viewdef so the output is the database’s own canonical form rather than a hand-rolled approximation.

 

WITH params AS (
    SELECT $1::text AS schema_name
),

-- CREATE SCHEMA
schema_ddl AS (
    SELECT 0 AS sort_order, nspname AS obj_schema, '' AS obj_name,
           'CREATE SCHEMA IF NOT EXISTS ' || quote_ident(nspname) || ';' AS ddl
    FROM pg_namespace
    WHERE nspname = (SELECT schema_name FROM params)
),

-- ENUM types
enum_ddl AS (
    SELECT 1, n.nspname, t.typname,
           'CREATE TYPE ' || quote_ident(n.nspname) || '.' || quote_ident(t.typname)
           || ' AS ENUM ('
           || string_agg(quote_literal(e.enumlabel), ', ' ORDER BY e.enumsortorder)
           || ');'
    FROM pg_type t
    JOIN pg_enum e ON e.enumtypid = t.oid
    JOIN pg_namespace n ON n.oid = t.typnamespace
    WHERE n.nspname = (SELECT schema_name FROM params)
    GROUP BY n.nspname, t.typname
),

-- CREATE TABLE with columns, types, defaults, NOT NULL.
-- Constraints come separately as ALTER TABLEs so forward references are fine.
tables_ddl AS (
    SELECT 2, n.nspname, c.relname,
           'CREATE TABLE ' || quote_ident(n.nspname) || '.' || quote_ident(c.relname) || ' (' || E'\n    '
           || string_agg(
                  quote_ident(a.attname) || ' ' || format_type(a.atttypid, a.atttypmod)
                  || CASE WHEN a.attnotnull THEN ' NOT NULL' ELSE '' END
                  || COALESCE(' DEFAULT ' || pg_get_expr(ad.adbin, ad.adrelid), ''),
                  ',' || E'\n    ' ORDER BY a.attnum
              )
           || E'\n);'
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    JOIN pg_attribute a ON a.attrelid = c.oid AND a.attnum > 0 AND NOT a.attisdropped
    LEFT JOIN pg_attrdef ad ON ad.adrelid = a.attrelid AND ad.adnum = a.attnum
    WHERE c.relkind = 'r' AND n.nspname = (SELECT schema_name FROM params)
    GROUP BY n.nspname, c.relname
),

-- ALTER TABLE ... ADD CONSTRAINT (PK, FK, UNIQUE, CHECK)
constraints_ddl AS (
    SELECT 3, n.nspname, con.conname,
           'ALTER TABLE ' || quote_ident(n.nspname) || '.' || quote_ident(cl.relname)
           || ' ADD CONSTRAINT ' || quote_ident(con.conname)
           || ' ' || pg_get_constraintdef(con.oid) || ';'
    FROM pg_constraint con
    JOIN pg_class cl ON cl.oid = con.conrelid
    JOIN pg_namespace n ON n.oid = cl.relnamespace
    WHERE n.nspname = (SELECT schema_name FROM params)
),

-- CREATE INDEX; skip the ones that back PK/UNIQUE constraints
indexes_ddl AS (
    SELECT 4, n.nspname, ic.relname,
           pg_get_indexdef(i.indexrelid) || ';'
    FROM pg_index i
    JOIN pg_class ic ON ic.oid = i.indexrelid
    JOIN pg_class tc ON tc.oid = i.indrelid
    JOIN pg_namespace n ON n.oid = tc.relnamespace
    WHERE n.nspname = (SELECT schema_name FROM params)
      AND NOT i.indisprimary
      AND NOT EXISTS (SELECT 1 FROM pg_constraint con WHERE con.conindid = i.indexrelid)
),

-- CREATE OR REPLACE VIEW
views_ddl AS (
    SELECT 5, n.nspname, c.relname,
           'CREATE OR REPLACE VIEW ' || quote_ident(n.nspname) || '.' || quote_ident(c.relname)
           || ' AS ' || pg_get_viewdef(c.oid, true)
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'v' AND n.nspname = (SELECT schema_name FROM params)
)

SELECT ddl FROM (
    SELECT * FROM schema_ddl
    UNION ALL SELECT * FROM enum_ddl
    UNION ALL SELECT * FROM tables_ddl
    UNION ALL SELECT * FROM constraints_ddl
    UNION ALL SELECT * FROM indexes_ddl
    UNION ALL SELECT * FROM views_ddl
) all_ddl
ORDER BY sort_order, obj_schema, obj_name;

A few things worth calling out:

  • Schema is bound as a parameter, not string-interpolated, so the script is safe to point at any schema name, including untrusted input.
  • Constraints and indexes emit as separate ALTER TABLE / CREATE INDEX statements ordered after the table creation block, so cross-table foreign keys don’t fail on import.
  • The script glue (Python, in this case) just opens the connection, runs the SQL with the schema name bound, joins the rows with newlines, and you have a single DDL string ready to send.
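That glue can be sketched like this, assuming the extraction query above is saved as extract_ddl.sql and the psycopg2 driver is installed (both the filename and the driver choice are assumptions; any DB-API driver works the same way):

```python
def rows_to_ddl(rows):
    """Join one-statement-per-row query results into a single DDL script."""
    return "\n\n".join(row[0] for row in rows if row[0])

def extract_ddl(dsn, schema_name, sql_path="extract_ddl.sql"):
    """Run the catalog-walking query with the schema name bound."""
    import psycopg2  # assumed driver; not part of the stdlib
    with open(sql_path) as f:
        # psycopg2 binds with %s placeholders, so adapt the $1 syntax
        sql = f.read().replace("$1::text", "%s::text")
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(sql, (schema_name,))
            return rows_to_ddl(cur.fetchall())
```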

Pushing into SqlDBM

Three endpoints cover the whole lifecycle:

  • List projects (look up an existing project’s ID): GET /projects
  • Create a new project from DDL: POST /projects
  • Push a new revision to an existing project: POST /projects/{projectId}/revisions/last

Auth is a Bearer token. The body shape is essentially the same on both POSTs:

{
  "dbType": "postgreSQL",
  "sourceFormat": "ddl",
  "projectName": "Production schema",
  "payload": "-- your assembled DDL string here --",
  "addToDiagram": [{"diagramName": "Main"}]
}

A successful response is 202 Accepted with an empty body. To grab the new project’s ID after a create, re-query GET /projects and find it by name.
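The name lookup takes a few lines of stdlib Python. The base URL and the field names in the response (name, id) are assumptions here, so check them against your account’s API reference:

```python
import json
import urllib.request

API_BASE = "https://api.sqldbm.com"  # assumed base URL

def list_projects(token):
    """GET /projects with Bearer auth; returns the parsed JSON listing."""
    req = urllib.request.Request(
        f"{API_BASE}/projects",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def find_project_id(projects, name):
    """Pick a project's id out of the listing by name (field names assumed)."""
    for project in projects:
        if project.get("name") == name:
            return project.get("id")
    return None
```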

The pattern for a recurring sync is straightforward: create the project once via POST /projects, then keep refreshing it with POST /projects/{projectId}/revisions/last. Each refresh becomes a new revision in SqlDBM’s history, so you get free version diffing across runs.
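The refresh call itself is small. This sketch uses only the stdlib and assumes the base URL, so treat that as a placeholder for whatever your API reference gives you:

```python
import json
import urllib.request

def build_revision_body(ddl, db_type="postgreSQL"):
    """Body for POST /projects/{projectId}/revisions/last (same shape as create)."""
    return {"dbType": db_type, "sourceFormat": "ddl", "payload": ddl}

def push_revision(token, project_id, ddl):
    """POST a new revision; a 202 with an empty body means success."""
    url = f"https://api.sqldbm.com/projects/{project_id}/revisions/last"  # assumed base URL
    req = urllib.request.Request(
        url,
        data=json.dumps(build_revision_body(ddl)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # expect 202 Accepted
```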

 

Strict mode, the nuclear option

The revisions endpoint accepts an optional boolean:

strictMode (boolean) — If true, ignore project lock and create a new revision on top of the last saved one. Default: false.

That’s exactly what you want when an automated job runs at 02:00 and you don’t want a human’s open browser tab to block the nightly sync. It’s also exactly what you don’t want when two engineers are mid-edit on the same project — strictMode will happily overwrite the in-flight work. Rule of thumb: enable it for unattended pipelines on projects that nobody edits by hand; leave it off for anything where a human might be working in parallel.
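For an unattended pipeline, the flag rides along with the refresh request. Whether it goes in the body, as sketched here, or somewhere else is an assumption to confirm against the API reference:

```python
def build_unattended_body(ddl, db_type="postgreSQL"):
    """Revision body for a scheduled job on a project nobody edits by hand.
    strictMode is assumed to be a top-level body field here."""
    return {
        "dbType": db_type,
        "sourceFormat": "ddl",
        "payload": ddl,
        "strictMode": True,  # bypasses the project lock; see the caveat above
    }
```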

Putting it together

The whole loop fits in one Python file plus a one-liner that re-runs it on whatever cadence makes sense. On Windows it’s a .bat driven by Task Scheduler:

@echo off
python "%~dp0extract_and_post_ddl.py" ^
    --schema <your_schema> ^
    --project-id <your_project_id> ^
    --revision-name "Nightly DDL refresh"

On Linux/macOS it’s a one-line cron entry pointing at the same Python script.

That’s the whole pattern. No inbound connectivity to the database, no humans in the loop, every change captured as a versioned revision in SqlDBM. Once it’s wired up you stop thinking about it — which is the whole point.
