Samples

The samples/ directory in the repository contains 16 self-contained example projects. Each sample demonstrates a specific feature or database workflow with a working configuration, seed data, and a Makefile with ready-to-run targets.

All Samples

| #  | Name                     | Description                                                   | Databases  | Docker |
|----|--------------------------|---------------------------------------------------------------|------------|--------|
| 01 | basic-diff               | Basic schema drift detection and data diff with MySQL         | MySQL      | Yes    |
| 02 | postgres-diff            | PostgreSQL diff with advanced migration scenarios             | PostgreSQL | Yes    |
| 03 | sqlite-diff              | SQLite diff with full schema migration workflow               | SQLite     | No     |
| 04 | drop-column-safety       | DROP COLUMN with safety controls and allow flags              | SQLite     | No     |
| 05 | modify-column            | MODIFY COLUMN: type changes, nullability, defaults            | SQLite     | No     |
| 06 | index-support            | Index operations: create, drop, unique indexes                | SQLite     | No     |
| 07 | table-operations         | CREATE TABLE and DROP TABLE detection                         | SQLite     | No     |
| 08 | foreign-key-support      | Foreign key add, drop, and constraint handling                | SQLite     | No     |
| 09 | dependency-ordering      | Correct migration ordering for FK dependencies                | SQLite     | No     |
| 10 | conflict-detection       | Conflict detection: same PK, different values                 | SQLite     | No     |
| 11 | resolution-engine        | Conflict resolution engine with ours/theirs strategies        | SQLite     | No     |
| 12 | interactive-resolution   | Interactive resolve-conflicts command walkthrough             | SQLite     | No     |
| 13 | html-report-viewer       | HTML report generation with --html flag                       | SQLite     | No     |
| 14 | streaming-large-datasets | Large dataset streaming with batch-size and parallel flags    | SQLite     | No     |
| 15 | mssql-support            | Full MSSQL workflow: schema diff, data diff, gen-pack, apply  | MSSQL      | Yes    |
| 16 | oracle-support           | Full Oracle workflow: schema drift, data diff, gen-pack, apply | Oracle    | Yes    |

Running a SQLite Sample

SQLite samples have no external dependencies. From the project root:

cd samples/03-schema-migrations
make diff
make gen-pack
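
Each sample's Makefile wraps direct deepdiffdb invocations. A hypothetical sketch of what the SQLite targets might contain is shown below; the database paths and flag names here are illustrative assumptions, not the tool's documented interface:

```makefile
# Hypothetical sketch -- the real targets live in each sample's Makefile.
# Database file names and --source/--target flag spellings are assumptions.
diff:
	deepdiffdb diff --source prod.db --target dev.db

gen-pack:
	deepdiffdb gen-pack --source prod.db --target dev.db
```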

Running a Docker Sample

MySQL, PostgreSQL, MSSQL, and Oracle samples require Docker and Docker Compose.

# MySQL example
cd samples/01-basic-schema-drift
make up # start containers
make seed # load schema and data
make diff # run deepdiffdb diff
make gen-pack # generate migration pack
make down # stop and remove containers
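
Behind `make up` is a Docker Compose file that starts the two databases being compared. A hypothetical sketch for a MySQL sample with a prod and a dev instance follows; the image tag, ports, and credentials are illustrative assumptions, not the sample's actual configuration:

```yaml
# Hypothetical docker-compose.yml sketch for a MySQL sample.
# Image tag, host ports, and credentials are assumptions.
services:
  prod:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: appdb
    ports:
      - "3306:3306"
  dev:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: appdb
    ports:
      - "3307:3306"
```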

Sample 14: Streaming Large Datasets

Sample 14 includes a Go seed script that generates 500,000 orders, 100,000 products, and 200,000 audit log rows in SQLite to demonstrate the performance impact of batch size and parallelism settings:

cd samples/14-streaming-large-datasets
make seed # generate ~800k rows
make diff # hash with default settings (batch=10000, parallel=1)
make diff-fast # hash with batch=5000, parallel=4
make diff-sequential # hash with batch=0 (full scan, pre-v0.7 behaviour)

Sample 16: Oracle Support

Sample 16 starts two Oracle XE 21c containers (prod on port 1521, dev on port 1522) using the gvenzl/oracle-xe:21-slim-faststart image and seeds them with schema and data drift:

cd samples/16-oracle-support
make up # start Oracle containers (takes ~60s for XE to initialise)
make wait-healthy # wait for both containers to be ready
make seed # run SQLPlus init scripts
make diff # run deepdiffdb diff
make gen-pack # generate migration pack
make down # stop containers
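
The wait-healthy target can be implemented with Docker's built-in health status. A hypothetical sketch follows; the container names and the assumption that the compose file defines healthchecks are illustrative, not taken from the sample itself:

```makefile
# Hypothetical sketch of a wait-healthy target. Container names (oracle-prod,
# oracle-dev) and the healthcheck wiring are assumptions about the compose setup.
wait-healthy:
	until [ "$$(docker inspect -f '{{.State.Health.Status}}' oracle-prod)" = "healthy" ]; do sleep 5; done
	until [ "$$(docker inspect -f '{{.State.Health.Status}}' oracle-dev)" = "healthy" ]; do sleep 5; done
```

Polling `docker inspect` like this avoids hard-coding a fixed sleep long enough for Oracle XE's slow startup.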