
Introduction
MotherDuck is a serverless analytics service built on DuckDB. It hosts DuckDB databases in the cloud and keeps the same SQL surface you’d use locally. PostgreSQL is what most apps run on for transactional data.
So you usually want both: Postgres for the app, MotherDuck for analytics. The part in the middle that copies tables across is what Sling does.
This guide replicates a PostgreSQL schema into MotherDuck with Sling, in both full-refresh and incremental modes. The CLI output and row counts below come from an actual run, not a fabricated one.
Installing Sling
Sling is a single binary. Pick whichever install method fits your environment:
# Homebrew (macOS)
brew install slingdata-io/sling/sling
# curl (Linux)
curl -LO 'https://github.com/slingdata-io/sling-cli/releases/latest/download/sling_linux_amd64.tar.gz' \
&& tar xf sling_linux_amd64.tar.gz \
&& rm -f sling_linux_amd64.tar.gz \
&& chmod +x sling
# Scoop (Windows)
scoop bucket add sling https://github.com/slingdata-io/scoop-sling.git
scoop install sling
# Python (pip)
pip install sling
Confirm the install:
sling --version
Full installation notes are in the Sling CLI Getting Started Guide.
Configuring the PostgreSQL Source
Sling reads connection details from ~/.sling/env.yaml, environment variables, or sling conns set. For PostgreSQL you’ll need host, port, database, user, and password.
Using sling conns set:
sling conns set PG_SOURCE type=postgres host=host.ip user=myuser \
database=mydb password=mypass port=5432
Or in ~/.sling/env.yaml:
connections:
  PG_SOURCE:
    type: postgres
    host: host.ip
    user: myuser
    password: mypass
    port: 5432
    database: mydb
    sslmode: require
    schema: public
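Connections can also be supplied as environment variables holding a connection URL, which is handy in CI. A quick sketch using the same placeholder credentials:
# same connection, expressed as a URL in an environment variable
export PG_SOURCE='postgresql://myuser:mypass@host.ip:5432/mydb?sslmode=require'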
Test it:
sling conns test PG_SOURCE
The PostgreSQL connection docs cover SSL, IAM auth, and other options.
Configuring the MotherDuck Target
A MotherDuck connection needs the database name and a service token. You can generate a token from the MotherDuck UI.
sling conns set MOTHERDUCK type=motherduck \
database=my_db motherduck_token=eyJhbGciOi...
Or the URL form:
sling conns set MOTHERDUCK url="motherduck://my_db?motherduck_token=eyJhbGciOi..."
Or in ~/.sling/env.yaml:
connections:
  MOTHERDUCK:
    type: motherduck
    database: my_db
    motherduck_token: eyJhbGciOi...
Test it:
sling conns test MOTHERDUCK
Full options (attach modes, copy method, DuckDB version pinning) are in the MotherDuck connection docs.
A Full-Refresh Replication
For this run the PostgreSQL source has three tables in a seo_demo_pg_motherduck schema:
- customers — 5,000 rows
- orders — 30,000 rows, with an updated_at timestamp
- events — 60,000 rows, with an occurred_at timestamp
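For context, here is a minimal sketch of what the source DDL could look like. Only the key and timestamp columns named above (plus the columns that appear in the sample query later) come from the run; everything else is illustrative:
-- hypothetical DDL for the demo schema; adjust to your own tables
create schema if not exists seo_demo_pg_motherduck;

create table seo_demo_pg_motherduck.customers (
  customer_id bigint primary key,
  name        text,                 -- illustrative
  created_at  timestamptz           -- illustrative
);

create table seo_demo_pg_motherduck.orders (
  order_id    bigint primary key,
  customer_id bigint,
  amount      numeric(10,2),        -- illustrative
  updated_at  timestamptz not null
);

create table seo_demo_pg_motherduck.events (
  event_id    bigint primary key,
  customer_id bigint,
  event_type  text,
  region      text,
  occurred_at timestamptz not null
);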
The replication file lives next to wherever you want to run Sling from:
# replication.yaml
source: PG_SOURCE
target: MOTHERDUCK

defaults:
  mode: full-refresh
  object: seo_demo_pg_motherduck.{stream_table}

streams:
  seo_demo_pg_motherduck.customers:
    primary_key: [customer_id]
  seo_demo_pg_motherduck.orders:
    primary_key: [order_id]
    update_key: updated_at
  seo_demo_pg_motherduck.events:
    primary_key: [event_id]
    update_key: occurred_at
A few things to point out:
- object: seo_demo_pg_motherduck.{stream_table} is a runtime variable. Sling substitutes the source table name into the target object, so you don't repeat yourself per stream.
- primary_key and update_key are set even though the mode here is full-refresh. The next section flips to incremental without touching those declarations; only the mode changes.
- The target schema gets created automatically by Sling on the first run. No manual CREATE SCHEMA needed.
Run it:
sling run -r replication.yaml
Real output, trimmed for readability:
INF Sling Replication [3 streams] | PG_SOURCE -> MOTHERDUCK
INF [1 / 3] running stream seo_demo_pg_motherduck.customers
INF reading from source database
INF writing to target database [mode: full-refresh]
INF created table "seo_demo_pg_motherduck"."customers"
INF inserted 5000 rows into "seo_demo_pg_motherduck"."customers" in 11 secs [425 r/s] [390 kB]
INF execution succeeded
INF [2 / 3] running stream seo_demo_pg_motherduck.orders
INF created table "seo_demo_pg_motherduck"."orders"
INF inserted 30000 rows into "seo_demo_pg_motherduck"."orders" in 14 secs [2,131 r/s] [2.6 MB]
INF execution succeeded
INF [3 / 3] running stream seo_demo_pg_motherduck.events
INF created table "seo_demo_pg_motherduck"."events"
INF inserted 60000 rows into "seo_demo_pg_motherduck"."events" in 9 secs [6,036 r/s] [3.3 MB]
INF execution succeeded
INF Sling Replication Completed in 40s | PG_SOURCE -> MOTHERDUCK | 3 Successes | 0 Failures
95,000 rows across three tables, end to end, in 40 seconds. The _tmp tables that show up in the full log are Sling’s staging step before it swaps the data into the final target. They get cleaned up automatically.
Verification
A count(*) from MotherDuck right after the run:
select 'customers' as t, count(*) c from seo_demo_pg_motherduck.customers
union all select 'orders', count(*) from seo_demo_pg_motherduck.orders
union all select 'events', count(*) from seo_demo_pg_motherduck.events;
+-----------+-------+
| T         | C     |
+-----------+-------+
| customers | 5000  |
| orders    | 30000 |
| events    | 60000 |
+-----------+-------+
A small sample to confirm the data made the trip with types intact:
select event_id, customer_id, event_type, region, occurred_at
from seo_demo_pg_motherduck.events
order by event_id limit 5;
+----------+-------------+------------+--------+-------------------------------+
| EVENT_ID | CUSTOMER_ID | EVENT_TYPE | REGION | OCCURRED_AT                   |
+----------+-------------+------------+--------+-------------------------------+
|        1 |           2 | click      | us-2   | 2025-01-01 00:00:01 +0000 UTC |
|        2 |           3 | signup     | us-3   | 2025-01-01 00:00:02 +0000 UTC |
|        3 |           4 | purchase   | us-4   | 2025-01-01 00:00:03 +0000 UTC |
|        4 |           5 | page_view  | us-5   | 2025-01-01 00:00:04 +0000 UTC |
|        5 |           6 | click      | us-6   | 2025-01-01 00:00:05 +0000 UTC |
+----------+-------------+------------+--------+-------------------------------+
Numeric, varchar, and timestamp columns round-tripped cleanly. Nullable columns (region is null on every seventh row in the source) are preserved as nulls, not as the string "NULL".
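To double-check the null handling directly, run the same count against the Postgres source and against MotherDuck; the two numbers should agree. No expected value is shown here since it depends on how the demo data was seeded:
-- run against both source and target; the counts should match
select count(*) as null_regions
from seo_demo_pg_motherduck.events
where region is null;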
Switching to Incremental
Full-refreshing a 60,000-row table every day is fine. Full-refreshing a 600-million-row event table every day is not. Sling’s incremental mode reads only the rows newer than the highest update_key already in the target.
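Conceptually, the incremental read for the orders stream boils down to a filtered source query like the one below. This is an illustration of the idea, not Sling's literal internal SQL:
-- pseudo-query: the placeholder is the checkpoint Sling reads from the target
select *
from seo_demo_pg_motherduck.orders
where updated_at > {max_updated_at_already_in_motherduck};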
Drop customers from the streams (it changes slowly enough to keep on full-refresh in a separate run, or rebuild weekly) and switch the mode:
# replication-incremental.yaml
source: PG_SOURCE
target: MOTHERDUCK

defaults:
  mode: incremental
  object: seo_demo_pg_motherduck.{stream_table}

streams:
  seo_demo_pg_motherduck.orders:
    primary_key: [order_id]
    update_key: updated_at
  seo_demo_pg_motherduck.events:
    primary_key: [event_id]
    update_key: occurred_at
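To simulate a day's worth of new data, seed some rows on the Postgres side first. A rough sketch, assuming the demo schema from earlier; the column lists and value expressions are placeholders:
-- hypothetical seeding script; adjust columns and values to your schema
insert into seo_demo_pg_motherduck.orders (order_id, customer_id, amount, updated_at)
select 30000 + n, (n % 5000) + 1, round((random() * 100)::numeric, 2), now()
from generate_series(1, 1000) as n;

insert into seo_demo_pg_motherduck.events (event_id, customer_id, event_type, region, occurred_at)
select 60000 + n, (n % 5000) + 1, 'click', 'us-1', now()
from generate_series(1, 2500) as n;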
With 1,000 new orders and 2,500 new events inserted on the source, run again:
sling run -r replication-incremental.yaml
INF Sling Replication [2 streams] | PG_SOURCE -> MOTHERDUCK
INF [1 / 2] running stream seo_demo_pg_motherduck.orders
INF getting checkpoint value (updated_at)
INF writing to target database [mode: incremental]
INF inserted 1000 rows into "seo_demo_pg_motherduck"."orders" in 9 secs [104 r/s] [86 kB]
INF execution succeeded
INF [2 / 2] running stream seo_demo_pg_motherduck.events
INF getting checkpoint value (occurred_at)
INF writing to target database [mode: incremental]
INF inserted 2500 rows into "seo_demo_pg_motherduck"."events" in 6 secs [358 r/s] [137 kB]
INF execution succeeded
INF Sling Replication Completed in 20s | PG_SOURCE -> MOTHERDUCK | 2 Successes | 0 Failures
The getting checkpoint value line is where Sling looks at the target, finds the largest updated_at already present, and uses that as the lower bound on the source query. Only the new rows come across:
select 'orders' as t, count(*) c from seo_demo_pg_motherduck.orders
union all select 'events', count(*) from seo_demo_pg_motherduck.events;
+--------+-------+
| T      | C     |
+--------+-------+
| orders | 31000 |
| events | 62500 |
+--------+-------+
Orders went from 30,000 to 31,000. Events went from 60,000 to 62,500. Matches what was inserted on the source.
If you need updates as well as inserts (a row’s updated_at changes and the existing row should be replaced rather than duplicated), keep mode: incremental and make sure primary_key is set. Sling will upsert against the primary key instead of appending. The replication modes docs cover the trade-offs.
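In SQL terms, the effect is a merge keyed on the primary key: rows whose key already exists in the target are replaced, the rest are appended. A conceptual sketch only, where changed_rows stands in for the staged batch; this is not the literal SQL Sling runs:
-- conceptual upsert semantics for the orders stream
delete from seo_demo_pg_motherduck.orders
where order_id in (select order_id from changed_rows);

insert into seo_demo_pg_motherduck.orders
select * from changed_rows;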
Common Tweaks
A few options you’ll reach for once the basics are in place:
- Schema and column casing. MotherDuck (DuckDB) preserves identifier casing, and Sling defaults to keeping the source casing. Add target_options: { column_casing: snake } under defaults if your Postgres source has mixed-case identifiers and you want a clean snake_case target.
- Add new columns automatically. When the source schema changes, set target_options: { add_new_columns: true } so Sling alters the MotherDuck table on the next run. Without it, new source columns get dropped at the boundary.
- Pick a copy method. The default for MotherDuck is csv_http. For very wide rows or large text values, switch to arrow_http via copy_method: arrow_http in the connection config. It's usually faster and avoids CSV escaping edge cases.
- Filter at the source. Use a custom sql: block in a stream to project columns or filter rows before they leave Postgres. Cheaper than dragging unused columns to MotherDuck. (A combined sketch of these options follows below.)
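A replication.yaml fragment pulling together the casing, new-column, and source-side SQL options might look like this; the column list in the sql: block is just an example:
# sketch only: target_options under defaults plus a custom-SQL stream
defaults:
  mode: full-refresh
  object: seo_demo_pg_motherduck.{stream_table}
  target_options:
    column_casing: snake
    add_new_columns: true

streams:
  seo_demo_pg_motherduck.events:
    sql: |
      select event_id, customer_id, event_type, occurred_at
      from seo_demo_pg_motherduck.events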
Where to Go Next
The same replication pattern works for any of Sling’s 30+ database sources into MotherDuck: MySQL, SQL Server, Snowflake, BigQuery, and the rest. Swap the source connection and leave the target alone.
If you’d rather store flat files than warehouse tables, see PostgreSQL to S3 as Parquet, which uses the same replication file shape with a file-system target. For a local DuckDB setup instead of a managed MotherDuck one, see PostgreSQL to DuckDB. For team workflows with scheduling and alerting on top of the same CLI, look at the Sling Platform.
Questions go to Discord or GitHub Issues.

