Show HN: Duck-UI – Browser-Based SQL IDE for DuckDB
I built Duck-UI, a web-based SQL editor that runs DuckDB entirely in your browser via WebAssembly. No backend required.
The Problem: Every time I needed to query CSV or Parquet files, or even just play with SQL, I had to either (a) spin up a Jupyter notebook, (b) use the CLI, or (c) upload to a hosted service.
Friction at every step: far too much work just to load a CSV or test a bit of SQL while studying.
The Solution: DuckDB's WASM runtime lets us run SQL analysis client-side. Load CSV/JSON/Parquet files from disk or URL, write SQL, get results instantly. Data stays on your machine.
What It Does:
- SQL editor with autocomplete & syntax highlighting
- Import CSV, JSON, Parquet, Arrow (local or remote URLs)
- Query history, keyboard shortcuts, theme toggle
- Persistent storage via OPFS (data survives browser refresh)
- Optional: connect to external DuckDB servers
- One-liner Docker deployment or Node 20+ dev server
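As a sketch of the workflow (the URL and column names here are hypothetical), loading a remote CSV and querying it takes just a couple of statements:

```sql
-- Hypothetical example: load a remote CSV into a table, then query it.
-- DuckDB infers the schema automatically with read_csv_auto.
CREATE TABLE trips AS
    SELECT * FROM read_csv_auto('https://example.com/trips.csv');

-- Ordinary SQL from here on; results render in the grid.
SELECT pickup_zone, count(*) AS n
FROM trips
GROUP BY pickup_zone
ORDER BY n DESC
LIMIT 10;
```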
Technical Details:
- DuckDB compiled to WASM; query execution in-browser
- OPFS-backed persistence
- Apache 2.0 licensed
- Runs on Chrome 88+, Firefox 79+, Safari 14+
Use Cases:
- Learning SQL without setting up databases
- Ad-hoc data exploration (CSV → SQL in seconds)
- Quick prototyping before shipping to production
- Privacy-conscious workflows (no data leaves your browser)
GitHub: https://github.com/ibero-data/duck-ui
Live Demo: https://demo.duckui.com
Quick Start: docker run -p 5522:5522 ghcr.io/ibero-data/duck-ui:latest
Would love feedback on: (1) Use cases I'm missing (2) Performance bottlenecks you hit (3) Features that would make this your default SQL scratchpad.
DuckDB has shipped an embedded UI for a while now, and it's great. I get the appeal of yours, but that one is much easier to use for the same cases:
https://duckdb.org/2025/03/12/duckdb-ui
That one isn't self-hosted, though. You can't use the default UI offline, and you can't guarantee data safety.
It's very weird they don't offer it by default, but there are workarounds.
(You can use it offline)
https://github.com/duckdb/duckdb-ui/issues/62
Some of the comments on that thread are surprising. Are people not aware that software can be bundled so that it runs on machines without internet access?
Anyone know if there is a similar self-hosted/run-local option?
Run `duckdb -ui` and you get a local server bound to 127.0.0.1.
Is the DuckDB Wasm that you are providing the same thing as the DuckDB Wasm provided by DuckDB?
I ask because I am under the impression that the 'DuckDB Wasm' client provided by DuckDB doesn't yet support all of the DuckDB functions.
So I am interested to know if this has implemented more, fewer, or the same set of functions.
Really excited about the future of DuckDB:
1. DuckLake is the best data lake spec, and their team is improving the extension rapidly.
2. With DuckDB WASM, you can make apps that would normally have 2 to 3 second latency for network calls work in < 200ms.
We use it as our built-in datalake at Definite and couldn't be happier with it.
0 - https://ducklake.select/
1 - https://www.definite.app/blog/ducklake
Agreed. I just wish vector support were fully there; it has been experimental for a long time.
I am curious is anyone using DuckDB in prod?
Yes, we run DuckDB + DuckLake in prod for https://www.definite.app/
Is DuckLake compatible with Iceberg? I remember they created a new catalog.
of course, why wouldn't you?
I was using it even before it hit 1.0
If you want a desktop version, check out qstudio (https://www.timestored.com/qstudio/help/duckdb-sql-editor); it integrates DuckDB functionality for Parquet and CSV, and for pivoting data.
I'd love to make it work with flightsql or HTTP endpoints returning arrow IPC [0]! Did you consider using perspective for last-mile charting [1]? Building your own seems like a huge chunk of work. Well done!
[0] https://duckdb.org/docs/stable/clients/wasm/data_ingestion#a... [1] https://github.com/finos/perspective
Perspective is also getting direct support for DuckDB soon! https://github.com/finos/perspective/pull/3062
This is cool, thanks. I use the embedded UI but I’m going to play around with yours too.
DuckDB is the single most impressive piece of software I've used in my career. I'm mangling terabytes of Parquet files daily and it handles them effortlessly; the bindings are also well-written.
TRUE! It's amazing, and I use it in other projects too! The idea of this app running 100% in the browser came from handling lots of CSVs from different people at my former company... Just loading them into Excel took forever, so I came up with this. It made my life much easier; hope it makes yours easier too!
I just leave 'duckdb --ui' running on my computer at all times. While the functionality is great, I'm really not happy that the UI itself isn't open source and is instead controlled by MotherDuck. There are many quick and easy improvements that will probably never get made, as MotherDuck has no real incentive to improve it at this point.
Wonder if this work could be ported as a replacement for the DuckDB local UI?
I would love for this to be the case... I'm also not a fan of MotherDuck's UI... I actually created this project about two weeks before they launched theirs (that's why it's named Duck-UI...). I would have chosen another name, but well, I had already bought the domain hahaha... But SURE, I'd like this to become the best UI for all of us. I just need some ideas/help to implement the missing parts...
What is a duckdb server? I was under the impression there is no server in duckdb, just the client.
In theory there's none... DuckDB is like SQLite: it's a file. But in this case it's 100% WASM, so there's zero interaction with any "server"; it all runs in the browser. One example of DuckDB on a server is MotherDuck... it makes .duckdb files "available" in the cloud.
When DuckDB queries across multiple sources (say, Postgres and a CSV) does it first load all data into DuckDB or is it smart enough to only pull minimal data needed for the query on the fly?
It's possible; it seems this is done in other modes. Quote from Google AI mode:
"DuckDB offers robust capabilities for querying data stored partially on S3, particularly when dealing with Parquet files. This is achieved through several optimization techniques:
Predicate Pushdown: When you apply a WHERE clause to filter data, DuckDB can "push down" this filter directly into the Parquet file scan. If the Parquet file contains zonemaps (metadata about value ranges within columns), DuckDB can use this information to skip reading entire sections of the file that do not contain relevant data, significantly reducing the amount of data transferred from S3.
Projection Pushdown: When you select only specific columns in your SELECT statement, DuckDB automatically reads only those required columns from the Parquet file. This means you avoid downloading and processing unnecessary data, leading to faster queries and reduced S3 transfer costs.
HTTP Range Reads: DuckDB leverages HTTP range headers when interacting with S3 (or other object storage supporting range reads). This allows it to fetch only the necessary parts of the Parquet file, such as metadata or specific column chunks, rather than downloading the entire file."
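To make the quoted optimizations concrete, here is a hypothetical query (bucket path and column names invented) where all three apply at once:

```sql
-- Only the 'ts' and 'amount' columns are fetched (projection pushdown),
-- and row groups whose ts range excludes 2024 are skipped entirely
-- (predicate pushdown via Parquet zonemaps), so only a fraction of each
-- file needs to be read via HTTP range requests.
SELECT date_trunc('day', ts) AS day, sum(amount) AS total
FROM read_parquet('s3://my-bucket/events/*.parquet')
WHERE ts >= TIMESTAMP '2024-01-01'
GROUP BY day;
```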
"How does it handle [multi-source] joins?" is the obvious next question.
In memory; if the data is larger than memory, it spills to .duckdbtmp files on disk to work from.
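A hedged sketch of such a multi-source join (the connection string, table, and column names are made up), using DuckDB's postgres extension to join a Postgres table against a local CSV:

```sql
-- Attach a Postgres database (requires the postgres extension).
INSTALL postgres;
LOAD postgres;
ATTACH 'dbname=shop host=localhost' AS pg (TYPE postgres);

-- Join a Postgres table against a local CSV. DuckDB pushes filters
-- down to Postgres where it can, and spills intermediate results to
-- temp files if they outgrow memory.
SELECT o.order_id, c.segment
FROM pg.public.orders AS o
JOIN read_csv_auto('customers.csv') AS c
  ON o.customer_id = c.customer_id
WHERE o.created_at >= DATE '2024-01-01';
```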
I haven't had a chance to play with this yet, but thank you for building and sharing it. Great writeup; it sounds very useful and compelling!
The autocomplete is really good, UI is snappy as well. Well done!
Thanks! Happy you liked it, and thanks for trying it out!
Do we have analytics products built on top of DuckDB yet?
It doesn't work well on phones; the Run Query button is not visible.
Thank you for the feedback, adding to the roadmap right now!