At the moment I use PG + Tiger Data - I couldn't find a MySQL equivalent, so this serves as one.
All I want is effectively what ClickHouse does, but inside PG. I have a single table that I need fast counts on; ClickHouse can do the counts fast, but I have to go through the entire sync/replication setup to get that.
A quick scan always made it seem like it was really only set up for time-series workloads, and using it any other way would be a bit of a struggle.
For about a year now, releases have included a vector storage type, so it will be interesting to see how it compares in performance with what Alibaba did.
Just wanted to point that out. Given how often Postgres is plugged on HN, I think people overlook how versatile MariaDB is.
The most interesting part of this is the improvements to transaction handling that it seems they've made in https://github.com/alibaba/AliSQL/blob/master/wiki/duckdb/du... (it's also a good high-level breakdown of MySQL internals). Ensuring that the sync between the primary tables and the analytical ones is fast and, most importantly, transactional is awesome to see.
This isn't new either, people have been building OLAP storage engines into MySQL/Postgres for years, e.g., pg_ducklake and timescale.
BTW, would be great to hear apavlo's opinion on this.
The real version control history might be full of useless internal Jira ticket references, confidential information about products, in Mandarin, not even in git... there's a thousand reasons to surface only a minimal fake git version history, hand-crafted from major releases.
I might make the argument that paying the tax of delivering what you're arguing for has so many significant downsides that, in the end, you'd have something you wouldn't really want anyway.
but Tiger Data is more optimized for time-series data - https://www.tigerdata.com/docs/use-timescale/latest/hypercor...
I do wish there was an embedded ClickHouse-like DB in Postgres too.
https://vettabase.com/mariadb-columnstore-sql-limitations/#I...
Why I Believe MySQL is More Suited than PostgreSQL for DuckDB Integration

Currently, there are three mainstream solutions in the ecosystem: pg_duckdb, pg_mooncake, and pg_lake. However, they face several critical hurdles. First, PostgreSQL's logical replication is not mature enough, falling far behind the robustness of its physical replication, making it difficult to reliably connect a PG primary node to a DuckDB read-only replica via logical streams.
Furthermore, PostgreSQL lacks a truly mature pluggable storage engine architecture. While it provides the Table Access Method as an interface, it does not offer standardized support for primary-replica replication or Crash Recovery at the interface level. This makes it challenging to guarantee data consistency in many production scenarios.
MySQL, however, solves these issues elegantly:
Native Pluggable Architecture: MySQL was born with a pluggable storage engine design. Historically, MySQL pivoted from MyISAM to InnoDB as the default engine specifically to leverage InnoDB's row-level MVCC. While previous columnar attempts like InfoBright existed, they didn't reach mass adoption. Adding DuckDB as a native columnar engine in MySQL is a natural progression. It eliminates the need for "workaround" architectures seen in PostgreSQL, where data must first be written to a row-store before being converted into a columnar format.
The Power of the Binlog Ecosystem: MySQL’s "dual-log" mechanism (Binlog and Redo Log) is a double-edged sword; while it impacts raw write performance, the Binlog provides unparalleled support for the broader data ecosystem. By providing a clean stream of data changes, it facilitates seamless replication to downstream systems. This is precisely why OLAP solutions like ClickHouse, StarRocks, and SelectDB have flourished within the MySQL ecosystem.
Seamless HTAP Integration: When using DuckDB as a MySQL storage engine, the Binlog ecosystem remains fully compatible and intact. This allows the system to function as a data warehouse node that can still "egress" its own Binlog. In an HTAP (Hybrid Transactional/Analytical Processing) scenario, a primary MySQL node using InnoDB can stream Binlog directly to a downstream MySQL node using the DuckDB engine, achieving a perfectly compatible and fluid data pipeline.
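To make the Binlog point concrete, here is a minimal sketch of an external consumer that tails row events and mirrors one table into a local DuckDB file. This is not how AliSQL wires DuckDB in (that happens inside the server as a storage engine behind normal replication); it only illustrates why the Binlog is such a convenient change stream. It assumes the third-party pymysqlreplication and duckdb Python packages, a ROW-format binlog, and a made-up source table demo.t (id INT PRIMARY KEY, val TEXT):

```python
# Illustrative external binlog consumer; schema, credentials and server_id
# are made up. pip install mysql-replication duckdb
import duckdb
from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import (
    DeleteRowsEvent, UpdateRowsEvent, WriteRowsEvent,
)

olap = duckdb.connect("mirror.duckdb")
olap.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, val VARCHAR)")

stream = BinLogStreamReader(
    connection_settings={"host": "127.0.0.1", "port": 3306,
                         "user": "repl", "passwd": "secret"},
    server_id=4242,          # must be unique in the replication topology
    only_schemas=["demo"],
    only_tables=["t"],
    only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
    resume_stream=True,
    blocking=True,
)

# Apply each row change to the columnar copy as it arrives.
for event in stream:
    for row in event.rows:
        if isinstance(event, DeleteRowsEvent):
            olap.execute("DELETE FROM t WHERE id = ?", [row["values"]["id"]])
        else:
            v = row["after_values"] if isinstance(event, UpdateRowsEvent) else row["values"]
            olap.execute("INSERT OR REPLACE INTO t VALUES (?, ?)", [v["id"], v["val"]])
```

The appeal of the AliSQL design is that none of this external plumbing is needed: with DuckDB as a native engine, a downstream MySQL node can consume (and re-emit) an ordinary Binlog stream, as described above.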
In MySQL replication, GTIDs are crucial for ensuring that no transaction is missed or replayed more than once. We handle this in two scenarios, depending on whether the binlog is enabled (a schematic sketch of both orderings follows the list below):
- log_bin is OFF: We ensure that transactions in DuckDB are committed before the GTID is written to disk (in the mysql.gtid_executed table). Furthermore, after crash recovery, we perform idempotent writes to DuckDB for a period of time (the principle is similar to an upsert, or delete+insert). Therefore, at any given moment after crash recovery, we can guarantee that the data in DuckDB is consistent with the primary database.
- log_bin is ON: Unlike the previous scenario, we no longer rely on the `mysql.gtid_executed` table; we directly use the Binlog for GTID persistence. However, a new problem arises: Binlog persistence occurs before the Storage Engine commits. Therefore, we created a table in DuckDB to record the valid Binlog position. If the DuckDB transaction fails to commit, the Binlog will be truncated to the last valid position. This ensures that the data in DuckDB is consistent with the contents of the Binlog.
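To make both orderings concrete, here is a schematic toy using the duckdb Python package. The `__valid_binlog_pos` table and the in-memory stand-in for `mysql.gtid_executed` are illustrative only; this is not AliSQL's actual implementation:

```python
# Schematic sketch of the two commit orderings described above (illustrative
# names, not AliSQL code). Requires only `pip install duckdb`.
import duckdb

duck = duckdb.connect("replica.duckdb")
duck.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, val VARCHAR)")
# Stand-in for the "valid Binlog position" table kept inside DuckDB (log_bin ON).
duck.execute("CREATE TABLE IF NOT EXISTS __valid_binlog_pos (k INTEGER PRIMARY KEY, pos BIGINT)")
mysql_gtid_executed: set[str] = set()   # stand-in for the mysql.gtid_executed table

def apply_txn(gtid: str, rows: list[tuple[int, str]], binlog_pos: int | None) -> None:
    """Replay-safe apply: writes are keyed by primary key, so re-running the
    same transaction after crash recovery cannot duplicate rows."""
    duck.execute("BEGIN")
    for pk, val in rows:
        duck.execute("INSERT OR REPLACE INTO t VALUES (?, ?)", [pk, val])
    if binlog_pos is not None:
        # log_bin ON: record which Binlog position this engine commit covers;
        # on crash recovery the Binlog is truncated back to the last value here.
        duck.execute("INSERT OR REPLACE INTO __valid_binlog_pos VALUES (1, ?)", [binlog_pos])
    duck.execute("COMMIT")
    if binlog_pos is None:
        # log_bin OFF: the GTID becomes durable only *after* the DuckDB commit.
        # A crash between the two steps just means the transaction is replayed,
        # and the replay above is idempotent.
        mysql_gtid_executed.add(gtid)

# Replaying the same transaction twice leaves exactly one row per key.
apply_txn("uuid:42", [(1, "a"), (2, "b")], binlog_pos=None)
apply_txn("uuid:42", [(1, "a"), (2, "b")], binlog_pos=None)
print(duck.execute("SELECT count(*) FROM t").fetchone())   # (2,)
```

In both branches the recovery story is the same kind of reconciliation: either replay idempotently (GTID persisted after the engine commit) or trim the Binlog back to the last position DuckDB actually committed.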
Therefore, if the `gtid_executed` on the replica server matches that of the primary database, then the data in DuckDB will also be consistent with the primary database.

I understand that MySQL follows a specific pluggable storage architecture. I also understand that the direct equivalent in PG appears to be table access methods (TAM). However, you don't need to use TAM to build this - I'd argue FDWs are much more suitable.
Also, I think this design assumes that you'd swap PG's storage engine and replicate data to DuckDB through logical replication. The explanation then notes deficiencies in PG's logical replication.
I don't think this is the only possible design. pg_lake provides a solid open source implementation on how else you could build this solution, if you're familiar with PG: https://github.com/Snowflake-Labs/pg_lake
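To sketch what the FDW route looks like, here is a rough example against the community duckdb_fdw extension. The extension name, option names, schema, and paths are my assumptions for illustration, and this is not how pg_lake is built:

```python
# Rough FDW-route sketch: expose a DuckDB file to Postgres as a foreign table.
# Assumes duckdb_fdw is installed on the server; option names follow its
# sqlite_fdw lineage and may differ, so treat this as pseudo-DDL.
import psycopg2

conn = psycopg2.connect("dbname=app user=postgres")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS duckdb_fdw")
cur.execute("""
    CREATE SERVER IF NOT EXISTS duckdb_srv
        FOREIGN DATA WRAPPER duckdb_fdw
        OPTIONS (database '/var/lib/analytics/events.duckdb')
""")
cur.execute("""
    CREATE FOREIGN TABLE IF NOT EXISTS events_cold (
        id      bigint,
        ts      timestamptz,
        payload text
    ) SERVER duckdb_srv OPTIONS (table 'events')
""")
conn.commit()

# The analytical scan runs in DuckDB; Postgres just plans around the foreign
# table, so no table access method or logical replication is involved.
cur.execute("SELECT count(*) FROM events_cold")
print(cur.fetchone())
```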
All up, I feel this explanation is written from a MySQL-first perspective. "We built this valuable solution for MySQL. We're very familiar with MySQL's internals and we don't think those internals hold for PostgreSQL."
I agree with the solution's value and how it integrates with MySQL. I just think someone knowledgeable about PostgreSQL would have built things in a different way.
Should I ever participate in a Chinese speaking forum, I'd certainly use an LLM for translation as well.
Anyone using it in prod even with the beta status?
Let's all hope Ali will pick it up :)
I'm fully invested in Postgres though.
And I get the benefit of resiliency and DR for free.
If you are developing for MySQL and you are using Java/Kotlin/Clojure/Scala, consider this as well.
"MaterializedMySQL"
Sadly that feature seems to have been thrown out, probably due to complexity.
https://github.com/ClickHouse/ClickHouse/discussions/44887#d...
https://www.percona.com/blog/complete-walkthrough-mysql-to-c...
They bought PeerDB and offer it as ClickPipes, so I suspect the incentive to support that feature is pretty low.
1. integrate an off-the-shelf OLAP engine
   - forward OLAP queries to it
   - deal with continued issues keeping the two datasets in sync
2. rebase the OLTP and OLAP engines to use a unified storage layer
   - storage layer supports page-aligned row-oriented files, column-oriented files, and remote files
   - still have data and semantic inconsistencies due to running two engines
3. merge the engines (toy sketch below)
   - policy to automatically archive old records to a compressed column-oriented file format
   - option to move archived record files to remote object storage, fetched on demand
   - queries seamlessly integrate data from freshly updated records and archived records
   - only noticeable difference is that queries for very old records seem to take a few seconds longer to get the results back
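For what it's worth, option 3's archiving behaviour can be sketched as a toy with the duckdb Python package; the paths, the 30-day cutoff, and the schema are made up, and this is only meant to show hot rows and archived columnar files answering one query:

```python
# Toy tiering sketch: cold rows move to compressed Parquet, and a view unions
# them with the hot table. pip install duckdb; everything here is made up.
import os
import duckdb

os.makedirs("archive", exist_ok=True)
con = duckdb.connect("hot.duckdb")
con.execute("CREATE TABLE IF NOT EXISTS events (id BIGINT, ts TIMESTAMPTZ, payload VARCHAR)")
con.execute("INSERT INTO events VALUES (1, now() - INTERVAL 90 DAY, 'old'), (2, now(), 'fresh')")

# Archive policy: rows older than 30 days go to a compressed column-oriented
# file, then get dropped from the hot table.
con.execute("""
    COPY (SELECT * FROM events WHERE ts < now() - INTERVAL 30 DAY)
    TO 'archive/events_old.parquet' (FORMAT PARQUET, COMPRESSION ZSTD)
""")
con.execute("DELETE FROM events WHERE ts < now() - INTERVAL 30 DAY")

# One logical relation over fresh + archived data; pointing read_parquet at an
# s3:// URL (with the httpfs extension) gives the "remote object storage,
# fetch on demand" variant.
con.execute("""
    CREATE OR REPLACE VIEW events_all AS
    SELECT * FROM events
    UNION ALL
    SELECT * FROM read_parquet('archive/*.parquet')
""")
print(con.execute("SELECT count(*) FROM events_all").fetchone())   # (2,) on a fresh run
```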