The wall between operational databases and analytical lakehouses has long forced engineering teams to choose between speed and scale. Now, TigerData is tearing that wall down.
The company behind Tiger Postgres and TimescaleDB today announced Tiger Lake, a new architecture that transforms Postgres into a live, bidirectional engine for both real-time operations and deep analytics. No pipelines, no lock-in, no compromise.
“With Tiger Lake, we’ve built a native, bidirectional bridge between Postgres and the lakehouse,” said Mike Freedman, co-founder and CTO of TigerData. “It’s the architecture we believe the industry has been waiting for.”
A Fix for the Fragile Stack
Many teams today rely on duct-taped pipelines of Kafka, Flink, and custom ETL jobs to shuttle data between the database and the data lake. These setups are brittle and high-maintenance, especially as the need for real-time, production-grade intelligence grows.
“We stitched together Kafka, Flink, and custom code to stream data from Postgres to Iceberg—it worked, but it was fragile and high-maintenance,” said Kevin Otten, Director of Technical Architecture at Speedcast. “Tiger Lake replaces all of that with native infrastructure. It’s not just simpler—it’s the architecture we wish we had from day one.”
Tiger Lake removes that complexity. Developers can continuously replicate any Postgres table into an Iceberg-backed lakehouse, while using Tiger Postgres as the live system of record. Crucially, it’s not a one-way sync. When deeper analytics are computed on the lakehouse, such as historical aggregates or machine learning features, Tiger Lake pushes the results back into Postgres, ready to be served in customer-facing apps and agents.
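The round-trip pattern described above can be sketched as a toy, in-memory model: one forward replication step from the operational store to the lakehouse, one analytics step, and one push-back step for serving. Every name below (the tables, `replicate_to_lakehouse`, `push_back_to_postgres`) is a hypothetical illustration of the flow, not Tiger Lake's actual API, and the real system runs these steps continuously rather than on demand.

```python
# Toy sketch of the Postgres <-> lakehouse round trip. All names are
# illustrative assumptions; this is not Tiger Lake's API.
from collections import defaultdict

# "Postgres": the live operational table of raw events.
postgres_events = [
    {"device_id": "a", "temp": 20.0},
    {"device_id": "a", "temp": 22.0},
    {"device_id": "b", "temp": 30.0},
]
# "Postgres": a table that serves lakehouse-computed results to apps.
postgres_features = {}

# "Iceberg lakehouse": receives replicated rows.
lakehouse_events = []

def replicate_to_lakehouse():
    """Forward path: stream operational rows into the Iceberg table."""
    lakehouse_events.extend(postgres_events)

def compute_in_lakehouse():
    """Deep analytics on the lakehouse: per-device average temperature."""
    sums, counts = defaultdict(float), defaultdict(int)
    for row in lakehouse_events:
        sums[row["device_id"]] += row["temp"]
        counts[row["device_id"]] += 1
    return {d: sums[d] / counts[d] for d in sums}

def push_back_to_postgres(features):
    """Reverse path: land computed features back in Postgres for serving."""
    postgres_features.update(features)

replicate_to_lakehouse()
push_back_to_postgres(compute_in_lakehouse())

# The app now reads a lakehouse-computed feature with a plain lookup.
print(postgres_features["a"])  # 21.0
```

The point of the sketch is the shape of the loop, not the code: aggregates are computed where historical depth lives, then served from the operational database at interactive speed.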
Real-Time Meets Scale
What sets Tiger Lake apart is its ability to unify two traditionally distinct layers of the data stack.
Unlike transactional databases that falter under analytical workloads, or data lakes that aren’t production-ready, Tiger Postgres (enhanced with TimescaleDB) is purpose-built for high ingest rates, real-time analytics, and concurrent queries. Tiger Lake turns it into a coordination layer between fast and vast: real-time at the edge, and deep analytics at the core.
“Postgres has become the operational heart of modern applications, but until now, it’s existed in a silo from the lakehouse,” said Freedman. “Tiger Lake unifies both, natively and without compromise.”
This unlocks a new set of use cases that need both live context and historical depth, including agentic apps, AI copilots, and user-facing analytics. Developers can query ML features and semantic rollups straight from Postgres, even though they were computed in Iceberg.
Open by Design
TigerData’s message is clear: lock-in is out.
Tiger Lake is built on open standards like Apache Iceberg and integrates natively with AWS S3 Tables. It avoids proprietary runtimes or metadata formats, giving teams flexibility to compose their stack however they want, whether with Spark, Snowflake, or other tools in the open ecosystem.
Innovative brands such as Speedcast, Monte Carlo, and others are already deploying Tiger Lake in production, starting with native support for AWS S3 Tables.
This philosophy marks a sharp contrast to all-in-one platforms that fuse storage, compute, and query under a proprietary layer.
Now in Beta, With More on the Horizon
Tiger Lake is available today in public beta via Tiger Cloud. The first release focuses on syncing Postgres and TimescaleDB hypertables to AWS S3 Tables and streaming data back into Postgres.
Planned enhancements include direct querying of Iceberg catalogs from Tiger Postgres and full round-trip workflows that make deep analytics seamlessly available at the point of user interaction.
With Tiger Lake, TigerData aims to collapse the divide between the application and analytics layers, and in doing so, reshape how intelligent systems are built.