frontpage.

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•32s ago•0 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•4m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•4m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•5m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•5m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•7m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•7m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•8m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•8m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•8m ago•0 comments

StrongDM's AI team builds serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
1•simonw•9m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•9m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•10m ago•2 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•12m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
2•nmfccodes•12m ago•1 comment

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
2•eatitraw•18m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•18m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•20m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•21m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•22m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•22m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
3•birdmania•22m ago•1 comment

First Proof

https://arxiv.org/abs/2602.05192
7•samasblack•24m ago•2 comments

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•26m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•26m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•27m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
2•facundo_olano•29m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•30m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•30m ago•1 comment

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•31m ago•0 comments

Pg_ClickHouse: A Postgres extension for querying ClickHouse

https://clickhouse.com/blog/introducing-pg_clickhouse
117•spathak•1mo ago

Comments

graovic•1mo ago
This is pretty good. It will allow us to use PostgREST as an API endpoint to query the ClickHouse database directly
justtheory•1mo ago
Ooh, neat idea!
saisrirampur•1mo ago
Good idea! Btw, ClickHouse does provide an HTTP interface directly, too! https://clickhouse.com/docs/interfaces/http
oulipo2•1mo ago
What are the typical uses of PostgREST? Is it just for when you want to make your database accessible to various languages over HTTP because you don't want to use an ORM and connect to your db? But besides that, for an enterprise solution, why would you use PostgREST to develop your backend rather than, say, use an ORM in your language and make direct queries? (honest question)
lillecarl•1mo ago
You skip the backend entirely and query from the frontend. PostgREST and Postgres are your backend. If you want extra sauce on top, you route those paths to an application that does whatever extra imperative operations you need.
jascha_eng•1mo ago
This always sounds super messy to me, but I guess Supabase is kind of the same thing, and especially for side projects it seems like a very efficient setup.
oulipo2•1mo ago
So a kind of "mini-Firebase"? And then you get security through row-level security?

But this also means your users can generate their own queries, possibly doing weird stuff that takes down the db, so I assume it's more for "internal tools"?
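
For reference, the row-level-security pattern being discussed is only a few lines of SQL. A minimal sketch, assuming a hypothetical todos table with a text owner_id column and a JWT whose sub claim carries the user id (PostgREST exposes claims via current_setting):

    -- Filter every PostgREST request down to the caller's own rows
    ALTER TABLE todos ENABLE ROW LEVEL SECURITY;

    CREATE POLICY todos_owner ON todos
        FOR ALL
        USING (owner_id = current_setting('request.jwt.claims', true)::json ->> 'sub');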

charrondev•1mo ago
Yeah, definitely not for public-facing things of any capacity.

No matter your size, unless you have a trivial amount of data, exposing a full SQL query language means you can be hit by a DoS attack pretty trivially.

And that's ignoring that row-level security on its own is not enough to implement even a moderately capable level of access control.

onedognight•1mo ago
The name of the project is a reference to P. G. Wodehouse[0] for those unaware.

[0] https://www.gutenberg.org/ebooks/author/783

justtheory•1mo ago
LOL
sevg•1mo ago
Hmm, no.

It’s just like all the other postgres extensions named “pg_foo”, and the clear and obvious choice for “foo” in this case is “clickhouse”.

Unless this is some bad joke that has flown over my head.

justtheory•1mo ago
I will never un-see it now, tbh
__s•1mo ago
definitely a joke, not even that bad
DetroitThrow•1mo ago
"I am never wrong, sir" -onedognight
yayitswei•1mo ago
That was my first impression as well.
tempest_•1mo ago
This is nice because there are a lot of ClickHouse FDW implementations and, from what I can tell, none of them are well maintained.
saisrirampur•1mo ago
Appreciate you chiming in! We evaluated almost all the FDWs and landed on clickhouse_fdw (built by Ildus) as the most mature option. However, it hadn’t been maintained since 2020. We used it as the base, and the goal is to take it to the next level.

Our main focus is comprehensive pushdown capabilities. It was very surprising to see how much the Postgres FDW framework has evolved over the years, and the number and variety of hooks it now provides for pushdown. This is why we decided to lean on the FDW framework rather than build an extension from the ground up. But we may still do that within pg_clickhouse for a few features, wherever the FDW framework becomes a restriction.

We’ve made notable progress over the last few months, including support for pushdown of custom aggregations and SEMI JOINs/basic subqueries. Fourteen of the twenty-two TPC-H queries are now fully pushdownable.

We’ll be doubling down to add pushdown support for much more complex queries, CTEs, window functions, and more. More on the future here - https://github.com/ClickHouse/pg_clickhouse?tab=readme-ov-fi... All with the goal of enabling users to build fast analytics from the Postgres layer itself while still using the power of ClickHouse!
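
To give a feel for the workflow being described, here is a minimal sketch following standard Postgres FDW conventions; the wrapper, server, and option names are assumptions for illustration, not pg_clickhouse's documented syntax:

    CREATE EXTENSION pg_clickhouse;

    -- Point a foreign server at ClickHouse (host/port/dbname are placeholders)
    CREATE SERVER ch_server
        FOREIGN DATA WRAPPER clickhouse_fdw
        OPTIONS (host 'clickhouse.internal', port '9000', dbname 'analytics');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER ch_server OPTIONS (user 'default', password '');

    -- Expose ClickHouse tables locally, then inspect pushdown
    IMPORT FOREIGN SCHEMA analytics FROM SERVER ch_server INTO public;

    EXPLAIN VERBOSE
    SELECT event_type, count(*) FROM events GROUP BY event_type;
    -- The plan should show the aggregation shipped to ClickHouse
    -- rather than executed row by row in Postgres.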

DetroitThrow•1mo ago
>All with the goal of enabling users to build fast analytics from the Postgres layer itself but still using the power of ClickHouse!

That would be incredible! So many times I want to reach for ClickHouse but whatever company I'm at has so much inertia built into PG. Pleease add CTE support.

And yes, I'm aware of PeerDB, or whatever that project is called. This is still helpful, maybe even more so.

__s•1mo ago
You're replying to the CEO of PeerDB. We recognize CDC is only one tool in the integration toolbox, which is why we're prioritizing this.
saisrirampur•1mo ago
Totally! Making things way easier on the app and query side is very important, which is why we plan to invest heavily in this going forward.

With respect to data replication, it gets really hard and has its challenges as data sizes grow - reliably moving tens of terabytes at speed, handling intricate quirks around replication slots, enterprise-grade observability, etc. PeerDB/ClickPipes is designed to solve these problems. I wrote a blog post covering this in more detail here: https://clickhouse.com/blog/postgres-cdc-year-in-review-2025

That said, point taken - we will ensure query and app migration is seamless as well and reduce friction in integrating Postgres and ClickHouse. pg_clickhouse is a step in that direction! :)

kiwicopple•1mo ago
(Note: we work closely with the ClickHouse team, so this is not intended to detract from their launch, simply to point out maintained options.)

Our CH wrapper is actively maintained, with push down, parameterized views, and async streaming: https://supabase.github.io/wrappers/catalog/clickhouse/

We see a lot of companies choosing CH with PG - it’s fantastic

saisrirampur•1mo ago
Thank you, Paul! Great to see Supabase wrappers evolve. I really love the async streaming feature. It helps address use cases involving (reliably) moving larger datasets from ClickHouse to Postgres for supporting (stricter) transactional workloads.

Very excited to continue working closely to further integrate these amazing open source database technologies and make it easier for users. :)

oulipo2•1mo ago
I'm using Postgres as my base business database, and thinking now about linking it to either DuckDB/DuckLake or ClickHouse...

What would you recommend, and why?

I understand part of the interest of pg_clickhouse is being able to use "pre-existing Postgres queries" on an analytical database without having to change anything. So if I am building my database now and have no legacy, would pg_clickhouse make sense, or should I do analytics differently?

Also, would you have some kind of tutorial / sample setup of a typical business application in Postgres with replication into ClickHouse for analytics queries, so I can see how ClickHouse would typically be used?

saisrirampur•1mo ago
Great question! If you’re starting a greenfield application, pg_clickhouse makes a lot of sense since you’ll be using a unified query layer for your application.

Now, coming to your question about replication: you can use PeerDB (acquired by ClickHouse https://github.com/PeerDB-io/peerdb), which is laser-focused and battle-tested at scale for Postgres-to-ClickHouse replication. Once the data is replicated into ClickHouse, you can start querying those tables from within Postgres using pg_clickhouse. In ClickHouse Cloud, we offer ClickPipes for Postgres CDC/replication, which is a managed-service version of PeerDB and is tightly integrated with ClickHouse. There could also be non-transactional tables that you ingest directly into ClickHouse and still query using pg_clickhouse.

So TL;DR: Postgres for OLTP; ClickHouse for OLAP; PeerDB/ClickPipes for data replication; pg_clickhouse as the unified query layer. We are actively working on making this entire stack tightly integrated so that building real-time apps becomes seamless. More on that soon! :)

oulipo2•1mo ago
Nice! Right now I'm using TimescaleDB; do you think it makes sense to move to a Postgres+CH setup instead, or only if I hit the limits of TimescaleDB?

Also, what would be the benefit for me of querying ClickHouse from Postgres, rather than directly through my backend via an ORM/SDK? Is it that it would allow me to do JOINs?

What would be the typical setup if I want to JOIN analytical data (e.g. my IoT device readings) from CH with some business data (e.g. the user owning the device) from my Postgres? Would I replicate that business data to CH to do the join there, or would that typically be the exact use case for pg_clickhouse?

saisrirampur•1mo ago
Great questions! ClickHouse is a purpose-built analytical database with thousands of optimizations for analytics, which is why it’s typically faster and more scalable than TimescaleDB. Here’s a post that covers real scenarios where users have moved workloads from Timescale to ClickHouse: https://clickhouse.com/blog/timescale-to-clickhouse-clickpip...

If your operational (OLTP) tables are reasonably big, the recommended approach is to replicate them into ClickHouse and let ClickHouse handle the joins. This avoids cross-database joins and lets the execution be pushed fully into ClickHouse. You can use ClickPipes/PeerDB to make that super easy. https://clickhouse.com/docs/integrations/clickpipes/postgres... https://clickhouse.com/docs/integrations/clickpipes/postgres

Where pg_clickhouse fits: If you're already using Postgres for OLTP and want to offload analytics to ClickHouse without rewriting your app, the pg_clickhouse extension helps. It lets you run OLTP and OLAP queries from Postgres, while pushing the analytical queries—and their joins—down to ClickHouse, where the replicated data lives. Going native, i.e. querying ClickHouse directly for OLAP, will be the most optimal and is recommended if your analytics are advanced/complex. We will be evolving pg_clickhouse over the coming months to support pushdown for more and more complex/advanced queries :)
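
As a concrete (hypothetical) sketch of that setup: device readings and their owning users are both replicated into ClickHouse and exposed as foreign tables in Postgres, so the join and the aggregation can be pushed down together:

    -- readings and users are foreign tables backed by ClickHouse replicas
    SELECT u.name, avg(r.value) AS avg_reading
    FROM readings r
    JOIN users u ON u.id = r.user_id
    WHERE r.ts > now() - interval '1 day'
    GROUP BY u.name;

    -- With full pushdown, ClickHouse executes the join and aggregation,
    -- and Postgres receives only the final aggregated rows.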

oulipo2•1mo ago
Very interesting! So I'm still developing the backend, which means I can still move analytics to CH, but I'm wondering whether it would make sense, because the data might not be so large that it requires it (e.g. 50 GB/year of data, I'd say).

On the other hand, I can imagine that there could be plenty of footguns with replication to another database (not instant, what about schema changes, backfills, what if some database is shut down for an update while replicating, etc.), so I'm a bit cautious about having a complex setup right now.

Would you have some basic example of a "mini-backend" with Postgres+ClickHouse replication, using docker-compose + TypeScript/Python or something, so I could play with it and take a look at what the operational complexity could be?

saisrirampur•1mo ago
You should just give it a shot for 10-15 minutes and see how it looks with ClickHouse. We made it that simple with ClickPipes :). I don't intend to sell here, but it is as simple as signing up for a trial on ClickHouse Cloud, clicking a few buttons, and watching your PG data get synced.

In regards to footguns with replication, I totally understand you being cautious. The last two years at PeerDB/ClickPipes were laser-focused on just Postgres CDC, to provide a dead-simple yet highly reliable experience. The product has hundreds of features, addresses hundreds of footguns, and is actively being enhanced. Here are some customers using this in production: https://clickhouse.com/blog/postgres-cdc-year-in-review-2025... You should give it a shot to see how easy it is. :)

In regards to a sample application, here is one: https://github.com/ClickHouse/HouseClick It showcases the PG + CH stack. We just merged a PR to integrate pg_clickhouse too. The good news is that there is a blog post planned in a couple of weeks which showcases a tightly integrated PG + CH experience with CDC and pg_clickhouse, all in OSS. It will have docker-compose too. Your question lines up with exactly what we are thinking about next; I couldn't resist revealing it. ;) :)

oulipo2•1mo ago
Nice! Looking forward to reading it!
mritchie712•1mo ago
We released a meltano target for DuckLake[0]. dlt has one now too. Pretty easy to sync pg -> ducklake.

I've been really happy with DuckLake, happy to answer any questions about it.

DuckDB has always felt easier to use vs. Clickhouse for me, but both are great options. If I were you, I'd try both options for a few hours with your use case and pick the one that feels better.

0 - https://www.definite.app/blog/target-ducklake

saisrirampur•1mo ago
I love DuckDB from a product perspective and appreciate the engineering excellence behind it. However, DuckDB was primarily built for in-process analytics, data science, and data-preparation/ETL workloads rather than real-time customer-facing analytics.

ClickHouse's bread and butter is real-time analytics for customer-facing applications, which often come with demanding concurrency and latency requirements.

Ack - totally makes sense; both are amazing technologies. You could try both, test them at the scale your real-time application may reach, and then choose the technology that best fits your needs. :)

Ritewut•1mo ago
I tested DuckDB and even Motherduck and this was my takeaway. Square hole, round peg situation.
oulipo2•1mo ago
Nice, what would be your typical setup?

You keep something like 1 year's worth of data in your "business database", and then archive the rest in S3 as Parquet and query it with DuckDB?

And if you want to sync everything, even "current data", to do data science/analytics, can you just write the recent data (e.g. the last week of data or whatever) to S3 every hour/day to get relatively up-to-date data? And doesn't that cause the S3 data to grow needlessly (e.g. does it replace the recent data each hour, rather than store an additional copy)?

Do you have some kind of "starter project" for a Postgres + DuckLake integration that I could look at, to see how it's used in practice and how it makes some operations easier?
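
For the archive-and-query part, the DuckDB side is pleasantly small. A sketch with hypothetical bucket paths, using DuckDB's httpfs extension and read_parquet:

    INSTALL httpfs;  -- one-time; enables s3:// paths
    LOAD httpfs;

    -- Scan a year of archived telemetry straight from Parquet on S3
    SELECT device_id,
           date_trunc('day', ts) AS day,
           avg(value) AS avg_value
    FROM read_parquet('s3://my-bucket/telemetry/2026/*.parquet')
    GROUP BY device_id, day;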

mritchie712•1mo ago
Once you have meltano installed, it's just:

    meltano run tap-postgres target-ducklake

Setting up meltano would be a bit more involved[0]

0 - https://www.notion.so/luabase/Postgres-to-DuckLake-example-2...

jascha_eng•1mo ago
Depending on your workload you might also be able to use Timescale to have very fast analytical queries inside postgres directly. That avoids having to replicate the data altogether.

Note that I work for the company that built timescale (Tiger Data). Clickhouse is cool though, just throwing another option into the ring.

Tbf, in terms of speed ClickHouse pulls ahead on most benchmarks; but if you want to join a lot with your Postgres data directly, you might benefit from having everything in one place. And of course you avoid the sync overhead.
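
For reference, the Timescale route the comment describes is only a couple of statements. A minimal sketch, assuming a hypothetical readings(ts, device_id, value) table; create_hypertable and continuous aggregates are standard TimescaleDB features:

    -- Turn a plain table into a time-partitioned hypertable
    SELECT create_hypertable('readings', 'ts');

    -- Maintain hourly rollups incrementally with a continuous aggregate
    CREATE MATERIALIZED VIEW readings_hourly
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 hour', ts) AS bucket,
           device_id,
           avg(value) AS avg_value
    FROM readings
    GROUP BY bucket, device_id;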

oulipo2•1mo ago
I'm indeed already using TimescaleDB; I was wondering if I would really gain something from adding ClickHouse.
saisrirampur•1mo ago
More on use cases involving TimescaleDB replication/migration to ClickHouse: https://clickhouse.com/blog/timescale-to-clickhouse-clickpip...
wkrp•1mo ago
I was using Timescale for a small project of mine and eventually switched to ClickHouse. While there was a 2-4x disk space reduction, the major benefits have been operational (updates & backups). The documentation is much better, too, since Timescale's mixes its cloud product documentation in, really muddying the water.

Despite that, man, it is really nice to be able to join your non-timeseries data in your queries (perhaps the FDW will allow this for ClickHouse? I need to look into that). If you don't have to deal with the operations side too much and performance isn't a problem, Timescale is really nice.

oulipo2•1mo ago
Can you tell me more about why Timescale doesn't perform, in your opinion? My use case for Timescale would be to gather my IoT telemetry data (perhaps 20-100 points per second) and store e.g. 1 year's worth of it to do some analysis and query past data, then offload the older data to Parquet files on S3.

I'd like to be able to use that for alert detection, etc., and some dashboard metrics, so I was thinking it was the kind of perfect use case for Timescale; but because I haven't used it "at scale" yet (not deployed yet), I don't know how it will behave.

How do you do JOINs with business data in ClickHouse, then? Do you have to do some kind of weird process where you query CH, then query Postgres, then join "manually" in your backend?

wkrp•1mo ago
I was a little unclear: I think Timescale performs quite well. It's just that in my (very limited) experience, ClickHouse performs better on the same data.

I actually have a blog post on my experience with it here: https://www.wkrp.xyz/a-small-time-review-of-timescaledb/ that goes into a bit more detail about my use case and the issues I ran into. I'm actually halfway through writing the follow-up using ClickHouse.

As detailed in the blog post, my data is all MMO video game stats, such as item drops. With Timescale, I was able to join an "items" table, with information such as the item name and image URL, in the same query as the "item_drops" table. This way the result includes everything needed for presentation. To accomplish the same in ClickHouse, I create an "items" table and an "items_dict" dictionary (https://clickhouse.com/docs/sql-reference/dictionaries) that contains the same data. The ClickHouse query then JOINs items_dict against item_drops to achieve the same thing.
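
For readers unfamiliar with the pattern, a rough sketch of that setup in ClickHouse SQL, with hypothetical column names:

    CREATE DICTIONARY items_dict
    (
        item_id   UInt64,
        name      String,
        image_url String
    )
    PRIMARY KEY item_id
    SOURCE(CLICKHOUSE(TABLE 'items'))
    LAYOUT(FLAT())
    LIFETIME(MIN 300 MAX 600);

    -- Enrich drops with item metadata at query time
    SELECT
        dictGet('items_dict', 'name', item_id) AS item_name,
        count() AS drops
    FROM item_drops
    GROUP BY item_name;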

If you know the shape of your data, you can probably whip up some quick scripts for generating fake versions and inserting into Timescale to get a feel for storage and query performance.

exagolo•1mo ago
If not having to adjust queries is a major driver for your considerations, then I would highly recommend looking at SQLGlot (https://github.com/tobymao/sqlglot), a transpiler that makes you (more) independent of query dialects. It already supports 30 dialects (big vendors such as Snowflake, Databricks, and BigQuery, but also plenty of specialists such as ClickHouse, SingleStore, and Exasol). The repo is extremely well maintained.

Picking the best solution for your concrete workload (and your future demands) should be just as important as the implementation effort, to avoid running into walls later on; at least as long as data volume, query complexity, or concurrency scalability can be challenges.

Olshansky•1mo ago
Added to https://github.com/Olshansk/postgres_for_everything.
N_Lens•1mo ago
At this stage it may be possible to build one's entire application stack inside of postgres extensions.
justtheory•1mo ago
Yes. YES! That's the idea. :-)