Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
140•isitcontent•6h ago•15 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
246•vecti•8h ago•116 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
173•eljojo•8h ago•124 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
48•phreda4•5h ago•8 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
77•antves•1d ago•57 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
39•nwparker•1d ago•10 comments

Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp

https://github.com/rivet-dev/sandbox-agent/tree/main/gigacode
11•NathanFlurry•14h ago•5 comments

Show HN: FastLog: 1.4 GB/s text file analyzer with AVX2 SIMD

https://github.com/AGDNoob/FastLog
3•AGDNoob•2h ago•1 comment

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

https://github.com/artifact-keeper
147•bsgeraci•23h ago•61 comments

Show HN: Horizons – OSS agent execution engine

https://github.com/synth-laboratories/Horizons
21•JoshPurtell•1d ago•3 comments

Show HN: I built a directory of $1M+ in free credits for startups

https://startupperks.directory
3•osmansiddique•3h ago•0 comments

Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly

https://rahuljaguste.github.io/Nethack_Falcons_Eye/
4•rahuljaguste•5h ago•1 comment

Show HN: A Kubernetes Operator to Validate Jupyter Notebooks in MLOps

https://github.com/tosin2013/jupyter-notebook-validator-operator
2•takinosh•3h ago•0 comments

Show HN: Daily-updated database of malicious browser extensions

https://github.com/toborrm9/malicious_extension_sentry
13•toborrm9•11h ago•5 comments

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

https://www.biotradingarena.com/hn
23•dchu17•10h ago•11 comments

Show HN: 33rpm – A vinyl screensaver for macOS that syncs to your music

https://33rpm.noonpacific.com/
3•kaniksu•4h ago•0 comments

Show HN: Chiptune Tracker

https://chiptunes.netlify.app
3•iamdan•5h ago•1 comment

Show HN: A password system with no database, no sync, and nothing to breach

https://bastion-enclave.vercel.app
10•KevinChasse•11h ago•9 comments

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
170•vkazanov•1d ago•48 comments

Show HN: Local task classifier and dispatcher on RTX 3080

https://github.com/resilientworkflowsentinel/resilient-workflow-sentinel
25•Shubham_Amb•1d ago•2 comments

Show HN: GitClaw – An AI assistant that runs in GitHub Actions

https://github.com/SawyerHood/gitclaw
7•sawyerjhood•11h ago•0 comments

Show HN: An open-source system to fight wildfires with explosive-dispersed gel

https://github.com/SpOpsi/Project-Baver
2•solarV26•9h ago•0 comments

Show HN: Agentism – Agentic Religion for Clawbots

https://www.agentism.church
2•uncanny_guzus•9h ago•0 comments

Show HN: Disavow Generator – Open-source tool to defend against negative SEO

https://github.com/BansheeTech/Disavow-Generator
5•SurceBeats•14h ago•1 comment

Show HN: BPU – Reliable ESP32 Serial Streaming with Cobs and CRC

https://github.com/choihimchan/bpu-stream-engine
2•octablock•11h ago•0 comments

Show HN: Craftplan – I built my wife a production management tool for her bakery

https://github.com/puemos/craftplan
566•deofoo•5d ago•166 comments

Show HN: Hibana – An Affine MPST Runtime for Rust

https://hibanaworks.dev
3•o8vm•12h ago•0 comments

Show HN: Total Recall – write-gated memory for Claude Code

https://github.com/davegoldblatt/total-recall
10•davegoldblatt•1d ago•6 comments

Show HN: Beam – Terminal Organizer for macOS

https://getbeam.dev/
2•faalbane•12h ago•2 comments

Show HN: Agent Arena – Test How Manipulation-Proof Your AI Agent Is

https://wiz.jock.pl/experiments/agent-arena/
45•joozio•15h ago•47 comments

Show HN: Managed Postgres with native ClickHouse integration

45•saisrirampur•2w ago
Hello HN, this is Sai and Kaushik from ClickHouse. Today we are launching a Postgres managed service that is natively integrated with ClickHouse. It is built together with Ubicloud (YC W24).

TL;DR: NVMe-backed Postgres + built-in CDC into ClickHouse + pg_clickhouse so you can keep your app Postgres-first while running analytics in ClickHouse.

Try it (private preview): https://clickhouse.com/cloud/postgres
Blog w/ live demo: https://clickhouse.com/blog/postgres-managed-by-clickhouse

Problem

Across many fast-growing companies using Postgres, performance and scalability commonly emerge as challenges, for both transactional and analytical workloads. On the OLTP side, common issues include slower ingestion (especially updates and upserts), slower vacuums, and long-running transactions incurring WAL spikes, among others. In most cases, these problems stem from limited disk IOPS and suboptimal disk latency. Freed from the need to provision or cap IOPS, Postgres could do far more than it does today.

On the analytics side, many limitations stem from the fact that Postgres was designed primarily for OLTP and lacks several features that analytical databases have developed over time, such as vectorized execution and support for a wide variety of ingest formats. We’re increasingly seeing a common pattern where companies like GitLab, Ramp, and Cloudflare complement Postgres with ClickHouse to offload analytics. This architecture lets teams adopt two purpose-built open-source databases.

That said, if you’re running a Postgres-based application, adopting ClickHouse isn’t straightforward. You typically end up building a CDC pipeline, handling backfills, dealing with schema changes, and updating your application code to be aware of a second database for analytics.

Solution

On the OLTP side, we believe that NVMe-based Postgres is the right fit and can drastically improve performance. NVMe storage is physically colocated with compute, enabling significantly lower disk latency and higher IOPS than network-attached storage, which requires a network round trip for every disk access. This benefits disk-throttled workloads and can significantly (up to 10x) speed up operations including updates, upserts, vacuums, and checkpointing. We are working on a detailed blog post examining how WAL fsyncs, buffer reads, and checkpoints dominate on slow I/O and are significantly reduced on NVMe. Stay tuned!
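
As a rough illustration of why this matters (a minimal, generic probe, not the methodology of the upcoming blog post; the path below is just a placeholder): every synchronous commit has to wait for a WAL fsync, so the measured write+fsync cycle time puts an upper bound on serial commit throughput for whatever volume you point it at.

    # Minimal fsync probe (illustrative only; point PATH at the volume under test).
    import os
    import time

    PATH = "/tmp/fsync_probe.dat"  # placeholder path
    N = 200

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.perf_counter()
    for _ in range(N):
        os.write(fd, b"x" * 4096)  # one WAL-page-sized write
        os.fsync(fd)               # durability barrier, as on commit
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.remove(PATH)

    print(f"avg write+fsync cycle: {elapsed / N * 1000:.2f} ms "
          f"(~{N / elapsed:.0f} serial commits/s upper bound)")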

On the OLAP side, the Postgres service includes native CDC to ClickHouse and unified query capabilities through pg_clickhouse. Today, CDC is powered by ClickPipes/PeerDB under the hood, which is based on logical replication. We are working to make this faster and easier by supporting logical replication v2 for streaming in-progress transactions, building a new logical decoding plugin to address existing limitations of logical replication, working toward sub-second replication, and more.
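
For context, the primitives this kind of logical-replication CDC builds on are plain Postgres; here is a minimal sketch of them (the managed service wires this up for you via ClickPipes/PeerDB, and the connection string, table names, and slot name below are just placeholders).

    # Generic logical-replication setup (illustrative; not the managed service's API).
    import psycopg2

    conn = psycopg2.connect("dbname=app user=postgres")  # placeholder DSN
    conn.autocommit = True
    cur = conn.cursor()

    # Publish the tables whose changes should stream out.
    cur.execute("CREATE PUBLICATION analytics_pub FOR TABLE orders, events;")

    # Create a logical replication slot; a CDC consumer reads changes from it.
    cur.execute(
        "SELECT * FROM pg_create_logical_replication_slot(%s, %s);",
        ("analytics_slot", "pgoutput"),
    )
    print(cur.fetchone())  # (slot_name, lsn)

    cur.close()
    conn.close()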

Every Postgres instance ships with the pg_clickhouse extension, which reduces the effort required to add ClickHouse-powered analytics to a Postgres application. It allows you to query ClickHouse directly from Postgres, so Postgres can serve both transactions and analytics. pg_clickhouse supports comprehensive query pushdown for analytics, and we plan to continuously expand this further (https://news.ycombinator.com/item?id=46249462).
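
From the application’s point of view, an analytics query stays plain Postgres SQL; a minimal sketch under that assumption (the table and column names are placeholders, and the extension’s own setup for mapping ClickHouse tables is omitted here).

    # Plain SQL from the app; per the post, pg_clickhouse pushes the aggregation
    # down to ClickHouse (illustrative query against a placeholder table).
    import psycopg2

    conn = psycopg2.connect("dbname=app user=postgres")  # placeholder DSN
    cur = conn.cursor()
    cur.execute("""
        SELECT date_trunc('day', created_at) AS day,
               count(*)                      AS orders,
               sum(amount)                   AS revenue
        FROM   orders_analytics             -- placeholder ClickHouse-backed table
        GROUP  BY 1
        ORDER  BY 1 DESC
        LIMIT  30;
    """)
    for day, orders, revenue in cur.fetchall():
        print(day, orders, revenue)
    cur.close()
    conn.close()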

Vision

To sum it up: our vision is to provide a unified data stack that combines Postgres for transactions with ClickHouse for analytics, giving you best-in-class performance and scalability on an open-source foundation.

Get Started

We are actively working with users to onboard them to the Postgres service. Since this is a private preview, it is currently free of cost. If you’re interested, please sign up here: https://clickhouse.com/cloud/postgres

We’d love to hear your feedback on our thesis and anything else that comes to mind; it would be super helpful to us as we build this out!

Comments

scottmas•2w ago
Looks pretty awesome! Especially the native joins between warehouse tables and the OLTP db.

Will pricing likely just be a percent markup over the (excellent) Ubicloud prices they have listed? (https://www.ubicloud.com/docs/about/pricing)

saisrirampur•2w ago
Thank you for chiming in. Pricing is still TBD and will be finalized in the coming months before the service goes to GA. At a high level, we plan to keep pricing competitive and also try to make it inclusive of the integration features (native CDC + pg_clickhouse). Stay tuned!
caffeinated_me•1w ago
It sounds like you're doing something similar to how Databricks works now that they've acquired Neon, or Snowflake now that they got Crunchy. I'm guessing the local SSD is a big advantage, but what else is different about your approach?
saisrirampur•1w ago
Thanks for posting this question! Compared to Snowflake and Databricks, a few key differences in our approach are:

(a) An initial focus on real-time, customer-facing applications rather than trying to boil the ocean. This also aligns with where the Postgres + ClickHouse combination has really shone for our users. Both Postgres and ClickHouse are designed primarily for developers building their system-of-record applications.

(b) Every component in the stack is open source—Postgres, ClickHouse, PeerDB for native CDC, pg_clickhouse, and Ubicloud Postgres (our data plane component). We plan to keep it that way as much as possible, as this strongly aligns with our ethos.

(c) As you noted, Postgres is NVMe-backed and the focus is on performance and scalability, while maintaining top-notch reliability. We think this is more meaningful for fast-growing (AI-driven) workloads than instant provisioning and forking. I talk about this a bit more here: https://clickhouse.com/blog/postgres-managed-by-clickhouse#p...

caffeinated_me•1w ago
Thanks! Out of curiosity, does the NVMe have a big effect on replication throughput? I've been wondering how much of the trouble I've had with other solutions is due to parsing WAL and how much is just slow cloud disks.
saisrirampur•1w ago
Very interesting question. It depends on the use case; we've seen quite a few workloads where logical replication gets throttled on I/O (the reorder buffer), where NVMe-based disk access should help a lot. This happens specifically when there are large or interleaved transactions. We plan to test this at production scale soon. Stay tuned for more learnings!
sakesun•1w ago
Is there a cost disadvantage to being NVMe-backed?
saisrirampur•1w ago
Great question! It really depends on the workload. We already support NVMe instances as small as 4 GB RAM / 2 vCPUs. For HA setups, you could go with one standby (with configurable synchronous replication) or two standbys (cross-AZ, with quorum-based replication). So yes, there is some additional cost from a hardware perspective due to the standbys, but depending on the workload, NVMe performance could offset those costs. On top of this, there’s a separate topic around the reliability/availability promises of separating storage and compute for an OLTP Postgres database.
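
For reference, the standby/quorum picture is visible from inside Postgres with generic catalog queries; a minimal sketch (placeholder connection string, nothing specific to the managed service).

    # Inspect synchronous replication settings and connected standbys.
    import psycopg2

    conn = psycopg2.connect("dbname=app user=postgres")  # placeholder DSN
    cur = conn.cursor()

    cur.execute("SHOW synchronous_standby_names;")
    print("synchronous_standby_names =", cur.fetchone()[0])

    cur.execute("SELECT application_name, state, sync_state FROM pg_stat_replication;")
    for name, state, sync_state in cur.fetchall():
        print(name, state, sync_state)

    cur.close()
    conn.close()
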
samokhvalov•1w ago
congrats! the more postgres everywhere, the better