frontpage.

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•57s ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
3•sakanakana00•4m ago•0 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•6m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•7m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•8m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
3•Nive11•8m ago•4 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•12m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
2•chartscout•15m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•17m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•19m ago•1 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•23m ago•0 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•26m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•28m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•28m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•29m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•34m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•40m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•41m ago•1 comments

Slop News - The Front Page right now but it's only Slop

https://slop-news.pages.dev/slop-news
1•keepamovin•46m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•48m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
4•tosh•54m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•58m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•58m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
4•goranmoomin•1h ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

4•throwaw12•1h ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
3•senekor•1h ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
2•myk-e•1h ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
4•myk-e•1h ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•1h ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
6•1vuio0pswjnm7•1h ago•0 comments

Scaling request logging with ClickHouse, Kafka, and Vector

https://www.geocod.io/code-and-coordinates/2025-10-02-from-millions-to-billions/
136•mjwhansen•4mo ago

Comments

rozenmd•3mo ago
Great write-up!

I had a similar project back in August when I realised my DB's performance (Postgres) was blocking me from implementing features users commonly ask for (querying out to 30 days of historical uptime data).

I was already blown away by the performance (200ms to query what Postgres was doing in 500-600ms), but then I realized I hadn't put an index on the ClickHouse table. Now the query returns in 50-70ms, and that includes network time.
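
To make that concrete: in ClickHouse the big wins usually come from the table's ORDER BY key, but a data-skipping index can also be added after the fact. A hedged sketch, assuming the clickhouse-connect Python client, with an invented table and column:

  # Illustrative only: add a minmax skip index on the timestamp column
  # behind a "last 30 days of uptime" query, then backfill it for
  # existing parts. Table and column names are made up.
  import clickhouse_connect

  client = clickhouse_connect.get_client(host="localhost")

  client.command(
      "ALTER TABLE uptime_checks "
      "ADD INDEX idx_checked_at checked_at TYPE minmax GRANULARITY 4"
  )
  client.command("ALTER TABLE uptime_checks MATERIALIZE INDEX idx_checked_at")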

fermuch•3mo ago
Materialized views are a great tool for aggregating data in CH since they are automatically updated on insert from the original table. I recommend you take a look and try it out; maybe it'll go down to single-digit milliseconds!
ansgri•3mo ago
And there are two kinds of those: the other kind is refreshable materialized views, which run on a schedule and can have dependencies between them, so they can implement quite complex data transformation pipelines.
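
A minimal sketch of the two flavours described above, assuming the clickhouse-connect Python client; the request-log and rollup table names are invented, and the rollup targets (e.g. a SummingMergeTree) are assumed to exist already:

  # Illustrative only: an incremental MV (updated at insert time) and a
  # refreshable MV (recomputed on a schedule). Names are made up; the
  # target tables daily_usage and monthly_usage must already exist.
  import clickhouse_connect

  client = clickhouse_connect.get_client(host="localhost")

  # Incremental: every insert into request_logs also lands, aggregated,
  # in daily_usage.
  client.command("""
      CREATE MATERIALIZED VIEW IF NOT EXISTS daily_usage_mv
      TO daily_usage AS
      SELECT user_id, toDate(ts) AS day, count() AS requests
      FROM request_logs
      GROUP BY user_id, day
  """)

  # Refreshable: recomputed on a schedule instead of per insert
  # (newer ClickHouse versions; may need to be enabled explicitly).
  client.command("""
      CREATE MATERIALIZED VIEW IF NOT EXISTS monthly_usage_mv
      REFRESH EVERY 1 HOUR
      TO monthly_usage AS
      SELECT user_id, toStartOfMonth(ts) AS month, count() AS requests
      FROM request_logs
      GROUP BY user_id, month
  """)
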
nasretdinov•3mo ago
BTW you could've used e.g. kittenhouse (https://github.com/YuriyNasretdinov/kittenhouse, my fork) or just a simpler buffer table, with 2 layers and a larger aggregation period than in the example.

Alternatively, you could've used async insert functionality built into ClickHouse: https://clickhouse.com/docs/optimize/asynchronous-inserts . All of these solutions are operationally simpler than Kafka + Vector, although obviously it's all tradeoffs.
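
For concreteness, a hedged sketch of the async-insert route, assuming the clickhouse-connect Python client and an invented request_logs table; each request sends a tiny insert and the server does the batching:

  # Illustrative only: let ClickHouse buffer and batch small inserts
  # server-side via async_insert instead of batching in Kafka + Vector.
  import clickhouse_connect

  client = clickhouse_connect.get_client(host="localhost")

  def log_request(user_id: int, endpoint: str, latency_ms: float) -> None:
      # async_insert=1: the server buffers rows and flushes larger parts
      # in the background; wait_for_async_insert=0 returns once buffered.
      client.insert(
          "request_logs",
          [[user_id, endpoint, latency_ms]],
          column_names=["user_id", "endpoint", "latency_ms"],
          settings={"async_insert": 1, "wait_for_async_insert": 0},
      )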

devmor•3mo ago
There were a lot of simpler options that came to mind while reading through this, frankly.

But I imagine the writeup leaves out a myriad of future concerns and does not fully convey the pressure and stress of trying to solve such a high-scale problem.

Ultimately, going with a somewhat more complex solution that involves additional architecture but has been tried and tested by a 3rd party that you trust can sometimes be the more fitting end result. Assurance often weighs more than simplicity, I think.

nasretdinov•3mo ago
While kittenhouse is, unfortunately, abandonware (even though you can still use it and it works), you can't say the same about e.g. async inserts in ClickHouse: it's a very simple and robust solution to exactly the problem that PHP (and some other languages') backends often face when trying to use ClickHouse.
ajayvk•3mo ago
Yes, had similar questions. Wouldn't tuning the settings for the buffer table have helped avoid the TOO_MANY_LINKS error?
frenchmajesty•3mo ago
Thanks for sharing, I enjoyed reading this.
tlaverdure•3mo ago
Thanks for sharing. I really enjoyed the breakdown, and great to see small tech companies helping each other out!
mperham•3mo ago
Seems weird not to use Redis as the buffering layer + minutely cron job. Seems a lot simpler than installing Kafka + Vector.
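
A rough sketch of that Redis-plus-cron idea, assuming redis-py and the clickhouse-connect Python client; all key, table, and column names are invented:

  # Illustrative only: the app pushes one JSON row per request onto a
  # Redis list; a minutely cron job drains a batch and does one bulk
  # insert into ClickHouse.
  import json

  import clickhouse_connect
  import redis

  r = redis.Redis(host="localhost", port=6379)
  ch = clickhouse_connect.get_client(host="localhost")

  def log_request(user_id: int, endpoint: str, latency_ms: float) -> None:
      r.rpush("request_log_buffer", json.dumps([user_id, endpoint, latency_ms]))

  def flush_buffer(batch_size: int = 100_000) -> None:
      # Run from cron every minute. If the process dies between insert
      # and ltrim, the batch is re-inserted: crude at-least-once delivery.
      raw = r.lrange("request_log_buffer", 0, batch_size - 1)
      if not raw:
          return
      rows = [json.loads(x) for x in raw]
      ch.insert(
          "request_logs",
          rows,
          column_names=["user_id", "endpoint", "latency_ms"],
      )
      r.ltrim("request_log_buffer", len(raw), -1)
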
SteveNuts•3mo ago
Vector is very simple to operate and (mostly) stateless, and can handle buffering if you choose.

Kafka and Redis are a "pick your poison" choice IMO; scaling and operating either comes with its own headaches.

otterley•3mo ago
Redis isn’t a good durable message queue.
albertgoeswoof•3mo ago
Currently at the millions stage with https://mailpace.com relying mostly on Postgres

Tbh this terrifies me! We don’t just have to log the requests but also store the full emails for a few days, and they can be up to 50 MiB in total size.

But it will be exciting when we get there!

fnord77•3mo ago
How does ClickHouse compare to Druid, Pinot or StarTree?
jamesblonde•3mo ago
Here's a good performance study by OneHouse comparing ClickHouse, StarRocks, and Trino:

https://www.onehouse.ai/blog/apache-spark-vs-clickhouse-vs-p...

Druid is for real-time analytics, similar to ClickHouse. StarRocks is best at joins; ClickHouse is not good at joins.

manish_gill•3mo ago
> Clickhouse is not good for joins

This is less and less true as time goes on tbh. 25.9 introduced Join Reordering as well - https://clickhouse.com/blog/clickhouse-release-25-09

saisrirampur•3mo ago
Sai from ClickHouse here. Very compelling story! Really love your emphasis on using the right tool for the right job - power of row vs column stores.

We recently added a MySQL/MariaDB CDC connector in ClickPipes on ClickHouse Cloud. This would have simplified your migration from MariaDB.

https://clickhouse.com/docs/integrations/clickpipes/mysql https://clickhouse.com/docs/integrations/clickpipes/mysql/so...

ch2026•3mo ago
1) ClickHouse async_insert would have solved all your issues: https://clickhouse.com/docs/optimize/asynchronous-inserts

1a) If you’re still having too many files/parts, then fix your PARTITION BY and MergeTree primary key.

2) Why are you writing to Kafka when Vector (vector.dev) does buffering/batching?

3) If you insist on Kafka, https://clickhouse.com/docs/engines/table-engines/integratio... consumes directly from Kafka (or, since you’re on ClickHouse Cloud, use ClickPipes) — what’s the point of Vector here?

Your current solution is unnecessarily complex. I’m guessing the core problem is that your MergeTree primary key is wrong.
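
Purely as an illustration of the kind of schema being described here (the article's actual schema isn't shown in this thread): coarse monthly partitions so each insert touches few parts, and an ORDER BY that matches per-customer, time-bounded lookups. Assumes the clickhouse-connect Python client; all names are invented.

  # Illustrative only: a request-log MergeTree table with a coarse
  # PARTITION BY and a primary key aligned with the query pattern.
  import clickhouse_connect

  client = clickhouse_connect.get_client(host="localhost")

  client.command("""
      CREATE TABLE IF NOT EXISTS request_logs
      (
          customer_id UInt64,
          ts          DateTime,
          endpoint    LowCardinality(String),
          status      UInt16,
          latency_ms  Float32
      )
      ENGINE = MergeTree
      PARTITION BY toYYYYMM(ts)      -- coarse partitions: few parts per insert
      ORDER BY (customer_id, ts)     -- matches "usage for customer X over time"
      TTL ts + INTERVAL 90 DAY       -- optional: expire old raw requests
  """)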

momothereal•3mo ago
Writing to Kafka allowed them to continue their current ingestion process into MariaDB at the same time as ClickHouse. Kafka consumer groups allow the data to be consumed twice by different consumer pools that have different throughput without introducing bottlenecks.

From experience, the Kafka tables in ClickHouse are not stable at high volumes, and they are harder to debug when things go sideways. It is also easier to mutate your data before ingestion using Vector's VRL scripting language than with ClickHouse table views (SQL) when dealing with complex data that needs to be denormalized into a flat table.

ch2026•3mo ago
> Writing to Kafka allowed them to continue their current ingestion process into MariaDB at the same time as ClickHouse.

The one they're going to shut down as soon as this works? Yeah, great reason to make a permanent tech choice for a temporary need. Versus just keeping the MariaDB stuff exactly the same on the PHP side and writing to 2 destinations until cutover is achieved. Kafka is wholly unnecessary here. Vector is great tech but likely not needed. Kafka + Vector is absolutely the incorrect solution.

Their core problem is the destination table schema (which they did not provide) and a very poorly chosen primary key + partition.

est•3mo ago
Can you just buffer some writes in Vector and eliminate Kafka?

I set up Vector to buffer Elasticsearch writes years ago, also for logs; it ran so well, without any problems, that I almost forgot about it.

anticodon•3mo ago
Or vice versa: make ClickHouse ingest batches directly from Kafka. Messages are already buffered in Kafka; I don’t get why Vector is necessary here.
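
A hedged sketch of that route: a Kafka-engine table plus a materialized view that drains it into the real MergeTree table. Assumes the clickhouse-connect Python client; broker, topic, and table names are invented.

  # Illustrative only: ClickHouse consumes the topic itself, no Vector.
  import clickhouse_connect

  client = clickhouse_connect.get_client(host="localhost")

  client.command("""
      CREATE TABLE IF NOT EXISTS request_logs_kafka
      (
          customer_id UInt64,
          ts          DateTime,
          endpoint    String,
          latency_ms  Float32
      )
      ENGINE = Kafka
      SETTINGS kafka_broker_list = 'kafka:9092',
               kafka_topic_list = 'request-logs',
               kafka_group_name = 'clickhouse-request-logs',
               kafka_format = 'JSONEachRow'
  """)

  # The MV is what actually moves rows from the topic into request_logs.
  client.command("""
      CREATE MATERIALIZED VIEW IF NOT EXISTS request_logs_consumer
      TO request_logs AS
      SELECT customer_id, ts, endpoint, latency_ms
      FROM request_logs_kafka
  """)
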
ch2026•3mo ago
Tbh the only thing they needed was a correct schema that didn’t constantly spawn new parts, plus async_insert enabled.
est•3mo ago
https://clickhouse.com/docs/knowledgebase/kafka-to-clickhous...

For anyone who's curious.

pachico•3mo ago
I shared this article internally and my peers were impressed by how similar it is to our final implementation. (It differs in that we use Redis as the queue.)

Happy to exchange notes about our journey too.

Cheers

solatic•3mo ago

  Geocodio offers a pay-as-you-go metered plan where users get 2,500 free geocoding lookups per day. This means we need to:
  Track the 2,500 free tier requests
  Continue tracking above that threshold for billing
  Let users view their usage in real-time on their dashboard
  Give admins the ability to query this data for support and debugging
  Store request details so we can replay customer requests when debugging issues
Just on the basis of what you wrote here, I'm not convinced ClickHouse is the right tool. ClickHouse would very much help you crunch statistics for latencies etc., but just for billing and fetching individual query data?

1) Push the request to Kafka/Pub Sub/etc.

2) One consumer pushes to TigerBeetle for tracking request usage within the free tier and for billing.

3) One consumer pushes individual requests to object storage, which scales out infinitely-ish, lets you fetch full details for an individual request, and whose lifecycle rules will automatically async-delete old requests for you (a rough sketch follows below).

If request statistics are important for business analysis, then instead of (boring) object storage you could look at one of the newer Iceberg-based options on top of object storage, e.g. S3 Tables, as long as querying an individual request remains fast and statistics can still be generated, say, for a nightly report. Another cheap approach: hook up one more consumer to the pub/sub and dump any request with latency above a reasonable threshold into a Slack channel, with a reference to the request ID, so someone can look into debugging it.
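
A rough sketch of point 3 above: a consumer that archives each raw request to object storage keyed by request ID, so individual requests can be fetched for debugging and lifecycle rules can expire old ones. Assumes kafka-python and boto3; topic, bucket, and field names are invented.

  # Illustrative only: archive raw requests to S3, one object per request.
  import json

  import boto3
  from kafka import KafkaConsumer

  s3 = boto3.client("s3")
  consumer = KafkaConsumer(
      "request-logs",
      bootstrap_servers="kafka:9092",
      group_id="raw-request-archiver",
  )

  for message in consumer:
      request = json.loads(message.value)
      s3.put_object(
          Bucket="raw-request-archive",
          Key=f"requests/{request['request_id']}.json",
          Body=message.value,
      )
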
matthewaveryusa•3mo ago
I shimmed Vector into my log pipeline recently and it really is a wonderfully simple and powerful tool. It's where I transform logs of software I don't own into Prometheus metrics and drop useless logs before they make it to Loki.
enether•3mo ago
Weird that you have to adopt Kafka AND Vector just to batch a few writes into ClickHouse...