frontpage.

Filing the corners off my MacBooks

https://kentwalters.com/posts/corners/
723•normanvalentine•9h ago•368 comments

Starfling: A one-tap endless orbital slingshot game in a single HTML file

https://playstarfling.com
112•iceberger2001•2d ago•30 comments

1D Chess

https://rowan441.github.io/1dchess/chess.html
784•burnt-resistor•16h ago•142 comments

Artemis II safely splashes down

https://www.cbsnews.com/live-updates/artemis-ii-splashdown-return/
829•areoform•8h ago•267 comments

Installing every* Firefox extension

https://jack.cab/blog/every-firefox-extension
348•RohanAdwankar•10h ago•40 comments

Chimpanzees in Uganda locked in eight-year 'civil war', say researchers

https://www.bbc.com/news/articles/cr71lkzv49po
316•neversaydie•13h ago•179 comments

Great at gaming? US air traffic control wants you to apply

https://www.bbc.com/news/articles/ce84rvx0e6do
22•1659447091•3h ago•25 comments

20 years on AWS and never not my job

https://www.daemonology.net/blog/2026-04-11-20-years-on-AWS-and-never-not-my-job.html
114•cperciva•2h ago•15 comments

WireGuard makes new Windows release following Microsoft signing resolution

https://lists.zx2c4.com/pipermail/wireguard/2026-April/009561.html
460•zx2c4•16h ago•130 comments

AI assistance when contributing to the Linux kernel

https://github.com/torvalds/linux/blob/master/Documentation/process/coding-assistants.rst
292•hmokiguess•13h ago•189 comments

Industrial design files for Keychron keyboards and mice

https://github.com/Keychron/Keychron-Keyboards-Hardware-Design
364•stingraycharles•15h ago•110 comments

Bevy game development tutorials and in-depth resources

https://taintedcoders.com/
75•GenericCanadian•2d ago•14 comments

Volunteers turn a fan's recordings of 10K concerts into an online treasure trove

https://apnews.com/article/aadam-jacobs-collection-concerts-internet-archive-chicago-b1c9c4466a2d...
18•geox•2d ago•0 comments

Flashback to a time when government reports were works of art

https://www.chicagotribune.com/2026/04/08/transportation-library-northwestern/
16•NaOH•2d ago•2 comments

A practical guide for setting up Zettelkasten method in Obsidian

https://desktopcommander.app/blog/zettelkasten-obsidian/
45•rkrizanovskis•2d ago•19 comments

CPU-Z and HWMonitor compromised

https://www.theregister.com/2026/04/10/cpuid_site_hijacked/
315•pashadee•18h ago•89 comments

JSON formatter Chrome plugin now closed and injecting adware

https://github.com/callumlocke/json-formatter
208•jkl5xx•13h ago•109 comments

Helium is hard to replace

https://www.construction-physics.com/p/helium-is-hard-to-replace
303•JumpCrisscross•17h ago•207 comments

Investigating Split Locks on x86-64

https://chipsandcheese.com/p/investigating-split-locks-on-x86
53•ingve•3d ago•16 comments

Productive Procrastination

https://www.maxvanijsselmuiden.nl/blog/productive-procrastination/
11•maxvij•2h ago•6 comments

Quien – A better WHOIS lookup tool

https://github.com/retlehs/quien/
31•bretthopper•4h ago•9 comments

Italo Calvino: A traveller in a world of uncertainty

https://www.historytoday.com/archive/portrait-author-historian/italo-calvino-traveller-world-unce...
70•lermontov•8h ago•13 comments

The Bra-and-Girdle Maker That Fashioned the Impossible for NASA

https://thereader.mitpress.mit.edu/the-bra-and-girdle-maker-that-fashioned-the-impossible-for-nasa/
82•sohkamyung•1d ago•4 comments

What is RISC-V and why it matters to Canonical

https://ubuntu.com/blog/risc-v-101-what-is-it-and-what-does-it-mean-for-canonical
125•fork-bomber•2d ago•84 comments

The Seasons Are Wrong

https://kentwalters.com/posts/seasons/
13•NikxDa•3h ago•14 comments

Watgo – A WebAssembly Toolkit for Go

https://eli.thegreenplace.net/2026/watgo-a-webassembly-toolkit-for-go/
91•ibobev•13h ago•6 comments

Launch HN: Twill.ai (YC S25) – Delegate to cloud agents, get back PRs

https://twill.ai
66•danoandco•15h ago•62 comments

A compelling title that is cryptic enough to get you to take action on it

https://ericwbailey.website/published/a-compelling-title-that-is-cryptic-enough-to-get-you-to-tak...
218•mooreds•15h ago•118 comments

Intel 486 CPU announced April 10, 1989

https://dfarq.homeip.net/intel-486-cpu-announced-april-10-1989/
165•jnord•20h ago•152 comments

OpenClaw’s memory is unreliable, and you don’t know when it will break

https://blog.nishantsoni.com/p/ive-seen-a-thousand-openclaw-deploys
108•sonink•13h ago•121 comments

ArkFlow: High-performance Rust stream processing engine

https://github.com/arkflow-rs/arkflow
170•klaussilveira•11mo ago

Comments

habobobo•11mo ago
Looks interesting. How does this compare to Arroyo and vector.dev?
tormeh•11mo ago
Also curious about any comparison to Fluvio.
necubi•11mo ago
(I'm the creator of Arroyo)

I haven't dug deep into this project, so take this with a grain of salt.

ArkFlow is a "stateless" stream processor, like vector or benthos (now Redpanda Connect). These are great for routing data around your infrastructure while doing simple, stateless transformations on them. They tend to be easy to run and scale, and are programmed by manually constructing the graph of operations.

Arroyo (like Flink or Rising Wave) is a "stateful" stream processor, which means it supports operations like windowed aggregations, joins, and incremental SQL view maintenance. Arroyo is programmed declaratively via SQL, which is automatically planned into a dataflow (graph) representation. The tradeoff is that state is hard to manage, and these systems are much harder to operate and scale (although we've done a lot of work with Arroyo to mitigate this!).

I wrote about the difference at length here: https://www.arroyo.dev/blog/stateful-stream-processing
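
The stateless/stateful split described above can be illustrated with a small sketch (plain Python, not any particular engine's API; the event shape and function names are made up for the example):

```python
from collections import defaultdict

# Stateless transform: each event is handled independently, so the
# operator keeps no memory between events (easy to run and scale).
def stateless_uppercase(events):
    for e in events:
        yield {**e, "msg": e["msg"].upper()}

# Stateful operator: a tumbling-window count per key forces the engine
# to hold state (here, a dict) until each window closes.
def windowed_count(events, window_secs):
    counts = defaultdict(int)  # (window_start, key) -> count
    for e in events:
        start = (e["ts"] // window_secs) * window_secs
        counts[(start, e["key"])] += 1
    return dict(counts)

events = [
    {"ts": 0, "key": "a", "msg": "x"},
    {"ts": 3, "key": "a", "msg": "y"},
    {"ts": 12, "key": "a", "msg": "z"},
]
print(list(stateless_uppercase(events))[0]["msg"])  # X
print(windowed_count(events, 10))  # {(0, 'a'): 2, (10, 'a'): 1}
```

The hard part in real engines is that the `counts` dict must survive restarts and be partitioned across workers, which is exactly the operational burden the comment describes.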

fer•11mo ago
Previous discussion (46 days ago): https://news.ycombinator.com/item?id=43358682
shawabawa3•11mo ago
seems like a simplified equivalent of https://vector.dev/

a major difference seems to be converting things to arrow and using SQL instead of using a DSL (vrl)

sofixa•11mo ago
> seems like a simplified equivalent of https://vector.dev/

No? Vector is for observability, to get your metrics/logs, transform them if needed, and put them in the necessary backends. Transformation is optional, and for cases like downsampling or converting formats or adding metadata.

ArkFlow gets data from sources like databases and message queues/brokers, transforms it, and puts it back into databases and message queues/brokers. Transformation looks like a pretty central use case.

Very different scenarios. It's like saying that a Renault Kangoo is a simplified equivalent of a BTR-80 because both have wheels, an engine, and space for stuff.

rockwotj•11mo ago
It's a Rust port of Redpanda Connect (Benthos), but with fewer connectors

https://github.com/redpanda-data/connect

necubi•11mo ago
Vector is often used for observability data (in part because it's now owned by Datadog) but it's not limited to that. It's a general purpose stateless stream processing engine, and can be used for any kind of events.
sofixa•11mo ago
Vector started for observability data only, and that's why they got bought by Datadog.
hoherd•11mo ago
Incidentally arkflow implements VRL https://github.com/arkflow-rs/arkflow/pull/273
muffa•11mo ago
Looks very similar to redpanda-connect/benthos
coreyoconnor•11mo ago
How do you educate people on stream processing? For pipeline-like systems, stream processing is essential IMO: backpressure, circuit breakers, etc. are critical for resilient systems. Yet I have a hard time building an engineering team that can utilize stream processing, instead of just falling back on synchronous procedures that are easier to understand (but nearly always slower and more error-prone).
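
For what it's worth, the backpressure idea mentioned above can be demonstrated with nothing more than a bounded queue (a toy Python sketch, not a real pipeline):

```python
import queue
import threading
import time

# Backpressure via a bounded queue: when the consumer lags, put()
# blocks the producer instead of letting memory grow without bound.
buf = queue.Queue(maxsize=2)
consumed = []

def producer():
    for i in range(6):
        buf.put(i)  # blocks while 2 items are already in flight
    buf.put(None)   # sentinel: no more items

def consumer():
    while True:
        item = buf.get()
        if item is None:
            break
        time.sleep(0.01)  # simulate a slow downstream stage
        consumed.append(item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(consumed)  # [0, 1, 2, 3, 4, 5]
```

The same idea, spread across processes and machines, is what stream processing frameworks package up; the synchronous-procedure alternative gets this for free by blocking, which is part of why it feels easier.
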
serial_dev•11mo ago
It's important to consider whether it's even worth it.

I worked on stream processing; it was fun, but I also believe it was over-engineered and brittle. The customers didn't actually want real-time data: they looked at the calculated values once a week, then made decisions based on that.

Then I joined another company that somehow had money to pay 50-100 people, and they were using CSV files, shell scripts, batch processing, and all that. It solved the clients' needs, and they didn't have to maintain a complicated architecture and code that could otherwise have been difficult to reason about.

After I left, the first company, the one with stream processing, was bought by a competitor at a fire-sale price. Some of the tech was relevant to the acquirer, but the stream processing stuff was immediately shut down. The acquiring company had just simple batch processing, and they were printing money in comparison.

If you think it's still worth going with stream processing, give your reasoning to the team, and most reasonable developers would learn it if they really believe it's a significantly better solution for the given problem.

Not to over-simplify, but if you can't convince 5 out of 10 people to learn something that would make their job better, either the people are not up to the task, or you are wrong that stream processing would make a difference.

senderista•11mo ago
Yeah that reminds me of a startup I worked at that did real-time analytics for digital marketing campaigns. We went to all kinds of trouble to update dashboards with 5-minute latency, and real-time updates made for impressive sales demos, but I don't think we had a single customer that actually needed to make business decisions within 24 hours of looking at the data.
serial_dev•11mo ago
We were doing TV ad analytics by detecting ads on TV channels and checking web impact (among other things). The only thing is, most of these ads are deals made weeks or months in advance, so customers checked analytics about once before a renewal… so I'm not sure it needed to be near real time…
wging•11mo ago
https://mcfunley.com/whom-the-gods-would-destroy-they-first-...
nemothekid•11mo ago
I agree. Unless the downstream data feeds a system that makes automated decisions (e.g. HFT or ad buying), real-time analytics is rarely worth the cost. It's almost always easier and more robust to accept high tail latencies for data that humans consume, and as computers get faster and faster, that tail latency keeps shrinking anyway.

Systems that needed complex streaming architectures in 2015 could probably be handled today with fast disks and a large Postgres instance (or BigQuery).

porridgeraisin•11mo ago
Many successful ads feedback loops run at 15 minute granularities as well!
wwarner•11mo ago
personally i think streaming is quite a bit simpler. but as you point out, no one cares!
carefulfungi•11mo ago
Batch processing is just stream processing with a really big window ;-). More seriously, I find streaming windows are often the disconnect. Surprisingly often, users don't want windowed results. They want aggregation, filtering, uniqueness, ordering, and reporting over some batch. Or, they want to flexibly specify their window / partitioning / grouping for each reporting query. Modern OLAP systems are plenty fast enough to do that on the fly for most use cases - so even older streaming patterns like stream processing for real time stats in parallel with batch to an OLAP system aren't worth the complexity. Just query the DB and cache...
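
The quip above ("batch processing is just stream processing with a really big window") can be made concrete with a toy tumbling-window aggregator (plain Python, illustrative only):

```python
# A tumbling-window sum: when the window spans the whole input,
# the "streaming" result collapses to the batch result.
def tumbling_sums(values, window):
    return [sum(values[i:i + window]) for i in range(0, len(values), window)]

data = [3, 1, 4, 1, 5, 9, 2, 6]
print(tumbling_sums(data, 2))          # [4, 5, 14, 8]
print(tumbling_sums(data, len(data)))  # [31], same as sum(data)
```
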
timeinput•11mo ago
Fundamentally I think the question is what kind of streams are you processing?

My concept of stream processing is trying to process gigabits to gigabytes a second and turn it into something much, much smaller, so that it's manageable to store in a database and analyze. At that scale, to my mind, calling malloc is sometimes too expensive for 'stream processing', let alone using any of the technologies called out in this tech stack.

I understand backpressure and circuit breakers, but for my usual work they have to happen at the OS/process level; a metric that auto-scales a microservice worker after going through Prometheus + an HPA or something like that ends up with too many inefficiencies to be practical. A few threads on a single machine just work, whereas engineering a 'cloud native' solution takes ages.

Once I'm down to a job a second or less (and each job takes more than a few seconds to run, enough to hide the framework's overhead), things like Airflow start to work rather than just fall flat. But at that point, are these expensive frameworks worth it? I'm only producing 1-1000 jobs a second.

Stream processing with frameworks like Faust, Airflow, and Kafka Streams all seems like brittle overkill once you actually try to deploy and use them. How do I tune the PostgreSQL database for Airflow? How do I manage my S3 lifecycles to minimize cost?

A task queue + an HPA really feels like the right kind of thing at that scale, versus caring too much about backpressure, etc. when the data rate is 'low'. But I've generally been told by colleagues to reach for more complicated stream processors that perform worse, are (IMO) harder to orchestrate, and are (IMO) harder to manage and deploy.

jandrewrogers•11mo ago
There are both technical and organizational challenges created by stream processing. I like stream processing and have done a lot of work on high-performance stream engines but I am not blind to the practical issues.

Companies are organized around an operational tempo that reflects what their systems are capable of. Even if you replace one of their systems with a real-time or quasi-real-time stream processing architecture, nothing else in the organization operates with that low of a latency, including the people. It is a very heavy lift to even ask them to reorganize the way they do things.

A related issue is that stream processing systems still work poorly for some data models and often don’t scale well. Most implementations place narrow constraints on the properties of the data models and their statefulness. If you have a system sitting in the middle of your operational data model that requires logic which does not fit within those limitations then the whole exercise starts to break down. Despite its many downsides, batching generalizes much better and more easily than stream processing. This could be ameliorated with better stream processing tech (as in, core data structures, algorithms, and architecture) but there hasn’t been much progress on that front.

jll29•11mo ago
Very interesting - is WARC support on the roadmap?
dayjah•11mo ago
Do you mean this: https://en.m.wikipedia.org/wiki/WARC_(file_format) ?

Can you help me understand how this would plug into stream processing? My immediate thought is web page interaction replays, but that seems like a rather exotic use case?

gotoeleven•11mo ago
How do the creators of this plan to make money?
beanjuiceII•11mo ago
Get people on board as open source, then flip to some other license, add some pricing tiers, and now those users become customers even if they don't like it. Tried-and-true methodology.
amelius•11mo ago
You can always fork it
insane_dreamer•11mo ago
Does this include broker capabilities? If not, what's a recommended broker these days (for hosting in the cloud, i.e., on an EC2 instance)? I know AWS has its own MQTT broker, but it's quite pricey at high volumes.
xyst•11mo ago
So Kafka Connect and Kafka Streams, but in Rust?
chenquan•11mo ago
Hello, I am the founder of this project and I am very happy that a friend has shared it.

ArkFlow is positioned as a lightweight distributed stream processing engine that unifies streaming and batch. Building on DataFusion's huge ecosystem and ArkFlow's extensibility, we hope to grow a large data processing ecosystem that helps the community lower the barrier to data processing, because we have always believed that flowing data can generate greater value.

Finally, thanks to everyone for their attention.

fnord123•11mo ago
What does lightweight mean?
undefuser•11mo ago
I would like to understand more. What are the potential use cases for this tool?
gue-ni•11mo ago
"High-performance" is just a meaningless buzzword if you don't have any benchmarks or performance comparisons to comparable software
disintegrator•11mo ago
Very similar in appearance to Redpanda Connect (Benthos) which isn’t a bad thing at all. Would be good to elaborate on how error handling is done and what message delivery guarantees it comes with.