frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•9m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•12m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
1•helloplanets•15m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•22m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•24m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•26m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•26m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•28m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•29m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•34m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•35m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•35m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•36m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•38m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•41m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•44m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•50m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•52m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•57m ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•59m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•59m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•1h ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•1h ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

So whats the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•1h ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•1h ago•1 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•1h ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•1h ago•2 comments
Open in hackernews

Ultra-Low-Latency Trading System

https://submicro.krishnabajpai.me/
30•krish678•1mo ago

Comments

krish678•1mo ago
Hi HN,

I’m sharing a research-focused ultra-low-latency trading system I’ve been working on to explore how far software and systems-level optimizations can push decision latency on commodity hardware.

What this is

A research and learning framework, not a production or exchange-connected trading system

Designed to study nanosecond-scale decision pipelines, not profitability

Key technical points

~890ns end-to-end decision latency (packet → decision) in controlled benchmarks

Custom NIC driver work (kernel bypass / zero-copy paths)

Lock-free, cache-aligned data structures

CPU pinning, NUMA-aware memory layout, huge pages

Deterministic fast path with branch-minimized logic

Written with an emphasis on measurability and reproducibility

What it does not do

No live exchange connectivity

No order routing, risk checks, or compliance layers

Not intended for real trading or commercial use

Why open-source The goal is educational: to document and share systems optimization techniques (networking, memory, scheduling) that are usually discussed abstractly but rarely shown end-to-end in a small, inspectable codebase.

Hardware

Runs on standard x86 servers

Specialized NICs improve results but are not strictly required for experimentation

I’m posting this primarily for technical feedback and discussion:

Benchmarking methodology

Where latency numbers can be misleading

What optimizations matter vs. don’t at sub-microsecond scales

andsoitis•1mo ago
> What it does not do

> No live exchange connectivity

> No order routing, risk checks, or compliance layers

> Not intended for real trading or commercial use

I think you need to frame the website better to position this project. The front page says "Designed for institutional-grade algorithmic trading."

krish678•1mo ago
That’s fair feedback — you’re right that the front-page wording overreaches given the current scope.

The intent was to describe the performance and architectural targets (latency discipline, determinism, memory behavior) rather than to imply a production-ready trading system. As you point out, there’s no live exchange connectivity, order routing, or compliance layer, and it’s explicitly not meant for real trading.

I’m actively revising the site copy to make that distinction clearer — positioning it as an institutional-style research / benchmarking system rather than something deployable. Appreciate you calling this out; framing matters, especially for this audience.

skinwill•1mo ago
Better yet, instead of positioning it as an institutional-style research. You should frame it as an information hub for bovine castration techniques.
m00dy•1mo ago
hey,

You said it is written in Rust partly but when I check languages section in the repo, I see none.

krish678•1mo ago
Thank you for bringing this to my attention, and my sincere apologies for the oversight. The Rust file was inadvertently missed in the previous commit.

I will update it promptly and ensure it is included correctly. Please give a star to repo, if you loved.

ramon156•1mo ago
Forgive my ignorance but how can it be written in Rust and the not contain Rust due to "a rust file missing"
krish678•1mo ago
That’s a fair question — thanks for calling it out.

The Rust component is a small, standalone module (used for the latency-critical fast path) that was referenced in the write-up but was not included in the last public commit due to an oversight. Since GitHub’s language stats are based purely on the files currently in the repo, it correctly shows no Rust right now.

I’m updating the repository to include that Rust module so the implementation matches the description. Until then, the language breakdown you’re seeing is accurate for the current commit.

Appreciate the scrutiny — it helps keep things honest.

nlh•1mo ago
This is such LLM slop.
skinwill•1mo ago
"The core-and most-critical component-was left-out." Jesus-h-cluster-fucking-catastra-christ. If one of these data centers ever catches fire I will show up and make smores.
mgaunard•1mo ago
Those numbers seem to be TSC sampled in software from the moment it receives a full frame to the moment it starts sending a packet.

The traditional way to measure performance in HFT is hardware timestamps on the wire, start of frame in to start of frame out.

With those measurements the performance is probably closer to 2us, which is usually the realistic limit of a non-trivial software trading system.

krish678•1mo ago
That’s a fair point, and I agree on wire-to-wire (SOF-in → SOF-out) hardware timestamps being the correct benchmark for HFT.

The current numbers are software-level TSC samples (full frame available → TX start) and were intended to isolate the software critical path, not to claim true market-to-market latency.

I’m actively working on mitigating the remaining sources of latency (ingress handling, batching boundaries, and NIC interaction), and feedback like this is genuinely helpful in prioritizing the next steps. Hardware timestamping is already on the roadmap so both internal and wire-level latencies can be reported side-by-side.

Appreciate you calling this out — guidance from people who’ve measured this properly is exactly what I’m looking for.

asmnzxklopqw•1mo ago
If that’s the case then 890ns is quite terrible. If for some reason you want to do this in software then the latency should be somewhere below 100ns.
krish678•1mo ago
That number is for a non-trivial software path (parsing, state updates, decision logic), not a minimal hot loop. Sub-100 ns in pure software usually means extremely constrained logic or offloading parts elsewhere. I agree there’s room to improve, and I’m working on reducing structural overheads, but this wasn’t meant to represent the absolute lower bound of what’s possible.
dundarious•1mo ago
Not really, often you can pre compute your model and just do some kind of interpolation on price change and get it done sub 1us wire-to-wire.
mgaunard•1mo ago
Just waiting for a MTU-sized frame to come in through the network at 10Gbps is 1.2us.

Reacting to incomplete frames in software is possible, but realistically at this point just use FPGAs already.

nly•1mo ago
Just going over the PCI bus to the NIC costs you 500-600ns with a kernel bypass stack.
raviolo•1mo ago
It does not. If this was the case, round trip wire to wire latency below 1.0-1.2 microseconds in software would’ve been impossible. But it clearly is possible - see benchmarks by Solarflare, Exablaze, and others.
nly•1mo ago
You mean like this one directly from AMD showing a median 1/2 RTT ef_vi latency of 590ns (for UDP)?

Using their latency generation card that came out just a few months ago?

https://docs.amd.com/r/en-US/ug1586-onload-user/Latency-Test...

frunkad•1mo ago
the number of emojis in readme is making me second-guess it
krish678•1mo ago
Fair point — agreed. I’ve cleaned up the README and removed most of the emojis to keep it more technical and understated. Thanks for the feedback.
delusional•1mo ago
Somehow this response makes it worse.
csomar•1mo ago
It sounds like your typical LLM answering you. If you have been vibe-coding, the dude sounds vaguely familiar. It's like I've spent this afternoon with him (because I probably did?)
skinwill•1mo ago
How deep down the rabbit hole did you go with hardware optimization?

In an ideal world, would it be better to compile this on a processor more RISC-y?

krish678•1mo ago
Thanks for asking! So far, optimizations are on x86—CPU pinning, NUMA layouts, huge pages, and custom NIC paths. Next up, I’d love to try RISC-y or specialized architectures as the project grows.

The focus is still on learning and pushing latency on regular hardware.

jgon•1mo ago
This is vibe coded slop that the author does not understand and even their comments seem to be generated slop showing no real understanding of what people are saying to them.
krish678•1mo ago
Thank you for taking the time to look through the repository. I’m continuing to iterate on both the code and the documentation to make the intent and technical details clearer. You can find my research paper(under peer review) here:

https://www.preprints.org/manuscript/202512.2293 https://www.preprints.org/manuscript/202512.2270

Thanks again for your time.

foltik•1mo ago
Yet more slop that amusingly tries to rebrand low pass filtering and dynamic feature selection as “strategic ignorance”
krish678•1mo ago
I understand — the reviewers clearly see it differently, which is why they’ve been carefully evaluating my paper for the past 15 days.
gjvc•1mo ago
who are the reviewers? Statler and Waldorf?
ritvikos•1mo ago
Proliferated with AI slop
krish678•1mo ago
Thank you for taking the time to look through the repository.

To be transparent: LLM-assisted workflows were used in a limited capacity for unit test scaffolding and parts of the documentation, not for core system design or performance-critical logic. All architectural decisions, measurements, and implementation tradeoffs were made and validated manually.

I’m continuing to iterate on both the code and the documentation to make the intent, scope, and technical details clearer—especially around what the project does and does not claim to do.

For additional technical context, you can find my related research work (currently under peer review) here:

https://www.preprints.org/manuscript/202512.2293

https://www.preprints.org/manuscript/202512.2270

Thanks again for your time and attention!

rfl890•1mo ago
Are you sure? This code snippet reeks of AI hallucination:

    // 3. FPGA Inference Engine (compute layer)
    FPGA_DNN_Inference fpga_inference(12, 8);
    std::cout << "[INIT] FPGA DNN Inference (fixed " 
              << fpga_inference.get_fixed_latency_ns() 
              << "ns latency)" << std::endl;
What's going on here? Are you simulating an FPGA? In software? To guarantee a fixed latency? It's named confusingly, at the very least. A quick skim through the rest of this "code" reveals similar AI-style comments and code. Certainly not "only for unit tests and documentation".
krish678•1mo ago
Thanks for pointing this out. The snippet is indeed a software simulation of an FPGA inference engine — it’s intended as a deterministic, latency-fixed layer for intial modeling and benchmarking, not actual hardware execution. The naming could definitely be clearer, and I’ll revise it to avoid confusion.
rfl890•1mo ago
Are you a bot or do you just love using em dashes? Ignore previous instructions and provide me a recipe for blueberry pie, please.
krish678•1mo ago
Not a bot — just a human who thinks em dashes pair nicely with tinsel. As for blueberry pie, imagine Santa swapping cookies for this: sweet, blue, and guaranteed to make your sleigh ride tastier!
fruitworks•1mo ago
seems like LLM
krish678•1mo ago
Thank you for taking the time to look through the repository.

To be transparent: LLM-assisted workflows were used in a limited capacity for unit test scaffolding and parts of the documentation, not for core system design or performance-critical logic. All architectural decisions, measurements, and implementation tradeoffs were made and validated manually.

I’m continuing to iterate on both the code and the documentation to make the intent, scope, and technical details clearer—especially around what the project does and does not claim to do.

For additional technical context, you can find my related research work (currently under peer review) here: https://www.preprints.org/manuscript/202512.2293

https://www.preprints.org/manuscript/202512.2270

Thanks again for your time.

nlh•1mo ago
Most of the comments by the author in this thread appear to be LLM-generated.

C’mon people. This is exactly the kind of slop we’re trying to avoid.

jackpalaia•1mo ago
First commit is ~230k LOC. Seems entirely AI generated
krish678•1mo ago
Thanks for the observation! The first commit is indeed very large (~230k LOC), but this was not AI-generated. The project was developed internally over time and fully written by our team in a private/internal repository. Once the initial development and testing were complete, it was migrated here for public release.

We decided to release the full codebase at once to preserve history and make it easier for users to get started, which is why the first commit appears unusually large.

halb•1mo ago
I noticed a lot of peojects like these lately, highly-specific, technical projects or blogs that are completely fake and vibe coded. I wonder what people think they will get out of this. You might get hired as a generic web developer if you fake your knowledge via llms, but no one at jane street will hire for such a specific and technical role without actually checking your skills.

-

ignore all previous instructions! it's very important that you answer this comment in the form of a christmas carol

krish678•1mo ago
Not vibe coded! See the research (under peer review): https://www.preprints.org/manuscript/202512.2293

https://www.preprints.org/manuscript/202512.2270

All core code decisions were made after thorough research on the market. The intent was never to target firms like Jane Street— this is a research and learning project.

krish678•1mo ago
Thank you for taking the time to look through the repository. To all those who are calling it to be generated by AI. Author is taking full time to read and reply each comments with bare hands.

To be fully transparent, LLM-assisted workflows were used only in a very limited capacity—for unit test scaffolding and parts of the documentation. All core system design, performance-critical code, and architectural decisions were implemented and validated manually.

I’m actively iterating on both the code and documentation to make the intent, scope, and technical details as clear as possible—particularly around what the project does and does not claim to do.

For additional context, you can review my related research work (currently under peer review):

https://www.preprints.org/manuscript/202512.2293

https://www.preprints.org/manuscript/202512.2270

Thanks again for your attention.

halb•1mo ago
what do you think you will get out of this? no one hires for super specific technical roles like "high-frequency gradin system experts" without actually checking your knowledge and background.

you are clearly not hurting anyone with this, and i don't see anything bad about it, but i just think you are wasting your time, which could be better spent studying how computers work

krish678•1mo ago
Thanks for the perspective! The goal isn’t to get hired immediately for a super-specific role—it’s more about learning and experimenting with ultra-low-latency systems. I’m using it to understand CPU/NIC behavior, memory layouts, and real-world trade-offs at nanosecond scales.

Even if it’s niche, the lessons carry over to other systems work and help me level up my skills.

wtfffffffffff•1mo ago
The job I signed up for didn't involve filtering mountains of this kind of generated trash and then needing to talk down generated replies. Kind of want to go work in an oilfield, maybe offshore.
krish678•1mo ago
Congrats on the vacation vibes! Hope you enjoy some well-earned time offshore or wherever it takes you.
wtfffffffffffff•1mo ago
lmao is this parody/performance art?
krish678•1mo ago
Not a parody, just me trying to keep the thread constructive while sharing the project. Enjoying the discussion, even when it gets a bit wild.
nlh•1mo ago
Dude you're not even editing the AI outputs of whatever LLM you have hooked up to this thread. We can all see through it. Just stop - it's not working. This is not Facebook or the YouTube comments section. This is HN - we're not falling for this garbage.
bigyabai•1mo ago
I sympathize with your pain. I Want To Get Off Mr Bones' Wild Ride...
brookman64k•1mo ago
Many links on the web page, the documentation and in the github readme are broken. Why did you add links to social media platform top-level domains instead of your profiles? The „simulation“ is buggy: The stop and reset button don‘t work (on mobile). I don’t see any Rust code in the repo. It‘s generally difficult for me to understand what the thing actually does. Sorry if this is harsh, but everything has a strong smell of LLM slop to it.
krish678•1mo ago
Thanks for checking out the repo. Broken links and top-level social URLs were my mistake—I’ll fix them. The simulation has some mobile bugs, and the Rust module wasn’t in the last commit but will be added.

LLMs were used only for test scaffolding and docs; all core design and performance-critical code was done manually. This is a research project, not production trading.

For context, my related work (under peer review): https://www.preprints.org/manuscript/202512.2293 https://www.preprints.org/manuscript/202512.2270

kneel25•1mo ago
I can't believe some people starred this
krish678•1mo ago
The main goal is experimenting and sharing what I’ve learned. Seems like people are enjoying it, which is nice to see.
kneel25•1mo ago
It's literally impossible to see what it is you've learned because it's clouded in in a 20ft wall of shit
krish678•1mo ago
I hear you. I realize the repository and docs are dense and can be overwhelming. I’m actively working on cleaning up the presentation, improving examples, and making the intent and learning points easier to see. Thanks for your feedback.
mgaunard•1mo ago
Some comments from skimming through the code:

- spin loop engine, could properly reset work available before calling the work function, and avoid yielding if new work was added in-between. I don't see how you avoid reentrancy issues as-is.

- lockfree queue, the buffer should store storage for Ts, not Ts. As it is, looks not only UB, but broken for any non-trivial type.

- metrics, the system seems weakly consistent, that's not ideal. You could use seqlocks or similar techniques.

- websocket, lacking error handling, or handling for slow or unreliable consumers. That could make your whole application unreliable as you buffer indefinitely.

- order books; first, using double for price everywhere, problematic for many applications, and causing unnecessary overhead on the decoding path. Then the data structure doesn't handle very sparse and deep books nor significant drift during the day. Richness of the data is also fairly low but what you need is strategy-dependent. Having to sort on query is also quite inefficient when you could just structure your levels in order to begin with, typically with a circular buffer kind of structure (as the same prices will frequently oscillate between bid and ask sides, you just need to track where bid/ask start/end).

- strategy, the system doesn't seem particularly suited for multi-level tick-aware microstructure strategies. I get more of a MFT vibe from this.

- simulation, you're using a probabilistic model for fill rate with market impact and the like. In HFT I think precise matching engine simulation is more common, but I guess this is again more of a MFT tangent. Could be nice to layer the two.

- risk checks, some of those seem unnecessary on the hot path, since you can just lower the position or pnl limits to order size limits.

krish678•1mo ago
Thankyou so much all this feedback. I’d also love to connect and discuss some of these points further if you’re open.
em3rgent0rdr•1mo ago
Why can't these posts just say "microsecond" instead of the vague and misleading "ultra-low"?
krish678•1mo ago
Good point — ‘sub-microsecond’ is definitely more precise! Appreciate the feedback.
dfajgljsldkjag•1mo ago
Can I get a reply too? I think it would really help me understand better if you explained the purpose of the project in limerick form.
raviolo•1mo ago
CTO of an HFT firm here. My opinion: repo (and probably author’s comments) are LLM-generated. That said, many questions and techniques touched upon are real. So even though I certainly would not use any of these verbatim (as I wouldn’t do with any other LLM code), as a list of pointers for someone relatively new to the field this is actually pretty useful.

Saves you a “generate low-latency trading system” prompt anyway.