
OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
1•schwentkerr•1m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
1•blenderob•2m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
1•gmays•3m ago•0 comments

Computer Science from the Bottom Up

https://www.bottomupcs.com/
1•gurjeet•3m ago•0 comments

Show HN: I built a toy compiler as a young dev

https://vire-lang.web.app
1•xeouz•5m ago•0 comments

You don't need Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•5m ago•0 comments

Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
1•nicholascarolan•7m ago•0 comments

Convergent Discovery of Critical Phenomena Mathematics Across Disciplines

https://arxiv.org/abs/2601.22389
1•energyscholar•8m ago•1 comments

Ask HN: Will GPU and RAM prices ever go down?

1•alentred•8m ago•0 comments

From hunger to luxury: The story behind the most expensive rice (2025)

https://www.cnn.com/travel/japan-expensive-rice-kinmemai-premium-intl-hnk-dst
1•mooreds•9m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
5•mindracer•10m ago•1 comments

A New Crypto Winter Is Here and Even the Biggest Bulls Aren't Certain Why

https://www.wsj.com/finance/currencies/a-new-crypto-winter-is-here-and-even-the-biggest-bulls-are...
1•thm•10m ago•0 comments

Moltbook was peak AI theater

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
1•Brajeshwar•11m ago•0 comments

Why Claude Cowork is a math problem Indian IT can't solve

https://restofworld.org/2026/indian-it-ai-stock-crash-claude-cowork/
1•Brajeshwar•11m ago•0 comments

Show HN: Built a space travel calculator with vanilla JavaScript v2

https://www.cosmicodometer.space/
2•captainnemo729•11m ago•0 comments

Why a 175-Year-Old Glassmaker Is Suddenly an AI Superstar

https://www.wsj.com/tech/corning-fiber-optics-ai-e045ba3b
1•Brajeshwar•11m ago•0 comments

Micro-Front Ends in 2026: Architecture Win or Enterprise Tax?

https://iocombats.com/blogs/micro-frontends-in-2026
1•ghazikhan205•13m ago•0 comments

These White-Collar Workers Actually Made the Switch to a Trade

https://www.wsj.com/lifestyle/careers/white-collar-mid-career-trades-caca4b5f
1•impish9208•14m ago•1 comments

The Wonder Drug That's Plaguing Sports

https://www.nytimes.com/2026/02/02/us/ostarine-olympics-doping.html
1•mooreds•14m ago•0 comments

Show HN: Which chef knife steels are good? Data from 540 Reddit threads

https://new.knife.day/blog/reddit-steel-sentiment-analysis
1•p-s-v•14m ago•0 comments

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•15m ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•15m ago•1 comments

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•15m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•16m ago•0 comments

Claude Opus 4.6 extends LLM pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•17m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•19m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•20m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•21m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•21m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•22m ago•1 comments

Optimizing FizzBuzz in Rust

https://github.com/nrposner/fizzcrate
30•Bogdanp•5mo ago

Comments

jasonjmcghee•5mo ago
Reminds me of the famous thread on Code Golf Stack Exchange. I'll link the Rust one directly, but one C++ answer claims 283 GB/s, and others are in the ballpark of 50 GB/s.

The Rust one claims around 3 GB/s:

https://codegolf.stackexchange.com/a/217455

You can take this much further! I think throughput is a great way to measure it.

Things like pre-allocation, no branching, constants, SIMD, etc.
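
A rough sketch of the pre-allocate-and-measure-throughput angle (the count and buffer sizing below are illustrative, not taken from the linked answers):

  use std::io::{self, Write};
  use std::time::Instant;

  fn main() -> io::Result<()> {
      const N: u64 = 10_000_000;

      // Pre-allocate one big buffer and format everything into it.
      let mut buf: Vec<u8> = Vec::with_capacity(8 * N as usize);
      let start = Instant::now();
      for n in 1..=N {
          match (n % 3, n % 5) {
              (0, 0) => buf.extend_from_slice(b"FizzBuzz\n"),
              (0, _) => buf.extend_from_slice(b"Fizz\n"),
              (_, 0) => buf.extend_from_slice(b"Buzz\n"),
              _ => {
                  buf.extend_from_slice(n.to_string().as_bytes());
                  buf.push(b'\n');
              }
          }
      }
      let secs = start.elapsed().as_secs_f64();
      eprintln!("generated {} bytes at ~{:.2} GB/s", buf.len(), buf.len() as f64 / secs / 1e9);

      // A single write of the whole buffer.
      io::stdout().write_all(&buf)
  }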

hyperhello•5mo ago
Maybe I’m missing something but can’t you unroll it very easily by 15 prints at a time? That would skip the modulo checks entirely, and you could actually cache everything but the last two or three digits.
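
Roughly what unrolling by 15 prints at a time might look like (a sketch of the parent's idea, not the article's code; the digit-caching part is left out):

  // One fixed 15-number pattern per iteration: no modulo checks at all.
  fn main() {
      let mut n: u64 = 1;
      while n + 14 <= 100 {
          println!("{}", n);
          println!("{}", n + 1);
          println!("Fizz");
          println!("{}", n + 3);
          println!("Buzz");
          println!("Fizz");
          println!("{}", n + 6);
          println!("{}", n + 7);
          println!("Fizz");
          println!("Buzz");
          println!("{}", n + 10);
          println!("Fizz");
          println!("{}", n + 12);
          println!("{}", n + 13);
          println!("FizzBuzz");
          n += 15;
      }
      // The tail for ranges that aren't a multiple of 15 is omitted here.
  }
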
Terretta•5mo ago
> Maybe I’m missing something but can’t you unroll it very easily by 15...

Sure, 3 x 5 = 15. But, FTA:

> But then, by coincidence, I watched an old Prime video and decided to put the question to him: how would you extend this to 7 = "Baz"?

> He expanded the if-else chain: I asked him to find a way to do it without explosively increasing the number of necessary checks with each new term added. After some hints and more discussion...

Which is why I respectfully submit that almost all examples of FizzBuzz, including the article's first, are "wrong" while the refactor is "right".

As for the optimizations, they don't focus only on 3 and 5; they include 7 throughout.
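
A minimal sketch of the extensible shape the parent is describing, assuming a plain (divisor, word) table rather than the article's actual refactor:

  // Adding 7 => "Baz" is one more table entry, not another branch in an
  // if-else chain.
  fn main() {
      let rules: &[(u64, &str)] = &[(3, "Fizz"), (5, "Buzz"), (7, "Baz")];
      for n in 1..=105u64 {
          let mut line = String::new();
          for &(d, word) in rules {
              if n % d == 0 {
                  line.push_str(word);
              }
          }
          if line.is_empty() {
              line = n.to_string();
          }
          println!("{line}");
      }
  }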

ainiriand•5mo ago
In my opinion, a more accurate measure when you go down to the microsecond level is reading the TSC directly from the CPU. I've built a benchmark tool for that: https://github.com/sh4ka/hft-benchmarks

Also, I think that CPU pinning could help in this context, but perhaps I need to check the code on my machine first.
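
For reference, a minimal sketch of reading the TSC from Rust on x86_64 via the raw intrinsic (this is just the intrinsic, not the parent's benchmark tool):

  #[cfg(target_arch = "x86_64")]
  fn main() {
      use core::arch::x86_64::_rdtsc;

      // Cycle counts are only comparable on the same core at a stable
      // frequency, which is where CPU pinning comes in.
      let start = unsafe { _rdtsc() };
      let mut acc = 0u64;
      for n in 1..=1_000_000u64 {
          acc = acc.wrapping_add(n % 15);
      }
      let end = unsafe { _rdtsc() };
      println!("acc = {acc}, elapsed ~ {} TSC ticks", end - start);
  }

  #[cfg(not(target_arch = "x86_64"))]
  fn main() {
      eprintln!("rdtsc sketch only targets x86_64");
  }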

vlovich123•5mo ago
How does this compare with divan?
ainiriand•5mo ago
Divan is what I used as a reference for some parts of my work (mostly the CPU timestamp parts). My project is less complete, but it will also include other benchmarks that matter for HFT, such as network I/O and real trading patterns like order-placement overhead.
joshka•5mo ago
If you're going to the effort of writing a proc macro, you may as well output a string from the macro instead of code.

If you're going for idiomatic Rust, then you might instead output a type that has a Display impl rather than generating code that writes to stdout.
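
A sketch of the "type with a Display impl" idea (the FizzBuzz type here is made up for illustration, not taken from the crate):

  use std::fmt;

  // Formatting, not printing, is the API surface: the caller decides
  // whether the output goes to stdout, a String, or a file.
  struct FizzBuzz(u64);

  impl fmt::Display for FizzBuzz {
      fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
          match (self.0 % 3, self.0 % 5) {
              (0, 0) => write!(f, "FizzBuzz"),
              (0, _) => write!(f, "Fizz"),
              (_, 0) => write!(f, "Buzz"),
              _ => write!(f, "{}", self.0),
          }
      }
  }

  fn main() {
      for n in 1..=15 {
          println!("{}", FizzBuzz(n));
      }
  }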

Etherlord87•5mo ago
> At this point, I'm out of ideas. The most impactful move would probably be to switch to a faster terminal... but I'm already running Ghostty! I thought it was a pretty performant terminal to begin with!

But what is the point? Why do you want to optimize the display? If you want to be able to fizz-buzz for millions of numbers, then realistically you only want to compute them just before they are displayed.

Arnavion•5mo ago
Because the display is the bottleneck.
Etherlord87•5mo ago
Can you present some real-life scenario where this is an issue? Let's say you want to display the result on a webpage: the bottleneck of creating a DOM structure with all the <div>s and similar tags would be much more significant. But what you should do instead is just create a scrollbar and enough divs to fill the scrolling area; as the user drags the scrollbar's slider, you offset the divs by scrolled_height mod div_height and populate them with the right values for the scrolled range (which you can easily compute on each scroll event).

The only reason to care about those microseconds is when you want to really fill the console with millions of lines, but you shouldn't actually want to do that, I think, ever?

Arnavion•5mo ago
If OP is looking for ideas, there are two intermediate steps between the extremes of "write every line to stdout" and "build up a buffer of the whole output and then write it to stdout".

1. `stdout().lock()` and `writeln!()` to that. By default using `print*!()` will write to `stdout()` which takes a process-wide lock each time. (Funnily enough they use .lock() in the "build up a buffer of the whole output" section, just to do one .write_all() call, which is the one time they don't need to use .lock() because Stdout's impl of write_all() will only take the lock once anyway.)

2. Wrap the locked stdout in a `BufWriter` and `writeln!()` to that. It won't flush on every line, but it also won't buffer the entire output, so it's a middle point between speed and memory usage.
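
A minimal sketch of those two intermediate steps combined, locking stdout once and wrapping it in a BufWriter (the loop body is generic FizzBuzz, not the article's code):

  use std::io::{self, BufWriter, Write};

  fn main() -> io::Result<()> {
      // Take the process-wide lock once instead of once per println!.
      let stdout = io::stdout();
      let locked = stdout.lock();

      // Buffer lines in chunks: faster than a write per line, without
      // holding the entire output in memory.
      let mut out = BufWriter::new(locked);

      for n in 1u64..=1_000_000 {
          match (n % 3, n % 5) {
              (0, 0) => writeln!(out, "FizzBuzz")?,
              (0, _) => writeln!(out, "Fizz")?,
              (_, 0) => writeln!(out, "Buzz")?,
              _ => writeln!(out, "{n}")?,
          }
      }
      out.flush()
  }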

---

For the final proc macro approach, there is the option to unroll the loop in the generated code, and the option to generate a &'static str literal of the output.

bjoli•5mo ago
I remember writing it in high school and ending up using a wheel (a circular data structure) to avoid any modulo at all. Then my HS teacher said that it should be extensible, so I wrote a wheel generator.

Despite writing things in Scheme, I ended up being the fastest. It is no magic bullet, but if you only want the regular FizzBuzz it is a simple way to just about double the speed.

setr•5mo ago
Isn't a circular array implemented with a modulo to begin with? I don't see how you bypass it.
teo_zero•5mo ago

  // keep running residues instead of recomputing a modulo each iteration
  m3=m5=m7=1
  for ...
    ...
    m3 = m3==2?0:m3+1   // m3 stays equal to i mod 3
    m5 = m5==4?0:m5+1   // m5 stays equal to i mod 5
    m7 = m7==6?0:m7+1   // m7 stays equal to i mod 7
bjoli•5mo ago
You keep two counters.
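
A sketch of what a wheel plus a reset counter might look like in Rust (a guess at the shape of the approach, not the parent's Scheme code):

  // The Fizz/Buzz pattern repeats every lcm(3, 5) = 15 numbers, so one
  // precomputed "wheel" and a counter that resets replace both modulos.
  const WHEEL: [Option<&str>; 15] = [
      None, None, Some("Fizz"), None, Some("Buzz"),
      Some("Fizz"), None, None, Some("Fizz"), Some("Buzz"),
      None, Some("Fizz"), None, None, Some("FizzBuzz"),
  ];

  fn main() {
      let mut slot = 0usize;
      for n in 1u64..=100 {
          match WHEEL[slot] {
              Some(word) => println!("{word}"),
              None => println!("{n}"),
          }
          // Advance the wheel index without a modulo: reset at the end.
          slot += 1;
          if slot == WHEEL.len() {
              slot = 0;
          }
      }
  }
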
atiedebee•5mo ago
> But the obvious possibilities almost certainly won't be performant: integer modulo is a single CPU instruction [...]

Yes, it is a single instruction, but that is not indicative of the actual performance. Modulo on x86 is done through the div instruction, which takes tens of cycles. When you compile the code you'll likely see a multiply + shift instead, because the modulus is a constant.
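
To illustrate the kind of strength reduction involved, here is a sketch of a modular-inverse divisibility check that a constant modulus allows (the constants are the standard Hacker's Delight-style values, not something from the article):

  // n % d == 0 (for odd d) iff n * inverse(d) mod 2^32 <= (2^32 - 1) / d,
  // which needs only a multiply and a compare instead of a div.
  fn divisible_by_3(n: u32) -> bool {
      n.wrapping_mul(0xAAAA_AAAB) <= 0x5555_5555
  }

  fn divisible_by_5(n: u32) -> bool {
      n.wrapping_mul(0xCCCC_CCCD) <= 0x3333_3333
  }

  fn main() {
      for n in 1..=10_000u32 {
          assert_eq!(divisible_by_3(n), n % 3 == 0);
          assert_eq!(divisible_by_5(n), n % 5 == 0);
      }
      println!("modular-inverse checks agree with %");
  }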