frontpage.

Nintendo Wii Themed Portfolio

https://akiraux.vercel.app/
1•s4074433•2m ago•1 comment

"There must be something like the opposite of suicide "

https://post.substack.com/p/there-must-be-something-like-the
1•rbanffy•4m ago•0 comments

Ask HN: Why doesn't Netflix add a “Theater Mode” that recreates the worst parts?

1•amichail•5m ago•0 comments

Show HN: Engineering Perception with Combinatorial Memetics

1•alan_sass•11m ago•1 comment

Show HN: Steam Daily – A Wordle-like daily puzzle game for Steam fans

https://steamdaily.xyz
1•itshellboy•13m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•spenvo•13m ago•0 comments

Just Started Using AmpCode

https://intelligenttools.co/blog/ampcode-multi-agent-production
1•BojanTomic•14m ago•0 comments

LLM as an Engineer vs. a Founder?

1•dm03514•15m ago•0 comments

Crosstalk inside cells helps pathogens evade drugs, study finds

https://phys.org/news/2026-01-crosstalk-cells-pathogens-evade-drugs.html
2•PaulHoule•16m ago•0 comments

Show HN: Design system generator (mood to CSS in <1 second)

https://huesly.app
1•egeuysall•16m ago•1 comment

Show HN: 26/02/26 – 5 songs in a day

https://playingwith.variousbits.net/saturday
1•dmje•17m ago•0 comments

Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

https://github.com/Paraxiom/topological-coherence
1•slye514•19m ago•1 comment

Top AI models fail at >96% of tasks

https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/
4•codexon•19m ago•2 comments

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
1•NaOH•20m ago•0 comments

Bob Beck (OpenBSD) on why vi should stay vi (2006)

https://marc.info/?l=openbsd-misc&m=115820462402673&w=2
2•birdculture•24m ago•0 comments

Show HN: a glimpse into the future of eye tracking for multi-agent use

https://github.com/dchrty/glimpsh
1•dochrty•25m ago•0 comments

The Optima-l Situation: A deep dive into the classic humanist sans-serif

https://micahblachman.beehiiv.com/p/the-optima-l-situation
2•subdomain•25m ago•1 comment

Barn Owls Know When to Wait

https://blog.typeobject.com/posts/2026-barn-owls-know-when-to-wait/
1•fintler•25m ago•0 comments

Implementing TCP Echo Server in Rust [video]

https://www.youtube.com/watch?v=qjOBZ_Xzuio
1•sheerluck•26m ago•0 comments

LicGen – Offline License Generator (CLI and Web UI)

1•tejavvo•29m ago•0 comments

Service Degradation in West US Region

https://azure.status.microsoft/en-gb/status?gsid=5616bb85-f380-4a04-85ed-95674eec3d87&utm_source=...
2•_____k•29m ago•0 comments

The Janitor on Mars

https://www.newyorker.com/magazine/1998/10/26/the-janitor-on-mars
1•evo_9•31m ago•0 comments

Bringing Polars to .NET

https://github.com/ErrorLSC/Polars.NET
3•CurtHagenlocher•33m ago•0 comments

Adventures in Guix Packaging

https://nemin.hu/guix-packaging.html
1•todsacerdoti•34m ago•0 comments

Show HN: We had 20 Claude terminals open, so we built Orcha

1•buildingwdavid•34m ago•0 comments

Your Best Thinking Is Wasted on the Wrong Decisions

https://www.iankduncan.com/engineering/2026-02-07-your-best-thinking-is-wasted-on-the-wrong-decis...
1•iand675•34m ago•0 comments

Warcraftcn/UI – UI component library inspired by classic Warcraft III aesthetics

https://www.warcraftcn.com/
2•vyrotek•35m ago•0 comments

Trump Vodka Becomes Available for Pre-Orders

https://www.forbes.com/sites/kirkogunrinde/2025/12/01/trump-vodka-becomes-available-for-pre-order...
1•stopbulying•37m ago•0 comments

Velocity of Money

https://en.wikipedia.org/wiki/Velocity_of_money
1•gurjeet•39m ago•0 comments

Stop building automations. Start running your business

https://www.fluxtopus.com/automate-your-business
1•valboa•43m ago•1 comment

Optimizations That Aren't

https://zeux.io/2010/11/29/optimizations-that-arent/
19•daniel_alp•6mo ago

Comments

taeric•6mo ago
Point 4 really resonates with me. And it often goes hand in hand with the idea of a budget, both in terms of speed and memory. How much memory do you have at a given spot of the application? How much time? Can you meaningfully make use of any savings?

Sometimes you will find slack in unexpected places, as well: places that have extra time compared to what they actually use, or, more commonly, things that could have used more memory. It is amazing what you can do with extra memory. (Indeed, I think the majority of algorithmic advances that people love to talk about come from using extra memory?)
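
A minimal sketch of what a per-section time budget can look like in practice (ScopedBudget and the 4 ms figure are made up for illustration, not from the article):

    // Log when a section exceeds its time budget; names and numbers are hypothetical.
    #include <chrono>
    #include <cstdio>

    struct ScopedBudget {
        const char* name;
        double budget_ms;
        std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();

        ScopedBudget(const char* name, double budget_ms) : name(name), budget_ms(budget_ms) {}

        ~ScopedBudget() {
            double elapsed_ms = std::chrono::duration<double, std::milli>(
                std::chrono::steady_clock::now() - start).count();
            if (elapsed_ms > budget_ms)
                std::fprintf(stderr, "%s over budget: %.2f ms (budget %.2f ms)\n",
                             name, elapsed_ms, budget_ms);
        }
    };

    void update_physics() {
        ScopedBudget budget("physics", 4.0); // this phase gets 4 ms of a 16 ms frame
        // ... actual work ...
    }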

pnt12•6mo ago
I did some work in this area, concerning data pipelines, and it was a fun experience.

It's really satisfying to optimize (or do any kind of refactor) on well-tested code. Change the code, run the tests, fix it if they fail, keep it if they pass. Sometimes the code was not well tested, but it was slow, so there was double the reason to test and improve.

Having deterministic data for comparison is also good from a different perspective: slower feedback loop, but usually more variety, with edge cases you didn't think of. Transforming thousands of data points and getting 0 diffs compared to the original results is quite the sanity check!

Measuring can be difficult but really rewarding. I was doing this very technical work, but constantly writing reports on the outcomes (tables and later plots) and got great feedback from managers/clients, not only about the good results (when they happened, not always!) but also about the transparency and critical analysis.

We didn't really work with acceptance levels though. It was usually "this is slow now, and we expect more data later, so it must be faster". But it makes sense to define concrete acceptance criteria; it's just not always obvious. We'd go more by priorities: explore the slow parts, come up with hypotheses, chase the most promising ones depending on risk/reward. Easy fixes for quick wins, long stretches for potential big gains - but try to prototype first to validate, before committing to long efforts that may be fruitless.
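
A minimal sketch of the 0-diffs comparison mentioned above, with hypothetical transform_v1/transform_v2 standing in for the real pipeline (C++ here, regardless of the actual stack):

    // Run the known-good version and the optimized rewrite over the same deterministic
    // inputs and count mismatches; both transforms are made-up stand-ins.
    #include <cstdio>
    #include <vector>

    double transform_v1(double x) { return x * 2.0 + 1.0; } // original, known-good
    double transform_v2(double x) { return 1.0 + 2.0 * x; } // optimized rewrite under test

    int main() {
        std::vector<double> inputs;
        for (int i = 0; i < 100000; ++i)
            inputs.push_back(i * 0.001); // deterministic input set, no randomness

        int diffs = 0;
        for (double x : inputs)
            if (transform_v1(x) != transform_v2(x)) // exact compare; use a tolerance if FP reordering is expected
                ++diffs;

        std::printf("%d diffs out of %zu points\n", diffs, inputs.size());
        return diffs == 0 ? 0 : 1; // non-zero exit fails the check
    }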

kccqzy•6mo ago
> Measure the performance of the target code in a specific situation

A difficult part of optimization is actually trying to make the code work well in multiple specific situations. This often happens in library code, where different users call your code with very different input sizes. Sometimes a dumb algorithm works better. Sometimes a fancier algorithm with better big-O but bigger constant factors works better. In practice people measure both against the input size and dynamically choose the algorithm based on it. This has the pitfall of the heuristic not keeping up with hardware. It also becomes intractable if the performance characteristics depend on multiple factors; then you're trying to encode the minimum over a multi-dimensional space. The work involved in this kind of optimization is just exhausting.
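
A minimal sketch of that size-based dispatch, with a sort as the stand-in and a threshold that is exactly the kind of hardware-dependent magic number described above:

    // Hybrid sort: dumb O(n^2) algorithm for tiny inputs, std::sort otherwise.
    // The 32-element threshold is a made-up number, "measured" on one machine at one point in time.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    void insertion_sort(std::vector<int>& v) {
        for (std::size_t i = 1; i < v.size(); ++i)
            for (std::size_t j = i; j > 0 && v[j - 1] > v[j]; --j)
                std::swap(v[j - 1], v[j]);
    }

    void hybrid_sort(std::vector<int>& v) {
        const std::size_t kThreshold = 32; // heuristic; may not hold on other hardware or element types
        if (v.size() <= kThreshold)
            insertion_sort(v);             // small constant factors win on tiny inputs
        else
            std::sort(v.begin(), v.end()); // better asymptotics win once n is large
    }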

addaon•6mo ago
The other approach here is to provide access to the multiple implementations, document the (main) sensitivities of their performance, and let the caller do their own benchmarking to select the right one based on the specific situations they care about. It's a bit of kicking the can down the road, but it's also a bit of letting your customers (at least the ones who care) get the best results possible.
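
A minimal sketch of what exposing both could look like, with a made-up mylib and two sort variants as stand-ins for a real library:

    // Library side: ship both implementations and document their sensitivities.
    #include <algorithm>
    #include <chrono>
    #include <vector>

    namespace mylib {
    // Sensitivity: O(max value) extra memory, wins for small non-negative ints.
    void sort_dense(std::vector<int>& v) {
        if (v.empty()) return;
        std::vector<int> counts(*std::max_element(v.begin(), v.end()) + 1, 0);
        for (int x : v) ++counts[x];
        v.clear();
        for (int value = 0; value < (int)counts.size(); ++value)
            v.insert(v.end(), counts[value], value);
    }
    // Sensitivity: O(n log n) comparisons, wins on large or sparse value ranges.
    void sort_comparison(std::vector<int>& v) { std::sort(v.begin(), v.end()); }
    }

    // Caller side: benchmark both on representative data and keep the winner.
    using SortFn = void (*)(std::vector<int>&);

    SortFn pick_sort(const std::vector<int>& sample) {
        auto time_it = [](SortFn fn, std::vector<int> data) { // copy so each run sees the same input
            auto t0 = std::chrono::steady_clock::now();
            fn(data);
            return std::chrono::steady_clock::now() - t0;
        };
        return time_it(mylib::sort_dense, sample) < time_it(mylib::sort_comparison, sample)
                   ? mylib::sort_dense
                   : mylib::sort_comparison;
    }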