frontpage.

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•44s ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
1•mkyang•2m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•12m ago•0 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•16m ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•17m ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
1•ambitious_potat•23m ago•0 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•23m ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
1•irreducible•23m ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•25m ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•30m ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•41m ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•46m ago•1 comment

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•52m ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
1•alexjplant•54m ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
3•akagusu•54m ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•56m ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•1h ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•1h ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
7•DesoPK•1h ago•3 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•1h ago•1 comment

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
35•mfiguiere•1h ago•20 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
3•meszmate•1h ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long"(Sonnet73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•1h ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•1h ago•1 comment

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•1h ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•1h ago•1 comment

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
5•gmays•1h ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•1h ago•1 comment

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•1h ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•1h ago•1 comment

Memory optimizations to reduce CPU costs

https://ayende.com/blog/203011-A/memory-optimizations-to-reduce-cpu-costs
56•jbjbjbjb•5mo ago

Comments

userbinator•5mo ago
The given task can be accomplished with no more than a few kilobytes of RAM, a constant independent of the input and output sizes, but unfortunately I suspect the vast majority of programmers now have absolutely no idea how to do so.
Radle•5mo ago
Only real programmers know how to do that.
01HNNWZ0MV43FF•5mo ago
Okay Fermat
shoo•5mo ago
I can see how it'd be possible to transform from the input tabular format to the JSON format, streaming record by record, using a small constant amount of memory, provided the size of input records is bounded independently of the record count. You need to maintain a position offset into the input across records, but that's about it.
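
A minimal sketch of that kind of record-by-record transform, in Go (the CSV-on-stdin input, the column names, and the map-per-record output shape are invented for illustration, not taken from the article):

    package main

    import (
        "bufio"
        "encoding/csv"
        "encoding/json"
        "io"
        "os"
    )

    // Stream a tabular input (CSV on stdin) to a JSON array on stdout,
    // holding only one record in memory at a time.
    func main() {
        in := csv.NewReader(bufio.NewReader(os.Stdin))
        out := bufio.NewWriter(os.Stdout)
        defer out.Flush()

        header, err := in.Read() // column names, e.g. "term,count"
        if err != nil {
            panic(err)
        }

        out.WriteString("[")
        for i := 0; ; i++ {
            row, err := in.Read()
            if err == io.EOF {
                break
            }
            if err != nil {
                panic(err)
            }
            if i > 0 {
                out.WriteString(",")
            }
            // One small map per record; it becomes garbage as soon
            // as the record has been written out.
            rec := make(map[string]string, len(header))
            for j, name := range header {
                rec[name] = row[j]
            }
            buf, _ := json.Marshal(rec)
            out.Write(buf)
        }
        out.WriteString("]\n")
    }

Memory use stays flat no matter how many records flow through, since nothing outlives the loop iteration except the buffered writer.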

But maybe we'd need to know more about how the output data is consumed to know whether this would actually help much in the real application. If the next stage of processing wants to randomly access records using Get(int i), where i is the index of the item, then even if we transform the input to JSON with a constant amount of RAM, we still have to store that output JSON somewhere so we can Get those items.

The blog post mentioned "padding"; I didn't immediately understand what that was referring to (padding in the output format?), but I guess it must mean struct padding: the items were previously stored as an array of structs, while the code in the article transposed everything into homogeneous arrays, eliminating the per-item padding overhead.

vrnvu•5mo ago
Padding in the post refers to memory alignment.

If we had an "array of structs" instead of a "struct of arrays", each element would be: string reference (8) + long (8) + int (4) + padding (4) = 24 bytes.
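
The same arithmetic in Go comes out slightly differently (a Go string header is 16 bytes rather than an 8-byte reference), but the shape of the trade-off is identical. A quick check, with invented field names:

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Array-of-structs: every element carries its own alignment padding.
    type EntryAoS struct {
        Term  string // 16-byte string header (pointer + length)
        Count int64  // 8
        Flag  int32  // 4, then 4 bytes of padding to keep 8-byte alignment
    }

    // Struct-of-arrays: homogeneous columns, no per-element padding,
    // and the numeric columns contain no pointers at all.
    type EntriesSoA struct {
        Terms  []string
        Counts []int64
        Flags  []int32
    }

    func main() {
        fmt.Println(unsafe.Sizeof(EntryAoS{}))  // 32: 28 bytes of data + 4 of padding
        fmt.Println(unsafe.Alignof(EntryAoS{})) // 8
    }

Multiply those 4 wasted bytes per element by tens of millions of records and the padding alone is a few hundred megabytes.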

xnorswap•5mo ago
How about you enlighten us rather than just taunt us with your superior knowledge?
bilbo-b-baggins•5mo ago
If the “Task” is outputting the JSON for terms to a file, it can be streamed one row at a time - with memory reused after each row is read and the output written. That could be done with a few KB of program space assuming you’re parsing the CSV and outputting the JSON manually instead of using a larger library.

The problem isn’t well constrained because it seems to imply that for some reason it needs to be all accessible in memory, doesn’t specify the cardinality of terms, doesn’t specify whether Get(i) is used in a way that requires that particular interface for accessing a row by number.

If I were to do it, I'd just parse a Page at a time and update a metadata index saying Page P contains entries starting at N. The output file could be memory-mapped and only the metadata loaded, allowing direct indexing into the correct Page, which could be quickly scanned for a record; that would maybe use 1-2 MB of RAM for the metadata plus whatever Pages are actually being touched.
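
A rough sketch of that sort of page index in Go (the PageMeta fields, page boundaries, and values are invented; the real layout would depend on the output format):

    package main

    import (
        "fmt"
        "sort"
    )

    // One metadata entry per page: the index of the first record in the
    // page and the byte offset where the page starts in the output file.
    type PageMeta struct {
        StartIndex int   // index of the first record in this page
        Offset     int64 // byte offset of the page in the file
    }

    // pageFor returns the metadata of the page containing record i,
    // assuming meta is sorted by StartIndex and i >= meta[0].StartIndex.
    // The caller would then seek (or mmap) to Offset and scan forward.
    func pageFor(meta []PageMeta, i int) PageMeta {
        // Find the first page whose StartIndex is > i, then step back one.
        n := sort.Search(len(meta), func(p int) bool {
            return meta[p].StartIndex > i
        })
        return meta[n-1]
    }

    func main() {
        meta := []PageMeta{{0, 0}, {1000, 65536}, {2000, 131072}}
        fmt.Println(pageFor(meta, 1500)) // {1000 65536}
    }

Only the metadata slice has to live in memory; the pages themselves are paged in by the OS as they are touched.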

But like I said, the problem is not well constrained enough for even a solution like that to be optimal, since it would suffer under full-dataset sequential or random access, as opposed to hot Pages and a long tail.

/shrug specs matter if you’re in the optimization phase

userbinator•5mo ago
Apparently you're not interested in thinking either, which is another thing I've noticed with many developers these days...

The sibling comment provided a good hint already. All you need to store are some file offsets, amounting to a few dozen bytes.

userbinator•5mo ago
Thank you for demonstrating your ignorance.
foota•5mo ago
Smaller things are faster to copy, etc. The fun part is that the opposite is true as well: when you have some constant load on a service, making the requests faster means you will have fewer requests in flight at once (Little's law), and the aggregate memory consumed by those in-flight requests will hence be less.
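
A back-of-the-envelope illustration in Go, with invented rates and sizes:

    package main

    import "fmt"

    func main() {
        const (
            arrivalRate   = 2000.0 // requests per second (constant load)
            perRequestMem = 256.0  // KB held while a request is in flight
        )
        // Little's law: in-flight = arrival rate x latency.
        for _, latency := range []float64{0.200, 0.100} { // seconds
            inFlight := arrivalRate * latency
            fmt.Printf("latency %.0fms -> %.0f in flight, ~%.0f MB live\n",
                latency*1000, inFlight, inFlight*perRequestMem/1024)
        }
    }

Halving the latency halves the number of concurrent requests (400 down to 200 here), and therefore halves the memory their buffers pin at any instant.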
Panzer04•5mo ago
That's not even the point they're really making here, IMO.

The significant decrease they talk about is a side effect of their chosen language having a GC. This means the strings take more work to deal with than expected.

This speaks more to the fact that the often-small costs associated with certain operations do eventually add up. It's not entirely clear in the post where and when the cost from the GC is incurred, though; I'd presume on creation and destruction?

tialaramex•5mo ago
Even without a GC, actual strings are potentially expensive because each of them is a heap allocation. If you have a small string optimisation you avoid this for small strings (e.g. popular C++ standard library string types can have up to 22 bytes of SSO, and Rust's CompactString has 24 bytes), but I wouldn't expect a GC language to have SSO.
yvdriess•5mo ago
The cost of a string array is paid on every GC phase. That array may contain references, so the GC has to check each element every time to see whether anything has changed. An int array cannot contain references, so it can be skipped.

edit: There are tricks to avoid traversing a compound object every time, but assume that at least one of the 80M objects in that giant array gets modified between GC activations.
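
Go shows the same effect: a []int64 lives in pointer-free ("noscan") memory the collector never traces, while a []string must be scanned element by element. A crude, unscientific way to see it (slice size picked arbitrarily):

    package main

    import (
        "fmt"
        "runtime"
        "strconv"
        "time"
    )

    func timeGC(label string) {
        start := time.Now()
        runtime.GC()
        fmt.Printf("%s live: GC took %v\n", label, time.Since(start))
    }

    func main() {
        const n = 20_000_000

        ints := make([]int64, n) // pointer-free: contents never scanned
        for i := range ints {
            ints[i] = int64(i)
        }
        timeGC("[]int64")

        strs := make([]string, n) // every element is a pointer to trace
        for i := range strs {
            strs[i] = strconv.Itoa(i)
        }
        timeGC("[]string")

        runtime.KeepAlive(ints)
        runtime.KeepAlive(strs)
    }

The second forced GC has to mark tens of millions of string headers and their backing bytes; the first one gets to skip the int slice entirely.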

Panzer04•5mo ago
That seems like a huge burden, surely not? How often would a GC typically check for hanging references?
yvdriess•5mo ago
That's most of the work performed by a marking GC.

How much of the total CPU cost the GC accounts for depends entirely on the application, the GC implementation, and the language. It's famously hard to measure memory-management overhead; GC in production is anywhere between 7% and 82% (Cai et al., ISPASS 2022). I measured about 19% geomean overhead in accurate simulation, by ignoring the instructions involved in GC/MM, on Python's pyperf benchmarks.

sgarland•5mo ago
The stunning inefficiency of storing the key with every value, even without any GC-specific issues, should give one pause.
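
One way to see it is in the output shape itself: marshalling a slice of structs repeats every key for every record, while a columnar shape writes each key once. A small Go illustration, with invented field names:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Row struct {
        Term  string `json:"term"`
        Count int64  `json:"count"`
    }

    type Columns struct {
        Terms  []string `json:"terms"`
        Counts []int64  `json:"counts"`
    }

    func main() {
        rows := []Row{{"alpha", 3}, {"beta", 5}}
        cols := Columns{Terms: []string{"alpha", "beta"}, Counts: []int64{3, 5}}

        a, _ := json.Marshal(rows) // keys "term"/"count" repeated per record
        b, _ := json.Marshal(cols) // each key appears once, values packed

        fmt.Println(string(a)) // [{"term":"alpha","count":3},{"term":"beta","count":5}]
        fmt.Println(string(b)) // {"terms":["alpha","beta"],"counts":[3,5]}
    }

With millions of records, those repeated key strings can dwarf the actual payload.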
extraisland•5mo ago
Is this similar to data locality?

https://gameprogrammingpatterns.com/data-locality.html

sharts•5mo ago
Couldn’t one just lazy load and parse as needed instead of loading potentially 40 million rows into memory? Or better yet, if you have that many…databases exist