frontpage.

Google in Your Terminal

https://gogcli.sh/
1•johlo•27s ago•0 comments

Shannon: Claude Code for Pen Testing

https://github.com/KeygraphHQ/shannon
1•hendler•41s ago•0 comments

Anthropic: Latest Claude model finds more than 500 vulnerabilities

https://www.scworld.com/news/anthropic-latest-claude-model-finds-more-than-500-vulnerabilities
1•Bender•5m ago•0 comments

Brooklyn cemetery plans human composting option, stirring interest and debate

https://www.cbsnews.com/newyork/news/brooklyn-green-wood-cemetery-human-composting/
1•geox•5m ago•0 comments

Why the 'Strivers' Are Right

https://greyenlightenment.com/2026/02/03/the-strivers-were-right-all-along/
1•paulpauper•6m ago•0 comments

Brain Dumps as a Literary Form

https://davegriffith.substack.com/p/brain-dumps-as-a-literary-form
1•gmays•7m ago•0 comments

Agentic Coding and the Problem of Oracles

https://epkconsulting.substack.com/p/agentic-coding-and-the-problem-of
1•qingsworkshop•7m ago•0 comments

Malicious packages for dYdX cryptocurrency exchange empty user wallets

https://arstechnica.com/security/2026/02/malicious-packages-for-dydx-cryptocurrency-exchange-empt...
1•Bender•7m ago•0 comments

Show HN: I built a <400ms latency voice agent that runs on a 4GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
1•shubham-coder•8m ago•0 comments

Penisgate erupts at Olympics; scandal exposes risks of bulking your bulge

https://arstechnica.com/health/2026/02/penisgate-erupts-at-olympics-scandal-exposes-risks-of-bulk...
3•Bender•8m ago•0 comments

Arcan Explained: A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
1•fanf2•10m ago•0 comments

What did we learn from the AI Village in 2025?

https://theaidigest.org/village/blog/what-we-learned-2025
1•mrkO99•10m ago•0 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
1•bri3d•13m ago•0 comments

The P in PGP isn't for pain: encrypting emails in the browser

https://ckardaris.github.io/blog/2026/02/07/encrypted-email.html
2•ckardaris•15m ago•0 comments

Show HN: Mirror Parliament where users vote on top of politicians and draft laws

https://github.com/fokdelafons/lustra
1•fokdelafons•16m ago•1 comments

Ask HN: Opus 4.6 ignoring instructions, how to use 4.5 in Claude Code instead?

1•Chance-Device•17m ago•0 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
1•ColinWright•20m ago•0 comments

Jim Fan calls pixels the ultimate motor controller

https://robotsandstartups.substack.com/p/humanoids-platform-urdf-kitchen-nvidias
1•robotlaunch•23m ago•0 comments

Exploring a Modern SMPTE 2110 Broadcast Truck with My Dad

https://www.jeffgeerling.com/blog/2026/exploring-a-modern-smpte-2110-broadcast-truck-with-my-dad/
1•HotGarbage•23m ago•0 comments

AI UX Playground: Real-world examples of AI interaction design

https://www.aiuxplayground.com/
1•javiercr•24m ago•0 comments

The Field Guide to Design Futures

https://designfutures.guide/
1•andyjohnson0•25m ago•0 comments

The Other Leverage in Software and AI

https://tomtunguz.com/the-other-leverage-in-software-and-ai/
1•gmays•27m ago•0 comments

AUR malware scanner written in Rust

https://github.com/Sohimaster/traur
3•sohimaster•29m ago•1 comments

Free FFmpeg API [video]

https://www.youtube.com/watch?v=6RAuSVa4MLI
3•harshalone•29m ago•1 comments

Are AI agents ready for the workplace? A new benchmark raises doubts

https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-do...
2•PaulHoule•34m ago•0 comments

Show HN: AI Watermark and Stego Scanner

https://ulrischa.github.io/AIWatermarkDetector/
1•ulrischa•34m ago•0 comments

Clarity vs. complexity: the invisible work of subtraction

https://www.alexscamp.com/p/clarity-vs-complexity-the-invisible
1•dovhyi•35m ago•0 comments

Solid-State Freezer Needs No Refrigerants

https://spectrum.ieee.org/subzero-elastocaloric-cooling
2•Brajeshwar•36m ago•0 comments

Ask HN: Will LLMs/AI Decrease Human Intelligence and Make Expertise a Commodity?

1•mc-0•37m ago•1 comments

From Zero to Hero: A Brief Introduction to Spring Boot

https://jcob-sikorski.github.io/me/writing/from-zero-to-hello-world-spring-boot
1•jcob_sikorski•37m ago•1 comments

A friendly tour of process memory on Linux

https://www.0xkato.xyz/linux-process-memory/
246•0xkato•3mo ago

Comments

sleepytimetea•3mo ago
Website blocked as a threat/unsafe domain.
drbig•3mo ago
False alarm.
0xkato•3mo ago
lol
offmycloud•3mo ago
What browser blocked it?
foobiekr•3mo ago
Umbrella seems to be blocking it, for one.
jeroenhd•3mo ago
Sounds like your security software is broken. https://www.virustotal.com/gui/url/9e0c8d513f58a8053284b8145...
icedchai•3mo ago
Are you on your work laptop? Your corporate IT "security theater" department may not recognize .xyz as a valid TLD.
drbig•3mo ago
Instruction pipelining, and this, is exactly why I wish we still had the time to go back to "it is exactly as it is": think the 6502, or any architecture that does not pretend/map/table/proxy/ringaway anything.

That, but a hell of a lot of it, with fast interconnect!

... one can always dream.

taeric•3mo ago
I'm curious how this dream is superior to where we are? Yes, things are more complex. But it isn't like this complexity didn't buy us anything. Quite the contrary.
harry8•3mo ago
> ...buy us anything.

Totally depends on who "us" is and isn't. What problem is being solved, etc. In the aggregate, the trade-off has clearly been beneficial to most people. If what you want to do got traded away, well, you can still dream.

taeric•3mo ago
Right, but that was kind of my question? What is better about not having a lot of these things?

That is, phrasing it as a dream makes it sound like you imagine it would be better somehow. What would be better?

layer8•3mo ago
Things would be simpler, more predictable and tractable.

For example, real-time guarantees (hard time constraints on how long a particular type of event will take to process) would be easier to provide.

taeric•3mo ago
But why do we think that? The complexity would almost certainly still exist; it would just be up a layer now, with no guarantee that you could hit the same performance characteristics that we are able to hit today.

Put another way, if that would truly be a better place, what is stopping people from building it today?

layer8•3mo ago
Performance wouldn’t be the same, and that’s why nobody is manufacturing it. The industry prefers living with higher complexity when it yields better performance. That doesn’t mean that some people like in this thread wouldn’t prefer if things were more simple, even at the price of significantly lower performance.

> The complexity would almost certainly still exist.

That doesn’t follow. A lot of the complexity is purely to achieve the performance we have.

taeric•3mo ago
I'm used to people arguing for simpler setups because the belief is that they could make them more performant. This was specifically the push for RISC back in the day, no?

To that end, I was assuming the idea would be that we think we could have faster systems if we didn't have this stuff. If that is not the assumption, I'm curious what the appeal is?

layer8•3mo ago
That’s certainly not the assumption here. The appeal is, as I said, that the systems would be more predictable and tractable, instead of being a tarpit of complexity. It would be easier to reason about them, and about their runtime characteristics. Side-channel attacks wouldn’t be a thing, or at least not as much. Nowadays it’s rather difficult to reason about the runtime characteristics of code on modern CPUs, about what exactly will be going on behind the scenes. More often than not, you have to resort to testing how specific scenarios will behave, rather than being able to predict the general case.
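
A tiny illustration of that (a hypothetical sketch, Linux and glibc assumed; the file name warmup.c is made up): the exact same read loop, timed twice over the same buffer. The instructions are identical, but the first pass pays for demand paging and a cold TLB/caches, so in practice you end up measuring rather than predicting.

    /* Sketch: time the same read loop twice over a freshly calloc'd buffer.
     * The code is identical both times; only hidden state (page tables, TLB,
     * caches) differs, so the timings differ.
     * Build: cc -O2 warmup.c -o warmup && ./warmup
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    enum { N = 256 * 1024 * 1024 };   /* 256 MiB */

    static double pass_ms(const volatile unsigned char *buf)
    {
        struct timespec t0, t1;
        unsigned long sum = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < N; i += 4096)   /* touch one byte per 4 KiB page */
            sum += buf[i];                     /* volatile read: not optimized away */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        (void)sum;
        return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    }

    int main(void)
    {
        unsigned char *buf = calloc(N, 1);
        if (!buf) return 1;
        printf("first pass:  %.2f ms\n", pass_ms(buf));   /* page faults, cold TLB */
        printf("second pass: %.2f ms\n", pass_ms(buf));   /* same code, already mapped */
        free(buf);
        return 0;
    }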
taeric•3mo ago
I guess I don't know that I understand why you would dream of this, though? Just go out and program on some simpler systems? Retro computing makes the rounds a lot and is perfectly doable.
harry8•3mo ago
Think about using a modern x86-64 cpu core to run one process with no operating system. Know exactly what is in cache memory. Know exactly what deadlines you can meet and guarantee that.

It's quite a different thing to running a general purpose OS to multiplex each core with multiple processes and a hardware walked page table, TLB etc.

Obviously you know what you prefer for your laptop.

As we get more and more cores, perhaps the system designs that have evolved may head back toward that simplicity somewhat? Anything above x% CPU usage gets its own isolated, uninterrupted core(s)? Uses low-cost IPC? Hard to speculate with any real confidence.
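
For what it's worth, the manual version of the dedicated-core idea already exists. A minimal sketch, assuming Linux and glibc (the file name pin.c is just for illustration): pin the current process to one CPU with sched_setaffinity, which is roughly what you would combine with kernel options like isolcpus/nohz_full to approximate an isolated core.

    /* Sketch (Linux/glibc assumed): pin the calling process to CPU 3.
     * Build: cc -O2 pin.c -o pin && ./pin
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);          /* CPU 3 is an arbitrary example; it must exist on the machine */

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {   /* pid 0 = this process */
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned pid %d to CPU 3; now running only there\n", (int)getpid());
        /* ... latency-sensitive work would go here ... */
        return 0;
    }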

taeric•3mo ago
I just don't know that I see it running any better for the vast majority of processes that I could imagine running on it. Was literally just transcoding some video, playing a podcast, and browsing the web. Would this be any better?

I think that is largely my qualm with the dream. The only way this really works is if we had never gone with preemptive multitasking, it seems? And that just doesn't seem like a win.

You do have me curious to know whether things really do automatically get pinned to a CPU when they're above a threshold. I know that was talked about some; did we actually start doing that?

harry8•3mo ago
> Was literally just transcoding some video, playing a podcast, and browsing the web.

Yeah that's the perfect use case for current system design. Nobody sane wants to turn that case into an embedded system running a single process with hard deadline guarantees. Your laptop may not be ideal for controlling a couple of tonnes of steel at high speed, for example. Start thinking about how you would design for that and you'll see the point (whether you want to agree or not).

taeric•3mo ago
Apologies, almost missed that you had commented here.

I confess I assumed writing controllers for a couple of tonnes of steel at high speed would not use the same system design as a higher level computer would? In particular, I would not expect most embedded applications to use virtual memory? Is that no longer the case?

harry8•2mo ago
"Hard Real Time" is the magic phrase to go as deep as you want to.
taeric•2mo ago
This isn't really answering my question. Have they started using virtual memory in hard real time applications? Just generally searching the term confirms that they are still seen as not compatible.
harry8•2mo ago
In addition to search engines, you can learn a great deal about all sorts of things using an LLM. This works well enough if you don't want to pay. They are very patient and you can go as deep as you want. https://duckduckgo.com/?q=DuckDuckGo+AI+Chat&ia=chat&duckai=...
ojbyrne•3mo ago
The article is essentially describing virtual memory (with enhancements) which predates the 6502 by a decade or so.
Delk•3mo ago
IMO it's not even quite right in its description. The first picture that describes virtual memory shows all processes as occupying the same "logical" address space with the page table just mapping pages in the "logical" address space to physical addresses one-to-one. In reality (at least in all VM systems I know of) each process has its own independent virtual address space.
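
A quick way to see the per-process address spaces (a hedged sketch, Linux assumed; the file name fork_vm.c is illustrative): fork and print the address of the same global in parent and child. The virtual address is identical in both, yet each process sees its own value, because after copy-on-write the two page tables map that page to different physical frames.

    /* Sketch: same virtual address, two independent address spaces.
     * After fork(), parent and child print the same pointer value for x,
     * but a write in one is invisible to the other.
     * Build: cc fork_vm.c -o fork_vm && ./fork_vm
     */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int x = 42;   /* lives in the data segment of each process's address space */

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                 /* child: modify x, then show address and value */
            x = 1000;
            printf("child : &x = %p, x = %d\n", (void *)&x, x);
            return 0;
        }
        wait(NULL);                     /* parent: let the child finish first */
        printf("parent: &x = %p, x = %d\n", (void *)&x, x);   /* still 42 */
        return 0;
    }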
loeg•3mo ago
But why?
drbig•3mo ago
The point is that we should acknowledge those "cheats" came with their reasons and that they did improve performance, etc. But they also came with a cost (Meltdown, Spectre, anyone?) and fundamentally introduced _complexities_, which, at today's level of manufacturing and the end of Moore's law, may not be the best tradeoffs.

I'm just expressing the general sentiment of distaste for piling stuff upon stuff and holding it together with duct tape, without ever stepping back and looking at what we have, or at least should have, learnt and where we are today in the technological stack.

eru•3mo ago
Do you want to throw out out-of-order execution and pipelining while you are at it, too?

I'm semi-serious: there are actually modern processor designs that put this burden on the programmer (or rather their fancy compiler / code generator) in order to keep the silicon simple. See eg https://en.wikipedia.org/wiki/Groq#Language_Processing_Unit

mhavelka77•3mo ago
"mmap, without the fog"

I don't know if this is just me being paranoid, but every time I see a phrase like this in an article I feel like it's co-written by an LLM and it makes me mad...

puika•3mo ago
The article does feel like Gemini when you ask it to explain something to you in layman's terms, but co-authored by ChatGPT with nonsense like "without the fog".
ramon156•3mo ago
I love these tiny explainers! Even if I already know what a piece is about, having that confirmation as I read helps.