
AlphaFace: High Fidelity and Real-Time Face Swapper Robust to Facial Pose

https://arxiv.org/abs/2601.16429
1•PaulHoule•28s ago•0 comments

Scientists discover “levitating” time crystals that you can hold in your hand

https://www.nyu.edu/about/news-publications/news/2026/february/scientists-discover--levitating--t...
1•hhs•2m ago•0 comments

Rammstein – Deutschland (C64 Cover, Real SID, 8-bit – 2019) [video]

https://www.youtube.com/watch?v=3VReIuv1GFo
1•erickhill•2m ago•0 comments

Tell HN: Yet Another Round of Zendesk Spam

1•Philpax•3m ago•0 comments

Postgres Message Queue (PGMQ)

https://github.com/pgmq/pgmq
1•Lwrless•6m ago•0 comments

Show HN: Django-rclone: Database and media backups for Django, powered by rclone

https://github.com/kjnez/django-rclone
1•cui•9m ago•1 comments

NY lawmakers proposed statewide data center moratorium

https://www.niagara-gazette.com/news/local_news/ny-lawmakers-proposed-statewide-data-center-morat...
1•geox•11m ago•0 comments

OpenClaw AI chatbots are running amok – these scientists are listening in

https://www.nature.com/articles/d41586-026-00370-w
2•EA-3167•11m ago•0 comments

Show HN: AI agent forgets user preferences every session. This fixes it

https://www.pref0.com/
4•fliellerjulian•13m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model

https://github.com/ghostty-org/ghostty/pull/10559
2•DustinEchoes•15m ago•0 comments

Show HN: SSHcode – Always-On Claude Code/OpenCode over Tailscale and Hetzner

https://github.com/sultanvaliyev/sshcode
1•sultanvaliyev•15m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/microsoft-appointed-a-quality-czar-he-has-no-direct-reports-and-no-b...
2•RickJWagner•17m ago•0 comments

Multi-agent coordination on Claude Code: 8 production pain points and patterns

https://gist.github.com/sigalovskinick/6cc1cef061f76b7edd198e0ebc863397
1•nikolasi•17m ago•0 comments

Washington Post CEO Will Lewis Steps Down After Stormy Tenure

https://www.nytimes.com/2026/02/07/technology/washington-post-will-lewis.html
5•jbegley•18m ago•1 comments

DevXT – Building the Future with AI That Acts

https://devxt.com
2•superpecmuscles•19m ago•4 comments

A Minimal OpenClaw Built with the OpenCode SDK

https://github.com/CefBoud/MonClaw
1•cefboud•19m ago•0 comments

The silent death of Good Code

https://amit.prasad.me/blog/rip-good-code
3•amitprasad•20m ago•0 comments

The Internal Negotiation You Have When Your Heart Rate Gets Uncomfortable

https://www.vo2maxpro.com/blog/internal-negotiation-heart-rate
1•GoodluckH•21m ago•0 comments

Show HN: Glance – Fast CSV inspection for the terminal (SIMD-accelerated)

https://github.com/AveryClapp/glance
2•AveryClapp•22m ago•0 comments

Busy for the Next Fifty to Sixty Bud

https://pestlemortar.substack.com/p/busy-for-the-next-fifty-to-sixty-had-all-my-money-in-bitcoin-...
1•mithradiumn•23m ago•0 comments

Imperative

https://pestlemortar.substack.com/p/imperative
1•mithradiumn•24m ago•0 comments

Show HN: I decomposed 87 tasks to find where AI agents structurally collapse

https://github.com/XxCotHGxX/Instruction_Entropy
1•XxCotHGxX•27m ago•1 comments

I went back to Linux and it was a mistake

https://www.theverge.com/report/875077/linux-was-a-mistake
3•timpera•29m ago•1 comments

Octrafic – open-source AI-assisted API testing from the CLI

https://github.com/Octrafic/octrafic-cli
1•mbadyl•30m ago•1 comments

US Accuses China of Secret Nuclear Testing

https://www.reuters.com/world/china/trump-has-been-clear-wanting-new-nuclear-arms-control-treaty-...
3•jandrewrogers•31m ago•1 comments

Peacock. A New Programming Language

2•hashhooshy•35m ago•1 comments

A postcard arrived: 'If you're reading this I'm dead, and I really liked you'

https://www.washingtonpost.com/lifestyle/2026/02/07/postcard-death-teacher-glickman/
4•bookofjoe•37m ago•1 comments

What to know about the software selloff

https://www.morningstar.com/markets/what-know-about-software-stock-selloff
2•RickJWagner•40m ago•0 comments

Show HN: Syntux – generative UI for websites, not agents

https://www.getsyntux.com/
3•Goose78•41m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/ab75cef97954
2•birdculture•41m ago•0 comments

Subnanosecond Flash Memory

https://www.nature.com/articles/s41586-025-08839-w
57•thund•9mo ago

Comments

thund•9mo ago
> The Dirac channel flash shows a program speed of 400 picoseconds, non-volatile storage and robust endurance over 5.5 × 10^6 cycles.
rcxdude•9mo ago
So, about 2 ms of use? That lifetime makes this seem far from practical as a replacement for SRAM.
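(A quick back-of-the-envelope behind that "about 2ms" figure, assuming every one of the rated endurance cycles is a back-to-back 400 ps program operation; a rough sketch, not a claim about real usage patterns.)

    cycles = 5.5e6          # rated endurance (program/erase cycles)
    program_time = 400e-12  # program speed from the paper, in seconds
    lifetime_s = cycles * program_time
    print(f"{lifetime_s * 1e3:.1f} ms")  # -> 2.2 ms of continuous programming
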
codedokode•9mo ago
Can it be used as FPGA config RAM? Also, maybe for faster SSDs?
dgoldstein0•9mo ago
My guess is they are thinking of it more as a replacement for DRAM or disk. I didn't read far enough to learn if it failed after all those cycles or just "we stopped testing it". Either way it sounds promising.
Someone•9mo ago
I think they mean write-read cycles.
jmpman•9mo ago
Would it be lower power than DRAM?
Tepix•9mo ago
Yes!
infogulch•9mo ago
If it's low power you can stack it higher without worrying so much about it melting. This might make a great high-capacity cache.
kryptiskt•9mo ago
The cycles are programming cycles, not clock cycles. Their stability measurement was 60,000 seconds.
kvemkon•9mo ago
Recent discussion:

Researchers develop picosecond-level flash memory device (19.04.2025)

https://news.ycombinator.com/item?id=43735452

userbinator•9mo ago
> to demonstrate that the device remains stable even after 60,000s

A little over 16 hours? That's suspiciously short. The endurance vs retention curve isn't clear from this article either; they say "10 years" and "5.5 million cycles" but it seems more like you either get 10 years and 1 cycle, or 5.5M cycles to immediate failure with no regard to retention.

It reminds me of this old paper on testing USB drives for endurance, where they just hammered at the flash until it failed to program immediately and "concluded" that the endurance was many orders of magnitude higher than the manufacturer's specifications, with no attention paid to retention at all: https://www.usenix.org/event/fast10/tech/full_papers/boboila...

tlb•9mo ago
It's more like DRAM with a much longer refresh time (60 ks instead of 60 ms).
IanCal•9mo ago
60,000 seconds was the amount of time it was tested for; they then extrapolate that out linearly. It doesn't need refreshing that often.
tlb•9mo ago
DRAM will also normally hold most of its data for 1000x longer than the rated (usually 60 ms) refresh time. This has sometimes been used to recover secrets from powered down computers. The rated refresh time is chosen to give near-zero errors over years of operation, accounting for worst-case leakage from any bit, but most bits leak much less than that.
wtallis•9mo ago
I think older systems also tended to use a fixed refresh rate, rather than refreshing more often as the DRAM temperature rises. Temperature sensors on DRAM are more common now, so systems don't have to be so conservative with the refresh intervals.
dragontamer•9mo ago
> 60,000 seconds was the amount of time it was tested for,

Surely if they already have a test setup, then having a test last for 600,000 seconds isn't very hard?

Things that look linear over a short period can turn out to be exponential over longer periods. I don't think we can assume linear extrapolation here. There could be physics at play where the stored voltage degrades exponentially.

It's a good start for a test. But it seems weird: a paper like this would have taken much more than ~1 week to write, so running a test for ~1 week was well within this group's reach. The data is oddly missing.

londons_explore•9mo ago
Even if it did have only 16-hour retention, this memory would have plenty of uses.

Adding a couple of percent of ECC data tends to 10x retention anyway, so there is a direct engineering trade-off between retention and capacity.
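(To give a feel for how cheap error-correction parity gets at larger block sizes, here is a sketch computing the minimum single-error-correcting Hamming/SECDED overhead; the block sizes are arbitrary examples, and real flash controllers use stronger BCH/LDPC codes.)

    # Smallest r with 2**r >= k + r + 1 corrects one bit error in k data
    # bits; SECDED (detect double errors too) adds one more parity bit.
    def secded_overhead(k_bits: int) -> float:
        r = 1
        while 2**r < k_bits + r + 1:
            r += 1
        return (r + 1) / k_bits

    for k in (64, 512 * 8, 4096 * 8):  # a 64-bit word, 512 B, 4 KiB
        print(f"{k:>6} data bits: {secded_overhead(k):.2%} parity overhead")
    # -> 12.50% for a word, 0.34% for 512 B, 0.05% for 4 KiB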

foxyv•9mo ago
That would be perfect for any high-performance working storage on replicated distributed applications. Stuff like Redis, Cassandra, etc. You would never turn off the power anyway. Places where you are otherwise using a RAM disk or tmpfs storage.
bob1029•9mo ago
Assume the memory is instant. We still need to communicate with it across physical distance. How far away the memories are in space is way more critical than the speed of any one element in isolation.

Why are we constrained to such a relatively small amount of L1 cache? What would stop us from extending this arbitrarily?

ben-schaaf•9mo ago
> How far away the memories are in space is way more critical than the speed of any one element in isolation.

Correct me if I'm wrong, but I don't think this is true. If you put instant memory a whole meter away it'll still only have a round trip of 6.6ns at the speed of light, which is approximately the latency of L2. Given how close L2 is, I don't think distance is a large factor in its latency.
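(Checking that number: the vacuum round trip over one meter, as a quick sketch; real on-die signals propagate slower than c.)

    c = 299_792_458   # speed of light, m/s
    distance = 1.0    # meters, one way
    rtt = 2 * distance / c
    print(f"{rtt * 1e9:.2f} ns")  # -> 6.67 ns round trip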

davemp•9mo ago
The problem is that resistance and parasitic capacitance scale with wire length; the e^(-t/RC) settling behavior is going to limit your max frequency.
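(A sketch of why length hurts so much: for a distributed RC line, both R and C grow linearly with length, so the delay grows with its square. The per-mm values below are illustrative assumptions for a thin on-chip wire, not measured figures.)

    # Elmore delay of a distributed RC wire: t ~ 0.5 * R_total * C_total.
    r_per_mm = 1_000    # ohms per mm (assumed)
    c_per_mm = 200e-15  # farads per mm (assumed)
    for length_mm in (1, 2, 5, 10):
        delay_s = 0.5 * (r_per_mm * length_mm) * (c_per_mm * length_mm)
        print(f"{length_mm:>2} mm: {delay_s * 1e12:6.0f} ps")
    # 1 mm -> 100 ps but 10 mm -> 10,000 ps: quadratic in length
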
bobmcnamara•9mo ago
HP was the only CPU vendor I recall that went for massive L1 with their PA-RISC chips, some with 1-2MB of L1. I'm going to say a large L1 is ~1MB for this comment.

There are power, speed, and complexity trade-offs in cache design. Here were a few of them:

Direct-mapping is the simplest approach, and means a given address can only exist in one location, but a problem occurs when two addresses map to the same cache line - one is evicted even if there's plenty of space in the cache elsewhere.

What if we built an associative cache, where every line had an address tag? Then we can fully use the cache space. But it's far more complicated to search: a miss requires checking every cache line, and so does a hit if it's to be fast.

Many systems today use a mix. Smaller caches are often direct-mapped. Larger caches tend to be set-associative: in effect 2-8 direct-mapped ways that can all be searched for an address at the same time, or within a few cycles of each other.
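(A minimal sketch of the mapping being described, with made-up geometry: a direct-mapped cache gives each address exactly one candidate slot, so two colliding addresses evict each other; a 4-way set-associative cache gives each address four.)

    LINE = 64                         # bytes per cache line (assumed)
    SETS = 1024                       # number of sets (assumed)
    def set_index(addr):
        return (addr // LINE) % SETS  # which set the address maps to
    def tag(addr):
        return addr // (LINE * SETS)  # what must match for a hit

    # Two addresses that collide in a direct-mapped cache:
    a, b = 0x10000, 0x10000 + LINE * SETS
    assert set_index(a) == set_index(b) and tag(a) != tag(b)
    # Direct-mapped: loading b evicts a even if the rest of the cache is
    # empty. 4-way: a and b occupy two of the set's four ways, at the
    # cost of comparing four tags on every access.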

Another problem is evictions becoming a future cache miss. With only a large L1, a cacheline was either in fast L1 or in DRAM. There's often a write buffer or victim cache between them to try to hide dirty eviction latency, but a subsequent fetch will pay the DRAM access cost. As we scale the cache size up, L1 access speed becomes challenging and eventually it's more effective to limit the L1 and build an even larger, slower L2; then we get the advantage that L2 is still faster than DRAM and we can prefetch into it.

This cache hierarchy tends to fit typical access patterns better as well - for many workloads most accesses will tend to be clumped together. For streaming workloads like video processing that won't fit in a L1 cache anyway, the algorithms are usually aware of row/column/striding impacts on cache utilization already.

There's probably more to consider, like SMP.

bcrl•9mo ago
PA-RISC clock speeds were relatively low for the time of their introduction. For example, the PA-8900 was launched in May 2005. The low end Athlon 64 of May 2005 was clocked at 2.2GHz (and eventually hit 3.2GHz) versus the PA-RISC top clock of 1.1GHz. You can always build a larger L1 cache in the same process tech if you're willing to sacrifice clock speed sufficiently.
jeffbee•9mo ago
L1 cache has fast access because it is small. If you make it larger, you necessarily make it slower to access.
hinkley•9mo ago
Access time is a function of areal density, not unlike hard drives. Plus, access logic grows at least logarithmically with the number of lines in the cache, so you're probably looking at something in the neighborhood of 50% slower access for every doubling of L1 cache size.
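(Taking that 50%-per-doubling estimate at face value, a quick sketch of how it compounds; the starting size and cycle count are assumptions for illustration, not measurements.)

    # If each doubling costs ~50% more access time (speculative),
    # relative latency compounds as 1.5 ** doublings.
    base_kb, base_cycles = 32, 4  # assumed starting point
    for d in range(5):
        print(f"{base_kb << d:>4} KB: ~{base_cycles * 1.5 ** d:.1f} cycles")
    # 32 KB ~4.0 ... 512 KB ~20.2 cycles: big L1s get slow fast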

Now as to why cache sizes didn't increase much once clock speeds stagnated and feature size continued to decline, I couldn't say. But L3 caches didn't use to exist, and L2 has gotten bigger.

ein0p•9mo ago
And then throw in some co-located compute there using chiplets, entirely bypassing the memory bus and PCIe. That'd be _the_ ideal Transformer substrate. Memory bandwidth bottleneck just disappears for the most part. Memory size bottleneck, too.