frontpage.

ShowHN: Make OpenClaw Respond in Scarlett Johansson’s AI Voice from the Film Her

https://twitter.com/sathish316/status/2020116849065971815
1•sathish316•1m ago•0 comments

CReact Version 0.3.0 Released

https://github.com/creact-labs/creact
1•_dcoutinho96•3m ago•0 comments

Show HN: CReact – AI Powered AWS Website Generator

https://github.com/creact-labs/ai-powered-aws-website-generator
1•_dcoutinho96•3m ago•0 comments

The rocky 1960s origins of online dating (2025)

https://www.bbc.com/culture/article/20250206-the-rocky-1960s-origins-of-online-dating
1•1659447091•9m ago•0 comments

Show HN: Agent-fetch – Sandboxed HTTP client with SSRF protection for AI agents

https://github.com/Parassharmaa/agent-fetch
1•paraaz•10m ago•0 comments

Why there is no official statement from Substack about the data leak

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
5•witnessme•14m ago•1 comments

Effects of Zepbound on Stool Quality

https://twitter.com/ScottHickle/status/2020150085296775300
2•aloukissas•17m ago•1 comments

Show HN: Seedance 2.0 – The Most Powerful AI Video Generator

https://seedance.ai/
1•bigbromaker•20m ago•0 comments

Ask HN: Do we need "metadata in source code" syntax that LLMs will never delete?

1•andrewstuart•26m ago•1 comments

Pentagon cutting ties w/ "woke" Harvard, ending military training & fellowships

https://www.cbsnews.com/news/pentagon-says-its-cutting-ties-with-woke-harvard-discontinuing-milit...
6•alephnerd•29m ago•2 comments

Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? [pdf]

https://cds.cern.ch/record/405662/files/PhysRev.47.777.pdf
1•northlondoner•29m ago•1 comments

Kessler Syndrome Has Started [video]

https://www.tiktok.com/@cjtrowbridge/video/7602634355160206623
1•pbradv•32m ago•0 comments

Complex Heterodynes Explained

https://tomverbeure.github.io/2026/02/07/Complex-Heterodyne.html
3•hasheddan•32m ago•0 comments

EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
3•ArtemZ•44m ago•5 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•45m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
2•LiamPowell•47m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
5•duxup•49m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•51m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•1h ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•1h ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
3•savrajsingh•1h ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•1h ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•1h ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•1h ago•1 comments

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
2•g1raffe•1h ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•1h ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
3•rolph•1h ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•1h ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•1h ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•1h ago•1 comments

Subnanosecond Flash Memory

https://www.nature.com/articles/s41586-025-08839-w
57•thund•9mo ago

Comments

thund•9mo ago
> The Dirac channel flash shows a program speed of 400 picoseconds, non-volatile storage and robust endurance over 5.5 × 10^6 cycles.
rcxdude•9mo ago
So, about 2 ms of use? That lifetime seems far from practical as a replacement for SRAM.
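(Rough arithmetic, as a sketch, assuming every endurance cycle is a full 400 ps program pulse done back to back:)

  # 5.5e6 endurance cycles at one 400 ps program pulse each:
  print(5.5e6 * 400e-12)  # ~2.2e-3 s, i.e. roughly 2 ms of continuous writes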
codedokode•9mo ago
Can it be used as FPGA config RAM? Also, maybe for faster SSDs?
dgoldstein0•9mo ago
My guess is they are thinking of it more as a replacement for DRAM or disk. I didn't read far enough to learn if it failed after all those cycles or just "we stopped testing it". Either way it sounds promising.
Someone•9mo ago
I think they mean write-read cycles.
jmpman•9mo ago
Would it be lower power than DRAM?
Tepix•9mo ago
Yes!
infogulch•9mo ago
If it's low power you can stack it higher without worrying so much about it melting. This might make a great high capacity cache.
kryptiskt•9mo ago
The cycles are programming cycles, not clock cycles. Their stability measurement was 60,000 seconds.
kvemkon•9mo ago
Recent discussion:

Researchers develop picosecond-level flash memory device (19.04.2025)

https://news.ycombinator.com/item?id=43735452

userbinator•9mo ago
> to demonstrate that the device remains stable even after 60,000s

A little over 16 hours? That's suspiciously short. The endurance vs retention curve isn't clear from this article either; they say "10 years" and "5.5 million cycles" but it seems more like you either get 10 years and 1 cycle, or 5.5M cycles to immediate failure with no regard to retention.

It reminds me of this old paper on testing USB drives for endurance, where they just hammered at the flash until it failed to program immediately and "concluded" that the endurance was many orders of magnitude higher than the manufacturer's specifications, with no attention paid to retention at all: https://www.usenix.org/event/fast10/tech/full_papers/boboila...

tlb•9mo ago
It's more like DRAM with a much longer refresh time (60 ks instead of 60 ms).
IanCal•9mo ago
60,000 seconds was the amount of time it was tested for; they then extrapolate that out linearly. It doesn't need refreshing that often.
tlb•9mo ago
DRAM will also normally hold most of its data for 1000x longer than the rated (usually 60 ms) refresh time. This has sometimes been used to recover secrets from powered-down computers. The rated refresh time is chosen to give near-zero errors over years of operation, accounting for worst-case leakage from any bit, but most bits leak much less than that.
wtallis•9mo ago
I think older systems also tended to use a fixed refresh rate, rather than refreshing more often as the DRAM temperature rises. Temperature sensors on DRAM are more common now, so systems don't have to be so conservative with the refresh intervals.
dragontamer•9mo ago
> 60,000 seconds was the amount of time it was tested for,

Surely if they already have a test setup, then having a test last for 600,000 seconds isn't very hard?

Things that look linear for a short period can end up being exponential over longer periods. I don't think we can assume linear extrapolation here. There could be physics at play where exponential degradation of the voltage occurs.

It's a good start of a test. But it seems weird in that a paper like this would have taken much more than ~1 week to write, so making a test last ~1 week for their calculations seems within the feasibility of this group. But it's oddly missing data.
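(A quick illustration of why the short window is the issue; the 10-year decay constant below is purely an assumption for the sketch:)

  import math

  # An exponential decay is indistinguishable from its linear tangent when
  # t << tau, so a 60,000 s test can't rule out much faster long-term decay.
  tau = 10 * 365 * 24 * 3600           # assumed decay constant: ~10 years, in seconds
  for t in (6e4, 6e6, 6e8, tau):
      exact = math.exp(-t / tau)       # exponential model of the retained signal
      linear = 1 - t / tau             # linear extrapolation of the early slope
      print(f"t={t:.0e}s  exp={exact:.4f}  linear={linear:.4f}")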

londons_explore•9mo ago
Even if it did have a 16 hour retention, this memory would have plenty of uses.

Adding a couple of percent of ECC data tends to 10x retention anyway, so there is a direct engineering trade off between retention and capacity.

foxyv•9mo ago
That would be perfect for any high performance working storage on replicated distributed applications. Stuff like Redis, Cassandra, etc... You would never turn off the power anyways. Places where you are otherwise using RAM disk or TMPFS storage.
bob1029•9mo ago
Assume the memory is instant. We still need to communicate with it across physical distance. How far away the memories are in space is way more critical than the speed of any one element in isolation.

Why are we constrained to such a relatively small amount of L1 cache? What would stop us from extending this arbitrarily?

ben-schaaf•9mo ago
> How far away the memories are in space is way more critical than the speed of any one element in isolation.

Correct me if I'm wrong, but I don't think this is true. If you put instant memory a whole meter away it'll still only have a round trip of 6.6ns at the speed of light, which is approximately the latency of L2. Given how close L2 is, I don't think distance is a large factor of its latency.
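(Back-of-the-envelope check, treating the signal as travelling at c; real interconnect is slower, so this is a lower bound:)

  # Round-trip time to a memory 1 m away at the speed of light.
  c = 299_792_458.0            # m/s
  distance_m = 1.0             # assumed one-way distance
  round_trip_ns = 2 * distance_m / c * 1e9
  print(round_trip_ns)         # ~6.7 ns, roughly L2-latency territory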

davemp•9mo ago
The problem is that resistance and parasitic capacitance scale with wire length, and the e^(-t/RC) charging behavior is going to limit your max frequency.
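(A rough sketch of how that scales; the per-mm R and C figures below are assumed, not process data:)

  # For an unbuffered wire, R and C both grow with length, so the RC time
  # constant (and hence the settling time) grows roughly quadratically.
  r_per_mm = 100.0        # ohms per mm, assumed
  c_per_mm = 0.2e-12      # farads per mm, assumed
  for length_mm in (1, 2, 4, 8):
      tau = (r_per_mm * length_mm) * (c_per_mm * length_mm)
      print(f"{length_mm} mm wire: tau = {tau * 1e12:.1f} ps")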
bobmcnamara•9mo ago
HP was the only CPU vendor I recall that went for massive L1 with their PA-RISC chips, some with 1-2MB of L1. I'm going to say a large L1 is ~1MB for this comment.

There are power, speed, and complexity trade-offs in cache design. Here were a few of them:

Direct-mapping is the simplest approach, and means a given address can only exist in one location, but a problem occurs when two addresses map to the same cache line - one is evicted even if there's plenty of space in the cache elsewhere.

What if we built an associative cache, where every line had an address indicator? Then we can fully use the cache space. But it's far more complicated to search: a miss requires checking every cache line, and a hit has to pay for the same search.

Many systems today use a mix. Smaller caches are often direct mapped. Larger caches tend to be set-associative: effectively 2-8 direct-mapped ways that an address can be searched in at the same time, or within a few cycles of each other.
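(A minimal sketch of the indexing; line size, set count, and associativity below are assumed for illustration:)

  LINE_BYTES = 64          # assumed cache line size
  NUM_SETS = 128           # assumed number of sets
  WAYS = 4                 # assumed associativity (1 way = direct-mapped)

  def split_address(addr):
      # Low bits pick the byte within the line, the next bits pick the set,
      # and the remaining bits form the tag that must match for a hit.
      index = (addr // LINE_BYTES) % NUM_SETS
      tag = addr // (LINE_BYTES * NUM_SETS)
      return index, tag

  def lookup(cache, addr):
      # cache is a list of sets; each set holds up to WAYS tags.
      # Direct-mapped: one tag compare per lookup. Set-associative: WAYS compares.
      index, tag = split_address(addr)
      return tag in cache[index]

With more than one way, two addresses that share an index no longer evict each other immediately; they only conflict once all WAYS slots in that set are occupied.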

Another problem is evictions becoming a future cache miss. With only a large L1, a cacheline was either in fast L1 or in DRAM. There's often a write buffer or victim cache between them to try to hide dirty eviction latency, but a subsequent fetch will pay the DRAM access cost. As we scale the cache size up, L1 access speed becomes challenging and eventually it's more effective to limit the L1 and build an even larger, slower L2, and then we get the advantage that L2 is still faster than DRAM and we can prefetch into it.

This cache hierarchy tends to fit typical access patterns better as well - for many workloads most accesses will tend to be clumped together. For streaming workloads like video processing that won't fit in an L1 cache anyway, the algorithms are usually aware of row/column/striding impacts on cache utilization already.

There's probably more to consider, like SMP.

bcrl•9mo ago
PA-RISC clock speeds were relatively low for the time of their introduction. For example, the PA-8900 was launched in May 2005. The low end Athlon 64 of May 2005 was clocked at 2.2GHz (and eventually hit 3.2GHz) versus the PA-RISC top clock of 1.1GHz. You can always build a larger L1 cache in the same process tech if you're willing to sacrifice clock speed sufficiently.
jeffbee•9mo ago
L1 cache has fast access because it is small. If you make it larger, you necessarily make it slower to access.
hinkley•9mo ago
Access time is a function of areal density, not unlike hard drives. Plus access logic increases at least logarithmically with the number of lines in the cache, so you're probably looking at something in the neighborhood of 50% slower for every doubling of L1 cache size.

Now as to why cache sizes didn't increase much once clock speeds stagnated and feature size continued to decline, I couldn't say. But L3 caches didn't use to exist, and L2 has gotten bigger.

ein0p•9mo ago
And then throw in some co-located compute there using chiplets, entirely bypassing the memory bus and PCIe. That'd be _the_ ideal Transformer substrate. Memory bandwidth bottleneck just disappears for the most part. Memory size bottleneck, too.