frontpage.

Stop Reading Newsletters on Screens – Print Them Automatically

https://autoprint.email/blog/print-newsletters-automatically-while-you-sleep
1•kelonye•1m ago•0 comments

I built a passive radar system in my kitchen

https://old.reddit.com/r/RTLSDR/comments/1r3ffmd/phased_array_passive_radar_in_my_kitchen/
1•muttled•1m ago•1 comment

Formally equivalent math frameworks fail differently on real data

https://zenodo.org/records/18528229
1•CreativeLabsRo•11m ago•0 comments

Scene Gaussian Splat Viewer

https://github.com/iab131/Scene-Gaussian-Splat-Viewer
1•e8bai•16m ago•1 comment

Show HN: Roe.md – generate your own OpenClaw-like bot from a single Markdown file

https://github.com/guld/ROE.md
1•guld•22m ago•0 comments

Osamu Dazai's 1940 autobiographical comedy "The Beggar Student"

https://tonysreadinglist.wordpress.com/2025/01/28/the-beggar-student-by-osamu-dazai-review/
1•gsf_emergency_6•30m ago•0 comments

Zhipu's 120% Surge Highlights China's New AI Market Favorites

https://www.bloomberg.com/news/articles/2026-02-13/zhipu-s-120-surge-highlights-china-s-new-ai-ma...
2•ilreb•33m ago•0 comments

He [Human] asked me to pick my [AI] own name

https://www.moltbook.com/post/6e9623d5-1865-4200-99b5-44aaa519632b
1•jskherman•34m ago•2 comments

Why governments insist on CBDCs or stablecoins when most people don't want them

https://www.nakedcapitalism.com/2026/02/why-governments-insist-on-cbdcs-or-stablecoins-when-most-...
3•iamnothere•35m ago•1 comment

Elon Musk's Sci-Fi Hyperloop Failed

https://washingtonian.com/2026/02/12/how-elon-musks-sci-fi-hyperloop-failed/
2•_delirium•40m ago•0 comments

Show HN: BlueChimp – Identify high-intent visitors without invasive tracking

https://bluechimp.io
1•Israel_Kloss•42m ago•1 comment

Maintainer rejects AI-gen PR, AI responds with a blog calling it "gatekeeping"

https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-11-gatekeeping-in-open-sour...
1•tontonius•43m ago•0 comments

Show HN: Yori – Isolating AI Logic into "Semantic Containers" (Docker for Code)

2•alonsovm•47m ago•0 comments

"Am I the only one still wondering what is the deal with linear types?" – Jon S

https://www.jonmsterling.com/01KB/
3•matt_d•48m ago•0 comments

The AI pricing and monetization playbook

https://www.bvp.com/atlas/the-ai-pricing-and-monetization-playbook
1•gmays•48m ago•0 comments

A Geometric Solution to the Coulomb Barrier via 10D Phase-Alignment

https://sharetext.io/d0bm1suz
1•diametricsound•56m ago•1 comment

ICE Masks Up in More Ways Than One

https://www.kenklippenstein.com/p/exclusive-ice-masks-up-in-more-ways
13•computerliker•1h ago•0 comments

Ask HN: Does better hardware mean OpenAI, Anthropic, etc. are doomed in the future?

2•kart23•1h ago•0 comments

Cloud-Claw: Run OpenClaw with 1 Click on Cloudflare to Create Personal Agent

https://github.com/miantiao-me/cloud-claw
1•ms7892•1h ago•0 comments

RBC – It Stands for Big Chicken

https://www.reallybigchicken.com/
2•frenchie4111•1h ago•0 comments

Goldman's India Push Pays Off in Crowded Wall Street Field

https://www.bloomberg.com/news/articles/2026-02-10/goldman-s-push-bears-fruit-in-india-s-crowded-...
1•vismit2000•1h ago•0 comments

I built JoyPass: surprise gestures like breakfast in bed, now in Apple Wallet

https://joypass.co
3•arron-taylor•1h ago•0 comments

LLM Reasoning Failures

https://arxiv.org/abs/2602.06176
1•gradus_ad•1h ago•0 comments

Megalancer.com

https://megalancer.com/
2•Megalancer•1h ago•1 comment

I improved 15 LLMs at coding in one afternoon. Only the harness changed

https://twitter.com/_can1357/status/2021828033640911196
1•amardeep•1h ago•1 comment

One of my managers demanding a 25% share of the project bonus pool

https://old.reddit.com/r/founder/comments/1r3d332/a_discussion_about_one_of_my_managers_demanding_a/
2•fanux•1h ago•0 comments

The Filter, Not the Bar

https://k2xl.substack.com/p/the-filter-not-the-bar
2•k2xl•1h ago•0 comments

Private-equity barons have a giant AI problem

https://www.economist.com/business/2026/02/12/private-equity-barons-have-a-giant-ai-problem
1•petethomas•1h ago•0 comments

Discord walks back age verification fears for most users

https://www.techbuzz.ai/articles/discord-walks-back-age-verification-fears-for-most-users
2•brie22•1h ago•1 comment

Built a skill that hugs my agents

https://hugllm.com/
1•zeahoo•1h ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•9mo ago

Comments

kzawpl•9mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
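
For concreteness, here is a minimal sketch of that distinction (my own illustration, not the official NoLiMa harness; the function name, filler text, character, and landmark are all made up): the needle and the question share no keywords, so exact string search fails and the model has to make one latent hop (the Semperoper is in Dresden).

    def build_prompt(depth: float, n_chunks: int = 200) -> str:
        # The needle names a landmark; the question asks about its city,
        # so grepping the haystack for "Dresden" finds nothing.
        filler = "The quarterly meeting was rescheduled again. "  # stand-in distractor
        needle = "Actually, Yuki lives next to the Semperoper."
        question = "Which character has been to Dresden? Answer with one name."
        chunks = [filler] * n_chunks
        chunks.insert(int(depth * n_chunks), needle)  # bury the needle at `depth`
        return "\n".join(chunks) + "\n\nQuestion: " + question

    # Sweep needle position and context length; grade each reply against "Yuki".
    for depth in (0.1, 0.5, 0.9):
        prompt = build_prompt(depth)  # send to the model under test here
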
vessenes•9mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something models could do perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across context if we get the right training data into (ideally) an RL loop.
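
To put a rough number on that "full cost" (all figures below are illustrative, not any specific model's): attention FLOPs grow quadratically with context length.

    # ~2*n^2*d FLOPs for Q@K^T plus ~2*n^2*d for attn@V, per layer.
    # n, d, and layer count are made-up round numbers for scale only.
    n, d, layers = 128_000, 8_192, 80
    attn_flops = 4 * n**2 * d * layers
    print(f"~{attn_flops:.1e} FLOPs on attention per forward pass")  # ~4.3e+16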

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas for programmatically generating data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: "Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth."
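
A sketch of how such a generator might look (my reconstruction from that description, not OpenAI's code; function names, node count, and degree are arbitrary):

    import hashlib, random

    def make_graph(n_nodes: int = 500, out_degree: int = 3):
        # Opaque hex labels: the graph can only be traversed, never guessed
        # from surface patterns in the text.
        nodes = [hashlib.sha1(str(i).encode()).hexdigest()[:8] for i in range(n_nodes)]
        edges = {u: random.sample([v for v in nodes if v != u], out_degree)
                 for u in nodes}
        return nodes, edges

    def bfs_layer(edges, start, depth):
        # Ground truth: every node exactly `depth` hops from `start`.
        seen, frontier = {start}, {start}
        for _ in range(depth):
            frontier = {v for u in frontier for v in edges[u]} - seen
            seen |= frontier
        return frontier

    nodes, edges = make_graph()
    start = random.choice(nodes)
    edge_list = "\n".join(f"{u} -> {v}" for u in nodes for v in edges[u])
    prompt = edge_list + f"\n\nDo a BFS from {start}; list every node exactly 2 hops away."
    gold = bfs_layer(edges, start, 2)  # grade the reply as set overlap with this

Because the answer is a machine-checkable set, generation and grading are both fully programmatic.
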

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas are generated, along with stories about the same subjects, perhaps fifty of each. The model is then asked to "give me the third poem about tapirs". That requires counting, conceptual attention, and distinguishing stories from poems.
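
Same idea, sketched (placeholder passages and function names are mine; presumably the real benchmark uses model-written poems and stories):

    import random

    FORMS, TOPICS = ["poem", "story"], ["tapirs", "bears", "ballerinas"]

    def build_mrcr(n_per_pair: int = 50):
        # Placeholder text; each passage is unique so it can be quoted exactly.
        passages = [(f, t, f"A {f} about {t}, number {i}: ...")
                    for f in FORMS for t in TOPICS for i in range(n_per_pair)]
        random.shuffle(passages)  # what counts is order in the context window
        context = "\n\n".join(text for _, _, text in passages)
        tapir_poems = [text for f, t, text in passages
                       if f == "poem" and t == "tapirs"]
        question = "Give me, verbatim, the third poem about tapirs."
        return context + "\n\n" + question, tapir_poems[2]  # exact-match gold

    prompt, gold = build_mrcr()  # a reply scores 1 only if it quotes `gold` exactly
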

They only test their own models on MRCR in the benchmark chart, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/