frontpage.

Uber Airport Demand Forecasting

https://www.uber.com/blog/forecasting-models-to-improve-availability-at-airports/
1•brettcvz•4m ago•0 comments

China's Hottest New App Is 'Are You Dead Yet'

https://www.wired.com/story/china-are-you-dead-yet-app/
1•Kholin•5m ago•0 comments

Why India's plan to make AI companies pay for training data should go global

https://restofworld.org/2026/india-ai-data-license-fee/
2•Brajeshwar•10m ago•0 comments

Google please stop blocking my parody website

https://ivanca.github.io/censorship/2026/01/13/google-stop-blocking-my-parody-website/
1•AmbroseBierce•17m ago•0 comments

FateTell – Chinese I Ching and BaZi AI with physics-based interaction

https://fatetell.com/
2•Farigh4•17m ago•1 comments

Artificial StupidIntelligence and Airport Sinks

https://www.deobald.ca/essays/2026-01-13-artificial-stupidintelligence-and-airport-sinks/
1•deobald•17m ago•0 comments

The Jock Programming Language

https://jock.org
2•Antibabelic•19m ago•0 comments

Leash by StrongDM

https://leash.strongdm.ai/
1•handfuloflight•21m ago•0 comments

GLM-Image: High-Fidelity Image Generation

https://glmimage.com
1•sarkory•26m ago•1 comments

Why do so many students have ADHD?

https://unherd.com/2026/01/why-do-so-many-students-have-adhd/
2•jnord•29m ago•0 comments

Show HN: Soulcaster – Cluster feedback, spin up an agent to fix it

https://www.soulcaster.dev/
1•codecracker3001•35m ago•0 comments

What Will Work (and Won't) in SaaS in 2026: Lessons from Building 100 Tools

https://digiwares.xyz/blog/saas-2026-predictions
3•digi_wares•40m ago•1 comments

Locksport Network Directory

https://locksport.net/
2•memoriesofsmell•40m ago•1 comments

Warmer climate, spicier food. But which country is the spiciest?

https://bigthink.com/strange-maps/temperature-spiciness-spectrum/
2•thunderbong•42m ago•0 comments

My Conversation with Stacey Abrams

https://snyder.substack.com/p/live-with-professor-timothy-snyder
1•hkhn•43m ago•0 comments

Democracy Is Under Attack

https://10stepscampaign.org/
13•hkhn•45m ago•0 comments

China may crack down on "Singapore-washed" tech companies

https://www.axios.com/2026/01/13/china-meta-manus-singapore
2•doppp•46m ago•0 comments

Ancient Rome meets modern technology

https://apnews.com/article/italy-palatine-hill-livestream-tours-emperor-frescoes-e63327f43424e125...
1•frm88•48m ago•1 comments

New Physics Paper: Resolving the Hubble Tension [pdf]

https://github.com/localtimeacceleration/LTA/blob/main/lta_paper.pdf
1•localtimeaccel•50m ago•1 comments

Apple Creator Studio

https://www.apple.com/apple-creator-studio/
1•aenean•1h ago•1 comments

Experiments with Kafka's head-of-line blocking (2023)

https://www.artur-rodrigues.com/tech/2023/03/21/kafka-head-of-line-blocking.html
1•teleforce•1h ago•0 comments

Side Stepping Head of Line Blocking in Kafka Consumers (2025)

https://medium.com/@michael.diggin/side-stepping-head-of-line-blocking-in-kafka-consumers-6e7cbe4...
2•teleforce•1h ago•0 comments

GLM-Image Auto-Regressive for Dense-Knowledge and High-Fidelity Image Generation

https://z.ai/blog/glm-image
3•ledak•1h ago•0 comments

Building Threat Models with MCP and AI Agents

https://www.detectionatscale.com/p/threat-modeling-ai-agents-mcp
2•gmays•1h ago•0 comments

AI will compromise your cybersecurity posture

https://rys.io/en/181.html
8•gmays•1h ago•2 comments

Tea App Checker

https://teaappchecker.com
2•thefirstname•1h ago•1 comments

Show HN: Why Apple's Security Transparency Is a Double-Edged Sword for iOS 18.5

https://medium.com/@ryu360i/cves-as-feature-catalogs-the-terrifying-reality-of-automated-version-...
2•ryuzaburo•1h ago•0 comments

Dylan Araps has taken up farming

https://dylan.gr/1768295794
3•planet36•1h ago•1 comments

Matthew McConaughey Trademarks Himself to Fight AI Misuse

https://www.wsj.com/tech/ai/matthew-mcconaughey-trademarks-himself-to-fight-ai-misuse-8ffe76a9
8•petethomas•1h ago•2 comments

gpui – A fast, productive UI framework for Rust from the creators of Zed

https://www.gpui.rs/
4•doodlesdev•1h ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•8mo ago

Comments

kzawpl•8mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but those are often tested on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
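
To illustrate the distinction (this is my own toy example, not an item from the benchmark): an exact-text-search test lets the model match the question's wording against the needle, while a NoLiMa-style question shares no wording with the needle and so forces a latent association.

    import random

    # Illustrative only -- not NoLiMa's actual items, just the distinction above:
    # a literal-match needle (classic needle-in-a-haystack) versus a needle that
    # requires a one-hop association to answer.
    FILLER = "The weather was unremarkable that day. " * 2000  # long distractor text

    # Exact-text-search style: the question shares its key words with the needle.
    literal_needle = "The secret passcode is 7342."
    literal_question = "What is the secret passcode?"

    # NoLiMa-style (as I understand it): the question never repeats the needle's
    # wording, so answering requires the association Semper Opera House -> Dresden.
    latent_needle = "Yuki mentioned living right next to the Semper Opera House."
    latent_question = "Which character has been to Dresden?"

    def make_prompt(needle: str, question: str) -> str:
        """Bury the needle at a random position in the filler and append the question."""
        split = random.randint(0, len(FILLER))
        return FILLER[:split] + " " + needle + " " + FILLER[split:] + "\n\n" + question
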
vessenes•8mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became a thing that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
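
For anyone curious what that looks like mechanically, here's a rough Python sketch of a Graphwalks-style task going only off the description above -- the graph size, edge count, label length, and helper names (build_graph, bfs_frontier) are my own placeholders, not OpenAI's actual construction:

    import random

    def random_hex_label(length: int = 8) -> str:
        """Random hexadecimal node label (length is an arbitrary choice)."""
        return f"{random.getrandbits(4 * length):0{length}x}"

    def build_graph(num_nodes: int = 200, num_edges: int = 600):
        """Random directed graph keyed by hex labels; sizes are placeholders."""
        nodes = [random_hex_label() for _ in range(num_nodes)]
        edges = {n: set() for n in nodes}
        for _ in range(num_edges):
            src, dst = random.sample(nodes, 2)
            edges[src].add(dst)
        return nodes, edges

    def bfs_frontier(edges, start, depth):
        """Ground truth: nodes first reached exactly `depth` steps from `start`."""
        frontier, visited = {start}, {start}
        for _ in range(depth):
            nxt = set()
            for node in frontier:
                for neighbor in edges[node]:
                    if neighbor not in visited:
                        visited.add(neighbor)
                        nxt.add(neighbor)
            frontier = nxt
        return frontier

    nodes, edges = build_graph()
    start = random.choice(nodes)
    target_depth = 2
    answer = bfs_frontier(edges, start, target_depth)

    # Serialize the graph as an edge list to fill the context window, then
    # compare the model's reply against `answer`.
    edge_list = "\n".join(f"{src} -> {dst}" for src in nodes for dst in sorted(edges[src]))
    prompt = (
        f"{edge_list}\n\n"
        f"Perform a breadth-first search starting from node {start}. "
        f"List all nodes at depth {target_depth}."
    )
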

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each. The system is then asked "give me the third poem about tapirs". This requires counting, conceptual attention, and distinguishing between stories and poems.
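
And a similarly rough sketch of an MRCR-style prompt -- the topics come from the description above, but the placeholder text, item counts, and helper names are mine, not the benchmark's actual data:

    import random

    TOPICS = ["tapirs", "bears", "ballerinas"]
    KINDS = ["poem", "story"]
    ITEMS_PER_PAIR = 50  # "perhaps fifty each"

    def make_item(kind: str, topic: str) -> str:
        """Stand-in for a generated poem or story; a random id keeps items distinct."""
        return f"Here is a {kind} about {topic} (id {random.randint(10000, 99999)}): placeholder text about {topic}."

    # Generate every (kind, topic) item and shuffle so context order is arbitrary.
    items = [(kind, topic, make_item(kind, topic))
             for kind in KINDS
             for topic in TOPICS
             for _ in range(ITEMS_PER_PAIR)]
    random.shuffle(items)

    context = "\n\n".join(text for _, _, text in items)

    # Ground truth: the third poem about tapirs in context order. Answering
    # requires counting poems (not stories) about tapirs while reading.
    tapir_poems = [text for kind, topic, text in items
                   if kind == "poem" and topic == "tapirs"]
    answer = tapir_poems[2]

    prompt = f"{context}\n\nGive me the third poem about tapirs, quoted exactly."
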

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/