

"A million token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•6mo ago

Comments

kzawpl•6mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
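The distinction being drawn can be made concrete with a toy harness. Everything here is illustrative (the filler text, the passcode, the Yuki/Dresden pair are stand-ins for the kind of items NoLiMa uses, not its actual data): a verbatim needle is answerable by literal string matching, while a NoLiMa-style latent needle shares no keywords with the question and needs an associative hop (Semper Opera House -> Dresden).

```python
def build_prompt(needle: str, question: str, filler: str, copies: int = 1000) -> str:
    """Bury a single needle sentence in the middle of repeated filler text."""
    haystack = " ".join([filler] * copies)
    midpoint = len(haystack) // 2
    return haystack[:midpoint] + " " + needle + " " + haystack[midpoint:] + "\n\nQ: " + question

# Verbatim needle: the question shares the word "passcode" with the needle,
# so exact-match retrieval over the context is enough to answer.
verbatim = build_prompt(
    needle="The passcode is 7142.",
    question="What is the passcode?",
    filler="The weather was unremarkable that day.",
)

# Latent needle: zero lexical overlap between question and needle; the model
# must know the Semper Opera House is in Dresden to connect them.
latent = build_prompt(
    needle="Actually, Yuki lives next to the Semper Opera House.",
    question="Which character has been to Dresden?",
    filler="The weather was unremarkable that day.",
)
```

The point of the benchmark is that models ace the first kind of prompt at huge context lengths and degrade sharply on the second.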
vessenes•6mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked dropoff in accuracy and intelligence once we get past 4-32k tokens of context, depending on the model.

But, it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done essentially perfectly across millions of tokens of context. There just hasn't been a good way yet to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across the context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
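A rough sketch of how an instance like that could be generated programmatically. This is a guess at the shape of the task from the description above, not OpenAI's actual generator; the node count, edge count, and `u -> v` prompt format are all invented:

```python
import random

def make_graphwalks_instance(n_nodes=64, n_edges=192, depth=2, seed=0):
    """Build a toy Graphwalks-style prompt: a random directed graph over
    hexadecimal node ids, plus the gold answer (the set of nodes exactly
    `depth` BFS steps from a random start node)."""
    rng = random.Random(seed)
    nodes = [f"{rng.getrandbits(32):08x}" for _ in range(n_nodes)]
    edges = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(n_edges)]

    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)

    start = rng.choice(nodes)
    # BFS frontier by frontier; after `depth` steps the frontier holds
    # exactly the nodes whose shortest distance from start is `depth`.
    seen = {start}
    frontier = {start}
    for _ in range(depth):
        frontier = {v for u in frontier for v in adj.get(u, ()) if v not in seen}
        seen |= frontier

    prompt = "\n".join(f"{u} -> {v}" for u, v in edges)
    prompt += f"\n\nStarting BFS from {start}, list all nodes at depth {depth}."
    return prompt, frontier

prompt, gold = make_graphwalks_instance()
```

Evaluation then reduces to set comparison against `gold`, which is what makes the task cheap to generate and grade at any context length: just scale `n_edges` until the edge list fills the window.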

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas are generated alongside stories about the same subjects, perhaps fifty of each, and the system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and also distinguishing between stories and poems.
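The setup above can be sketched as a generator too. Again this is a guess at the task's shape: one-line placeholders stand in for the model-written poems and stories the real benchmark would use, and the grading here is simple exact match against the gold line.

```python
import random

def make_mrcr_instance(n_copies=5, seed=0):
    """Toy MRCR-style instance: shuffle generated 'poems' and 'stories'
    about several subjects into one context, then ask for the 3rd poem
    about one subject in context order."""
    rng = random.Random(seed)
    subjects = ["tapirs", "bears", "ballerinas"]
    items = [f"{kind} #{i} about {subj}"
             for kind in ("poem", "story")
             for subj in subjects
             for i in range(1, n_copies + 1)]
    rng.shuffle(items)
    context = "\n".join(items)

    # Gold answer: the 3rd poem about tapirs as they appear in the context.
    gold = [line for line in items
            if line.startswith("poem") and line.endswith("tapirs")][2]
    question = "Give me the third poem about tapirs, quoted exactly."
    return context, question, gold
```

Answering requires tracking every poem-about-tapirs occurrence across the whole context and keeping an ordinal count, while ignoring the stories about tapirs and the poems about everything else, so no single lexical lookup suffices.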

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/