
"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•6mo ago

Comments

kzawpl•6mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested on exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
vessenes•6mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" working with models -- there's a marked drop-off in accuracy and intelligence as we get past 4-32k tokens of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, that became something models could do perfectly across millions of tokens of context. There just hasn't yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full cost of attending to the entire context in current architectures, so it seems pretty reasonable that we will be able to train those architectures to attend more fully across the context, if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas for programmatically generating data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
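Based on just that description, here's roughly how a Graphwalks-style item could be generated -- a minimal sketch, not OpenAI's actual harness; the function, its parameters, and the prompt wording are all my own:

```python
import hashlib
import random
from collections import deque

def make_graphwalks_example(n_nodes=50, n_edges=150, depth=2, seed=0):
    """Build one Graphwalks-style eval item: a directed graph over
    hex-hash node labels, a BFS question, and the gold answer
    (all nodes at exactly the requested depth from the root)."""
    rng = random.Random(seed)
    # Hexadecimal-hash node labels, per the announcement's description.
    nodes = [hashlib.sha1(f"{seed}:{i}".encode()).hexdigest()[:8]
             for i in range(n_nodes)]
    edges = set()
    while len(edges) < n_edges:
        a, b = rng.sample(nodes, 2)   # distinct endpoints
        edges.add((a, b))
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)

    root = rng.choice(nodes)
    # Standard BFS: record the depth at which each node is first reached.
    dist = {root: 0}
    q = deque([root])
    while q:
        cur = q.popleft()
        for nxt in adj[cur]:
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                q.append(nxt)
    gold = sorted(n for n, d in dist.items() if d == depth)

    edge_list = "\n".join(f"{a} -> {b}" for a, b in sorted(edges))
    prompt = (f"Graph edges:\n{edge_list}\n\n"
              f"Perform BFS from node {root}. "
              f"List every node at depth exactly {depth}.")
    return prompt, gold
```

The nice property is that the prompt length scales with `n_edges` while the gold answer stays mechanically checkable, so you can fill an arbitrarily large context window and still grade exactly.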

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems and stories about tapirs, bears, and ballerinas are generated, perhaps fifty of each, and the system is asked "give me the third poem about tapirs". This requires counting, conceptual attention, and distinguishing between stories and poems.
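The data-generation side of that idea can be sketched in a few lines -- again a toy version under my own assumptions (the real benchmark uses model-generated poems and stories, not placeholder templates, and I've invented all the names here):

```python
import random

def make_mrcr_example(n_per_bucket=50, subject="tapirs", kind="poem",
                      index=3, seed=0):
    """Build one MRCR-style eval item: a long context interleaving
    poems and stories about several subjects, a question asking for
    the Nth item of one (kind, subject) bucket, and the gold quote."""
    rng = random.Random(seed)
    subjects = ["tapirs", "bears", "ballerinas"]
    kinds = ["poem", "story"]
    items = []
    for s in subjects:
        for k in kinds:
            for i in range(n_per_bucket):
                # Placeholder standing in for a generated poem/story.
                items.append((k, s, f"A {k} about {s}, variant {i}: ..."))
    rng.shuffle(items)  # interleave the buckets throughout the context

    # Gold answer: exact text of the Nth match, in order of appearance.
    matches = [text for k, s, text in items if k == kind and s == subject]
    gold = matches[index - 1]

    context = "\n\n".join(text for _, _, text in items)
    question = (f"Give me {kind} number {index} about {subject}, "
                f"counting in order of appearance, quoted exactly.")
    return context, question, gold
```

Because the gold answer is a verbatim substring of the context, grading is a string comparison -- which is presumably what makes this easy to run at millions of tokens.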

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/