
Huge for the Holidays: Epic Interactive Fiction of the Millennial Period

https://www.filfre.net/2025/12/huge-for-the-holidays-epic-interactive-fiction-of-the-millennial-p...
1•ibobev•1m ago•0 comments

Looking for technical co-founder for real estate marketplace in North Africa

1•mbkaj•4m ago•0 comments

An HTTP surface for Claude Code CLI

https://github.com/pattern-zones-co/koine
1•mathewpetty•6m ago•0 comments

A Supposedly Fun Thing I'll Never Do Again [pdf]

https://harpers.org/wp-content/uploads/2008/09/HarpersMagazine-1996-01-0007859.pdf
1•mosiuerbarso•7m ago•0 comments

TS Zip

https://www.bellard.org/ts_zip/
2•cyanf•8m ago•0 comments

How are coding assistants evaluated? SWE-Bench Pro Explorer

https://marginlab.ai/explorers/swe-bench-pro/
2•qwesr123•8m ago•0 comments

Show HN: Aligning AI with Entropy Instead of 'Human Values' (Paper)

1•NyX_AI_ZERO_DAY•9m ago•1 comment

Olaf: Bringing an Animated Character to Life in the Physical World [video]

https://www.youtube.com/watch?v=-L8OFMTteOo
1•janpot•17m ago•0 comments

"Awesome Production Machine Learning" Github List

https://github.com/EthicalML/awesome-production-machine-learning
2•axsaucedo•18m ago•0 comments

International maps of cities coloured by street/road/ave/etc.

https://erdavis.com/2019/09/20/the-beautiful-hidden-logic-of-cities-worldwide/
2•fanf2•19m ago•1 comment

WANem – The Wide Area Network emulator (2014)

https://wanem.sourceforge.net/
1•basemi•22m ago•0 comments

Sewage can be used to heat and cool buildings

https://apnews.com/article/climate-wastewater-sewage-heating-sustainable-energy-2cbeb696ddff16d9a...
1•montroser•22m ago•0 comments

Tech Talk: Improving Window Resize Behavior

https://www.electronjs.org/blog/tech-talk-window-resize-behavior
2•nikwen•29m ago•0 comments

Agentic browsers: a note for my friends

https://www.dvsj.in/on-agentic-browsers
2•ctxc•29m ago•0 comments

I tricked GPT-4 into suggesting 112 non-existent packages

https://github.com/dariomonopoli-dev/codegate-cli/issues/1
1•mondra•31m ago•0 comments

Russia's Next Space Station

https://arstechnica.com/space/2025/12/russia-is-about-to-do-the-most-russia-thing-ever-with-its-n...
1•Anon84•34m ago•0 comments

Help, my website is too small

https://lukeplant.me.uk/blog/posts/help-my-website-is-too-small/
1•wofo•34m ago•1 comment

Transient hepatic reconstitution of trophic factors enhances aged immunity

https://www.nature.com/articles/s41586-025-09873-4
1•bookofjoe•40m ago•0 comments

Kong's AI Gateway Benchmark Against Portkey and LiteLLM

https://konghq.com/blog/engineering/ai-gateway-benchmark-kong-ai-gateway-portkey-litellm
1•nkko•42m ago•0 comments

Ask HN: Is anyone interested in a local-only app that analyzes Caddy log files?

1•BrunoBernardino•43m ago•0 comments

"Special Forms in Lisp" by Kent Pitman (1980)

https://nhplace.com/kent/Papers/Special-Forms.html
1•networked•48m ago•0 comments

Built a memory-efficient Python library for large-scale TF-IDF

https://github.com/purijs/fasttfidf
1•jspuri•51m ago•1 comment

UnifyBio: Power Tools for Translational Data Science – Benjamin Kamphaus [video]

https://www.youtube.com/watch?v=HU-uwSUZETw
2•todsacerdoti•52m ago•2 comments

Show HN: Dbzero – Code as if you have infinite RAM (Python persistence engine)

https://github.com/dbzero-software/dbzero
4•dbzero•53m ago•1 comment

What's new in Swift: December 2025 Edition

https://www.swift.org/blog/whats-new-in-swift-december-2025/
2•g0ld3nrati0•56m ago•0 comments

Capital One is wary about its rising Amazon cloud AI costs

https://www.businessinsider.com/nvidia-memo-capital-one-explores-aws-alternatives-ai-control-cost...
1•cebert•57m ago•1 comment

2025 was the beginning of the end of the TV brightness war

https://www.theverge.com/tech/841054/tv-brightness-hdr-2025
1•jnord•58m ago•0 comments

James Webb Space Telescope confirms first 'runaway' supermassive black hole

https://www.space.com/astronomy/black-holes/james-webb-space-telescope-confirms-1st-runaway-super...
1•jnord•58m ago•0 comments

Intel's new Arizona fab, where the chipmaker's fate hangs in the balance

https://www.cnbc.com/2025/12/19/intel-aims-to-find-clients-and-catch-tsmc-with-new-chip-fab-in-ar...
1•giuliomagnifico•1h ago•0 comments

OpenAI might train on Responses API data

1•kissgyorgy•1h ago•0 comments

"A milion token context" Big AI says. But the model is accurate for 2-4K tokens

https://unagent.eu/2025/04/22/misleading-promises-of-long-context-llm/
2•kzawpl•7mo ago

Comments

kzawpl•7mo ago
Over the last two years there have been claims of better long-context capabilities for LLMs, but these are often tested with exact text search. A new benchmark called NoLiMa shows that the long-context capability of LLMs is still poor if you want the LLM to perform some abstraction and reasoning.
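
To make the distinction concrete, here's a rough sketch of a NoLiMa-style probe. The needle, question, and filler below are illustrative, not taken from the actual benchmark: the point is that the needle and the question share no keywords, so literal matching fails and the model has to make a one-hop inference (Semper Opera House -> Dresden).

    # Sketch of a NoLiMa-style probe. The needle/question pair is
    # illustrative: they share no keywords, so the model must infer the
    # latent link (Semper Opera House -> Dresden) instead of string-matching.
    import random

    NEEDLE = "Actually, Yuki lives next to the Semper Opera House."
    QUESTION = "Which character has been to Dresden?"

    def build_prompt(filler: list[str], depth: float = 0.5) -> str:
        """Insert the needle at a relative depth inside unrelated filler."""
        pos = int(len(filler) * depth)
        body = filler[:pos] + [NEEDLE] + filler[pos:]
        return "\n\n".join(body) + f"\n\nQuestion: {QUESTION}\nAnswer:"

    filler = [f"Unrelated filler paragraph number {i}." for i in range(200)]
    prompt = build_prompt(filler, depth=random.random())
    # Score whether the model answers "Yuki"; the metric is accuracy as a
    # function of total context length and needle depth.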
vessenes•7mo ago
Meh. NoLiMa is helpful, in that it shows what we all "feel" when working with models -- there's a marked dropoff in accuracy and intelligence as we get past 4-32k of context, depending on the model.

But it seems unreasonable to be super worried about this -- a year or two ago, models couldn't easily find needles in haystacks of long context. As training and test strategies delivered trainable content, this became something that could be done perfectly across millions of tokens of context. There has not yet been a good way to incentivize models to do anything more than remember locations.

We are (mostly) paying the full costs of attending to the entire context in current architectures, and it seems pretty reasonable that we will therefore be able to train those architectures to more fully attend across context if we get the right training data into (ideally) an RL loop.

NoLiMa is an okay test, but I think the most recent OpenAI tests are significantly better and quite interesting; OpenAI-MRCR and Graphwalks are both super smart ideas about how to programmatically generate data that is easy to evaluate and forces better cross-context attention.

From their 4.1 announcement: Graphwalks fills the context window with a directed graph composed of hexadecimal hashes, and then asks the model to perform a breadth-first search (BFS) starting from a random node in the graph. We then ask it to return all nodes at a certain depth.
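
Based only on that description, a Graphwalks-style task generator might look something like the sketch below; the node count, out-degree, and prompt wording are my own assumptions, not OpenAI's actual harness.

    # Rough sketch of a Graphwalks-style task generator, based only on the
    # description quoted above. Parameters and formatting are assumptions.
    import random
    from collections import deque
    from secrets import token_hex

    def make_graphwalks_task(n_nodes=500, out_degree=3, depth=2):
        nodes = [token_hex(8) for _ in range(n_nodes)]  # hexadecimal node ids
        edges = [(u, random.choice(nodes)) for u in nodes
                 for _ in range(out_degree)]
        adj = {}
        for u, v in edges:
            adj.setdefault(u, []).append(v)

        # Gold answer: BFS from a random root, collecting nodes whose
        # shortest distance from the root is exactly `depth`.
        root = random.choice(nodes)
        seen, frontier, gold = {root}, deque([(root, 0)]), set()
        while frontier:
            node, d = frontier.popleft()
            if d == depth:
                gold.add(node)
                continue
            for nxt in adj.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))

        context = "\n".join(f"{u} -> {v}" for u, v in edges)  # fills the window
        prompt = (f"{context}\n\nPerform a breadth-first search from {root} "
                  f"and return every node at depth {depth}.")
        return prompt, gold

Grading reduces to a set comparison against the gold answer, which is what makes this kind of data cheap to generate and score at scale.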

MRCR asks for direct quotes at semantically identified locations in the text: e.g., poems about tapirs, bears, and ballerinas, as well as stories about tapirs, bears, and ballerinas, are generated, perhaps fifty of each. The model is then asked to "give me the third poem about tapirs". This requires counting, conceptual attention, and distinguishing between stories and poems.
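
An illustrative version of that setup (the item texts here are placeholders; the real benchmark uses model-generated poems and stories):

    # Illustrative MRCR-style setup: many near-duplicate items differing
    # only in genre and topic, queried by ordinal position within one pair.
    import itertools, random

    genres = ["poem", "story"]
    topics = ["tapirs", "bears", "ballerinas"]

    items = [f"A {genre} about {topic} (instance {i})."
             for genre, topic in itertools.product(genres, topics)
             for i in range(50)]
    random.shuffle(items)

    query = "Give me the third poem about tapirs, quoted verbatim."

    # Gold answer: the 3rd matching item in document order. Grading is a
    # plain string comparison, but answering requires counting occurrences
    # and distinguishing poems from stories.
    matches = [it for it in items if "poem about tapirs" in it]
    gold = matches[2]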

They only test their own models on MRCR for the benchmark graph, but it's still worth reviewing: the accuracy curves are super interesting. https://openai.com/index/gpt-4-1/