Active context extraction > passive context capture with LLMs

2•foundress•6mo ago
As models get better, context windows expand, and tokens get cheaper, there is an explicit race for context.

Context is the holy grail. With the right context, a model can read your data, situation, and constraints to generate more relevant output. Better context lets you tell the model what you mean in fewer iterations.

Context capture takes different forms, however.

Browsers, screen recorders, and products that sync with the email, calendar, and drive where you keep your information are getting traction. I believe passive context is largely solved.

Another form of context, one that is still very poorly tapped, is hidden in your own brain: the patterns learned from data and feedback you have seen, your thinking process and constraints, your tacit domain knowledge and world model, your preferences and interpretation of reality.

The real bottleneck is getting that information out of the human brain and into the model as efficiently and precisely as possible.

Active extraction is broken. We burn hours translating what’s in our head into prompts, specs or comments.

You write a 500-word prompt, say, then realize you forgot the one nuance that actually matters, or certain constraints that would have had an impact. You split tasks into micro-prompts because dumping the whole mental model at once is impossible. You often start from zero instead of iterating further, because the returns on iteration diminish and each round is costly in token spend and time.

As humans, we can juggle maybe 3-4 things at once; complex specs can be composed of 10-100 different concepts, far beyond that limit. It does not help that LLMs still demand big monolithic prompts. We end up offloading a lot of details and memory to the models.

No one is truly going after this problem. In fact, the ones who should are not incentivized to.

Most of the revenue generated in AI today is in fact accelerated by this bottleneck, so the companies building productivity tools are not truly motivated to address it.

So where is the next productivity leap? Models that can read our mind better than we can and preempt every need? Models and products that passively get every possible piece of context about me? Brain-computer interfaces?

Interfaces that shrink the mind-to-model gap and help the model do what I mean, tools that almost let me think out loud in real time, capture nuance without friction, and refine my intent, are going to have the most impact today.

We have built such a tool internally at Ntropy and have been using it for a while to set up and refine almost all of our LLM pipelines. Today, we are sharing it with the world.

Below are some raw thoughts and design principles that went into it:

Mixed initiative. A productive human-to-model interface needs to be dialogue-driven, with the model proactively initiating precise follow-ups that lead you through chunk-by-chunk thinking rather than asking for a straight dump of thought. It extracts and infers what you really want, chunk by chunk, from your brain. (A minimal sketch of such a loop follows the last principle below.)

Visual scaffolding. Our brains often require structure and scaffolding that is persistent and gets updated as we add or remove detail or change the input.

Real-time and continuous spec evals. Everyone is focused on output evaluations, which are important and effective but very costly, not straightforward to act on, and often misleading: they are biased toward your own dataset and lack ground truth. Continuous input evals and context-quality assessment will completely change LLM-powered development and work in general, including evaluations and the developer experience.
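The post does not include the tool itself, so the following is only a minimal sketch, in Python, of what a mixed-initiative extraction loop with a persistent spec could look like. Everything in it is an assumption for illustration: the model name, the elicitation prompt, and the DONE sentinel (which doubles here as a crude continuous input eval) are not from the post.

```python
# Hypothetical sketch of a mixed-initiative extraction loop; not
# Ntropy's actual tool. The model asks one targeted question per turn,
# answers accumulate into a persistent spec (the "scaffolding"), and
# the model's DONE check acts as a crude continuous input eval.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # assumption: any chat model would do

ELICIT = (
    "You are eliciting a spec from a human, one chunk at a time. "
    "Given the goal and the spec so far, ask the single follow-up "
    "question whose answer would most reduce ambiguity. "
    "If the spec looks complete and unambiguous, reply exactly DONE."
)

def next_question(goal: str, spec: list[str]) -> str:
    """Return the model's most clarifying follow-up question, or DONE."""
    spec_text = "\n".join(f"- {item}" for item in spec) or "(empty)"
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": ELICIT},
            {"role": "user",
             "content": f"Goal: {goal}\nSpec so far:\n{spec_text}"},
        ],
    )
    return resp.choices[0].message.content.strip()

def extract_spec(goal: str) -> list[str]:
    spec: list[str] = []  # the living spec, one extracted chunk per entry
    while (q := next_question(goal, spec)) != "DONE":
        answer = input(f"\n{q}\n> ")  # one chunk of your mental model
        spec.append(f"{q} {answer}")
    return spec

if __name__ == "__main__":
    final = extract_spec(input("What do you want the model to do? "))
    print("\nFinal spec:")
    print("\n".join(f"- {item}" for item in final))
```

The point of the design is the loop shape rather than the prompt: the human never has to produce a monolithic dump, and the spec stays visible and editable between turns.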

As we continue using the tool on production inputs, our thinking and this list are evolving rapidly. We can't wait for more people to try it and share their experience, so we can improve on it and add to it. We will share the link in the comments.

Comments

foundress•6mo ago
https://www.theaifluencycompany.com
chaisan•6mo ago
reminds me of the idea of Do What I Mean (DWIM), coined in the '60s by Warren Teitelman. more relevant now than ever

Zig Package Manager Changes

https://ziglang.org/devlog/2026/#2026-02-06
1•jackhalford•46s ago•0 comments

Neutron Scans Reveal Hidden Water in Martian Meteorite

https://www.universetoday.com/articles/neutron-scans-reveal-hidden-water-in-famous-martian-meteorite
1•geox•1m ago•0 comments

Deepfaking Orson Welles's Mangled Masterpiece

https://www.newyorker.com/magazine/2026/02/09/deepfaking-orson-welless-mangled-masterpiece
1•fortran77•3m ago•1 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
2•nar001•5m ago•1 comments

SpaceX Delays Mars Plans to Focus on Moon

https://www.wsj.com/science/space-astronomy/spacex-delays-mars-plans-to-focus-on-moon-66d5c542
1•BostonFern•5m ago•0 comments

Jeremy Wade's Mighty Rivers

https://www.youtube.com/playlist?list=PLyOro6vMGsP_xkW6FXxsaeHUkD5e-9AUa
1•saikatsg•6m ago•0 comments

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
1•sam256•8m ago•0 comments

AI Command and Staff–Operational Evidence and Insights from Wargaming

https://www.militarystrategymagazine.com/article/ai-command-and-staff-operational-evidence-and-in...
1•tomwphillips•8m ago•0 comments

Show HN: CCBot – Control Claude Code from Telegram via tmux

https://github.com/six-ddc/ccbot
1•sixddc•9m ago•1 comments

Ask HN: Is the CoCo 3 the best 8 bit computer ever made?

1•amichail•11m ago•0 comments

Show HN: Convert your articles into videos in one click

https://vidinie.com/
2•kositheastro•14m ago•0 comments

Red Queen's Race

https://en.wikipedia.org/wiki/Red_Queen%27s_race
2•rzk•14m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
2•gozzoo•17m ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•17m ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
2•tosh•18m ago•1 comments

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•18m ago•0 comments

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•23m ago•1 comments

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•26m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•29m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•30m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
2•michalpleban•30m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•31m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
2•mitchbob•31m ago•1 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
2•alainrk•32m ago•1 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•33m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
2•edent•36m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•40m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•40m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•45m ago•1 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
7•onurkanbkrc•46m ago•0 comments