
Show HN: A unique twist on Tetris and block puzzle

https://playdropstack.com/
1•lastodyssey•1m ago•0 comments

The logs I never read

https://pydantic.dev/articles/the-logs-i-never-read
1•nojito•2m ago•0 comments

How to use AI with expressive writing without generating AI slop

https://idratherbewriting.com/blog/bakhtin-collapse-ai-expressive-writing
1•cnunciato•3m ago•0 comments

Show HN: LinkScope – Real-Time UART Analyzer Using ESP32-S3 and PC GUI

https://github.com/choihimchan/linkscope-bpu-uart-analyzer
1•octablock•3m ago•0 comments

Cppsp v1.4.5 – custom pattern-driven, nested, namespace-scoped templates

https://github.com/user19870/cppsp
1•user19870•5m ago•1 comment

The next frontier in weight-loss drugs: one-time gene therapy

https://www.washingtonpost.com/health/2026/01/24/fractyl-glp1-gene-therapy/
1•bookofjoe•8m ago•1 comment

At Age 25, Wikipedia Refuses to Evolve

https://spectrum.ieee.org/wikipedia-at-25
1•asdefghyk•10m ago•3 comments

Show HN: ReviewReact – AI review responses inside Google Maps ($19/mo)

https://reviewreact.com
2•sara_builds•11m ago•1 comment

Why AlphaTensor Failed at 3x3 Matrix Multiplication: The Anchor Barrier

https://zenodo.org/records/18514533
1•DarenWatson•12m ago•0 comments

Ask HN: How much of your token use is fixing the bugs Claude Code causes?

1•laurex•15m ago•0 comments

Show HN: Agents – Sync MCP Configs Across Claude, Cursor, Codex Automatically

https://github.com/amtiYo/agents
1•amtiyo•16m ago•0 comments

Hello

1•otrebladih•17m ago•0 comments

FSD helped save my father's life during a heart attack

https://twitter.com/JJackBrandt/status/2019852423980875794
2•blacktulip•20m ago•0 comments

Show HN: Writtte – Draft and publish articles without reformatting, anywhere

https://writtte.xyz
1•lasgawe•22m ago•0 comments

Portuguese icon (FROM A CAN) makes a simple meal (Canned Fish Files) [video]

https://www.youtube.com/watch?v=e9FUdOfp8ME
1•zeristor•24m ago•0 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
2•gnufx•26m ago•0 comments

Transcribe your aunt's postcards with Gemini 3 Pro

https://leserli.ch/ocr/
1•nielstron•30m ago•0 comments

.72% Variance Lance

1•mav5431•31m ago•0 comments

ReKindle – web-based operating system designed specifically for E-ink devices

https://rekindle.ink
1•JSLegendDev•33m ago•0 comments

Encrypt It

https://encryptitalready.org/
1•u1hcw9nx•33m ago•1 comment

NextMatch – 5-minute video speed dating to reduce ghosting

https://nextmatchdating.netlify.app/
1•Halinani8•33m ago•1 comment

Personalizing esketamine treatment in TRD and TRBD

https://www.frontiersin.org/articles/10.3389/fpsyt.2025.1736114
1•PaulHoule•35m ago•0 comments

SpaceKit.xyz – a browser-native VM for decentralized compute

https://spacekit.xyz
1•astorrivera•36m ago•0 comments

NotebookLM: The AI that only learns from you

https://byandrev.dev/en/blog/what-is-notebooklm
2•byandrev•36m ago•2 comments

Show HN: An open-source starter kit for developing with Postgres and ClickHouse

https://github.com/ClickHouse/postgres-clickhouse-stack
1•saisrirampur•36m ago•0 comments

Game Boy Advance d-pad capacitor measurements

https://gekkio.fi/blog/2026/game-boy-advance-d-pad-capacitor-measurements/
1•todsacerdoti•37m ago•0 comments

South Korean crypto firm accidentally sends $44B in bitcoins to users

https://www.reuters.com/world/asia-pacific/crypto-firm-accidentally-sends-44-billion-bitcoins-use...
2•layer8•38m ago•0 comments

Apache Poison Fountain

https://gist.github.com/jwakely/a511a5cab5eb36d088ecd1659fcee1d5
1•atomic128•39m ago•2 comments

Web.whatsapp.com appears to be having issues syncing and sending messages

http://web.whatsapp.com
1•sabujp•40m ago•2 comments

Google in Your Terminal

https://gogcli.sh/
1•johlo•41m ago•0 comments

Active context extraction > passive context capture with LLMs

2•foundress•6mo ago
As models get better, context windows expand, and tokens get cheaper, there is an explicit race for context.

Context is the holy grail. With the right context, a model can read your data, situation, and constraints to generate more relevant output. Better context lets you tell the model what you mean in fewer iterations.

Context capture takes different forms, however.

Browsers, screen recorders, and products that sync with the email, calendar, and drive where you keep your information are getting traction. I believe passive context is largely solved.

Another form of context, one that is still very poorly tapped, is hidden in your own brain: the patterns learned from data and feedback you have seen, your thinking process and constraints, your tacit domain knowledge and world model, your preferences and interpretation of reality.

The real bottleneck is getting that information out of the human brain and into the model, as efficiently and precisely as possible.

Active extraction is broken. We burn hours translating what's in our heads into prompts, specs, or comments.

You write a 500-word prompt, say, then realize you forgot the one nuance that actually matters, or a constraint that would have changed the output. You split tasks into micro-prompts because dumping the whole mental model at once is impossible. You often start from zero rather than iterating further, because the returns on iteration diminish and each round is costly in both tokens and time.

As humans, we can juggle maybe 3-4 things at once; a complex spec can be composed of 10-100 different concepts, far beyond that limit. It does not help that LLMs still demand big monolithic prompts. We end up offloading a lot of detail and memory to the models.

No one is truly going after this problem. In fact, the ones who should are not incentivized to.

Most of the revenue generated in AI today is in fact accelerated by this bottleneck, so the companies building productivity tools are not truly motivated to address it.

So where is the next productivity leap? Models that can read our minds better than we can and preempt every need? Models and products that passively gather every possible piece of context about us? Brain-computer interfaces?

Interfaces that shrink the mind-to-model gap and help the model do what I mean, tools that let me think out loud in real time, capture nuance without friction, and refine my intent: these are going to have the most impact today.

We have built such a tool internally at Ntropy and have been using it for a while to set up and refine almost all of our LLM pipelines. Today, we are sharing it with the world.

Below are some raw thoughts and design principles that went into it:

- Mixed initiative. A productive human-to-model interface needs to be dialogue-driven, with the model taking the initiative: it asks precise follow-ups that lead you through chunk-by-chunk thinking instead of demanding a straightforward dump of thought, drawing out and inferring what you really want it to do, piece by piece.

- Visual scaffolding. Our brains often need structure that is persistent and gets updated as we add or remove detail or change inputs.

- Real-time, continuous spec evals. Everyone is focused on output evaluations, which are important and effective but also costly, not straightforward to act on, and often misleading: they are biased toward your own dataset and lack ground truth. Continuous input evals and context-quality assessment will completely change LLM-powered development and work in general, including evaluations themselves and the developer experience. (A rough sketch of how these fit together follows below.)
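
To make the principles concrete, here is a minimal sketch of a mixed-initiative extraction loop with a continuous input eval. This is not Ntropy's actual tool: the Spec fields, the scripted questions, and the completeness score are all hypothetical, and in a real system the follow-ups would come from the model rather than a fixed script.

    # Hypothetical sketch: a dialogue loop that builds a spec chunk by chunk
    # and scores the *input* continuously, instead of demanding one
    # monolithic prompt and evaluating only the output.
    from dataclasses import dataclass, field

    @dataclass
    class Spec:
        """Visual scaffolding: a persistent structure the dialogue fills in."""
        goal: str = ""
        constraints: list[str] = field(default_factory=list)
        examples: list[str] = field(default_factory=list)

        def completeness(self) -> float:
            """Continuous spec eval: score the input, not the model output."""
            filled = [bool(self.goal), bool(self.constraints), bool(self.examples)]
            return sum(filled) / len(filled)

    def next_question(spec: Spec) -> str | None:
        """Mixed initiative: the system decides what to ask next,
        one chunk at a time."""
        if not spec.goal:
            return "In one sentence, what should this pipeline do?"
        if not spec.constraints:
            return "What must never happen? Name one hard constraint."
        if not spec.examples:
            return "Give one input and the output you'd expect for it."
        return None  # spec is complete enough to hand to the model

    spec = Spec()
    while (q := next_question(spec)) is not None:
        answer = input(q + " ")  # the human thinks out loud, chunk by chunk
        if not spec.goal:
            spec.goal = answer
        elif not spec.constraints:
            spec.constraints.append(answer)
        else:
            spec.examples.append(answer)
        print(f"spec completeness: {spec.completeness():.0%}")

The point of the sketch is where the initiative and the evaluation sit: the system drives the dialogue, and the quality signal is computed on the context being assembled rather than on a model's output.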

As we continue using the tool on production inputs, the thinking and this list are evolving rapidly. We can't wait for more people to try it and share their experience so we can improve and add to it. We'll share the link in the comments.

Comments

foundress•6mo ago
https://www.theaifluencycompany.com
chaisan•6mo ago
Reminds me of the idea of Do What I Mean (DWIM), coined in the '60s by Warren Teitelman. More relevant now than ever.