
Trump announces 100% tariff on computer chips

https://www.usatoday.com/story/money/2025/08/08/trump-tariff-chip-semiconductor-consumer-prices-impact/85562097007/
1•taimurkazmi•4m ago•0 comments

After User Backlash, OpenAI Is Bringing Back Older ChatGPT Models

https://www.cnet.com/tech/services-and-software/after-user-backlash-openai-is-bringing-back-older-chatgpt-models/
1•pera•4m ago•0 comments

Dyson Sphere Could Bring Humans Back from the Dead

https://www.popularmechanics.com/science/a65615574/dyson-sphere-digital-resurrection-human-immortality/
1•Bluestein•5m ago•0 comments

Labubu AI

https://labubuai.net
1•MintNow•7m ago•0 comments

Classification of the Approaches to the Technological Resurrection

https://www.academia.edu/36998733/Classification_of_the_approaches_to_the_technological_resurrection
1•Bluestein•10m ago•0 comments

LLM advises deleting the Linux dynamic linker during a troubleshooting session

https://old.reddit.com/r/linux4noobs/comments/1mlveoo/help/
2•Santosh83•24m ago•0 comments

The Most Nihilistic Conflict on Earth

https://www.theatlantic.com/magazine/archive/2025/09/sudan-civil-war-humanitarian-crisis/683563/
1•YeGoblynQueenne•30m ago•1 comments

Show HN: I'm trying to quit vaping and hoping someone will join me

https://www.iquitvape.com/
1•jayqinohboi•35m ago•0 comments

What Declarative Languages Are

https://semantic-domain.blogspot.com/2013/07/what-declarative-languages-are.html
1•fanf2•41m ago•0 comments

I cancelled my ChatGPT subscription today

6•dontlike2chat•43m ago•0 comments

Onion: Stack Language Compiled to Lua

https://github.com/yumaikas/onion
2•Bogdanp•48m ago•0 comments

A Fully Automatic Morse Code Teaching Machine (1977)

https://c2.com/morse/
1•austinallegro•53m ago•0 comments

'It's missing something': AGI, superintelligence and a race for the future

https://www.theguardian.com/technology/2025/aug/09/its-missing-something-agi-superintelligence-and-a-race-for-the-future
1•nhojb•53m ago•0 comments

Workers whose jobs AI can do less likely than other workers to be unemployed

https://eig.org/ai-and-jobs-the-final-word/
1•JumpCrisscross•53m ago•0 comments

Hospital Shift Scheduling with OR-Tools

https://barkeywolf.consulting/posts/hospital-scheduling/
1•jjhbarkeywolf•56m ago•0 comments

Show HN: The "Firebase" for MCP Servers – Build, test, and deploy MCP servers

https://www.contexaai.com/
1•rupesh_raj29•56m ago•0 comments

Ask HN: Do you know any open source games?

2•Forgret•1h ago•1 comments

The new era of house music

https://open.spotify.com/playlist/2sCu2R0XnUTw9na0ofT4vb
1•playlsd•1h ago•0 comments

Logarithmic mean energy optimization: a metaheuristic algorithm

https://www.nature.com/articles/s41598-025-00594-2
1•bryanrasmussen•1h ago•1 comments

Money Habits That Separate Successful Traders from the Rest

https://propfirmfx.com/
1•malavika_manoj•1h ago•1 comments

Systemic Racism and Memetics

https://medium.com/luminasticity/on-systemic-racism-f708ac2efe51
2•bryanrasmussen•1h ago•0 comments

Spent $510 on cursor in the last 30d – AMA

1•xucian•1h ago•1 comments

Globe TV: Free Live TV Worldwide

https://globetv.app/
1•thunderbong•1h ago•0 comments

Why Deep Learning Works Unreasonably Well

https://www.youtube.com/watch?v=qx7hirqgfuU
1•phildawes•1h ago•0 comments

CMakeDependencyDiagram – Interactive target dependency visualization for CMake

https://github.com/renn0xtek9/CMakeDependencyDiagram
1•renn0xtek9•1h ago•1 comments

Time to Talk Numbers

https://hugston.com/articles/Time_to_talk_numbers
1•trilogic•1h ago•1 comments

Who uses hand scan data, and how is it used?

1•aurelien•1h ago•0 comments

Why Paying for Spotify Mostly Pays Taylor Swift

https://mertbulan.com/2025/08/10/why-paying-for-spotify-mostly-pays-taylor-swift/
3•mertbio•1h ago•2 comments

Culture Game Over

https://web.archive.org/web/20171018143123/https://www.numair.com/culture/game-over
1•kwie•1h ago•1 comments

We're building "klarna" but for your annual software subscriptions

https://www.annualize.co/
2•bfayyumii•1h ago•2 comments

Active context extraction > passive context capture with LLMs

2•foundress•2d ago
As models get better, context windows expand, and tokens get cheaper, there is an explicit race for context.

Context is the holy grail. With the right context, models can read your data, situation, and constraints to generate more relevant output. Better context lets you tell the model what you mean in fewer iterations.

Context capture takes different forms, however.

Browsers, screen recorders, and products that sync with the email, calendar, and drive accounts where you keep your information are gaining traction. I believe passive context is largely solved.

Another form of context, still very poorly tapped, is the one hidden in your own brain: patterns learned from data and feedback you have seen, your thinking process and constraints, your tacit domain knowledge and world model, your preferences and interpretation of reality.

The real bottleneck is getting that information out of a human brain and into the model as efficiently and precisely as possible.

Active extraction is broken. We burn hours translating what’s in our heads into prompts, specs, or comments.

Say you write a 500-word prompt, then realize you forgot the one nuance that actually matters, or a constraint that would have changed the output. You split tasks into micro-prompts because dumping the whole mental model at once is impossible. You often start from zero rather than iterating further, because the returns on iteration diminish while the token and time costs do not.

As humans, we can juggle maybe 3-4 things at once; complex specs can be composed of 10-100 different concepts, far beyond that limit. It does not help that LLMs still demand big monolithic prompts, so we end up offloading a lot of detail and memory to the models.
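
One way to picture the alternative (a hypothetical sketch in Python; the field names and the banking example are illustrative, not from this post): externalize the mental model as named chunks that can be edited one at a time and re-rendered into the monolithic prompt the model ultimately wants.

    # Hypothetical sketch: the spec as named chunks instead of one monolithic prompt.
    # Each concept lives in its own slot, so a forgotten nuance is a one-line edit.
    spec = {
        "goal": "Classify bank transactions into merchant categories.",
        "inputs": "Raw transaction strings, e.g. 'AMZN MKTP US*2K3L1 SEATTLE WA'.",
        "output_format": "JSON with fields: merchant, category, confidence.",
        "constraints": "Never invent a merchant; return null when unsure.",
        "edge_cases": "Refunds are negative amounts; keep the original category.",
    }

    def render_prompt(spec: dict) -> str:
        """Assemble the chunks into the monolithic prompt the model needs."""
        return "\n\n".join(f"## {key}\n{value}" for key, value in spec.items())

    # The nuance you forgot becomes an incremental edit, not a rewrite:
    spec["constraints"] += " Amounts are in minor units (cents)."
    print(render_prompt(spec))

The point is that the structure, not the human's working memory, holds the 10-100 concepts.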

No one is truly going after this problem. In fact, the ones who should are not incentivized to.

Most of the revenue generated in AI today is in fact accelerated by this bottleneck, so the companies building productivity tools are not truly motivated to address it.

So where is the next productivity leap? Models that can read our minds better than we can and preempt every need? Models and products that passively gather every possible piece of context about us? Brain-computer interfaces?

Interfaces that can shrink the mind-to-model gap, help the model do what I mean, let me think out loud in real time, capture nuance without friction, and refine my intent are going to have the most impact today.

We have built such a tool internally at Ntropy and have been using it for a while to set up and refine almost all of our LLM pipelines. Today, we are sharing it with the world.

Below are some raw thoughts and design principles that went into it:

Mixed initiative. A productive human-to-model interface needs to be dialogue-driven, with the model taking the more proactive role: it initiates precise follow-ups that lead you through chunk-by-chunk thinking instead of asking for a straight dump of thought, drawing out and inferring what you really want it to do one chunk at a time.

Visual scaffolding. Our brains often need structure that is persistent and gets updated as we add or remove detail or change inputs.

Real-time and continuous spec evals. Everyone is focused on output evaluations, which are important and effective but costly, hard to act on, and often misleading: they are biased toward your own dataset and lack ground truth. Continuous input evals and context-quality assessment will completely change LLM-powered development and work in general, including evaluations and the developer experience.
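
For concreteness, a minimal sketch of what a mixed-initiative extraction loop could look like (illustrative only, not Ntropy's actual tool; every name here is hypothetical, and ask_model is a placeholder for any chat-completion call):

    import json

    def ask_model(prompt: str) -> str:
        """Placeholder for any chat-completion call (hosted API or local model)."""
        raise NotImplementedError

    def extract_spec(goal: str, max_turns: int = 10) -> dict:
        # The living spec doubles as the visual scaffolding: a persistent
        # structure that grows chunk by chunk instead of one monolithic dump.
        spec = {"goal": goal, "constraints": [], "examples": [], "open_questions": []}
        for _ in range(max_turns):
            # Mixed initiative: the model, not the user, decides what to ask next.
            question = ask_model(
                "You are eliciting a spec from a domain expert.\n"
                f"Current spec:\n{json.dumps(spec, indent=2)}\n"
                "Ask the single most important follow-up question, "
                "or reply DONE if the spec is complete enough to act on."
            )
            if question.strip() == "DONE":
                break
            answer = input(f"{question}\n> ")  # the human supplies one chunk at a time
            # Continuous input eval: fold the answer back in, so context quality
            # is assessed every turn, before any expensive output evaluation runs.
            spec = json.loads(ask_model(
                "Update this spec with the new answer. Return JSON only.\n"
                f"Spec: {json.dumps(spec)}\nQ: {question}\nA: {answer}"
            ))
        return spec

The living spec provides the scaffolding, the model-chosen questions provide the mixed initiative, and because the spec is re-inspected every turn, input quality is evaluated continuously rather than only at output time.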

As we continue using the tool on production inputs, our thinking and this list are evolving rapidly. We cannot wait for more people to try it and share their experience so we can improve and add to it. Will share the link in the comments.

Comments

foundress•2d ago
https://www.theaifluencycompany.com
chaisan•2d ago
Reminds me of the idea of Do What I Mean (DWIM), coined in the 1960s by Warren Teitelman. More relevant now than ever.