frontpage.

Web.whatsapp.com appears to be having issues syncing and sending messages

http://web.whatsapp.com
1•sabujp•38s ago•1 comment

Google in Your Terminal

https://gogcli.sh/
1•johlo•2m ago•0 comments

Shannon: Claude Code for Pen Testing

https://github.com/KeygraphHQ/shannon
1•hendler•2m ago•0 comments

Anthropic: Latest Claude model finds more than 500 vulnerabilities

https://www.scworld.com/news/anthropic-latest-claude-model-finds-more-than-500-vulnerabilities
1•Bender•6m ago•0 comments

Brooklyn cemetery plans human composting option, stirring interest and debate

https://www.cbsnews.com/newyork/news/brooklyn-green-wood-cemetery-human-composting/
1•geox•6m ago•0 comments

Why the 'Strivers' Are Right

https://greyenlightenment.com/2026/02/03/the-strivers-were-right-all-along/
1•paulpauper•8m ago•0 comments

Brain Dumps as a Literary Form

https://davegriffith.substack.com/p/brain-dumps-as-a-literary-form
1•gmays•8m ago•0 comments

Agentic Coding and the Problem of Oracles

https://epkconsulting.substack.com/p/agentic-coding-and-the-problem-of
1•qingsworkshop•9m ago•0 comments

Malicious packages for dYdX cryptocurrency exchange empty user wallets

https://arstechnica.com/security/2026/02/malicious-packages-for-dydx-cryptocurrency-exchange-empt...
1•Bender•9m ago•0 comments

Show HN: I built a <400ms latency voice agent that runs on a 4GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
1•shubham-coder•10m ago•0 comments

Penisgate erupts at Olympics; scandal exposes risks of bulking your bulge

https://arstechnica.com/health/2026/02/penisgate-erupts-at-olympics-scandal-exposes-risks-of-bulk...
3•Bender•10m ago•0 comments

Arcan Explained: A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
1•fanf2•12m ago•0 comments

What did we learn from the AI Village in 2025?

https://theaidigest.org/village/blog/what-we-learned-2025
1•mrkO99•12m ago•0 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
1•bri3d•14m ago•0 comments

The P in PGP isn't for pain: encrypting emails in the browser

https://ckardaris.github.io/blog/2026/02/07/encrypted-email.html
2•ckardaris•17m ago•0 comments

Show HN: Mirror Parliament where users vote on top of politicians and draft laws

https://github.com/fokdelafons/lustra
1•fokdelafons•17m ago•1 comment

Ask HN: Opus 4.6 ignoring instructions, how to use 4.5 in Claude Code instead?

1•Chance-Device•19m ago•0 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
1•ColinWright•21m ago•0 comments

Jim Fan calls pixels the ultimate motor controller

https://robotsandstartups.substack.com/p/humanoids-platform-urdf-kitchen-nvidias
1•robotlaunch•25m ago•0 comments

Exploring a Modern SMPTE 2110 Broadcast Truck with My Dad

https://www.jeffgeerling.com/blog/2026/exploring-a-modern-smpte-2110-broadcast-truck-with-my-dad/
1•HotGarbage•25m ago•0 comments

AI UX Playground: Real-world examples of AI interaction design

https://www.aiuxplayground.com/
1•javiercr•26m ago•0 comments

The Field Guide to Design Futures

https://designfutures.guide/
1•andyjohnson0•26m ago•0 comments

The Other Leverage in Software and AI

https://tomtunguz.com/the-other-leverage-in-software-and-ai/
1•gmays•28m ago•0 comments

AUR malware scanner written in Rust

https://github.com/Sohimaster/traur
3•sohimaster•30m ago•1 comment

Free FFmpeg API [video]

https://www.youtube.com/watch?v=6RAuSVa4MLI
3•harshalone•30m ago•1 comment

Are AI agents ready for the workplace? A new benchmark raises doubts

https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-do...
2•PaulHoule•35m ago•0 comments

Show HN: AI Watermark and Stego Scanner

https://ulrischa.github.io/AIWatermarkDetector/
1•ulrischa•36m ago•0 comments

Clarity vs. complexity: the invisible work of subtraction

https://www.alexscamp.com/p/clarity-vs-complexity-the-invisible
1•dovhyi•37m ago•0 comments

Solid-State Freezer Needs No Refrigerants

https://spectrum.ieee.org/subzero-elastocaloric-cooling
2•Brajeshwar•37m ago•0 comments

Ask HN: Will LLMs/AI Decrease Human Intelligence and Make Expertise a Commodity?

1•mc-0•39m ago•1 comment

Would you use an LLM that follows instructions reliably?

3•gdevaraj•8mo ago
I'm considering a startup idea and want to validate whether others see this as a real problem.

In my experience, current LLMs (like GPT-4 and Claude) often fail to follow detailed user instructions consistently. For example, even after explicitly telling the model not to use certain phrases, follow a strict structure, or maintain a certain style, it frequently ignores part of the prompt or gives a different output every time. This becomes especially frustrating for complex, multi-step tasks or when working across multiple sessions where the model forgets the context or preferences you’ve already given.

This isn't just an issue in writing tasks; I've seen the same problem in coding assistance, task planning, structured data generation (JSON/XML), tutoring, and research workflows.

I’m thinking about building a layer on top of existing LLMs that allows users to define hard constraints and persistent rules (like tone, logic, formatting, task goals), and ensures the model strictly follows them, with memory across sessions.
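To make the idea concrete, such a layer could be a thin validation wrapper around any text-in/text-out model. This is only an illustrative sketch, not a description of an existing product: the rule list, the `enforce` function, and the retry-with-feedback loop are all hypothetical, and the model is assumed to be an arbitrary callable.

```python
from typing import Callable

# Hypothetical constraint layer: each rule pairs a human-readable description
# (fed back to the model on failure) with a predicate over the model's output.
RULES = [
    ("must not contain the phrase 'as an AI'",
     lambda t: "as an AI" not in t),
    ("every line must be a bullet starting with '- '",
     lambda t: all(line.startswith("- ") for line in t.strip().splitlines())),
]

def enforce(generate: Callable[[str], str], prompt: str, max_retries: int = 3) -> str:
    """Call the model, check its output against RULES, and retry with the
    violated rules appended to the prompt. Raises if retries are exhausted."""
    for _ in range(max_retries):
        text = generate(prompt)
        violated = [desc for desc, ok in RULES if not ok(text)]
        if not violated:
            return text
        prompt += "\nYour last answer broke these rules: " + "; ".join(violated)
    raise RuntimeError("model never satisfied constraints: " + "; ".join(violated))
```

The key design question is whether feeding violations back as text actually converges; in practice a layer like this would likely need a deterministic fallback (rejecting or post-editing the output) rather than relying on retries alone.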

Before pursuing this as a startup, I’d like to understand:

Have you experienced this kind of problem?

In what tasks does it show up most for you?

Would solving it be valuable enough to pay for?

Do you see this as something LLM providers will solve themselves soon, or is there room for an external solution?

Comments

ggirelli•8mo ago
> Have you experienced this kind of problem? In what tasks does it show up most for you?

I have experienced this type of problem. A colleague asked an LLM to convert a list of items in a text to a table. The model managed to skip 3 out of 7 items from the list somehow.

> Would solving it be valuable enough to pay for? Do you see this as something LLM providers will solve themselves soon, or is there room for an external solution?

The solution I have found so far is to prompt the model to write and execute code to make responses more reproducible. In that way most of the variability ends up in the code, but the code outputs tend to be more consistent, at least in my experience.

That said, I do feel like current providers will start to or are already working on this.
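The write-and-execute-code workaround can be sketched simply: instead of asking the model to render the table directly, build the table in code and verify completeness before accepting it, which would catch the skipped-rows failure described above. The helper names (`all_items_present`, `to_markdown_table`) are hypothetical:

```python
def all_items_present(items: list[str], table_text: str) -> list[str]:
    """Return the source items missing from the rendered table.
    An empty list means the conversion is complete."""
    return [item for item in items if item not in table_text]

def to_markdown_table(items: list[str]) -> str:
    """Deterministic fallback: build the table in code rather than asking the
    model to render it, so no row can be silently dropped."""
    header = "| item |\n| --- |"
    rows = "\n".join(f"| {item} |" for item in items)
    return header + "\n" + rows
```

A model's output could then be accepted only when `all_items_present` returns an empty list, and regenerated (or replaced with the deterministic version) otherwise.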

gdevaraj•8mo ago
Thank you for your time and feedback.
proc0•8mo ago
It's the central problem with AI right now! If this were fixed, it wouldn't matter if they were elementary-school-level AIs; they would still be useful if the output were consistent. If they were reliable, you could establish an upper bound on their capabilities and instantly know that anything below it can be automated with confidence. Right now, they might perform certain tasks at a PhD level, but there is no guarantee they won't fail miserably at some completely trivial task.
gdevaraj•8mo ago
Thank you for your feedback.
dtagames•8mo ago
Prompting and RAG are the only tools you have, like everyone else. What is "tone"? That's not deterministic. You're asking an LLM to predict tone. And logic? Forget it.

To validate (or really, dismiss) this idea, try it with your own RAG app or even with Cursor. There's just no way you can stack enough prompts to turn predictions into determinism.