Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•1m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•4m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•4m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
1•tosh•10m ago•0 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•11m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•11m ago•1 comments

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•14m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•17m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•17m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•17m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•17m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•19m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•21m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•23m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•25m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•26m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•26m ago•1 comments

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•29m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
5•sakanakana00•32m ago•1 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•34m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•35m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•37m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
4•Nive11•37m ago•6 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•40m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
3•chartscout•43m ago•1 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•46m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•47m ago•1 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•52m ago•1 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•54m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•57m ago•0 comments

Vibecodeprompts treats prompts like infrastructure

https://vibecodeprompts.com/
2•rubenhellman•1mo ago

Comments

rubenhellman•1mo ago
I have been playing with Vibecodeprompts for a bit and what stood out to me is not the prompts themselves, but the framing.

Most “prompt libraries” assume the problem is wording. As if better adjectives or clever roleplay magically produce reliable systems. That has never matched my experience. The real failure mode is drift, inconsistency, and lack of shared structure once things scale beyond a single chat window.

Vibecodeprompts seems to implicitly accept that prompting is closer to infra than copywriting.

The prompts are opinionated. They encode assumptions about roles, constraints, iteration loops, and failure handling. You can disagree with those assumptions, but at least they are explicit. That alone is refreshing in a space where most tools pretend neutrality while smuggling in defaults.

What I found useful was not copying prompts verbatim, but studying how they are composed. You can see patterns emerge. Clear system boundaries. Explicit reasoning budgets. Separation between intent, process, and output. Guardrails that are boring but effective.
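Those composition patterns can be sketched in code. The following is a minimal, hypothetical illustration of "prompts as infrastructure" in Python (my own names throughout; it is not Vibecodeprompts' actual format): each prompt is a versioned, immutable artifact with explicit sections for role, intent, process, and output, rendered in a fixed order so changes show up cleanly in review diffs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptSpec:
    """A prompt treated as a reviewable artifact, not a one-off string."""
    version: str          # bump on any change, like a schema migration
    role: str             # system boundary: who the model is allowed to be
    intent: str           # what the caller wants, kept separate from how
    process: str          # iteration loop / reasoning budget
    output_format: str    # explicit contract for the response shape
    guardrails: tuple = ()  # boring but effective constraints

    def render(self) -> str:
        # Fixed section order, so diffs between versions are meaningful.
        parts = [
            f"ROLE: {self.role}",
            f"INTENT: {self.intent}",
            f"PROCESS: {self.process}",
            f"OUTPUT: {self.output_format}",
        ]
        parts += [f"GUARDRAIL: {g}" for g in self.guardrails]
        return "\n".join(parts)

review_prompt = PromptSpec(
    version="1.2.0",
    role="Senior reviewer. Do not rewrite code, only comment on it.",
    intent="Find correctness bugs in the diff below.",
    process="Read the whole diff before commenting. At most 5 findings.",
    output_format="One finding per line: <file>:<line>: <issue>",
    guardrails=("Never speculate about code outside the diff.",),
)
```

The point of the sketch is the structure, not the wording: freezing the dataclass and versioning it makes drift visible, which is exactly what a pile of ad-hoc chat messages never gives you.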

In other words, this is less “here is a magic prompt” and more “here is a way to think about working with models as unreliable collaborators”.

That also explains why this probably will not appeal to everyone. If you want instant magic, this is not it. You still have to think. You still have to adapt things to your domain. But if you are building anything persistent, reusable, or shared with other people, that effort feels unavoidable anyway.

Curious how others here think about this. Do you treat prompts as disposable glue, or as something closer to code that deserves structure, review, and iteration over time?

chrisjj•1mo ago
Seriously? When the same prompt to the same LLM on a different day can give different results seemingly at random?
onion2k•1mo ago
That only matters if the system you're using requires a specific input to achieve the desired outcome. For example, I can write a prompt for Claude Code to 'write a tic tac toe game in React' and it will give me a working tic tac toe game that's written in React. If I repeat the prompt 100 times I'll get 100 different outputs, but I'll only get one outcome: a working game.

For systems where it's the outcome that matters but the output doesn't, prompts will work as a proxy for the code they generate.

Although, all that said, very few systems work this way. Almost all software systems are too fragile to actually be used like that right now. A fairly basic React component is one of the few examples where it could apply.
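The "outcome as proxy" idea above can be made concrete with a retry loop: keep the prompt fixed, let the output vary per attempt, and accept only when an outcome check passes. A rough sketch, where `call_model` is a hypothetical stub standing in for a real (nondeterministic) LLM call:

```python
import random

def call_model(prompt: str, seed: int) -> str:
    """Hypothetical stand-in for an LLM call: nondeterministic output."""
    rng = random.Random(seed)
    # Simulates varying outputs per run, some of which fail the check.
    if rng.random() > 0.3:
        return "def add(a, b): return a + b"
    return "def add(a, b): return a - b"

def outcome_check(code: str) -> bool:
    """The outcome we actually care about: does the generated code behave?"""
    ns = {}
    exec(code, ns)  # run the generated snippet in a scratch namespace
    return ns["add"](2, 3) == 5

def generate_until_outcome(prompt: str, attempts: int = 10):
    # Different outputs each attempt, but only one acceptable outcome.
    for seed in range(attempts):
        candidate = call_model(prompt, seed)
        if outcome_check(candidate):
            return candidate
    return None
```

This only works where the fragility concern above doesn't bite, i.e. where you have an executable check for the outcome; without one, the output is all you can judge.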

chrisjj•1mo ago
> For systems where it's the outcome that matters but the output doesn't

Stochastic parrots do not know the difference.

onion2k•1mo ago
They don't know anything at all about outcomes. Systems rarely do, whether it's AI or not. Outcomes are 'output * impact', where the impact is what we measure when we see changes driven by the output of the system. In a good process the impact feeds into the system to produce a better output on the next iteration.
chrisjj•1mo ago
> Outcomes are 'output * impact'

By that definition, your "systems where it's the outcome that matters but the output doesn't" is a null set.

anthk•1mo ago
Except that prompts and LLMs are not predictable, and an experienced programmer is. Ditto for true classical-AI Lisps with constraint-based solvers, whether under Common Lisp or under custom Lisps such as Zenlisp, where everything is built from a few axioms:

https://t3x.org/zsp/index.html

With LLMs you will often lack predictability, if there is any to begin with. More than once I have had to correct them over trivial errors in Tcl, and they often lack cohesion between different answers.

That was solved even in virtual machines for text adventures such as the Z-machine, where a clear relation between objects was defined from the start, and a playable world emerged from a few rules plus the objects themselves rather than being built up piece by piece. When you define attributes for objects in a text adventure, the language maps 1:1 to the virtual machine, and it behaves in a predictable way.
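That attribute-driven determinism is easy to sketch in Python (not Inform 6; the names here are mine): objects carry attribute flags, and the verb dispatch is a pure function of those flags, so the same input always produces the same action.

```python
class Thing:
    """A world object whose behaviour is fully determined by attribute flags,
    loosely analogous to Z-machine object attributes."""
    def __init__(self, name, **attrs):
        self.name = name
        self.attrs = attrs  # e.g. takeable, openable

def act(verb: str, obj: Thing) -> str:
    """Deterministic verb dispatch: the flags alone decide the outcome."""
    if verb == "take":
        return "Taken." if obj.attrs.get("takeable") else "That's fixed in place."
    if verb == "open":
        return "Opened." if obj.attrs.get("openable") else "You can't open that."
    return "Nothing happens."

lamp = Thing("brass lamp", takeable=True)
door = Thing("oak door", openable=True)
```

Run `act("take", lamp)` a thousand times and you get "Taken." a thousand times; that repeatability under a tiny rule set is the contrast being drawn with LLM output.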

You don't need a 600-page ANSI C standard plus POSIX, glibc, and more than 3000 pages of AMD64/i386 ISA manuals in order to predict basic behaviour. It's all there.

Can LLMs get this? No, by design. They are like huge word predictors with eidetic memory. They might be somewhat good at interpolating, but they are useless at extrapolating.

They don't understand semantics. OTOH, the Inform 6 language targeting the Z-machine interpreter has objects with implicit behaviour in their syntax, plus a basic parser for the user's actions. That adds context generated from the relations between the objects.

The rest is just decorated descriptions from the programmers, where the in-game answer can change once you drop certain objects and the like.
Cosmetic changes in the end, because internally a mapped action fires that is indistinguishable from the vanilla output of the Inform 6 English library. And Gen-Zers don't understand, when older people tell them, that no LLM will come close to a game designed by a programmer, be it in Inform 6 or Inform 7, because an LLM will often mix up the named input, the named output, and the implicit named object.
Cosmetic changes in the end, because internally there's mapped a action which is indistinguisable from the vanilla output from the Inform6 English library. And Gen-Z ers don't understand that when older people tell them that no LLM will be close to a designed game from a programmer, be in Inform6 or Inform7. Because an LLM's will often mix named input, named output and the implicit named object.