StovexGlobal – Compliance Gaps to Note

1•ReviewShield•1m ago•0 comments

Show HN: Afelyon – Turns Jira tickets into production-ready PRs (multi-repo)

https://afelyon.com/
1•AbduNebu•2m ago•0 comments

Trump says America should move on from Epstein – it may not be that easy

https://www.bbc.com/news/articles/cy4gj71z0m0o
1•tempodox•3m ago•0 comments

Tiny Clippy – A native Office Assistant built in Rust and egui

https://github.com/salva-imm/tiny-clippy
1•salvadorda656•7m ago•0 comments

LegalArgumentException: From Courtrooms to Clojure – Sen [video]

https://www.youtube.com/watch?v=cmMQbsOTX-o
1•adityaathalye•10m ago•0 comments

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
1•petethomas•13m ago•1 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•18m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•33m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
2•init0•40m ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•40m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•43m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•45m ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•55m ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•56m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•1h ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•1h ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•1h ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•1h ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•1h ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•1h ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•1h ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•1h ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
4•cwwc•1h ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Vibecodeprompts treats prompts like infrastructure

https://vibecodeprompts.com/
2•rubenhellman•1mo ago

Comments

rubenhellman•1mo ago
I have been playing with Vibecodeprompts for a bit and what stood out to me is not the prompts themselves, but the framing.

Most “prompt libraries” assume the problem is wording. As if better adjectives or clever roleplay magically produce reliable systems. That has never matched my experience. The real failure mode is drift, inconsistency, and lack of shared structure once things scale beyond a single chat window.

Vibecodeprompts seems to implicitly accept that prompting is closer to infra than copywriting.

The prompts are opinionated. They encode assumptions about roles, constraints, iteration loops, and failure handling. You can disagree with those assumptions, but at least they are explicit. That alone is refreshing in a space where most tools pretend neutrality while smuggling in defaults.

What I found useful was not copying prompts verbatim, but studying how they are composed. You can see patterns emerge. Clear system boundaries. Explicit reasoning budgets. Separation between intent, process, and output. Guardrails that are boring but effective.
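To make that concrete, here is a minimal sketch of what that composition style can look like when a prompt is treated as reviewable infrastructure. The `PromptSpec` class and every field name below are my own illustration, not anything taken from Vibecodeprompts itself:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PromptSpec:
    """A prompt treated as reviewable infrastructure, not ad-hoc copy.
    (Hypothetical structure for illustration only.)"""
    role: str            # system boundary: who the model is allowed to be
    intent: str          # what the caller wants, stated once
    process: list[str]   # explicit steps, so drift is visible in review
    output_contract: str # output kept separate from intent and process
    reasoning_budget: str = "think step by step, at most 5 short steps"
    guardrails: list[str] = field(default_factory=lambda: [
        "If required context is missing, ask instead of guessing.",
        "Never invent APIs, flags, or file paths.",
    ])

    def render(self) -> str:
        # Deterministic rendering: same spec -> same prompt text, which
        # makes specs diffable and testable like any other config.
        steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(self.process))
        rails = "\n".join(f"- {g}" for g in self.guardrails)
        return (
            f"ROLE:\n{self.role}\n\n"
            f"INTENT:\n{self.intent}\n\n"
            f"REASONING BUDGET:\n{self.reasoning_budget}\n\n"
            f"PROCESS:\n{steps}\n\n"
            f"GUARDRAILS:\n{rails}\n\n"
            f"OUTPUT CONTRACT:\n{self.output_contract}"
        )

review_prompt = PromptSpec(
    role="You are a code reviewer for a Python service. You do not write features.",
    intent="Flag correctness bugs in the attached diff.",
    process=["List suspicious hunks", "Explain each suspected bug", "Rate severity"],
    output_contract="JSON list of {file, line, issue, severity}.",
)
print(review_prompt.render())
```

The specific fields are not the point; the point is that the rendered prompt becomes a deterministic function of a spec that can be diffed, reviewed, and iterated on.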

In other words, this is less “here is a magic prompt” and more “here is a way to think about working with models as unreliable collaborators”.

That also explains why this probably will not appeal to everyone. If you want instant magic, this is not it. You still have to think. You still have to adapt things to your domain. But if you are building anything persistent, reusable, or shared with other people, that effort feels unavoidable anyway.

Curious how others here think about this. Do you treat prompts as disposable glue, or as something closer to code that deserves structure, review, and iteration over time?

chrisjj•1mo ago
Seriously? When the same prompt to the same LLM on a different day can give different results seemingly at random?
onion2k•1mo ago
That only matters if the system you're using requires a specific input to achieve the desired outcome. For example, I can write a prompt for Claude Code to 'write a tic tac toe game in React' and it will give me a working tic tac toe game that's written in React. If I repeat the prompt 100 times I'll get 100 different outputs, but I'll only get one outcome: a working game.

For systems where it's the outcome that matters but the output doesn't, prompts will work as a proxy for the code they generate.

Although, all that said, very few systems work this way. Almost all software systems are too fragile to actually be used like that right now. A fairly basic React component is one of the few examples where it could apply.
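As a sketch of what using the prompt as a proxy could mean in practice: judge each generated variant by an outcome check rather than by its text. This assumes pytest is available on the path; `outcome_ok` and the commented-out names are hypothetical:

```python
import subprocess
import tempfile
from pathlib import Path

def outcome_ok(generated_code: str, test_code: str) -> bool:
    # Judge the artifact by outcome (do the tests pass?), not by its text.
    with tempfile.TemporaryDirectory() as tmp:
        Path(tmp, "game.py").write_text(generated_code)
        Path(tmp, "test_game.py").write_text(test_code)
        result = subprocess.run(
            ["python", "-m", "pytest", "-q", "test_game.py"],
            cwd=tmp, capture_output=True, timeout=60,
        )
        return result.returncode == 0

# 100 runs of the same prompt may produce 100 different outputs;
# the outcome check collapses them all into pass/fail:
# accepted = [out for out in model_outputs if outcome_ok(out, TESTS)]
```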

chrisjj•1mo ago
> For systems where it's the outcome that matters but the output doesn't

Stochastic parrots do not know the difference.

onion2k•1mo ago
They don't know anything at all about outcomes. Systems rarely do, whether it's AI or not. Outcomes are 'output * impact', where the impact is what we measure when we see changes driven by the output of the system. In a good process the impact feeds into the system to produce a better output on the next iteration.
chrisjj•1mo ago
> Outcomes are 'output * impact'

By that definition, your "systems where it's the outcome that matters but the output doesn't" is a null set.

anthk•1mo ago
Except prompts and LLMs are not predictable, while an experienced programmer is. Ditto for true classical AI: Lisps with constraint-based solvers, whether under Common Lisp or under a custom Lisp such as Zenlisp, where everything is built from a few axioms:

https://t3x.org/zsp/index.html

With LLMs you will often lack predictability, if there is any at all. More than once I have had to correct them over trivial errors in Tcl, and they often lack cohesion between different answers.

That was solved even in virtual machines for text adventures such as the Z-Machine, where a clear relation between objects was defined from the start, so a playable world emerged from a few rules and the objects themselves rather than being scripted case by case. When you define attributes for objects in a text adventure, the language maps 1:1 to the virtual machine, and it behaves in a predictable way.
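For anyone who never touched the Z-Machine, here is a rough Python analogue of that idea; the object names and rules are my own toy example, not Infocom's actual bytecode model. Attributes are declared up front and every verb is a fixed rule over them, so the same command in the same state always yields the same answer:

```python
# Toy analogue of Z-Machine-style objects: attributes declared up front,
# verbs as fixed rules over them -> fully predictable behaviour.
world = {
    "lamp": {"portable": True},
    "door": {"portable": False},
}
inventory: set[str] = set()

def do_take(obj: str) -> str:
    if obj not in world:
        return "You can't see any such thing."
    if not world[obj]["portable"]:
        return "That's fixed in place."
    inventory.add(obj)
    return f"Taken: {obj}."

# Same state + same input -> same output, every single run.
assert do_take("lamp") == "Taken: lamp."
assert do_take("door") == "That's fixed in place."
```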

You don't need the 600-page ANSI C standard plus POSIX and glibc, or the 3000-plus pages of the AMD64/i386 ISA manuals, in order to predict basic behaviour. It's all there.

Can LLMs do this? No, by design. They are like huge word predictors with eidetic memory. They may be somewhat good at interpolating, but they are useless at extrapolating.

They don't understand semantics. OTOH, the Inform6 language targeting the Z-Machine interpreter has objects with implicit behaviour in their syntax, plus a basic parser for the player's actions. That adds a bit of context generated from the relations between the objects.
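In the same toy spirit (again my own sketch, not real Inform6), the parser step looks roughly like this: the player's named input is resolved against objects in scope, and a rule may consult an implicit second object the player never typed:

```python
scope = {"door"}          # objects visible in the current room
carried = {"brass key"}   # the player's inventory

def parse(command: str) -> str:
    verb, _, noun = command.strip().partition(" ")
    if noun not in scope:
        return "You can't see any such thing."
    if verb == "unlock":
        # Implicit object: the rule checks the inventory for a key the
        # player never named. Input, output, and implicit object occupy
        # three distinct slots and cannot be mixed up.
        return "Unlocked." if "brass key" in carried else "With what?"
    return "I didn't understand that sentence."

assert parse("unlock door") == "Unlocked."
```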

The rest is just decorated descriptions from the programmer, where the in-game answer can change once you drop certain objects and the like.

Cosmetic changes in the end, because internally an action is mapped that is indistinguishable from the vanilla output of the Inform6 English library. And Gen-Zers don't understand, when older people tell them, that no LLM will come close to a game designed by a programmer, be it in Inform6 or Inform7, because an LLM will often mix up the named input, the named output, and the implicit named object.