Ask HN: Anyone Using a Mac Studio for Local AI/LLM?

44•UmYeahNo•1d ago•28 comments

Ask HN: Ideas for small ways to make the world a better place

12•jlmcgraw•12h ago•18 comments

Ask HN: Non AI-obsessed tech forums

21•nanocat•10h ago•17 comments

Ask HN: 10 months since the Llama-4 release: what happened to Meta AI?

44•Invictus0•1d ago•11 comments

Ask HN: Non-profit, volunteers run org needs CRM. Is Odoo Community a good sol.?

2•netfortius•7h ago•1 comment

AI Regex Scientist: A self-improving regex solver

6•PranoyP•14h ago•1 comment

Ask HN: Who wants to be hired? (February 2026)

139•whoishiring•4d ago•514 comments

Ask HN: Who is hiring? (February 2026)

312•whoishiring•4d ago•511 comments

Tell HN: Another round of Zendesk email spam

104•Philpax•2d ago•54 comments

Ask HN: Is Connecting via SSH Risky?

19•atrevbot•2d ago•37 comments

Ask HN: Has your whole engineering team gone big into AI coding? How's it going?

17•jchung•2d ago•12 comments

Ask HN: Why LLM providers sell access instead of consulting services?

4•pera•20h ago•13 comments

Ask HN: What is the most complicated Algorithm you came up with yourself?

3•meffmadd•22h ago•7 comments

Ask HN: How does ChatGPT decide which websites to recommend?

5•nworley•1d ago•11 comments

Ask HN: Any International Job Boards for International Workers?

2•15charslong•10h ago•2 comments

Ask HN: Is it just me or are most businesses insane?

7•justenough•1d ago•6 comments

Ask HN: Mem0 stores memories, but doesn't learn user patterns

9•fliellerjulian•2d ago•6 comments

Ask HN: Is there anyone here who still uses slide rules?

123•blenderob•3d ago•122 comments

Kernighan on Programming

170•chrisjj•4d ago•61 comments

Ask HN: Anyone Seeing YT ads related to chats on ChatGPT?

2•guhsnamih•1d ago•4 comments

Ask HN: Does global decoupling from the USA signal comeback of the desktop app?

5•wewewedxfgdf•1d ago•2 comments

We built a serverless GPU inference platform with predictable latency

5•QubridAI•2d ago•1 comment

Ask HN: How Did You Validate?

4•haute_cuisine•1d ago•4 comments

Ask HN: Does a good "read it later" app exist?

8•buchanae•3d ago•18 comments

Ask HN: Have you been fired because of AI?

17•s-stude•4d ago•15 comments

Ask HN: Cheap laptop for Linux without GUI (for writing)

15•locusofself•3d ago•16 comments

Ask HN: Anyone have a "sovereign" solution for phone calls?

12•kldg•3d ago•1 comment

Test management tools for automation heavy teams

2•Divyakurian•1d ago•2 comments

Ask HN: OpenClaw users, what is your token spend?

14•8cvor6j844qw_d6•4d ago•6 comments

Ask HN: Has anybody moved their local community off of Facebook groups?

23•madsohm•5d ago•18 comments

How do you keep AI-generated applications consistent as they evolve over time?

11•RobertSerber•2w ago
Hi HN,

I’ve been experimenting with letting LLMs generate and then continuously modify small business applications (CRUD, dashboards, workflows). The first generation usually works — the problems start on the second or third iteration.

Some recurring failure modes I keep seeing:

• schema drift that silently breaks dashboards
• metrics changing meaning across iterations
• UI components querying data in incompatible ways
• AI fixing something locally while violating global invariants

What’s striking is that most AI app builders treat generation as a one-shot problem, while real applications are long-lived systems that need to evolve safely.

The direction I’m exploring is treating the application as a runtime model rather than generated code (rough sketch below):

• the app is defined by a structured, versioned JSON/DSL (entities, relationships, metrics, workflows)
• every AI-proposed change is validated by the backend before execution
• UI components bind to semantic concepts (metrics, datasets), not raw queries
• AI proposes structure; the runtime enforces consistency
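
To make that concrete, here is a minimal Python sketch of the shape I mean. Everything in it is hypothetical (the spec layout, validate, apply_change); the point is only that the AI proposes a patch and the runtime refuses to apply it if any global invariant would break.

    # Minimal sketch (all names hypothetical): the app is a versioned spec, and an
    # AI-proposed change is applied only if the whole spec still validates.
    import copy

    APP_SPEC = {
        "version": 12,
        "entities": {
            "invoice": {"fields": {"id": "uuid", "amount": "decimal", "status": "enum"}},
        },
        "metrics": {
            "open_invoice_total": {"entity": "invoice", "agg": "sum", "field": "amount"},
        },
        "dashboards": {
            "finance": {"widgets": [{"metric": "open_invoice_total"}]},
        },
    }

    def validate(spec):
        """Global invariants the runtime enforces, regardless of what the LLM proposed."""
        errors = []
        for name, metric in spec["metrics"].items():
            entity = spec["entities"].get(metric["entity"])
            if entity is None:
                errors.append(f"metric {name} references missing entity {metric['entity']}")
            elif metric["field"] not in entity["fields"]:
                errors.append(f"metric {name} references missing field {metric['field']}")
        for dash, cfg in spec["dashboards"].items():
            for widget in cfg["widgets"]:
                if widget["metric"] not in spec["metrics"]:
                    errors.append(f"dashboard {dash} binds to unknown metric {widget['metric']}")
        return errors

    def apply_change(spec, change):
        """Apply an AI-proposed patch only if the resulting spec still satisfies the invariants."""
        candidate = copy.deepcopy(spec)
        if change["op"] == "remove_field":
            del candidate["entities"][change["entity"]]["fields"][change["field"]]
        errors = validate(candidate)
        if errors:
            raise ValueError(f"change rejected: {errors}")
        candidate["version"] += 1
        return candidate

    # This proposal would silently break the finance dashboard, so the runtime rejects it:
    try:
        apply_change(APP_SPEC, {"op": "remove_field", "entity": "invoice", "field": "amount"})
    except ValueError as e:
        print(e)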

Conceptually this feels closer to how Kubernetes treats infrastructure, or how semantic layers work in analytics — but applied to full applications rather than reporting.

I’m curious:

• Has anyone here explored similar patterns?
• Are there established approaches to controlling AI-driven schema evolution?
• Do you think semantic layers belong inside the application runtime, or should they remain analytics-only?

Not pitching anything — genuinely trying to understand how others are approaching AI + long-lived application state.

Thanks.

Comments

jaynamburi•2d ago
Consistency in AI-generated apps usually comes down to treating prompts + outputs like real software artifacts. What’s worked for us: versioned system prompts, strict schemas (JSON + validators), golden test cases, and regression evals on every change. We snapshot representative inputs/outputs and diff them in CI the same way you’d test APIs. Also important: keep model upgrades behind feature flags and roll out gradually.
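
Roughly what that looks like in Python (the paths, field names, and snapshot layout here are all made up; wire in your own LLM client wherever the output dict comes from):

    # Rough sketch of the golden-test + CI-diff idea (hypothetical paths and
    # field names).
    import difflib
    import json
    import pathlib

    GOLDEN_DIR = pathlib.Path("tests/golden")          # checked-in snapshots
    REQUIRED_FIELDS = {"intent", "priority", "reply"}  # the contract downstream code relies on

    def check_output(case_name, output):
        problems = []
        missing = REQUIRED_FIELDS - output.keys()
        if missing:
            problems.append(f"{case_name}: missing fields {sorted(missing)}")
        golden_path = GOLDEN_DIR / f"{case_name}.json"
        if golden_path.exists():
            golden = golden_path.read_text().splitlines(keepends=True)
            current = (json.dumps(output, indent=2, sort_keys=True) + "\n").splitlines(keepends=True)
            diff = list(difflib.unified_diff(golden, current, "golden", "current"))
            if diff:
                problems.append(f"{case_name}: drifted from snapshot\n" + "".join(diff))
        return problems

Run that over the batch of representative cases in CI and fail the build if anything comes back; snapshot changes then land as explicit, reviewed diffs rather than silent behavior changes.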

Real example: in one LLM-powered support tool, a minor prompt tweak changed tone and broke downstream parsers. We fixed it by adding contract tests (expected fields + phrasing constraints) and running batch replays before deploy. Think of LLMs as nondeterministic services: you need observability, evals, and guardrails, not just “better prompts.”
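
A stripped-down, hypothetical version of that kind of contract test (classify_ticket is a stand-in for the real LLM-backed function; the recorded tickets are the replay batch):

    # Hypothetical contract test: expected fields plus a phrasing constraint,
    # replayed over a small batch of recorded tickets before deploy.
    import re

    RECORDED_TICKETS = ["My invoice is wrong", "Cancel my subscription, please"]

    def classify_ticket(ticket):
        # placeholder for the actual model call
        return {"category": "billing", "customer_reply": "Thanks for reaching out! ..."}

    def test_reply_contract():
        for ticket in RECORDED_TICKETS:
            result = classify_ticket(ticket)
            # expected fields the downstream parser depends on
            assert {"category", "customer_reply"} <= result.keys()
            # phrasing constraint: a tone tweak must not reintroduce an apology-first opener
            assert not re.match(r"(?i)\s*(sorry|we apologize)", result["customer_reply"])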