Is any of you using LLMs to create full features in big enterprise apps?

3•not_that_d•1h ago
Let me be clear first: I don't dislike LLMs. I query them, trigger agents to do tasks where I roughly know the end goal, and use them to analyze small parts of an application.

That said, every time I give one something a little more complex than a single-file script, it fails me horribly. Either the code is really bad, or the approach is as bad as that of someone who doesn't really know what to do, or it plainly starts doing things I explicitly said not to do in the initial prompt.

I have sometimes asked my LLM-fan coworkers to come and help when that happens, and they aren't able to "fix it" either, but somehow I am the one doing it wrong due to a "wrong prompt" or "lack of correct context".

I have created a lot of "Agents.md" files, dropped files into the context window... Nothing.
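To be concrete, this is the kind of context file I mean; the layout, paths, and commands below are made-up illustrations, not my actual project:

```markdown
# AGENTS.md (illustrative example)

## Project layout
- `src/billing/` – invoicing domain logic
- `src/billing/gen/` – generated code, never edit by hand

## Rules for agents
- Run the repo's type checker and test suite after every change.
- Never add a new dependency without asking first.
- Follow the existing module structure; do not restructure packages.
```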

When I need to do greenfield work or PoCs it delivers fast, but applying it inside an existing big application fails.

The only place where I feel as "productive" as other people claim to be is when I work in languages or technologies I don't know at all, but then again, I also don't know whether the working code I get at the end is broken in ways I'm not aware of.

Are any of you really using LLMs to create full features in big enterprise apps?

Comments

linesofcode•25m ago
The quality of an LLM's output depends greatly on how many guard rails you have set up to keep it on track, and on heuristics that point it in the right direction (type checking plus running the tests after every change, for example).
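A minimal sketch of what I mean by a guard rail, assuming a gate that runs after every change; the tool names in the example (`mypy`, `pytest`) are placeholders for whatever checkers your stack actually uses:

```python
import subprocess

def gate(commands):
    """Run each guard-rail command in order; stop at the first failure.

    `commands` is a list of argv lists, e.g. [["mypy", "src/"], ["pytest", "-q"]].
    The specific tools are placeholders; plug in your own type checker and
    test runner. Returns True only if every command exits with status 0.
    """
    for cmd in commands:
        if subprocess.run(cmd).returncode != 0:
            return False  # reject the change and feed the failure back to the model
    return True
```

Wired into an agent loop, the point is that every edit either passes the gate or gets reverted, which is what keeps larger tasks from drifting.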

What is the health of your enterprise code base? If it's anything like the ones I've experienced, a legacy mess, then it's absolutely understandable that an LLM's output is subpar on larger tasks.

It also depends on the models and plan you're on. There is a significant increase in quality between Cursor's default model on a free plan and Opus 4.5 on a maximum Claude plan.

I think a good exercise is to prohibit yourself from writing any code manually and force yourself to go LLM-only. It might sound silly, but it will develop that skill set.

Try Claude Code in thinking mode with superpowers: https://github.com/obra/superpowers

I routinely make an implementation plan with Claude and then step away for 15 minutes while it spins. The results aren't perfect, but fixing the remaining 10% beats writing 100% of it myself.