
GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•3m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•6m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
1•helloplanets•8m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•16m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•18m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•19m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•20m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•22m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•23m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•27m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•29m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•29m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•30m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•32m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•35m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•38m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•44m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•45m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•51m ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•52m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•53m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•55m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•57m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•59m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

So whats the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•1h ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•1h ago•1 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•1h ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•1h ago•2 comments

Are any of you using LLMs to create full features in big enterprise apps?

7•not_that_d•1mo ago
Let me be clear first: I don't dislike LLMs. I query them, trigger agents to do things where I roughly know the end goal, and use them to analyze small parts of an application.

That said, every time I give it something a little more complex than a single-file script, it fails me horribly. Either the code is really bad, or the approach is as bad as that of someone who doesn't really know what to do, or it plain starts doing things I explicitly said not to do in the initial prompt.

I have sometimes asked my LLM-fan coworkers to come and help when that happens, and they can't "fix it" either, but somehow I am the one doing it wrong due to a "wrong prompt" or "lack of correct context".

I have created a lot of "Agents.md" files, dropped files into the context window... Nothing.

When I do greenfield work or PoCs it delivers fast, but applying it inside an existing big application fails.

The only place where I feel as "productive" as other people say they are is when I work in languages or technologies I don't know at all. But then again, I also don't know whether the working code I get at the end is broken in ways I am not aware of.

Are any of you guys really using LLMs to create full features in big enterprise apps?

Comments

linesofcode•1mo ago
The quality of an LLM's output is greatly dependent on how many guardrails you have set up to keep it on track, and on heuristics that point it in the right direction (type checking and running tests after every change, for example).
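
A minimal sketch of that kind of gate, assuming a Python codebase (mypy, pytest, and the "src" layout are stand-ins for whatever your stack uses); the agent is instructed to run it after every change:

    # check.py - run after every agent change; any failure stops the loop
    import subprocess
    import sys

    def run(cmd: list[str]) -> None:
        print(">>", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            sys.exit(1)  # fail loudly so the agent has to react before moving on

    run(["mypy", "src"])   # type checking keeps it on track
    run(["pytest", "-q"])  # tests catch regressions after each change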

What is the health of your enterprise code base? If it's anything like the ones I've experienced (a legacy mess), then it's absolutely understandable that an LLM's output is subpar on larger tasks.

It also depends on the models and the plan you're on. There is a significant jump in quality between Cursor's default model on a free plan and Opus 4.5 on a max Claude plan.

I think a good exercise is to prohibit yourself from writing any code manually and force yourself to go LLM-only. It might sound silly, but it develops that skill set.

Try Claude Code in thinking mode with Superpowers - https://github.com/obra/superpowers

I routinely make an implementation plan with Claude and then step away for 15 minutes while it spins - the results aren't perfect, but fixing that remaining 10% is better than writing 100% of it myself.

not_that_d•1mo ago
The code is quite easy to follow, to be honest: we have documented a lot and segmented functionality into libraries that follow an app/feature/models pattern. Almost every service we have has unit tests explicitly describing what the public API does, or is supposed to do, in several scenarios; we never test implementation details.
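
For illustration, the style of test I mean - behavior only, no internals (a sketch with made-up names; pytest assumed):

    # The test pins what the public API is supposed to do in one scenario,
    # never how it does it.
    from billing.api import apply_discount  # hypothetical public entry point

    def test_discount_never_drops_total_below_zero():
        assert apply_discount(total_cents=1000, discount_cents=1500) == 0

    def test_zero_discount_leaves_total_unchanged():
        assert apply_discount(total_cents=1000, discount_cents=0) == 1000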

Giving it to new people of course raises questions, but most of them (juniors) could just follow the code given an entry point for the task, and that goes from BE to FE.

I use the GitHub Copilot premium models that are available.

> I routinely make an implementation plan with Claude and then step away for 15 minutes while it spins - the results aren't perfect, but fixing that remaining 10% is better than writing 100% of it myself.

I have to be honest: I tried this twice, and the amount of code that needed fixing, plus the mental overhead of hunting down open bugs, was much worse than just guiding the LLM at every step. But that was a couple of months ago.

not_that_d•1mo ago
Besides my other response, it could also be that I am not smart enough for it.
journal•1mo ago
The quality of an LLM's output is greatly dependent on the inputs. Your brain is Swiss cheese and LLMs are the filler.
raw_anon_1111•1mo ago
Rule #1: I don't do agentic coding. I keep my hands on the steering wheel and have it build everything up step by step: validate the code, commit, repeat.
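
A sketch of that loop as a small script (git, mypy, and pytest assumed; the names are made up). I review the diff, then run it with a message:

    # step.py - validate the current step; commit only if everything is green
    import subprocess
    import sys

    def green(cmd: list[str]) -> bool:
        return subprocess.run(cmd).returncode == 0

    if not (green(["mypy", "src"]) and green(["pytest", "-q"])):
        sys.exit("checks failed - fix before committing")

    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", sys.argv[1]], check=True)

Usage is python step.py "describe the step", once per reviewed chunk.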
kasey_junk•1mo ago
With agentic coding, people underestimate the agent and overestimate the model's value. So it's important to be specific: which agent are you using? You will see radical performance differences between Claude Code and Codex compared to Copilot, for instance. You will also see pretty big differences if you have well-groomed, agent-specific agents files. Especially if the code base is very large, the agents files need to guide the agent to make connections in the code.
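
For example, a hypothetical excerpt from that kind of file, spelling out connections the agent can't infer on its own (every path and command here is made up):

    # AGENTS.md (hypothetical excerpt)
    - Payments logic lives in services/payments; its only public surface is services/payments/api.py.
    - Handlers never touch the ORM directly; go through the repositories in core/repos/.
    - After every change run `make typecheck test`; never leave a red run behind.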

But other than that, what I've found most important is static tooling. Do you have rules that require tests to be run? Do you have linters and code formatters that enforce your standards? Are you using well-known tools (build tools, dependency management tools, etc.) or is it all bespoke?

But the less sexy answer is that no, you can't drop an agent cold into a big codebase and expect it to perform miracles. You need to build out agentic flows as a process that you iterate on and improve. If you prompt an agent and it gets something wrong, evaluate why, and build out the tooling so that next time it won't. You slowly level up the capabilities of the tool by improving it over time.

I can't emphasize enough the difference between agents, though. I've been doing a lot of A/B tests pitting Copilot against other agents, and it's wild how bad it is, even backed by the same models.

kevinherron•1mo ago
The problem is that you still think the perfect prompt or AGENTS.md or whatever is going to get you a one-shotted (or close) feature in return. There isn't (yet) a model or orchestration framework that will take a large feature from start to finish for you.

The reality is that LLMs/agents are just a new way to write code. You still need to understand, more or less, how the feature is actually going to work and how it needs to be implemented, from start to finish.

The difference is that you don't write the code; you tell the LLM to write the code. Once you've figured out the right "chunk size" an LLM can handle, it's faster than doing it yourself.

I've found it's actually a little _harder_ in greenfield projects because the LLM doesn't have guardrails, examples, and existing patterns to follow.

rokoss21•1mo ago
Yes, but only when the LLM is treated as an implementation detail, not the feature itself.

In enterprise systems, “full features” built directly on model output tend to fail at the edges: permissions, retries, validation, and auditability. The teams that succeed put a deterministic layer around the model — schemas, tool boundaries, and explicit failure handling.

Once you do that, the LLM stops being the risky part. The architecture is.
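
A sketch of what that deterministic layer can look like (pydantic assumed; the schema and field names are made up):

    # Validate model output at the boundary before it touches the system.
    from pydantic import BaseModel, ValidationError

    class RefundDecision(BaseModel):  # hypothetical feature schema
        approve: bool
        amount_cents: int
        reason: str

    def parse_decision(raw: str) -> RefundDecision | None:
        try:
            return RefundDecision.model_validate_json(raw)
        except ValidationError:
            return None  # explicit failure path: retry, escalate, or audit-log

Everything downstream consumes RefundDecision, never the raw model text.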