frontpage.

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•54s ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•1m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•2m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•2m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•2m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
1•simonw•3m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•3m ago•0 comments

Show HN: Velocity - Cheaper Linear Clone

https://velocity.quest
1•kevinelliott•4m ago•1 comment

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•5m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
1•nmfccodes•6m ago•0 comments

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
1•eatitraw•12m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•12m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•13m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•15m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•16m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•16m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
2•birdmania•16m ago•1 comment

First Proof

https://arxiv.org/abs/2602.05192
3•samasblack•18m ago•1 comment

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•19m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•20m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•21m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
2•facundo_olano•23m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•23m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•23m ago•1 comment

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•24m ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•24m ago•0 comments

Show HN: iPlotCSV: CSV Data, Visualized Beautifully for Free

https://www.iplotcsv.com/demo
2•maxmoq•25m ago•0 comments

There's no such thing as "tech" (Ten years later)

https://www.anildash.com/2026/02/06/no-such-thing-as-tech/
2•headalgorithm•26m ago•0 comments

List of unproven and disproven cancer treatments

https://en.wikipedia.org/wiki/List_of_unproven_and_disproven_cancer_treatments
1•brightbeige•26m ago•0 comments

Me/CFS: The blind spot in proactive medicine (Open Letter)

https://github.com/debugmeplease/debug-ME
1•debugmeplease•27m ago•1 comment

Vector search on our codebase transformed our SDLC automation

https://medium.com/@antonybrahin/grounding-ai-in-reality-how-vector-search-on-our-codebase-transformed-our-sdlc-automation-7d068b1244a8
34•antonybrahin•5mo ago
Hey HN,

In software development, the process of turning a user story into detailed documentation and actionable tasks is critical. However, this manual process is often a source of inconsistency and a significant time investment. I wanted to see if I could streamline and improve it.

I know this is a hot space, with big players like GitHub and Atlassian building integrated AI, and startups offering specialized platforms. My goal wasn't to compete with them, but to see what was possible by building a custom, "glass box" solution using the best tools for each part of the job, without being locked into a single ecosystem.

What makes this approach different is the flexibility and full control. Instead of a pre-packaged product, this is a resilient workflow built on Power Automate, which acts as the orchestrator for a sequence of API calls:

Five calls to the Gemini API for the core generation steps (requirements, tech spec, test strategy, etc.).

One call to an Azure OpenAI model to create vector embeddings of our codebase.

One call to Azure AI Search to perform the retrieval step of Retrieval-Augmented Generation (RAG). This was the key to getting context-aware, non-generic outputs: it reads our actual code to inform the technical spec and tasks.

Several direct calls to the Azure DevOps REST API (using a PAT) to create the wiki pages and work items, since the standard connectors were a bit limited (a rough sketch of these calls follows below the list).
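
For concreteness, here is a minimal Python sketch of what the embedding, vector-search, and work-item calls can look like outside of Power Automate. The endpoint URLs, deployment name, index name, and field names below are illustrative assumptions, not details from the post:

    import base64
    import requests

    # --- Assumed configuration (illustrative, not from the post) ---
    AOAI_ENDPOINT = "https://my-aoai.openai.azure.com"
    AOAI_DEPLOYMENT = "text-embedding-ada-002"       # embedding deployment name (assumed)
    SEARCH_ENDPOINT = "https://my-search.search.windows.net"
    SEARCH_INDEX = "codebase-index"                  # index of code chunks (assumed)
    DEVOPS_ORG, DEVOPS_PROJECT = "my-org", "my-project"
    AOAI_KEY, SEARCH_KEY, DEVOPS_PAT = "...", "...", "..."

    def embed(text: str) -> list[float]:
        """Create a vector embedding of the user story with Azure OpenAI."""
        r = requests.post(
            f"{AOAI_ENDPOINT}/openai/deployments/{AOAI_DEPLOYMENT}/embeddings?api-version=2023-05-15",
            headers={"api-key": AOAI_KEY},
            json={"input": text},
        )
        r.raise_for_status()
        return r.json()["data"][0]["embedding"]

    def search_codebase(vector: list[float], k: int = 5) -> list[dict]:
        """Run a vector query against Azure AI Search to pull the most relevant code chunks."""
        r = requests.post(
            f"{SEARCH_ENDPOINT}/indexes/{SEARCH_INDEX}/docs/search?api-version=2023-11-01",
            headers={"api-key": SEARCH_KEY},
            json={"vectorQueries": [{"kind": "vector", "vector": vector,
                                     "fields": "contentVector", "k": k}]},
        )
        r.raise_for_status()
        return r.json()["value"]

    def create_work_item(title: str, description: str) -> dict:
        """Create an Azure DevOps Task via the REST API, authenticated with a PAT."""
        auth = base64.b64encode(f":{DEVOPS_PAT}".encode()).decode()
        r = requests.post(
            f"https://dev.azure.com/{DEVOPS_ORG}/{DEVOPS_PROJECT}/_apis/wit/workitems/$Task?api-version=7.0",
            headers={"Authorization": f"Basic {auth}",
                     "Content-Type": "application/json-patch+json"},
            json=[
                {"op": "add", "path": "/fields/System.Title", "value": title},
                {"op": "add", "path": "/fields/System.Description", "value": description},
            ],
        )
        r.raise_for_status()
        return r.json()

In the actual workflow these are HTTP actions orchestrated by Power Automate; the sketch only shows the shape of each request.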

The biggest challenge was moving beyond simple prompts and engineering a resilient system. Forcing the final output into a rigid JSON schema instead of parsing text was a game-changer for reliability.
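
As a sketch of the "rigid JSON schema" idea: the Gemini API can constrain its output to a schema via generationConfig. The model name and schema below are illustrative assumptions, not the author's actual prompts or schema:

    import requests

    GEMINI_KEY = "..."            # assumed setup; the post doesn't show its keys or model
    MODEL = "gemini-1.5-pro"      # illustrative model name

    # Illustrative schema: force the task breakdown into a fixed shape
    # instead of parsing free-form text.
    task_schema = {
        "type": "ARRAY",
        "items": {
            "type": "OBJECT",
            "properties": {
                "title": {"type": "STRING"},
                "description": {"type": "STRING"},
                "estimate_hours": {"type": "NUMBER"},
            },
            "required": ["title", "description"],
        },
    }

    resp = requests.post(
        f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent?key={GEMINI_KEY}",
        json={
            "contents": [{"parts": [{"text": "Break this user story into tasks: ..."}]}],
            "generationConfig": {
                "responseMimeType": "application/json",
                "responseSchema": task_schema,
            },
        },
    )
    resp.raise_for_status()
    # The model's reply is a JSON string that conforms to task_schema.
    tasks_json = resp.json()["candidates"][0]["content"]["parts"][0]["text"]

Because the response is constrained to the schema, downstream steps can map fields directly instead of scraping free text.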

The result is a system that saves us hours on every story and produces remarkably consistent, high-quality documentation and tasks.

The full write-up with all the challenges, final prompts, and screenshots is in the linked blog post.

I’m here to answer any questions. Would love to hear your feedback and ideas!

Comments

photon_garden•5mo ago
Curious how they've assessed quality, either qualitatively or quantitatively. How often do the generated documents miss important parts of the codebase or hallucinate requirements? How often do engineers have to redo work because the LLM convincingly told them to build the wrong thing?

You can build real, production-grade systems using LLMs, but these are the hard questions you have to answer.

18cmdick•5mo ago
They haven't.
cyanydeez•5mo ago
Yes. It's amazing we've gotten this far with LLMs, with everyone believing everyone else has actually validated their claims that _their_ LLM is producing valid output.

Essentially, you've got a bunch of nerds generating code and believing that because it looks right, every other subject being output must also be correct.

antonybrahin•5mo ago
My goal was to reduce the manual work of creating these documents. The output is definitely a draft and needs to be reviewed by an architect and a QA lead before it's passed on. The generated tasks contain the actual actionable work, which can be used as prompts in Cursor or VS Code.
antonybrahin•5mo ago
Yes, it's not tested for large volume yet.
antonybrahin•5mo ago
This is not production ready yet, but based on my preliminary tests, the outputs are about 80% consistent. The plan, of course, is for the architect to review the specs before devs are assigned.
AIorNot•5mo ago
One easy way to judge the quality of the spec the AI generates is to run it a few times on the same story and compare the differences.

Curious if you tried that - how much variation does the AI produce, or does the grounding in the codebase and prompts keep it focused and real?
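
For example, a rough sketch of such a consistency check (illustrative only, not something from the post):

    import difflib
    import itertools

    def consistency(specs: list[str]) -> float:
        """Average pairwise similarity of several generated specs for the same
        story: 1.0 means identical runs, lower means more variation."""
        ratios = [
            difflib.SequenceMatcher(None, a, b).ratio()
            for a, b in itertools.combinations(specs, 2)
        ]
        return sum(ratios) / len(ratios)

    # e.g. run the pipeline five times on the same user story, then:
    # print(consistency([spec_1, spec_2, spec_3, spec_4, spec_5]))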

antonybrahin•5mo ago
I haven't done intensive tests yet, but based on my preliminary tests, the output is about 80% consistent. The remaining runs mostly differ by suggesting additional changes.
cratermoon•5mo ago
"outputs a full requirements document, a technical specification, a test plan, and a complete set of ready-to-work tasks"

No talking to those pesky people needed! I’m certain that an LLM would spit out a perfectly average spec acceptable to the average user.

antonybrahin•5mo ago
I assume you are me.
WhitneyLand•5mo ago
Does anyone write anymore?

It’s difficult to read posts that rely so heavily on AI-generated prose.

Everything’s a numbered/bulleted list and the same old turns of speech describe any scenario.

That aside, what’s really keeping this from being useful is the lack of results. How well does this approach work? Who knows. If the data is sensitive, seeing it work on an open-source repo would still be illuminating.

Also, we hear a lot elsewhere about the limitations of relying on embeddings for coding tools; it would be interesting to know how those limitations are overcome here.

antonybrahin•5mo ago
Interesting point on embeddings; I'll research that more. As far as I know, though, that's currently the best available way of identifying close matches. I'll look into whether there are any alternatives.
WhitneyLand•5mo ago
Antony, you’d be right to call me out for not providing a source. So in case it’s helpful, this is the last place I recall the subject being discussed:

RAG is Dead, Context Engineering is King

https://www.latent.space/p/chroma

antonybrahin•5mo ago
I will check it out and make the updates necessary. Thank you for sharing that.
antonybrahin•5mo ago
Hello HN, sorry for coming to this late - it was past midnight for me when the post was put up by the mods. I'll try to answer all the questions now; thanks for being patient.