I produced a better way to get agents to make quality code, not just syntax

https://ai-lint.dosaygo.com
1•keepamovin•1h ago

Comments

keepamovin•1h ago
AI Slop is a thing, right? I guess I glossed over it for a long time. But agents do make a bunch of mistakes. My dev process has always been, even before AI, to just persevere through. Before AI was writing imperfect code, so was I. And in my 30 years of programming experience (literally since I was 10 years old), and 3-4 years of AI-assisted work, my strategy has always been: just stay in the code, and get through.

However the last year produced quite a few changes for me. I moved to a new, better location. My life got drastically better, and I started thinking about things from a more open perspective for that and a bunch of other reasons.

Somehow the appeal of pushing myself in multi-week sprints, racing the tasks, staying up until a significant chunk of work was done and I "could rest", was gone. I realized AI could take a lot of the load off me, which I had been wanting for a long time, but before AI there was no other way to do that -- I just kept persevering.

So I really leaned into AI to let me stay out of the code, move faster and -- I guess -- break things. But at first it didn't really feel like "breaking things". It felt like "development, AI-assistance style". I thought: this is just how it is, we do multiple iterations, some bugs show up, we fix them, but at least I don't have to be in the code (save my wrists!), and don't need my head deep in the code (save my cognitive space for other things, in real life, a higher perspective).

I think anyone who has done deep work in this field for a long time can relate to what I'm saying about being so deep in work that you lose touch with life outside of that, at least for the time you're working. And the cognitive load of context switching, while easier the more you do it, never really felt clean to me.

Don't get me wrong - I loved programming. The feeling of building, solving problems, and being deep in the zone. It was a great vibe! When you were flying smoothly, it felt so good. But many days it's just obstacles, and you're racing to complete the same-sized chunk of work that on other days you'd breeze through. It's just the way it is.

And I believe AI changes this. But it didn't at first, and I didn't realize why: it was just a more cognitively lightweight, higher-level version of the same flow I'd done before. Mistakes, iterate.

Now, though, I think things can be different. I've developed two tools that are really changing the way I do agentic coding, and they feel way more aligned with where I want to be and how I want to work. One is https://ai-chat.email - an email bridge to your local CLI agents. It means I can voice-type to an agent that stays at home on my laptop doing the actual work, while I provide high-level guidance from wherever I am. Total game changer. But it's still kind of in beta; if you want to check it out, beware, there are still a couple more bugs to figure out before it's really smooth.

But that ease of UX, or DevX, which I love and need, does not solve the fundamental problem of Slop. AI Slop is not a derogatory term to dismiss all AI creations as somehow "beneath the glory of mine own craftsmanship" or whatever. I think it can be a precise term for how AIs are good at syntax and surface patterns, as well as the overall shape of an application, but don't always use the right sub-patterns and building blocks to tie it all together.

So I created a set of scripts based on my experience using agents over the last 3-4 years and being intimately aware of their strengths and weaknesses. The scripts attempt to give the agent ways to make code that belongs in a codebase, not code that just works. By that I mean code that's in line with the grain of the language, that doesn't fight it, that is not just syntactically idiomatic but cognizant of broader patterns and footguns. All those things that are latent, but maybe not emphasized as important, in the training data - which is perhaps part of the problem, since the sheer mass of training data means agents regress to the "mean mush" of the Internet's global codebase of everything ever written.

The scripts contain DOs, but also DONTs. With reasons. They are optimized for context injection. There's also clear guidance for the AIs on how to use them, what to do when they inevitably conflict, how users can implement overrides, and how agents handle that.

So it's more an "operating system" for agentic high-quality coding, written into markdown files. Simple text files are all you need. That's it. No magic, no code to run - just instructions that agents can understand and are required to follow.
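As a sketch of what I mean (the rule names and file layout here are purely illustrative, not the actual pack contents), one such markdown instruction file might look something like:

```markdown
<!-- sketch only: hypothetical rule file, not the real pack layout -->
# golang/errors.md

## DO: wrap errors with context
Use fmt.Errorf("loading config: %w", err) so callers can unwrap.
Reason: bare error returns lose the call-site story and slow debugging.

## DONT: panic in library code
Return an error instead; panics are for unrecoverable programmer bugs.
Reason: callers cannot recover cleanly across package boundaries.

## Conflicts and overrides
If a project-level override file contradicts a rule here, the override
wins; the agent should note the deviation in its summary.
```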

It's still a work in progress, and there are many domains to cover, but I'm all about making AI workflows better for me, and for you. Because I made this for me, but I'm not the only person out there working like this.

I saw agents make the same mistakes again and again, in golang, assembly, JavaScript and python. Especially in testing and debugging.

The corpus is arranged into a series of zip file overlays that you can unzip over your existing packs; they merge cleanly because the directory trees are orthogonal. In practice, you probably only need 1, 2 or 3 of the more than 5 languages and frameworks normally included in any pack.

There are probably ways we can make it better, but I'll discover that over time. And I'm sure that in future this approach will become redundant as agents and AI simply become far more competent. But for a time, this might be just what you need. So I'm happy if you try it out. That's my kind of 1000-story elevator pitch, haha. So do you want to make your AI workflows go better? Check it out.

All the paid packs come with security (protecting tokens, secrets, IP, etc.) and debugging doctrine, too. Some of this stuff may sound basic - and it is, I think. But the reality is that agents don't always follow the good patterns without being told that's what they need to do.

gigatexal•33m ago
Wait you’re selling a prompt?

Better C Generics: The Extendible _Generic

https://github.com/JacksonAllan/CC/blob/main/articles/Better_C_Generics_Part_1_The_Extendible_Gen...
1•marcodiego•47s ago•0 comments

PowerShell architect retires after decades at the prompt

https://www.theregister.com/2026/01/22/powershell_snover_retires/
1•doppp•2m ago•0 comments

Headcanon Generator

https://www.genstory.app/text-template/headcanon-generator
1•RyanMu•6m ago•0 comments

China no longer Pentagon's top security priority

https://www.bbc.com/news/articles/cj9r8ezym3ro
1•breve•9m ago•0 comments

TikTok US venture to collect precise user location data

https://www.bbc.com/news/articles/cvgnj7v2rr5o
3•colinprince•17m ago•0 comments

The Case Against Humanity

1•codenighter•19m ago•0 comments

If an AI Summarized Your Company Today, Could You Prove It Tomorrow?

https://www.aivojournal.org/if-an-ai-summarized-your-company-today-could-you-prove-it-tomorrow/
1•businessmate•23m ago•0 comments

Test disregard

https://ai-chat.email
1•keepamovin•29m ago•0 comments

Inside vLLM: Anatomy of a High-Throughput LLM Inference System

https://www.aleksagordic.com/blog/vllm
1•mellosouls•30m ago•1 comments

Request for Proposals: The Launch Sequence

https://ifp.org/rfp-launch/
1•gmays•32m ago•0 comments

Show HN: Supe – Give your AI agent a brain, not just memory

https://github.com/xayhemLLC/supe
1•xxayh•32m ago•0 comments

Inference startup Inferact lands $150M to commercialize vLLM

https://techcrunch.com/2026/01/22/inference-startup-inferact-lands-150m-to-commercialize-vllm/
2•mellosouls•37m ago•1 comments

Artemis

https://www.turintech.ai/artemis
1•grodriguez100•40m ago•0 comments

ANN v3: 200ms p99 query latency over 100B vectors

https://turbopuffer.com/blog/ann-v3
1•pbardea•41m ago•0 comments

Life/Art Lessons – Origami

https://isonomiaquarterly.com/archive/volume-3-issue-4/life-art-lessons-origami/
1•s4074433•45m ago•0 comments

B.A.T.M.A.N Protocol Concept (2011)

https://www.open-mesh.org/projects/open-mesh/wiki/BATMANConcept
2•jstrieb•50m ago•0 comments

Maryam: The Mirror and the Map

https://www.infinityfilmsmirzakhani.com
1•mellosouls•51m ago•1 comments

Show HN: ZTerm, a GPU-accelerated terminal emulator built with Rust and GPUI

https://github.com/zerx-lab/zTerm
1•zero-lab•52m ago•0 comments

Considering Strictly Monotonic Time

https://matklad.github.io/2026/01/23/strictly-monotonic-time.html
1•todsacerdoti•54m ago•0 comments

Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health"

https://arstechnica.com/security/2026/01/overrun-with-ai-slop-curl-scraps-bug-bounties-to-ensure-...
2•smsm42•56m ago•0 comments

The Catastrophe Paradox: Disaster as the Hidden Architect of Genius

https://medium.com/@chipmunkworks/the-catastrophe-paradox-39f63f4b773d
2•treelover•57m ago•0 comments

Rust's Standard Library on the GPU

https://www.vectorware.com/blog/rust-std-on-gpu/
4•justaboutanyone•58m ago•0 comments

Coding or Gambling?

https://mcwhittemore.com/posts/2026-01-24-slot-machine-vibes.html
2•mcwhittemore•1h ago•0 comments

Ask HN: Where do you look for semiconductor jobs?

2•johncole•1h ago•0 comments

Why Intel Tanked

https://www.wsj.com/tech/intel-problems-trump-bump-17d2c941
4•johncole•1h ago•0 comments

We Are Witnessing the End of Tesla's EV Empire

https://www.theamericansaga.com/p/we-are-witnessing-the-end-of-teslas
3•senti_sentient•1h ago•0 comments

Show HN: TempleOS Playground

https://ring0.holyc.xyz/
2•AlecMurphy•1h ago•0 comments

Cornell Virtual Workshop: Introduction to CUDA

https://cvw.cac.cornell.edu/cuda-intro
2•vinhnx•1h ago•0 comments

God Emperor Trump

https://en.wikipedia.org/wiki/God_Emperor_Trump
6•KnuthIsGod•1h ago•1 comments

Seeking alignment on product boundaries for an early-stage social platform

https://github.com/wakaka-stack/product-v--foundational-veto
2•kensei•1h ago•1 comments