Beyond this, you need to maintain good test coverage, and you need to have agents red-team your tests aggressively to make sure they're robust.
If you implement these two steps, your agent performance will skyrocket. The planning phase will produce plans that Claude can iterate on for 3+ hours in some cases if you tell it to complete the entire task in one shot, and the robust test validation / change-set analysis will catch agents that solved an easier problem because they got frustrated, or that simply didn't follow directions.
The daily struggle was always those 10 line diffs where you have to learn a lot (from the stakeholder, by debugging, from the docs).
The important thing is that this process is entirely autonomous. You create an issue, which hooks the planners; the completion of a plan artifact hooks a test implementer; the completion of tests hooks the code implementer(s) (having cheaper models generate multiple solutions and taking the best diff works well); and the completion of a solution plus a PR hooks code and security review, test red-teaming, etc.
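A minimal sketch of what that hook chain could look like, assuming a hypothetical event registry; the stage names and the run_agent function below are illustrative, not any particular tool's API:

    # Hypothetical event-driven agent pipeline: each completed artifact triggers the next stage.
    PIPELINE = {
        "issue_created":   ["planner"],
        "plan_completed":  ["test_implementer"],
        "tests_completed": ["code_implementer_a", "code_implementer_b"],  # cheaper models race, best diff wins
        "pr_opened":       ["code_reviewer", "security_reviewer", "test_red_teamer"],
    }

    def run_agent(name: str, artifact: str) -> None:
        """Placeholder for invoking an agent on the current artifact."""
        print(f"[{name}] working on {artifact}")

    def on_event(event: str, artifact: str) -> None:
        for agent in PIPELINE.get(event, []):
            run_agent(agent, artifact)

    on_event("issue_created", "issue-42")

Each stage only needs to emit the next event when its artifact lands, so no human has to sit in the middle of the chain.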
1. Plan, then review the plan.
2. Review the code while changes are in progress, before it even finishes, and fix drift as soon as you see it.
3. Then review again at the end.
4. Add tests and use all your quality tools; don't rely 100% on the LLM (see the sketch after this list).
5. Don't trust an LLM's review of its own code; it's very biased.
These are basic steps that you can adapt as you like.
Avoid a FULLY AUTOMATED AGENT pipeline where you review the code only at the end, unless it's a very small task.
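As a concrete illustration of point 4, here is a small quality-gate script that runs ordinary tooling before any LLM opinion is even considered. A minimal sketch; the specific tools (ruff, mypy, pytest) are common defaults, not a prescription:

    # Minimal quality gate: lint check, type check, tests.
    # Tool selection is illustrative; use whatever your project already standardizes on.
    import subprocess
    import sys

    CHECKS = [
        ["ruff", "check", "."],
        ["mypy", "."],
        ["pytest", "-q"],
    ]

    def main() -> int:
        for cmd in CHECKS:
            print(f"$ {' '.join(cmd)}")
            if subprocess.run(cmd).returncode != 0:
                return 1  # fail fast; don't let an LLM review paper over a red build
        return 0

    if __name__ == "__main__":
        sys.exit(main())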
I’ve been thinking about this for months, and still don’t know what to make of it.
The LLM isn't always smart, but it's always attentive. It rewards that effort in a way that people frequently don't. (Arguably this is a company culture issue, but it's also a widespread issue.)
In organizations that value innovation, people will spend time reading and writing. It's a positive feedback loop, almost a litmus test of the quality of the work culture.
Organizations often seem able to diffuse blame for mistakes within their human bureaucracy, but as that bureaucracy is reduced by AI, individuals become more exposed.
This alone, in my view, is sufficient counterpressure against fully replacing humans in organizations.
Shorter reply: if my AI setup fails, I'm the one to blame. If I do a bad job of helping coworkers perform better, is the blame fully mine?
It turns out that writing and maintaining documentation is just that universally hated.
In the end, regardless of framework or approach, I believe there is a way of using LLMs that will optimize work for developers. I worked with a tech lead who reviewed all PRs and insisted on imports arranged in a specific order. I found it insulting but did it anyway. Now I don't; the bot does.
In the same way, LLMs can be really helpful in planning and building out specific things like REST endpoints, small web components, single functions or classes, and so on.
Glad people are attempting to work on potential approaches like this for taking advantage of these new tools.
which part are you specifically uncertain about?
Years later, the industry evolved to integrate both roles, and new methodologies and new roles appeared.
Now humans write specs and AI agents write code. Role separation has been a principle of the division of labor since Plato.
sublinear•11h ago
If the goal is to start writing code without knowing much, it may be a good way to learn how, and to establish a similar discipline in yourself for tackling projects. I think there's been research that training wheels don't work either, though. Whatever works and gets people learning to write code for real can't be bad, right?
weego•10h ago
Editing this kind of configuration has far less cognitive load and loading time, so distractions aren't as destructive to the task as they are when coding. You can then also structure your time so that productive agent coding happens while you're doing business-critical tasks like meetings, calls, etc.
I do think this is overkill, though, and it's a bad plan (and far too early) to try to formalize The One Way To Instruct AI How To Code, but every advance is an opportunity to gain career traction, so fair play.
jay-baleine•10h ago
Take the PhiCode runtime, for example: a complete programming language with code conversion, performance optimization, and security validation, built in 14 days. The commit history provides trackable evidence; manual development of comparable functionality would have taken a solo developer months.
The "more work" claim doesn't hold up to measurement. AI generates code faster than manual typing while systematic constraints prevent the architectural debt that creates expensive refactoring cycles later. The 5-minute setup phase establishes foundations that enable consistent development throughout the project.
On scalability, the runtime demonstrates 70+ modules maintaining architectural consistency. The 150-line constraint forced modularization that made managing these components feasible - each remains comprehensible and testable in isolation. The approach scales by sharing core context (main entry points, configuration, constants, benchmarks) rather than managing entire codebases.
Teams can collaborate effectively under shared architectural constraints without coordination overhead.
This isn't about training wheels or learning syntax. The methodology treats AI as a systematic development partner focused on architectural thinking rather than ad-hoc prompting. AI handles syntax perfectly - the challenge lies in directing it toward maintainable, scalable solutions at production speed.
Previous attempts at structured AI collaboration may have failed, but this approach addresses specific failure modes through empirical measurement rather than theoretical frameworks.
The perceived 'strictness' provides flexibility within proven constraints. Developers retain complete freedom in implementation approaches, but the constraints prevent common pitfalls like monolithic files or tangled dependencies - like guardrails that keep you on the road.
The project examples and commit histories provide concrete evidence for these development speeds and architectural outcomes.
gravypod•9h ago
I've been looking at the docs, and something I don't fully understand is what PhiCode Runtime actually does. It seems like:
1. Mapping of ligatures -> keywords (ex: ƒ -> def).
2. Caching of several kinds (source content, Python parsing, module imports, and Python bytecode).
3. A call into the phirust-transpiler, which seems to try to convert things into Rust code?
4. An http api for requesting these operations.
A lot of this seems to be done with regexes. Was there a motivation for doing string replacement instead of python -> ast -> conversion -> new ast -> source? What is this code being used for?
jay-baleine•9h ago
1. Symbol mapping: Yes - ƒ → def, ∀ → for, λ → lambda, π → print, etc. Custom mappings are configurable.
2. Multi-layer caching: Confirmed - source content cache, transpiled Python cache, module import specs, and optimized bytecode with batch writes.
3. PhiRust acceleration: Clarification - it's a Rust-based transpiler that handles the symbol-to-Python conversion for performance, not converting Python to Rust. When files exceed 300KB, the system delegates transpilation to the Rust binary instead of using Python regex processing.
4. HTTP API: Yes - provides endpoints for transpilation, symbol mapping queries, and engine info to enable IDE integration.
The technical decision to use string replacement over AST manipulation came down to measured performance differences.
The benchmarks show 3,000,000+ chars/sec throughput on extreme stress tests and 1,200,000+ chars/sec on typical workloads, whereas AST parsing, transformation, and regeneration introduce overhead that makes real-time symbol conversion impractical for large codebases.
The string replacement preserves exact formatting, comments, and whitespace while maintaining compatibility with any Python syntax, including future language features that AST parsers might not support yet. Each symbol maps directly to its Python equivalent without intermediate representations that could introduce transformation errors.
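A toy sketch of that kind of single-pass symbol substitution; the mapping table and function below are illustrative rather than the actual PhiCode code, and this version makes no attempt to skip string literals:

    import re

    # Illustrative symbol table; the real mappings are user-configurable.
    SYMBOL_MAP = {"ƒ": "def", "∀": "for", "λ": "lambda", "π": "print"}

    # One compiled alternation, applied in a single pass over the source.
    _PATTERN = re.compile("|".join(re.escape(s) for s in SYMBOL_MAP))

    def transpile(source: str) -> str:
        """Swap each symbol for its Python keyword; all other text passes through untouched."""
        return _PATTERN.sub(lambda m: SYMBOL_MAP[m.group(0)], source)

    print(transpile("ƒ square(x):\n    π(x * x)\n"))
    # def square(x):
    #     print(x * x)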
The cache system includes integrity validation to detect corrupted cache entries and automatic cleanup of temporary files. Cache invalidation occurs when source files change, preventing stale transpilation results. Batch write operations with atomic file replacement ensure cache consistency under concurrent access.
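One generic way to get those invalidation and atomicity properties, sketched here as an illustration rather than taken from the runtime's actual cache code:

    import hashlib
    import os
    import tempfile

    CACHE_DIR = ".phicache"  # illustrative location

    def _key(source: str) -> str:
        # Content-hash key: any change to the source yields a new key,
        # so stale transpilation results are never served.
        return hashlib.sha256(source.encode("utf-8")).hexdigest()

    def cache_get(source: str) -> str | None:
        try:
            with open(os.path.join(CACHE_DIR, _key(source)), encoding="utf-8") as f:
                return f.read()
        except OSError:
            return None

    def cache_put(source: str, transpiled: str) -> None:
        os.makedirs(CACHE_DIR, exist_ok=True)
        # Write to a temp file, then atomically replace, so concurrent readers
        # never observe a partially written entry.
        fd, tmp = tempfile.mkstemp(dir=CACHE_DIR)
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(transpiled)
        os.replace(tmp, os.path.join(CACHE_DIR, _key(source)))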
The runtime is aimed at cognitive improvements for domain-specific development. Mathematical algorithms become more readable when written with actual mathematical notation rather than verbose keywords. It can also help in game development, where certain functions can benefit from different naming (e.g., def → skill, def → special, def → equipment).
The gradual adoption path matters for production environments. Teams can introduce custom syntax incrementally without rewriting existing codebases since the transpiled output remains standard Python. The multi-layer caching system ensures that symbol conversion overhead doesn't impact execution performance.
The same goes for domain-specific languages for mathematics, finance, education, or any field where visual clarity improves comprehension. The system maintains full Python compatibility while enabling those cognitive improvements through customizable syntax.
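For example, a game-oriented configuration could map several domain words back onto the same underlying keyword. The alias names below are purely illustrative, not shipped mappings:

    import re

    # Several domain-specific aliases, each transpiling to standard Python "def".
    GAME_ALIASES = {"skill": "def", "special": "def", "equipment": "def"}

    # Whole-word matching so identifiers like "skilled" are left alone.
    _PATTERN = re.compile(r"\b(" + "|".join(GAME_ALIASES) + r")\b")

    def to_python(source: str) -> str:
        return _PATTERN.sub(lambda m: GAME_ALIASES[m.group(0)], source)

    print(to_python("skill attack(target):\n    return target.hp - 5\n"))
    # def attack(target):
    #     return target.hp - 5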
UncleEntity•5h ago
I don't really understand why you'd need to do anything different when using a parser instead of the regex method; there's no real reason to parse to an AST (with all the Python goodness involved in that) at all when the parser can just do the string replacement the same way whatever PhiRust does.
I have this PEG VM (based on the lpeg papers) that I've been poking at for a little while now, and while admittedly I haven't actually tested its speed, I'd be amazed if it couldn't do 3 MB/s... in fact, the main limiting factor seems to be getting bytes off the disk, and the parser runtime is just noise compared to that, with all the 'musttail' shenanigans going on.
And even that is overkill for simple keyword replacement, given all the work done over the years on making macro systems blazingly fast, which is not something I've looked into at all to see how they do their magic, except a brief peek at C's macro rules, which are, let's just say, complicated.
visarga•8h ago
I agree, the only way to use AI is to constrain it, to provide a safe space where it can bang against the walls and iterate towards the solution. I use documentation, plans, and tests as the constraint system.
CuriouslyC•9h ago
The velocity that taking yourself out of the loop with analytic guardrails buys you is just insane; I can't overstate it. The clear plan and guardrails are important, though, otherwise you end up with a pile of slop that doesn't work and is unmaintainable.