We built this course to move beyond "vibes-based" AI coding. It focuses on a structured methodology (Investigate -> Analyze -> Explain) specifically designed for enterprise mono-repos with millions of lines of code, where standard autocomplete often fails.
It covers:
Fundamentals: Context management, token limits, and handling hallucinations.
Methodology: Grounding prompts and chain-of-thought patterns (rough sketch after this list).
Practical Tools: Open-source tools I wrote, such as Chunk Hound (code research) and Argu Seek (wide research), plus a curated list of agent-friendly CLI tools (ripgrep, fzf, lazygit).
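To give a flavor of the methodology section, here's a minimal sketch of a grounded Investigate -> Analyze -> Explain pass. This is illustrative only, not code from the course; the prompt wording and names are just placeholders.

    # Rough sketch only: staged prompts that force the model to cite
    # concrete files before reasoning. Names and wording are illustrative.

    INVESTIGATE = (
        "Task: {task}\n"
        "List the files and symbols relevant to this task. "
        "Cite exact paths; answer 'unknown' instead of guessing."
    )

    ANALYZE = (
        "Using only the excerpts below, explain how these pieces interact.\n"
        "Excerpts:\n{excerpts}"
    )

    EXPLAIN = (
        "Summarize the proposed change step by step, "
        "referencing only the files cited above."
    )

    def staged_prompts(task: str, excerpts: str) -> list[str]:
        """Return the three prompts for one Investigate -> Analyze -> Explain pass."""
        return [
            INVESTIGATE.format(task=task),
            ANALYZE.format(excerpts=excerpts),
            EXPLAIN,
        ]

Each stage's output feeds the next, which is what keeps the model anchored to actual source instead of drifting.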
The content is open source (MIT) and available as docs, podcasts, or slides. Would love to hear how others are handling context drift in large codebases.