
Show HN: CodeAnt AI – AI Code Reviewer that understands code and dependencies

https://www.youtube.com/watch?v=uprOvRUUudQ
3•Amartya_jha•7mo ago
Over the last year, we’ve been building CodeAnt AI, working closely with engineering teams struggling with code review quality and speed.

Manual code reviews are slow and repetitive. Reviews today mostly look at what changed — not what the change actually impacts. With more AI-written code, it's getting worse: bigger PRs, faster cycles, less team context.

We wanted to rethink how code reviews are done: → Build structured knowledge of the codebase → Understand infra and dependency changes → Analyze blast radius automatically at PR time

What CodeAnt AI Does (Technical Overview)

Repository Indexing and Graph Building:

When a repo is added, we index the entire codebase and build Abstract Syntax Trees (ASTs).

We map upstream and downstream dependencies across files, functions, types, and modules (a simplified sketch of this idea follows the list below).

We run custom lightweight language servers for multiple languages to support:

go_to_definition to find symbol declarations

find_all_references to locate usage points

fetch_signatures and fetch_types for richer semantic context
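
To make the indexing step concrete, here is a heavily simplified, Python-only sketch using the standard library's ast module. Our real indexer runs per-language language servers, so treat this purely as an illustration of mapping symbol definitions to reference sites:

    # Simplified sketch (not the production indexer): walk a Python repo, record
    # where each function is defined and where it is called. This approximates the
    # data that go_to_definition / find_all_references return.
    import ast
    from collections import defaultdict
    from pathlib import Path

    def index_repo(root: str):
        definitions = {}                # symbol name -> (file, line) of its definition
        references = defaultdict(list)  # symbol name -> [(file, line), ...] of call sites
        for path in Path(root).rglob("*.py"):
            tree = ast.parse(path.read_text(), filename=str(path))
            for node in ast.walk(tree):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    definitions[node.name] = (str(path), node.lineno)
                elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                    references[node.func.id].append((str(path), node.lineno))
        return definitions, references

    definitions, references = index_repo(".")
    for symbol, sites in references.items():
        if symbol in definitions:
            print(f"{symbol}: defined at {definitions[symbol]}, referenced at {len(sites)} site(s)")

The real graph also links types, modules, and cross-file imports, but the shape is the same: every symbol knows where it lives and who depends on it.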

Pull Request Analysis:

When a PR is created:

We detect the diff.

We pull relevant upstream/downstream context for any changed symbols.

We gather connected function definitions, usage sites, interfaces, and infra files touched.

The LLM invokes the language servers (much like a developer navigating the codebase manually) to reason over this structured context, not just the raw diff.
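
Concretely, the language-server operations are exposed to the model as tools it can call on demand. The snippet below is only a sketch of that interface, reusing the toy index from the earlier sketch; the actual agent loop, tool schemas, and symbol names (parse_config is made up) are internal:

    # Sketch of the tool interface: the model asks targeted questions about the
    # changed symbols and we answer from the prebuilt index instead of dumping
    # the whole repository into the prompt.
    TOOLS = [
        {
            "name": "go_to_definition",
            "description": "Return the file and line where a symbol is defined.",
            "parameters": {
                "type": "object",
                "properties": {"symbol": {"type": "string"}},
                "required": ["symbol"],
            },
        },
        {
            "name": "find_all_references",
            "description": "Return every location where a symbol is used.",
            "parameters": {
                "type": "object",
                "properties": {"symbol": {"type": "string"}},
                "required": ["symbol"],
            },
        },
    ]

    def dispatch(tool_name: str, args: dict, definitions: dict, references: dict):
        if tool_name == "go_to_definition":
            return definitions.get(args["symbol"])
        if tool_name == "find_all_references":
            return references.get(args["symbol"], [])
        raise ValueError(f"unknown tool: {tool_name}")

    # Example: the diff touches parse_config, so the model asks where it is used.
    # dispatch("find_all_references", {"symbol": "parse_config"}, definitions, references)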

Code Quality Analysis:

Along with AI reasoning, we layer traditional static checks inside PRs:

Detecting duplicate code patterns

Finding dead, unused code blocks

Flagging overly complex functions (one such check is sketched below)

Goal: Make linting + AI suggestions seamless, without needing separate tools.
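
As an example of the kind of static check involved (a simplified sketch, not the multi-language production rule set), here is a rough cyclomatic-complexity flagger for Python:

    # Sketch: flag functions whose approximate cyclomatic complexity exceeds a
    # threshold, using only the standard library. The count below (1 + branching
    # constructs) is a rough proxy, not an exact metric.
    import ast

    BRANCHES = (ast.If, ast.For, ast.AsyncFor, ast.While, ast.ExceptHandler, ast.BoolOp)

    def flag_complex_functions(source: str, threshold: int = 10):
        findings = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                complexity = 1 + sum(isinstance(child, BRANCHES) for child in ast.walk(node))
                if complexity > threshold:
                    findings.append((node.name, node.lineno, complexity))
        return findings

    # Example usage against a changed file in the PR (file name is illustrative):
    # for name, line, score in flag_complex_functions(open("service.py").read()):
    #     print(f"{name} (line {line}) has approximate complexity {score}")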

Security and Infrastructure Context:

We maintain an internal curated database of application security issues, mapped to OWASP and CWE.

We run Infrastructure-as-Code (IaC) security checks across:

Terraform, Kubernetes, Docker, CloudFormation, Ansible
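
For a flavor of what these IaC checks catch, here is a deliberately simple sketch; the real checks parse the configuration rather than pattern-matching text, and cover far more rules:

    # Sketch of two common IaC findings: a security group rule open to the world
    # in Terraform, and a Dockerfile that never drops root. Illustrative only.
    import re
    from pathlib import Path

    OPEN_TO_WORLD = re.compile(r'cidr_blocks\s*=\s*\[[^\]]*"0\.0\.0\.0/0"', re.S)

    def scan_iac(root: str):
        findings = []
        for tf in Path(root).rglob("*.tf"):
            if OPEN_TO_WORLD.search(tf.read_text()):
                findings.append((str(tf), "rule allows traffic from 0.0.0.0/0"))
        for dockerfile in Path(root).rglob("Dockerfile"):
            if "USER " not in dockerfile.read_text():
                findings.append((str(dockerfile), "no USER directive; container runs as root"))
        return findings

    for path, message in scan_iac("."):
        print(f"{path}: {message}")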

You can optionally connect cloud accounts (AWS, GCP, Azure):

We scan your live cloud infra for misconfigurations

We pull cloud resource context into PRs (e.g., when a Terraform PR changes a live VPC rule, we show the potential blast radius).
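
For example, pulling the live rules of a security group with boto3 looks roughly like the sketch below. It assumes AWS credentials are configured and that the changed security group ID has already been extracted from the Terraform diff; that extraction isn't shown, and the group ID used is a placeholder:

    # Sketch: fetch the live rules of a security group referenced in the PR so the
    # review comment can show what the change actually opens or closes in the VPC.
    import boto3

    def live_security_group_rules(group_id: str, region: str = "us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        resp = ec2.describe_security_groups(GroupIds=[group_id])
        rules = []
        for sg in resp["SecurityGroups"]:
            for perm in sg.get("IpPermissions", []):
                for ip_range in perm.get("IpRanges", []):
                    rules.append({
                        "protocol": perm.get("IpProtocol"),
                        "from_port": perm.get("FromPort"),
                        "to_port": perm.get("ToPort"),
                        "cidr": ip_range.get("CidrIp"),
                    })
        return rules

    # The group ID below is a placeholder, not a real resource.
    # print(live_security_group_rules("sg-0123456789abcdef0"))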

We monitor End-of-Life (EOL) libraries and third-party package vulnerabilities by scanning the National Vulnerability Database (NVD) every 20 minutes and flagging at PR time.
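
The polling side is straightforward; a minimal sketch of that fetch against the public NVD 2.0 REST API is below. Matching CVEs against each repo's dependency manifest is the harder part and isn't shown, and the exact date-format and rate-limit requirements should be checked against the NVD docs:

    # Sketch: pull CVEs modified in the last 20 minutes from the NVD 2.0 API.
    # An API key raises the rate limit but is optional for low volumes.
    from datetime import datetime, timedelta, timezone
    import requests

    NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def recently_modified_cves(minutes: int = 20):
        end = datetime.now(timezone.utc)
        start = end - timedelta(minutes=minutes)
        params = {
            "lastModStartDate": start.strftime("%Y-%m-%dT%H:%M:%S") + ".000+00:00",
            "lastModEndDate": end.strftime("%Y-%m-%dT%H:%M:%S") + ".000+00:00",
            "resultsPerPage": 200,
        }
        resp = requests.get(NVD_URL, params=params, timeout=30)
        resp.raise_for_status()
        for item in resp.json().get("vulnerabilities", []):
            cve = item["cve"]
            summary = next((d["value"] for d in cve.get("descriptions", [])
                            if d.get("lang") == "en"), "")
            yield cve["id"], summary

    for cve_id, summary in recently_modified_cves():
        print(cve_id, "-", summary[:100])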

In short: We try to automate how an experienced developer would actually review a change: → Understand the code structure → Understand where it’s used → Understand how infra/cloud gets affected → Catch quality, security, and complexity issues before merge — without needing extra dashboards or tools.

Teams using CodeAnt AI have reported 50%+ faster code reviews while surfacing deeper, more actionable issues earlier.

Would love feedback from the HN community — technical and critical comments are both welcome.

Thanks for checking it out!