Edit: for those who don't frequent HN or reddit every day: https://old.reddit.com/r/google_antigravity/comments/1p82or6...
I think if all you care about is the outcome then sure, you might enjoy AI coding more
If you enjoy the problem solving process (and care about quality) then doing it by hand is way, way more enjoyable
(But would further gamification make it more enjoyable? No, IMO. So maybe all we learn here is that people don't like change in any direction.)
Argue about the value of video games all you like, I would still place them above slot machines any day
I do care about the outcome, which is why the thought of using AI to generate it makes me want to gouge my eyes out
In my view using AI means not caring about the outcome because AI produces garbage. In order to be happy with garbage you have to not care
> https://indianexpress.com/article/technology/tech-news-techn...
dang, please replace the link.
Having a private office instead of an open floor plan for instance
Or not working in the JIRA two week sprint format
Or not having to work with offshore teams that push the burden of quality control onto you
My point is I bet that the Google CEO (and basically every other software CEO) doesn't actually care if software development is enjoyable or not
The enjoyment factor is real. The iteration speed with Claude Code is insane. But the model's suggestions still need guardrails.
For security-focused apps especially, you can't just accept what the LLM generates. We spent weeks ensuring passwords never touch the LLM context - that's not something a vibe-coded solution catches by default.
The productivity gains are real, but so is the need for human oversight on the security-critical parts.
The core approach: browser-use's Agent class accepts a `credentials` parameter that gets passed to custom action functions but never included in the LLM prompt. So when the agent needs to fill a password field, it calls a custom `enter_password()` function that receives the credential via this secure channel rather than having it in the visible task context.
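Roughly what that pattern looks like (a minimal sketch with made-up names like `CREDENTIALS` and the standalone `enter_password` function, not the actual fork code):

```python
# Sketch of the out-of-band credential channel described above.
# The secret lives in a local dict; the model only ever sees the action name.
from playwright.sync_api import Page

# Held locally; never serialized into the prompt sent to the model.
CREDENTIALS = {"new_password": "correct-horse-battery-staple"}

def enter_password(page: Page, selector: str) -> str:
    """Custom action the agent can call by name. The model decides *when*
    to call it, but the value comes from the local dict, not the prompt."""
    page.fill(selector, CREDENTIALS["new_password"])
    # Return a result string the model can see -- without the secret.
    return f"filled password field {selector}"
```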
We forked browser-use to add this (github.com/browser-use/browser-use doesn't have it upstream yet). The modification is in `agent/service.py` - adding `credentials` to the Agent constructor and threading it through to the tool registry.
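The shape of that change, heavily simplified (browser-use internals are more involved than this; treat these names as illustrative):

```python
# Simplified reconstruction of the described modification: store credentials
# on the agent, hand them to the tool registry at execution time, and keep
# them out of prompt construction entirely.
class Agent:
    def __init__(self, task, llm, registry, credentials=None):
        self.task = task
        self.llm = llm
        self.registry = registry
        self._credentials = credentials or {}  # never rendered into prompts

    def _build_prompt(self) -> str:
        # Only the task text and page state reach the model.
        return self.task

    def _execute_action(self, name: str, **kwargs):
        # The registry injects credentials into actions that declare them.
        return self.registry.run(name, credentials=self._credentials, **kwargs)
```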
Key parts:

1. Passwords passed via `sensitive_data` dict

2. Custom action functions receive credentials as parameters

3. LLM only sees "call enter_password()" not the actual value

4. Redaction at logging layer as defense-in-depth
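For item 4, the logging-layer redaction can be as simple as a `logging.Filter` that scrubs known secret values before a record is emitted (a sketch, assuming the secrets are known up front):

```python
# Defense-in-depth: replace any known secret value with a marker in every
# log record produced by this logger.
import logging

class RedactSecrets(logging.Filter):
    def __init__(self, secrets):
        super().__init__()
        self.secrets = [s for s in secrets if s]

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        for secret in self.secrets:
            msg = msg.replace(secret, "[REDACTED]")
        record.msg, record.args = msg, None
        return True

logger = logging.getLogger("agent")
logger.addFilter(RedactSecrets(["example-secret-value"]))
logger.warning("submitting form with example-secret-value")  # logs [REDACTED]
```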
Would be happy to clean this up into a standalone pattern/PR. The trickiest part is that it requires changes to the core Agent class, not just custom actions on top.
Two clarifications:
1. We don't ask for your current passwords. The app imports your CSV from your existing password manager (1Password, Bitwarden, etc.), which you already trust with your credentials. We automate the change process - you provide the new passwords you want.
2. Zero passwords leave your machine. The app runs locally. Browser automation happens in a local Playwright instance. The AI (GPT-5-mini via OpenRouter) only sees page structure, never credential values. Passwords are passed to forms via a separate injection mechanism that's invisible to the LLM context.
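A sketch of that separation (the URL, selector, and names here are illustrative stand-ins, not the app's real code): the text sent to the model carries only structural information about the page, while Playwright fills the real value locally.

```python
# Two channels: prompt_for_model() produces the only text that would go to the
# LLM; inject_password() touches the real value and stays entirely local.
from playwright.sync_api import sync_playwright

NEW_PASSWORD = "local-only-value"   # never leaves this machine

def prompt_for_model(page) -> str:
    # Only field names/types are extracted -- no values, no credentials.
    fields = page.eval_on_selector_all(
        "input", "els => els.map(e => ({name: e.name, type: e.type}))"
    )
    return f"Password-change form fields: {fields}. Which field takes the new password?"

def inject_password(page, selector: str):
    # Separate channel: the real value is filled by Playwright and never
    # appears in any prompt, log line, or request to the LLM.
    page.fill(selector, NEW_PASSWORD)

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/change-password")
    print(prompt_for_model(page))                          # safe to send out
    inject_password(page, "input[name='new_password']")    # never sent out
    browser.close()
```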
The "vibe coding" comment was about development speed with AI assistants, not about skipping security review. We spent weeks specifically on credential isolation architecture - making sure passwords can't leak to logs, LLM prompts, or network requests. That's the opposite of careless.
Code's not open source yet, but we're working toward that for exactly the reasons you describe - trust requires verification.