There are two common ways to let agents interact with websites today. Traditional scraping is complex, fragile, and breaks the moment a site changes its layout or auth flow. CDP-based browser automation (like browser-use) is more robust, but burns through tokens fast (a single DOM snapshot can be 50K+ tokens) and needs a SOTA model to navigate each page from scratch every time.
bb-browser takes a different approach. Instead of automating the browser generically, it wraps websites into CLI commands via "site adapters." Each adapter is a small JS function that calls a website's own internal APIs from inside a managed Chrome where you're already logged in.
bb-browser site twitter/feed
bb-browser site reddit/hot
bb-browser site github/notifications
Each returns structured JSON, a few hundred tokens at most.
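To make the adapter idea concrete, here's a minimal sketch of what one might look like. This is hypothetical: the function name, the adapter interface, and the response shape are all illustrative assumptions, not bb-browser's actual API. The core idea is just "call the site's own endpoint from the logged-in browser, then shrink the response to the few fields an agent needs":

```javascript
// Hypothetical adapter core: transform a site's internal API response
// into compact structured JSON. The response shape below is a made-up
// stand-in for a timeline endpoint, not Twitter's real schema.
function toFeedItems(raw) {
  // Keep only the fields an agent needs; drop everything else.
  return raw.tweets.map((t) => ({
    author: t.user.screen_name,
    text: t.full_text,
    likes: t.favorite_count,
  }));
}

// Sample payload shaped like an internal timeline API response.
const sample = {
  tweets: [
    { user: { screen_name: "alice" }, full_text: "hello", favorite_count: 3 },
    { user: { screen_name: "bob" }, full_text: "world", favorite_count: 1 },
  ],
};

console.log(JSON.stringify(toFeedItems(sample)));
```

Because the fetch happens inside the browser context, the adapter itself stays this small; all the auth and anti-bot machinery is already handled by the browser session.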
The trick is that your browser is already the best place for these requests to happen. It has all the cookies, sessions, auth state. Login flows, CSRF tokens, CAPTCHAs, anti-bot detection... the stuff that makes traditional scraping so painful, none of it exists when you fetch from inside the browser.
What I find most interesting about this: operating websites through raw CDP is a hard problem that needs a SOTA model. But running a CLI command is trivial; any model can do it. So the SOTA model only needs to run once, to write the adapter. After that, even an open-source model handles it fine.
And not everyone needs to write adapters themselves. There's a community repo (https://github.com/epiral/bb-sites) where people contribute them freely. Every new adapter expands what agents can do, and the combinations open up a lot of possibilities. An agent that can read your Twitter feed, check stock prices, and search Reddit can start doing things none of those can do alone. Someone running a local open-source model gets the same access as someone on Claude or GPT. I think agents should work for everyone, not just people who can afford SOTA tokens.
If you do want to add a new website, there's a guide command built in. Tell your agent "turn this website into a CLI" and it reverse-engineers the site's APIs and writes the adapter.
v0.8.x is pure CDP, managed Chrome instance. npm install -g bb-browser and it works.