Roo Code 3.42.0 – ChatGPT usage tracking – Grey Screen Fix and more

https://docs.roocode.com/update-notes/v3.42.0
1•hrudolph•4h ago

Comments

hrudolph•4h ago
QOL Improvements

- Adds a usage limits dashboard in the OpenAI Codex provider so you can track your ChatGPT subscription usage and avoid unexpected slowdowns or blocks.
- Standardizes the model picker UI across providers, reducing friction when switching providers or comparing models.
- Warns you when too many MCP tools are enabled, helping you avoid bloated prompts and unexpected tool behavior (see the sketch after this list).
- Makes exports easier to find by defaulting export destinations to your Downloads folder.
- Clarifies how linked SKILL.md files should be handled in prompts.
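The MCP tool warning boils down to a count check over enabled tools, since every enabled tool's schema ends up in the prompt. A minimal TypeScript sketch of that idea, where `McpTool`, `mcpToolCountWarning`, and the threshold value are all assumed for illustration rather than taken from Roo Code:

```typescript
// Illustrative only: a threshold check over enabled MCP tools.
// McpTool and TOOL_WARNING_THRESHOLD are assumptions, not Roo Code internals.

interface McpTool {
  name: string;
  enabled: boolean;
}

const TOOL_WARNING_THRESHOLD = 40; // assumed value for the sketch

function mcpToolCountWarning(tools: McpTool[]): string | undefined {
  const enabled = tools.filter((t) => t.enabled).length;
  if (enabled <= TOOL_WARNING_THRESHOLD) {
    return undefined;
  }
  return `${enabled} MCP tools are enabled; large tool lists bloat the prompt and can lead to unexpected tool behavior.`;
}

// Example usage
const warning = mcpToolCountWarning(
  Array.from({ length: 50 }, (_, i) => ({ name: `tool_${i}`, enabled: true })),
);
if (warning) {
  console.warn(warning);
}
```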

Bug Fixes

- Fixes an issue where switching workspaces could temporarily show an empty Mode selector, making it harder to confirm which mode you’re in.
- Fixes a race condition where the context condensing prompt input could become inconsistent, improving reliability when condensing runs.
- Fixes an issue where OpenAI native and Codex handlers could emit duplicated text/reasoning, reducing repeated output in streaming responses.
- Fixes an issue where resuming a task via the IPC/bridge layer could abort unexpectedly, improving stability for resumed sessions.
- Fixes an issue where file restrictions were not enforced consistently across all editing tools, improving safety when using restricted workflows.
- Fixes an issue where a “custom condensing model” option could appear even when it was no longer supported, simplifying the condense configuration UI.
- Fixes gray-screen performance issues by avoiding redundant task history payloads during webview state updates (a sketch of the idea follows this list).
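The gray-screen item describes a common webview optimization: don't re-send a large payload on every state update if it hasn't changed. A rough TypeScript sketch under that assumption, using VS Code's `webview.postMessage` but with the types, function name, and message shape invented for illustration:

```typescript
// Illustrative only: skip re-posting task history to the webview when the
// payload hasn't changed. Types and names are assumptions, not Roo Code's code.

import * as vscode from "vscode";

interface TaskHistoryItem {
  id: string;
  title: string;
  updatedAt: number;
}

let lastHistoryJson: string | undefined;

function postTaskHistoryIfChanged(
  webview: vscode.Webview,
  taskHistory: TaskHistoryItem[],
): void {
  const historyJson = JSON.stringify(taskHistory);
  if (historyJson === lastHistoryJson) {
    // Identical payload: skip the postMessage so frequent state updates
    // don't force the webview to re-process the entire history.
    return;
  }
  lastHistoryJson = historyJson;
  void webview.postMessage({ type: "taskHistory", taskHistory });
}
```

Comparing a serialized snapshot is the simplest version of this; the actual fix may diff or cache the state differently.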

Misc Improvements

- Improves prompt formatting consistency by standardizing user content tags to <user_message> (see the sketch after this list).
- Removes legacy XML tool-calling support so new tasks use the native tool format only, reducing confusion and preventing mismatched tool formatting across providers.
- Refactors internal prompt plumbing by migrating the context condensing prompt into customSupportPrompts.
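The <user_message> standardization amounts to delimiting user content with one consistent tag before it reaches any provider. A minimal sketch, assuming a hypothetical helper name; only the tag itself comes from the release notes:

```typescript
// Illustrative only: wrapping user content in the standardized tag.
// The helper name is an assumption; only <user_message> comes from the notes.

function wrapUserContent(text: string): string {
  return `<user_message>\n${text}\n</user_message>`;
}

console.log(wrapUserContent("Rename the config flag and update the docs."));
// <user_message>
// Rename the config flag and update the docs.
// </user_message>
```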

Provider Updates

- Removes the deprecated Claude Code provider from the provider list.
- Enables prompt caching for the Cerebras zai-glm-4.7 model to reduce latency and repeat costs on repeated prompts.
- Adds the Kimi K2 thinking model to the Vertex AI provider.