We're launching CoChat, which extends OpenWebUI with group chat, model switching, and side-by-side comparison.
What makes it different: CoChat is designed for teams working with AI.

- Group chat with AI facilitation. Multiple users collaborate in the same thread. The AI detects group discussions, tracks participants, and facilitates rather than dictates.
- Switch and compare models. Run GPT, Claude, Mistral, Llama, and others side by side, or switch models mid-conversation.
- Intelligent web search. Context-aware search that activates only when a question needs real-time information.
- Artifacts and tool calls. Generate documents and code inline; MCP tool integration is coming soon.
- No subscription fee. You pay only for token usage, at the providers' list prices.
Non-obvious things we learned building this: LLMs behave in some surprising ways in multi-model, multi-user settings. I'll share two (happy to discuss more in the comments).

First, models don't understand they're not the only AI in the room. When you tag a new model into a conversation and ask "what do you think of Claude's response above?", the model assumes it wrote that previous response. It will defend it, build on it, or awkwardly try to reconcile the question with its false memory of writing it. We solved this by injecting model attribution into the conversation context, explicitly marking which model generated each response. Once models understand they're looking at another model's output, they engage critically rather than defensively, and the quality of cross-model analysis improves dramatically.
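To make that concrete, here's a minimal sketch of the attribution idea, not CoChat's actual code. It assumes OpenAI-style message dicts; the per-message `model` metadata field and the bracketed label format are illustrative choices, not our real schema.

```python
def annotate_model_attribution(messages: list[dict], active_model: str) -> list[dict]:
    """Prefix assistant turns written by *other* models with an explicit label,
    so the active model doesn't mistake them for its own earlier output."""
    annotated = []
    for msg in messages:
        author = msg.get("model")  # hypothetical per-message metadata
        if msg.get("role") == "assistant" and author and author != active_model:
            msg = {
                **msg,
                "content": f"[Response written by {author}]\n{msg['content']}",
            }
        annotated.append(msg)
    return annotated


# Example: GPT-4o is asked to critique a response that Claude wrote earlier.
history = [
    {"role": "user", "content": "Summarize the design doc."},
    {"role": "assistant", "model": "claude-3.5-sonnet", "content": "Here is a summary..."},
    {"role": "user", "content": "What do you think of Claude's response above?"},
]
payload = annotate_model_attribution(history, active_model="gpt-4o")
```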
Second, LLMs have a compulsive need to "solve" group conversations. In a multi-user thread, the AI wants to answer every question and resolve every disagreement, even when the humans are working something out themselves. System prompts telling it to "facilitate, don't dictate" weren't enough. We had to restructure how we frame the AI's role in group context: it's a participant that speaks when addressed, not an omniscient moderator. We're still refining this balance and are curious how others have approached it. We also ran into interesting challenges around memory and tool execution in multi-user contexts (whose preferences apply? whose tools get executed?), but that's probably a separate post.
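For reference, here's a rough sketch of the "participant, not moderator" framing. The prompt wording, the ASSISTANT_NAME constant, and the addressing heuristic are all illustrative assumptions on my part, not CoChat's actual logic:

```python
ASSISTANT_NAME = "CoChat"  # hypothetical display name for the assistant


def build_group_system_prompt(participants: list[str]) -> str:
    """Frame the assistant as one named participant among the humans,
    not as a moderator obliged to resolve every exchange."""
    roster = ", ".join(participants)
    return (
        f"You are {ASSISTANT_NAME}, one participant in a group discussion with: {roster}.\n"
        "Speak only when you are directly addressed or asked a question.\n"
        "Do not summarize, arbitrate, or resolve disagreements between the human "
        "participants unless they explicitly ask you to.\n"
        "When the humans are working something out among themselves, stay silent."
    )


def should_respond(message: str) -> bool:
    """Crude gate: reply only when the assistant is named explicitly.
    Real addressing detection (pronouns, reply threading, follow-up turns)
    needs much more than this."""
    return ASSISTANT_NAME.lower() in message.lower()
```

The key shift is that the prompt names the assistant as one participant among the humans and makes silence the default; the gate only decides whether it has been asked to speak at all.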
Why this matters: Different models excel at different tasks. Current tools lock you into a single vendor. CoChat lets you choose the best model for each task while enabling real team collaboration.
We plan to contribute our changes back to the upstream OpenWebUI project, or maintain an active open-source fork.
Try it at: https://cochat.ai
Would love feedback from teams already using AI collaboratively, or anyone interested in model comparison workflows.