AMA about the OpenAI submission process (timeline, review effort, what they ask for), how I got user-authenticated content into the iframe widgets, or why I had to remove certain tools to stay on the fitness side of OpenAI's fitness/health line.
https://www.tredict.com/blog/tredict_chatgpt_app/
Connect with a free ChatGPT account in a couple of clicks, then ask ChatGPT to analyse your activities, rename past sessions, or create structured workouts. Planned workouts sync to Garmin, Coros, Wahoo, Suunto and a few others via Tredict. When you ask for it, an interactive Tredict view opens directly in the chat thread, showing the actual activity with charts, map and metrics, or the structured workout you just created.
Two things I find interesting about this:
The app uses MCP UI Apps, not just tools. Tredict's actual activity and plan views render inside the chat as interactive widgets. Most ChatGPT apps I've seen so far are tool-only; the widget pattern is still uncommon. Getting user-authenticated content into those widgets was the hardest part. The widget runs in a sandboxed iframe that has no access to the user's OAuth tokens, and there are basically no documented best practices for this yet.
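To make the problem concrete: one common way around it is to have the MCP server mint a short-lived, narrowly scoped token and embed it in the widget's URL, so the iframe never touches the OAuth token at all. This is a minimal sketch of that pattern, not Tredict's actual implementation; the names `mintWidgetToken`/`verifyWidgetToken` and the secret handling are my assumptions.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Assumption: a secret kept only on the MCP server / backend,
// never shipped to the iframe.
const SECRET = "server-side-secret";

// Mint a short-lived token bound to one user and one resource. The tool
// result can embed this in the widget's iframe URL; it expires quickly and
// grants access to nothing else. (Sketch only: assumes ids contain no ".".)
function mintWidgetToken(userId: string, resourceId: string, ttlSeconds = 300): string {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const payload = `${userId}.${resourceId}.${expires}`;
  const sig = createHmac("sha256", SECRET).update(payload).digest("hex");
  return `${payload}.${sig}`;
}

// The content endpoint behind the iframe verifies the token before serving
// user-scoped data (activity charts, map, metrics). Returns the claims on
// success, null on expiry or tampering.
function verifyWidgetToken(token: string): { userId: string; resourceId: string } | null {
  const parts = token.split(".");
  if (parts.length !== 4) return null;
  const [userId, resourceId, expires, sig] = parts;
  if (!/^\d+$/.test(expires) || Number(expires) < Math.floor(Date.now() / 1000)) return null;
  const expected = createHmac("sha256", SECRET)
    .update(`${userId}.${resourceId}.${expires}`)
    .digest("hex");
  const a = Buffer.from(sig, "hex");
  const b = Buffer.from(expected, "hex");
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return { userId, resourceId };
}
```

The design point is that the credential crossing the sandbox boundary is disposable: if it leaks from the iframe, it exposes one resource for a few minutes, not the user's account.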
ChatGPT is also frugal with its context window, so it tends to fetch the activity list and skip the detailed metrics unless you nudge it. A vague "tell me about my run" gets a shallow answer, while "fetch the details and give me a detailed assessment" gets the full analysis. For multi-week plan creation, Claude with the same MCP server still works noticeably better. With Claude.ai I can build full structured training plans spanning weeks or even months, with proper periodisation, mixed sport types and individualised intervals based on past activity data. ChatGPT struggles with that scope. The limit sits with the host, not the server. The interactive MCP UI Apps also work in Claude.ai, so the same activity and plan widgets render directly in the chat there too.
Server lives at https://www.tredict.com/api/mcp/v2 and works with any MCP-compatible host. Honestly it works best with Claude.ai, which makes it slightly absurd that my application to be listed in Anthropic's connector directory has been pending without feedback for a while. If any Anthropic folks see this: would genuinely appreciate a status update or even a rejection with reason.