These skills are built to work with OpenClaw and other AI agent platforms, including OpenCode and Claude. We have encoded specialized medical research logic directly into our Skills:

1. Scientific Integrity Constraints: hard rules implemented directly in the skill logic.
2. Study-Type Identification: we identify the study type first, then execute a different logic path for each type.
3. Medically Specialized Prompt Logic.
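The "identify the study type first, then branch" pattern above can be sketched as a simple dispatch table. This is a minimal illustration, not AIPOCH's actual code; all function names and keyword lists are hypothetical, and a real skill would use much stricter classification rules.

```python
# Hypothetical sketch of study-type identification followed by
# per-type logic paths. Keyword lists are illustrative only.
STUDY_TYPE_KEYWORDS = {
    "rct": ["randomized", "randomised", "placebo-controlled"],
    "cohort": ["cohort", "prospective", "retrospective follow-up"],
    "case_report": ["case report", "case series"],
}

def identify_study_type(abstract: str) -> str:
    """Naive keyword classifier; stands in for the real identification step."""
    text = abstract.lower()
    for study_type, keywords in STUDY_TYPE_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return study_type
    return "unknown"

def appraise_rct(abstract: str) -> str:
    # RCT-specific checks would go here (randomization, blinding, ITT).
    return "RCT path: check randomization, blinding, and intention-to-treat."

def appraise_cohort(abstract: str) -> str:
    # Cohort-specific checks (confounding control, loss to follow-up).
    return "Cohort path: check confounding control and follow-up loss."

def appraise_default(abstract: str) -> str:
    # Unknown types are flagged rather than guessed at.
    return "Default path: flag for manual review."

DISPATCH = {
    "rct": appraise_rct,
    "cohort": appraise_cohort,
}

def run(abstract: str) -> str:
    study_type = identify_study_type(abstract)
    return DISPATCH.get(study_type, appraise_default)(abstract)
```

The key design choice is that classification happens once, up front, and every downstream step inherits that decision instead of re-deciding it.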
A Skill is a structured capability package consisting of:

- skill.md: a "contract" containing YAML metadata (trigger logic) and specific operational steps.
- Python scripts: executable engines called directly via bash under the guidance of skill.md.

In the context of AIPOCH, our skills are structured capability packages designed for professional medical research tasks, with skill.md as the trigger contract and the Python scripts as the execution engine. Medical research constraints are embedded directly into the skill.md files, the references, and the Python scripts.
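To make the "contract" idea concrete, here is a hypothetical skill.md sketch. The field names, step wording, and script path are illustrative assumptions, not AIPOCH's actual schema:

```markdown
---
name: study-appraisal
description: Appraise a medical study; trigger when the user supplies an abstract or full text.
---

# Steps

1. Identify the study type (RCT, cohort, case report) before anything else.
2. Run the matching engine script via bash, e.g.
   `python scripts/appraise.py --type rct --input abstract.txt`.
3. Apply the scientific-integrity constraints in the bundled references before reporting results.
```

The YAML frontmatter carries the trigger logic; the numbered steps tell the agent exactly when and how to invoke the Python engines.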
The Most Frustrating Moment

One of our biggest early mistakes was using a cheaper LLM to "vibe code" the initial batch of scripts. On the surface, it worked: the scripts ran, and the logic seemed okay. The nightmare only surfaced during our audit, when we realized the executing agent was silently correcting the scripts' logic on the fly. Because the agent read the intent in skill.md, it would "patch" the sloppy edge cases and vague error branches in the Python code during execution. The result? We were burning massive amounts of extra tokens just to fix errors that shouldn't have existed. It never threw an error; it just showed up on the API bill. We eventually scrapped the lot and learned the hard way: quantity isn't a moat; high-quality scripts are.
The project is still in its early stages, and we're continuously refining both the skills and the underlying execution logic.
We'd really appreciate it if you gave it a try. All questions and feedback welcome!