I got tired of complex agent frameworks with their orchestrators and YAML configs, so I built something simpler.
AgentU builds workflows with two operators: >> chains steps sequentially, & runs them in parallel. That's it.
```
from agentu import Agent, serve
import asyncio

def search(topic: str) -> str:
    return f"Results for {topic}"

# Agent auto-detects available model, connects to authenticated MCP server
agent = Agent("researcher").with_tools([search]).with_mcp([
    {"url": "http://localhost:3000", "headers": {"Authorization": "Bearer token123"}}
])

# Memory
agent.remember("User wants technical depth", importance=0.9)

# Parallel then sequential: & runs parallel, >> chains
workflow = (
    agent("AI") & agent("ML") & agent("LLMs")
    >> agent(lambda prev: f"Compare: {prev}")
)

# Execute workflow
result = asyncio.run(workflow.run())

# REST API with auto-generated Swagger docs
serve(agent, port=8000)
```
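Under the hood this kind of composition is just Python operator overloading. To be clear, the snippet below is not agentu's actual code, only a minimal sketch of how & and >> can build a parallel/sequential workflow:
```
import asyncio

class Step:
    """One unit of work: wraps an async callable that takes the previous result."""
    def __init__(self, fn):
        self.fn = fn

    def __and__(self, other):        # a & b  -> run in parallel
        return Parallel([self, other])

    def __rshift__(self, other):     # a >> b -> run sequentially
        return Chain([self, other])

    async def run(self, prev=None):
        return await self.fn(prev)

class Parallel(Step):
    def __init__(self, steps):
        self.steps = steps

    def __and__(self, other):        # keep flattening: a & b & c
        return Parallel(self.steps + [other])

    async def run(self, prev=None):
        # Run every branch concurrently with the same input
        return list(await asyncio.gather(*(s.run(prev) for s in self.steps)))

class Chain(Step):
    def __init__(self, steps):
        self.steps = steps

    def __rshift__(self, other):
        return Chain(self.steps + [other])

    async def run(self, prev=None):
        # Feed each step's output into the next
        result = prev
        for s in self.steps:
            result = await s.run(result)
        return result

# Tiny demo: fan out two steps, then feed both results into a final step.
def step(fn):
    async def wrapped(prev):
        return fn(prev)
    return Step(wrapped)

wf = (step(lambda _: "A") & step(lambda _: "B")) >> step(lambda prev: f"merged: {prev}")
print(asyncio.run(wf.run()))   # merged: ['A', 'B']
```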
Features:
- Auto-detects Ollama models (also works with OpenAI, vLLM, LM Studio)
- Memory with importance weights, SQLite backend (see the sketch after this list)
- MCP integration with auth support
- One-line REST API with Swagger docs
- Python functions are tools, no decorators needed
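On the memory point: this is not agentu's actual schema, just a rough sketch of what importance-weighted memory on SQLite can look like, with the highest-weight entries recalled first:
```
import sqlite3

conn = sqlite3.connect("memory.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS memories ("
    "  id INTEGER PRIMARY KEY,"
    "  text TEXT NOT NULL,"
    "  importance REAL NOT NULL DEFAULT 0.5"
    ")"
)

def remember(text: str, importance: float = 0.5) -> None:
    # Store a memory with a weight in [0, 1]
    conn.execute("INSERT INTO memories (text, importance) VALUES (?, ?)", (text, importance))
    conn.commit()

def recall(limit: int = 5) -> list[str]:
    # Return the most important memories first
    rows = conn.execute(
        "SELECT text FROM memories ORDER BY importance DESC LIMIT ?", (limit,)
    ).fetchall()
    return [r[0] for r in rows]

remember("User wants technical depth", importance=0.9)
print(recall())
```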
I'm using it for automated code review, parallel data enrichment, and research synthesis.
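As a concrete example of the data-enrichment case, the workflow below reuses only the calls from the snippet above; the agent name and the two tool functions are placeholders I made up:
```
from agentu import Agent
import asyncio

# Placeholder tools -- swap in real lookups.
def company_info(name: str) -> str:
    return f"profile for {name}"

def recent_news(name: str) -> str:
    return f"news for {name}"

enricher = Agent("enricher").with_tools([company_info, recent_news])

# Fan out one task per record, then merge the partial results in a final step.
workflow = (
    enricher("Enrich ACME Corp") & enricher("Enrich Globex")
    >> enricher(lambda prev: f"Merge into one report: {prev}")
)
print(asyncio.run(workflow.run()))
```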
pip install agentu
GitHub: https://github.com/hemanth/agentu
Open to feedback.