HN Discussion: https://news.ycombinator.com/item?id=45399204 https://news.ycombinator.com/item?id=45386248
Deno provides a great sandbox environment for TypeScript code execution because of its permissions system, which makes it easy to spin up code that only has access to fetch and network calls.
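For illustration, a minimal sketch of what that could look like, assuming the host itself runs on Deno (the flags are real Deno flags; the wiring around them is invented): each generated script is spawned as its own Deno process with nothing whitelisted but the network.

```typescript
// Minimal sketch (assumed host-side wiring): run untrusted, LLM-generated
// TypeScript in a subprocess that is only granted network access.
const generatedCode = `
  const res = await fetch("https://api.github.com/repos/denoland/deno");
  console.log("stars:", (await res.json()).stargazers_count);
`;

const child = new Deno.Command("deno", {
  args: ["run", "--allow-net", "--no-prompt", "-"], // "-" reads the program from stdin
  stdin: "piped",
  stdout: "piped",
  stderr: "piped",
}).spawn();

const writer = child.stdin.getWriter();
await writer.write(new TextEncoder().encode(generatedCode));
await writer.close();

const { success, stdout, stderr } = await child.output();
console.log(new TextDecoder().decode(success ? stdout : stderr));
```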
Stick an MCP proxy on top of that and you've got "CodeMode" (code intermixed with MCP tool calls) for more advanced workflow orchestration.
https://github.com/jx-codes/codemode-mcp
There are a lot of things that can be improved here, like a virtual file system for the agent to actually build up its solution instead of being forced to one-shot it, but the bones are there.
jmcodes•4mo ago
If you run `deno check` before executing the code you'd get the type-safety loop (working on this now)
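A rough sketch of that loop, assuming invented file names and retry wiring: write the generated code to a file, run `deno check`, and feed any type errors back to the model before executing anything.

```typescript
// Sketch of a type-safety loop: type-check generated code, return diagnostics.
async function typeCheck(code: string): Promise<string | null> {
  await Deno.writeTextFile("generated.ts", code);
  const { success, stderr } = await new Deno.Command("deno", {
    args: ["check", "generated.ts"],
  }).output();
  return success ? null : new TextDecoder().decode(stderr);
}

const errors = await typeCheck(`const n: number = "not a number";`);
if (errors) {
  // Hand these diagnostics back to the LLM and ask for a corrected script.
  console.log("type errors to return to the model:\n" + errors);
}
```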
Later I want to see what'd happen if you give the LLM a repo of sorts to store useful snippets and functions with comments for later use. So the LLM itself would save workflows, be able to import them into the Deno environment and chain those together.
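A hypothetical example of what one of those saved snippets might look like (file name, function, and comments are invented for illustration):

```typescript
// snippets/reddit.ts -- a workflow the LLM saved (and documented) on an earlier run
/** Fetch the titles of the current top posts in a subreddit. */
export async function topPosts(sub: string, limit = 10): Promise<string[]> {
  const res = await fetch(`https://www.reddit.com/r/${sub}/top.json?limit=${limit}`);
  const body = await res.json();
  return body.data.children.map((c: { data: { title: string } }) => c.data.title);
}
```

A later script could then just `import { topPosts } from "./snippets/reddit.ts"` and chain it with freshly generated code.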
It definitely needs a prompt that tells it to use the MCP server but I can see it being pretty powerful.
I only did simple tests like get Reddit posts, their comments, find the weather on those days, stick them in duckdb, and run some social media metric queries.
I could see that same test being: "find me leads, filter by keywords, run against some parquet file stored somewhere using duckdb, craft an email for my boss."
I'm kind of ranting but I think this is a pretty exciting approach.
Edit: A GraphQL-style codegen layer, but for all your APIs, seems like a pretty obvious middle layer for this. Maybe next weekend.
danielser•4mo ago
> Later I want to see what'd happen if you give the LLM a repo of sorts to store useful snippets and functions with comments for later use. So the LLM itself would save workflows, be able to import them into the Deno environment and chain those together.
OMG, this is the first thing you should do. We have something similar now and it's freaking amazing. Just yesterday we were discussing how I can't remember it going off the rails since implementing Automem last week.
The best thing it does: fully recap all your daily accomplishments across all platforms (Claude Code, Claude Desktop, ChatGPT, Cursor).
https://i.postimg.cc/Z0tYGKvf/Screenshot-2025-09-28-at-3-15-... https://i.postimg.cc/SQX6bTzV/Screenshot-2025-09-28-at-3-16-...
It's called Automem, built by a friend of mine (Jack Arturo). Currently closed-source, though I'm sure you could reverse engineer it enough.
- It's a hosted stack of FalkorDB + Qdrant
- Has endpoints for creating/retrieving memories
- Embeds content using ChatGPT models
- Uses graph nodes for relating memories together
- Has a dream/sleep phase which degrades long-term memory relevance, finds and tracks patterns, and more
- Has an MCP server which connects any AI directly to memory
- Automated hooks which record memory queues on commit, deploy, and learning moments
- Automated storing of all queued memories on chat end
- A lot more magic under the hood too
So in reality you get near-biological memory, usable by any MCP agent. To be fair, Jack has about a two-month head start on the rest of us with this idea haha.
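Since Automem is closed-source, here is a purely hypothetical sketch of the "store a memory / recall by similarity" surface described above; the base URL, endpoints, and payload shapes are all invented:

```typescript
// Hypothetical memory client -- none of these names come from Automem itself.
const AUTOMEM_URL = "http://localhost:8080"; // assumed self-hosted deployment

async function storeMemory(text: string, tags: string[] = []): Promise<void> {
  await fetch(`${AUTOMEM_URL}/memories`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ text, tags, timestamp: new Date().toISOString() }),
  });
}

async function recallMemories(query: string, limit = 5): Promise<string[]> {
  // Server side this would embed the query, search Qdrant, then walk the
  // FalkorDB graph for related memories.
  const res = await fetch(
    `${AUTOMEM_URL}/memories/search?q=${encodeURIComponent(query)}&limit=${limit}`,
  );
  return res.json();
}

// e.g. an end-of-chat hook could flush the session's queued learnings:
await storeMemory("Locked the codemode sandbox down to --allow-net only", ["deno", "codemode"]);
console.log(await recallMemories("what did we decide about sandbox permissions?"));
```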
--
The setup we're building will be always running, so it also has a scheduling runtime in Node that uses MD files to create automatable workflows; some use agents, some just run bash. They can call MCPs and tools, run commands, log output, use Automem, etc., all in human-readable text.
https://i.postimg.cc/Y246Bnmx/Screenshot-2025-09-28-at-3-11-... https://i.postimg.cc/ThM2zY5Z/Screenshot-2025-09-28-at-3-17-... https://i.postimg.cc/vT6H26T7/Screenshot-2025-09-28-at-3-17-...
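As a rough sketch of how such an MD-driven scheduler could be wired up (the frontmatter shape and the gray-matter / node-cron libraries are assumptions, not necessarily the actual stack):

```typescript
// Sketch: load Markdown workflows with YAML frontmatter and schedule them.
import { readFileSync, readdirSync } from "node:fs";
import { execSync } from "node:child_process";
import matter from "gray-matter";
import cron from "node-cron";

// Each workflow is a .md file with frontmatter, e.g.:
// ---
// schedule: "0 9 * * *"   # cron expression
// run: bash               # or "agent"
// ---
// ...and the body below the frontmatter is the bash script or the agent prompt.
for (const file of readdirSync("./workflows").filter((f) => f.endsWith(".md"))) {
  const { data, content } = matter(readFileSync(`./workflows/${file}`, "utf8"));
  cron.schedule(data.schedule, () => {
    if (data.run === "bash") {
      console.log(execSync(content, { shell: "/bin/bash" }).toString());
    } else {
      // Hand the Markdown body to an agent runtime as its prompt (not shown).
      console.log(`[agent] would run the prompt defined in ${file}`);
    }
  });
}
```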
PS: Keep up the great work on your codemode service. I got some great ideas from yours to incorporate into ours that should resolve the one or two issues we had outstanding. I'll share at https://github.com/danieliser/code-mode if it gets anywhere.
jmcodes•4mo ago
Sounds very cool.
I actually didn't end up implementing the memory. Instead I went down the 'get rid of the MCP' route. https://github.com/jx-codes/mcp-rpc
Basically, instead of MCP servers you write TypeScript files that are parsed to generate a typed client. Those files are executed in one Deno sandbox, the LLM's code gets that typed client, and its scripts run in a sandbox with only network access allowed.
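Roughly, the pattern looks like the sketch below; the names are illustrative, not the actual mcp-rpc API:

```typescript
// rpc/weather.ts -- a plain TypeScript "RPC file" executed in the host sandbox
export async function forecast(city: string, date: string): Promise<{ highC: number }> {
  // api.example.com is a placeholder endpoint
  const res = await fetch(
    `https://api.example.com/forecast?city=${encodeURIComponent(city)}&date=${date}`,
  );
  const body = await res.json();
  return { highC: body.highC as number };
}

// What the LLM's script sees: a generated, fully typed client whose calls are
// proxied back to the RPC sandbox, while the script itself runs with only
// --allow-net. `rpc` stands in for whatever the codegen step actually emits.
declare const rpc: {
  weather: { forecast(city: string, date: string): Promise<{ highC: number }> };
};

const { highC } = await rpc.weather.forecast("Berlin", "2025-09-28");
console.log(`Forecast high: ${highC}°C`);
```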
Been having some fun testing it out today.
If you have time to take a look I would be curious to hear what you think.
https://github.com/jx-codes/mcp-rpc
nivertech•4mo ago
Just because an agent "lives" in the environment doesn't make it RL. It needs a reward function, or even better, something like Gym.
jmcodes•4mo ago
One thing I ran into is that since the RPC calls are independent Deno processes, you can't keep, say, DuckDB or SQLite open.
But since it's just TypeScript on Deno, I can use a regular server process instead of MCP, expose it through the TS RPC files I define, and the LLM will have access to it.
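A rough sketch of that workaround, with invented file/endpoint names and the @duckdb/node-api binding assumed (whether that addon loads under Deno, or the server runs as plain Node instead, is an implementation detail):

```typescript
// server.ts -- a long-lived process that owns the DuckDB connection
import { DuckDBInstance } from "npm:@duckdb/node-api";

const db = await DuckDBInstance.create("analytics.duckdb");
const conn = await db.connect();

Deno.serve({ port: 8787 }, async (req) => {
  const { sql } = await req.json();
  const reader = await conn.runAndReadAll(sql);
  return Response.json(reader.getRows());
});

// rpc/duckdb.ts -- the stateless RPC file the LLM's typed client exposes
export async function query(sql: string): Promise<unknown[]> {
  const res = await fetch("http://localhost:8787", {
    method: "POST",
    body: JSON.stringify({ sql }),
  });
  return res.json();
}
```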
https://github.com/jx-codes/mcp-rpc https://news.ycombinator.com/item?id=45420133