Or Contextual Shared Variables
Or eXecution Model Language
Or...
I think you're more likely to see an effect if you can somehow capture how far a solo founder gets before they need to bring on the second employee. Because LLMs aren't better than me at my job, but they are better than me at many jobs which I can tolerate being done poorly.
If there is a significant technological shift, it'll be when those startups start outperforming the ones for which LLMs weren't available at the start.
/s
Whenever they need to make two or three manual steps or configurations, they'd rather develop an abstraction layer where you just have to press one button to perform them.
Until that button needs to be accompanied by another manual action. Then they will again develop an abstraction which encapsulates the press of the button along with the augmenting action and is triggered by the pressing of a superior button.
Example: Docker->Kubernetes->Helm or any other tooling that uses YAML to write YAML.
That's optimism
Why am I feeling so old now?
On a more serious note, do any models support this?
Because that is perfectly reasonable to the LLM that wrote that README.
Armin Ronacher has recently been making some good points about tool composition: https://lucumr.pocoo.org/2025/8/18/code-mcps/
People start developing protocols, standards, and overengineered abstractions to get free PR and status. Since the AI hype started, we have seen so many concepts built on top of the basic LLM, from LangChain to CoT chains to MCP to UTCP.
I even attended a conference where one of the speakers was adamant that you couldn't "chain model responses" until LangChain came out. Over and over again, we build these abstractions that distance us from the lower layers and the core technology, leaving people with huge knowledge gaps and misunderstandings of it.
And with LLMs this cycle has gotten quite fast, and its impact in the end is highly visible: these tools do little but poison your context, give you less control over the response, and tie you into their ecosystem.
Every time I tried just listing the available functions with a basic signature like:
fn run_search(query: String, engine: String oneOf Bing, Google, Yahoo)
it provided better and more efficient results than poisoning the context with a bunch of JSON tool definitions because "oooh, tool calling works that way".
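Roughly, in Python - a sketch only: `complete` here is a placeholder for whatever client call sends a prompt to your model and returns its raw text, and the parsing is deliberately bare-bones:

    import csv
    import io
    import re

    # Plain-text "tool list": one signature per line, no JSON schemas.
    FUNCTIONS = (
        'fn run_search(query: String, engine: String oneOf Bing, Google, Yahoo)\n'
        'fn fetch_page(url: String)\n'
    )

    PROMPT = (
        'You may call exactly one of these functions:\n\n'
        '{functions}\n'
        'Answer with a single call, e.g. run_search("rust monads", "Google").\n\n'
        'Request: {request}\n'
    )

    def parse_call(text):
        """Parse a reply like: run_search("rust monads", "Google")"""
        m = re.match(r'\s*(\w+)\((.*)\)\s*$', text, re.S)
        if not m:
            raise ValueError('not a function call: %r' % text)
        name, raw = m.groups()
        # csv handles commas inside quoted arguments correctly.
        args = next(csv.reader(io.StringIO(raw), skipinitialspace=True)) if raw.strip() else []
        return name, args

    def route(request, complete):
        # `complete` sends a prompt to your model and returns raw text
        # (provider-specific, not shown here).
        return parse_call(complete(PROMPT.format(functions=FUNCTIONS, request=request)))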
Making a simple monad-style interface beats using LangChain by a margin, and you get to keep control over its implementation and design rather than having to use a design made by someone who doesn't see the pattern.
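Something like this is all it takes (a sketch; `ask_model` is a placeholder for your own provider call):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass(frozen=True)
    class Step:
        """A composable pipeline step: context dict in, context dict out."""
        run: Callable[[dict], dict]

        def then(self, other):
            # Bind-like sequencing: this step's output feeds the next.
            return Step(lambda ctx: other.run(self.run(ctx)))

    def ask_model(prompt):
        # Placeholder: wire up whatever provider client you like.
        raise NotImplementedError

    summarize = Step(lambda ctx: {**ctx, 'summary': ask_model('Summarize:\n' + ctx['text'])})
    classify = Step(lambda ctx: {**ctx, 'label': ask_model('One-word topic:\n' + ctx['summary'])})

    pipeline = summarize.then(classify)
    # pipeline.run({'text': '...'}) -- two model calls, zero framework.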
Keeping control over what goes into the prompt gives you way better control over the output. Keeping things simple gives you way better control over the flow and architecture.
I don't care that your favorite influencer says differently. If you go and build, you'll experience it directly.
I've seen a lot of influencers suggest "100% assembly", "JavaScript only", "no SQL", which seem quite similar.
I think there is a curve of "reason" to apply when someone is advocating something like this, especially about technology and abstractions.
While in most places adding abstractions to core technology makes sense, since "it makes it easier to use/manage/deploy", LLMs are a rather different case than usual.
Usually going lower-level makes things harder (e.g. going 100% assembly or 100% JS is the hard path), but going 100% pure LLM is the easy path: you don't have to learn new frameworks or abstractions, and it is shareable, easy to manage, and readable by everyone.
In this case it's going up the stack that makes things harder: it turns into code management, becomes harder to reason about, and adds inevitable complexity.
If you add a new person to your team and they see that you are using 100% assembly, they have to onboard to it, learn how it works, learn why it was done this way, and so on.
If you add a new person to your team and they see that you are using all these tools and abstractions on top of LLMs, it's the same.
But if you are just using the core tech, they can immediately understand what is going on. No wrapped prompts, magic symbols, or weird abstractions - no "oh, this is an agent, but this is a chain, while this is a retriever which is also an agent, but it can only be chained to a non-retriever that uses UTCP to call it".
So as always, it is subjective and any advocacy needs to be applied to a curve of reason - in the end, does it make sense?
While using structured outputs is great, it can cause a large performance impact, and you lose control over it - e.g. using a smaller model via Groq to fix an invalid response often works faster than having a large model generate a structured response.
Have 50 tools? It's faster and more precise to stack two small models, or do a search and pass in only the basic definition of each relevant tool and have the model output a function call, than to feed it all 50 tools defined as JSON (see the sketch below).
While structured output itself is fine, it really depends on the use case and the provider. If you can afford the lost compute seconds, it's great. If you can't, nothing beats having absolute control over your provider, model, and output choice.
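The "search over tool definitions" part, sketched - keyword overlap here just to show the shape; embeddings would do better:

    def top_tools(request, signatures, k=3):
        """Rank one-line tool signatures by crude keyword overlap."""
        words = set(request.lower().split())
        return sorted(signatures,
                      key=lambda s: -len(words & set(s.lower().split())))[:k]

    SIGNATURES = [
        'fn run_search(query: String, engine: String oneOf Bing, Google, Yahoo)',
        'fn send_mail(to: String, subject: String, body: String)',
        'fn create_ticket(title: String, priority: String oneOf Low, High)',
        # ...imagine ~50 of these
    ]

    # Only the few relevant one-liners go into the prompt,
    # instead of all 50 tools serialized as JSON schemas.
    relevant = top_tools('search the web for rust tutorials', SIGNATURES)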
A lot of what you suggest as the better approach isn't project specific.
If your setup keeps working better then it’s probably got a lot of common pieces that could be reused, right? Or do you write the parsing from scratch each time?
If it’s reused, then is it that different from creating abstractions?
As an aside - models are getting explicitly trained to use tool calls rather than custom things.
>If it’s reused, then is it that different from creating abstractions?
Because you have control over the abstractions. You have control over what goes into the context. You have control over updating those abstractions and prompts based on your context. You have control over choosing your models instead of depending on models supported by the library or the tool you're using.
>As an aside - models are getting explicitly trained to use tool calls rather than custom things.
That's great, but they are also great at generating code, and guess what the code does? It calls functions.
Reminds me of the early 2000s and all the NoSQL trash.
I don't like the gradual reframing of the model itself as being in charge of the tools, aided by a framework that executes whatever the model pumps out. It's not good to abstract away the connection between the text-generator and the actual volatile IO of your program.
Any “tool protocol” is really just a typed function interface.
For decades, we’ve had dozens (hundreds? thousands?) of different formats/languages to describe those.
Why do we keep making more?
Does it really matter who is calling the function and for which purpose? Does it matter if it’s implemented by a server or a command line executable? Does the data transport protocol matter? Does “model” matter?
interface HelloSayer {
    /** Says hello. */
    String sayHello();
}
Here's your tool protocol, bro.

> LLM agents are typically prompted to produce actions by generating JSON or text in a pre-defined format, which is usually limited by constrained action space (e.g., the scope of pre-defined tools) and restricted flexibility (e.g., inability to compose multiple tools). This work proposes to use executable Python code to consolidate LLM agents' actions into a unified action space (CodeAct). Integrated with a Python interpreter, CodeAct can execute code actions and dynamically revise prior actions or emit new actions upon new observations through multi-turn interactions. Our extensive analysis of 17 LLMs on API-Bank and a newly curated benchmark shows that CodeAct outperforms widely used alternatives (up to 20% higher success rate). The encouraging performance of CodeAct motivates us to build an open-source LLM agent that interacts with environments by executing interpretable code and collaborates with users using natural language. To this end, we collect an instruction-tuning dataset CodeActInstruct that consists of 7k multi-turn interactions using CodeAct. We show that it can be used with existing data to improve models in agent-oriented tasks without compromising their general capability. CodeActAgent, finetuned from Llama2 and Mistral, is integrated with Python interpreter and uniquely tailored to perform sophisticated tasks (e.g., model training) using existing libraries and autonomously self-debug.
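The code-as-action idea in miniature - assumptions: the model replies with Python, and the tool is a stub I made up; real deployments need an actual sandbox, not a bare exec:

    # Hypothetical model reply: Python code instead of a JSON tool call.
    model_reply = (
        'results = run_search("weather in Paris", engine="Google")\n'
        'print(results[0])\n'
    )

    def run_search(query, engine='Google'):
        # Stand-in tool; a real one would hit a search API.
        return ['stub result for %r via %s' % (query, engine)]

    # CAUTION: executing untrusted model output requires sandboxing.
    exec(model_reply, {'run_search': run_search})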
"The Universal Tool Calling Protocol (UTCP) is a modern, flexible, and scalable standard for defining and interacting with tools across a wide variety of communication protocols."
Into the trash it goes.