https://www.npmjs.com/package/@convo-lang/convo-lang
Here is an example of using Convo-Lang in TypeScript:

``` ts
import { convo } from "@convo-lang/convo-lang";

const userMessage = "..."; // see the example user message below

// The convo tagged template runs the script and (assuming the default
// async behavior) resolves with the LLM's response - here a JSON
// object matching the UserMessage struct defined in the script.
const categorizeMessage = await convo`
> define
UserMessage = struct(
    sentiment: enum("happy" "sad" "mad" "neutral")
    type: enum("support-request" "complaint" "compliment" "other")

    # An array of possible solutions for a support-request or complaint
    suggestedSolutions?: array(string)

    # The user's message verbatim
    userMessage: string
)

@json UserMessage

> user
Categorize the following user message:

<user-message>
${userMessage}
</user-message>
`;

console.log(categorizeMessage);
```
And for a userMessage that looks something like:
---- My Jackhawk 9000 broke in half when I was trying to cut the top of my 67 Hemi. This thing is a piece of crap. I want my money back!!! ----
The returned JSON object would look like:

``` json
{
    "sentiment": "mad",
    "type": "complaint",
    "suggestedSolutions": [
        "Offer a full refund to the original payment method",
        "Provide a free replacement unit under warranty",
        "Issue a prepaid return shipping label to retrieve the broken item",
        "Offer store credit if a refund is not preferred",
        "Escalate to warranty/support team for expedited resolution"
    ],
    "userMessage": "My Jackhawk 9000 broke in half when I was trying to cut the top of my 67 Hemi. This thing is a piece of crap. I want my money back!!!"
}
```
It basically gives you a formal syntax for orchestrating multi-turn LLM interactions, integrating tool calls, and managing context in a predictable, maintainable way... essentially trying to bring some structure to "prompt engineering" and make it a bit more like a proper, composable programming discipline/model.
Something like that.
But text is just that, while scripts are something you can rely on. I can prompt and document all the mechanisms to, say, check code formatting, but once I add something concrete, say a pre-commit hook, it becomes reliable.
I am looking for a human readable (maybe renderable) way to codify patterns.
I'm actually working on a system that uses Convo-Lang scripts to form "sub-agents" that are controlled by a master Convo-Lang script.
And regarding your "maybe renderable" comment, Convo-Lang scripts are parsed and stored in memory as a set of message objects similar to a DOM tree. The ConversationView in the @convo-lang/convo-lang-react NPM package uses the message objects to render a conversation as a chat view and can be extended to render custom components based on tags / metadata that is attached to the messages of the conversation.
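In rough terms, the parsed form looks something like this (a simplified, illustrative shape, not the actual message-object API):

``` ts
// Illustrative only - the real message objects carry more fields.
interface ParsedMessage {
    role: string;                     // "system" | "user" | "assistant" | ...
    content: string;
    tags?: Record<string, string>;    // metadata attached in the script
}

// A chat view walks the message list the way a DOM renderer walks nodes,
// choosing a custom component per message based on its tags.
function pickRenderer(msg: ParsedMessage): string {
    return msg.tags?.["component"] ?? "DefaultMessageBubble";
}
```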
As anyone working with LLMs knows, most of the work happens before and after the LLM call: making REST calls, saving to a database, etc. Conventional programming languages work well for that purpose.
Personally, I like JSON when the data is not too huge. It's easy to read (since it is hierarchical like most declarative formats) and parse.
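For example, the typed JSON from the categorization example above plugs straight into ordinary TypeScript for that before/after work (the helper names below are placeholders, not part of Convo-Lang):

``` ts
// Mirrors the UserMessage struct defined in the convo script above.
interface UserMessage {
    sentiment: "happy" | "sad" | "mad" | "neutral";
    type: "support-request" | "complaint" | "compliment" | "other";
    suggestedSolutions?: string[];
    userMessage: string;
}

// Placeholder helpers for the conventional work around the LLM call -
// swap in your own database / REST code.
const saveTicket = async (ticket: UserMessage): Promise<void> => {
    // e.g. INSERT into a tickets table
};
const notifySupport = async (text: string): Promise<void> => {
    // e.g. POST to a webhook or internal endpoint
};

async function handleCategorized(result: UserMessage): Promise<void> {
    if (result.type === "complaint" || result.type === "support-request") {
        await saveTicket(result);
        if (result.sentiment === "mad") {
            await notifySupport(result.userMessage);
        }
    }
}
```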
``` convo
@on user
> onAskAboutConvoLang() -> (
    if(??? (+ boolean /m last:3 task:Inspecting message)
        Did the user ask about Convo-Lang in their last message
    ???) then (
        @ragForMsg public/learn-convo
        ??? (+ respond /m task:Generating response about Convo-Lang)
        Answer the user's question using the following information about Convo-Lang
        ???
    )
)

> user
```
Who in their right mind would come up with such a "syntax"? An LLM?

The triple question marks (???) are used to enclose natural language that is evaluated by the LLM; this is considered an inline-prompt since it is evaluated inline within a function / tool call. I wanted a very clear delineation between the deterministic code that is executed by the Convo-Lang interpreter and the natural language that is evaluated by the LLM. I also wanted there to be as little need for escape characters as possible.
The content in the parentheses following the triple question marks is the header of the inline-prompt and consists of modifiers that control the context and response format of the LLM.
Here is a breakdown of the header of the first inline-prompt: (+ boolean /m last:3 task:Inspecting message)
----
- modifier: +
- name: Continue conversation
- description: Includes all previous messages of the current conversation as context
----
- modifier: /m
- name: Moderator Tag
- description: Wraps the content of the prompt in a <moderator> XML tag and injects instructions into the system prompt describing how to handle moderator tags
----
- modifier: last:{number}
- name: Select Last
- description: Discards all but the last {number} messages of the current conversation when used with the (+) modifier
----
- modifier: task:{string}
- name: Task Description
- description: Used by UI components to display a message to the user describing what the LLM is doing.
----
Here is a link to the Convo-Lang docs for inline-prompts - https://learn.convo-lang.ai/#inline-prompts
The Convo-Lang CLI allows you to run .convo files directly on the command line, or you can embed the language directly into TypeScript or JavaScript applications using the @convo-lang/convo-lang NPM package. You can also use the Convo-Lang VSCode and Cursor extensions to execute prompts directly in your editor.
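For example, embedding a conversation looks roughly like this (a minimal sketch; see the docs for the exact Conversation API):

``` ts
import { Conversation } from "@convo-lang/convo-lang";

// Minimal embedding sketch - the Conversation object holds the ongoing
// message state and handles transport to the configured LLM provider.
const conversation = new Conversation();

conversation.append(/*convo*/ `
> system
You are a helpful assistant.

> user
What is Convo-Lang?
`);

// Sends the conversation to the LLM and appends the model's reply
// to the conversation state.
const result = await conversation.completeAsync();
console.log(result);
```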
The Convo-Lang runtime also provides state management for ongoing conversations and handles the transport of messages to and from LLM providers. And the @convo-lang/convo-lang-react NPM package provides a set of UI components for building chat interfaces and rendering generated images.
Put that on the landing page.
---
The idea is that convo files will serve as dependencies of target outputs, and those outputs could be anything from React components to generated images or videos.
---
This should make for a declarative way of defining generated applications and content that is repeatable and easy to modify.
---
I'll implement the same caching strategy that `make` uses to minimize the number of tokens consumed as changes to convo files are made.
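Roughly the same trick make uses with timestamps, but keyed on content hashes. A sketch of the plan (the stamp-file scheme here is illustrative, not the actual implementation):

``` ts
import { createHash } from "node:crypto";
import { existsSync, readFileSync, writeFileSync } from "node:fs";

// Hash the .convo source so unchanged files can be skipped entirely.
const hashOf = (path: string): string =>
    createHash("sha256").update(readFileSync(path)).digest("hex");

async function buildTarget(
    convoPath: string,                 // the .convo "dependency"
    outPath: string,                   // the generated target
    generate: () => Promise<string>,   // runs the convo script (costs tokens)
): Promise<void> {
    const stampPath = `${outPath}.hash`;
    const srcHash = hashOf(convoPath);

    // Cache hit: the source is unchanged, so no LLM call and no tokens spent.
    if (existsSync(stampPath) && readFileSync(stampPath, "utf8") === srcHash) {
        return;
    }

    // Cache miss: regenerate the target and record the new source hash.
    writeFileSync(outPath, await generate());
    writeFileSync(stampPath, srcHash);
}
```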
---
Anybody have any thoughts or suggestions?
Convo-Lang originally started off as a prompt templating and conversation state management system. It gave me a way to load a prompt template into a chat interface and reuse the same code to handle sending messages between the user and an LLM. This was in the early days of OpenAI when DaVinci was the top model.
As Convo-Lang grew in complexity I created a VSCode extension for syntax highlighting to make templates easier to read and write. And as new patterns like RAG, JSON mode and tool calling hit the scene I added support for them. Before long I had a pretty decent framework that was easy to integrate into TypeScript applications and solved most of my AI needs.
As I built more applications that used tool calling I realized that I was writing less TypeScript, and a good amount of the TypeScript I was writing was basic callback functions invoked by tools the LLM decided to call. At that point I realized that if I created a simple scripting language that could do basic things like make an HTTP request, I could build the majority of my agents purely in Convo-Lang and encapsulate all of an agent's logic in a single file.
I found the idea of an agent encapsulated in a single, simple text file very appealing, and then I did as I do: I ignored all of my other responsibilities as a developer for the next few days and built a thing \(ᵔᵕᵔ)/
After those few sleepless nights I had a full-fledged programming language and a runtime and CLI that could execute it. It's been about a year and a half since then and I've continued to improve and refine the language.
Links:
Convo-Lang Docs - https://learn.convo-lang.ai/
GitHub - https://github.com/convo-lang/convo-lang
Core NPM package - https://www.npmjs.com/package/@convo-lang/convo-lang
All NPM packages - https://www.npmjs.com/~convo-lang
VSCode extension - https://marketplace.visualstudio.com/items?itemName=iyio.con...
r/ConvoLang sub Reddit - https://www.reddit.com/r/ConvoLang/
Any stars on GitHub would be much appreciated, thank you.
benswerd•5mo ago
Stuff like a lot of this needing to be A/B tested, models hot-swapped, and versioned in a way that's accessible to non-technical people?
How do you think about this in relation to tools like BAML?