GlyphLang
GlyphLang replaces verbose keywords with symbols that tokenize more efficiently:
```
# Python
@app.route('/users/<id>')
def get_user(id):
    user = db.query("SELECT * FROM users WHERE id = ?", id)
    return jsonify(user)
```

```
# GlyphLang
@ GET /users/:id {
    $ user = db.query("SELECT * FROM users WHERE id = ?", id)
    > user
}
```
Here, `@` = route, `$` = variable, `>` = return. Initial benchmarks show ~45% fewer tokens than Python and ~63% fewer than Java.
In practice, that means more logic fits in context, and sessions stretch longer before hitting limits. The AI maintains a broader view of your codebase throughout.

Before anyone asks: no, this isn't APL with extra steps. APL, Perl, and Forth are symbol-heavy, but they're optimized for mathematical notation, human terseness, or machine efficiency. GlyphLang is specifically optimized for how modern LLMs tokenize. It's designed to be generated by AI and reviewed by humans, not the other way around. That said, it's still readable enough to be written or tweaked by hand when the occasion requires it.
It's still a work in progress, but it's a usable language with a bytecode compiler, a JIT, an LSP server, a VS Code extension, PostgreSQL support, WebSockets, async/await, and generics.
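If you want a rough, reproducible feel for the token numbers, here's a minimal sketch that runs the two snippets above through OpenAI's tiktoken library. The cl100k_base tokenizer is an arbitrary choice (not necessarily what the benchmarks used), so treat the output as illustrative rather than as the benchmark methodology.

```
# Rough token-count comparison of the two snippets above.
# cl100k_base is an arbitrary tokenizer choice; results vary by model.
import tiktoken

python_src = '''@app.route('/users/<id>')
def get_user(id):
    user = db.query("SELECT * FROM users WHERE id = ?", id)
    return jsonify(user)
'''

glyph_src = '''@ GET /users/:id {
    $ user = db.query("SELECT * FROM users WHERE id = ?", id)
    > user
}
'''

enc = tiktoken.get_encoding("cl100k_base")
py_n = len(enc.encode(python_src))
glyph_n = len(enc.encode(glyph_src))
print(f"Python: {py_n} tokens, GlyphLang: {glyph_n} tokens "
      f"(~{1 - glyph_n / py_n:.0%} fewer)")
```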
everlier•3w ago
For example, see this prompt describing an app: https://textclip.sh/?ask=chatgpt#c=XZTNbts4EMfvfYqpc0kQWpsEc...
goose0004•3w ago
The approach with GlyphLang is to make the source code itself token-efficient. When an LLM reads something like `@ GET /users/:id { $ user = query(...) > user }`, that's what gets tokenized (not a decompressed version). The reduced tokenization persists throughout the context window for the entire session.
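To make that concrete, you can inspect the exact token pieces the snippet turns into (a quick sketch with tiktoken; the tokenizer choice is arbitrary and just for illustration):

```
# The compact source is fed to the tokenizer verbatim -- no expansion pass.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # arbitrary tokenizer choice
snippet = "@ GET /users/:id { $ user = query(...) > user }"
tokens = enc.encode(snippet)
print(len(tokens), [enc.decode([t]) for t in tokens])
```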
That said, I don't think they're mutually exclusive. You could use textclip.sh to share GlyphLang snippets and get both benefits.
everlier•3w ago
Here it is in plain text, to make it more visible:
```
textclip.sh→URL gen: #t=<txt>→copy page | ?ask=<preset>#t=→svc redirect | ?redirect=<url>#t=→custom(use __TEXT__ placeholder).
presets∈{claude,chatgpt,perplexity,gemini,google,bing,kagi,duckduckgo,brave,ecosia,wolfram}.
len>500→auto deflate-raw #c= base64url encoded, efficient≤16k tokens.
custom redirect→local LLM|any ?param svc.
view mode: txt display+copy btn+new clip btn; copy→clipboard API→"Copied!" feedback 2s.
create mode: textarea+live counters{chars,~tokens(len/4),url len}; color warn: tokens≥8k→yellow,≥16k→red; url≥7k→yellow,≥10k→red.
badge gen: shields.io md [!alt](target_url);
```
It uses math notation to heavily compress the representation while keeping the information content relatively preserved (similarly to GlyphLang). Later, an LLM can comfortably use it to describe the service in detail and answer the user's questions about it. The same applies to arbitrary information, including source code/logic.
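Based on the spec above (plain text in #t=, deflate-raw plus base64url into #c= once you pass 500 chars), building a clip URL programmatically could look roughly like this. The helper name and the escaping/padding details are my own guesses rather than exactly what the page does:

```
# Sketch of a textclip.sh-style URL, following the #t= / #c= rules above.
# Escaping and base64 padding details are assumptions.
import base64
import urllib.parse
import zlib

def clip_fragment(text: str) -> str:
    if len(text) <= 500:
        # Short clips go straight into the plain-text fragment.
        return "#t=" + urllib.parse.quote(text)
    # Raw DEFLATE (no zlib/gzip header), i.e. "deflate-raw".
    deflater = zlib.compressobj(9, zlib.DEFLATED, -15)
    raw = deflater.compress(text.encode("utf-8")) + deflater.flush()
    # base64url keeps the fragment URL-safe.
    return "#c=" + base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

spec = "textclip.sh spec or a GlyphLang snippet " * 20  # >500 chars triggers #c=
print("https://textclip.sh/?ask=chatgpt" + clip_fragment(spec))
```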