
[return to "Show HN: GlyphLang – An AI-first programming language"]
1. everli+R1 2026-01-11 00:02:02
>>goose0+(OP)
Arguably, math notation and set theory already have everything we need.

For example, see this prompt describing an app: https://textclip.sh/?ask=chatgpt#c=XZTNbts4EMfvfYqpc0kQWpsEc...

2. goose0+V5 2026-01-11 00:40:03
>>everli+R1
That's an awesome tool! I think textclip.sh solves a different problem, though (correct me if I'm wrong; this is the first time I've seen it). Compression at the URL/transport layer helps with sharing prompts, but the token count still hits you once the text is decompressed and fed into the model: the LLM sees the full uncompressed text.
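
Roughly what I mean, as a sketch (this assumes a deflate-plus-base64 fragment scheme, which is just my guess at how textclip.sh packs the URL, and uses tiktoken's cl100k_base encoding as a stand-in tokenizer):

```python
import base64
import zlib

import tiktoken  # pip install tiktoken

prompt = "Build a REST API with user accounts, sessions, and an admin dashboard. " * 20
enc = tiktoken.get_encoding("cl100k_base")

# Transport-layer compression shrinks what you paste into a URL...
fragment = base64.urlsafe_b64encode(zlib.compress(prompt.encode())).decode()
print(f"{len(prompt.encode())} bytes raw -> {len(fragment)} bytes in the URL fragment")

# ...but the model never sees the fragment. It sees the decompressed text,
# so the token count is unchanged.
print(f"{len(enc.encode(prompt))} tokens hit the context window either way")
```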

The approach with GlyphLang is to make the source code itself token-efficient. When an LLM reads something like `@ GET /users/:id { $ user = query(...) > user }`, that's exactly what gets tokenized; there is no decompression step. The lower token count persists in the context window for the entire session.
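
For a concrete sense of the numbers, a quick comparison (the Express-style handler is just an equivalent I wrote for contrast, and cl100k_base is again a stand-in; exact counts depend on each model's tokenizer):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# A conventional handler for the same endpoint (my own rough equivalent).
verbose = """
app.get('/users/:id', async (req, res) => {
  const user = await db.query('SELECT * FROM users WHERE id = $1', [req.params.id]);
  res.json(user);
});
"""

# The GlyphLang version from the comment above.
glyph = "@ GET /users/:id { $ user = query(...) > user }"

print("verbose handler:", len(enc.encode(verbose)), "tokens")
print("glyph handler:  ", len(enc.encode(glyph)), "tokens")
```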

That said, I don't think they're mutually exclusive. You could use textclip.sh to share GlyphLang snippets and get both benefits.
