zlacker

[return to "Show HN: GlyphLang – An AI-first programming language"]
1. rubyn0+26[view] [source] 2026-01-11 00:40:49
>>goose0+(OP)
I think there’s a certain amount of novelty to this, and I find the aesthetic of the language pleasing, but I’m a little confused… Admittedly, I didn’t read the entire doc and only quickly glanced at the source… But is it just transpiling Golang code to and from this syntax, or is it intended to be a whole language eventually? Are folks able to just import Golang packages, or do they have to use only the packages that are currently supported?

Additionally, I have two thoughts about it:

1. I think this might be more practical as a transparent layer, so users can write and get Golang (or whatever the original language was) back. Essentially, it would be something only the model reads/outputs.

2. Longer term, it seems like both NVidia and AMD, along with the companies training/running the models, are focused on driving down cost per token because it’s just too damn high. And I personally don’t see a world where AI becomes pervasive without a huge drop in cost per token: it’s not sustainable for the companies running the models, and end users really can’t afford the real costs as they are today. My point being, will this even be necessary in 12-18 months?

I could totally be missing things or lacking the vision of where this could go but I personally would worry that anything written with this has a very short shelf life.

That’s not to say it isn’t useful in the meantime, or that it isn’t a cool project; it’s more that if there is a longer-term vision for it, I think it would be worth calling out.

2. goose0+uw3[view] [source] 2026-01-12 06:13:51
>>rubyn0+26
GlyphLang is intended to be a whole standalone language. It's implemented in Go, but it doesn't transpile to or from it. It has its own lexer, parser, type checker, bytecode compiler, and stack-based VM. If it helps, the compilation pipeline currently looks like this:

source (.glyph) -> AST -> bytecode (.glyphc) -> VM.
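For anyone curious what the last stage of a pipeline like that can look like, here's a minimal sketch of a stack-based bytecode VM in Go. The opcodes are hypothetical, not GlyphLang's actual instruction set:

```go
package main

import "fmt"

// Hypothetical opcodes -- GlyphLang's real instruction set may differ.
const (
	OpPush byte = iota // push the next byte onto the stack as a value
	OpAdd              // pop two values, push their sum
	OpMul              // pop two values, push their product
	OpHalt             // stop execution, returning the top of the stack
)

// run interprets a flat bytecode slice with a simple value stack.
func run(code []byte) int {
	var stack []int
	for pc := 0; pc < len(code); pc++ {
		switch code[pc] {
		case OpPush:
			pc++ // the operand is inlined as the next byte
			stack = append(stack, int(code[pc]))
		case OpAdd:
			n := len(stack)
			stack[n-2] += stack[n-1]
			stack = stack[:n-1]
		case OpMul:
			n := len(stack)
			stack[n-2] *= stack[n-1]
			stack = stack[:n-1]
		case OpHalt:
			return stack[len(stack)-1]
		}
	}
	return 0
}

func main() {
	// Bytecode for (2 + 3) * 4
	code := []byte{OpPush, 2, OpPush, 3, OpAdd, OpPush, 4, OpMul, OpHalt}
	fmt.Println(run(code)) // prints 20
}
```

The compiler's job is then just to walk the AST and emit those flat instruction bytes.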

While the original intent was to have something tailored to AI that a human could manage, I'm realizing (to your point) that this will likely not be necessary in the near future. I've started working on making GlyphLang itself significantly more token-friendly, and I'm adding a top layer that will essentially do what I think you've suggested: expand and compact commands for bidirectional conversion between symbols and keywords. That will allow engineers to continue developing with more familiar syntax on a top layer (.glyphx), while LLMs generate actual .glyph code. Once completed, the pipeline will look like this:

.glyphx (optional) -> .glyph -> AST -> bytecode -> VM
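A minimal sketch of what that bidirectional expand/compact conversion could look like in Go, using a hypothetical keyword-to-symbol table (GlyphLang's real mappings may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical keyword-to-symbol table; GlyphLang's actual mapping may differ.
var compactMap = map[string]string{
	"function": "ƒ",
	"return":   "⮐",
	"if":       "¿",
}

// compact rewrites keyword tokens to symbols (.glyphx -> .glyph direction).
func compact(src string) string {
	fields := strings.Fields(src)
	for i, tok := range fields {
		if sym, ok := compactMap[tok]; ok {
			fields[i] = sym
		}
	}
	return strings.Join(fields, " ")
}

// expand is the inverse: symbols back to keywords (.glyph -> .glyphx).
func expand(src string) string {
	expandMap := make(map[string]string, len(compactMap))
	for kw, sym := range compactMap {
		expandMap[sym] = kw
	}
	fields := strings.Fields(src)
	for i, tok := range fields {
		if kw, ok := expandMap[tok]; ok {
			fields[i] = kw
		}
	}
	return strings.Join(fields, " ")
}

func main() {
	src := "function add ( a b ) { return a + b }"
	c := compact(src)
	fmt.Println(c)         // compact form fed to the LLM
	fmt.Println(expand(c)) // round-trips back to the keyword form
}
```

A real implementation would convert at the token level via the lexer rather than splitting on whitespace, so symbols inside strings and comments are left alone; this sketch only shows the round-trip idea.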

Regarding #2, that's a great point and actually something I considered, though admittedly maybe not for long enough. Regardless, I've tried to develop this with a value proposition that isn't purely about cost (though cost does drive a lot of this). I'm also working on these 3 points:

1. Reduced hallucinations: symbols are unambiguous - there shouldn't be confusion between def/fn/func/function across languages (no formal benchmarks yet, but they're planned)

2. Context window efficiency: fitting more code in context allows for better reasoning about larger codebases, regardless of cost

3. Language-neutrality (someone else brought this up): symbols work the same whether the model was trained on English, Spanish, or code

I think even if tokens become free tomorrow, fitting 2x more code in a context window will still significantly improve output quality. Hopefully it will be necessary or at the very least helpful in the next 12-18 months, but who knows. I really appreciate the questions, comments, and callout!
