The Meanderings of Designing Compilers
Recently, I’ve taken it upon myself to attempt the herculean task of single-handedly designing a compiler for a completely new language. Yes, preposterous. The idea of this new language went something like this: we have Logo, which can draw really neat patterns with just a little math, so why not a language that could make moving patterns, or even convert something like basic Markdown to video?
So, there is a company that claims to do just this, Narakeet, but it’s proprietary.
The project is kind of way out of my league, so I thought I would start small: first make a basic compiler for Logo, or even COW. I found a couple of videos, and I’m just ruthlessly going to list the ones I shall follow through with:
- Building a Compiler by Immo Landwerth
- Write Your Own Compiler by Phil Trelford
- The Presentation Phil used in that video.
- Hjalfi writes a compiler
These are seriously long-ass videos, sigh… it’s gonna take some serious time.
So anyways, I was watching Phil Trelford’s video, and he explains the most crucial term in compiler design: the Abstract Syntax Tree (AST). Essentially, this is one of those terms that also shows up in natural language processing, since both fields are about parsing text into structured trees.
If you’re familiar with programming, you’ll understand that the crux of what a compiler does is take the craziness of whatever code you write and convert it into its own sort of craziness that it understands better. It is for that reason that we create parsers. A parser’s job is to look at your code and build an AST. I found this really neat guide explaining all of this: https://tomassetti.me/guide-parsing-algorithms-terminology/.
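To make that concrete, here’s a minimal toy sketch of a parser (my own example, not from Phil’s video): it tokenizes arithmetic with `+` and `*` and builds an AST as nested tuples, with `*` binding tighter than `+`.

```python
# A minimal sketch of what a parser does: turn flat text into an AST.
# Handles just integers, +, and *, with * binding tighter than +.

import re

def tokenize(src):
    """Split source text into a flat list of tokens."""
    return re.findall(r"\d+|[+*]", src)

def parse(tokens):
    """Recursive descent: expr -> term ('+' term)*, term -> num ('*' num)*."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def num():
        return ("num", int(take()))

    def term():
        node = num()
        while peek() == "*":
            take()
            node = ("*", node, num())
        return node

    def expr():
        node = term()
        while peek() == "+":
            take()
            node = ("+", node, term())
        return node

    return expr()

ast = parse(tokenize("1+2*3"))
# The tree encodes precedence: ('+', ('num', 1), ('*', ('num', 2), ('num', 3)))
print(ast)
```

The point is that the AST, not the raw text, is what the rest of the compiler walks over.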
The Meandering
No neater way to change topics than to label it a meandering. You probably get the idea of this website, my writings, and maybe even my existence. If not, then I want you to understand one aspect of it: I try to bend towards all sorts of topics, because to talk purely in computer science terms would be too claustrophobic. To present new ideas in the field, we must think outside the means we use to actualize them. If you think about any programming language, you always think in terms of a stack. If we were to create a mind from these languages tomorrow, we’d forever be constrained by that stack. Here’s what I present… why aren’t languages more graph? Traditional languages like C, although great, are kind of boxed in. They go through the code line by line, maybe come across a function, load things onto the stack, and if there’s ever a run-time issue, they just stop. That’s that for them. Humans don’t. We don’t even read line by line; we move on to whatever catches our interest. There’s nothing that stops us from continuously compiling and making some sort of weighted decision about how to move forward. And on top of that, we have the ability to go back to something we did previously, to try and make amends.
What would it mean for a language to be graph?
A graph language would have the ability to move back and forth, taking weighted decisions that don’t range only from True to False. It would have memory of previous runs, spontaneously growing as it accumulates experience.
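This is pure speculation, but here’s a toy sketch of what I mean, with every name made up for illustration: program steps are nodes, edges carry weights, the runtime picks the next step by weighted choice instead of a boolean branch, and the weights persist as crude “experience”.

```python
# Speculative toy: a "graph" program. Nodes are steps, edges have
# weights, and the runtime picks the next node by weight rather than
# falling through line by line. Weights are reinforced or weakened
# based on whether a step succeeded, so runs shape future runs.

import random

class GraphProgram:
    def __init__(self):
        self.nodes = {}   # name -> callable returning True/False (did it work?)
        self.edges = {}   # name -> {successor_name: weight}

    def node(self, name, fn, successors):
        self.nodes[name] = fn
        self.edges[name] = {s: 1.0 for s in successors}

    def run(self, start, steps=5):
        current = start
        for _ in range(steps):
            ok = self.nodes[current]()
            nxt = self.edges.get(current)
            if not nxt:
                break
            # Weighted choice, not a True/False branch.
            names = list(nxt)
            chosen = random.choices(names, weights=[nxt[n] for n in names])[0]
            # Reinforce the edge we took if the step worked, weaken it
            # if it didn't -- the "memory of previous runs".
            nxt[chosen] *= 1.5 if ok else 0.5
            current = chosen

# A tiny example: "plan" can go to "try" or "rethink"; both loop back.
g = GraphProgram()
g.node("plan", lambda: True, ["try", "rethink"])
g.node("try", lambda: random.random() > 0.3, ["plan"])
g.node("rethink", lambda: True, ["plan"])
g.run("plan", steps=6)
```

Nothing here is a real language, of course; it’s just a sketch of execution as a weighted walk over a graph instead of a march down a stack.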
draft:
- https://blog.opencog.org/page/2/
- https://www.tutorialspoint.com/automata_theory/context_free_grammar_introduction.htm
- https://en.wikipedia.org/wiki/Context-free_grammar

draft 2 (concurrency):
- https://sceweb.uhcl.edu/helm/RationalUnifiedProcess/process/workflow/ana_desi/co_cncry.htm
- https://www.classes.cs.uchicago.edu/archive/2018/spring/12300-1/lab6.html