
Compilation

July 8, 2020 3:50pm

Yesterday I pulled a bunch of notes from my repo and converted them into notes on this website, and ended up rereading most of them. I've been thinking about the decomposition note and the process of compilation (i.e. switching domains)...

For functions, the time domain and the frequency domain are just two domains that generate the same function. In programming, two different languages are likewise two domains in which the same thing is generated. Even having a problem statement in natural language and then performing the act of 'programming' to convert it into machine code is the same thing existing in two domains. It's fun to think about all the domains, computational, linguistic or mathematical, their crossovers, and how they end up translating into one another and back.
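To make the first example concrete, here's a minimal sketch (Python with NumPy; the particular signal and kernel are arbitrary choices of mine). It shows that the two domains carry the same function, and that a fact which is awkward in one domain (convolution) becomes trivial in the other (pointwise multiplication):

```python
import numpy as np

# One function, two domains: the samples and their Fourier transform
# carry exactly the same information.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

spectrum = np.fft.fft(signal)           # time domain -> frequency domain
roundtrip = np.fft.ifft(spectrum).real  # frequency domain -> time domain
assert np.allclose(signal, roundtrip)   # nothing is lost in translation

# A statement that is easier in one domain: circular convolution in time
# is plain pointwise multiplication in frequency.
kernel = np.exp(-10.0 * t)
via_frequency = np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)).real

N = len(t)
direct = np.array([sum(signal[m] * kernel[(n - m) % N] for m in range(N))
                   for n in range(N)])
assert np.allclose(via_frequency, direct)
```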

Some domains (models) are equally expressive, but some aren't. It's also fun to think about mathematical models that don't encompass all of mathematical notation and every low-level math object, but instead some higher-level ones, such as groups, rings, and so on; they too form a new model to think in terms of. Finding connections and conclusions in one helps the other, and vice versa. Maybe finding theorems in one is easier than in the other, so transforming between the two models back and forth, moving in small steps in either, is a way to go. This is obviously reminiscent of category theory, and it is indeed how mathematics is already done today: you get blocked in one space, then a completely different research area makes progress, someone links it to the previously unsolvable problem, et voilà!

I think of the programs that a given model can generate as a tree with branches, but branches that can reconnect and then split again (a directed acyclic graph). Now imagine other models and their trees being able to merge in as well: their branches arriving at the same points in space (where a point is the start/end of a branch and represents one program) and then diverging again, taking paths that only one of them can reach, until they perhaps meet again later.

"Compilation" (in computer science sense) is a about finding those meeting points of two different generative trees, and switching from program in first domain, to the version of the same program written/generated within the second model.

And now imagine an entire forest of trees of mathematical objects (and thus functions), natural languages, and known computational models, all intertwined and dense. Yet it's full of holes and imperfect.

Q: How do Gödel's incompleteness theorems fit into that picture?

I deem "compilation" to be a very important part of automatic generation of new theorems, algorithms, programs or whatever it may be. Having ability to convert a statement expressed with natural language into a complete computer program could be a powerful one. Of course, prooftesting, debugging and covering all possible outcomes still aren't accounted for by it, nevertheless it's an interesting thing to think about.

Another very interesting thing is finding the model from the program. That is, given a specific program, find the models that generate it. For scientific advancement, this is perhaps crucial for discovering new fundamental building blocks of nature from the theories we are already familiar with. Imagine having two theories (for example, relativity and quantum mechanics), finding their meeting point (a program/theorem that they both can generate), and then running a search over all models that can generate that same program. The result could be a model that generates both of those models, or something completely new.

Another approach could be to simply run a model search over each of the two theories separately; the results may yield a model that generates both, or it would be great if we could prove that, for certain, no model can generate both.
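A toy version of that search, in the same spirit as the compilation sketch above (the primitive universe, the "theories" and the brute-force enumeration are all my own illustration):

```python
from itertools import combinations, product

SAMPLES = range(-5, 6)

# A small universe of candidate primitives; a candidate "model" here is
# any pair of them.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "dbl": lambda x: 2 * x,
    "neg": lambda x: -x,
}

def behaviors(prims, max_len=3):
    """Every behavior (outputs on SAMPLES) the model can generate."""
    out = set()
    for n in range(1, max_len + 1):
        for program in product(prims, repeat=n):
            results = []
            for x in SAMPLES:
                for step in program:
                    x = step(x)
                results.append(x)
            out.add(tuple(results))
    return out

# Two "theories", each given only as a behavior (a program's fingerprint):
theory_one = tuple(2 * x + 2 for x in SAMPLES)  # e.g. "increment, then double"
theory_two = tuple(4 * x for x in SAMPLES)      # e.g. "double twice"

# Model search: which candidate models generate both theories?
for names in combinations(PRIMITIVES, 2):
    prims = [PRIMITIVES[k] for k in names]
    if {theory_one, theory_two} <= behaviors(prims):
        print("model", names, "generates both")  # -> ('inc', 'dbl')
```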

This "model search" is automated reductionism basically, it split things further and further and tests the program trees that it can generate. However, that it indeed is one of the hardest problems of every scientist - finding a model; and for all I know, there is no way to automatically create those, not even an intuitive one. Basic process is something like: get a lot of data about behavior and then try to generalize and create abstractions. So you start from the more specific, complex objects and then you remove their properties, combine them with themselves or some other "abstracted" objects and see what their interactions can generate (i.e. can they generate a full dataset/pattern of behavior). This is literally how you abstract classes in programming when doing object-oriented programming (OOP).

This process of "removing properties" is not automated, nor can we even conceptualize what it actually means. (TODO: check whether a formalization of this already exists; I currently can't think of any, in type theory, category theory, set theory, or computational terms.) But being able to approach it formally would have amazing consequences for the scientific method and for science. It also begs the question: what is a property?

Q: What is a property of an object?

(TODO: pull notes from Google Keep)