Death of functions?

March 17, 2019

This post was originally published on Medium as “Death of functions?”.


It all started with counting sheep. At least that’s the earliest example I know of. Thousands of years ago, people didn’t know about numbers; they hadn’t yet developed many of the mathematical tools that we now learn very early in school. Yet shepherds needed to somehow keep track of their sheep and make sure none of them got lost or forgotten. For every sheep that went out, they’d set one stone aside on a pile. Later, when retrieving the herd, they’d take one stone from that pile for every sheep that returned. If any stones remained, it meant that some sheep were left behind, and they’d go out to find them.

In this case it was “a stone for a sheep”, but it was one of the first examples of what would become a foundation of all human work: mapping. And that is exactly what functions do: they map things.
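
To make the “stone for a sheep” idea concrete, here’s a minimal sketch in Haskell (the names are mine, purely illustrative): the shepherd’s pile is just the image of the herd under a one-for-one mapping, and comparing counts detects loss.

```haskell
-- The shepherd's tally: map each departing sheep to one stone,
-- then consume one stone per returning sheep.
stonesFor :: [String] -> Int          -- one stone per sheep going out
stonesFor = length

leftBehind :: Int -> [String] -> Int  -- stones remaining after the return
leftBehind pile returned = pile - length returned

main :: IO ()
main = do
  let pile = stonesFor ["dolly", "molly", "polly"]  -- 3 stones on the pile
  print (leftBehind pile ["dolly", "molly"])        -- prints 1: a sheep is missing
```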

:: scientific method

The scientific method is a set of steps you go through in order to develop a theory about something, and, as you may have guessed, all of science is based on it. With that “theory about something” you gain a model, i.e. some kind of description of a behavior, which helps you predict it in future cases; you can then use that to your advantage. Here’s one way to explain how it works:

  1. observe a process
  2. propose a hypothesis about how it works (create a model)
  3. test the hypothesis (check whether your model’s predictions match real data)
    • if the tests confirm your hypothesis → great, you now have a model
    • if they fail → go to (2) and find an alternative

And this, basically, is how every scientific field works. It proposes a solution to a problem, tests it, and uses it until it’s proven incorrect; then it improves upon it and iterates further. This is also what distinguishes science from other human “belief” systems: it gives you a way to falsify it.
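
As a rough sketch of that loop (the names Model, refine and agrees are mine, not any formal definition), the steps above amount to trying candidate models until one survives its tests:

```haskell
type Observation = Double
type Model = Observation -> Observation  -- a model is itself a mapping

-- Try candidate hypotheses in order until one's predictions match
-- every (input, observed output) pair.
refine :: [(Observation, Observation)] -> [Model] -> Maybe Model
refine _   []     = Nothing               -- out of hypotheses: back to step (2)
refine obs (m:ms)
  | all agrees obs = Just m               -- tests pass: we now have a model
  | otherwise      = refine obs ms        -- falsified: propose the next one
  where agrees (x, y) = m x == y
```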

This method still lacks a lot of important parts. It doesn’t state that something is definitely, objectively correct (i.e. anything is only correct up to a model); it just says that “for all we know so far, it’s not incorrect”. It doesn’t tell us how to create further iterations. It doesn’t tell us how to find new hypotheses or how to construct models. It doesn’t tell us what’s important and what isn’t, or which measurements are correlated and which are not. Yet it’s incredibly powerful as it is, and all the achievements of the human race could more or less be attributed solely to it.

:: biological aspect

Even if the scientific method is imperfect, it enables us to predict the behavior of systems around us in advance. That gives us the ability to act sooner, and even to include those predictions in our decision making, since we can now project the outcomes of our decisions ahead of time and benefit practically from all of it.

From a biological standpoint, this serves a quite practical purpose: we use it to survive. Thanks to our ability to map things, i.e. to match objects with their behavior, and then match that behavior with predictions about its properties under different circumstances, we were able to learn what is good to eat and what isn’t. We were able to learn about day-night cycles, the seasons, dangers… we were able to learn anything and everything.

It’s a good question whether our brain works as a mapping mechanism all by itself, or whether one of its emergent levels of behavior acquires the ability to learn new mappings, among other things. In other words: at what level of emergence of the brain’s inner workings do we get the mapping ability? Or is it built into the very core of the brain, so that we could never do anything besides it?

We later managed to develop natural languages. And language is nothing more than a mapping between a symbol (a word) and a collection of properties that are tied together, so that we can use that one specific word to describe them all at once. An “apple” is a system that is relatively small, usually red or green, quite round, has a specific taste… Languages are incredibly detailed and large sets of mappings between the different behaviors and properties we encounter, on one side, and words, on the other. There are probably thousands of different languages, and they differ in how you construct complex descriptions and in the meanings of specific words; yet they all share the property that their symbols (spoken or written) map to something they are encoding. The ability to represent things more efficiently through symbols, and to exchange those symbols between individuals, is what allowed us humans to achieve so much compared to other animals (though of course there are additional reasons).
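
In code, a fragment of such a lexicon is literally a finite mapping; a toy illustration (the entries are mine):

```haskell
import qualified Data.Map as Map

-- A tiny lexicon: a word maps to the bundle of properties it encodes.
lexicon :: Map.Map String [String]
lexicon = Map.fromList
  [ ("apple", ["relatively small", "red or green", "quite round"])
  , ("night", ["dark", "one half of the day-night cycle"])
  ]

-- Looking a word up recovers the properties behind the symbol:
-- Map.lookup "apple" lexicon
```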

A more formal view of “mapping” is through functions, or morphisms. There is an input (a source) and an output (a target), and there is some magic that relates one to the other, or converts/transforms one into the other, etc. And the scientific method works with exactly this idea: given some specific input setup, we are able to get a single, correct answer as its output, an outcome.
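
In standard notation, that promise is exactly the single-valuedness that defines a function (a textbook restatement, not something derived here):

```latex
f : X \to Y, \qquad \forall x \in X \;\exists!\, y \in Y :\; f(x) = y
```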

:: physics

Even things like quantum physics, which sound like they could rain all over our single, correct, deterministic answer, don’t actually do so. We are limited by other things, such as having only local knowledge, imperfect descriptions of states, etc. So in both classical and quantum physics the scientific method is heavily used; it has allowed us to make great progress and hasn’t failed per se. It has let us iterate over and over again and develop better and better theories.

But that doesn’t mean the Method is without its limitations in physics. Some theories in physics are not scientific, in the sense that they can be neither proved nor disproved. There are parts of supersymmetry and string theory that aren’t scientific in that regard (people far more knowledgeable than I am can probably fill in the details).

It also binds us to the idea of time and causality. There is always one moment/state; you apply the rules for the system’s evolution, and you get the next moment/state. In physics this is quite obvious in those terms, but in general, functions have bound us to thinking in successive steps, states, automata, inputs and outputs.
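
A minimal sketch of that “state, rule, next state” picture (State and evolve are toy placeholders, not a claim about real dynamics):

```haskell
type State = Double

-- The system's rule of evolution: one moment/state to the next.
evolve :: State -> State
evolve s = s + 1          -- a toy rule, purely illustrative

-- "Time" as iteration: the state after n steps is n applications of the rule.
after :: Int -> State -> State
after n s = iterate evolve s !! n

-- e.g. after 3 0 == 3.0
```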

That also means there should be a before and an after to everything. It binds us to thinking about the start of everything and the end of everything. About the start of time and the end of it. About the next point in space along the same vector. Is space then infinite? Is time? Not enough parameters when working with one form of function? Just add more parameters/dimensions. It pushes us towards more and more complex descriptions because the simpler forms are insufficient. What about a smaller number of dimensions, then? The holographic principle? Ultimately, what about zero dimensions? All hell breaks loose. It tortures us with questions about why anything exists at all, and whether it merely changes form through various transformations while being ever-present. If every behavior is a composition of something else, given enough parameters, where is the start? What are the primitive components that give rise to the behavior we observe (e.g. elementary particles)? How big or small can we go, and does it ever end? Discrete and continuous structures emerge and force us to decide which it is.

:: computation and math

Computation is the general study of systems and their abilities. Since all our observations are based on functions, it is no surprise that the most basic computational model is functional. The fact that there are other computational models equivalent to it is just an extra that also shouldn’t come as a surprise. Everything is a function either way.

Even if calling something “Turing-complete” makes us think of state machines rather than functions, just think of it as “lambda-complete”. Suddenly, all computational models become “at most as good as functions”. Functions are the upper limit because we work in their domain, so that’s no surprise. Even in models with states, the transitions between states are basically mappings, so it becomes very obvious again (disregarding the distinction between finite and infinite programs, and the mappings between the two kinds of model).
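
To see transitions-as-mappings concretely, here’s a toy two-state automaton (all names mine); its entire behavior is a single function delta, folded over the input:

```haskell
data State  = Even | Odd deriving Show
data Symbol = Zero | One

-- The whole transition table is nothing but a mapping (state, symbol) -> state.
delta :: State -> Symbol -> State
delta s    Zero = s
delta Even One  = Odd
delta Odd  One  = Even

-- Running the machine is folding that mapping over the input word.
run :: [Symbol] -> State
run = foldl delta Even

-- e.g. run [One, Zero, One] == Even (an even number of Ones)
```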

The foundations of mathematics can also be cast categorically or type-theoretically. The fact that we first discovered other foundations, which later proved equivalent (or close enough, depending on what we pick as the basic rules) to these functional ones, is to be expected: we perhaps got used to thinking in terms of objects rather than functions. But those are the same thing, so discovering that the entire deductive system of mathematics is functional is just as normal and obvious here as well.

Functions force us to have a start in math, too, just as they do in physics. And since we have to start from something, that something is a set of axioms, or, in the case of computation, a set of basic transformations. Here, we are unable to divide those into smaller chunks. The question of discrete vs. continuous doesn’t even make sense here the way it perhaps does in physics. And working with exactly-picked sets of axioms means there are precise things they can or cannot do, and can or cannot generate as consequences. For some statements we can’t even say whether they can be settled at all (Gödel’s incompleteness theorems).
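
For reference, a compressed textbook statement of the first incompleteness theorem (not this post’s own formalism): any consistent, effectively axiomatized theory T containing enough arithmetic leaves some sentence G undecided,

```latex
T \nvdash G \qquad \text{and} \qquad T \nvdash \lnot G
```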

:: artificial intelligence

In recent years, machine learning has exploded. Although some of its properties are not yet well understood, interpreting it as an optimization algorithm over mappings on a given structure, through repeated exposure to inputs and outputs, makes sense and explains some of its other properties. Its correlation with the classical learning we humans use is also apparent. So, although several important properties and explanations are missing, these methods are obviously good enough for many practical purposes, and they’ve already proven themselves in industry.
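
As a hedged illustration of “optimizing a mapping through repeated exposure to inputs and outputs”, here is a one-parameter toy (not any production algorithm): fitting the weight w of the mapping x → w·x by gradient descent on squared error.

```haskell
-- One gradient-descent step on the mean squared error of x -> w * x.
step :: [(Double, Double)] -> Double -> Double
step samples w = w - lr * grad
  where
    lr   = 0.1  -- learning rate, picked arbitrarily for the toy
    grad = sum [2 * (w * x - y) * x | (x, y) <- samples]
           / fromIntegral (length samples)

-- Repeated exposure to (input, output) pairs optimizes the mapping.
fit :: [(Double, Double)] -> Double
fit samples = iterate (step samples) 0 !! 100

-- e.g. fit [(1, 2), (2, 4), (3, 6)] converges to ~2.0: it has learned x -> 2x
```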

But AI is also concerned with the problems of consciousness and qualia (subjective perception), among other things. Those are particularly interesting because they seem to expose limitations of the scientific method that nothing else does.

Consciousness, in my opinion, is a “simple” consequence of deciding where to focus the computational resources our brain has, through many layers of information renormalization and lossy compression, coupled with other symbolic items that we like to purposely separate out (like one’s self). Again, IMO, it’s a question of the distribution of energy and biochemical activity over time in a very complex network such as our brain. But this still evades questions such as “am I twice as awake/conscious as I was 10 minutes ago?”. Before dwelling on our inability to ask those kinds of questions, a few words about qualia as well. The problem of qualia (the subjective perception of colors, pain, feelings, …) seems to have successfully and completely evaded all scientific approaches so far. There as well, questions such as “how much angrier am I today compared to yesterday?” or “is what I see and call red the same as what you see and call red?” don’t make much sense and just stun us in place. Beyond the obvious chemical and physical properties we can measure when a human experiences something, we still don’t have a clue how to relate those measurements to our qualia of it.

Since the scientific method assumes you can hypothesize a model, and a model is some mapping, our inability to even define inputs and outputs for potential models of consciousness or qualia suggests that we are missing something crucial. And since we use words to ask those questions, by using words at all we already expect the answers to be wrapped in mappings, offered back as some other combination of words. There are multiple ways to interpret this fact (or to dispute that it’s a fact at all), but certainly one of the possibilities is that the overall approach of modelling these things as mappings is insufficient, and that we will never be able to resolve those problems that way.

:: limits

There is also the quite obvious problem that everything we do is ultimately described in natural language. Even mathematical axioms and basic rules are described in natural language first. And if functions are insufficient, then our language is as well. Does that mean we need a new language? New ontological structures? New math? The theory of computation gives us partial answers to these questions, and they are not optimistic.

If we put all the sentences that our natural language (the one and only kind the computational model of our brain would ever be able to understand) can generate into a (finite or infinite) collection H, and all the sentences that can be generated by the computational machine of our Universe into a collection U, it may happen that H is only a subset of U. It might happen that we will never be able to answer many questions about our universe, and that we’ll keep reaching inconsistent conclusions because we are missing other critical pieces. In the deductive graph of our universe’s basic components (its computational language), our brain’s graph might be only a subgraph, and sharing only some points with it might be completely insufficient for understanding it all. It may also be the case that the functional graph of consequences is itself only a subgraph of H, instead of the complete match we currently assume.
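
Written out in my own (speculative, like the paragraph itself) notation, the worry is a strict containment, with sentences our language can never generate:

```latex
H \subseteq U, \qquad \text{possibly even} \quad H \subsetneq U
\;\Longrightarrow\; \exists\, u \in U \setminus H
```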

(I’m aware that this analysis is based on the theory of computation, which is basically a theory of functions, so speaking about non-functional objects, whatever those might be and however they might look, in a functional manner might not make sense at all.) Finding the guilty one (the link that puts a limit on our computational capabilities) in the chain made of:

universe — brain — language — math — rest of science

is a tough task. And even after finding the upper limits of all the capabilities, it’s a matter of maximizing them for each link in the chain and doing the most we can with what’s there.

:: conclusion

So, if not functions, what then?

But really, this is stepping in the direction of pure meta-science, by its very nature: something that not even mathematicians have yet dared to formalize, and something that might later be proved completely wrong. Those kinds of things are in the domain of philosophy. And yes, philosophy might already offer some potential for further work in this area. Relational theory, ontic structural realism (OSR)… these may have the power to offer new things, but their vagueness and lack of formalism is currently overwhelming. Modifications to existing theories, such as information theory, might help as well. Whatever it turns out to be, I believe it would also be able to explain why functions have worked so well for so long, and would become a direct upgrade to them. And if the limitations of mappings continue to surface, the need for at least some alternative will only become more and more apparent.

Functions have proven to be very useful and practical, no doubt about it. They’ve stood the test of time quite well, but they are showing signs of weakness. They definitely have their place, but they have limitations. They have problems. They are insufficient. Hopefully I’ve given a glimpse of the idea that many problems in modern science are not there on their own, but because we work with the wrong kinds of structures from the very start. There are probably alternatives lurking in the future of meta-science that will show that functions are only a subset of our full potential (which is something I hope is the case and am working towards). But so far, this is what we have to deal with. And over many centuries and millennia, by analyzing functions using functions themselves, we’ve concluded that they are imperfect and insufficient.

Functions have said it, about themselves: “we’re done”.