
Computational approach to the scientific method

August 11, 2020 3:34am

Recently, I've been struck by the sheer simplicity of the process of discovery and scientific originality when viewed through the lens of linguistics and optimization. Thinking further along those lines makes it quite obvious what each of those methods accomplishes and how.

It seems there are only a few categories into which you can put all scientific breakthroughs. The basic categories are:

  • comparing two different problem structures and equalizing them
  • small modification of an existing idea
  • adding new assumptions; extending the model

The first category reuses ideas. It compares the structure of a problem to something else that shares the same structure but is easier to work with, so conclusions about one can be applied to the other, and vice versa. This is basically what category theory does.
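
To make that concrete with a toy example of my own (not from the original argument): multiplication of positive numbers and addition share the same structure through the log/exp mapping, so a problem posed in one domain can be solved in the easier one and the conclusion carried back.

```python
import math

def multiply_via_addition(a, b):
    # Map the multiplicative problem into the additive domain ...
    log_a, log_b = math.log(a), math.log(b)
    # ... solve it there, where the structure is simpler ...
    log_product = log_a + log_b
    # ... and carry the conclusion back to the original domain.
    return math.exp(log_product)

print(multiply_via_addition(3.0, 4.0))  # ~12.0, same answer as 3 * 4
```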

An example of the second category would be, instead of assuming that energy takes continuous values, discretizing it. That kind of assumption led to one of the biggest breakthroughs in physics, namely quantum physics. This kind of "tweaking" of a hypothesis smells very much like parametrization in machine learning.
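
As a rough toy sketch of that tweak (arbitrary units, my own illustration rather than anything from the original post): the same Boltzmann-weighted average energy of an oscillator comes out differently depending on whether the energy assumption is left continuous or discretized into quanta.

```python
import math

k_T = 1.0    # thermal energy k*T (arbitrary units)
h_nu = 2.0   # energy quantum h*nu (arbitrary units)

# Assumption A: energy varies continuously.
# Boltzmann-weighted average energy, approximated with a Riemann sum;
# analytically this comes out to k*T.
dE = 1e-3
energies = [i * dE for i in range(int(50 * k_T / dE))]
weights = [math.exp(-E / k_T) for E in energies]
avg_continuous = sum(E * w for E, w in zip(energies, weights)) / sum(weights)

# Assumption B: energy comes in discrete quanta E_n = n * h*nu.
# Same Boltzmann weighting over discrete levels; analytically this is
# h*nu / (exp(h*nu / k*T) - 1), the Planck form.
levels = range(200)
w_disc = [math.exp(-n * h_nu / k_T) for n in levels]
avg_discrete = sum(n * h_nu * w for n, w in zip(levels, w_disc)) / sum(w_disc)

print(avg_continuous)  # ~1.0   (= k*T)
print(avg_discrete)    # ~0.313 (= 2 / (e^2 - 1))
```

One changed assumption, the rest of the model untouched, and the prediction shifts.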

The third category alters the base theory very radically, usually by completely substituting it with something new and testing whether the new hypothesis successfully generates the original theory. It is the classical reductionist approach.

All the categories are actually doing optimization. The basis for that optimization is the properties of objects. The search space? All the combinations of those properties. Every hypothesis is expressed as a bundle of mathematical objects with their own properties and relations between them; together, those bundles form the total search space.
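
A minimal sketch of that framing, with made-up property names purely for illustration: a hypothesis as a bundle of properties, and the search space as every combination of values those properties can take.

```python
from itertools import product

# Hypothetical properties and the values each one could take.
properties = {
    "energy": ["continuous", "discrete"],
    "space": ["flat", "curved"],
    "time": ["absolute", "relative"],
}

# The total search space: every combination of property values.
search_space = [
    dict(zip(properties, combination))
    for combination in product(*properties.values())
]

print(len(search_space))  # 2 * 2 * 2 = 8 candidate hypotheses
print(search_space[0])    # {'energy': 'continuous', 'space': 'flat', 'time': 'absolute'}
```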

Each successive category increases the amount of variation over the base assumption. The first one doesn't change anything; it just reinterprets structures in another domain and gives them another context to work in. The second one alters the properties only partially, by updating some small or large part of them. The last one is "the most original one": it creates a hypothesis anew, one that generates the base theory as its consequence (this is very much what I meant in the last few paragraphs here). All categories of scientific alteration change the properties of the base assumption; only the amount and the type of alteration differ.

Decomposing every current theory into a purely mathematical one, altering its properties one by one, and exploring what kinds of outcomes those new models generate is the way science is already being done, but it could be done computationally as well.
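
A hedged sketch of what that loop could look like; the properties, the "observations", and the scoring function below are placeholders of my own, nothing resembling real physics.

```python
base_theory = {"energy": "continuous", "space": "flat", "time": "absolute"}
alternatives = {
    "energy": ["continuous", "discrete"],
    "space": ["flat", "curved"],
    "time": ["absolute", "relative"],
}

def score(theory, observations):
    # Placeholder: a real version would generate predictions from the theory
    # and compare them to experiments; here it just counts matching properties.
    return sum(theory[key] == value for key, value in observations.items())

observations = {"energy": "discrete", "time": "relative"}  # pretend data

# Alter one property at a time and evaluate each mutated theory.
candidates = []
for prop, values in alternatives.items():
    for value in values:
        if value == base_theory[prop]:
            continue
        variant = dict(base_theory, **{prop: value})
        candidates.append((score(variant, observations), variant))

best_score, best_variant = max(candidates, key=lambda pair: pair[0])
print(best_score, best_variant)
```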

Of course, the computational requirements for something like that, decomposing even the most basic theories and generating enough additional theories from their mutations, are probably far, far beyond reach, and the ability to confirm that a given theory is useful or true in practice is even further out of scope.

Viewed as a pure optimization problem, with a known basis and search space, science seems less intimidating.

However, once again, the question of what "properties" are still remains.