Functional versus Explanatory Models for Learning

Antony Van der Mude
Science and Philosophy
5 min read · Jan 16, 2022


Learning is the creation of a cognitive model from experience. In machine learning, there are a number of different types of these models:

  • Some models are numeric, such as linear regression or polynomial fitting.
  • Some models are composed of functional subunits — neural networks being an example.
  • Some models are probabilistic.
  • Some models are classifiers, such as decision trees.
  • And so on.

I am going to make a distinction between two different types of learning models: functional models and explanatory (or causal) models. Neural networks and polynomials are functions: they map inputs to outputs without giving reasons. Models like decision-tree classifiers and some probabilistic models can provide an explanation for their decisions.
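To make the contrast concrete, here is a minimal sketch in Python, assuming numpy and scikit-learn are available (the data and feature names are invented for illustration). The polynomial fit is purely functional: it predicts, but its coefficients explain nothing. The decision tree is explanatory: it can print the rules behind each classification.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Functional model: fit a cubic to noisy samples of a sine wave.
# The fitted coefficients predict new values but explain nothing.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, 50)
coeffs = np.polyfit(x, y, deg=3)
print("prediction at x=0.5:", np.polyval(coeffs, 0.5))

# Explanatory model: a tiny classifier whose decisions unwind into
# human-readable rules. An object is labeled disgusting (1) if either
# invented feature holds.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 1, 1, 1]
tree = DecisionTreeClassifier(random_state=0).fit(X, labels)
print(export_text(tree, feature_names=["touched_dirty", "smells_bad"]))
```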

The reason for this distinction lies in the purposes the models are intended to serve. Briefly: if an end-state is positive, that is, the goal is desired, it is important to learn a functional model. On the other hand, if an end-state is negative, one you want to avoid, then you need a causal model.

Let’s see how this distinction works out in human reasoning.

Disgust vs. Cleanliness

It has been noted that people, when presented with some object that elicits a sense of disgust, will often be able to point out some aspect of the object that makes it disgusting. This can be by sight or smell, but it can even be some factor in the preparation of the object, like having the object come in contact with something that is itself associated with disgust.

On the other hand, cleanliness, such as in religious practices, is normally associated with ceremonies that result in that state: ritual baths, for example.

Moral Choices

It has been postulated, most notably by Joshua Greene, that moral choices can be explained by a dual-process model: there are automatic-emotional processes and conscious-controlled processes. Greene notes that “characteristically deontological judgments are preferentially supported by automatic emotional responses, while characteristically consequentialist judgments are preferentially supported by conscious reasoning and allied processes of cognitive control”.

This distinction is worth a closer look. The term deontological refers to moral systems based on rules one should follow. Consequentialist moral systems are based on outcomes. But the same split between desired goals and negative avoidances appears here. A deontological morality is a functional morality that implicitly says: if you follow the rules, you will have a positive outcome (or avoid a negative one). The consequentialist, though, makes predictions about the end-state: the consequences. The reasoning process is therefore more deliberative, often weighing explanations for the possible choices.

Physical Actions

With physical actions, the distinction is more clear-cut. A baseball player learns to catch the ball (the positive end-state) by building a functional model that works best when it operates unconsciously. By contrast, someone doing something dangerous, like rock climbing, relies on many automatic, unconscious skills but also spends time visualizing the moves to make before attempting the climb.

In each of these examples, the reason for the distinction is clear. If the goal is a positive one, it is important to achieve that goal in and of itself. It is not as important to analyze why you are achieving it, as long as you achieve it; in fact, analyzing the “why” is inefficient and counterproductive. On the other hand, if the end-state is a negative one that you want to avoid, then once you have reached it, you have already lost; the model did not help you. Instead, a useful model should be predictive: it should say “if you do this you will get in trouble” or “watch out, danger ahead”. Such a model is best if it is explanatory and causal, since it is trying to predict the future. The future is not here yet, and the present can evolve in various ways, so the model should be general enough to anticipate the alternatives.

Therefore, learning models for positive goals and negative avoidances are qualitatively different. A functional model cannot be inverted to become a causal model.

Another interesting implication: if machine learning is viewed as the optimization of an objective criterion, some sort of fitness function, then the criterion for positive goals over a space of possible actions could be totally different from the criterion for negative avoidances. This is usually true simply because in the input space, that is, the space of actions the algorithm can take, a given action is quite often neither positive nor negative; it just is. The regions of the input space where positive goals and negative avoidances are found are most likely distinct, but the two can coincide: there are conditions where you reach a positive goal with negative consequences. The important point is that positive and negative goals are not mirror images of each other.
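A toy illustration of that point, with made-up numbers: score a handful of actions against two separate criteria. Most actions are neutral under both, the two rankings differ, and the action best for the goal can sit in a high-risk region.

```python
# Two independent criteria over a small action space; values are invented.
goal_value  = {"a": 0.0, "b": 0.9, "c": 0.1, "d": 0.0, "e": 0.0}
hazard_risk = {"a": 0.0, "b": 0.8, "c": 0.0, "d": 0.7, "e": 0.0}

best_for_goal = max(goal_value, key=goal_value.get)   # "b"
safest = min(hazard_risk, key=hazard_risk.get)        # "a" (first of the ties)

print("best for goal:", best_for_goal)
print("safest:", safest)
# "b" maximizes the positive criterion yet is also high-risk: a positive
# goal reached with negative consequences. Neither criterion is the
# mirror image of the other.
```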

So when designing a learning system, it is important to keep the nature of the outcome in mind: is it a goal to be attained or one to be avoided? For example, in a self-driving car it is perfectly acceptable to use a neural network to positively identify objects, but it would be a bad idea to rely on a functional model to avoid those objects; an explanatory model would be better.
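A hedged sketch of that design point, with hypothetical names throughout: a learned detector (not shown) would supply the Detection records, while the avoidance logic stays an explicit, inspectable rule rather than a learned function.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str           # class name produced by the functional (perception) model
    distance_m: float    # estimated range to the object, in meters
    closing_mps: float   # closing speed in m/s; positive means approaching

def time_to_collision(d: Detection) -> float:
    # Infinite time-to-collision if we are not closing on the object.
    return d.distance_m / d.closing_mps if d.closing_mps > 0 else float("inf")

def avoidance_action(detections: list[Detection]) -> str:
    # Explicit causal rule: brake if any object would be reached within
    # 2 seconds. The threshold is illustrative, not calibrated.
    for d in detections:
        if time_to_collision(d) < 2.0:
            return f"brake: {d.label} at {d.distance_m:.0f} m"
    return "proceed"

print(avoidance_action([Detection("pedestrian", 12.0, 8.0)]))  # brake
```

The rule can be read, audited, and argued about, which is exactly what an avoidance model needs to support.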

