
Tags: Neural networks, Machine learning, Artificial intelligence, Reverse engineering ourselves

April 13, 2017 | Nikos Detsikas

Our brain can be viewed as a machine that can learn anything, rather than as a set of innate, evolved, domain-specific abilities acquired over time. If so, our machine learning theories may actually be the right way of approaching Artificial General Intelligence.

Machine learning looks like an emerging computer science field, but it is actually very multidisciplinary in nature. Apart from computer science, it combines biology, neuroscience, psychology, and cognitive science (and perhaps other fields as well).

The reason behind this multidisciplinary character is that machine learning tries to imitate nature, which means that it must first understand nature. Computer science alone cannot do that.

The goal of machine learning is to simulate human intelligence and, ultimately, to reach Artificial General Intelligence.

I recently came across an amazing article titled “The Brain as a Universal Learning Machine”, by Jacob Cannell. The author does a brilliant job presenting the resemblance of our brain to a Universal Learning Machine (ULM). The fact that the article is two years old means nothing more than that I am new to the field.

The subject looked extremely interesting, and I spent some time searching for information on the topic, using the article as a guide.

To better grasp the importance of the ULM hypothesis, we must first look at what other options we have if we are to understand and reverse engineer our brain. The author takes the same approach, and I claim no originality for it; I am simply following in his footsteps.

Please note that for the rest of the post, for simplicity, I will use the terms brain and mind interchangeably, referring to both the organ and the intellectual abilities of a human at the same time.

Evolutionary psychology and the modularity of mind

Evolutionary psychology and the modularity of mind are theoretical frameworks that come close together when trying to explain human behavior and the human mind. Both constitute very extensive scientific fields with many decades of research behind them.

For the purposes of this post, I will borrow only some of their most basic ideas, but the reader is welcome to study them in depth, as there is a multitude of resources out there. Please post any interesting points that elude me in the comments section.

The theory of evolutionary psychology posits that human behavior, and consequently the human mind (brain), developed mechanisms to solve the problems it faced throughout its evolutionary history. As a result, the human mind became more and more equipped with algorithms, processes, or functions that solved these problems. These psychological adaptations can be called intelligence.

To put it more simply, evolution in human nature was the result of human behavior (or the human mind, or the human brain; any of these terms works here) adapting to the recurring problems it faced throughout its history on earth.

Similarly, the modularity of mind theory studies more or less the same notions, but from a more physical angle. It suggests that the human brain is a set of innate neural structures, also called modules, each performing a specific function. The key words here are innate and module.

Innate, in this context, means that we are born with these functions already carved into our brains. We have inherited them as a result of the evolutionary processes that preceded us.

The other term we highlighted, module, refers to a processing unit in our brain. Its usage posits that the brain consists of a set of units performing or implementing specific processes, functions, and algorithms. In addition, the brain can have many layers of modules, each one using modules defined in lower, more primitive layers.

If the brain consists of a set of algorithms that solve specific problems and perform domain-specific tasks, plus algorithms built out of those algorithms (and algorithms of algorithms of algorithms, and so on), then we can visualize human thinking and behavior as a tree-like structure with a single root and several paths of action from the root down to the most primitive function layers. Some of these tasks can take place in parallel, depending on how their respective processing units in the brain have evolved (which, in turn, was decided by evolution and natural selection).
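To make the tree-of-modules picture concrete, here is a minimal sketch of domain-specific modules composed in layers. It is my own illustration, not taken from the modularity literature; names such as `Module`, `edges`, and `vision` are hypothetical:

```python
# Illustrative sketch: the modularity-of-mind view, where the brain is a
# tree of innate, domain-specific modules, each with its own fixed
# algorithm, and higher layers compose the layers below them.

class Module:
    """A domain-specific processing unit with a fixed, innate function."""
    def __init__(self, name, fn, children=()):
        self.name = name
        self.fn = fn                      # the hardwired algorithm of this module
        self.children = list(children)

    def run(self, stimulus):
        # Lower, more primitive layers process the stimulus first...
        partial = [child.run(stimulus) for child in self.children]
        # ...and this module's own fixed algorithm combines their outputs.
        return self.fn(stimulus, partial)

# Primitive (leaf) modules: each implements one specific, evolved function.
edges = Module("edge-detection", lambda s, _: f"edges({s})")
motion = Module("motion-tracking", lambda s, _: f"motion({s})")

# A higher layer reuses the primitives below it.
vision = Module("vision", lambda s, parts: f"scene[{', '.join(parts)}]",
                children=[edges, motion])

print(vision.run("retina-input"))
```

Each unit has its own fixed function, and the only way to get new behavior is to evolve a new module, which is exactly the assumption the ULM view challenges.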

Each of these processing units has its own physical form and structure, just as each algorithm differs from the others, with its own sequence of commands.

Universal learning machines

So far, we have presented the previously dominant approach to how the human brain works and thinks. This served only as an introduction to the theory of the Universal Learning Machine, which takes a fundamentally different approach.

A Universal Learning Machine (ULM) can learn anything, given the proper data and feedback mechanism, also called the reward function. Its form may differ depending on the task it learns and exercises, but its building blocks and architectural principles are the same across all of its variations.

This idea is extremely compatible with machine learning theories, which describe models (neural networks, support vector machines, and other learning algorithms) that can learn any function given the proper training data. The notion of feedback or reward is inherent in the definition of the training data, since that data has already been tagged or labeled, though reinforcement learning may eliminate that dependency.
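As a minimal sketch of that "learn any function from data plus feedback" claim, the toy script below trains a one-hidden-layer neural network, in plain NumPy, on the XOR function, a function no single linear unit can represent; the labels play the role of the feedback signal. The network shape, learning rate, and iteration count are my own choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR labels = the "feedback"

# Generic building blocks: two dense layers with random initial weights.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass for cross-entropy loss (output gradient is simply p - y).
    dp = p - y
    dh = (dp @ W2.T) * (1 - h ** 2)
    # Gradient-descent update, learning rate 0.5.
    W2 -= 0.5 * h.T @ dp;  b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * X.T @ dh;  b1 -= 0.5 * dh.sum(axis=0)

print((p > 0.5).astype(int).ravel())   # the network's predictions for the four inputs
```

Nothing in the architecture knows about XOR; the structure of the task is absorbed entirely from the examples and the error signal, which is the essence of the ULM picture.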

According to the universal learning machine approach, little or no domain-specific functionality is hardwired into our brains. Our brains consist of general-purpose learning modules whose characteristics may differ (in terms of size, depth, input, and output), but their building blocks are the same, just as the building blocks of all neural networks are the same. The characteristics in which they differ can be thought of as their hyperparameters, much like the hyperparameters of machine learning algorithms.
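A rough sketch of the "same building blocks, different hyperparameters" idea (hypothetical names, my own illustration): every module below is a stack of the same dense-layer primitive, and only the layer sizes, i.e. the hyperparameters, distinguish a large "visual" module from a small "reward" one:

```python
import numpy as np

def make_module(layer_sizes, rng):
    """Stack identical dense-layer building blocks; the sizes are the
    hyperparameters that make one module differ from another."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(module, x):
    # Every module is run by the exact same generic code path.
    for W, b in module:
        x = np.tanh(x @ W + b)
    return x

rng = np.random.default_rng(0)
visual = make_module([1024, 256, 64, 10], rng)  # deep, wide "module"
reward = make_module([16, 8, 1], rng)           # small, shallow "module"

print(forward(visual, np.zeros(1024)).shape)    # (10,)
print(forward(reward, np.zeros(16)).shape)      # (1,)
```

The two modules share every architectural principle; only their configuration differs, which is the analogy the paragraph above draws to brain regions.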

We are not born with evolved, innate functional modules; rather, we come equipped with learning modules.

This theory seems compatible with many aspects of human behavior. Consider newborn babies. They come into life with very limited sensory, motor, and cognitive capabilities. They can hardly move, they do not sense the environment very well, and they have (or at least appear to have) very primitive mental capabilities. However, they seem to have one capability instantly available to them to the fullest: the ability to learn.

They are not instructed or taught. Whatever they learn, they learn on their own, meaning that they come equipped with all the necessary learning machinery and reward circuitry.

If I may make a wild simplification here, it seems that the knowledge itself is not stored or packaged in the DNA. What seems to be stored are the learning mechanisms; they get deployed and start learning.

The difference between what the two theories suggest resembles the difference between finding the edges or corners in an image with purpose-built edge-detection or corner-detection algorithms (for example, the Canny, Harris, or Shi-Tomasi detectors) and finding them by training a generic neural network on the edge- or corner-detection problem, with no domain-specific functional blocks.
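The contrast can be sketched in code. Below, a fixed, hand-designed Sobel kernel stands in for the purpose-built detectors, while the "generic learner" is a single 3x3 convolution fitted by least squares to input/output examples, rediscovering the same kernel purely from data. This is my own toy illustration, not the Canny, Harris, or Shi-Tomasi algorithms themselves:

```python
import numpy as np

def conv2d(img, k):
    """'Valid' 2-D convolution (strictly, cross-correlation) with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

# Domain-specific approach: the kernel is hardwired by the designer.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

# Generic approach: recover the same behavior purely from examples.
rng = np.random.default_rng(0)
imgs = [rng.normal(size=(8, 8)) for _ in range(20)]
patches = np.array([img[i:i + 3, j:j + 3].ravel()
                    for img in imgs for i in range(6) for j in range(6)])
targets = np.array([conv2d(img, sobel_x).ravel() for img in imgs]).ravel()

learned = np.linalg.lstsq(patches, targets, rcond=None)[0].reshape(3, 3)
print(np.round(learned, 3))   # approximately the Sobel kernel, learned from data
```

The learner was never told what an "edge" is; given enough input/output pairs, the generic fitting procedure converges on the same solution the domain expert hand-coded.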

Speaking of supporting evidence for the theory of the Universal Learning Machine, one can do nothing but stand in awe in front of this: “Induction of visual orientation modules in auditory cortex”. The authors of this pioneering study redirected ferret retinal projections from the visual cortex, onto which they were originally wired, to the auditory cortex. The results showed that the auditory cortex can adapt and learn to handle visual signals. In other words, the ferrets could learn to see with their auditory cortex. Perhaps not perfectly, but as Jacob Cannell says in his original article, these imperfections may stem from other factors, such as surgical imperfections.

Why is this significant to the Universal Learning Machine hypothesis? Because it gives evidence that the various brain parts learn in a similar manner, and all that changes is the data and the feedback mechanism. This strengthens the idea that the human brain consists of identical general-purpose learning/processing modules.

Conclusion

The conclusion I choose to draw from getting acquainted with the Universal Learning Machine hypothesis is that we are probably on the right track in trying to simulate human intelligence through the machine learning theories we have been inventing and developing. The path we have been walking seems to be the right one. If we reach Artificial General Intelligence one day, we will probably do it by advancing along that path.

Perhaps what is missing is not some new, magical, radically different theory that will explain everything and let us build the machines we are dreaming of.

Maybe it is a matter of scale and computational power. Maybe we just need to keep building larger and larger versions of the machine learning algorithms we already know and execute them on ever more powerful sets of processing units.

If this is true, then we will achieve the necessary scale one day … and we will surpass it too … and that day we will no longer be the most intelligent species in the known universe!