Practopoiesis is a theory of how life organizes into a mind. It proposes the principles by which adaptive systems organize and is thus a general theory of what it takes to be biologically intelligent. Being general, the theory applies to the brain as much as to artificial intelligence (AI) technologies. What makes the theory so general is that it is grounded in the principles of cybernetics rather than in the physiological implementation of particular mechanisms (inhibition/excitation, plasticity, etc.).
In practopoiesis there is no longer a cycle of action → representation → action → … . Instead, practopoietic theory works with actions only, which interact and form a hierarchy: one action is in the service of another. This hierarchy starts with the actions of gene-expression mechanisms and ends with our overt behavior. Perception and cognition are then understood as emergent properties of those cybernetics-like actions.
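To make the hierarchy of actions concrete, here is a toy sketch of my own (all names, numbers, and the two-level simplification are mine, not the theory's): each level is itself an action whose only job is to shape the action one level above it, ending in overt behavior.

```python
# Toy illustration of actions serving actions (my own analogy):
# gene expression (slowest) configures plasticity, plasticity
# configures the network, and the network drives overt behavior.

def gene_expression(environment):
    """Slowest action: sets the plasticity rule for a given environment."""
    rate = 0.5 if environment == "harsh" else 0.1
    def plasticity(weight, error):
        # Middle action: adjusts a network weight in service of behavior.
        return weight - rate * error
    return plasticity

def behave(weight, stimulus):
    """Fastest action: overt behavior produced by the network."""
    return weight * stimulus

plasticity = gene_expression("harsh")   # one action configures another
w = plasticity(1.0, error=0.4)          # learning, in service of behavior
response = behave(w, stimulus=2.0)      # overt behavior at the top
```

The point of the sketch is only the direction of service: no level "represents" anything; each level merely acts so that the level above it can act better.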
Nikola Danaylov, aka Socrates of Singularityweblog, interviewed me on practopoiesis.
There are several advantages to grounding theories in such an abstract way. One advantage is the applicability of the theory to systems relying on different physical implementations, e.g., the brain vs. other organs, artificial intelligence algorithms, control systems, etc. Another advantage is the ability to detect what is missing from our current theories. In fact, an important contribution of practopoietic theory is the insight that the classical approach to the organization of the brain, based on neural networks and the learning mechanisms for the connectivity of those networks, is not sufficient. One additional adaptive mechanism is needed; only then can we explain or mimic biological-like intelligence. The formal theory is presented in this paper, downloadable from arXiv.org.
For those who would like to strengthen their ability to think in cybernetic terms, I can recommend the following review paper on cybernetic theory.
The theory may be appealing to anyone who is:
- interested in the principles of organization of biological systems, from genes to higher functions.
- interested in the mind-body problem and the problems of consciousness. Practopoietic theory proposes a mechanism for awareness of the surrounding world.
- interested in bringing AI technologies to a higher level i.e., to strong AI. The theory offers directions.
- seeking a general theoretical basis for explaining psychological phenomena. Practopoiesis may help give birth to a general theory of psychology.
- not quite satisfied with the existing theories of brain function and in search of alternative approaches and new concepts.
- interested in the big picture: biology, neuroscience, behavior, philosophy of mind.
Practopoietic theory consists of two parts. The first part is the foundation. This is where the basic principles of adaptive systems are formulated. These principles can be applied to various biological processes, not only to the brain. The first part can also be applied to non-biological systems, such as AI. The second part applies those principles to the human mind and to the mind/body problem. It explains the ways in which the mind is special and different from any other adaptive system.
Here, I introduce a few implications that follow from the conclusions drawn in the second part. But I heavily simplify! Do not take the following as a synopsis of the theory. The actual theory is specified in the downloadable manuscript.
Classically, a neuronal network is considered to be the mechanism responsible for the generation of our behavior and mental processes. It contains all the knowledge in its architecture (i.e., the weights); it receives inputs; it produces the outputs. Traditionally, it is thought to be only a matter of the proper architecture and a sufficient number of neurons before the system exhibits adaptability, intelligence, and consciousness similar to those of living organisms. The needed network architecture is thought to come from a combination of genetically predetermined programs and plasticity mechanisms that operate throughout the lifetime and provide the network with knowledge about the world.
Practopoietic theory proposes several general principles of the adaptive organization of systems. From these principles it follows that the classical system cannot possibly produce adaptive behavior or intelligence similar to that of humans or animals. One adaptive component is missing! Thus, even if the classical system were endowed with a million times more neurons and synapses than the human brain, it would still not have enough flexibility to match biological intelligence.
Practopoietic theory also identifies the properties of this “missing” component and explains what the organism gains from such an additional mechanism. The key advantages lie in the ability to adapt to everyday situations. For example, when you enter a new room, there is a whole new set of possibilities for how you may have to behave in this space and what may be expected to happen. There is a difference between entering a bar, a business meeting, a store, or a restroom. One needs to adapt to a new situation accordingly and quickly. You need to look around and collect information until you build a proper understanding of the situation. Where are the doors? Where are the chairs? Where is the counter? Who else is in the room? Do I like them? Do they like me? Should I care? … From the principles of practopoietic theory it follows that this adaptive act cannot be achieved through neural plasticity and the inhibition/excitation mechanisms of the network alone. Rather, an additional, third mechanism is needed. That missing mechanism has a somewhat different function from either plasticity or excitation+inhibition. This mechanism is named anapoiesis and refers to the “re-creation” of knowledge that once existed but has since been (partly) lost. The functioning of anapoiesis is described in detail in the mentioned paper downloadable from arXiv.org.
The paper discusses anapoiesis as a general principle of adaptation. It does not discuss the actual physiological implementation of anapoiesis, as this is yet to be investigated empirically. This is one of the future research topics that I would like to undertake. However, the paper does discuss the relationship between anapoiesis and a number of cognitive phenomena, such as perception, working memory, conceptual knowledge, problem solving, and the distinction between automatic and controlled processes. The paper ends with Searle’s Chinese Room argument and the problem of providing artificial intelligence algorithms with an ability to understand the surrounding world.
Re-construction of knowledge
In the classical system, the knowledge about the world is stored in the synapses, not in the learning rules. In this system, the rules of the plasticity mechanisms (those that drive changes in synapses) do not contain any particular knowledge learned throughout a lifetime. For example, the fact that “2 plus 2 is 4” or that “Leonardo da Vinci painted the Mona Lisa” is traditionally not presumed to be stored in the learning rules.
Anapoiesis, in contrast, presumes something similar to storing some of the most important knowledge one level below, in the “learning rules”. Thus, in systems that employ anapoiesis, knowledge is stored across multiple levels and it can move from one level to another: from that stored in the “learning rules” to that stored in the network properties (see Figure). That way, the system can, in a way, learn its “learning rules”, which can then be responsible for storing much of the knowledge that the organism acquires through its lifetime.
The great advantage of such low-level knowledge is that it can be stored in a much more abstract, general form than what can be stored directly in the network. For example, one can store the general properties of chairs, not of any particular chair. Then, when a particular chair is encountered the rich set of smart “plasticity mechanisms” is used to quickly re-organize the network and adjust it on the spot. As a result, the network gets swiftly prepared for interacting with that chair. The system is adjusted to that particular situation. The knowledge is reconstructed as it once was when the system interacted with similar objects in the past. This process of reconstructive adjustments enables the system to understand the situation in which it finds itself. This also helps make it aware of the surrounding world. This reconstruction process provides all of the following: understanding, awareness, perception and categorization of the sensory inputs.
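The two-level storage can be illustrated with a minimal sketch of my own (the names, the "chair" features, and the weight scheme are all hypothetical illustrations, not code or terminology from the paper): slow, general knowledge lives one level below, in rule-like form, and is used to reconstruct fast, situation-specific parameters on the spot.

```python
# Hypothetical two-level knowledge sketch (my analogy, not the paper's):
# Level below: general "learning rules" describing whole categories.
# Level above: specific network weights, reconstructed per situation.

def make_rule(typical_features):
    """General knowledge: what members of a category are like in general."""
    def reconstruct(stimulus_features):
        # Anapoiesis-like step: rebuild specific weights for this
        # particular stimulus from the general category knowledge.
        return {f: (1.0 if f in typical_features else 0.0)
                for f in stimulus_features}
    return reconstruct

# Slow, general knowledge stored "one level below" the network.
general_rules = {"chair": make_rule({"seat", "legs", "back"})}

# Fast adjustment: encountering one particular (red, wooden) chair.
stimulus = {"seat", "legs", "back", "red", "wooden"}
specific_weights = general_rules["chair"](stimulus)
```

Note that nothing about this particular red wooden chair was ever stored; the situation-ready weights exist only after reconstruction, which is the sense in which the network is adjusted "on the spot."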
The necessity for additional structures
The reconstruction process cannot rely on the classical plasticity mechanisms acting directly on the network, primarily because the classical plasticity mechanisms here have a different function: their role is to enable learning how to perform anapoiesis. Thus, here, plasticity operates one level below the network anatomy.
In classical systems it is quite easy to see that plasticity is indispensable: it cannot be replaced by network anatomy. That is, there is no way a network architecture can be wired such that the network would not need plasticity when learning a new type of behavior. For similar reasons, the additional mechanisms of anapoiesis are indispensable. There is no way for a classical network to perform the adaptive tasks of perception, recognition, categorization, decision making, etc., in a human-like way by relying on architecture and plasticity mechanisms only. Rather, one more set of mechanisms is needed, one that affects the network architecture in a special way: it is able to learn adaptively when and how the network should be affected.
The reasons for these organizational needs, and the cybernetic theory that underlies the principles of system organization, are described in detail in the freely downloadable manuscript.
Combinatorics of knowledge
The main reason that anapoiesis is needed is the exploding combinatorics of all the possible situations in which an organism may find itself and may have to generate appropriate behavior in order to survive. Today’s artificially intelligent systems can do their jobs without anapoiesis because their environments are not nearly as complex and brutal as those of real biological systems. Lab robots work on toy problems, and computer algorithms have human engineers to take care of them. None of them could survive alone, out in the wild. For a snake in the Amazon or a lion in the savanna, the situation is much different. And so it is for a human being. The number of situations that any of them can possibly encounter is too large for storing knowledge at just one level (i.e., in the synapses). Also, the variety of novel situations that may occur is too large to be handled simply on the basis of experience with past situations. These organisms need to be able to store their knowledge in a very general form, applicable to many situations. Then, in each given situation, they need to infer, in real time, how this general knowledge applies. Anapoiesis is the process of that inference.
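A back-of-the-envelope calculation makes the combinatorial point vivid (the numbers and the "independent binary aspects" framing are my own illustration, not figures from the paper): if a situation is described by n independent yes/no aspects, the number of distinct situations grows as 2^n, while compositional, general knowledge needs only on the order of n rules that are combined at run time.

```python
# Illustration of the combinatorial argument (my own numbers, not the
# paper's): storage needed for one-entry-per-situation knowledge vs.
# general, compositional knowledge combined on the fly.

def n_situations(n_aspects):
    """Distinct situations if each of n binary aspects can vary freely."""
    return 2 ** n_aspects

def n_general_rules(n_aspects):
    """Rough size of compositional knowledge: one rule per aspect."""
    return n_aspects

for n in (10, 40, 80):
    print(f"{n:>2} aspects: {n_situations(n):>25} situations "
          f"vs. {n_general_rules(n)} general rules")
```

Already at 80 aspects the situation count exceeds the estimated number of atoms in a human body, while the general rules remain trivially few; this is the gap that real-time inference (anapoiesis, in the theory's terms) is proposed to bridge.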
The question of whether anapoiesis is needed is the question of whether, in principle, a perceptron of sufficient size (number of units, layers, connections) can mimic human intelligence. Practopoiesis explains why the answer is no. To store all the knowledge necessary for a network to exhibit human-like intelligent behavior, the system would probably require a network of a size that exceeds the size of the universe. Moreover, there would be no proper training set from which to learn those connections. And even if there were enough samples in the training set, the age of the universe might not be long enough to complete the learning.
Anapoiesis relies on knowledge stored in a general form, such that it requires relatively few resources. In each novel situation, this general knowledge is then used to reconstruct the specific knowledge applicable to that situation. The price paid for that flexibility with such a lean structure is awareness: the system has to continually update its current knowledge, which makes it effectively conscious.
First part: The principles of adaptive systems
-Requisite variety (Wikipedia)
-Good regulator theorem (Wikipedia)
-Levels of adaptive organization
-Specificity vs. generality of cybernetic knowledge
-Downward pressure for adjustment
-Practopoietic loop of causation (animated)
Second part: Application to the brain/mind (and AI)
-Distinction between T2- and T3-systems
-Logical abduction (Wikipedia)
-Extended Wisconsin card-sorting test
-Chinese room argument (Wikipedia)
Nikolić, D. (2014)
Practopoiesis: How cybernetics of biology can help AI