Authors

Damir Dobric

Abstract

Many algorithms today provide good machine learning solutions in specific problem domains, such as pattern recognition, clustering, classification, sequence learning, and image recognition. They are all suitable for solving one particular problem but are limited in flexibility. For example, an algorithm that plays Go cannot do image classification, anomaly detection, or sequence learning. Inspired by the functioning of the neocortex, this work investigates whether it is possible to design and implement a universal algorithm that can solve more complex tasks in the way the neocortex does. Motivated by the remarkable degree to which the same and similar circuitry is replicated throughout the neocortex, this work focuses on the idea of a general cortical algorithm and suggests the existence of canonical cortical units that, combined in the right way inside a neural network, can solve more complex tasks. Unlike traditional neural networks, the algorithms used and created in this work rely only on findings from the neurosciences. Initially inspired by the concept of Hierarchical Temporal Memory (HTM), this work demonstrates how sparse encoding, spatial learning, and sequence learning can be used to model an artificial cortical area with a cortical algorithm called the Neural Association Algorithm (NAA). The proposed algorithm generalises HTM: it forms canonical units that consist of biologically inspired neurons, synapses, and dendrite segments, and this work explains how interconnected canonical units can build semantic meaning. Results demonstrate how such units can store a large amount of information, learn sequences, build contextual associations that create meaning, and provide robustness to noise in inputs with high spatial similarity. Inspired by findings in the neurosciences, this work also improves some aspects of the existing HTM and introduces a newborn stage of the algorithm. The extended algorithm takes control of the homeostatic plasticity mechanism and ensures that learned patterns remain stable. Finally, this work delivers an algorithm for computation over distributed mini-columns that can be executed in parallel using the Actor Programming Model.
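For readers unfamiliar with sparse encodings, the following minimal Python sketch (not from the thesis; all names, sizes, and noise levels are illustrative assumptions) shows why sparse binary representations of the kind HTM uses remain recognizable under noise: even after a sizable fraction of active bits is perturbed, the overlap with the original pattern stays far above what two random patterns would share.

    import random

    def sdr_overlap(a, b):
        """Number of active bits shared by two sparse patterns (sets of indices)."""
        return len(a & b)

    def add_noise(sdr, num_bits, total_size):
        """Move `num_bits` active bits to random positions."""
        active = list(sdr)
        random.shuffle(active)
        kept = set(active[num_bits:])
        # Replace the dropped bits with random positions until sparsity is restored.
        while len(kept) < len(sdr):
            kept.add(random.randrange(total_size))
        return kept

    SIZE, ACTIVE = 2048, 40  # HTM-style dimensions: roughly 2% sparsity
    pattern = set(random.sample(range(SIZE), ACTIVE))
    noisy = add_noise(pattern, num_bits=10, total_size=SIZE)  # ~25% noise

    print(sdr_overlap(pattern, noisy))        # typically ~30 of 40 bits survive
    random_sdr = set(random.sample(range(SIZE), ACTIVE))
    print(sdr_overlap(pattern, random_sdr))   # typically 0-2 bits by chance

Because chance overlap between two random sparse patterns is near zero, a threshold on overlap separates a noisy version of a learned pattern from an unrelated one, which is the intuition behind the noise robustness reported in the abstract.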

Keywords

AI, computational intelligence, HTM, Hierarchical Temporal Memory, brain, cortical learning algorithm, neocortex, synapse, meaning, scaling, dendrite, intelligence

Document Type

Thesis

Publication Date

2024

Creative Commons License

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
