The Capabilities of Neural Systems Depend on a Hierarchically Structured World

Date

August 2021

Authors

Blazek, Paul Joseph

Abstract

The study of the human mind spans thousands of years, from the earliest philosophers to modern neuroscience, yet a substantial gap remains in our understanding of how cognitive functions arise from the biology of the brain. This gap has greatly hampered attempts to understand how the brain works, what its limitations are, and how to replicate it in artificial intelligence systems. Here I propose a general framework for understanding how cognitive processes can be encoded by networks of interconnected neurons. I take a theoretical and computational approach, using artificial neural networks as a high-level quantitative model of basic neuroscientific principles. In this framework, neural networks reason by means of a series of specialized distinctions, each made by an individual neuron and integrated hierarchically with the others. The framework makes it possible to study how the capabilities of neural systems depend on structural and functional constraints. Biological constraints on neural coding and on network size and topology limit the complexity of the stimuli that a network can comprehend. Surprisingly, though, neural networks can comprehend far more complex stimuli than previously described, provided that the inherent distinctions between those stimuli are hierarchically structured. Functional constraints require that networks perform cognitive processes and reason in a way that can be communicated to other people. I propose a novel neurocognitive model that, when implemented in deep neural networks, simulates a wide variety of cognitive processes and is consistent with experimental evidence from neuroscience and with theories from philosophy and psychology. Directly implementing symbolic reasoning within the structure and function of the network makes it possible to overcome many of the fundamental problems facing modern artificial intelligence systems, including their lack of explainability, robustness, and generalizability. This work culminates in a novel algorithm that translates neural networks into human-understandable code, providing a complete picture of how neural networks can reason. Together, these results suggest that neural systems require the world to be hierarchically structured in order to comprehend it, a direct reflection of their own hierarchical organization.
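
To make the framework's central claim concrete, the following minimal sketch (written for this record, not drawn from the dissertation itself) shows a tiny network of threshold neurons in which each hidden neuron makes one specialized distinction and an output neuron integrates those distinctions hierarchically to realize a symbolic rule. The choice of XOR and all weights are illustrative assumptions.

import numpy as np

def neuron(x, w, b):
    # A threshold neuron: fires (1) iff its weighted input exceeds the
    # threshold encoded by the bias b. Each neuron makes one distinction.
    return int(np.dot(w, x) + b > 0)

def xor_network(x):
    # XOR realized as hierarchically integrated distinctions (illustrative):
    # h1 distinguishes "at least one input is on" (OR),
    # h2 distinguishes "both inputs are on" (AND),
    # and the output integrates them: OR and not AND, i.e. XOR.
    h1 = neuron(x, [1, 1], -0.5)
    h2 = neuron(x, [1, 1], -1.5)
    return neuron([h1, h2], [1, -1], -0.5)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", xor_network(x))   # prints 0, 1, 1, 0

Because each neuron's role can be read off independently, the hierarchical composition remains interpretable even as distinctions are stacked.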
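
The abstract's final contribution, an algorithm that translates neural networks into human-understandable code, can likewise be sketched in miniature. The dissertation's actual algorithm is not reproduced here; the hypothetical distill_neuron helper below only illustrates the general idea for binary threshold neurons over Boolean inputs, enumerating the input patterns that make each neuron fire and printing them as a readable Boolean rule.

import itertools
import numpy as np

def distill_neuron(w, b, names):
    # Illustrative translation of one threshold neuron into a readable rule:
    # enumerate every Boolean input pattern and keep those that fire it.
    firing = [p for p in itertools.product([0, 1], repeat=len(w))
              if np.dot(w, p) + b > 0]
    clauses = [" and ".join(n if v else "not " + n for n, v in zip(names, p))
               for p in firing]
    return "(" + ") or (".join(clauses) + ")" if clauses else "False"

# Distill the illustrative XOR network sketched above:
print("h1 =", distill_neuron([1, 1], -0.5, ["x1", "x2"]))
print("h2 =", distill_neuron([1, 1], -1.5, ["x1", "x2"]))
print("y  =", distill_neuron([1, -1], -0.5, ["h1", "h2"]))
# y = (h1 and not h2): the output fires when exactly one input is on (XOR).

Enumeration is exponential in the number of inputs, so a sketch like this only scales to small fan-in; it serves here to show how a neuron's distinction can be rendered as code a person can read.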
