The Capabilities of Neural Systems Depend on a Hierarchically Structured World

dc.contributor.advisor: Reynolds, Kimberly A.
dc.contributor.committeeMember: Pfeiffer, Brad E.
dc.contributor.committeeMember: Toprak, Erdal
dc.contributor.committeeMember: Zinn, Andrew R.
dc.contributor.committeeMember: Lin, Milo
dc.creator: Blazek, Paul Joseph
dc.creator.orcid: 0000-0001-5962-6444
dc.date.accessioned: 2023-09-14T22:28:52Z
dc.date.available: 2023-09-14T22:28:52Z
dc.date.created: 2021-08
dc.date.issued: August 2021
dc.date.submitted: August 2021
dc.date.updated: 2023-09-14T22:28:52Z
dc.description.abstract: The study of the human mind spans thousands of years, from the earliest philosophers to modern neuroscience. However, a vast gap still remains in our understanding of how cognitive functions arise from the biology of the brain. This has greatly hampered attempts to understand how the brain works, what its limitations are, and how to replicate it in artificial intelligence systems. Here I propose a general framework for understanding how cognitive processes can be encoded by networks of interconnected neurons. I have taken a theoretical and computational approach, using artificial neural networks as a high-level quantitative model of basic neuroscientific principles. Neural networks are capable of reasoning by means of a series of specialized distinctions made by individual neurons and integrated hierarchically. This framework enables the study of how the capabilities of neural systems depend on structural and functional constraints. Biological constraints on neural coding and on network size and topology limit the complexity of stimuli that the network can comprehend. Surprisingly, though, neural networks are capable of comprehending much more complex stimuli than has previously been described, provided that the inherent distinctions between these stimuli are hierarchically structured. Functional constraints require that neural networks be able to perform cognitive processes and to reason in a way that can be communicated to other people. I have proposed a novel neurocognitive model which, when implemented in deep neural networks, is able to simulate a wide variety of cognitive processes; it is consistent with experimental evidence from neuroscience and with theories from philosophy and psychology.
By directly implementing symbolic reasoning within the structure and function of the network, it becomes possible to overcome many of the fundamental problems facing modern artificial intelligence systems, including their lack of explainability, robustness, and generalizability. This work culminated in a novel algorithm that translates neural networks into human-understandable code, providing a complete picture of how neural networks can reason. Together, these results suggest that neural systems require the world to be hierarchically structured in order to comprehend it, a direct reflection of their own hierarchical organization.
dc.format.mimetype: application/pdf
dc.identifier.uri: https://hdl.handle.net/2152.5/10196
dc.language.iso: en
dc.subject: Artificial Intelligence
dc.subject: Brain
dc.subject: Deep Learning
dc.subject: Models, Neurological
dc.subject: Neural Networks, Computer
dc.title: The Capabilities of Neural Systems Depend on a Hierarchically Structured World
dc.type: Thesis
dc.type.material: text
thesis.degree.department: Graduate School of Biomedical Sciences
thesis.degree.discipline: Molecular Biophysics
thesis.degree.grantor: UT Southwestern Medical Center
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy

Files

Original bundle

Name: BLAZEK-PRIMARY-2022-1.pdf
Size: 24.83 MB
Format: Adobe Portable Document Format

License bundle

Name: LICENSE.txt
Size: 1.84 KB
Format: Plain Text