Concept superposition and learning in standard and brain-constrained deep neural networks

Garagnani, M. 2025. 'Concept superposition and learning in standard and brain-constrained deep neural networks'. In: 34th Annual Computational Neuroscience Meeting (CNS 2025), Workshop on "Brains and AI". Florence, Italy, 9 July 2025. [Conference or Workshop Item] (Forthcoming)

The ability to combine (or 'superpose') multiple internal conceptual representations is a fundamental cognitive skill we rely on constantly, and one that is crucial in complex tasks such as mental arithmetic, abstract reasoning, and language comprehension. As such, any artificial system aspiring to implement these aspects of general intelligence should be able to support this operation.

In this talk, I will first propose a tentative operational definition that makes it possible to determine whether any cognitive agent, artificial or biological, can formally be considered capable of carrying out concept combination. I will then present results of recent computational simulations showing that deep, brain-constrained networks trained with biologically grounded (Hebb-like) continual learning mechanisms spontaneously develop internal circuits (cell assemblies) that naturally support superposition. Finally, I will try to identify some of the functional and architectural characteristics of such networks that facilitate the emergence of this feature and which standard deep neural networks generally lack, concluding by suggesting possible directions for the development of future, better cognitive AI systems.
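
To make the Hebb-like mechanism concrete, the following minimal Python sketch illustrates the basic idea; it is not the brain-constrained architecture discussed in the talk, only a toy single-layer recurrent network with outer-product Hebbian learning. It shows how two learned cell assemblies can be jointly reactivated ('superposed') by a cue containing fragments of both. All names and parameter values (N, eta, theta, the sparsity and cue fraction) are illustrative assumptions.

    # Toy sketch (illustrative assumptions only, not the model from the talk):
    # a recurrent layer trained with a Hebb-like outer-product rule, showing
    # joint reactivation ("superposition") of two learned cell assemblies.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 200       # number of units (arbitrary)
    eta = 0.1     # Hebbian learning rate (arbitrary)
    theta = 0.3   # firing threshold (arbitrary)

    # Two sparse binary patterns standing in for two "concepts"
    p1 = (rng.random(N) < 0.1).astype(float)
    p2 = (rng.random(N) < 0.1).astype(float)

    # Hebb-like learning: strengthen weights between co-active units
    # (outer product of each pattern with itself); no self-connections.
    W = eta * (np.outer(p1, p1) + np.outer(p2, p2))
    np.fill_diagonal(W, 0.0)

    def recall(cue, steps=10):
        """Iterate thresholded recurrent dynamics from a partial cue."""
        y = cue.copy()
        for _ in range(steps):
            y = (W @ y > theta).astype(float)
        return y

    # Cue with random fragments (~half the units) of BOTH patterns:
    # the dynamics settle into the union of the two cell assemblies.
    cue = np.maximum(p1, p2) * (rng.random(N) < 0.5)
    out = recall(cue)
    print("overlap with p1:", (out @ p1) / p1.sum())
    print("overlap with p2:", (out @ p2) / p2.sum())

In this toy setting, both printed overlaps approach 1.0: the attractor reached from the mixed cue is the superposition (union) of the two cell assemblies, rather than a winner-take-all choice between them. The simulations discussed in the talk concern far richer, multi-area, brain-constrained deep architectures, where analogous assembly circuits emerge under continual learning.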

Full text not available from this repository.
