Concept superposition and learning in standard and brain-constrained deep neural networks
The ability to combine (or "superpose") multiple internal conceptual representations is a fundamental cognitive skill we rely on constantly, and one that is crucial to complex tasks such as mental arithmetic, abstract reasoning, and language comprehension. Any artificial system aspiring to implement these aspects of general intelligence should therefore be able to support this operation.
In this talk, I will first propose a tentative operational definition for determining whether a cognitive agent, artificial or biological, can formally be considered capable of carrying out concept combination. I will then present results of recent computational simulations showing that deep, brain-constrained networks trained with biologically grounded (Hebb-like) continual-learning mechanisms spontaneously develop internal circuits (cell assemblies) that naturally support superposition. Finally, I will identify some of the functional and architectural characteristics of such networks that facilitate the emergence of this feature, and which modern/classical deep neural networks generally lack, concluding by suggesting possible directions for the development of future, better cognitive AI systems.
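To make the cell-assembly idea concrete, the following is a minimal, hypothetical sketch in Python/NumPy, not the model from the talk: a small recurrent layer trained with a simple Hebb-like rule, where repeated presentation of two input patterns strengthens within-pattern connections so that two assemblies form, and a combined cue then reactivates both assemblies at once (a crude form of superposition). All names and parameter values here are illustrative assumptions.

```python
import numpy as np

n = 20                       # number of units (illustrative)
W = np.zeros((n, n))         # recurrent weights, learned
eta = 0.01                   # Hebbian learning rate (assumed value)

# Two non-overlapping binary input patterns ("concepts" A and B)
pattern_a = np.zeros(n); pattern_a[:8] = 1.0
pattern_b = np.zeros(n); pattern_b[12:] = 1.0

def settle(x_input, W, steps=5):
    """Iterate the network to a thresholded activity state."""
    x = x_input.copy()
    for _ in range(steps):
        x = (x_input + W @ x > 0.5).astype(float)
    return x

# Hebb-like continual learning: co-active units wire together
for _ in range(200):
    for p in (pattern_a, pattern_b):
        x = settle(p, W)
        W += eta * np.outer(x, x)       # Hebbian co-activity update
        np.fill_diagonal(W, 0.0)        # no self-connections
        W = np.clip(W, 0.0, 0.2)        # simple saturation bounds weights

# After learning, a combined cue reactivates both assemblies together
combined = np.clip(pattern_a + pattern_b, 0, 1)
print("A alone:", settle(pattern_a, W))
print("B alone:", settle(pattern_b, W))
print("A + B  :", settle(combined, W))
```

This toy model omits everything that makes the networks in the talk brain-constrained (layered cortical-area structure, sparse connectivity, inhibition, spontaneous activity), but it illustrates the basic intuition that Hebbian co-activation can carve out reusable assemblies whose joint activation represents a combination of concepts.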
| Item Type | Conference or Workshop Item (Talk) |
|---|---|
| Departments, Centres and Research Units | Computing |
| Date Deposited | 06 Jun 2025 09:00 |
| Last Modified | 06 Jun 2025 09:00 |