Hearn R, Granger R (2008). Learning hierarchical representations and behaviors. AAAI Fall Symposium: Naturally-Inspired Artificial Intelligence.
Learning to perform via reinforcement typically requires extensive search through an intractably large space of possible behaviors. In the brain, reinforcement learning is hypothesized to be carried out in large measure by the basal ganglia / striatal complex (Schultz 2000; Granger 2005), a phylogenetically old set of structures that dominate the brains of reptiles. The striatal complex in humans is integrated into a tight loop with cortex and thalamus (Granger and Hearn 2007); the resulting cortico-striatal loops account for the vast majority of the contents of the human forebrain. Studies of these systems have led to hypotheses that the cortex learns to construct large hierarchical representations of perceptions and actions, and that these are used to substantially constrain and direct search (Granger 2006) that would otherwise be blindly pursued by the striatal complex (as, perhaps, in reptiles). This notion has led to the construction of a modular system in which loops of thalamocortical models and striatal models interact such that hierarchical representation learning in the former exerts strong constraints on the trial-and-error reinforcement learning of the latter, while reciprocally the latter can be thought of as testing hypotheses generated by the former. We report on explorations of these models in the context of learning complex behaviors by example, in simulated environments and in real robots.
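The core idea of the abstract — a learned hierarchy narrowing the candidate set that a trial-and-error learner must search — can be illustrated with a minimal sketch. This is not the authors' model: the hierarchy, environment, and epsilon-greedy learner below are hypothetical stand-ins used only to show how constraining the action set reduces blind search.

```python
import random

# Hypothetical behavior hierarchy: abstract behaviors ("cortical" level)
# grouping primitive actions ("striatal" search candidates).
HIERARCHY = {
    "locomote": ["step_forward", "step_back", "turn_left", "turn_right"],
    "manipulate": ["grasp", "release", "push"],
}
ALL_PRIMITIVES = [p for prims in HIERARCHY.values() for p in prims]


def reward(action):
    """Toy environment: only one primitive is ever rewarded."""
    return 1.0 if action == "grasp" else 0.0


def trial_and_error(candidates, trials=200, eps=0.1, seed=0):
    """Epsilon-greedy value learning over a given candidate action set."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in candidates}   # estimated value per action
    n = {a: 0 for a in candidates}     # visit counts
    for _ in range(trials):
        if rng.random() < eps:
            a = rng.choice(candidates)             # explore
        else:
            a = max(candidates, key=lambda x: q[x])  # exploit
        r = reward(a)
        n[a] += 1
        q[a] += (r - q[a]) / n[a]  # incremental mean update
    return max(q, key=q.get)


# Unconstrained search over all primitives, versus search constrained
# by the hierarchy to the "manipulate" subtree: the constrained learner
# faces a much smaller space to explore.
best_flat = trial_and_error(ALL_PRIMITIVES)
best_constrained = trial_and_error(HIERARCHY["manipulate"])
```

In this toy setting the constrained learner searches 3 actions instead of 7; in realistic behavior spaces the reduction from hierarchical constraint is the difference between tractable and intractable search, which is the role the abstract assigns to the thalamocortical models.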