What is “Brain Engineering”?

"If I cannot build it, I do not understand it." That was Nobel laureate Richard Feynman -- and by his metric, we understand a bit about physics, less about chemistry, and almost nothing yet about biology. 

[Granger R (2011). How brains are built: Principles of computational neuroscience. Cerebrum; The Dana Foundation. http://dana.org/news/cerebrum/detail.aspx?id=30356].

The Brain Engineering Laboratory has as its goal a fundamental understanding of the brain: its mechanisms, operation, and behaviors. Information about the brain has grown explosively across a broad range of fields, including neuroanatomy, physiology, biochemistry, and behavior, and tools from mathematics, computer science, and engineering are being brought to bear to make sense of these voluminous data. Our laboratory contributes to these fields and integrates their data to construct hypotheses of how the brain operates to enable us to think, perceive, feel, and act.

Inevitably, as a scientific field arrives at an understanding of its object of study, we are able to use the information in a proactive way: to construct synthetic models of the system, to enhance its effectiveness, and to fix it when it breaks. For instance, as biological systems have become increasingly understood, it has become possible to diagnose diseases, to develop drugs to treat them, and to build devices that mimic and can even supplant their operation, such as artificial hearts and limbs. 

The fields of medicine and pharmacology have grown from these fundamental biological findings. The future of brain science will be no less productive, and no less dramatic in its effect on our understanding and its influence on our day-to-day lives.

How are “Brain Engineering” and “Neural Networks” Related?

The field of neural networks began as psychologists and artificial intelligence researchers constructed computer models of human behavior and noted a number of key facts:


Tasks that were easiest for humans were hardest for computers (e.g., perception, language, planning), and tasks that were easiest for computers (e.g., "expert" systems, medical diagnosis, game playing) were hardest for humans.


Difficult tasks such as recognizing a complex image (e.g., a face) could be accomplished by people in less than half a second; but brain cells (neurons) could only respond about every 10 milliseconds (one hundredth of a second), leaving time for only about 50-100 steps to accomplish recognition. No one yet knows of a computer program that could carry out such a complex task in just 100 steps. Therefore, the fact that millions of neurons operate together, in parallel with each other, may somehow be a key.


Parallel computers were becoming possible to build, and all computers were becoming much faster, enabling larger and more complex systems to be built.
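The timing argument above is just a division; a minimal sketch, using only the numbers given in the text (~0.5 seconds to recognize a face, ~10 milliseconds per neuronal response):

```python
# Back-of-the-envelope version of the "hundred-step" argument:
# if recognition takes ~0.5 s and each neuronal response takes ~10 ms,
# a strictly serial chain of neurons could be only ~50 steps deep.
recognition_time_s = 0.5   # time a person needs to recognize a complex image
neuron_step_s = 0.010      # ~10 ms per neuronal response
serial_steps = recognition_time_s / neuron_step_s
print(int(serial_steps))
```

Any program that needed more serial steps than this could not account for the observed speed, which is what motivates looking to massive parallelism instead.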

This led to the design of software systems that borrowed characteristics from brains, to explore new ways of computing. These systems, variously termed "massively parallel" computing, "parallel distributed processing (PDP)," "connectionism," "brain-style" or "brain-like" computation, "artificial neural networks," or, eventually, just "neural networks," shared a few elements: many simple processors (like neurons), each handling only simple numbers (scalars); extensive connectivity among processors; and parallel operation of all processors.
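Those shared elements can be illustrated with a toy model (a hypothetical sketch, not the lab's code, and far smaller than any practical system): many simple units, scalar signals, dense connectivity, and a conceptually parallel update of every unit at once.

```python
# Toy illustration of the shared elements of early neural-network systems:
# simple processors, scalar values, extensive connectivity, parallel update.
import random

random.seed(0)

N_IN, N_OUT = 8, 4
# Extensive connectivity: every input unit feeds every output unit.
weights = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_OUT)]

def step(inputs):
    """Update all output units "in parallel": each independently computes a
    weighted sum of scalar inputs and passes it through a simple threshold."""
    return [1.0 if sum(w * x for w, x in zip(row, inputs)) > 0 else 0.0
            for row in weights]

outputs = step([1.0] * N_IN)
print(outputs)  # a list of N_OUT scalar activations, each 0.0 or 1.0
```

Each output unit's computation depends only on the inputs, not on the other output units, so all of them could in principle be computed simultaneously on parallel hardware.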

Powerful statistical methods were developed within this framework, and the results are now used in a range of applications. The question remained of the relationship between these new methods and the mechanisms actually at work in the human brain. Were the "artificial neural network" systems capturing the essence of brain computation, omitting only characteristics that had biological roles but added no important computational power?

Our lab and others investigate detailed designs of real brain areas, taking advantage of an explosion of new data and insights from the growing fields of neurobiology. We have found that hitherto ignored details of the anatomical wiring diagrams and physiological operating mechanisms of brain circuits suggest powerful algorithms that bear little resemblance to those in neural networks, and indeed were unexpected from psychological or neuroscience studies. For example, our models of superficial cortical layers perform the unexpectedly complex task of hierarchical clustering [Granger R. (2006) Engines of the brain: The computational instruction set of human cognition. AI Magazine, 27: 15-32]; our cortical deep-layer models perform a type of hash coding [Rodriguez A, Whitson J, Granger R (2004) Derivation and analysis of basic computational operations of thalamocortical circuits. J. Cognitive Neurosci., 16: 856-877]; hippocampal field CA3 performs time dilation [Granger, R., Wiebe, S., Taketani, M., Ambros-Ingerson, J., Lynch, G. (1997). Distinct memory circuits comprising the hippocampal region. Hippocampus, 6: 567-578]; the basal ganglia carry out a form of reinforcement learning [Granger R. (2004) Brain circuit implementation: High-precision computation from low-precision components. In: Replacement Parts for the Brain (T. Berger, Ed.) MA: MIT Press]; and so on. Each of these algorithms, emergent from different brain structures, adds to the "tool kit" or instruction set of processes that might be carried out by the brain, and the resulting models have many attractive abilities and unusually low computational costs compared to other methods, including neural network methods [Granger R (2011). How brains are built: Principles of computational neuroscience. Cerebrum; The Dana Foundation. http://dana.org/news/cerebrum/detail.aspx?id=30356].
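To make one of the named operations concrete, here is a sketch of generic single-linkage agglomerative hierarchical clustering, the textbook form of the operation attributed above to superficial cortical layers. This is not the lab's cortical model, just an illustration of what "hierarchical clustering" computes: repeatedly merging the closest groups to build a tree of categories and subcategories.

```python
# Generic single-linkage agglomerative clustering on 1-D points
# (illustrative only; NOT the lab's cortical-layer model).
def hierarchical_cluster(points):
    """Repeatedly merge the two closest clusters; return the merge history."""
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between the closest members.
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((clusters[i][:], clusters[j][:], d))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return merges

history = hierarchical_cluster([0.0, 0.1, 1.0, 1.1, 5.0])
# The earliest merges join the nearest points (0.0 with 0.1, then 1.0 with 1.1),
# and the outlier 5.0 joins last -- yielding a hierarchy of nested groupings.
```

The merge history is exactly the hierarchy: fine-grained clusters form first and are successively absorbed into coarser ones.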

How do you know that the brain areas you study are actually carrying out the computational operations that you think they are?

Of course, we don't know; we and all scientists generate hypotheses from the extant data, and continue to test those hypotheses as new data arise. We study particular brain circuits "bottom up", hoping that a circuit's natural operation will suggest the computation that it is carrying out. As we construct simplified models, we try to be alert to biological features that, if added in, are consistent with (or even enhance) the hypothesized functions, or are inconsistent with them. We also attempt as much as possible to identify predictions arising from the models that can be tested via biological or behavioral means, though it is rare that models make sufficiently specific predictions, or that such predictions are testable with any current methods. Nonetheless, as new biological data arise, we continue to check the model against the known constraints, to either strengthen or modify it, or, if necessary, discard a refuted model for a particular brain structure and begin again.

What experimental predictions of your models have you tested, either behaviorally or biologically?

In the olfactory system, we have studied behavioral [Granger, R., Staubli, U., Powers, H., Otto, T., Ambros-Ingerson, J., and Lynch, G. (1991). Behavioral tests of a prediction from a cortical network simulation. Psychol. Sci., 2: 116-118] and physiological predictions [McCollum, J., Larson, J., Otto, T., Schottler, F., Granger, R., and Lynch, G. (1991). Short-latency single-unit processing in olfactory cortex. J. Cog. Neurosci., 3: 293-299]; in the hippocampus we have identified a sequence-dependent form of long-term potentiation [Granger, R., Whitson, J., Larson, J. and Lynch, G. (1994). Non-Hebbian properties of LTP enable high-capacity encoding of temporal sequences. Proc. Nat'l. Acad. Sci., 91: 10104-10108]; and in behavioral tests of human visual thalamocortical processing, we have recently verified predictions from a thalamocortical model [Granger et al., submitted]. In addition, we have done extensive behavioral studies of the role of glutamatergic neurotransmitter receptors in learning, via specific pharmacological manipulation of these receptors in animals [Granger, R., Deadwyler, S., Davis, M., Moskowitz, B., Kessler, M., Rogers, G., and Lynch, G. (1996). Facilitation of glutamate receptors reverses an age-associated memory impairment in rats. Synapse, 22: 332-337] and in young and aged human subjects [Lynch, G., Kessler, M., Rogers, G., Ambros-Ingerson, J., Granger, R. and Schehr, R. (1996). Psychological effects of a drug that facilitates brain AMPA receptors. Int J Clinical Psychopharmacol, 11: 13-19; Ingvar, M., Ambros-Ingerson, J., Davis, M., Granger, R., Kessler, M., Rogers, G., Schehr, R., and Lynch, G. (1997). Enhancement by an ampakine of memory encoding in humans. Exper. Neurol., 146: 553-559; Lynch, G., Granger, R., Davis, M., Ambros-Ingerson, J., Kessler, M., Schehr, R. (1997). Evidence that a positive modulator of glutamate receptors improves recall in elderly human subjects. Experimental Neurol., 145: 89-92].

Recent studies have been aimed at uncovering the structure of representation in cortical systems; we have engaged in a series of neuroimaging experiments on the transformation of information from perceptual input features through to internal conceptual representations (e.g., from the sound of a dog's bark to the concept of a dog). These so-called "percept to concept" studies span modalities from auditory to visual, sometimes involving both [Lee Y, Janata P, Frost C, Hanke M, Granger R (2011) Investigation of melodic contour processing in the brain using multivariate pattern-based fMRI. NeuroImage, doi: 10.1016/j.neuroimage.2011.02.006].

If your models are actually doing what brain circuits do, then do they turn out to be useful methods for applications?

The algorithms derived from various brain areas have turned out to be so unexpectedly effective and efficient that they have found use in a variety of real-world applications, ranging from military to industrial to medical uses. As examples, the Navy has used our systems to process signals from radar and other signal detectors [Kowtha, V., Satyanarayana, P., Granger, R., and Stenger, D. (1994). Learning and classification in a noisy environment by a simulated cortical network. Proc. Third Ann. Comp. & Neural Systems Conf., Boston: Kluwer, pp. 245-250]; and a hardware and software system derived from our cortical models has been used to analyze EEG data in normal and early Alzheimer's subjects, as a potential device for aiding clinicians in the early detection of Alzheimer's Disease [Benvenuto, J., Jin, Y., Casale, M., Lynch, G., Granger, R. (2002). Identification of diagnostic evoked response potential segments in Alzheimer's Disease. Exper. Neurology, 176: 269-276; Granger, R. (2001). Method and computer program product for assessing neurological conditions and treatments using evoked response potentials. U.S. Patent #6,223,074 (54 claims)].

(See the "Applications" page on this site.) 

Work has been done to recognize moving objects in videos as they change in size, orientation, shading, and lighting, and as they are partially blocked (occluded) by other objects [Dhulekar N, Felch A, Granger R (2010) Tracking moving objects improves recognition. Int'l Conf on Image Processing, Computer Vision, & Pattern Recognition (IPCV), pp. 798-803].

Based on the massively parallel design of brains, we have developed novel computer architectures for parallel machine designs; these have been constructed into working hardware systems [Moorkanikara J, Chandrashekar A, Felch A, Furlong J, Dutt N, Nicolau A, Veidenbaum A, Granger R. (2007) Accelerating brain circuit simulations of object recognition with a Sony Playstation 3. International Workshop on Innovative Architectures (IWIA); Moorkanikara J, Felch A, Chandrashekar A, Dutt N, Granger R, Nicolau A, Veidenbaum A. (2009) Brain-derived vision algorithm on high-performance architectures. Int'l Journal of Parallel Prog., 37: 345-369].

Novel sensor-rich robots have been designed, capable of learning complex perceptual and motor behaviors [Felch A, Granger R (2010) Sensor-rich robots driven by real-time brain circuit algorithms. In: Neuromorphic and brain-based robots (Krichmar & Wagatsuma, Eds)].

What does “brain engineering” tell us about everyday thought?

If each sub-component of the brain has a particular engineering function, then what does that imply for the real use of the brain, which is thinking? In other words, as we come to understand more about the mechanisms of the brain, will that help us understand how we think? The answer is yes.

When we "think" about something, like getting ready for work in the morning, we are actually using a large number of distinct brain engines, all in concert, and it is their combined activity that is thought. Each of the different brain areas modeled gives rise to a different constituent activity of thought, and areas working together can produce computational algorithms that are different from either of the parts independently. The overall goal of brain engineering is to understand the nature of thought in terms of its constituent brain processes. Significant strides have been taken, but this is a goal that may take many, many years to achieve.