Neuro-electronic Hybrid Systems
Abstract
Artificial neural networks have provided a powerful motif for implementing extremely complex functions of high-dimensional inputs such as images and text. They are now being used to directly control robotic systems, enabling them to operate in natural environments, with the aim of creating human-like capabilities. However, even the most advanced state-of-the-art deep neural inference chip is still multiple orders of magnitude less energy-efficient than biological neurons. For instance, the active human brain consumes only about 15 W of power and has about 85 billion neurons, with an equivalent transistor count of about a trillion. In contrast, the latest silicon chip has about 15 billion transistors and consumes about 250 W of power. The energy efficiency of modern electronics-based computing is thus at least 1,000x to 100,000x worse than that of biological computation, and this gap will take several more decades to bridge. Given that neuronal networks in the brain perform complex pattern recognition with remarkable energy efficiency, can we recruit them into our computational systems, rather than trying to mimic their capabilities in silicon? This thesis explores and addresses some of the technological challenges in creating such neuro-electronic hybrid systems. Nerve tissue as a computational element can be thought of as a multi-input, multi-output information transformation unit; if its transformation characteristics can be understood, and used stably and reliably, then this unit can be made part of a hybrid computational chain consisting of a mix of biology and electronics.
There are several challenges to overcome to make this possible: a) growth and maintenance of nerve tissue for sufficiently long periods, in an energy-efficient manner, to ensure its use for specific missions; b) engineering of high-dimensional stimulation and recording from this nerve tissue to allow it to be used in hybrid systems; c) information encoding and decoding to and from the tissue; d) understanding the tissue's information transformation capability and exploring ways to modify or train it; and e) studying the tissue's information transformation capabilities as a function of time (i.e., its stability and reliability). The thesis mainly explores items c) and d) above: namely, information transformation to and from the nerve tissue, and experimental studies in training the nerve tissue to change its functional behavior, leading to insights into how biological neuronal networks can be leveraged for neuro-electronic hybrid systems.
First, we describe a new method for encoding external stimulus inputs to a neuronal system and decoding the tissue outputs so that they are interpretable by external systems. We use the Liquid State Machine framework to model and understand the computational ability of the nerve tissue. Our proposed encoding method can encode many more inputs, in a systematic manner, than previous approaches, and our output decoder is simpler and more efficient than comparable designs.
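The encode/drive/decode pipeline can be illustrated with a minimal numerical sketch. The thesis's actual encoding and decoding schemes are not reproduced here; the thermometer-style spatial code, the leaky-integrator "liquid" standing in for the tissue dynamics, and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, n_electrodes=8):
    """Map a scalar input in [0, 1] to a spatial stimulation pattern
    (illustrative thermometer code across stimulation electrodes)."""
    levels = np.linspace(0, 1, n_electrodes, endpoint=False)
    return (x > levels).astype(float)

def liquid_step(state, stim, W, W_in, leak=0.3):
    """One leaky-integrator update, a numerical stand-in for tissue dynamics."""
    return (1 - leak) * state + leak * np.tanh(W @ state + W_in @ stim)

n_liquid, n_elec = 50, 8
W = rng.normal(0, 1 / np.sqrt(n_liquid), (n_liquid, n_liquid))
W_in = rng.normal(0, 1.0, (n_liquid, n_elec))

# Drive the liquid with an input stream and collect its states;
# these states become the features a simple linear decoder reads out.
states = []
state = np.zeros(n_liquid)
for x in np.linspace(0, 1, 20):
    state = liquid_step(state, encode(x), W, W_in)
    states.append(state.copy())
states = np.array(states)  # shape (time, n_liquid)
```

In the Liquid State Machine view, only the decoder that reads `states` needs to be application-specific; the encoding and the liquid itself stay fixed.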
We then demonstrate a real-time closed-loop system in which nerve tissue on a multi-electrode array (MEA) controls an external toy robot, moving it around while avoiding obstacles. This demonstrated the consistency and stability of responses from the network, and the ability of the decoding scheme to map noisy tissue outputs to stable control commands for the robot over an extended period.
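One simple way to turn noisy firing rates into stable discrete commands is windowed majority voting, sketched below. This is not the thesis's decoder; the command set, electrode grouping, and window length are hypothetical.

```python
import numpy as np
from collections import deque

COMMANDS = ("forward", "left", "right")

def decode_window(rates, history, window=5):
    """Pick the command whose electrode group fires most strongly,
    then majority-vote over a short history to suppress jitter."""
    history.append(int(np.argmax(rates)))
    if len(history) > window:
        history.popleft()
    counts = np.bincount(list(history), minlength=len(COMMANDS))
    return COMMANDS[int(np.argmax(counts))]

history = deque()
rng = np.random.default_rng(1)
# Simulated noisy rates biased toward "forward" (index 0).
cmds = [decode_window(np.array([3.0, 1.0, 1.0]) + rng.normal(0, 0.5, 3), history)
        for _ in range(20)]
```

The voting window trades responsiveness for stability: a longer window yields steadier robot commands at the cost of slower reaction to genuine changes in tissue output.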
To better understand the computational model for such a hybrid system, we next study the context-dependent computational capability of the neuronal network in the MEA. Our experiments show that the neuronal network has an inherent ability to perform context-dependent computation in a robust way, by virtue of its random structure. Probabilistic connections in the network give rise to conjunctive neurons, which result in the emergent properties of robustly encoding input stimuli and grouping related inputs. An appropriate framework for modeling the computation performed by the tissue network is that of reservoir computing, or the liquid state machine. We perform computer simulations with such a model to show that the tissue's transformation capabilities are robust against loss of connections as well as neurons.
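The flavor of such a robustness simulation can be sketched as follows: measure how well a random reservoir separates two distinct input streams, then prune a fraction of its connections and re-measure. The echo-state-style update, the 10% pruning rate, and the sine/cosine test inputs are illustrative assumptions, not the thesis's simulation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
W = rng.normal(0, 1 / np.sqrt(n), (n, n))   # random recurrent weights
w_in = rng.normal(0, 1, n)                  # random input weights

def run(W, u_seq, leak=0.5):
    """Drive the reservoir with an input sequence; return its final state."""
    x = np.zeros(n)
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W @ x + w_in * u)
    return x

def separation(W):
    """Distance between final states for two distinct input streams."""
    a = run(W, np.sin(np.arange(30)))
    b = run(W, np.cos(np.arange(30)))
    return float(np.linalg.norm(a - b))

base = separation(W)
mask = rng.random(W.shape) > 0.10   # randomly delete 10% of connections
pruned = separation(W * mask)
```

If the separation property survives pruning, a linear readout trained on reservoir states can still distinguish the inputs, which is the sense in which the random structure confers robustness.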
We then study the efficacy of previously reported training protocols, based on theta-burst stimuli, that attempt to change the stimulus response of a neuronal network in the MEA. We analyze the principal components of the high-dimensional MEA recordings of spontaneous activity before and after this training stimulus. Using this technique, we determine that the network maintains homeostasis in its activity over an 8-hour recording period. We also find that this homeostasis is temporarily disturbed by the theta-burst stimuli but is restored some time after the stimuli are removed. However, some electrodes show a more permanent change in their response to a specific input stimulus, indicating memory of the training event along specific pathways in the network. These experiments confirm that local plasticity can indeed be achieved via specific stimulus patterns, as reported elsewhere, but the network overall tends to maintain homeostasis, indicating that creating large-scale, network-wide changes through external stimulation via the MEA is a difficult challenge.
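The principal-component comparison can be sketched with synthetic data: extract the top principal axes of binned spike counts before and after stimulation, and quantify how much the dominant subspace moved. The data shapes, the 3-dimensional latent structure, and the squared-cosine overlap measure are illustrative assumptions standing in for the actual MEA recordings and analysis.

```python
import numpy as np

def top_pcs(X, k=3):
    """Top-k principal axes of binned spike counts X (time_bins, electrodes)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]  # rows are orthonormal principal axes

def subspace_overlap(A, B):
    """Mean squared cosine of principal angles between two subspaces (0..1);
    values near 1 indicate the dominant activity structure is preserved."""
    s = np.linalg.svd(A @ B.T, compute_uv=False)
    return float(np.mean(s ** 2))

rng = np.random.default_rng(3)
latent = rng.normal(size=(500, 3))     # shared low-dimensional structure
mix = rng.normal(size=(3, 60))         # projection onto 60 electrodes
pre = latent @ mix + 0.1 * rng.normal(size=(500, 60))
post = latent @ mix + 0.1 * rng.normal(size=(500, 60))  # homeostasis: same structure
overlap = subspace_overlap(top_pcs(pre), top_pcs(post))
```

In this synthetic homeostatic case the pre and post recordings share the same latent structure, so the overlap is close to 1; a persistent training-induced change would show up as a markedly lower overlap along the affected directions.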
This leads us to conclude that the best way to train the hybrid system, given the limitations of current technology, is not to train the nerve tissue itself, but to restrict training to the output perceptron layer. A sufficiently large random tissue network will hold a vast reservoir of functions, from which the desired function can be teased out via appropriate training of the output perceptron layer, as proposed by the reservoir computing model.
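A minimal sketch of this readout-only training strategy: hold a random reservoir fixed (here a numerical stand-in for the tissue), collect its states, and fit only a linear readout by ridge regression. The one-step-recall task, the reservoir size, and the regularization constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, T = 80, 300
W = 0.9 * rng.normal(0, 1 / np.sqrt(n), (n, n))  # fixed "tissue": never trained
w_in = rng.normal(0, 1, n)

u = rng.uniform(-1, 1, T)
target = np.roll(u, 1)      # illustrative task: recall the previous input
target[0] = 0.0

# Collect reservoir states; the recurrent weights stay frozen throughout.
X = np.zeros((T, n))
x = np.zeros(n)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# Fit only the readout (the trainable "output perceptron layer").
lam = 1e-3
w_out = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ target)
pred = X @ w_out
mse = float(np.mean((pred[10:] - target[10:]) ** 2))
```

All learning happens in the single linear solve for `w_out`; nothing upstream is modified, which mirrors the thesis's conclusion that training should be confined to the output layer while the tissue serves as a fixed reservoir.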