Low Power Analog Neural Network Framework with MIFGMOS
Abstract
Ever since the notion behind the term 'cloud computing' emerged, i.e., sharing the processing and storage capabilities of a centralized system, there has been a significant increase in the availability of raw data. The challenges faced while processing such data in a cloud framework (e.g., high latency, storage limitations, channel bandwidth limitations, downtime) gave birth to edge computing, where the idea is to push computation to the edge of the network. Edge computing is a distributed computing paradigm that offloads the cloud by performing data processing near the data source.
For real-time applications (e.g., autonomous vehicles, air traffic control systems) where latency is of prime concern, deploying Deep Neural Networks (DNNs) on the cloud is not a feasible option. This is because of substantial inference times and enormous memory requirements across numerous CPUs and GPUs, which translate to large power consumption. This latency bottleneck can be overcome by deploying DNN models on edge devices. However, edge devices typically cannot host a large DNN because of power and memory constraints, which motivates small yet efficient DNN implementations on edge devices. The Extreme Learning Machine (ELM) has shown promising results, in terms of faster training and high accuracy, for the Multilayer Perceptron (MLP) in applications such as object detection, recognition, and tracking. The MLP, being an instance of a DNN, is therefore a viable candidate for deployment on edge devices.
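To make the ELM scheme concrete, the following minimal sketch (my own illustration, not the circuit-level method of this work) trains a single-hidden-layer network the ELM way: the input weights are random and fixed, and only the output weights are solved in closed form via the pseudoinverse.

```python
import numpy as np

def train_elm(X, Y, n_hidden=32, seed=0):
    """ELM training: random, untrained hidden layer; output weights
    obtained in one least-squares step (no backpropagation)."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((X.shape[1], n_hidden))  # fixed random weights
    b = rng.standard_normal(n_hidden)                    # fixed random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))            # sigmoid hidden layer
    W_out = np.linalg.pinv(H) @ Y                        # Moore-Penrose solution
    return W_in, b, W_out

def predict_elm(X, W_in, b, W_out):
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    return H @ W_out

# Tiny regression example: fit y = x^2 on [-1, 1].
X = np.linspace(-1, 1, 200).reshape(-1, 1)
Y = X ** 2
params = train_elm(X, Y)
err = np.max(np.abs(predict_elm(X, *params) - Y))
```

Because the only trained quantity is a linear least-squares solve, training is orders of magnitude faster than gradient descent, which is what makes ELM attractive for the hardware setting described here.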
These constraints motivate an analog implementation of the MLP, whose characteristic low power consumption and small size overcome the issues discussed above. In this work, a novel way of realizing the ELM framework on a single hidden layer feed-forward neural network (SLFN) is presented, based on a Multiple-Input Floating Gate MOS (MIFGMOS) operational transconductance amplifier (OTA). The MIFGMOS, a multiple-input version of the FGMOS, dissipates meager power because it performs voltage summation through lossless capacitive charge sharing.
Its programmable threshold voltage and weighted summation of the input gate voltages make the MIFGMOS an ideal device for emulating biological neurons while operating in the subthreshold region for low-power operation. Moreover, its ability to serve as an analog memory, in the form of statically stored charge, makes a separate input-layer synaptic weight arrangement unnecessary. From the perspective of an analog neural network framework, the use of MIFGMOS substantially improves areal density. The transconductance curve of the employed OTA resembles a highly non-linear activation function (a sigmoid in this case). The slope and maximum level of this curve, which are the tunable parameters of our setup, provide variability among the activations.
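A behavioural sketch of one such neuron is given below. It assumes the standard FGMOS floating-gate relation V_FG = (Σ C_i·V_i + Q_FG)/C_T (a textbook result, not stated explicitly here), and models the OTA transfer curve as a sigmoid with tunable slope and maximum level; all capacitance and voltage values are illustrative, not the circuit parameters of this work.

```python
import numpy as np

def floating_gate_voltage(C, V, Q_fg=0.0, C_par=0.0):
    """Standard FGMOS relation (assumed): the floating-gate voltage is
    the capacitively weighted sum of the input gate voltages plus any
    statically stored charge, divided by the total capacitance C_T."""
    C = np.asarray(C, dtype=float)
    V = np.asarray(V, dtype=float)
    C_total = C.sum() + C_par
    return (C @ V + Q_fg) / C_total

def ota_activation(v, level=1.0, slope=4.0):
    """Hypothetical behavioural model of the OTA transfer curve:
    a sigmoid whose slope and maximum level are the tunable knobs."""
    return level / (1.0 + np.exp(-slope * v))

# One MIFGMOS neuron: coupling capacitances act as input weights,
# and the OTA supplies the non-linear activation.
C = [1e-15, 2e-15, 1e-15]   # input coupling capacitances (F), illustrative
V = [0.3, -0.1, 0.2]        # input gate voltages (V), illustrative
v_fg = floating_gate_voltage(C, V)
out = ota_activation(v_fg, level=1.0, slope=8.0)
```

Adjusting the stored charge Q_fg shifts the effective threshold (the analog weight memory noted above), while `level` and `slope` play the role of the tunable transconductance parameters.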
The proposed system has been implemented in a 65 nm Complementary Metal Oxide Semiconductor (CMOS) process technology. Its working principle has been verified by employing it for regression and classification tasks such as MNIST digit recognition.