The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two. Much as a neural pathway in the brain is reinforced through learning, the artificial synapse is programmed by discharging and recharging it repeatedly.
Through this training, the researchers have been able to predict, to within 1 percent uncertainty, what voltage is required to put the synapse into a specific electrical state; once there, it remains in that state. In other words, unlike a conventional computer, where you save your work to the hard drive before you turn it off, the artificial synapse retains its programming without any additional actions or parts.
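A minimal sketch of this programming idea, using a hypothetical linear device model (the class, constants, and update rule below are illustrative assumptions, not the paper's device physics):

```python
import numpy as np

# Hypothetical device model: each voltage pulse shifts the synapse's
# conductance by an amount proportional to the pulse voltage. The state
# persists after the pulse (non-volatile), so no refresh is needed.
# All constants are illustrative, not measured device values.
class ArtificialSynapse:
    def __init__(self, g_min=0.0, g_max=1.0, sensitivity=0.1):
        self.g = g_min                   # normalized conductance state
        self.g_min, self.g_max = g_min, g_max
        self.sensitivity = sensitivity   # conductance change per volt

    def pulse(self, voltage):
        """Apply one programming pulse; the new state is retained."""
        self.g = np.clip(self.g + self.sensitivity * voltage,
                         self.g_min, self.g_max)
        return self.g

    def voltage_for(self, target_g):
        """Predict the pulse voltage needed to reach a target state."""
        return (target_g - self.g) / self.sensitivity

syn = ArtificialSynapse()
v = syn.voltage_for(0.4)   # predicted programming voltage
syn.pulse(v)
print(f"pulse {v:+.2f} V -> state {syn.g:.3f} (retained without power)")
```

Because the conductance shift is linear in this toy model, the predicted voltage lands exactly on the target; in the real device, the 1 percent figure reflects how repeatable the measured response is.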
Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network.
Tested on three datasets, the simulated array was able to identify handwritten digits with an accuracy between 93 and 97 percent. Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.
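A minimal stand-in for this kind of array simulation, assuming a standard handwritten-digit dataset and a finite number of programmable conductance levels per synapse (both are illustrative assumptions; this is not the study's actual simulation, which drew on the 15,000 device measurements):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train an ordinary classifier on handwritten digits, then snap every
# weight to a finite set of levels, mimicking a synapse that can only
# occupy a limited number of programmable conductance states.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print("continuous weights:", clf.score(X_te, y_te))

levels = 500  # programmable states per synapse (assumed)
w = clf.coef_
step = (w.max() - w.min()) / (levels - 1)
clf.coef_ = np.round((w - w.min()) / step) * step + w.min()
print("quantized weights: ", clf.score(X_te, y_te))
```

With a few hundred levels per weight, the quantized accuracy typically stays close to the continuous baseline, which is why a device with many stable states is attractive for hardware neural networks.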
According to Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper, the device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models.
In switching from one state to another, the synapses used about one-tenth as much energy as a state-of-the-art computing system needs to move data from the processing unit to the memory. This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire.
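These two ratios can be chained into a rough energy budget. A back-of-the-envelope sketch, assuming a commonly quoted ~10 femtojoules per biological synaptic event (an assumed order of magnitude, not a figure from the article):

```python
# Chain the two ratios stated in the text into absolute numbers.
# The 10 fJ biological figure is an assumption for illustration.
E_BIO = 10e-15                 # J per biological synaptic event (assumed)
E_DEVICE = 10_000 * E_BIO      # artificial synapse: ~10,000x biology
E_DATA_MOVE = 10 * E_DEVICE    # CPU<->memory transfer: ~10x the device

print(f"biological synapse : {E_BIO:.1e} J")
print(f"artificial synapse : {E_DEVICE:.1e} J")
print(f"data movement      : {E_DATA_MOVE:.1e} J")
```

Under these assumptions, moving data in a conventional system costs roughly 100,000 times the energy of a biological synaptic event, which is the gap neuromorphic hardware is trying to close.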
The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices. Every part of the device is made of inexpensive organic materials; cells have been grown on these materials, and they have even been used to make artificial pumps for neurotransmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons. The device's softness and flexibility further lend themselves to use in biological environments.
Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.