Compared with architectures that separate processing and storage on distinct chips, non-volatile neural devices offer the competitive advantages of low energy consumption and high-speed parallel operation. Computing in memory (CIM) performs computation directly where data are stored, and eliminating the boundary between memory and processor is a leading research goal.1 In recent years, resistive random access memories (RRAMs) acting as memristors have been integrated with microprocessors and peripheral circuits to realize the artificial intelligence (AI) functionalities of neural networks.2,3 The NeuRRAM chip is an advanced RRAM-based CIM chip that achieves inference accuracy comparable to software models with four-bit weights across various AI tasks, delivers twice the energy efficiency of previous state-of-the-art RRAM-CIM chips at different computational bit precisions, and allows flexible reconfiguration of its CIM cores to accommodate diverse model architectures.4,5 From the perspective of energy consumption, three-terminal neural devices have greater potential to approach the power budget of the human brain (25 W) in large-scale computing.6 However, owing to the limitations of array fabrication technology for three-terminal devices, synaptic transistors serving as crossbar weights combined with functional circuits have rarely been explored to simulate a complete neural network.7 Instead, a great deal of research focuses on the synaptic plasticity of single devices and their non-volatile regulation mechanisms.3,8 Exploiting the bionic performance of synaptic transistors to build fused circuits that match high-performance networks would therefore greatly accelerate the development of brain-like computing systems.9,10
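The core CIM operation underlying such crossbar weights is an analog matrix-vector multiplication: each column current is the sum of conductance-weighted input voltages. The following minimal sketch illustrates this principle only; the array size, 4-bit quantization, and binary input pulses are illustrative assumptions, not parameters of the NeuRRAM chip or of the devices in this work.

```python
import numpy as np

# Minimal sketch of an analog crossbar multiply-accumulate (MAC):
# each column current is I_j = sum_i G[i, j] * V[i]
# (Ohm's law per cell, Kirchhoff's current law per column).
rng = np.random.default_rng(0)

n_rows, n_cols = 8, 4
levels = 16  # 4-bit weights, as in the CIM chips discussed above

# Hypothetical 4-bit conductances (arbitrary units) and binary input pulses.
G = rng.integers(0, levels, size=(n_rows, n_cols)).astype(float)
V = rng.choice([0.0, 1.0], size=n_rows)

I = V @ G  # column output currents: the in-memory matrix-vector product
print("column currents:", I)
```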
The essence of brain-like computing is learning from the information-processing methods or structures of biological neural systems and then developing matching computer theory, chip architectures, application models, and algorithms.11 Brain-like computing is considered a significant research avenue in the post-Moore era, with the potential to break through a technological bottleneck in future intelligent computing.12 At present, spiking neural networks (SNNs), which closely replicate biological nervous systems, are a promising technology owing to their low-overhead online learning and energy-efficient information encoding, both stemming from their intrinsically local training principles. Innovation in SNNs must therefore be deepened comprehensively across all related fields, including model algorithms, software, chips, and data.13 Several multiterminal synaptic devices, including floating-gate synaptic transistors (STs), ferroelectric-gate STs, electrolyte-gate STs, and optoelectronic STs, have been developed to produce synaptic plasticity, which is classified by factors such as retention time and the number of applied pulses; these devices effectively provide the ability to manipulate synaptic strength.14,15 Their respective working principles are: thermionic emission or quantum tunneling of electrons into the floating gate; the Coulomb interaction between carriers in the channel and the polarization of the ferroelectric insulator; electrostatic modulation and electrochemical doping; and interfacial charge trapping through photogenerated electron-hole pairs. Moreover, the functional layer, which can comprise a variety of materials (metal oxides, organic materials, two-dimensional materials, quantum dots, and perovskites), can enhance or extend the synaptic properties of a system with regard to energy consumption, computing speed, and compatibility.16 However, the AI applications of metal-organic frameworks (MOFs) in non-volatile neural devices have rarely been reported. MOFs are crystalline porous materials created by combining polytopic organic ligands with metal centers. They possess several advantageous characteristics, such as highly ordered pores, a substantial surface area, and a modifiable structure, which make it straightforward to design controllable and multifunctional biological spiking neural devices. Turning to the core unit of an SNN, SNNs commonly adopt leaky integrate-and-fire (LIF) neurons as the fundamental building blocks for constructing neural networks.17,18 The LIF neuron model combines the user-friendliness and simplicity of the integrate-and-fire (IF) model with the capability to simulate various physiological properties of biological neurons, similar to the Hodgkin-Huxley (H-H) neuron model.19 For synaptic devices, the LIF model is computationally efficient owing to its simplicity, making it suitable for large-scale simulations. The LIF model is also biologically plausible, capturing a wide range of physiological properties of biological neurons such as action-potential generation, synaptic integration, and adaptation.20 For AI applications, the LIF model is compatible with a range of learning rules (LTP/LTD and STDP) and can be used to train SNNs for various tasks, such as classification, pattern recognition, and control.
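As a concrete illustration of the LIF dynamics described above, the following minimal sketch integrates an input current into a leaky membrane potential and emits a spike whenever a threshold is crossed. All constants (time step, leak time constant, threshold, reset value, and input amplitude) are illustrative assumptions, not parameters of the circuit reported in this work.

```python
import numpy as np

def lif_neuron(i_in, dt=1.0, tau=20.0, v_th=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron.

    Discretizes dV/dt = (-V + I) / tau; a spike is emitted and V
    is reset whenever V reaches the threshold v_th. Parameters are
    illustrative, not fitted to the device in this work.
    """
    v, spikes, trace = 0.0, [], []
    for i_t in i_in:
        v += dt / tau * (-v + i_t)   # leaky integration of input current
        if v >= v_th:                # threshold crossing -> fire
            spikes.append(1)
            v = v_reset              # membrane potential reset
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)

# A constant suprathreshold input produces a regular spike train.
spikes, _ = lif_neuron(np.full(200, 1.5))
print("spike count:", spikes.sum())
```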
In particular, studies on constructing LIF neuron circuits and composing the forward-propagation process of SNNs from the output signals of a synaptic-weight crossbar are rarely reported.21 Bridging the gap between the characterization of single devices and the construction of an integral neural-network system therefore requires further effort.22 In terms of operation speed, an appropriate data type helps improve the working efficiency of the neural network.23 In addition, SNNs have an advantage in processing complex temporal information that exhibits clear differences in the frequency domain. The steady-state visually evoked potential (SSVEP) is a neural response to visual stimuli.25 When the eyes receive periodic flashes of light, the brain generates a stable electrical signal that oscillates at the same frequency as the stimulus. This response can be recorded via electroencephalography (EEG) and is typically observed as a periodic waveform at a specific frequency. SSVEP is widely used in the development of brain-computer interfaces (BCIs), which enable individuals to control external devices through their brain activity.26 For instance, in an SSVEP-based BCI system, users select commands or controls by fixating on visual stimuli that flash at distinct frequencies on a computer screen; the system identifies the user's choice by analyzing the EEG and executes the corresponding operation. SSVEP-based BCIs have diverse applications in fields such as virtual reality, game control, and medical diagnosis.27,28
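Since SSVEP classification ultimately reduces to identifying the dominant stimulus frequency in the EEG, the decision step can be sketched with a simple spectral analysis. The synthetic signal, sampling rate, window length, and candidate flicker frequencies below are illustrative assumptions, not data or parameters from this work.

```python
import numpy as np

fs = 250.0                            # assumed EEG sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)         # 2 s analysis window
stim_freqs = [8.0, 10.0, 12.0, 15.0]  # candidate flicker frequencies

# Synthetic SSVEP: a 12 Hz response buried in noise.
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.8 * rng.standard_normal(t.size)

# Spectral power at each candidate frequency via the FFT.
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = [spectrum[np.argmin(np.abs(freqs - f))] for f in stim_freqs]

# The user's gaze target is taken as the frequency with maximal power.
print("detected stimulus:", stim_freqs[int(np.argmax(power))], "Hz")
```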
In this work, we propose a new type of spiking neural network that combines ZIF-67 synaptic transistors, LIF neuron circuits, and SSVEP to achieve efficient and accurate neural computation. Forward propagation in our network relies on time-sequence coding, accumulation of the postsynaptic current, and the membrane-potential threshold voltage of the LIF neurons. Backpropagation in the proposed SNN involves determining the iterative update rules and integrating the STDP curve to adjust the synaptic weights between neurons. The functional diversity of the prepared artificial synaptic devices is clearly demonstrated by the STM/LTM, paired-pulse facilitation (PPF), STDP, and LTP/LTD results. More importantly, a LIF circuit capable of producing an output matched to the array has been simulated, allowing the SNN to efficiently convert high-frequency information into sparse signals using its four functional blocks. Ultimately, the task of recognizing EEG signals was accomplished with the modified SNN, and the final recognition rate stabilized at 95.2%.
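To make the STDP-based weight adjustment concrete, a minimal sketch of a pairwise exponential STDP rule is given below. The exponential form and the constants a_plus, a_minus, and tau are common modeling assumptions, not the exact curve measured for the ZIF-67 device in this work.

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.05, a_minus=0.04, tau=20.0):
    """Pairwise exponential STDP weight change.

    Pre-before-post (dt > 0) potentiates (LTP); post-before-pre
    depresses (LTD). Constants are illustrative, not fitted to the
    measured STDP curve of the device reported here.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)   # LTP branch
    return -a_minus * np.exp(dt / tau)      # LTD branch

# Apply the rule to one synapse for a few hypothetical spike pairings.
w = 0.5
for t_pre, t_post in [(10, 15), (40, 38), (60, 80)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)  # keep weight bounded
print("updated weight:", round(w, 4))
```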