A Sliding‐Kernel Computation‐In‐Memory Architecture for Convolutional Neural Network
Abstract: This work describes a sliding‐kernel computation‐in‐memory (SKCIM) architecture conceptually comprising two overlapping layers of functional arrays: one contains memory elements and artificial synapses for neuromorphic computation; the other stores and slides convolutional kernel matrices. A low‐temperature metal‐oxide thin‐film transistor (TFT) technology capable of monolithically integrating single‐gate TFTs, dual‐gate TFTs, and memory capacitors is deployed to construct a physical SKCIM system. A 32 × 32 SKCIM system executes common convolution tasks with an 88% reduction in memory‐access operations compared to state‐of‐the‐art systems. In a more involved demonstration, a 5‐layer SKCIM‐based convolutional neural network classifies the Modified National Institute of Standards and Technology (MNIST) dataset of handwritten numerals with an accuracy of over 95%.
- Location
Deutsche Nationalbibliothek Frankfurt am Main
- Extent
Online resource
- Language
English
- Published in
Advanced Science, 22 October 2024, 12 pages
- Creators
Hu, Yushen
Xie, Xinying
Lei, Tengteng
Shi, Runxiao
Wong, Man
- DOI
10.1002/advs.202407440
- URN
urn:nbn:de:101:1-2410221445527.569872128530
- Rights information
Open Access; access to the object is unrestricted.
- Last updated
15.08.2025, 07:26 CEST
Data partner
Deutsche Nationalbibliothek. For questions about this object, please contact the data partner.