Optimization of Multi-Level Operation in RRAM Arrays for In-Memory Computing

dc.bibliographicCitation.firstPage: 1084
dc.bibliographicCitation.issue: 9
dc.bibliographicCitation.journalTitle: Electronics : open access journal
dc.bibliographicCitation.volume: 10
dc.contributor.author: Pérez, Eduardo
dc.contributor.author: Pérez-Ávila, Antonio Javier
dc.contributor.author: Romero-Zaliz, Rocío
dc.contributor.author: Mahadevaiah, Mamathamba Kalishettyhalli
dc.contributor.author: Pérez-Bosch Quesada, Emilio
dc.contributor.author: Roldán, Juan Bautista
dc.contributor.author: Jiménez-Molinos, Francisco
dc.contributor.author: Wenger, Christian
dc.date.accessioned: 2022-01-21T09:36:06Z
dc.date.available: 2022-01-21T09:36:06Z
dc.date.issued: 2021
dc.description.abstract: Accomplishing multi-level programming in resistive random access memory (RRAM) arrays with truly discrete and linearly spaced conductive levels is crucial for implementing synaptic weights in hardware-based neuromorphic systems. In this paper, we implemented this feature on 4-kbit 1T1R RRAM arrays by tuning the programming parameters of the multi-level incremental step pulse with verify algorithm (M-ISPVA). The optimized parameter set was assessed by comparing its results with a non-optimized one. The optimized parameters proved to be an effective way to define non-overlapping conductive levels, owing to a strong reduction of both device-to-device and cycle-to-cycle variability, assessed by inter-level switching tests and during 1 k reset-set cycles. To evaluate this improvement in realistic scenarios, the experimental characteristics of the RRAM devices were captured by a behavioral model, which was used to simulate two different neuromorphic systems: an 8 × 8 vector-matrix-multiplication (VMM) accelerator and a 4-layer feedforward neural network for MNIST database recognition. The results clearly showed that optimizing the programming parameters improved both the precision of the VMM results and the recognition accuracy of the neural network by about 6% compared with the use of non-optimized parameters.
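The abstract's central idea — weights stored as a few discrete RRAM conductance levels, with VMM output precision degraded by device-to-device variability — can be illustrated with a toy numerical sketch. The level values, variability spreads, and the `program_array` helper below are hypothetical assumptions for illustration, not taken from the paper:

```python
import numpy as np

# Illustrative sketch: an RRAM crossbar computes a vector-matrix
# multiplication in the analog domain as I = V @ G, where each weight
# is one of a few discrete, linearly spaced conductance levels.

rng = np.random.default_rng(0)

# Four hypothetical linearly spaced conductance levels (siemens).
LEVELS = np.array([10e-6, 30e-6, 50e-6, 70e-6])

def program_array(targets, sigma):
    """Map target level indices to conductances, adding device-to-device
    variability as a Gaussian spread around each nominal level."""
    g = LEVELS[targets]
    return g + rng.normal(0.0, sigma, size=g.shape)

targets = rng.integers(0, len(LEVELS), size=(8, 8))  # 8x8 array, as in the paper
v_in = rng.uniform(0.0, 0.2, size=8)                 # read voltages (V)

ideal = v_in @ LEVELS[targets]                       # exact VMM, nominal levels
loose = v_in @ program_array(targets, sigma=8e-6)    # wide spread: overlapping levels
tight = v_in @ program_array(targets, sigma=1e-6)    # narrow spread: well-separated levels

def rel_err(out):
    return np.abs(out - ideal).mean() / np.abs(ideal).mean()

print(f"relative VMM error, wide spread:   {rel_err(loose):.3f}")
print(f"relative VMM error, narrow spread: {rel_err(tight):.3f}")
```

Tightening the per-level spread (as the optimized M-ISPVA parameters do in the paper) directly reduces the error of the analog dot products, which is the mechanism behind the reported precision and accuracy gains.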
dc.description.fonds: Leibniz_Fonds
dc.description.version: publishedVersion
dc.identifier.uri: https://oa.tib.eu/renate/handle/123456789/7884
dc.identifier.uri: https://doi.org/10.34657/6925
dc.language.iso: eng
dc.publisher: Basel : MDPI
dc.relation.doi: https://doi.org/10.3390/electronics10091084
dc.relation.essn: 2079-9292
dc.rights.license: CC BY 4.0
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject.ddc: 530
dc.subject.other: In-memory computing
dc.subject.other: Inter-levels switching
dc.subject.other: Multi-level
dc.subject.other: Programming algorithm
dc.subject.other: RRAM arrays
dc.subject.other: Vector-matrix-multiplication
dc.title: Optimization of Multi-Level Operation in RRAM Arrays for In-Memory Computing
dc.type: Article
dc.type: Text
tib.accessRights: openAccess
wgl.contributor: IHP
wgl.subject: Physics
wgl.type: Journal article
Files:
Name: Optimization of multi-level operation in rram arrays for in-memory computing.pdf
Size: 4.42 MB
Format: Adobe Portable Document Format