
Detailed record

RNNFast

An Accelerator for Recurrent Neural Networks Using Domain-Wall Memory

Article written by: Samavatian, Mohammad Hossein; Zhou, Li; Bacha, Anys; Teodorescu, Radu

Abstract: Recurrent Neural Networks (RNNs) are an important class of neural networks designed to retain and incorporate context into current decisions. RNNs are particularly well suited for machine learning problems in which context is important, such as speech recognition and language translation. This work presents RNNFast, a hardware accelerator for RNNs that leverages an emerging class of nonvolatile memory called domain-wall memory (DWM). We show that DWM is very well suited for RNN acceleration due to its very high density and low read/write energy. At the same time, the sequential nature of input/weight processing in RNNs mitigates one of the downsides of DWM, which is its linear (rather than constant) data access time. RNNFast is very efficient and highly scalable, with flexible mapping of logical neurons to RNN hardware blocks. The basic hardware primitive, the RNN processing element (PE), includes custom DWM-based multiplication, sigmoid, and tanh units for high density and low energy. The accelerator is designed to minimize data movement by closely interleaving DWM storage and computation. We compare our design with a state-of-the-art GPGPU and find 21.8× higher performance with 70× lower energy.
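The abstract's key architectural observation is that RNN inference walks through inputs and weights in a fixed order, which matches DWM's shift-based (linear) access pattern. The following minimal NumPy sketch illustrates that sequential recurrence only; all names and shapes are illustrative and are not taken from the paper, and it models nothing about the DWM hardware itself.

    # Illustrative sketch of the sequential recurrence the abstract describes.
    # Names and dimensions are hypothetical, not from the RNNFast paper.
    import numpy as np

    def rnn_forward(xs, W_xh, W_hh, b_h):
        """Vanilla RNN: each step consumes one input and the previous
        hidden state, so inputs and weights are streamed strictly in order."""
        h = np.zeros(W_hh.shape[0])
        for x in xs:                      # step t depends on step t-1
            h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        return h

    rng = np.random.default_rng(0)
    T, d_in, d_h = 5, 4, 8                # toy sequence length and sizes
    xs = rng.standard_normal((T, d_in))
    h_final = rnn_forward(xs,
                          rng.standard_normal((d_h, d_in)),
                          rng.standard_normal((d_h, d_h)),
                          rng.standard_normal(d_h))
    print(h_final.shape)                  # (8,)

Because the loop never revisits earlier inputs, a memory whose access cost grows with seek distance (like DWM) pays little penalty for this workload, which is the mitigation the abstract points to.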


Language: English