A master's thesis at the College of Engineering, University of Basra, discusses the design of a high-performance dual-mode multiply-accumulate (MAC) unit for convolutional neural network models.

The master's thesis was defended by student Fatima Tariq Hussein of the Department of Computer Engineering, College of Engineering, University of Basra, under the title "Design of a High-Performance Dual-Mode Multiply-Accumulate (MAC) Unit for Convolutional Neural Network Models". The thesis notes that convolutional neural networks (CNNs) are used on a large scale in many areas of life, and that CNN models require numerically intensive computation performed efficiently and with precise timing to carry out data-flow processing tasks effectively. Multiply-accumulate (MAC) operations are the core of CNN models, and the performance of a CNN depends greatly on the performance of its MAC block, specifically on the speed of the multiplication operations inside the MAC unit.
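As a rough illustration (not taken from the thesis itself), the short Python sketch below shows why MAC operations dominate CNN workloads: every output value of a convolution layer is produced by repeatedly multiplying a weight by an activation and adding the product to an accumulator. The function name and sample values are illustrative assumptions only.

```python
# Minimal sketch: one CNN output value is a chain of multiply-accumulate (MAC)
# operations over the kernel window (hypothetical example, not the thesis code).
def conv_output_value(window, kernel):
    acc = 0
    for activation, weight in zip(window, kernel):
        acc += activation * weight   # one MAC per weight/activation pair
    return acc

# A 3x3 window flattened to 9 values needs 9 MAC operations for one output.
print(conv_output_value([1, 2, 3, 4, 5, 6, 7, 8, 9],
                        [0, 1, 0, 1, -4, 1, 0, 1, 0]))
```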
To meet the requirements of advanced CNN applications, the work proposes a new, flexible MAC unit architecture for CNN hardware accelerators that can operate on both fixed-point (fx-pt) and floating-point (fl-pt) numbers. The AVM architecture also adopts the use of only a single n-bit carry-save adder (CSA) to generate the two partial vectors of the multiplication result, instead of using more than one type of adder.
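For readers unfamiliar with carry-save addition, the hedged Python sketch below illustrates the general CSA idea the abstract refers to: each stage reduces three operands to a sum vector and a carry vector without propagating carries, so the partial products of a multiplication can be compressed into two partial result vectors that are combined by one final addition. The function names and bit-level formulation are illustrative assumptions, not the thesis design.

```python
# Hedged sketch of carry-save addition (illustrative, not the AVM circuit).
def csa(a, b, c):
    s = a ^ b ^ c                                  # bitwise sum vector
    carry = ((a & b) | (a & c) | (b & c)) << 1     # carry vector, shifted left
    return s, carry

def reduce_partial_products(pps):
    # Fold all partial products down to two vectors using only CSA stages.
    s, c = pps[0], 0
    for pp in pps[1:]:
        s, c = csa(s, c, pp)
    return s, c                                    # the two partial result vectors

# Example: 13 * 11 via shifted partial products.
a, b = 13, 11
pps = [a << i for i in range(b.bit_length()) if (b >> i) & 1]
s, c = reduce_partial_products(pps)
print(s + c)   # single final carry-propagate addition -> 143
```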
The results showed that the modified 16x16-bit Vedic multiplier (AVM) achieved a reduction in delay of 18.44% and a reduction in area of 459.29% compared to its most recent counterparts. Likewise, the modified 42x42-bit AVM multiplier achieved reductions in delay and area of about 16.33% and 81.36%, respectively, compared to its counterpart designs on the same FPGA family.