|Title:||Decoupled processors architecture for accelerating data intensive applications using scratch-pad memory hierarchy|
|Authors:||Michail, Harris; Milidonis, Athanasios S.|
|Keywords:||Memory hierarchy (Computer science)|
|Issue Date:||2010|
|Publisher:||Springer|
|Source:||Journal of Signal Processing Systems, 2010, Volume 59, Issue 3, Pages 281-296|
|Abstract:||We present an architecture of decoupled processors with a memory hierarchy consisting only of scratch-pad memories and a main memory. This architecture exploits the more efficient prefetching of decoupled processors, which make use of the parallelism between address computation and application data processing that exists mainly in streaming applications. Combined with the ability of scratch-pad memories to store data with no conflict misses and low energy per access, this contributes significantly to increasing the system's performance. The application code is split into two parallel programs: the first runs on the Access processor and computes the addresses of the data in the memory hierarchy; the second runs on the Execute processor, a processor with a limited address space (just the register-file addresses), and processes the application data. Every transfer of a block in the memory hierarchy, up to the Execute processor's register file, is controlled by the Access processor and the DMA units. This strongly differentiates the architecture from traditional uniprocessors and from existing decoupled processors with cache memory hierarchies. The architecture is compared in performance with uniprocessor architectures with (a) scratch-pad and (b) cache memory hierarchies, and with (c) existing decoupled architectures, and shows higher normalized performance. The reason for this gain is the efficient data transfer that the scratch-pad memory hierarchy provides, combined with the ability of decoupled processors to hide memory latency by using memory-management techniques for transferring data instead of fixed prefetching methods. Experimental results show that performance increases up to almost 2 times compared to uniprocessor architectures with scratch-pad memory and up to 3.7 times compared to those with cache. The proposed architecture achieves this performance without penalties in energy-delay product costs.|
|URI:||http://ktisis.cut.ac.cy/handle/10488/7310|
|ISSN:||19398018|
|DOI:||10.1007/s11265-009-0393-9|
|Rights:||© 2009 Springer Science + Business Media, LLC|
|Appears in Collections:||Άρθρα/Articles|
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.