Analyzing Software Cache Configuration for In-line Data Compression
ACM Student Research Competition: Graduate Poster
ACM Student Research Competition: Undergraduate Poster
Time: Thursday, 18 November 2021, 8:30am - 5pm CST
Location: Second Floor Atrium
Description: To compute on or analyze larger data sets, applications need access to large amounts of DRAM. Increasing the size of memory requires a costly hardware upgrade; compressing data structures stored in memory does not. Inline compression compresses and decompresses data as it moves into and out of the application's working set. Naïve inline compression performs a compression or decompression operation on every memory access, which significantly hurts performance. Caching decompressed values in a software-managed cache limits the number of compression/decompression operations, and the structure of that cache affects application performance. In this poster, we build and use a compression-cache simulator to analyze various cache configurations for an application. We evaluate direct-mapped and set-associative caches on five HPC kernels. Results show that the hit rate increases as the cache size increases, and that increasing cache associativity also improves the hit rate.
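The kind of simulation described above can be sketched as follows. This is a minimal illustrative model, not the poster's actual simulator: the class name, parameters, and LRU replacement policy are assumptions. A set-associative cache of decompressed blocks is modeled per set with an LRU-ordered map; setting one way per set yields a direct-mapped cache.

```python
from collections import OrderedDict

class CompressionCacheSim:
    """Toy software-cache simulator: tracks the hit rate of a
    set-associative cache of decompressed blocks (direct-mapped
    when ways == 1). Illustrative only; LRU replacement assumed."""

    def __init__(self, num_sets, ways):
        self.num_sets = num_sets
        self.ways = ways
        # Each set is an LRU-ordered map from block tag -> placeholder.
        self.sets = [OrderedDict() for _ in range(num_sets)]
        self.hits = 0
        self.accesses = 0

    def access(self, block_addr):
        """Simulate one access to a (decompressed) block address."""
        self.accesses += 1
        s = self.sets[block_addr % self.num_sets]
        tag = block_addr // self.num_sets
        hit = tag in s
        if hit:
            self.hits += 1
            s.move_to_end(tag)          # refresh LRU position on a hit
        else:
            if len(s) >= self.ways:     # miss with a full set: evict the
                s.popitem(last=False)   # LRU block (drop its decompressed copy)
            s[tag] = None               # decompress-on-miss fills the cache
        return hit

    def hit_rate(self):
        return self.hits / self.accesses if self.accesses else 0.0

# Example: four sequential sweeps over 64 blocks; once the working set
# fits in a set (enough ways), later sweeps hit instead of thrashing.
trace = [i % 64 for i in range(256)]
for sets, ways in [(16, 1), (16, 2), (16, 4)]:
    sim = CompressionCacheSim(sets, ways)
    for addr in trace:
        sim.access(addr)
    print(f"{sets} sets x {ways} ways: hit rate {sim.hit_rate():.2f}")
```

Replaying the same address trace through different configurations is enough to compare hit rates, which is the quantity the poster's results are stated in.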