IoT & Embedded Technology Blog

Graphics Titans Share the Cache and Heterogeneous Architectures Gain from It

by Daniel Mandell | 06/30/2013

Heterogeneous computing is picking up steam with the latest announcements surrounding AMD’s Kaveri APU and NVIDIA’s Maxwell GPU. Both chips will employ unified memory technologies in which the CPU and GPU can access main memory in tandem. The vast benefits of bringing different processing technologies closer together are why both of these graphics titans continue increasing support and unveiling products featuring heterogeneous architectures. This is why heterogeneous computing will continue to gain traction across several applications, and why embedded end users should prepare for it now.

AMD has done much to prepare for Kaveri, which features the company’s second-generation Heterogeneous System Architecture (HSA), including Heterogeneous Uniform Memory Access (hUMA). AMD’s hUMA allows for easier sharing of resources within the upcoming Kaveri APU through features such as coherent memory and pageable memory. The company leads the HSA Foundation, a non-profit consortium of semiconductor players with the common goal of simplifying heterogeneous programming – the leading barrier to adoption. The foundation released its first specification in May 2013 and continues to garner contributors and support.

Unified Virtual Memory, NVIDIA’s memory enhancement for the Maxwell GPU, will likewise share DDR main memory between CPU and GPU to enable better memory handling and increased effective memory bandwidth. Maxwell will be NVIDIA’s first GPU architecture able to access the CPU’s main memory directly. NVIDIA’s CUDA parallel computing platform enhances the programmability of heterogeneous platforms, with support for popular languages such as C, C++, C#, Fortran, Java, Python, and more.
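To see why a shared view of memory simplifies heterogeneous programming, consider a minimal CUDA sketch using managed memory – the `cudaMallocManaged` API NVIDIA later shipped for exactly this programming model. (This is an illustrative sketch, not code from NVIDIA’s announcement; it assumes a CUDA-capable GPU and toolkit.)

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: increment every element in place.
__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 1024;
    int *data;

    // One allocation visible to both CPU and GPU. Without unified
    // memory, this would be two buffers plus explicit cudaMemcpy
    // calls in each direction.
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i) data[i] = i;      // CPU writes...
    increment<<<(n + 255) / 256, 256>>>(data, n); // ...GPU updates in place...
    cudaDeviceSynchronize();
    printf("data[5] = %d\n", data[5]);            // ...CPU reads the result.

    cudaFree(data);
    return 0;
}
```

The point of the model is in the comments: the copy-management boilerplate that dominates traditional GPU code disappears, which is precisely the adoption barrier the HSA Foundation and NVIDIA are attacking.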

Heterogeneous computing offers substantial benefits for applications such as console gaming, high-performance computing, life sciences, and more. These upcoming technologies will force end users and developers to adapt to new programming models and interfaces. However, AMD and NVIDIA recognize that their end markets will only buy what they are capable of using, and both will keep pushing support for their respective programming platforms – which is inevitably pushing us towards a heterogeneous future.