Several recent reports stemming from the keynote at last week’s JavaOne conference indicate that IBM is working towards enabling GPU acceleration with Java, one of the most popular programming languages used in software development. This shortly follows the establishment of the OpenPOWER Consortium, an open development alliance based on IBM’s Power architecture, in which an announcement was made that IBM and Nvidia will work together to integrate the Power and CUDA GPU ecosystems. At this point, it’s hard not to think about the competitive strategies unfolding among other industry leaders such as AMD, ARM, and Intel to capture the burgeoning server markets for high-performance computing (HPC) and big data applications. The end result will determine more than who carves out the most revenue; it will also decide the fate of several processor architectures and their reach into new enterprise applications.
First, from a software standpoint, IBM and Nvidia’s recent announcements will surely attract and retain a lot of developers. Not only does the collaboration enable two pools of (typically deeply invested) software developers to pursue more application opportunities, it will also add support for Java, a common high-level language in which OEMs can easily acquire or build out expertise. Nvidia is no newcomer to GPU acceleration in demanding big data applications, and teaming up with IBM to offer a more holistic solution featuring more than its own proprietary CUDA language and GPU technology will improve integration with prospective businesses’ requirements and objectives.
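The appeal for Java developers is easiest to see in code. The sketch below is a hypothetical illustration (the class and method names are my own, not from any IBM or Nvidia announcement): Java’s stream API, arriving in Java 8, lets a programmer express data parallelism declaratively, which is exactly the kind of construct a CUDA-backed JVM could one day offload to a GPU without the programmer writing any CUDA.

```java
import java.util.stream.IntStream;

// A data-parallel reduction of the sort a GPU-enabled JVM might offload.
// The code only declares *what* is parallel; where it runs (CPU cores
// today, hypothetically a GPU under an IBM/Nvidia runtime) is left to
// the JVM.
public class DotProduct {
    static double dot(double[] a, double[] b) {
        return IntStream.range(0, a.length)
                        .parallel()                    // declares parallelism
                        .mapToDouble(i -> a[i] * b[i]) // elementwise multiply
                        .sum();                        // parallel reduction
    }

    public static void main(String[] args) {
        double[] a = {1.0, 2.0, 3.0};
        double[] b = {4.0, 5.0, 6.0};
        System.out.println(dot(a, b)); // prints 32.0
    }
}
```

The design point is that nothing in this source is GPU-specific, so existing Java expertise carries over unchanged if the runtime gains GPU offload.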
On the other end, we have the splintering relationship between AMD and Intel. Traditionally an x86-only supplier, AMD will also roll out the first 64-bit ARM processors for datacenters in 2014, and will supply “processor-agnostic” SeaMicro servers and data center technology. Despite these sudden, uncomfortable ripples from AMD (which will continue to supply x86 server chips as well), Intel’s x86 ecosystem is still widely used in datacenters around the world. Furthermore, the company has its own line of computing products to compete with GPU acceleration – the Xeon Phi coprocessors, launched in mid-2012. Intel’s processors and coprocessors use common languages, models, and tools – maximizing developer support and preserving software expertise across the Xeon product line.
IBM and Nvidia’s growing relationship is hugely important for each company. Neither on its own would likely make a significant impact in the markets for large and hyperscale computing in the face of Intel’s and ARM’s growing influence. However, bridging popular CPU and GPU architectures will bode well for CUDA and Power as OEMs increasingly employ heterogeneous architectures to gain the flexibility of processing serial and parallel workloads simultaneously.