AMD Radeon Open Compute Platform (ROCm) 1.3 launched, Boltzmann coming to fulfillment?

AMD recently announced that version 1.3 of the Radeon Open Compute Platform (ROCm) has been launched and that the Heterogeneous-compute Interface for Portability (HIP) has made significant progress.

According to the official Radeon Open Compute website, SC, the annual ACM/IEEE-sponsored supercomputing conference in the United States, has begun, and AMD is continuing to build an ecosystem that can compete with, and even interoperate with, CUDA.

At the same show last year, AMD announced the Boltzmann Initiative, its ambitious plan to overhaul its HPC software stack for GPUs. Recognizing that much of NVIDIA's continued success has been due to the quality of the CUDA software ecosystem, AMD set out to create an ecosystem that can compete and interoperate with CUDA, closing the software gap between the two companies.

AMD is using the show to update attendees on the current state of Boltzmann and to deliver the latest software release of the project, which now goes under the name of the Radeon Open Compute Platform, or ROCm for short.

During SC16, AMD announced that ROCm 1.3 has been launched, bringing the company much closer to completing the Boltzmann Initiative.

AMD released the initial 1.0 version of ROCm in April. That first version covered only a small part of the complete scope of the Boltzmann Initiative, which is why the earlier releases of ROCm were still considered beta software.

AMD is also using SC16 to reveal its plans regarding Zen support, along with upcoming support for ARMv8 AArch64 and IBM POWER8, reports TechFrag.

A year on from the announcement, AMD has now introduced OpenCL 1.2+ support for the platform. Support for the new Polaris GPUs has also been added, covering both the RX 400 series and the Radeon Pro series.

ROCm 1.3 introduces several new features, including support for 16-bit floating-point and integer formats within the platform. This is not the same as packed 16-bit formats, which would let AMD's 32-bit ALUs process two 16-bit operations at once, gaining FLOPS over 32-bit instructions.
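As a rough illustration of what 16-bit floating-point support looks like from the programmer's side, here is a minimal HIP-style sketch. It is not code from AMD's release notes; the kernel name, sizes, and the use of the __half conversion intrinsics from hip_fp16.h are all assumptions for the example, and it simply rounds values through 16-bit precision before doing the arithmetic in 32-bit float.

```cpp
// Hypothetical sketch: touching 16-bit floating point from a HIP kernel.
// Assumes a ROCm/HIP toolchain that provides hip/hip_runtime.h and hip/hip_fp16.h.
#include <hip/hip_runtime.h>
#include <hip/hip_fp16.h>
#include <vector>
#include <cstdio>

__global__ void scale_via_half(const float* in, float* out, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Round the input down to 16-bit precision, then do the math as float.
        __half h = __float2half(in[i]);
        out[i] = __half2float(h) * factor;
    }
}

int main() {
    const int n = 1024;
    std::vector<float> h_in(n, 1.5f), h_out(n, 0.0f);

    float *d_in = nullptr, *d_out = nullptr;
    hipMalloc(&d_in, n * sizeof(float));
    hipMalloc(&d_out, n * sizeof(float));
    hipMemcpy(d_in, h_in.data(), n * sizeof(float), hipMemcpyHostToDevice);

    hipLaunchKernelGGL(scale_via_half, dim3((n + 255) / 256), dim3(256), 0, 0,
                       d_in, d_out, 2.0f, n);

    hipMemcpy(h_out.data(), d_out, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("out[0] = %f\n", h_out[0]);  // expect roughly 3.0

    hipFree(d_in);
    hipFree(d_out);
    return 0;
}
```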

According to the official Radeon Open Compute website, HCC is a C++ dialect with extensions for launching kernels and managing accelerator memory. HIP is another C++ dialect designed to make it easy to convert CUDA applications into portable C++ code. HIP can also be used for new projects that need portability between AMD and NVIDIA hardware, as the sketch below illustrates.
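Below is a minimal sketch of what such portable code looks like: a single vector-add kernel written in the HIP dialect, which can be compiled for AMD GPUs (through the HCC path) or NVIDIA GPUs (through nvcc). The kernel name, sizes, and launch configuration are illustrative assumptions, and the exact kernel-launch macro has varied across HIP releases.

```cpp
// Hypothetical HIP sketch: one kernel, portable across AMD and NVIDIA GPUs.
#include <hip/hip_runtime.h>
#include <vector>
#include <cstdio>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

    float *da = nullptr, *db = nullptr, *dc = nullptr;
    hipMalloc(&da, n * sizeof(float));
    hipMalloc(&db, n * sizeof(float));
    hipMalloc(&dc, n * sizeof(float));

    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // HIP's launch macro plays the role of CUDA's <<<grid, block>>> syntax.
    hipLaunchKernelGGL(vector_add, dim3((n + 255) / 256), dim3(256), 0, 0,
                       da, db, dc, n);

    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    hipFree(da);
    hipFree(db);
    hipFree(dc);
    return 0;
}
```

Existing CUDA code can also be translated into this dialect mechanically, which is the conversion path HIP was designed around.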

With the Radeon Open Compute Platform, AMD may be better placed to compete with NVIDIA. CUDA has been successful so far, but AMD could be onto something here, and the company might actually close the software gap.

Tags: AMD