Nvidia Brings Native CUDA Support to RISC-V CPUs

Nvidia is introducing CUDA support for RISC-V, a major milestone in the development of open-source compute architectures. Nvidia has had its eye on the ARM architecture for some time now, and the inclusion in the SFC listing suggests a keen interest in extending the CUDA ecosystem beyond its own closed platforms.

The announcement was delivered by Frans Sijstermans, Nvidia’s VP of Hardware Engineering and a member of the RISC-V board of directors, at the RISC-V summit in China. Per the presentation, RISC-V processors will be able to run the CUDA driver stack at the OS level, with CUDA kernels executing on Nvidia’s GPUs. The configuration shown also includes a DPU (presumably Nvidia-manufactured), suggesting an HPC- and data-center-focused architecture.
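In practical terms, this mirrors how CUDA already divides work between host and device: the host CPU, whatever its instruction set, runs the driver and launches kernels, while the GPU executes them. A minimal sketch of that split, with illustrative names, assuming a standard CUDA toolchain targeting a RISC-V host:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: runs on the Nvidia GPU, independent of the host's ISA.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 10;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    // Host code: compiled for the host CPU (x86, ARM, or, per the
    // announcement, RISC-V); it talks to the GPU via the CUDA driver.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    // Launch on the device, then wait for completion on the host.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[10] = %.1f\n", c[10]);  // 10 + 20 = 30.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Only the host-side portion needs a RISC-V build of the CUDA driver and runtime; the device code is already ISA-agnostic, which is what makes porting the host stack the key enabler.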


Why CUDA on RISC-V Is a Big Deal

Introducing CUDA support for RISC-V may be Nvidia’s response to the architecture's growing popularity in regions like China, especially given U.S. export controls that limit the sale of AI accelerators such as the Nvidia GB200 and GB300. The open nature of RISC-V means companies can build custom processors from the specification without licensing fees, and Nvidia’s CUDA support for the architecture could open new avenues in HPC, AI, and data centers.

This isn’t Nvidia’s first flirtation with RISC-V. Nearly a year ago, researchers were able to execute CUDA code on a RISC-V-based Vortex GPGPU via an OpenCL translation layer. It was not particularly efficient, but it demonstrated that compiling Nvidia’s CUDA code to run on RISC-V was technically feasible.

Competitive Landscape: AMD and ROCm

AMD has also developed its own compute platform, ROCm, now in its seventh iteration, as an ongoing challenge to Nvidia’s hegemony. As the open-source answer to CUDA, ROCm may well add RISC-V support once the ecosystem matures. So far, though, ROCm has seen only modest traction, and native CUDA support on RISC-V could solidify Nvidia’s ecosystem lead, particularly in regions keen to embrace alternatives to the x86 and ARM architectures.

A Strategic Shift in Compute Architecture

With its support for CUDA on RISC-V, Nvidia has thrown its weight behind an open-source architecture movement. The shift gives RISC-V even greater technological flexibility and allows it to compete in up-and-coming regions where its adoption is growing. Nvidia’s broader embrace of CUDA could also signal to developers and hardware makers alike to build more solutions in the open compute-stack mold.

This deliberate expansion of CUDA support coincides with a wider tech industry trend toward open and flexible compute platforms. Nvidia’s support for RISC-V also further illustrates the viability of the architecture and its growing importance in the larger compute market.

FAQ

What is the importance of CUDA support for RISC-V?

It means RISC-V processors can now run Nvidia’s CUDA drivers and interoperate with Nvidia GPUs for compute workloads, opening up more options for open compute platforms.

Why is Nvidia adding RISC-V support for CUDA now?

Nvidia is expanding the CUDA ecosystem in response to U.S. export restrictions and the growing use of RISC-V in China and other markets.

Is RISC-V’s support for CUDA first class?

Yes. Nvidia is now providing a direct path, unlike earlier indirect approaches such as OpenCL-based translation.

How does AMD’s ROCm stack up to CUDA support?

ROCm is AMD’s open equivalent of CUDA, and it may also come to RISC-V, but it still has a comparatively small user base.
