NeuReality Licenses IBM’s Low-Precision AI Cores



Israeli AI accelerator company NeuReality has become the first semiconductor startup to license IBM Research's low-precision, high-performance digital AI cores. As part of the partnership, NeuReality will become a member of the IBM Research AI Hardware Center.

IBM's AI Hardware Center was established in 2019 to support the company's target of improving AI computing performance by a factor of 2.5 per year, with an ambitious overall goal of a 1,000-fold improvement in performance efficiency (FLOPS/W) by 2029. IBM has created an ecosystem of commercial and academic partners to help meet those targets.

For now, the IBM Research roadmap focuses on low-precision digital electronics, following years of algorithmic work on maintaining prediction accuracy at 4- and 2-bit precision. Longer term, IBM Research will move to analog computing in an effort to improve performance/power figures. Technologies earmarked for analog computing include resistive RAM (ReRAM) and electrochemical RAM (ECRAM).
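The low-precision work referred to here is about running inference with 4-bit (and eventually 2-bit) integers rather than 16- or 32-bit floating point. As a rough illustration of the general idea only, the Python/NumPy sketch below applies a generic symmetric, per-tensor 4-bit quantization to a matrix multiply and compares it with the full-precision result; the scaling and rounding choices are illustrative assumptions, not IBM's or NeuReality's actual scheme.

# Illustrative sketch only: generic symmetric 4-bit quantization of a matrix
# multiply. Per-tensor scaling and round-to-nearest are assumptions made for
# demonstration; this is not IBM's or NeuReality's actual method.
import numpy as np

def quantize_int4(x: np.ndarray):
    """Map float values to signed 4-bit integers in [-8, 7] with a per-tensor scale."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 7.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)  # stored in int8 containers
    return q, scale

# Compare a quantized matrix multiply against the full-precision reference.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
a = rng.standard_normal((64, 64)).astype(np.float32)

qw, sw = quantize_int4(w)
qa, sa = quantize_int4(a)

# Accumulate in 32-bit integers, then rescale back to floating point.
y_int4 = (qw.astype(np.int32) @ qa.astype(np.int32)).astype(np.float32) * (sw * sa)
y_fp32 = w @ a
print("mean abs error:", np.mean(np.abs(y_int4 - y_fp32)))

Keeping accuracy acceptable at these bit widths is where the algorithmic work lies; the hardware payoff is that 4-bit multiply-accumulate units are far smaller and cheaper per operation than their floating-point counterparts.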

NeuReality founders (L-R) VP VLSI Yossi Kasus, CEO Moshe Tanach, VP Operations Tzvika Shmueli. (Source: NeuReality)

As part of the new partnership with NeuReality, the companies will collaborate on product requirements for NeuReality’s first ASIC, its system-level product and a software development kit.

The partners will also evaluate NeuReality’s products for use in IBM’s hybrid cloud, for workloads including AI, system flows, virtualization, networking and security. Hybrid clouds combine public and private clouds along with on-premises infrastructure offerings.

NeuReality is currently offering its data center inference system as a prototype platform, the NR1-P, built on Xilinx FPGA cards to support software development and system-level validation before its NR1 ASIC becomes available.

“Having the NR1-P FPGA platform available today allows us to develop IBM’s requirements and test them before the NR1 server-on-a-chip’s tapeout,” said NeuReality CEO Moshe Tanach. “Being able to develop, test and optimize complex data center distributed features, such as Kubernetes, networking and security before production is the only way to deliver high quality to our customers.”

The NR1 inference server-on-a-chip will be the first ASIC implementation of NeuReality's AI-centric architecture. The architecture is designed to mitigate system bottlenecks through native AI-over-fabric networking, full AI pipeline offload and hardware-based AI hypervisor capabilities, the company said.

Target sectors include cloud and enterprise data centers, alongside carriers, telecom operators and other near-edge computing applications.

NR1 is expected to become available next year.




