Sid Sheth, founder and CEO of d-Matrix, talks with theCUBE Research's John Furrier at SC24 about the company's launch of an AI inference accelerator card designed to improve efficiency and cut the energy cost of AI workloads. The solution targets industry challenges such as bandwidth limitations and constrained memory capacity. By collaborating with a range of server vendors, d-Matrix ensures flexibility and compatibility without requiring special server configurations, making the card accessible for diverse enterprise applications.
Follow theCUBE’s article coverage of SC24 siliconangle.c...
The accelerator card is tailored specifically for inference, optimizing performance for large models, including those with hundreds of billions of parameters, according to Sheth. The company emphasizes user experience, interactivity and cost-efficiency, claiming the card outperforms competitors in these areas. d-Matrix has partnered with companies such as Liquid AI and Super Micro Computer to extend the card's capabilities. The card enables real-time data processing, addressing the evolving needs of enterprises in a rapidly advancing AI landscape, he added.
Check out the full article siliconangle.c...
For more of theCUBE's event coverage www.thecube.net/
Catch up on theCUBE's video coverage of SC24 • 83. LIVE from SC24! Is...
#SC24 #theCUBE #theCUBEResearch #AIInference #AI #dMatrix #LiquidAI #SuperMicroComputer