New techniques efficiently accelerate sparse tensor processing for massive AI models

Researchers from MIT and NVIDIA developed two complementary techniques that could dramatically boost the speed of high-performance computing applications such as graph analytics and generative AI. Both new methods seek to efficiently exploit sparsity, or zero values, in the tensors. Credit: Jose-Luis Olivares, MIT

Researchers from MIT and NVIDIA have developed two techniques that accelerate the processing of sparse tensors, a type of data structure used for high-performance computing tasks. The complementary techniques could significantly improve the performance and energy efficiency of systems like the massive machine-learning models that drive generative artificial intelligence.
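To make the idea concrete, here is a minimal sketch of how software can exploit sparsity. It is a generic illustration, not the researchers' actual techniques (which this excerpt does not detail): the mostly-zero matrix is stored in compressed sparse row (CSR) form, so a matrix-vector product touches only the nonzero entries and skips every multiplication by zero.

# Illustrative sketch of exploiting sparsity, not the MIT/NVIDIA methods:
# store only the nonzeros of a matrix (CSR form) and skip all multiplies
# by zero during a matrix-vector product.

def dense_to_csr(matrix):
    """Convert a dense row-major matrix (list of lists) to CSR arrays."""
    values, col_indices, row_ptr = [], [], [0]
    for row in matrix:
        for j, v in enumerate(row):
            if v != 0:                 # keep only nonzero entries
                values.append(v)
                col_indices.append(j)
        row_ptr.append(len(values))    # end of this row within `values`
    return values, col_indices, row_ptr

def csr_matvec(values, col_indices, row_ptr, x):
    """Compute y = A @ x, touching only the stored nonzeros of A."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_indices[k]]
    return y

# A mostly-zero matrix: CSR stores 3 values instead of 12.
A = [[0, 0, 2, 0],
     [0, 0, 0, 0],
     [5, 0, 0, 1]]
vals, cols, rptr = dense_to_csr(A)
print(csr_matvec(vals, cols, rptr, [1.0, 1.0, 1.0, 1.0]))  # -> [2.0, 0.0, 6.0]

In this toy example the multiply loop performs three multiply-adds instead of twelve; the article describes the new methods as applying that same general principle, skipping zero values, at the scale of hardware accelerators.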

Tensors …