Intel Reveals AI Accelerator Chips for Deep Learning in Data Centers

Source: sdxcentral.com

Intel revealed its first chips designed for artificial intelligence (AI) in large data centers. The Intel Nervana NNP-T and Nervana NNP-I processors are dedicated AI accelerators built to handle increasingly complex deep learning workloads, covering both model training and inference.

The Intel Nervana NNP-I chip, or Spring Hill, is built on Intel’s 10 nanometer (nm) Ice Lake processor technology, which the chipmaker says gives large data centers high-performance computing at a lower energy cost. Spring Hill is designed to run deep learning inference at scale across major data center workloads.

The processor offers increased programmability and includes a dedicated inference accelerator with low latency, fast code porting, and support for all major deep learning frameworks, according to Intel. The company claims Spring Hill leads in performance and power efficiency for major data center inference workloads.
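Intel doesn’t detail the NNP-I software stack here, but a minimal PyTorch sketch illustrates the kind of workload meant by inference at scale: batched forward passes through an already-trained model, with no gradient bookkeeping. The model and batch sizes below are made up for illustration; this is not Intel’s API.

```python
# Generic inference sketch (illustrative; not the NNP-I programming model).
import torch
import torch.nn as nn

# Stand-in for a trained network exported from any major framework.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()  # inference mode: disables dropout, freezes batch-norm stats

batch = torch.randn(64, 512)  # one batch of 64 feature vectors

with torch.no_grad():  # no gradients needed at inference time
    logits = model(batch)
    predictions = logits.argmax(dim=1)  # class index per input

print(predictions.shape)  # torch.Size([64])
```

In a data center, an accelerator would run many such batches concurrently, which is where the latency and power-efficiency claims come in.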

Intel announced the product earlier this year alongside Facebook, one of the chipmaker’s development partners and among the first large enterprises to use the AI chip. Intel began developing AI chips in earnest after it acquired Nervana Systems in 2016. The company says the next two generations of the chip are already under development.

Deep Learning at Scale

The Intel Nervana NNP-T chip, or Spring Crest, is also built on Intel’s 10nm Ice Lake processor technology. The chip is designed to train deep learning models at scale; by that, the chipmaker means it trains networks quickly while keeping energy consumption to a minimum.

Spring Crest is also built to be flexible and balance an enterprise’s needs for computing, communication, and memory. Intel claims the chip can be programmed to accelerate many existing and yet-to-emerge workloads in data centers.
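The article doesn’t spell out how those three demands interact, so here is a rough sketch of data-parallel training, the standard way deep learning models are trained at scale: each worker computes gradients on its shard of a batch (compute), the gradients are averaged across workers (communication), and every replica holds a full copy of the parameters (memory). This is plain PyTorch on CPU with arbitrary sizes, not the NNP-T programming model.

```python
# Illustrative data-parallel training step (not Intel's NNP-T stack).
import copy
import torch
import torch.nn as nn

n_workers = 4
model = nn.Linear(32, 1)  # shared "master" parameters
replicas = [copy.deepcopy(model) for _ in range(n_workers)]

# One global batch, split evenly across workers (the compute phase).
x, y = torch.randn(64, 32), torch.randn(64, 1)
x_shards, y_shards = x.chunk(n_workers), y.chunk(n_workers)

for rep, xs, ys in zip(replicas, x_shards, y_shards):
    loss = nn.functional.mse_loss(rep(xs), ys)
    loss.backward()  # local gradients only

# Average gradients across replicas (the communication phase, an
# "all-reduce" on real hardware) and apply one synchronized update.
lr = 0.01
with torch.no_grad():
    for name, param in model.named_parameters():
        grads = torch.stack(
            [dict(r.named_parameters())[name].grad for r in replicas]
        )
        param -= lr * grads.mean(dim=0)
```

The hardware question is how cheap that averaging step can be made relative to the local gradient computation; dedicated interconnects on training accelerators exist to improve exactly that ratio.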

Research conducted by OpenAI in May 2018 concluded that the amount of compute used in the largest AI training runs has doubled every 3.5 months since 2012. Intel aims to address this exponential increase by focusing on the four primary factors that drive a deep learning training accelerator: power, compute, memory and communication, and scale out (hardware capacity expansion).
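To make the scale of that growth concrete, a quick back-of-the-envelope calculation (the multi-year figures below are arithmetic implied by the doubling time, not numbers from the OpenAI study itself):

```python
# Rough compounding implied by a 3.5-month doubling time.
doubling_time_months = 3.5

for years in (1, 3, 6):
    months = 12 * years
    growth = 2 ** (months / doubling_time_months)
    print(f"{years} year(s): ~{growth:,.0f}x more compute")

# Output (approximate): ~11x after 1 year, ~1,248x after 3 years,
# and ~1.6 millionx after 6 years.
```

A roughly 11x increase per year explains why Intel frames the problem around all four factors at once: no single improvement in power, compute, memory, or interconnect keeps pace with that curve on its own.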

“To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense, and making smarter use of their upstream resources,” said Naveen Rao, VP and GM of Intel’s artificial intelligence group, in a prepared statement. “Data centers and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications.”
