Tired of Slow Machine Learning? AWS Introduces New Superfast Chips

Amazon has been working hard on custom silicon as more businesses move their workloads to purpose-built chips. To accelerate machine learning inference, the firm launched the Inferentia chip in 2019.

Then, last year, AWS unveiled Trainium, its second machine learning chip, designed specifically for training models. Today, AWS built on that work by introducing Trn1, a new EC2 instance powered by Trainium.

This morning, at the AWS re:Invent 2021 conference in Las Vegas, AWS CEO Adam Selipsky announced the new instance type.

“So today, I’m excited to announce the new Trn1 instance powered by Trainium, which we expect to deliver the best price-performance for training deep learning models in the cloud and the fastest on EC2,” Selipsky told the re:Invent audience.

“Trn1 is the first EC2 instance with up to 800 gigabits per second networking bandwidth. So it’s absolutely great for large-scale, multi-node distributed training use cases.”
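
For developers, requesting one of the new instances should look like any other EC2 launch. Below is a minimal sketch using boto3; the AMI ID is a placeholder, and treating “trn1.32xlarge” as the size exposed by the new family is an assumption, not something confirmed in the announcement.

```python
# Minimal sketch (not an official AWS example) of launching a Trn1 instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-XXXXXXXXXXXXXXXXX",  # placeholder: e.g. a Deep Learning AMI
    InstanceType="trn1.32xlarge",     # assumption: largest size in the Trn1 family
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```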

He explained that it should be ideal for use cases like image recognition, natural language processing, fraud detection, and prediction.
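
In practice, training for these use cases goes through familiar frameworks: PyTorch reaches Trainium via AWS’s Neuron SDK and the XLA compiler path. The sketch below is a generic XLA-device training loop; that `xm.xla_device()` maps to a Trainium core on Trn1 is an assumption for illustration, not an AWS-documented guarantee.

```python
# Bare-bones classifier training loop on an XLA device (sketch, assumptions noted above).
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # assumed to resolve to a Trainium core on Trn1

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):
    x = torch.randn(64, 784).to(device)        # stand-in batch
    y = torch.randint(0, 10, (64,)).to(device)  # stand-in labels
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    xm.mark_step()  # flush the lazily built XLA graph to the accelerator
```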

You can link as many as 16 Trainium chips together within a single instance for even more powerful computing capabilities.

“We can network these together in what we call UltraClusters, consisting of tens of thousands of training accelerators interconnected with petabit-scale networking. These training UltraClusters give you access to a powerful machine learning supercomputer for rapidly training the most complex deep learning models with trillions of parameters,” Selipsky said.
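
To make that scale concrete, the workload such clusters accelerate is multi-node data-parallel training, where each worker holds a model replica and gradients are synchronized over the network every step. Here is a generic sketch with stock torch.distributed; a real Trn1 deployment would presumably swap in Neuron-specific devices and collectives, which is an assumption on my part.

```python
# Multi-node data-parallel training pattern (generic sketch, see assumptions above).
# Launch one process per worker, e.g.: torchrun --nnodes=2 --nproc_per_node=4 train.py
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun supplies RANK / WORLD_SIZE / MASTER_ADDR to each worker
    dist.init_process_group(backend="gloo")  # CPU backend chosen for portability

    model = torch.nn.Linear(784, 10)
    ddp_model = DDP(model)  # wraps the replica; gradients are all-reduced across workers
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(10):
        x = torch.randn(64, 784)              # stand-in batch, different per worker
        y = torch.randint(0, 10, (64,))
        opt.zero_grad()
        loss_fn(ddp_model(x), y).backward()   # backward() triggers the gradient all-reduce
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```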

The team also announced that it would collaborate with partners like SAP to take advantage of this increased computing capacity.