New Delhi: Google said it has developed the fastest machine learning (ML) training supercomputer in the world, one that broke AI records in six out of eight industry-leading MLPerf benchmarks.
“The latest results from the industry-standard MLPerf benchmark competition show that Google has built the world’s fastest supercomputer for ML training. Google set performance records in six out of eight MLPerf benchmarks using this supercomputer as well as our latest Tensor Processing Unit (TPU) chip,” a Google blog post said.
Google said ML model implementations in TensorFlow, JAX, and Lingvo achieved those results. Four of the eight models were trained from scratch in under 30 seconds.
Google’s blog explains, “… remember that in 2015, training one of these models on the most sophisticated hardware accelerator available took more than three weeks. Just five years later, Google’s new TPU supercomputer can train the same model nearly five orders of magnitude faster.”
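The two numbers quoted above are consistent with each other, as a quick back-of-the-envelope check shows (assuming "more than three weeks" means roughly 21 days):

```python
# Back-of-the-envelope check of the claimed speedup.
# Assumption: "more than three weeks" ~ 21 days of training in 2015.
weeks_2015 = 3
seconds_2015 = weeks_2015 * 7 * 24 * 60 * 60  # 1,814,400 seconds

# "Nearly five orders of magnitude faster" ~ a 10^5x speedup.
speedup = 10 ** 5
seconds_2020 = seconds_2015 / speedup

print(seconds_2020)  # 18.144 seconds, consistent with "under 30 seconds"
```

A 10^5x speedup turns a three-week run into roughly 18 seconds, which matches the sub-30-second training times reported for four of the eight models.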
MLPerf models are chosen to reflect cutting-edge machine learning workloads common across industry and academia. The supercomputer Google used for this MLPerf training round is four times the size of the Cloud TPU v3 Pod that set three records in the previous competition.
The system includes 4,096 TPU v3 chips and hundreds of CPU host machines, all connected through a custom ultra-fast, large-scale interconnect. In total, it delivers a peak performance of over 430 petaflops (PFLOPS).
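The 430 PFLOPS figure lines up with the chip count, as a rough sanity check illustrates (assuming roughly 105 TFLOPS per TPU v3 chip, derived from Google's publicly quoted 420 TFLOPS per four-chip Cloud TPU v3 device):

```python
# Rough peak-throughput check for the 4,096-chip system.
# Assumption: ~105 TFLOPS per TPU v3 chip (420 TFLOPS per 4-chip board).
chips = 4096
tflops_per_chip = 105

peak_pflops = chips * tflops_per_chip / 1000  # convert TFLOPS -> PFLOPS
print(peak_pflops)  # 430.08, i.e. "over 430 PFLOPS"
```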
Google said its submissions for MLPerf Training v0.7 demonstrate its commitment to advancing machine learning research and development at scale, and to delivering those advances to users through open-source software, Google products, and Google Cloud.