Google unveils plans for a cloud-based “hypercomputer” and new AI processors.

Its new AI accelerator is the Cloud TPU v5p


2023 has unquestionably been the year of generative AI, and Google is rounding out the year with yet more AI announcements.

Google Cloud has unveiled the Cloud TPU v5p, the company’s most powerful Tensor Processing Unit (TPU) to date, alongside a new AI Hypercomputer. “There are more demands for training, tuning, and inference due to the growth in [generative] AI models, with a tenfold increase in parameters annually over the past five years,” said Amin Vahdat, Engineering Fellow and Vice President of Google’s Machine Learning, Systems, and Cloud AI team, in a statement.

The Cloud TPU v5p is an AI accelerator for both model training and serving. In designing Cloud TPUs, Google targeted workloads with large models, long training times, compute dominated by matrix multiplications, and no custom operations inside the main training loop, i.e., models expressed in standard frameworks such as TensorFlow or JAX. Each TPU v5p pod ties together 8,960 chips over Google’s highest-bandwidth inter-chip interconnect.
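As a rough illustration of that kind of workload, here is a minimal JAX sketch of a training step dominated by dense matrix multiplications and written entirely in standard JAX operations. The layer sizes, batch shape, and learning rate are hypothetical and not tied to any TPU specification.

```python
# Minimal sketch (hypothetical shapes) of a matrix-multiplication-heavy
# training step with no custom ops outside standard JAX.
import jax
import jax.numpy as jnp

def predict(params, x):
    # Two dense layers: nearly all compute is matrix multiplication,
    # which is the kind of work TPU matrix units are built for.
    w1, w2 = params
    h = jnp.tanh(x @ w1)
    return h @ w2

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

@jax.jit  # XLA compiles the whole step for the attached accelerator (TPU, GPU, or CPU)
def train_step(params, x, y, lr=1e-3):
    grads = jax.grad(loss)(params, x, y)
    return [p - lr * g for p, g in zip(params, grads)]

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = [jax.random.normal(k1, (512, 1024)), jax.random.normal(k2, (1024, 8))]
x = jax.random.normal(k3, (256, 512))
y = jnp.zeros((256, 8))
params = train_step(params, x, y)
```

Because every heavy operation in the step is a matrix multiplication compiled through XLA, this is the style of training loop Google describes as a good fit for Cloud TPUs.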

The Cloud TPU v5p is an evolution of earlier generations such as the v5e and v4. Measured in FLOPS per pod, Google claims the TPU v5p is four times more scalable and delivers twice the FLOPS of the TPU v4. It can also train large language models 2.8 times faster, and embeddings-dense models 1.9 times faster, than the TPU v4.

Then there is the newly unveiled AI Hypercomputer, an integrated system that combines performance-optimized hardware, open software, leading machine learning frameworks, and flexible consumption options. Google’s argument is that treating these as one co-designed system yields greater efficiency and productivity than optimizing each piece in isolation. The AI Hypercomputer’s hardware is tuned for performance and builds on Google’s Jupiter data center networking technology.

On the software side, Google says it offers “extensive support” for machine learning frameworks such as TensorFlow, PyTorch, and JAX through open software. The statement follows the formation of the AI Alliance by IBM and Meta, a group focused on open source in which Google is notably not involved. The AI Hypercomputer also introduces two flexible consumption models, Flex Start Mode and Calendar Mode.
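As a small, hedged example of what that framework support looks like in practice, a JAX program can query which accelerator backend it is running on; the same code runs unchanged on TPU, GPU, or CPU.

```python
# Sketch: inspect the accelerator backend a JAX program is using.
import jax

print("Backend:", jax.default_backend())    # e.g. "tpu" on a Cloud TPU VM
print("Device count:", jax.device_count())  # number of attached accelerator chips
for d in jax.devices():
    print(d.platform, d.device_kind)
```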

The announcement arrived alongside the launch of Gemini, a new AI model Google describes as its “largest and most capable,” and its rollout to Bard and the Pixel 8 Pro. Gemini will come in three sizes: Gemini Ultra, Gemini Pro, and Gemini Nano.
