Google’s Cloud TPU set to massively boost Machine Learning
By Staff Writer, 18 May 2017 | Categories: news
Buckle up: the Artificial Intelligence (AI) game just received a major leg-up. At yesterday’s Google I/O conference, Google CEO Sundar Pichai highlighted a few new developments in the company’s AI efforts.
To emphasise just how important AI has become for the company, he stated that Google has witnessed a shift in computing, from mobile-first to AI-first. To put that in context, a few years back Google realised that the mobile phone would be driving progress, and thus pushed resources behind Android, even factoring the mobile-friendliness of websites into search rankings. Now things have changed since, as Pichai puts it, “speech and vision are becoming as important to computing as the keyboard or multi-touch screens.”
Many of these new AI features, like Google Photos’ ability to correctly identify objects in your pics, rely on machine learning. And to get machine learning right, models require processing power by the boatload, both during training (especially so) and at inference time. To assist, Google developed Tensor Processing Units (TPUs), and yesterday announced its second-generation TPUs, called Cloud TPUs.
Each Cloud TPU is capable of up to 180 teraflops but, much like the A-Team, works better in a group. String 64 of these new TPU puppies together and you get a TPU pod, a machine learning supercomputer capable of an astonishing 11.5 petaflops of processing. In practical terms, the Google team notes that it previously took a full day to train its large-scale translation models on 32 of the best Graphics Processing Units available. Using one eighth of a TPU pod, you’re looking at an afternoon.
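Those figures are internally consistent; a quick back-of-the-envelope check (assuming the quoted ratings are simple aggregate throughput):

```python
# Sanity-check the throughput figures quoted above.
tpu_tflops = 180          # one Cloud TPU: up to 180 teraflops
pod_size = 64             # Cloud TPUs per TPU pod

pod_pflops = tpu_tflops * pod_size / 1000
print(pod_pflops)         # 11.52, i.e. the ~11.5 petaflops quoted

# "One eighth of a TPU pod" used for the translation training run:
print(pod_size // 8)      # 8 Cloud TPUs, versus 32 top-end GPUs
```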
If you were wondering about the Cloud part of the Cloud TPU moniker, it’s because these will form part of Google’s Compute Engine, the virtual machine service within Google Cloud Platform. This means any company willing to spend the cash can use Cloud TPUs for its own projects, programming them with a knowledge of TensorFlow.
No money to do so? If you are part of the machine learning research community, you can thank your lucky stars for Google’s announcement of the TensorFlow Research Cloud (TFRC). This cluster of 1,000 Cloud TPUs, offering 180 petaflops of brute power, will hopefully drive breakthroughs in machine learning. Unfortunately, or in this case we believe fortunately, there’s no such thing as a free lunch. If you partake in the TFRC set-up, Google notes you will need to share your research via peer-reviewed publications, open-source code, blogs, or other means, a transaction that seems only fair.
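The TFRC figure follows directly from the per-chip rating quoted earlier (again assuming simple aggregate throughput):

```python
# Derive the TFRC cluster rating from the per-TPU figure.
tpu_tflops = 180       # one Cloud TPU: up to 180 teraflops
cluster_size = 1000    # Cloud TPUs in the TFRC cluster

cluster_pflops = tpu_tflops * cluster_size / 1000
print(cluster_pflops)  # 180.0 petaflops, matching the quoted figure
```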
You can be assured that the TFRC will be extremely popular, so sign up as soon as possible, with Google noting that it will be evaluating applications on an ongoing basis.
A TPU pod