"Jouppi joined Google in late 2013 to work on what became the TPU, after serving as a hardware researcher at places like HP and DEC, a kind of breeding ground for many of Google’s top hardware designers. He says the company considered moving its neural networks onto FPGAs, the kind of programmable chip that Microsoft uses. That route wouldn’t have taken as long, and the adaptability of FPGAs means the company could reprogram the chips for other tasks as needed. But tests indicated that these chips wouldn’t provide the necessary speed boost. “There’s a lot of overhead with programmable chips,” he explains. “Our analysis showed that an FPGA wouldn’t be any faster than a GPU.”
In the end, the team settled on an ASIC, a chip built from the ground up for a particular task. According to Jouppi, because Google designed the chip specifically for neural nets, it can run them 15 to 30 times faster than general-purpose chips built with similar manufacturing techniques. That said, the chip is suited to any breed of neural network—at least as they exist today—from the convolutional neural networks used in image recognition to the long short-term memory (LSTM) networks used to recognize voice commands. “It’s not wired to one model,” he says."
Wednesday, April 05, 2017
Google's TPU Chip Helped It Avoid Building Dozens of New Data Centers | WIRED
It'll be interesting to see Microsoft's response, after their FPGA comments at Ignite 2016 and elsewhere (e.g., in this Sept 2016 Wired article...)