"Reaching even further, it’s possible that instructions are statically scheduled in the TPU, although this was based on a rather general comment about how static scheduling is more power efficient than dynamic scheduling, which is not really a revelation in any shape or form. I wouldn’t be entirely surprised if the TPU actually looks an awful lot like a VLIW DSP with support for massive levels of SIMD and some twist to make it easier to program for, especially given recent research papers and industry discussions regarding the power efficiency and potential for DSPs in machine learning applications. Of course, this is also just idle speculation, so it’s entirely possible that I’m completely off the mark here, but it’ll definitely be interesting to see exactly what architecture Google has decided is most suited towards machine learning applications."Google’s Tensor Processing Unit: What We Know
Friday, May 20, 2016
Google’s Tensor Processing Unit: What We Know (AnandTech)
Also see Google’s Making Its Own Chips Now. Time for Intel to Freak Out (Wired)
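
As an illustration only (nothing Google has confirmed about the TPU), the kind of kernel the quoted speculation has in mind is the dense multiply-accumulate loop that dominates neural-network inference. The minimal C sketch below uses arbitrary matrix dimensions; the point is that the loop has fixed trip counts, no data-dependent branches, and perfectly regular memory access, which is exactly the behavior a statically scheduled, wide-SIMD (VLIW/DSP-style) machine handles well.

    #include <stdio.h>

    /* Illustrative sketch only: a dense matrix multiply of the sort that
     * dominates neural-network inference. Dimensions and layout are
     * arbitrary choices for the example, not anything TPU-specific. */
    #define M 4   /* output rows */
    #define K 8   /* reduction (inner) dimension */
    #define N 4   /* output columns */

    static void matmul(const float a[M][K], const float b[K][N], float c[M][N]) {
        for (int i = 0; i < M; i++) {
            for (int j = 0; j < N; j++) {
                float acc = 0.0f;
                /* Pure multiply-accumulate with compile-time-known bounds:
                 * a compiler (or a VLIW/SIMD machine) can schedule this
                 * entirely ahead of time because nothing depends on the
                 * data values themselves. */
                for (int k = 0; k < K; k++) {
                    acc += a[i][k] * b[k][j];
                }
                c[i][j] = acc;
            }
        }
    }

    int main(void) {
        float a[M][K], b[K][N], c[M][N];

        /* Fill the inputs with simple deterministic values. */
        for (int i = 0; i < M; i++)
            for (int k = 0; k < K; k++)
                a[i][k] = (float)(i + k);
        for (int k = 0; k < K; k++)
            for (int j = 0; j < N; j++)
                b[k][j] = (float)(k - j);

        matmul(a, b, c);

        for (int i = 0; i < M; i++) {
            for (int j = 0; j < N; j++)
                printf("%8.1f ", c[i][j]);
            printf("\n");
        }
        return 0;
    }

Nothing in this sketch reflects the TPU's actual instruction set or design; it only shows why this workload is friendly to compile-time scheduling and wide SIMD, which is the premise behind the speculation quoted above.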