TensorFlow hits version 1.8 with improvements for Google Cloud TPUs and GPU memory



TensorFlow 1.8, the latest release of the popular framework for developing neural networks, brings improvements for Google Cloud TPUs and adds the ability to prefetch data to GPU memory.

TensorFlow is Google Brain's second-generation machine learning system; version 1.0.0 was released on February 11, 2017. Adoption has soared as TensorFlow became a common platform for deep learning, offering an easy-to-use environment and a powerful contribution to the world of machine learning.

The reference implementation runs on single devices, but TensorFlow can also run across multiple CPUs and GPUs (with optional CUDA and SYCL extensions for general-purpose computing on graphics processing units). Support for third-generation pipeline config for Cloud TPUs, according to Google, helps to improve performance and usability.

TPUs, hardware units available exclusively in Google Cloud, can also accelerate TensorFlow performance.

TensorFlow 1.8 boasts the following new features: the ability to prefetch data to GPU memory, which speeds up GPU operations because batches are already on the device when they are needed, and support for reading and writing protocol buffers within TensorFlow, as well as RPC communication, through the tf.contrib.proto and tf.contrib.rpc libraries.

Developers can find installation instructions for TensorFlow on Ubuntu Linux, macOS, and Microsoft Windows on the TensorFlow project page. Alternatively, the sources available on GitHub can be compiled into a binary.