Google brings Cloud GPU support to its Cloud Platform

Google is making Nvidia K80 GPUs generally available on Google Cloud Platform (GCP) and adding beta support for Nvidia P100 GPUs, along with a new sustained-use pricing model. The launch brings another performance boost to the platform: Cloud GPUs can accelerate workloads including machine learning training and inference, among others.

Other high-performance computing use cases include geophysical data processing, simulation, seismic analysis, molecular modeling, and genomics.

For machine learning workloads, GPUs in the cloud offer the flexibility of paying only for what you use, with per-minute billing; under the sustained-use pricing model, running GPUs for a sustained period earns a discount of up to 30 percent, depending on usage.

All of the GPUs qualify for sustained-use discounts, which require no lock-in or upfront minimum-fee commitments and automatically lower the price of your virtual machines when they run sustained workloads.
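To make the pricing model concrete, here is a minimal sketch of how a usage-scaled discount of up to 30 percent might be computed. The base rate and the linear discount curve below are illustrative assumptions, not Google's actual price list; only the 30 percent ceiling comes from the announcement.

```python
# Illustrative sustained-use discount calculation.
# BASE_RATE_PER_MINUTE is a made-up figure for demonstration only.
BASE_RATE_PER_MINUTE = 0.0075  # assumed USD per GPU-minute

def gpu_cost(minutes_used, minutes_in_month=30 * 24 * 60):
    """Per-minute billing with an automatic discount that grows with
    usage, capped at 30% for a full month of sustained use (assumed
    linear ramp for illustration)."""
    usage_fraction = min(minutes_used / minutes_in_month, 1.0)
    discount = 0.30 * usage_fraction  # scales linearly up to 30%
    return minutes_used * BASE_RATE_PER_MINUTE * (1 - discount)
```

With this sketch, a GPU used for a few hours pays close to the full per-minute rate, while one running the entire month is billed at 70 percent of the base rate.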

Google claims this approach can deliver bare-metal-like performance, and that with the Pascal-based GPU architecture you can increase throughput with fewer instances while reducing costs compared with traditional solutions.

The company offers the flexibility to run GPU workloads in virtual machines or containers and will deliver the service in four global locations. To get started, visit the GPU site to learn more about how your organization can benefit from Cloud GPUs and Compute Engine.
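As a sketch of what getting started looks like, a GPU-equipped Compute Engine VM can be created from the command line. The instance name, zone, machine type, and image below are illustrative choices, not values from the announcement.

```shell
# Sketch: create a Compute Engine VM with one K80 GPU attached.
# GPU instances must use a maintenance policy of TERMINATE, since
# they cannot live-migrate during host maintenance.
gcloud compute instances create gpu-demo \
    --zone us-east1-d \
    --machine-type n1-standard-8 \
    --accelerator type=nvidia-tesla-k80,count=1 \
    --maintenance-policy TERMINATE \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud
```

After the VM boots, you would still need to install the Nvidia driver and CUDA toolkit before GPU workloads can run.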
