Microsoft's Project Brainwave promises fast and flexible FPGA chips to unlock new AI capabilities

At its Build conference, Microsoft detailed how it hopes to move its Project Brainwave AI technology out of the research lab and deploy it on the Azure cloud-computing service, starting with an accelerated option for image recognition.

Project Brainwave uses a fast and flexible processor type called the FPGA (field-programmable gate array), which brings two important differences over conventional AI infrastructure: the ability to run AI jobs on Microsoft's own hardware, and the ability to accelerate AI chores with the latest algorithms.

It can also handle AI tasks fast enough to be used for real-time jobs where response time is crucial.
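To make that latency point concrete, here is a minimal sketch of what calling a hosted, hardware-accelerated image-recognition endpoint might look like from a client and how you might measure the round trip. The endpoint URL, authentication scheme, and response format here are assumptions for illustration, not a documented Brainwave API.

```python
import time
import requests

# Hypothetical scoring endpoint for an FPGA-accelerated image-recognition
# model hosted on Azure; the URL, auth header, and response schema are
# illustrative assumptions, not part of any documented Brainwave API.
ENDPOINT = "https://example-region.azureml.net/score"
API_KEY = "<your-service-key>"

def classify(image_path: str) -> dict:
    """Send one image to the accelerated endpoint and time the round trip."""
    with open(image_path, "rb") as f:
        payload = f.read()

    start = time.perf_counter()
    resp = requests.post(
        ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/octet-stream",
            "Authorization": f"Bearer {API_KEY}",
        },
        timeout=5,  # real-time use cases need a tight latency budget
    )
    elapsed_ms = (time.perf_counter() - start) * 1000
    resp.raise_for_status()

    print(f"round trip: {elapsed_ms:.1f} ms")
    # Assumed response shape, e.g. {"label": "parking_space", "confidence": 0.97}
    return resp.json()

if __name__ == "__main__":
    print(classify("frame.jpg"))
```

For real-time work, the number that matters is the end-to-end round trip measured above, not just the model's raw inference time, since network and serialization overhead count against the latency budget too.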

The project is perhaps a microcosm of the current AI revolution: it promises rapid responses for countless tasks, including finding empty parking spaces, digesting legal contracts, screening for hiring biases and generating 3D models of people's bodies from video.

Bringing these AI capabilities to the Azure cloud-computing service means that businesses will no longer need to buy and run their own servers.

Instead, by tapping into Azure's vast pools of computing power, they pay as they go for resources like processor performance, storage space, network capacity and, now, AI processing.

Microsoft believes FPGAs give the service an edge because they combine flexibility with speed. Google's rival chips, called tensor processing units (TPUs), are special-purpose processors whose design is fixed at manufacture, while FPGAs can be reconfigured for different workloads.
