Google has much of the world's information available on its online platforms, and has now embarked on a project to create a helpful voice-driven AI built around how users interact with that information.

The company announced a new digital assistant, Google Assistant, at Google I/O 2016, which will bring together the wealth of information that Google already has about the world.

Google Assistant is built on the deep neural networks of Google Search, with the knowledge base of Google Now, and the advanced natural language recognition that’s been evolving with Android.

It ties the knowledge that Google already has about the world to your natural-language queries in specific scenarios.

For instance, if you ask Google Assistant “Who designed this?” while standing in front of a famous sculpture with your phone, the bot can combine location data and image recognition to identify the work and provide an answer in seconds.

While the first hints of the new Google Assistant appeared in the update to the Google app beta version 6.1, it isn’t a standalone program like Microsoft’s Cortana. Instead, it’s baked into Google’s various services and operating systems, and can be easily summoned by voice.

Google Assistant is more “an ambient experience that extends across all its services,” so that you can tap into its power anywhere — including Google Home.

It will also be able to tap into third-party apps and services, such as Spotify, Uber, OpenTable, and more. It’s expected to debut alongside Android Nougat.