Google has taken machine learning to great heights, from the bizarre to the unimaginable, recently going as far as pitting its machines against a human champion at the board game Go.

Now, the company has launched a new AI Experiment dubbed Move Mirror, which explores pictures in a fun way by matching people's images as they move around: the AI matches your real-time movements to hundreds of images of people in similar poses from around the world.

The experiment is meant to bring computer vision techniques like pose estimation to anyone with a computer and a webcam. Google wants to make machine learning more accessible to coders by bringing pose estimation into the web browser, in a bid to inspire developers to experiment with the technology.



Move Mirror utilizes an open-source pose estimation model called PoseNet, which can detect human figures in images and videos by identifying where key body joints are located, running on TensorFlow.js, the in-browser machine learning library.
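PoseNet's output is just data: for each detected person it returns a set of keypoints (nose, eyes, shoulders, elbows, and so on), each with an (x, y) position and a confidence score. A minimal sketch, assuming a PoseNet-style result object (shortened to three keypoints here for illustration; the helper function is hypothetical), of flattening those keypoints into a numeric vector so two poses can be compared:

```javascript
// A PoseNet-style result: each keypoint has a part name,
// an (x, y) position in pixels, and a per-keypoint confidence score.
const pose = {
  score: 0.92,
  keypoints: [
    { part: "nose",          position: { x: 310, y: 120 }, score: 0.99 },
    { part: "leftShoulder",  position: { x: 270, y: 210 }, score: 0.97 },
    { part: "rightShoulder", position: { x: 350, y: 212 }, score: 0.96 },
  ],
};

// Flatten the keypoints into a plain numeric vector [x1, y1, x2, y2, ...],
// a common form for comparing one pose against another.
function toPoseVector(keypoints) {
  return keypoints.flatMap(k => [k.position.x, k.position.y]);
}

console.log(toPoseVector(pose.keypoints));
// [310, 120, 270, 210, 350, 212]
```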

It takes the input from your camera feed and matches it against a database of more than 80,000 images to find the best match.
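Finding the best match among tens of thousands of images boils down to a nearest-neighbor search over pose vectors. A hedged sketch of one simple way to do this, using cosine similarity (the function names and the toy three-image "database" are invented for illustration; Move Mirror's actual matching may normalize and weight keypoints differently):

```javascript
// Cosine similarity between two pose vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Scan the database and return the entry whose pose vector
// is most similar to the query pose.
function bestMatch(query, database) {
  let best = null, bestScore = -Infinity;
  for (const entry of database) {
    const score = cosineSimilarity(query, entry.vector);
    if (score > bestScore) { bestScore = score; best = entry; }
  }
  return best;
}

// A toy stand-in for the 80,000-image database.
const database = [
  { image: "dancer.jpg", vector: [1, 0, 0, 1] },
  { image: "runner.jpg", vector: [0, 1, 1, 0] },
  { image: "jumper.jpg", vector: [1, 1, 0, 0] },
];

console.log(bestMatch([0.9, 0.1, 0.1, 0.9], database).image); // "dancer.jpg"
```

A linear scan like this is fine for a demo; at Move Mirror's scale an approximate nearest-neighbor index would be used instead.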

Google, however, maintains that images are not sent to its servers, as all the image recognition happens locally, in the browser. Nor does the technology recognize who is in the image, as there is "no personally identifiable information associated with pose estimation."

If you wish to try it out, visit g.co/movemirror; all that's required is a computer with a webcam.

Google's "Move Mirror" AI Experiment matches up images in the same pose
