Google I/O’s 2017 keynote kicked off with some exciting statistics showing just how impactful and widespread the company’s applications are: Google Drive has 800 million users, and Google Photos has 500 million active users. On the hardware side, Google has reached an incredible milestone of over 2 billion active Android devices. Later in the keynote, Google CEO Sundar Pichai spoke about the advances Google is making with its computer vision technology.
Since 2010, the error rate for image recognition has fallen to the point where it is now below that of human vision. Among other things, this makes it possible to clean up noisy images taken with a Google device and even completely remove obstructions, such as a fence, from a picture of a child playing baseball. Pichai followed this up with the announcement of Google Lens.
“All of Google was built because we started understanding text and web pages,” Pichai says. “The fact that computers can understand images and videos has profound implications for our core mission.”
Google Lens is a “set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information,” and a few examples were shown on-stage. You can point your phone’s camera at a flower and Lens will tell you exactly what type you’re looking at. Instead of manually typing in the credentials from your Wi-Fi router, your device can fill in the fields automatically when you focus the camera on the printed info. The last example showed how, when the camera is pointed at restaurants on the street, an augmented overlay displays each venue’s name and rating.
In general, Google Lens looks like a solid sampling of what the desired AR experience could look like. Having this technology in a pair of lightweight glasses with a wider array of functionality would be a major leap toward the AR future that Oculus’ Michael Abrash believes is at least five years away.
Google Lens will ship first with Google Assistant and Google Photos, eventually coming to other products.