
Google is rolling out new AI-powered Search capabilities through its Lens feature. Instead of typing to search, the latest update to Google Lens lets users point their camera at an object and vocally ask Google’s AI model questions about it.
“Say you’re at the aquarium and want to learn more about some interesting fish at one of the exhibits. Open Lens in the Google app and hold down the shutter button to record while asking your question out loud, like, ‘why are they swimming together?’” Google said in a blog post published on Thursday, October 3.
Google Lens’ voice-activated search feature is now globally accessible to all Android and iOS users in English. Additionally, the company said it is launching a feature for shoppers that surfaces details, such as price information, about a particular item they spot in the physical world.
Google users in the US will also start seeing search results pages that have been organised with AI.
Launched around seven years ago, Google Lens is a feature that lets users submit search queries based on the objects in a picture. It can also be used, for instance, to translate a signboard or document in another language.
Over 20 billion visual searches are carried out through Google Lens every month, the company said, adding that users between the ages of 18 and 24 engaged the most with Lens.
“The whole goal is can we make search simpler to use for people, more effortless to use and make it more available so people can search any way, anywhere they are,” Rajan Patel, Google’s vice president of search engineering and a co-founder of the Lens feature, was quoted as saying by the Associated Press.