Bringing Lens to Desktop

Dec 2019 - Jan 2021

Context

Before I joined the Lens team, Lens existed only as an in-app mobile experience. I led the design work to bring it to desktop across other Google products, starting with Google Images and Google Photos. This was an important milestone for Lens: it not only made the experience accessible to all users via the web, regardless of device, but also added a new surface (desktop), unlocking different types of user journeys.

Lens on Google Images

Why desktop?

A sizable percentage of “shopping intent” journeys happen on desktop rather than mobile. Lens is most useful on images that lack identifying information or aren’t actionable, such as inspirational imagery without shoppable links or products that are out of stock.

This was the first instance of Lens’ visual search capabilities on a desktop surface. It required sensitivity to the Image Search UI to ensure a seamless experience, as well as attention to Lens’ identity to keep the experience consistent with its own brand.

Lens on Images: Mobile

Because I led the mobile design in parallel with the desktop work, the two surfaces have feature parity. The desktop experience, however, required navigating a different UI framework: on Google Images, users interact with each image result via a side panel. How would the equivalent of the mobile design look on desktop?

Since Lens is intended as an action on individual images, I designed it as an in-panel experience. This inevitably brought up questions about the relationship between Lens and a product like Google Images. Is it a tool? Or is it a product? What does navigating results on top of another panel of results look like?

The key thing I wanted to get right was seamless navigation. Although we were technically designing across multiple products, it was important to me that we did not expose that to our users: they do not need to know that Lens and Images are two entirely different organizations. That kind of organizational seam is often difficult to keep invisible.

In the design, Lens is treated as a utility layer on top of the image the user is viewing. The user enters and exits Lens directly within the image viewer.

Impact

We launched this on desktop in July 2021! I invite you to visit Google Images on your desktop and give this experience a try on your next visual search journey.

Lens on Google Photos

Google Photos + Productivity

This was the first instance of Lens’ capabilities on any desktop surface, and we focused specifically on text interactions after much discussion with the Photos team. Since Photos on desktop serves the personal purpose of documenting and organizing memories, we needed to tread carefully on this sensitive surface.

One key decision was how to surface the entry point into Lens. Should it be accessible on all photos? What form should it take? Since this first milestone focused on text interactions, surfacing it on every photo made little sense: copying text from a photo is very useful when needed, but only occasionally relevant, so it did not warrant a persistent, always-on path. We therefore decided to surface a smart chip that suggested the action only when there was high confidence that the photo contained text.
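
As a rough sketch of that decision, the chip’s visibility can be thought of as a simple gate on the text-detection confidence. The signal name and threshold below are illustrative assumptions, not the production Photos/Lens implementation:

```typescript
// A minimal sketch of the confidence-gated suggestion, assuming a hypothetical
// text-detection signal; names and threshold are illustrative only.
interface TextDetection {
  hasText: boolean;
  confidence: number; // 0..1, e.g. from an OCR text-detection model
}

const SUGGESTION_THRESHOLD = 0.9; // assumed cutoff for "high confidence"

function shouldShowCopyTextChip(detection: TextDetection): boolean {
  // Suggest the action only when we are confident text is present,
  // so the chip stays occasional and relevant rather than persistent.
  return detection.hasText && detection.confidence >= SUGGESTION_THRESHOLD;
}
```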

We also had to decide between the Lens logo and the suggested action chip as the way to enter Lens. Though seemingly small, this choice brought up larger questions about Lens' brand and its promise of visual search to the user. On the one hand, the logo is recognizable; as a persistent entry point across photos, it would continue to educate potential users on how Lens can be useful to them. On the other hand, it had the potential to invite disappointment, since the functionality did not yet include visual search. We decided to hold off on surfacing the logo as an action until the functionality behind it lived up to the expectations set by the product and brand.


Mirroring text & selection

In this design, the text detected in the photo is mirrored in the side panel. This presented an opportunity to invent a new interaction: selecting text in one place highlights the same text in the other, in a subtler style. This lets the user quickly scan and compare across the two surfaces and see, for example, where in the photo a specific word appears.
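
To illustrate the idea, here is a minimal sketch of how the two renderings of each detected word could be linked. The data shape, event choice, and class names are hypothetical, not the shipped implementation:

```typescript
// A rough sketch of the mirrored selection, assuming each detected word is
// rendered twice: as a bounding box over the photo and as a span in the side
// panel. The DetectedWord shape and class names are hypothetical.
interface DetectedWord {
  text: string;
  photoEl: HTMLElement; // bounding-box element drawn over the photo
  panelEl: HTMLElement; // corresponding span in the side-panel text
}

function wireMirroredSelection(words: DetectedWord[]): void {
  for (const word of words) {
    // The side the user is on gets the full highlight; its counterpart on the
    // other side gets a subtler echo, so the eye can jump between them.
    const link = (source: HTMLElement, mirror: HTMLElement) => {
      source.addEventListener('mouseenter', () => {
        source.classList.add('highlight');
        mirror.classList.add('highlight-subtle');
      });
      source.addEventListener('mouseleave', () => {
        source.classList.remove('highlight');
        mirror.classList.remove('highlight-subtle');
      });
    };
    link(word.photoEl, word.panelEl); // photo -> panel
    link(word.panelEl, word.photoEl); // panel -> photo
  }
}
```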


Impact

We launched this experience on Google Photos in December 2020 and continue to collaborate with the Photos team on bringing more Lens functionality onto its web surface. Stay tuned for more soon!