OVERVIEW

Problem
ReviewTrackers had no existing solution for managing photos across reviews, nor did it allow businesses to upload photos to their listings directly. As the feature was completely new, there were several critical design questions to answer: How should photos from reviews and user uploads be organized? Where should the feature live in the information architecture? What should the process look like if users want to upload the same photo to multiple locations? The challenge was to create a seamless experience that handled both review-generated and user-uploaded photos effectively, without overwhelming users.
Solution
To address these challenges, we designed a gallery where users could view all photos, aggregated across reviews and locations. Key features included the ability for users to upload their own photos and distribute them to multiple locations. We carefully considered whether to display duplicates of the same photo for each location or consolidate them into a single entry, ultimately choosing per-location duplicates so marketers could verify each location's photos at a glance. Post-MVP, we enhanced the feature with Photo Object Recognition using AI, allowing users to search for specific objects in photos, such as “a burger.” Additionally, we introduced a Location Media Report that detailed photo metrics across user-uploaded and owner-uploaded content, which quickly became a favorite among users.
My Role
UX Designer, Triad Design Lead
Timeline
Dec 2022 - Aug 2023
Team Members
Cecilia Xie (Me), Sarah Spiegle (PM), Jesse Hinchcliffe (PM), Steffeni Veren (Tech Lead), Saul Ocampo (FE Dev), Charlie Billaeau (FE Dev), Lindsay Eggers (BE Dev), Dominic Bales (BE Dev)
Tools
Figma, Fullstory
Design Process
1. Research
User-Informed Discovery
Competitive Analysis
2. Design
Ideation
Lo-Fi Wireframes
Iterations
Hi-Fi Wireframes

Prototyping

Usability Testing
3. Retrospective
What I Learned
My Contribution
As the lead designer on this greenfield project, I was responsible for designing the Photos Management tool from the ground up. I made critical decisions about the gallery’s structure, user-upload workflows, and how to handle photo aggregation. Working closely with product managers and developers, I ensured that the design was both user-friendly and technically feasible. After the MVP release, I collaborated with the team to introduce the AI-powered Photo Object Recognition feature and the Location Media Report.
Design Outcome

RESEARCH

User-Informed Discovery
As a product team, we work closely with our Customer Support Managers since they receive constant feedback from our users. Over time, my PM identified a significant pain point: users lacked an aggregated way to view photos from reviews. We also recognized that many of our competitors had already implemented this feature, giving them an advantage. Despite knowing this, we previously lacked the bandwidth to build an entirely new feature. When I joined the team, we finally had the design capacity to move forward, and I was tasked with leading this effort. My PM provided minimal initial requirements, giving me the freedom to start with a competitive analysis of standard gallery features and their functionality, guided by the core problem statement: “users need a place to view all photos.”
Competitive Analysis
I conducted a competitive analysis of other top photo gallery tools in the domain, including Google Photos, Apple Photos, Flickr, Dropbox, and Photobucket.
One of the key takeaways was how different tools laid out their galleries: Google Photos and Flickr opted for a tiled layout that kept photos' original aspect ratios while reducing white space, Dropbox and Photobucket used consistent rectangles or squares to reduce visual clutter, and Apple Photos preserved the aspect ratio without tiling. Another interesting discovery was the variety of ways these photos could be organized—by location, date uploaded, and more. We also observed different filtering options like favorites, orientation, or tags, as well as variations in how tools handled clicking into full-screen photos and displaying metadata details.

DESIGN

Ideation and Lo-Fi Wireframes
I ideated on two key aspects: how the gallery would tile and how photo cards would look within it. I created lo-fi mocks for different tiling options—one that kept the original aspect ratio, like Google Photos and Flickr, and another with a consistent grid, like Photobucket and Dropbox. Ultimately, I chose the consistent grid style because our users needed to quickly scan photos for key information, such as review ratings or locations, rather than focus on visual aesthetics. This approach made it easier for users to find important details efficiently.
The next challenge was designing how the photo cards would look within the grid. Unlike traditional photo galleries that prioritize the image itself and often hide metadata, we needed users to quickly and easily access a significant amount of information, such as review ratings, upload date, location, group, source, and more. Initially, we explored showing only the most critical details—like rating and source—directly on the image, with additional details revealed on hover or click. However, this approach had two major drawbacks: it created visual clutter, and the inconsistency across photo types (e.g., review photos having ratings while owner-uploaded photos did not) confused users.
To rethink the design, I considered examples where both a primary image and detailed stats are equally important. This led to the idea of using a format inspired by Pokémon cards—where a bold hero image takes center stage, supported by clear, structured information. This concept, which my team affectionately called the “Pokémon card” design, became the foundation for our photo cards. We created lo-fi iterations based on this approach and refined the UI step by step until we reached the final design.
Information Architecture, Requirements, and Trade-Offs, Oh My!
While designing the gallery elements, I was simultaneously working with my PM to define requirements and map the information architecture for integrating photos into the existing ReviewTrackers app. A key challenge was determining how the gallery would handle the relationship between photos and locations. For example, should a photo uploaded to 100 locations appear 100 times with different location labels, or be consolidated into one entry showing how many locations it was uploaded to? Each approach had its own challenges.

We also tackled the photo upload workflow. Should uploads be tied to specific location pages, or should users upload multiple photos to multiple locations from the main gallery? This functionality required careful consideration of error states and technical feasibility, balancing user needs with implementation constraints.
For the photo–location relationship, we chose the first approach—showing a photo once per location—as our target persona, social media marketing managers, prioritized ensuring their latest marketing post was correctly uploaded to all locations. They valued seeing each photo with its associated location at a glance, even if it meant repeating photos.



For the upload flow, we explored options like bulk uploading multiple photos, uploading to multiple locations, and bulk uploads to multiple locations. However, due to technical limitations with error states, we decided for the MVP to allow only single-photo uploads via the Location Details Page, while the Gallery was designated solely for viewing.
After deciding that photo management would take place on the Location Details Page, with uploads limited to one photo at a time, we focused on designing how this feature would function. Key requirements included accommodating Google My Business's special Cover Photo and Logo Photo, which users emphasized as essential for SEO. Additionally, we needed to ensure users could seamlessly upload, delete, and view photos associated with each location.


This involved creating an upload wizard to guide users through the process while addressing error states and edge cases for each photo type. At the same time, I was also redesigning our legacy Local Listings Feature (see my other case study!).
The final touch for the MVP was designing the Photo Detail modal—a full-screen view of the image that displays all relevant details, like its source, type, and any associated reviews. Since this is a well-established pattern, we leaned into Jakob’s Law and kept it simple, sticking to familiar, intuitive design principles that users already recognize.

EVALUATE AND ITERATE

Post MVP Features
One of the first post-MVP features we developed was Photo Object Recognition. During user testing, we uncovered a demand for the ability to search for specific objects within photos. For example, a fast-food chain launching a limited-edition burger might want to quickly locate all burger-related photos to better understand what customers are saying about the rollout. We implemented this using Amazon Rekognition's API, which enabled advanced image recognition capabilities. This was an exciting milestone for our team, as it marked our first hands-on experience integrating AI into the application.


The design for this was straightforward: we introduced a dropdown filter on the gallery page, allowing users to select the objects they wanted to filter by. When viewing a specific photo, all recognized objects in the image were displayed as chips in the side panel. If the user accessed the photo from a filtered gallery, the relevant object's chip would be highlighted. Additionally, users could click on any recognized object chip to navigate to a gallery filtered by that object.
During rollout, we noticed the feature often labeled men as "human" or "person" but frequently misidentified women as "teenager" or "child." To address this, we disabled identity-related tags, which sparked a deeper discussion about AI ethics and the biases inherent in big data models. This inspired me to read Weapons of Math Destruction and create an educational presentation for our design team, highlighting how narrow training datasets in AI can lead to bias and discrimination. It was a valuable opportunity to reflect on the ethical responsibilities we have as designers when working with such technologies.
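As a rough illustration of the fix described above, the snippet below sketches how identity-related labels could be stripped from a Rekognition-style `detect_labels` response before the remaining tags are rendered as chips. The blocklist, function name, and mocked response are all hypothetical, not the actual production code.

```python
# Hypothetical sketch: filter identity-related labels out of an Amazon
# Rekognition detect_labels-style response before rendering tag chips.
# The blocklist and helper name are illustrative, not production code.

# Labels we chose not to surface due to misidentification risk.
IDENTITY_LABELS = {"Person", "Human", "Man", "Woman", "Teenager", "Child"}

def chips_from_labels(response, min_confidence=80.0):
    """Return displayable tag chips, skipping identity-related labels
    and low-confidence detections."""
    chips = []
    for label in response.get("Labels", []):
        name = label.get("Name", "")
        if name in IDENTITY_LABELS:
            continue  # identity tags disabled
        if label.get("Confidence", 0.0) < min_confidence:
            continue  # too uncertain to show
        chips.append(name)
    return chips

# Mocked response shaped like Rekognition's detect_labels output:
mock_response = {
    "Labels": [
        {"Name": "Burger", "Confidence": 97.1},
        {"Name": "Person", "Confidence": 99.0},
        {"Name": "Plate", "Confidence": 62.5},
    ]
}
print(chips_from_labels(mock_response))  # ['Burger']
```

In this sketch, "Person" is dropped by the blocklist and "Plate" by the confidence threshold, leaving only "Burger" as a chip.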
The final post-MVP feature we developed was the Location Media Report, designed based on feedback from social media marketing managers. They needed a simple way to verify if marketing images were successfully uploaded across multiple locations.

To address this, we designed a straightforward, no-frills table that listed all locations. For each location, the table displayed the total images, total customer photos, total owner-uploaded photos, date of the most recent owner-uploaded photo, and whether a cover photo or logo was present on Google My Business. While this wasn’t the flashiest feature, it earned glowing feedback for streamlining workflows and addressing a key user pain point.
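To make the report's columns concrete, here is a minimal sketch of how its per-location rows could be aggregated from a flat list of photo records. The record fields and function name are assumptions for illustration, not the actual backend implementation.

```python
# Hypothetical sketch: aggregate Location Media Report rows from a flat
# list of photo records. Field names are assumed for illustration.
from collections import defaultdict

def build_media_report(photos):
    """Group photo records by location and compute the report columns."""
    rows = defaultdict(lambda: {
        "total_images": 0,
        "customer_photos": 0,
        "owner_photos": 0,
        "latest_owner_upload": None,
        "has_cover_photo": False,
        "has_logo": False,
    })
    for p in photos:
        row = rows[p["location"]]
        row["total_images"] += 1
        if p["source"] == "owner":
            row["owner_photos"] += 1
            # ISO dates compare correctly as strings
            if row["latest_owner_upload"] is None or p["uploaded"] > row["latest_owner_upload"]:
                row["latest_owner_upload"] = p["uploaded"]
        else:
            row["customer_photos"] += 1
        if p.get("type") == "cover":
            row["has_cover_photo"] = True
        if p.get("type") == "logo":
            row["has_logo"] = True
    return dict(rows)

photos = [
    {"location": "Chicago", "source": "owner", "uploaded": "2023-07-01", "type": "cover"},
    {"location": "Chicago", "source": "customer", "uploaded": "2023-06-15"},
    {"location": "Denver", "source": "owner", "uploaded": "2023-05-20", "type": "logo"},
]
report = build_media_report(photos)
```

Each resulting row maps directly onto one table row in the report: totals, the most recent owner upload, and whether a Google My Business cover photo or logo is present.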

RETROSPECTIVE

What I Learned
Most of my experience has been with redesigning legacy features, so working on a brand-new greenfield feature was both exciting and challenging. That freedom required close collaboration with the PM to define requirements, prioritize scope, and navigate a less structured, messier process—teaching me how to adapt to ambiguity.

This was also my first project involving AI, where I gained a deeper understanding of biases in AI training models and how they impact design. I also learned best practices for integrating AI responsibly and explored the ethical and moral considerations that come with this technology—important lessons for us as designers as AI becomes a bigger part of our work.