Giving feedback in AI chats often feels vague and effortful

Spotlight makes it easy by letting users highlight and report issues right in the flow of conversation

UX Research

UX Design

AI Ethics

Generative AI

Designing Granular, Low-Friction Feedback for AI Systems

PROJECT OVERVIEW

As AI becomes part of everyday life, users need better ways to give feedback — especially when responses are subtly biased or problematic. Most systems rely on blunt tools like thumbs-down, which oversimplify user intent and limit meaningful improvement.

Spotlight introduces a lightweight, context-aware reporting flow that lets users highlight specific text to flag issues — naturally, precisely, and without breaking their flow. While adaptable to many feedback types, we focused on bias reporting as a high-impact use case, using ChatGPT as a testbed for research and testing.

This academic project used ChatGPT as a case study but was independently designed and executed, with no affiliation to OpenAI.

My Role

UX Researcher

Duration

3 months (2024)

Tools

Figma

Miro

Google Forms

Excel

ChatGPT platform


Skills

Think-Aloud Protocol

User Interviews

Concept Ideation

Affinity Mapping

Data Analysis

Rapid Prototyping

Team

Collaborative research, individual ideation

1 Project Manager

1 UX Researcher

1 UX Designer

SOLUTION TEASER

Users can effortlessly highlight and report issues directly within the flow of the conversation.

BACKGROUND RESEARCH: Literature Review

Expert Reviews Aren’t Enough — User Feedback Completes the Picture

Although AI companies often rely on technical experts to evaluate their systems, these assessments can miss critical usability issues that only surface during everyday interactions. Real-world user experiences provide valuable insights that traditional evaluations might overlook.

USER INTERVIEWS: Usability Testing with the Think-Aloud Method

Users Are Engaged — But the System Isn’t Listening

Users engaged actively, but felt their input went nowhere. Some saw biased responses as personalized, not problematic. Unclear feedback tools and over-explained tutorials pointed to design gaps. Reporting felt like shouting into the void.

Bias Isn’t Always Negative

"I know it's biased, but sometimes it feels more like personalization than a problem — it really depends on the context."

College Student

Good Design Shouldn’t Need Instructions

"The tutorial felt like a waste of time — if the system needs that much explaining, maybe it’s not designed well enough."

Chiropractor

Unclear Feedback Tools Create Confusion

"I’m not sure what the thumbs up or down even means. Am I reporting a problem, or just saying I liked it? And if I want to explain why, there’s no way to do that."

Software Engineer

Reporting Without Feedback Feels Empty

"I don’t know what happens after I report something.

Does anyone actually read it, or is my input just lost in a pile of data?"

Retired

We ran usability tests with 4 regular ChatGPT users, using the think-aloud method to observe how they naturally interacted with the existing reporting feature.

Affinity Clustering

Framing the Challenge

How might we design a reporting experience that enables everyday users to uncover and share algorithmic harms, while equipping AI teams with actionable insights to improve fairness and user trust?

We wanted to focus our solution on everyday users who want to report issues quickly and clearly, without feeling confused, ignored, or overwhelmed by the process.

Design Opportunities

Beyond the Thumbs-Up: Designing Feedback That Works

After conducting user interviews and framing the challenge, we identified four key design opportunities to make AI feedback more intuitive, meaningful, and engaging.

Cognitive Load Reduction

Use autofill and smart prompts to make reporting quick and easy

Integration into Workflow

Let users report issues without leaving their current task

Flexible Reporting Methods

Offer both quick taps and detailed forms based on user preference

Contextual Social Proof

Gently show that others are reporting to encourage participation

IDEATION: Crazy 8s & Speed Dating

From Natural Behavior to Design Direction

During our Crazy 8s session, we sketched interaction ideas inspired by real user behaviors observed during the discovery phase.

VALIDATION

Quick & Scrappy: Testing Early with Paper Prototypes

With limited time in our class project, we chose a low-fidelity paper prototype to quickly test the core reporting flow: highlight → report → confirm. This method let us validate interactions, surface user expectations, and gather feedback fast—before investing in high-fidelity design.

The prototype walked users through four steps: Highlight · Categorize · Expand · Track.

After clicking “Report Bias,” users choose from a list of predefined bias categories within the same dialog. A sketch of this interaction follows below.
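To make the highlight-and-categorize step concrete, here is a minimal, purely illustrative TypeScript sketch of how such a flow could work in a browser. Every name here (SpotlightReport, BIAS_CATEGORIES, createReport) and the category list are hypothetical assumptions for illustration, not part of ChatGPT or any real API.

```typescript
// Hypothetical sketch of the Spotlight reporting flow (highlight → categorize).
// Names and categories are illustrative assumptions, not a real ChatGPT API.

// Predefined bias categories shown in the report dialog (assumed set).
const BIAS_CATEGORIES = [
  "Gender",
  "Race / Ethnicity",
  "Age",
  "Political",
  "Other",
] as const;

type BiasCategory = (typeof BIAS_CATEGORIES)[number];

interface SpotlightReport {
  messageId: string;        // ID of the AI message containing the flagged text
  highlightedText: string;  // exact text the user selected
  category: BiasCategory;   // category chosen in the dialog
  note?: string;            // optional free-text explanation
}

// Capture the user's current selection; returns null if nothing is highlighted.
function getHighlightedText(): string | null {
  const selection = window.getSelection();
  const text = selection?.toString().trim();
  return text ? text : null;
}

// Build a report from the current highlight and a chosen category.
// The "Report Bias" action is only offered while text is selected.
function createReport(
  messageId: string,
  category: BiasCategory,
  note?: string
): SpotlightReport | null {
  const highlightedText = getHighlightedText();
  if (!highlightedText) return null;
  return { messageId, highlightedText, category, note };
}
```

The design choice the sketch mirrors: the report carries the exact highlighted text and its message context, so feedback stays granular and precise without asking users to retype or describe the passage themselves.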

FINAL PROTOTYPE

The final prototype refines the paper-tested flow into high fidelity: users highlight text, report and categorize the issue in place, and track what happens to their report, all without leaving the conversation.
With limited time in our class project, we chose a low-fidelity paper prototype to quickly test the core reporting flow: highlight → report → confirm. This method let us validate interactions, surface user expectations, and gather feedback fast—before investing in high-fidelity design.