Building test cases used to mean jumping across tabs, copying and pasting, and retyping the same details. I redesigned the workflow around a context-aware AI assistant, cutting it from 6 fragmented manual steps to 4 streamlined AI-assisted steps and giving developers speed, focus, and control.
All visuals and product details are fully anonymized and recreated to respect confidentiality while accurately reflecting the challenges, design process, and impact of my work.
Test case planning is repetitive, fragmented, and slow.
“Building test cases by hand feels like doing the same work twice.” — Developer pain point (from stakeholder conversation)
This fragmented process creates recurring issues:
From stakeholder conversations and workflow analysis, three themes emerged. I paired each pain point with a supporting research insight and a design goal that turns it into an actionable solution.
| Pain point | Research insight | Design goal |
| --- | --- | --- |
| **Fragmented copy-paste across tools.** Developers switch between PRDs, test plans, and trackers to move the same details. | “Most planning time is spent reformatting and pasting from other docs.” | **AI in the flow of work.** Surface AI help directly in context. No extra tabs. |
| **Review bottlenecks hide issues.** One-by-one reviews slow releases and let problems slip through. | “Reviewing 100+ cases manually is unrealistic. We skim and miss things.” | **Faster reviews.** Batch actions and previews make large reviews quick and manageable. |
| **Speed vs. oversight trade-off.** Teams want speed but also need visibility and trust in results. | “Automation helps, but I need to see what changed and why.” | **Keep developers in control.** All suggestions stay editable, reviewable, and rejectable. AI proposes, humans decide. |
I mapped the moments where AI could assist during test creation through the lens of discoverability and focus. The goal was to make assistance feel available without demanding attention.
- Universal access for power users anywhere in the experience.
- Expandable guidance and deeper exploration without blocking the workspace.
- Context-aware help near acceptance criteria, aligned to intent.
Through iterative prototypes, a clear pattern emerged: inline entry increased trust and preserved flow. The guiding principle became:
Start inline. Expand for depth. Keep context in view.
This approach evolved into the Progressive Assistance Model (PAM): a framework that adapts AI visibility to human attention.
Concept 1: Global “Ask AI” entry point
Concept 2: Inline trigger via acceptance criteria
Concept 1 — Global Ask AI: highly visible, but detached from the working surface; increased context switching.
Concept 2 — Inline trigger: aligned with intent and felt natural, but the entry was too subtle and the modal broke flow.
Fig. 2: The **Progressive Assistance Model** finds the sweet spot between discoverability and focus.
1. The system detects relevant context while the developer writes criteria.
2. A lightweight affordance appears at the right moment without stealing attention.
3. Prompts and draft cases arrive scoped to the task; nothing to retype.
4. Edit inline, regenerate, or batch-accept: AI proposes, the developer decides (see the sketch below).
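To make the model concrete, here is a minimal TypeScript sketch of the four stages as a single state machine. Every identifier here (ProgressiveAssistant, onCriterionUpdated, and so on) is hypothetical and stands in for the real product's internals, which are not shown in this case study.

```ts
// A minimal sketch of the Progressive Assistance Model as a UI state machine.
// All names are illustrative, not taken from the actual product.

type AssistanceStage = "idle" | "affordance" | "drafting" | "review";

interface AcceptanceCriterion {
  id: string;
  text: string;
}

interface DraftTestCase {
  id: string;
  title: string;
  steps: string[];
  accepted: boolean;
}

class ProgressiveAssistant {
  private stage: AssistanceStage = "idle";
  private drafts: DraftTestCase[] = [];

  // Step 1: watch the criteria the developer is writing.
  onCriterionUpdated(criterion: AcceptanceCriterion): void {
    if (this.looksComplete(criterion)) {
      // Step 2: show a lightweight affordance instead of a modal,
      // so attention stays on the working surface.
      this.stage = "affordance";
    }
  }

  // Step 3: generate drafts scoped to this criterion; nothing to retype.
  async onAffordanceClicked(
    criterion: AcceptanceCriterion,
    generate: (c: AcceptanceCriterion) => Promise<DraftTestCase[]>
  ): Promise<void> {
    if (this.stage !== "affordance") return; // never act uninvited
    this.stage = "drafting";
    this.drafts = await generate(criterion);
    this.stage = "review";
  }

  // Step 4: the developer stays in control: edit, regenerate, or batch-accept.
  acceptAll(): DraftTestCase[] {
    return this.drafts.map((d) => ({ ...d, accepted: true }));
  }

  private looksComplete(criterion: AcceptanceCriterion): boolean {
    // Naive heuristic for the sketch: treat a criterion that ends with
    // punctuation as complete enough to offer help.
    return /[.!?]$/.test(criterion.text.trim());
  }
}
```

The key design choice in this flow is that generation is gated behind the affordance click: the assistant surfaces an option at the right moment, but never acts until the developer asks.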
The final design merged the best of both concepts: visible guidance and in-context control.
“Now I can generate and review test cases right where I’m working instead of copying across tabs.”
Once I identified clear AI entry points, I worked on making review more efficient. Developers could accept multiple test cases at once using batch selection. Expandable previews let them quickly scan suggestions. When edits were needed, they could either make changes inline or ask the AI to adjust content.
(Recreated illustration of inline editing and batch selections)
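As a rough sketch of how these review interactions might be modeled, the example below covers batch selection, batch accept, and inline edits. ReviewQueue and every identifier in it are invented for illustration, under the assumption that suggestions carry a simple pending/accepted/rejected status.

```ts
// Illustrative sketch of the batch-review interactions; all names are
// hypothetical and do not reflect the product's actual code.

interface Suggestion {
  id: string;
  title: string;
  body: string;
  status: "pending" | "accepted" | "rejected";
  expanded: boolean; // expandable preview for quick scanning
}

class ReviewQueue {
  private selected = new Set<string>();

  constructor(private suggestions: Suggestion[]) {}

  toggleSelect(id: string): void {
    if (this.selected.has(id)) this.selected.delete(id);
    else this.selected.add(id);
  }

  // Batch action: resolve every selected suggestion in one step
  // instead of reviewing 100+ cases one by one.
  acceptSelected(): void {
    for (const s of this.suggestions) {
      if (this.selected.has(s.id)) s.status = "accepted";
    }
    this.selected.clear();
  }

  // Inline edit keeps the human in control of the final wording.
  editInline(id: string, newBody: string): void {
    const target = this.suggestions.find((s) => s.id === id);
    if (target) target.body = newBody;
  }
}
```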
Good AI design is not just about adding intelligence. It is about integrating it in ways that feel natural, keep people in control, and make their work easier without getting in the way.