AI Design · UX Research · Enterprise · Human-in-the-Loop · 2023–Present

AI-Assisted Call Summaries

OneView is Rogers' internal agent platform used by Care and Retail agents across Canada. I drove the design of its AI-assisted interaction creation workflow — turning a broken 15-second documentation window into a review-first experience where AI drafts and agents confirm.

Role: Primary Designer
Company: Rogers Communications
Tools: Figma · FigJam · Rogers CDK
Focus: AI Interaction Design
Full interaction creation flow — Care and Retail designs in Figma

Overview

OneView is Rogers Communications' internal agent platform, used by Customer Care and Retail agents to manage customer accounts and document every customer contact. An "interaction" is the official record of what happened — what was discussed, what was offered, and how it resolved — so the next agent doesn't have to ask the customer to start over.

The Problem

The business asked me to simplify the interaction creation form. But the research showed a deeper issue.

  • Care agents had 30 seconds after each call to document the interaction — later cut to 15 seconds
  • Agents were taking notes in personal notepads and sticky notes during calls, then copying them into OneView manually — a real privacy and compliance risk
  • Notes in the system were often incomplete, vague, or categorized randomly just to meet the time limit
  • The next agent to open that account had little to no useful context about what happened before
  • A cleaner form couldn't fix the core issue: agents were reconstructing entire conversations from memory under time pressure

The real problem wasn't the interface. It was the workflow.

Before & After

Maestro — the legacy tool. Over 500 reasons across three dropdowns, a narrow free-text field, and 15 seconds to fill it all in from memory
Before — Maestro: over 500 reasons across three dropdowns, a narrow free-text field, and 15 seconds to fill it all in from memory.
Final design in Oneview — AI-generated interaction draft with pre-filled summary, resolution, and next steps ready for agent review and confirmation
After — Final design: AI generates a structured draft with summary, resolution, and next steps. Agent reviews, edits inline, and confirms.

After: Flows for Care and Retail versions

Animated flow showing the full Care and Retail interaction creation experience with AI-generated draft
Care success flow — full AI-assisted interaction creation from trigger to confirmation
Care (call center) flow — the success path as delivered: the AI generates the content, the agent reviews it, edits if needed, and confirms.
Retail success flow — simplified manual interaction creation with structured note fields
Retail (stores) flow — simplified to a few selections and better-structured text fields for the notes.

What changed

  • Replaced manual post-call reconstruction with AI-generated drafts — agents review instead of write from scratch
  • Introduced a consistent structure (Call Summary / Call Resolution / Next Steps) across every interaction
  • Added "AI-generated" labeling so agents always know what was auto-filled and what needs their attention
  • Built inline editing per field so agents can correct anything without leaving the flow
  • Added agent feedback controls (thumbs up/down) so AI accuracy improves over time
  • Designed a manual fallback for Retail and an API fail state — the workflow never breaks completely
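As a rough illustration of the record and inline-edit behaviour described above, the structured draft could be modelled like this. This is a hypothetical TypeScript sketch; all type and field names are illustrative, not Rogers' actual schema:

```typescript
// Hypothetical model of the structured interaction draft (illustrative only).
type FieldSource = "ai" | "agent";

interface InteractionField {
  label: "Call Summary" | "Call Resolution" | "Next Steps";
  text: string;
  source: FieldSource;   // drives the "AI-generated" badge
  lastEditedAt?: Date;   // set when an agent edits the field inline
}

interface InteractionDraft {
  fields: InteractionField[];
  agentName: string;            // accountability stays with the human
  feedback?: "up" | "down";     // thumbs up/down on the AI draft
  confirmed: boolean;
}

// An inline edit replaces the text, flips the field's source to the
// agent, and stamps the edit time — without mutating the original draft.
function editField(
  draft: InteractionDraft,
  label: string,
  text: string
): InteractionDraft {
  return {
    ...draft,
    fields: draft.fields.map((f) =>
      f.label === label
        ? { ...f, text, source: "agent" as FieldSource, lastEditedAt: new Date() }
        : f
    ),
  };
}
```

The key property is that provenance travels with each field, so the UI can always show which parts are AI-generated and which the agent has touched.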

My Role

I was the primary designer on this project from initial brief to final delivery. The design decisions — from the AI concept to every edge case — were mine to drive.

  • Problem reframe: Identified that a simpler form wouldn't fix the 15-second documentation problem — and pushed the solution further
  • AI concept: Proposed using Rogers' existing call recordings and Agent Assist to auto-generate interaction summaries
  • Human-in-the-loop design: Defined the review → edit → feedback → confirm flow that keeps agents in control
  • Window behavior: Designed how the interaction window floats, minimizes, and persists so agents can reference account context at any time
  • Interaction history: Designed the widget and full Activity History page so better-documented interactions benefit the next agent immediately
  • Error states: Designed API failure, unsaved changes, cancel confirmation, and Maestro fallback — every failure path handled
  • Components & CDK contribution: Built a component library for the file and introduced a new icon button pattern and icons later adopted into Rogers' CDK library

Process

What I found in research

We ran at least three research sessions over the course of the project and listened in on real calls between agents and customers. Those insights informed the design decisions that followed.

The most focused session was run alongside a UX Researcher with 16 agents from different departments — Care, Telesales, Billing, and Support.

Generative research board — 16 agents across Care, Telesales, Billing and Support: key findings, interaction mapping, and design directions
Generative Research: Interaction project — September 2023, by Adriano Renzi. 16 agents across departments: key points, interaction mapping, documentation pain points, and design directions.
  • Agents used shortkey tools and personal templates just to keep up with time pressure
  • Some asked customers to wait on hold while they finished documenting
  • Categories were selected approximately — the taxonomy didn't match how agents thought about calls
  • Agents preferred the floating window concept — they wanted to open it on every page they navigate
  • OneView instability caused some agents to avoid using it for notes altogether

A cleaner form would help. It wouldn't be enough.

First direction: improving the manual flow

Business confirmed that interaction reasons were necessary but could be simplified. Maestro had over 500 reasons across three dropdowns — something agents complained about consistently. We kept the concept per the business ask but pushed hard to reduce the selections.

I also considered a manual notes section for agents to write during the call. Research changed that thinking: OneView can crash, and agents would lose their in-progress notes. Business confirmed the AI could still access the call recording from the third-party app even if OneView went down — so real-time manual notes inside OneView wasn't the right solution.

Even so, I kept a manual entry path for cases where an error makes the AI-generated interaction impossible — following the same pattern as the Retail flow, which is always manual.

The pivot: AI-assisted generation

The manual form improved the experience — but agents still had to reconstruct the call from memory. That's when I identified the real opportunity: Rogers already recorded all Care calls, and Agent Assist was already in use in chat. The pieces were there. I proposed using call recordings + Agent Assist to generate the interaction automatically.

The first explorations kept some elements in the agent's control — the AI would generate notes, but categories and product lines were still manually selected. The thinking at the time was that categorization required human judgment.

First AI exploration — AI generates notes after the call, agent still selects categories and product lines manually
First AI exploration — AI generates notes after the call, agent still selects categories and product lines manually.
Second AI exploration — AI generates both notes and categories, agent reviews and edits, with Generate from Agent Assist as primary action
Second exploration — AI generates both notes and categories, agent reviews and edits. "Generate from Agent Assist" as primary action, manual mode as fallback.

After further discussion with the AI team, they confirmed that categorization could also be automated — Agent Assist could generate topics, reasons, and product lines from the recording, not just the notes. This shifted the final design: AI first, manual entry only as a fallback if the API fails.

Designing the AI experience

The core principle: AI generates, human confirms. Never silent automation.

Full Care flow showing all states: default, AI triggered, interaction generated, edit mode, and success state
Full Care flow: Default state → AI triggered → Interaction generated → Edit mode → Success state

Key decisions:

  • "AI-generated" label on every auto-filled field — no ambiguity
  • Inline editing — no modal, no separate mode, no interruption to the flow
  • "Last edited at [time]" timestamp when an agent modifies the summary
  • Agent name attached to every submission — accountability stays with the human
  • Thumbs up/down feedback attached to the review state
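The review-first flow above can be read as a small state machine. The sketch below is a hypothetical TypeScript rendering of those allowed transitions (state names are mine, not the shipped implementation):

```typescript
// Hypothetical states of the interaction window, based on the flow
// described in this case study (illustrative names only).
type FlowState = "default" | "generating" | "review" | "confirmed" | "error";

// Allowed transitions: AI generates, the agent reviews (inline edits keep
// them in review), and confirmation ends the flow. An API error offers
// retry or a switch to manual entry using the same structured form.
const transitions: Record<FlowState, FlowState[]> = {
  default: ["generating"],
  generating: ["review", "error"],
  review: ["review", "confirmed"], // inline edits loop within review
  error: ["generating", "review"], // retry, or fall back to manual entry
  confirmed: [],                   // terminal: record is submitted
};

function canMove(from: FlowState, to: FlowState): boolean {
  return transitions[from].includes(to);
}
```

Notice there is no path from `generating` straight to `confirmed` — the agent always passes through `review`, which is the "AI generates, human confirms" principle expressed structurally.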

Error states and fallbacks

OneView can crash. APIs can fail. The design handles both without leaving agents stranded.

AI generation failed — error message with options to retry or add the interaction manually
AI / API fail — AI generation hits a technical error. The agent can retry or switch to manual entry using the same structured form. The workflow never dead-ends.
Failed to save to Maestro — full interaction content displayed with a Copy interaction button for manual paste into Maestro
Failed to save to Maestro — If the API can't write the interaction to Maestro, the full content is surfaced with a Copy button. Agent pastes it directly into Maestro as a fallback.
  • If AI generation fails → agent can try again or add manually — no dead end
  • If the interaction fails to save to Maestro → content is copied so agents can enter it themselves
  • Business confirmed temporary recording is feasible, making this fallback viable
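The fallback policy in the bullets above can be sketched as a single submission routine. This is a speculative illustration, assuming `generateDraft` and `saveToMaestro` are async service calls (both names are invented for this sketch):

```typescript
// Hypothetical outcome of a submission attempt (illustrative only).
type Outcome =
  | { kind: "saved" }
  | { kind: "manual-entry" }                      // AI failed: agent fills the same form
  | { kind: "copy-to-maestro"; content: string }; // save failed: agent pastes manually

// Try AI generation (with one retry), then try to save; every failure
// path resolves to an actionable fallback rather than a dead end.
async function submitInteraction(
  generateDraft: () => Promise<string>,
  saveToMaestro: (content: string) => Promise<void>,
  retries = 1
): Promise<Outcome> {
  let content: string | null = null;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      content = await generateDraft();
      break;
    } catch {
      // retry; if all attempts fail, fall through to manual entry
    }
  }
  if (content === null) return { kind: "manual-entry" };
  try {
    await saveToMaestro(content);
    return { kind: "saved" };
  } catch {
    // surface the full content with a Copy button for manual paste
    return { kind: "copy-to-maestro", content };
  }
}
```

Every branch returns something the agent can act on, which is the "never dead-ends" property the design calls for.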

Retail flow

Retail agents work in-store with more time and no call recording. The flow is manual by design — same structured format, no AI generation.

Retail simplified manual mode — structured form with same success state as Care flow
Retail: simplified manual mode — agent fills in the structured form directly, same success state as Care.

Why a floating window?

Research told us directly: agents check account information and previous interactions before submitting. A static drawer would block that context.

Most agents used third-party note apps during calls to avoid losing information. They said they wanted a floating window they could open from any page they navigate to — not one fixed location.

I looked at familiar floating window patterns — specifically Gmail's compose window — to inform the behaviour.

Gmail compose window shown in three states: compact floating, fullscreen, and minimized — used as reference for OneView's interaction window behaviour
Reference — Gmail compose: can be minimized, maximized, or opened in a new tab, and persists across every page. The main design reference for the floating interaction window in OneView.

The interaction window floats, minimizes, and persists across pages. Closing saves the draft. Agents can open it from anywhere in OneView, drag it to any position, and reopen it with notes intact — whether they're reviewing an AI summary or entering notes manually.

Two states side by side: default OneView with Interaction icon in header, and the floating Add Interaction window open over the customer profile page
Window behaviour on desktop — default state shows the Interaction entry point in the header (left); clicking it opens a floating window that stays on top while the agent navigates OneView (right). Closing minimizes and keeps the draft active.

Components and CDK contribution

I built components throughout the file to manage changes efficiently across screens and states. Several of those contributions made it into Rogers' CDK library.

The interaction icon and all window control icons — drag, minimize, expand, fullscreen, and the interaction log icon — didn't exist in the CDK. I designed them from scratch, following Google Material icon patterns and existing Rogers CDK icons as references, and applying Rogers' visual style to keep them consistent with the broader system.

One interaction pattern I introduced — an icon button with a label that appears on hover — was later adopted by other designers and added to Rogers' CDK library. It's now used across multiple OneView features beyond this project.

New icons contributed to Rogers CDK: interaction log, AI feedback controls, close confirmation, and interaction window controls
CDK contributions — new icons designed for this project: interaction log, AI feedback controls with states, close confirmation actions, and all interaction window control icons. Designed following Google Material patterns and Rogers CDK style.

Outcome

The design is complete and in active development, with the AI team involved in implementation.

The scope expanded well beyond the original brief. What started as a form simplification became a full workflow redesign — validated by product leadership and stakeholders.

Interaction History — a direct reflection of the new format

I was the sole designer responsible for both the Activity History page and the Interactions widget — end to end, from concept to dev-ready specs.

These aren't separate features — they're the visible outcome of the new interaction format. Every AI-generated summary, every agent edit, every structured Call Summary and Call Resolution will surface here for the next agent to reference before or during a call. The "Generated by Agent Assist" and "Edited by agent" labels make the source of every record transparent.

Activity History page — searchable interaction log with expanded row showing AI-generated call summary, resolution, and next steps
Activity History page — full searchable and filterable log. Expanding a row reveals the AI-generated Call Summary, Call Resolution, and Next Steps, with the source label visible inline.
Interactions widget on the customer Overview page showing recent interactions list and a detail modal with full AI-generated summary
Interactions widget on the Overview page — compact list of recent interactions with source and status. Clicking the detail icon opens a modal with the full structured summary.

Expected impact

  • Less time spent on post-call documentation for Care agents
  • Consistent, structured interaction records across all agents and channels
  • Customer data stays inside official systems — privacy and compliance risk reduced
  • Future agents have real context before picking up a conversation
  • AI accuracy improves over time through agent feedback

Metrics and user feedback will be added once the feature reaches broader rollout.

Tools & Methods

  • Figma
  • FigJam
  • Rogers CDK
  • AI Interaction Design
  • User Interviews
  • WCAG 2.x AA
