Case Study 01 · Danaher Corporation · SCIEX

AI-Powered Analytics for Mass Spectrometry

Redesigning a complex scientific analytics platform to reduce researcher cognitive load by 60% — and introducing AI-assisted report generation that transformed a 3-hour manual task into a 20-minute guided workflow.

Company: Danaher Corporation (SCIEX)
My Role: Lead Interaction Designer II
Duration: Jul 2021 – Present
Platform: Desktop SaaS · Windows
Overview

Making advanced science accessible to every researcher

SCIEX is a global leader in mass spectrometry — instruments and software used by pharmaceutical researchers, forensic scientists, food safety labs, and clinical diagnostics teams worldwide. Their software platform is the nerve centre of every experiment: it controls instruments, processes millions of data points, and generates regulatory-grade scientific reports.

As Lead Interaction Designer, I own the end-to-end UX for this platform — a suite of interconnected tools that scientists rely on daily. When I joined, the software was widely described as "powerful but punishing". My mandate: make it match the intelligence of the scientists using it.

AI/LLM UX · Data Visualisation · Complex Workflows · Design Systems · Enterprise SaaS · Figma · Scientific UX
01 · Discovery · Weeks 1–6
02 · Research Synthesis · Weeks 7–9
03 · IA & Concept · Weeks 10–14
04 · Design & Prototype · Weeks 15–22
05 · Test & Iterate · Weeks 23–28
06 · Ship & Measure · Ongoing

Problem Statement

When powerful software becomes a bottleneck to science

Scientists were spending 30–50% of their experiment time wrestling with the software rather than doing science. Three critical failure zones emerged from early observation:

🔀
Workflow Fragmentation
Completing a single analysis required navigating 5–7 disconnected modules. Context was lost at every transition. Scientists kept paper notes to track their own progress through the software.
📋
Manual Reporting Drain
Generating a regulatory-grade experiment report took 2–4 hours of manual copy-paste from data tables to Word templates. Inconsistencies caused compliance escalations.
🎓
Prohibitive Learning Curve
New researchers needed 3–6 months of supervised training before running experiments independently. This was creating a critical hiring bottleneck for SCIEX customers.

"I have a PhD in biochemistry. I should not need a separate certification to run this software. The science is hard — the software should make it easier."

— Research Scientist, pharmaceutical company, 6 years' experience

Research & Discovery

Understanding scientists on their own territory

Researching a highly specialised domain required more than user interviews. I needed to understand mass spectrometry deeply enough to design for it. Over 6 weeks, I ran an immersive multi-method research programme across three continents.

🔭
Method 1
Lab Immersion (60+ hrs)
Observed 12 scientists across pharma, forensic, and clinical labs. Witnessed the full experiment lifecycle — from instrument calibration to report submission — in real working conditions.
🗣️
Method 2
Depth Interviews (18 sessions)
1-hour structured interviews with scientists at 3 experience levels. Mapped mental models, vocabulary, and the gap between how they thought about their work vs. how the software organised it.
📊
Method 3
Telemetry Analysis
Analysed 6 months of in-app event data from 800+ active users. Discovered that 70% of Batch Processing sessions were abandoned midway — a failure invisible in support tickets. (A sketch of this kind of funnel analysis appears after this list.)
🧪
Method 4
Task Analysis
Deconstructed 8 core scientific workflows into 200+ individual cognitive tasks. Mapped where the interface created unnecessary decision load vs. where it aligned with the scientific method.
🃏
Method 5
Card Sorting (20 participants)
Ran open and closed card sorts with scientists to build a new IA grounded in their mental model of what belongs together — not the engineering team's module boundaries.
🏆
Method 6
Competitive Benchmarking
Evaluated 5 competing platforms and 3 non-scientific complex tools (ERP, CAD) for interaction patterns that reduced cognitive load in data-heavy, multi-step workflows.
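
To ground Method 3: the 70% abandonment figure is the kind of result a simple start/finish session funnel over raw event data produces. The TypeScript sketch below is illustrative only; the event names and the telemetry schema are my assumptions, not SCIEX's actual instrumentation.

```typescript
// Minimal session-funnel sketch for the Batch Processing abandonment
// analysis described in Method 3. Event names and the event shape are
// illustrative assumptions, not the real telemetry schema.

interface TelemetryEvent {
  sessionId: string;
  name: string;      // e.g. "batch.started", "batch.finished"
  timestamp: number; // epoch milliseconds
}

/** Fraction of Batch Processing sessions that started but never finished. */
function abandonmentRate(events: TelemetryEvent[]): number {
  const started = new Set<string>();
  const finished = new Set<string>();
  for (const e of events) {
    if (e.name === "batch.started") started.add(e.sessionId);
    if (e.name === "batch.finished") finished.add(e.sessionId);
  }
  if (started.size === 0) return 0;
  let abandoned = 0;
  for (const id of started) {
    if (!finished.has(id)) abandoned += 1;
  }
  return abandoned / started.size; // ≈ 0.70 in the data described above
}
```
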

Three user archetypes emerged from research

🔬
The Principal Scientist
PhD · 10+ years · Power user
Needs full control over method parameters. Wants to automate repetitive reporting so they can focus on interpretation.
Spends 40% of the week on admin documentation that the software should generate automatically.
🧫
The Lab Analyst
MSc · 2–5 years · Daily operator
Needs to run standardised protocols reliably and flag anomalies quickly. Hates error states with no recovery guidance.
Gets blocked mid-analysis by cryptic error codes. Has to interrupt senior colleagues to continue.
🎓
The New Researcher
BSc / MSc · 0–18 months · Onboarding
Wants to learn the instrument and software together without feeling overwhelmed. Needs contextual help, not a 400-page manual.
Currently requires 3–6 months of supervised training before running any experiment independently.

Key research findings

💡
Finding 1 — Navigation mirrors engineering, not science
The 7-module structure matched how the software was built internally. Scientists think in experiment workflows: Set Up → Run → Analyse → Report. The navigation didn't reflect this at all.
💡
Finding 2 — Reporting is a primary workflow, not an afterthought
Scientists spent more time on report generation than on instrument configuration. Yet the reporting tool was buried 4 levels deep and offered zero automation. It was the highest-pain, most-ignored part of the product.
💡
Finding 3 — Error states are trust-destroyers
48% of research participants had abandoned an experiment midway due to a confusing error state. None of the errors explained what went wrong or what to do next. Scientists defaulted to starting over.
💡
Finding 4 — Scientists want to trust AI, but need to verify it
When shown a prototype with AI-generated report text, 100% of scientists said they'd use it — but only if they could see the data the AI drew from and edit any generated section. Trust = transparency + control.

Design Process — End-to-End Workflow

From scattered modules to a unified scientific workflow

I led design across five interconnected workstreams, each addressing a different dimension of the platform's complexity. Here's the full process, phase by phase.

1
Phase 1 · Weeks 10–12
Information Architecture Redesign
Using card sort data and task analysis, I rebuilt the IA from scratch. The 7-module flat structure was replaced with a workflow-centric navigation model: Set Up → Run → Analyse → Report — matching exactly how scientists think about their experiments. Each stage surfaced only the controls relevant to that phase, reducing visible options by 58%.
IA diagram · Navigation prototype · Card sort report · Stakeholder sign-off
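
A minimal sketch of the stage-scoped navigation idea, assuming hypothetical control names (the Set Up → Run → Analyse → Report stages are from the shipped IA; everything else here is my assumption, not the product's code):

```typescript
// Workflow-centric navigation: each stage declares the only controls it
// surfaces, which is how visible options were reduced. Control names are
// hypothetical; the four stages match the case study's IA.

type Stage = "setUp" | "run" | "analyse" | "report";

const stageControls: Record<Stage, string[]> = {
  setUp:   ["methodEditor", "sampleList", "calibration"],
  run:     ["runQueue", "instrumentStatus", "liveTrace"],
  analyse: ["peakReview", "quantTable", "spectraViewer"],
  report:  ["templatePicker", "aiDraft", "auditTrail"],
};

const stageOrder: Stage[] = ["setUp", "run", "analyse", "report"];

/** Only the current stage's controls are visible. */
function visibleControls(current: Stage): string[] {
  return stageControls[current];
}

/** The workflow advances linearly, mirroring the scientific method. */
function nextStage(current: Stage): Stage | null {
  const i = stageOrder.indexOf(current);
  return i < stageOrder.length - 1 ? stageOrder[i + 1] : null;
}
```
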
2
Phase 2 · Weeks 13–16
Batch Processing Module Redesign
The Batch Processing module had a 70% midway abandonment rate. I redesigned it as a guided 5-step wizard with real-time validation, contextual help tooltips, and progress persistence (so if a session was interrupted, work was auto-saved). Each step was contained — users couldn't proceed to step 3 without completing step 2, eliminating configuration errors from out-of-order inputs.
Key decision: added a pre-flight checklist at the start showing every input that would be needed, so scientists could gather reagent data and instrument settings before beginning — eliminating the most common cause of midway abandonment.
Wizard flow · Error state library · Figma prototype · Usability test report
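
To make the step-gating and progress-persistence behaviour concrete, a sketch under assumed names follows; persistence is shown with localStorage purely for illustration, not as the desktop platform's actual storage layer.

```typescript
// Guided-wizard sketch: a step is reachable only once every earlier step
// has passed validation, and progress is auto-saved so an interrupted
// session resumes where it left off. All names are hypothetical.

interface WizardStep {
  id: number;          // 1..5 in the redesigned Batch Processing wizard
  isComplete: boolean; // set true once real-time validation passes
}

interface WizardState {
  steps: WizardStep[];
  currentStep: number;
}

/** Out-of-order input is impossible: step N needs steps 1..N-1 complete. */
function canEnterStep(state: WizardState, target: number): boolean {
  return state.steps
    .filter((s) => s.id < target)
    .every((s) => s.isComplete);
}

/** Auto-save on every change; illustration only, not the real storage. */
function persist(state: WizardState): void {
  localStorage.setItem("batchWizard", JSON.stringify(state));
}

function restore(): WizardState | null {
  const raw = localStorage.getItem("batchWizard");
  return raw ? (JSON.parse(raw) as WizardState) : null;
}
```
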
3
Phase 3 · Weeks 17–20
AI-Powered Report Generation
This was the most technically complex and highest-stakes feature of the project. Using the platform's LLM integration, I designed an AI report generation workflow that could auto-draft regulatory-grade scientific documentation from experiment data — transforming a 3-hour manual process into a guided 20-minute review workflow.
The core design challenge was trust. Scientists are trained to question any output — including AI. The design had to make the AI legible: every AI-generated sentence showed exactly which data point it was drawn from (hover to highlight in source table). Scientists could accept, edit, flag, or regenerate any section independently. The AI never replaced their judgement — it wrote the first draft.
I designed three trust-building mechanisms: (1) Provenance indicators — colour-coded highlighting linking AI text to source data rows; (2) Confidence signals — the AI surfaced its own uncertainty ("This value is outside normal range — please verify"); (3) Full override — scientists could regenerate any section with different parameters, or switch to manual mode with one click.
AI UX spec · Trust model · Provenance UI · Override flows · Figma prototype
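
One way to see how the three trust mechanisms hang together is as a data model: every drafted section carries its provenance, its confidence, and the scientist's review decision. This is a hypothetical sketch, not the shipped schema.

```typescript
// Hypothetical data model for an AI-drafted report section with the three
// trust mechanisms: provenance, confidence signals, and full override.

interface Provenance {
  sourceTableId: string; // which data table the sentence draws from
  rowIds: string[];      // highlighted in the source table on hover
}

type Confidence =
  | { level: "high"; basis: string }     // e.g. "generated from 847 data points"
  | { level: "verify"; reason: string }; // e.g. "value outside normal range"

type ReviewState = "pending" | "accepted" | "edited" | "flagged";

interface ReportSection {
  id: string;
  aiDraft: string;
  provenance: Provenance[]; // one entry per generated sentence
  confidence: Confidence;
  review: ReviewState;
  manualOverride?: string;  // one-click switch to fully manual text
}

/** The scientist's words always win over the AI's first draft. */
function finalText(section: ReportSection): string {
  return section.manualOverride ?? section.aiDraft;
}
```
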
4
Phase 4 · Weeks 19–22
Data Visualisation & Spectra Display
Mass spectrometry data is visually dense: chromatogram peaks, mass spectra, isotope distributions, and calibration curves all needed to be displayed simultaneously without overwhelming scientists. I designed a progressive disclosure visualisation system — overview charts with drill-down into detail on demand — that reduced the visible data density by 40% while keeping all information accessible within 2 clicks.
I also redesigned the Biophase electrophoresis visualisation tool, introducing a comparison view that let scientists overlay multiple protein separation results side-by-side — a workflow that previously required exporting to Excel.
Chart system · Drill-down patterns · Comparison view · Responsive breakpoints
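
The progressive disclosure system can be read as a small configuration problem: an overview level plus drill-down levels, with every chart at most two clicks deep. A sketch with assumed chart names:

```typescript
// Progressive-disclosure sketch: overview first, detail on demand, and
// nothing more than two drill-downs away. Chart names are illustrative.

interface DisclosureLevel {
  depth: 0 | 1 | 2; // 0 = overview; 2 = deepest detail
  charts: string[];
}

const spectraView: DisclosureLevel[] = [
  { depth: 0, charts: ["chromatogramOverview"] },
  { depth: 1, charts: ["peakDetail", "massSpectrum"] },
  { depth: 2, charts: ["isotopeDistribution", "calibrationCurve"] },
];

/** Everything a scientist has drilled into so far, overview included. */
function chartsAtDepth(maxDepth: 0 | 1 | 2): string[] {
  return spectraView
    .filter((level) => level.depth <= maxDepth)
    .flatMap((level) => level.charts);
}
```
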
5
Phase 5 · Weeks 20–24 (parallel)
Design System — Scientia DS
I built Scientia, a domain-specific design system in Figma with 240+ components tailored for scientific data interfaces. Standard component libraries don't account for data tables with 50+ columns, instrument status indicators with 12 states, spectra chart controls, or regulatory compliance annotation tools. Scientia was built from the ground up for the domain.
The system included: a colour-coded data state system (nominal / warning / anomaly / out-of-range), a dense table component optimised for high-information-density displays, a Figma variable system connecting to engineering tokens, and comprehensive documentation ensuring handoff accuracy.
240+ components · Figma variables · Design tokens · Storybook docs · Accessibility audit
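
As an illustration of how a token system like this connects design and engineering, here is a sketch of the colour-coded data state tokens; the token names and hex values are assumptions, not the actual Scientia palette.

```typescript
// Data-state tokens shared between Figma variables and engineering.
// Token names and hex values are illustrative, not the real palette.

type DataState = "nominal" | "warning" | "anomaly" | "outOfRange";

const dataStateTokens: Record<DataState, { token: string; hex: string }> = {
  nominal:    { token: "scientia.state.nominal",      hex: "#2E7D32" },
  warning:    { token: "scientia.state.warning",      hex: "#F9A825" },
  anomaly:    { token: "scientia.state.anomaly",      hex: "#C62828" },
  outOfRange: { token: "scientia.state.out-of-range", hex: "#6A1B9A" },
};

/** Dense-table cells resolve their colour from the shared token map. */
function cellColour(state: DataState): string {
  return dataStateTokens[state].hex;
}
```
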
6
Phase 6 · Weeks 23–28
Usability Testing & Iteration
I ran 3 rounds of moderated usability testing — 8 participants per round, spanning all three personas. Each round used think-aloud protocol with screen and eye-tracking recording. Between rounds, I ran rapid iteration sprints: findings on Friday, revised prototype by Tuesday, retested Thursday.
Critical iteration: Round 1 testing revealed that the AI report generation "confidence signals" were being interpreted as warnings by cautious scientists, causing hesitation even for high-confidence outputs. I redesigned the signal language from warning-adjacent (amber / amber-orange) to neutral with positive framing ("High confidence — generated from 847 data points"), which resolved the hesitation behaviour completely in Round 2.
3 test rounds · 24 participants · Task completion reports · Iteration logs

Key design decisions — before & after

Before
7-module flat navigation organised by engineering feature area. Scientists had to map their mental model to the software's structure every time they used it.
After
4-stage workflow navigation (Set Up → Run → Analyse → Report) that mirrors the scientific method. The software now thinks the way scientists think.
Before
AI report generation: single "Generate Report" button. Output was a black box — scientists couldn't see why any statement was written or edit individual sections.
After
AI report generation with full provenance: every AI sentence links to its source data. Accept, edit, regenerate, or override any section. AI assists — scientists decide.
Before
Error states: cryptic code + generic "Contact Support" CTA. Scientists abandoned experiments and restarted from zero rather than troubleshoot.
After
Error states: plain language explanation + specific recovery action + contextual documentation link. Error resolution rate improved by 82%.
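
The error-state redesign amounts to a contract: no error ships without an explanation, a recovery action, and a contextual link. A minimal sketch, with hypothetical field names and an invented example:

```typescript
// Error-state contract sketch: every error carries plain language, a
// specific recovery action, and contextual documentation. Field names,
// the error code, and the URL are all invented for illustration.

interface RecoverableError {
  code: string;          // kept for support and audit logs
  whatHappened: string;  // plain language, no jargon
  howToRecover: string;  // the specific next action
  docsUrl: string;       // contextual documentation, not a generic portal
}

const exampleError: RecoverableError = {
  code: "CAL-204",
  whatHappened: "The calibration curve is missing its lowest standard.",
  howToRecover: "Re-run the 0.1 ng/mL standard, then resume the batch.",
  docsUrl: "https://docs.example.com/calibration/missing-standard",
};
```
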

Outcomes & Impact

Measurable change for real scientists

Post-launch metrics from the first 8 months of the redesigned platform showed significant, sustained improvements across all tracked metrics:

78%
Improvement in task completion on benchmark workflows
65%
Reduction in Batch Processing configuration errors
3h→20m
Report creation time with AI-assisted generation
82%
Increase in error state resolution without support contact
🏆
Onboarding time halved
New researcher time-to-independent operation dropped from 3–6 months to 6–8 weeks following the IA redesign and guided workflow introduction. A direct cost reduction for SCIEX customers.
📈
Design system adoption
Scientia DS reduced design-to-dev handoff time by 40% and became the single source of truth across 4 engineering teams. Zero spec discrepancy issues in the first 3 sprints after adoption.

"The new reporting flow saved our team an entire working day every week. What used to take three hours now takes twenty minutes, and the output is more consistent and defensible in audits."

— Lab Manager, pharmaceutical research facility, post-launch survey

Reflection

What this project taught me

🧬
Domain depth is a superpower
Understanding what a chromatogram peak means, why peak integration matters, and what a scientist is actually looking for at each stage — this knowledge unlocked design decisions that surface-level research couldn't have produced. Invest the time to go deep.
🤖
AI UX is trust architecture first
The technical capability of our AI was secondary to how it was presented. Scientists with 20 years of training don't delegate judgement — they delegate labour. Design AI as a transparent collaborator, not a black box oracle.
🔴
Error states are UX strategy
Error design is often treated as edge-case cleanup. This project proved that error states are trust moments — when a scientist encounters an error and knows exactly what to do, their confidence in the whole system increases. Poor errors destroy confidence in features that actually work fine.
📐
Design systems compound over time
Building Scientia DS felt like overhead in month one. By month six, it was delivering velocity gains that dwarfed the investment. The rule holds: time spent on a shared vocabulary between design and engineering always pays back faster than you expect.