Automotive · ZF Friedrichshafen AG · Field Research · Discovery Engagement · 2024

ZF Diagnostic System

A discovery engagement is only as good as its research. I traveled to shops in three US states, put a prototype in front of mechanics with their hands still oily from the last car, and built the design around what I actually observed, not what the brief assumed.

Role: Product Designer (UX/UI) + Naming & Brand Strategy · 3 states · 18 interviews

18

Professionals Interviewed

3

US States (FL, MO, TX)

A/B

AI vs Standard Search

10.8%

Outreach Success (vs 1–5% expected)

The Design Problem

ZF asked us to design a diagnostic platform for automotive technicians, a challenger to AllData, Mitchell 1, and Identifix. I led naming, brand strategy, and end-to-end UX/UI. The real design problem wasn't how to build the product; it was understanding what mechanics actually needed versus what we assumed they needed, before committing to anything.

Image placeholder Overview: main diagnostic search interface
Field Research

"Go to the user. Pay the cost." The single best decision on this project was committing to in-person research in three states before designing anything.

FL

South Florida

Round 1. Four shops in Boca Raton and the surrounding area. A 10.8% cold-outreach success rate, against an industry expectation of 1–5%.

Prototype v1 → Interviews → Synthesis → Prototype v2

MO

St. Louis, Missouri

Round 2. ZF partner network shops, a very different repair-shop ecosystem from South Florida's. Added the A/B test structure: the same flow with and without AI assistance, run in the same session.

4 interviews + A/B structure introduced

TX

Houston, Texas

Round 2 continued. Independent shops, family-owned shops, dealer shops, specialist shops (transmissions, European imports). Confirmed patterns from FL and MO.

4 interviews · 18 total across all rounds

What the Research Found
01

Speed is the only feature that matters

A senior BMW mechanic in St. Louis put it bluntly: when his current system is slow, it's a paycheck problem. Flat-rate mechanics lose money every second waiting for a search result. I designed every search interaction around this single constraint: aggressive autocomplete, minimal path from "car in front of me" to "procedure on screen."

02

Interactive wiring diagrams are the killer feature

This came up at almost every shop, unprompted, with the exact same feature description every time: click a circuit, and the entire path through the schematic highlights. One mechanic said he'd switch systems for that single capability. I added it as a primary feature: research-driven, not assumption-driven.

03

License plate search is underrated

Almost every shop preferred typing a plate to typing a VIN: shorter, more error-tolerant, and readable without leaving the car. VIN lookup is the obvious feature to lead with; plate search is what mechanics actually asked for, so it became the first option in the picker, not the second.

04

AI: genuine interest, legitimate concern

Mechanics were intrigued by AI but concerned that junior techs would lean on summaries instead of actually diagnosing. The A/B test surfaced this honestly: asking "what do you think of AI?" gets polite answers; showing the same flow with and without it at 4:45pm on a Friday gets the real answer. Result: AI became a contextual layer woven through search and document pages, not a destination.

05

Pay structure shapes everything

Hourly-paid mechanics have time to engage with software. Flat-rate mechanics lose money any minute they're not turning a wrench. A product that requires any meaningful learning curve is unsellable to flat-rate shops. This shaped onboarding defaults, search latency targets, and every decision about progressive disclosure throughout the product.

Image placeholder A/B test: AI-assisted vs standard search prototype
Image placeholder Field research: shop interview session
Engagement Structure

Design Sprint methodology (Google Ventures)

Phase 0.0a

Naming, brand strategy, competitive analysis

Phase 0.0b

Personas, journey, prototype v1

Field Round 1

South Florida · 4 shops

Synthesis

Prototype v2 + A/B structure

Field Round 2

St. Louis + Houston · 8 shops

Phase 0 Report

Recommendations → Delivered

The engagement closed after Phase 0. The research and prototype were the deliverable. A discovery phase that concludes with "not yet, and here's why" is a successful discovery phase.

Image placeholder Hi-fi UI: search results with AI assistance
Image placeholder Interactive wiring diagram: circuit highlight
Challenges

Recruiting the right people is the hardest part of field research

Automotive technicians are difficult to recruit through traditional channels; they're busy, skeptical of software vendors, and don't respond to generic cold emails. I solved this by walking into shops directly with a working prototype and asking for 20 minutes. The 10.8% success rate on cold outreach looks low until you compare it to the 1–5% industry expectation for comparable research. The in-person approach also produced richer data than any remote session would have. You learn things sitting next to someone while a car is on a lift that you simply cannot learn over Zoom.

Designing an A/B test without lab conditions

Running a meaningful A/B test inside uncontrolled shop environments (different mechanics, different levels of experience, different states of distraction) required designing the test carefully to minimize variables I couldn't control. I standardized the task, the script, and the sequence. What I couldn't standardize was the environment, which is actually part of the point: if AI assistance adds value at 4:45pm in a loud shop, that's meaningful. If it only helps in quiet conditions, that matters too.

Presenting research that expanded the scope

The initial brief was narrower than what the research showed was actually needed. Presenting findings that implicitly challenged the project's original framing (and recommending a scope that changed the roadmap) required making the evidence undeniable before making the recommendation visible. I structured the Phase 0 report to let the research speak first and the recommendation follow, rather than leading with the ask and defending it afterward.

What I'd Take Forward

On field research

Cold emails to mechanics have terrible response rates. Walking into a shop and demoing a prototype while a car sits on a lift gives you information you simply cannot get any other way. I traveled for it. I'd travel for it again.

On A/B in qualitative sessions

Asking "what do you think of AI?" gets polite answers. Showing two versions of the same task and asking which they'd use at 4:45pm on a Friday gets the real answer. Embedding A/B into qualitative sessions is something I'll apply wherever AI features are on the table.

On credit

I led naming and strategy. Another designer executed the visual identity. Calling that "I did the brand" would be cheap. The split made the work better, and being transparent about it costs me nothing.
