States → Continuity
Every component in a traditional design system has states. Default, hover, active, disabled, error. You’re in one state or another. The design system defines each state with specific visual treatments, and transitions between them are instantaneous or use a short animation.
AI doesn’t work in states. It works in probabilities.
The Shift
A response isn’t “loading” or “loaded” — it’s streaming, and the stream carries information at every moment: how fast tokens arrive, how long the response is getting, whether the model is producing code or prose. Confidence isn’t a boolean — it’s a gradient from “I’m certain” to “I have no idea.” A design system that can only express these as discrete switches is lying about what’s actually happening underneath.
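To make that concrete, here is a minimal sketch of what "driven by the stream" could mean in practice. Everything in it is an assumption for illustration — the names, the 40 tokens/s ceiling, the opacity floor — not the API of any shipping system:

```typescript
// Hypothetical sketch: derive continuous UI parameters from stream
// telemetry instead of a boolean loading flag. All names and constants
// are illustrative assumptions.

interface StreamSample {
  tokensPerSecond: number;     // observed token arrival rate
  meanTokenConfidence: number; // e.g. exp(mean logprob), in [0, 1]
}

interface StreamVisuals {
  pulseRate: number; // animation speed scales with throughput
  opacity: number;   // certainty rendered as a gradient, not a switch
}

const clamp01 = (x: number): number => Math.min(1, Math.max(0, x));

function visualsFor(sample: StreamSample): StreamVisuals {
  // Normalize throughput against an assumed nominal ceiling of 40 tok/s.
  const throughput = clamp01(sample.tokensPerSecond / 40);
  return {
    pulseRate: 0.5 + 1.5 * throughput,                        // 0.5x–2x
    opacity: 0.4 + 0.6 * clamp01(sample.meanTokenConfidence), // 0.4–1.0
  };
}
```

The point is the shape of the mapping: every frame of the stream produces a valid position on each spectrum, so there is no moment where the interface has to lie with a binary.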
And a single conversation often isn’t a single thread anymore. One user prompt might kick off sub-tasks — an agent searching the web while another queries a database while a third drafts a response. Background threads doing work you didn’t explicitly ask for but that the system anticipated. Sub-agents handing off to each other. The interface needs to represent all of this: not one linear stream, but multiple concurrent activities at different stages of completion, with different confidence levels, some foregrounded and some running quietly in the background. That’s a lot of simultaneous spectrums to manage.
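One way to see how much this changes the data model: sketch the conversation as a set of concurrent activities, each carrying its own continuous properties. This is an illustrative assumption, not a proposal from any named system:

```typescript
// Illustrative data model (all names assumed): a conversation as
// concurrent activities, each on its own spectrums, rather than one
// linear stream with a single loading state.

interface Activity {
  label: string;                           // "searching the web", ...
  progress: number;                        // 0..1, continuous
  confidence: number;                      // 0..1 gradient
  prominence: "foreground" | "background";
}

// One possible rendering rule: foregrounded work first, and within that,
// whatever is furthest from done — attention goes where work remains.
function renderOrder(activities: Activity[]): Activity[] {
  return [...activities].sort((a, b) => {
    if (a.prominence !== b.prominence) {
      return a.prominence === "foreground" ? -1 : 1;
    }
    return a.progress - b.progress;
  });
}
```

Even this toy version makes the design problem visible: the interface is no longer ordering messages in time, it is continuously re-prioritizing parallel spectrums.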
This isn’t a new idea, by the way. Bill Buxton called this out years ago: designers think in states, but what really matters is designing the transitions — the negative space-time between key moments. Spectrums over snapshots. Design teams have been aspiring to this for a long time; some have even applied it to user research, replacing monolithic personas (“Mary, 38, single”) with property spectrums where you design for the full range, not just the mean. AI doesn’t invent the need for continuity. It makes continuity mandatory, because the data driving the interface is itself continuous.
Continuity means visual properties driven by continuous data, not toggled by events. Spring physics where tension and damping respond to input values. Shape morphing where geometry blends between states based on a parameter, not a flag. Animation curves driven by live data — how confident the model is, how far through a process you are, how much has changed — not by “enter” and “exit” triggers. The shift is from “what state am I in?” to “where am I on this spectrum, right now, and how is that changing?” — multiplied across every concurrent thread in the conversation.
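A minimal sketch of that first ingredient — a spring that tracks a continuously changing target rather than toggling between enter and exit. The stiffness and damping constants here are illustrative defaults, not tuned values from any particular library:

```typescript
// Sketch: a damped spring whose target can change every frame as live
// data arrives (e.g. a confidence value). It never "switches states";
// it just keeps converging. Constants are illustrative assumptions.

interface Spring {
  value: number;
  velocity: number;
}

function stepSpring(
  s: Spring,
  target: number,
  dt: number,
  stiffness = 170, // tension: how hard the spring pulls toward target
  damping = 26,    // friction: how quickly oscillation dies out
): Spring {
  const force = stiffness * (target - s.value) - damping * s.velocity;
  const velocity = s.velocity + force * dt; // semi-implicit Euler step
  const value = s.value + velocity * dt;
  return { value, velocity };
}

// Drive it from data: retarget to 0.8 and integrate at 60fps.
let s: Spring = { value: 0, velocity: 0 };
for (let i = 0; i < 300; i++) s = stepSpring(s, 0.8, 1 / 60);
// s.value has converged close to 0.8
```

Because the target is just a number, anything continuous can drive it — confidence, progress, token rate — and interrupting mid-motion is free: a new target simply bends the trajectory, with no "exit animation" to cancel.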
Where Systems Stand Today
M3 Expressive on Compose is the most advanced here — spring physics, a Loading Indicator that morphs through seven shapes, and a FAB Menu driven by continuous checkedProgress values. That’s real continuity thinking. But it’s Kotlin-only. On the web, both Fluent and Material are still in discrete-state territory. Neither has adopted anything like Rive’s state machine model for fully data-driven blending. See the system pages for scores and the Rive tool page for what fills this gap.
What Pushes a Score Up
Discrete state switches (default → hover → active) keep you at a 2. Data-driven continuous parameters — spring physics, shape morphing, animation driven by confidence values rather than boolean flags — push toward a 7. If visual properties blend instead of switch, you’re getting there.
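The rubric's distinction fits in a few lines. Both functions below are invented for contrast — the continuous one mirrors the idea behind a 0..1 progress parameter like Compose's checkedProgress, but neither is taken from a real system:

```typescript
// Contrast sketch (illustrative, not from any real design system):
// the same visual property as a discrete switch vs. a continuous blend.

// Discrete: a boolean flag picks one of two fixed geometries.
// Every in-between value is unrepresentable. This is the "2".
function cornerRadiusSwitched(checked: boolean): number {
  return checked ? 28 : 8;
}

// Continuous: a 0..1 parameter blends geometry, so any intermediate
// value is a valid, renderable state. This is what pushes toward a 7.
function cornerRadiusBlended(progress: number): number {
  const t = Math.min(1, Math.max(0, progress)); // clamp to [0, 1]
  return 8 + (28 - 8) * t; // linear interpolation; could be eased
}
```

The switched version can only animate by faking the in-betweens; the blended version makes the in-betweens first-class, which is exactly what continuous data needs.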
Where This Is Going
This page is a working summary — the shift, the current state, the scoring rubric. The full deep dive expands each section with code-level evidence, specific component proposals, and mockups. Trust Expression is the first dimension getting the full treatment; the rest follow as they earn it.
If you’re building against this shift — or you see something the summary is missing — write back. The scorecard is debatable by design.