Canned → Adaptive Motion
Open any AI product. Watch the loading state. It’s a spinner. The same spinner, at the same speed, whether the model is about to respond in 200 milliseconds or 30 seconds. Whether it’s generating a one-line answer or a detailed analysis with code examples. The spinner carries zero information. It’s a lie of omission: it says “something is happening” when what you actually want to know is “what is happening, and how much longer?”
The Shift
That’s canned motion. An animation authored once, played identically every time, regardless of context. Design systems are full of it — fade-in at 200ms, slide-up at 300ms, spin indefinitely. These are fine for deterministic interfaces where the transition is decorative. They’re inadequate for AI interfaces where motion could actually carry meaning.
Adaptive motion responds to live data. The loading animation changes based on what the model is doing — shape-morphing that communicates “thinking about your question” differently from “generating code” differently from “searching external sources.” Streaming text reveals at a cadence that matches the token flow, not at a fixed speed. Confidence affects the visual treatment continuously, not as a state switch after the response is complete. The design artifact — the animation, the transition, the visual behavior — is the runtime artifact. You can’t hand it off in a screenshot because a screenshot captures one frame of something that only exists in motion.
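The cadence-matching idea can be made concrete. Here is a minimal sketch, assuming a hypothetical stream that reports token arrival times; the class name, callback shape, and default gap are illustrative, not any product’s API. It tracks observed token throughput with an exponential moving average and spreads each token’s characters across the expected inter-token gap, so a typewriter reveal neither lags the stream nor runs dry.

```typescript
// Sketch: adapt text-reveal cadence to the live token stream.
// AdaptiveCadence and its defaults are hypothetical; the point is that
// the reveal speed is computed from runtime data, not authored once.
class AdaptiveCadence {
  private emaIntervalMs: number | null = null;
  private lastArrival: number | null = null;

  constructor(private smoothing = 0.2) {}

  // Call on each token arrival; returns the per-character reveal delay (ms).
  onToken(arrivalMs: number, tokenLength: number): number {
    if (this.lastArrival !== null) {
      const interval = arrivalMs - this.lastArrival;
      // Exponential moving average smooths jittery inter-token intervals.
      this.emaIntervalMs =
        this.emaIntervalMs === null
          ? interval
          : this.smoothing * interval + (1 - this.smoothing) * this.emaIntervalMs;
    }
    this.lastArrival = arrivalMs;
    // Spread one token's characters across the expected inter-token gap.
    const gap = this.emaIntervalMs ?? 50; // arbitrary default before we have a signal
    return gap / Math.max(1, tokenLength);
  }
}
```

A slow model naturally produces a slow, deliberate reveal; a fast one streams at reading speed. The animation parameter is derived, not designed.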
Where Systems Stand Today
M3 Expressive’s Loading Indicator is the best example of this shift — a looping shape-morph sequence through seven Material shapes. It’s not a spinner; it gives processing a recognizable visual identity instead of a generic “wait” signal. Fluent’s motion system is token-driven and respects reduced motion, but every animation is still context-independent. On the tools side, Rive’s state machine model lets you drive animation from continuous inputs — exactly what adaptive motion requires. See the Rive tool page.
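The continuous-input pattern is simple to sketch. In Rive’s web runtime you would set a state machine’s number input each frame; the mapping itself is runtime-agnostic, so the sketch below keeps it pure. The function name is hypothetical; the smoothstep curve is one reasonable choice for easing a confidence signal so the visual treatment changes continuously rather than snapping at a threshold.

```typescript
// Sketch: map a live confidence signal (0..1) to an animation blend value.
// confidenceToBlend is an illustrative name, not a Rive API. The output
// would feed a state-machine number input (or a CSS custom property).
function confidenceToBlend(confidence: number): number {
  const c = Math.min(1, Math.max(0, confidence)); // clamp noisy signals
  return c * c * (3 - 2 * c); // smoothstep: eases in and out at the extremes
}
```

Because the mapping is continuous, a confidence of 0.4 looks visibly different from 0.6 — there is no hidden cliff where the interface flips from “unsure” to “sure,” which is exactly the state-switch behavior the paragraph above argues against.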
What Pushes a Score Up
A spinner that plays identically every time regardless of what’s happening underneath — that’s a 2. Motion that responds to live data — confidence driving visual treatment, streaming speed driving reveal cadence, loading that communicates meaning instead of just “wait” — that’s a 7. The design artifact needs to be the runtime artifact. If you can screenshot it and call it a spec, it’s not adaptive.
Where this is going. This page is a working summary — the shift, the current state, the scoring rubric. The full deep dive expands each section with code-level evidence, specific component proposals, and mockups. Trust Expression is the first dimension getting the full treatment; the rest follow as they earn it.
If you’re building against this shift — or you see something the summary is missing — write back. The scorecard is debatable by design.