Zero to Thirty-Three
My friend Joel Martinez submitted his docs site to the Lightning Scan and got a 0.
That score is now a 33.
Both numbers are right. That’s the whole story — but the gap between them is what I actually want to talk about.
Joel’s a former co-worker and a good engineer, not a random stranger stress-testing a product he found on the internet. When he told me the score was 0, I believed him immediately. And I knew something was wrong. Not with his site.
The URL he submitted was a GitHub Pages project site: wildernesslabs.github.io/Chloroplast/, which lives under a path on a platform subdomain, not at the root of its own domain. When he pasted it into the scan form, one bug did a lot of damage.
The scanner was apex-only. It ignored the path and extracted the registrable domain, but without the Public Suffix List’s private-domain section, github.io doesn’t count as a suffix; .io does. So the scanner collapsed wildernesslabs.github.io to github.io. Not the project’s subdomain. Not even a real site. The raw platform suffix.
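The post doesn’t say which library the scanner uses, but the failure is easy to reproduce with tldextract, one common Python choice for Public Suffix List lookups (an assumption for illustration):

```python
import tldextract

# Default extraction uses only the ICANN section of the Public Suffix
# List, so "github.io" looks like an ordinary registrable domain.
broken = tldextract.extract("wildernesslabs.github.io")
print(broken.registered_domain)  # github.io  <- the project is gone

# The PSL's private-domain section marks github.io itself as a suffix,
# keeping each project subdomain as its own registration boundary.
fixed = tldextract.TLDExtract(include_psl_private_domains=True)
print(fixed("wildernesslabs.github.io").registered_domain)
# wildernesslabs.github.io
```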
Then it fetched https://github.io/ and followed every redirect.
github.io → pages.github.com → docs.github.com/pages → docs.github.com/en/pages → docs.github.com.
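You can watch that chain yourself with nothing but requests, assuming the redirects haven’t changed since this was written:

```python
import requests

# Fetch the bare platform suffix and print every redirect hop.
r = requests.get("https://github.io/", timeout=10)
for hop in r.history:
    print(hop.status_code, hop.url)
print("landed at:", r.url)  # docs.github.com, four hops later
```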
Joel submitted his docs site. We scored GitHub’s blank documentation home. And it scored a 0.
That’s not a bad score. That’s the correct score for a question nobody asked, four redirects away from the one they actually did.
This is a structural thing, not a bug in the traditional sense. GitHub Pages hosts hundreds of thousands of project sites on *.github.io. From a pure hostname standpoint, two completely unrelated projects share the same registrable domain. Rate-limit by hostname naively and you’re rate-limiting per platform. Scan by hostname naively and you’re scoring whatever github.io serves at the root, which redirects all the way to GitHub’s own docs. Nothing useful.
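Concretely, a naive hostname-keyed quota puts every project on the platform in one bucket. A hypothetical key function makes the collapse visible:

```python
import tldextract

def naive_quota_key(hostname: str) -> str:
    # Without the PSL's private domains, every *.github.io project
    # lands in the same bucket, and they all throttle each other.
    return tldextract.extract(hostname).registered_domain

print(naive_quota_key("wildernesslabs.github.io"))  # github.io
print(naive_quota_key("someone-else.github.io"))    # github.io
```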
Fixing it meant thinking about what “scope” actually means for subpath URLs.
We now accept the full submitted URL — path and all. When a developer pastes their docs link, we scan the page they actually care about. The report shows what they submitted, flags if the fetch landed somewhere different, and scores the right target. Per-project rate limits use the Public Suffix List to key by registrable domain: wildernesslabs.github.io and someone-else.github.io are separate registration boundaries even though they share a suffix, and they get separate quotas. (We also added a layer of SSRF protection at the scanner — resolve the IP once, validate it’s public, pin the connection to that address for the fetch. No second DNS lookup, which closes a whole class of rebinding attacks. Worth doing; not especially exciting to read about.)
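For the curious, the pinning idea fits in a few lines of standard-library Python. This is a sketch of the technique, not the scanner’s actual code:

```python
import ipaddress
import socket
import ssl

def pinned_fetch(host: str, path: str = "/") -> bytes:
    # Resolve exactly once. The connection below reuses this address,
    # so there is no second DNS lookup for an attacker to rebind.
    ip = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)[0][4][0]
    if not ipaddress.ip_address(ip).is_global:
        raise ValueError(f"refusing non-public address {ip}")
    ctx = ssl.create_default_context()
    with socket.create_connection((ip, 443), timeout=10) as raw:
        # TLS still validates against the original hostname via SNI,
        # even though the TCP connection is pinned to the resolved IP.
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            request = (
                f"GET {path} HTTP/1.1\r\n"
                f"Host: {host}\r\nConnection: close\r\n\r\n"
            )
            tls.sendall(request.encode())
            chunks = []
            while chunk := tls.recv(4096):
                chunks.append(chunk)
    return b"".join(chunks)
```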
The result: same URL, submitted the same way, now scores a 33.
Not a great score — but an honest one. It means there’s real work to do on that docs site. The rubric found things. That’s what the scan is supposed to do.
We shipped one more thing while we were in the same area.
The Lightning Scan used to hold an open HTTP connection from your browser until Python finished — anywhere from 5 to 60 seconds. If the browser timed out before the scan completed, you got an error. If the scan completed while your browser was already done waiting, you still got an error. (Some sites just take a long time. GitHub Pages project sites, it turns out, are often slower than apex domains to respond — which is a gift to anyone trying to expose a timeout problem.)
A 60-second spinner is indistinguishable from a broken product. That’s not a metaphor.
The scan is now async. You submit a URL, get back a scan ID in under a second, and the page polls. Phase pills track what the scanner is actually doing (Fetching → Discovering → Scoring) with non-deterministic timing, so the thing feels alive rather than choreographed. You’re watching real work, not progress theater.
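From the client’s point of view, the new flow is submit-then-poll. A minimal sketch, with hypothetical endpoint paths and field names (the post doesn’t specify the API):

```python
import time
import requests

BASE = "https://example.com/api"  # hypothetical; not the real endpoint

def run_scan(url: str) -> dict:
    # Submission returns a scan ID in under a second instead of
    # holding the connection open for the full 5-60 second scan.
    scan_id = requests.post(f"{BASE}/scans", json={"url": url}, timeout=5).json()["id"]
    while True:
        status = requests.get(f"{BASE}/scans/{scan_id}", timeout=5).json()
        if status["phase"] == "done":  # phases: fetching, discovering, scoring, done
            return status
        time.sleep(1)  # the page polls; so do we
```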
The score ring does double duty. During the scan it fills as a progress indicator — not a linear march but something more restless, pulling ahead and pausing, which is honest because that’s how the underlying work actually moves. Then the scan completes and the ring animates to the score.
That’s the moment. If your site scores in the 30s, the ring built toward full the whole scan. Now it’s going the other direction.
I can almost hear it — the oohhh when someone watches it slide back. That little deflation is the product working correctly. The ring was never telling you how good your site is; it was telling you how far along the scan was. The score is a different question with a different answer. Most people conflate them. The animation makes you feel the distinction instead of just reading it.
A score of 33 with something to look at while you wait is a better product than a score of 33 after 60 seconds of nothing and a silent “is it frozen?”
Here’s the pattern, named plainly: user signal is a design instrument.
Joel’s 0 triggered a diagnostic loop that surfaced a scope gap we knew existed but had been treating as an edge case. The fix shipped in a day. Along the way, we closed a security gap, added per-project rate limiting, and replaced a static spinner with a status read. None of that was on a roadmap. All of it was the right work.
The clean version of this story is: we shipped scope support. The real version is: a pal’s 0 told us something we didn’t know, we listened, and four things got better.
That’s what feedback loops are actually for — not confirmation that what you built works, but information about the gap between what you thought you built and what someone actually needed.
The score went from 0 to 33.
The gap closed a little.
The recording above shows 30, not 33 — a rendering error on the site dropped the score between the original test and tonight. The error’s gone now, and the score will recover. That’s the other thing live scores are good for: they catch regressions you didn’t know you shipped.