Diagnostics Portal: I built the product before I wrote a single spec. Here's what that unlocked.
Note: the client in this case study has been anonymized. The product, the process, and the outcomes are described accurately.
The brief arrived as a list of features and a few rough screenshots stitched together. No user flows, no edge cases, no clarity on what the system actually needed to do behind the scenes. Standard for an early-stage client with a real problem and a limited picture of what solving it would require.
The client is a leading European consumer health brand launching a diagnostics product. Customers buy a lab test, get a blood draw with a partnered practitioner, and receive a personalized result report with supplement recommendations tailored to their biomarkers. The product spans three actors: patient, medical partner, and an internal back office team. It involves a pipeline that runs from a lab PDF all the way to a published patient report. The client knew what they wanted the end state to look like. They hadn't fully worked through what needed to happen in between.
The typical response to a brief like this is discovery interviews, PRDs, wireframes, then an alignment session before anyone writes a line of code. I did something different.
Building before documenting
Philipp and I sat down, looked at the brief, and decided to vibe code the project. Over two days I built a working prototype of the entire application: the medical partner submission portal, the back office case management interface, an AI biomarker extraction flow, and the patient-facing result states. Not a mockup. A functional product with real states, real data flow, and an actual AI agent extracting biomarkers from a lab PDF.
The reasoning was straightforward. A brief this rough, for a product this operationally complex, would generate a lot of written speculation. Building it would generate answers. Friction you can't see in a spec becomes obvious the moment you try to build something that actually works.
What the prototype contained:
- a doctor-facing submission form where the practitioner inputs patient details and uploads the lab PDF
- a back office dashboard showing all submitted cases and their current status
- an AI agent that automatically extracts biomarkers from the PDF and populates the back office view with values, units, and reference ranges
- a recommendation layer where the internal team adds product suggestions with notes explaining the reasoning
- a case management workflow moving patients through states (blood draw, analysis, results ready)
- a customer messaging thread embedded in the back office
- a patient-facing result page showing flagged biomarkers, in-range values, and curated product recommendations with explanations
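The case management workflow can be sketched as a small state machine. This is a minimal illustration with hypothetical names, not the prototype's actual code; the real system would also track who advanced the case and when:

```python
from enum import Enum

class CaseState(Enum):
    BLOOD_DRAW = "blood_draw"
    ANALYSIS = "analysis"
    RESULTS_READY = "results_ready"

# Allowed forward transitions; a case advances one step at a time.
TRANSITIONS = {
    CaseState.BLOOD_DRAW: {CaseState.ANALYSIS},
    CaseState.ANALYSIS: {CaseState.RESULTS_READY},
    CaseState.RESULTS_READY: set(),
}

def advance(current: CaseState, target: CaseState) -> CaseState:
    """Move a case to the next state, rejecting invalid jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.value} to {target.value}")
    return target
```

Encoding the transitions explicitly is what makes dashboard states trustworthy: a case can never silently skip from blood draw to results ready.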
I used my own real blood test results for the demo. Nothing staged.
What building it revealed
The original scope hadn't mentioned the review overhead problem. When I built the review flow, it became immediately visible: biomarker reference ranges vary by age, gender, and individual baseline. The standard lab range tells you whether you're outside normal bounds. This product goes further, incorporating optimum ranges, a layer above standard reference that represents the actual target zone for a given patient profile. That's clinically meaningful and operationally complex. Without automation, the back office team would have to manually interpret every lab PDF, map each biomarker against the appropriate optimum range for that patient, and verify accuracy before publishing results.
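The layered-range idea can be made concrete with a short sketch. All names here are hypothetical assumptions; the point is that a value is checked against the lab's standard range first, then against the patient-specific optimum range layered on top of it:

```python
from dataclasses import dataclass

@dataclass
class Range:
    low: float
    high: float

    def contains(self, value: float) -> bool:
        return self.low <= value <= self.high

def classify(value: float, standard: Range, optimum: Range) -> str:
    """Classify a biomarker against the standard lab range, then
    the optimum range defined for this patient profile."""
    if not standard.contains(value):
        return "out_of_range"       # outside normal lab bounds
    if not optimum.contains(value):
        return "outside_optimum"    # normal, but not in the target zone
    return "optimal"

# Example: a value that is normal by lab standards but below the
# optimum range for this patient profile (illustrative numbers).
classify(35.0, standard=Range(20.0, 250.0), optimum=Range(50.0, 150.0))
# → "outside_optimum"
```

The "outside_optimum" middle state is exactly the layer a standard lab report doesn't give you, and it's what the back office team would otherwise derive by hand for every biomarker.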
At low volume that's manageable. At any real scale it's a bottleneck that would break the product.
So I built an AI biomarker extractor directly into the prototype. The agent reads the lab PDF, identifies each biomarker, pulls the value, unit, and reference range, and populates the back office automatically. The case manager's job becomes review and correction rather than manual extraction. The difference in operational burden is significant.
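The review-and-correct handoff can be sketched as a small data structure. Names and fields here are illustrative assumptions, not the prototype's schema: the AI produces a record, and the case manager either accepts it or overrides the value before it counts as reviewed:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ExtractedBiomarker:
    name: str
    value: float
    unit: str
    ref_low: float
    ref_high: float
    reviewed: bool = False   # flipped once a case manager confirms it

def confirm(b, *, value=None):
    """Case-manager step: accept the AI extraction as-is,
    or correct the value, and mark the record as reviewed."""
    return replace(b, value=b.value if value is None else value, reviewed=True)
```

Keeping records immutable until confirmed makes the division of labor explicit: the AI populates, the human signs off.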
What the client hadn't seen coming
When we presented the prototype, we didn't position it as a finished product. We walked them through it as a concrete version of their brief: here's what your scope looks like when it actually runs, and here's a problem we found before we started building it properly.
The review overhead problem was new to them. They hadn't thought through the AI integration layer, and had been planning to handle clinical reasoning manually using a rough internal workflow. The prototype showed them what a structured pipeline could look like:
- extract biomarkers from the lab PDF
- build a de-identified patient profile combining survey data and biomarker results
- define optimum ranges for that specific patient based on age, gender, and baseline
- send the full payload to a specialized medical AI for clinical reasoning
- use the output to generate supplement recommendations
- produce a draft patient report the team reviews and publishes
Separating these steps makes the system auditable, reduces error, and keeps sensitive patient data properly handled through a de-identification step before anything reaches an external AI. That architecture came out of building the prototype and then explaining it, not from a discovery process that preceded it.
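The separated steps can be sketched as a thin pipeline. Everything here is hypothetical: each step is passed in as a function so it can be swapped and tested in isolation, and the de-identification step runs before any data reaches the external AI:

```python
def deidentify(survey: dict, biomarkers: list) -> dict:
    """Keep only non-identifying profile fields before anything
    leaves the system for external clinical reasoning."""
    allowed = {"age", "gender", "baseline"}
    return {
        "profile": {k: v for k, v in survey.items() if k in allowed},
        "biomarkers": biomarkers,
    }

def run_pipeline(lab_pdf, survey, extract, optimum_for, medical_ai, recommend):
    """Hypothetical end-to-end sketch of the pipeline above."""
    biomarkers = extract(lab_pdf)                         # 1. parse lab PDF
    payload = deidentify(survey, biomarkers)              # 2. de-identified profile
    payload["optimum"] = optimum_for(payload["profile"])  # 3. per-patient ranges
    reasoning = medical_ai(payload)                       # 4. external clinical reasoning
    recs = recommend(reasoning)                           # 5. supplement recommendations
    return {"biomarkers": biomarkers,                     # 6. draft for team review
            "recommendations": recs,
            "status": "draft"}
```

Returning a draft rather than a published report keeps the human review step structural: nothing reaches the patient without the team signing off.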
We walked away with expanded scope, additional budget, and an AI integration plan that hadn't existed when they sent the brief. The product we're now building is meaningfully better than what the original brief would have produced.
Why this approach worked here
Not every project warrants starting with a working prototype instead of a spec. Three conditions made it the right call here: the brief was rough enough that writing a PRD would have meant speculating about the system rather than understanding it; the product was operationally complex in ways that only become visible when you try to build the actual flows; and the surface area was narrow enough that two days of focused building produced something worth showing.
The prototype served as a discovery artifact. It answered questions the brief hadn't asked, made the complexity visible before it became expensive, and gave the client something concrete to react to rather than something abstract to approve.
