# PRINCIPLES
A living document: the principles this product, and the business behind it, will uphold. We refer to it throughout development. When a decision is hard, this document is the tiebreaker.
## 1. Evidence is sacred
Every claim in every output ties to a participant, a verbatim quote, a transcript offset, and (where applicable) a video timestamp. No exceptions. If we cannot trace it, we do not say it. The chain of evidence — finding → quote → participant → moment in time — is the spine of the product.
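The chain of evidence can be made concrete as a data structure. A minimal sketch, assuming illustrative field names and types (this is not the product's actual schema):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Evidence:
    """One link in the chain: quote -> participant -> moment in time."""
    participant_id: str                         # hypothetical identifier scheme
    verbatim_quote: str
    transcript_offset: int                      # character offset into the transcript
    video_timestamp_s: Optional[float] = None   # only where applicable


@dataclass(frozen=True)
class Finding:
    claim: str
    evidence: tuple[Evidence, ...]

    def is_traceable(self) -> bool:
        """If we cannot trace it, we do not say it."""
        return len(self.evidence) > 0 and all(
            e.participant_id and e.verbatim_quote for e in self.evidence
        )
```

A finding with an empty evidence tuple fails `is_traceable()` and should never reach an output.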
## 2. AI proposes, the researcher disposes
Models draft. The human researcher decides. Stage gates between automated passes are non-negotiable. We do not build flows that allow an end user to skip review and ship. If the workflow tempts a user to rubber-stamp AI output, the workflow is wrong.
## 3. Don't infer. Don't assume.
When the data does not say it, the report does not say it. We surface what we don't know with the same clarity as what we do. An under-evidenced research question is flagged, not finessed. "We cannot answer this with this dataset" is a legitimate, valuable finding.
## 4. Counter-evidence is a feature
Disconfirming voices are surfaced by default and rendered alongside confirming ones. Every finding card shows both sides. A finding that has not been tested against the contrary view is not a finding — it is a hypothesis.
## 5. Match the language to the evidence
"Most", "several", "a few", "one participant" — these words are governed, not stylistic. The tool enforces the mapping between numerical evidence and the language used to describe it. Over-claiming is a defect, not a flourish.
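Governed language means a single function owns the mapping from counts to words. A sketch with illustrative thresholds (placeholders, not the product's actual rules):

```python
def evidence_language(supporting: int, total: int) -> str:
    """Map a support count to governed quantifier language.

    Thresholds below are illustrative assumptions; the point is that
    prose never chooses these words freely.
    """
    if total == 0:
        raise ValueError("no participants: nothing may be claimed")
    if supporting == 0:
        return "no participants"
    if supporting == 1:
        return "one participant"
    if supporting / total > 0.5:
        return "most participants"
    if supporting <= 3:
        return "a few participants"
    return "several participants"
```

Any draft wording that disagrees with this mapping is an over-claim and treated as a defect.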
## 6. Tier, don't threshold
Findings are graded by strength — *dominant pattern*, *recurring pattern*, *signal*, *outlier* — and scored independently on prevalence and intensity. We do not pretend a count is a verdict. A high-intensity signal from one participant can be more important than a low-intensity pattern across many; the tool must be honest about that.
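Scoring prevalence and intensity independently, then grading, could look like the sketch below. The cutoffs are illustrative assumptions, not the product's rubric; what matters is that a high-intensity signal is never silently demoted to an outlier by a count alone:

```python
def tier(prevalence: float, intensity: float) -> str:
    """Grade a finding from two independent scores.

    prevalence: share of participants exhibiting it (0.0-1.0)
    intensity:  strength of expression (0.0-1.0)
    Cutoffs are hypothetical placeholders.
    """
    if prevalence >= 0.6 and intensity >= 0.6:
        return "dominant pattern"
    if prevalence >= 0.3:
        return "recurring pattern"
    if intensity >= 0.6:
        # one participant, strongly expressed: still worth surfacing
        return "signal"
    return "outlier"
```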
## 7. Defensibility over polish
A report that survives client interrogation beats a report that looks pretty. When polish and rigor compete, rigor wins. The artefact must hold up when a sceptical product manager opens it on a Tuesday morning and pushes back.
## 8. Speed is a byproduct, not the goal
Speed comes from structure and rigor done well — not from cutting corners. We never trade quality for turnaround. If the timeline forces a compromise, we narrow the scope rather than soften the standard.
## 9. Methodology is the product
The tooling exists to make best-practice qualitative research practical at AI speed. The methodology defines us; the tools serve it. When in doubt, default to the more rigorous option, even if it costs more compute or more user clicks.
## 10. The researcher is the protagonist
The product augments expert judgment; it does not replace it. We do not build features that let an inexpert user produce a report they could not defend. Power is reserved for those who know how to wield it; the tool's role is to remove drudgery, not skill.
## 11. Privacy is non-negotiable
Participants trusted us with their stories. We treat their data with care, with transparency, and with the minimum processing necessary. We do not log, cache, or retain interview content beyond what the workflow requires. We make consent posture visible to the researcher at every step.
## 12. Build the slice that delivers
We ship the smallest thing that produces a real client deliverable, then iterate. We do not pre-build for futures we have not tested. Every feature earns its place by being needed by a current project, not imagined for a future one.
## 13. The first user matters
The MVP is built for one researcher's real client engagement. That constraint is a gift: it forces every feature to be load-bearing. If a feature is not actively making the Bupa deliverable better, it does not belong in MVP-1.
## 14. Operate in the open with yourself
Every AI call, every model choice, every prompt is logged and inspectable. The researcher should always be able to ask "why did the tool say that?" and get a real answer. Black boxes are forbidden — including from ourselves.
## 15. Cost is a quality signal
Cheap models for routine work, premium models for synthesis. We make model routing a deliberate, observable design choice — never a default. The user should be able to see what we spent and on what.
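Deliberate, observable routing can be as simple as an explicit table plus a cost estimate attached to every call. A sketch; the task names, model names, and prices are hypothetical:

```python
# Hypothetical routing table: cheap models for routine passes,
# a premium model reserved for synthesis. All values illustrative.
ROUTES = {
    "quote_extraction": {"model": "small-model",   "usd_per_1k_tokens": 0.0005},
    "synthesis":        {"model": "premium-model", "usd_per_1k_tokens": 0.015},
}


def route(task: str, tokens: int) -> dict:
    """Return the model choice and a visible cost estimate for a task."""
    r = ROUTES[task]  # unknown tasks fail loudly rather than defaulting
    return {
        "task": task,
        "model": r["model"],
        "estimated_cost_usd": round(tokens / 1000 * r["usd_per_1k_tokens"], 4),
    }
```

Because routing is a lookup, not a default, every spend is attributable to a named decision the user can inspect.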
## 16. The deck and the report are not the work
The work is the analysis. The deck and the report are renderings. We design the underlying data structures so they can be re-rendered in any format, in any frame, for any audience — without redoing the analysis.
## 17. We do not market to the inexpert
When the time comes to broaden access, we will sell to people who can hold themselves to these principles. Not to those looking for a shortcut around the craft.
This is a living document. Last updated 2026-05-03. The product is built on these principles — when implementation conflicts with them, the principles win.