What part of the work can the agents take on? And what does running those agents for a quarter teach the operator about where the business should go next?
Method and model are the answer to both. The same agents that handle the repetitive work also see the data. Every order, every shift, every margin movement. That stream feeds an economic framework that names where the business can grow, what costs can come out, and which productivity move pays the most. Automation that prices the trade-offs.
Every engagement passes through the same five stages. Smaller engagements compress them. Larger engagements iterate within them. The discipline of the sequence is what makes the answer defensible.
Frame the question
Hours, not weeks
A precise statement of what is being measured and what counts as an answer. The decision the answer is meant to inform is named explicitly. Vague questions are sharpened before any data is touched.
The choice of design
What outcome would have occurred without the decision? The answer to that question selects the method. Difference-in-differences, event study, synthetic control, instrumental variables, or a structural model. The choice is documented before estimation.
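The simplest of these designs can be sketched in a few lines. A minimal two-period difference-in-differences, with illustrative numbers rather than client data: the treated group's change over the window, minus the control group's change over the same window.

```python
from statistics import mean

# Hypothetical quarterly revenue per store (illustrative, not client data).
# Treated stores adopted the change between the two periods; controls did not.
treated_pre  = [100.0, 104.0, 98.0]
treated_post = [118.0, 121.0, 113.0]
control_pre  = [99.0, 101.0, 97.0]
control_post = [106.0, 108.0, 103.0]

def diff_in_diff(t_pre, t_post, c_pre, c_post):
    """2x2 difference-in-differences: the treated group's change
    minus the control group's change over the same window."""
    return (mean(t_post) - mean(t_pre)) - (mean(c_post) - mean(c_pre))

effect = diff_in_diff(treated_pre, treated_post, control_pre, control_post)
```

The control group's change stands in for the outcome that would have occurred without the decision; the difference of differences is the estimated effect.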
Data work
Sources get authenticated, normalized, and joined into an analysis-ready data model. Every cleaning step is recorded. Treatment, control, timing, and outcomes are defined in code, not in conversation.
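A minimal sketch of that principle, with hypothetical records and names: treatment, control, timing, and the outcome are each declared once, in code, and every downstream step reads the same definitions.

```python
from datetime import date

# Hypothetical joined records after cleaning (names are illustrative).
orders = [
    {"store": "A", "date": date(2024, 3, 10), "revenue": 420.0},
    {"store": "A", "date": date(2024, 7, 2),  "revenue": 510.0},
    {"store": "B", "date": date(2024, 3, 15), "revenue": 390.0},
    {"store": "B", "date": date(2024, 7, 9),  "revenue": 405.0},
]

# Definitions live in code, not in conversation: which units are treated,
# when treatment began, and what counts as the outcome.
TREATED_STORES = {"A"}
TREATMENT_START = date(2024, 6, 1)

def label(row):
    return {
        **row,
        "treated": row["store"] in TREATED_STORES,
        "post": row["date"] >= TREATMENT_START,
        "outcome": row["revenue"],  # outcome defined once, reused everywhere
    }

panel = [label(r) for r in orders]
```

Changing a definition here changes it everywhere at once, which is what makes the cleaning record auditable.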
The empirical work
Point estimate, confidence interval, and sensitivity analyses run together. Robustness checks against alternate specifications, alternate samples, and alternate identifying assumptions are part of the answer, not appendix material.
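One way the interval and the estimate run together can be sketched as a percentile bootstrap, here on illustrative per-unit effect estimates with only the standard library:

```python
import random
from statistics import mean

random.seed(42)  # fixed seed: the interval reproduces exactly on re-run

# Hypothetical per-unit effect estimates (illustrative numbers).
effects = [8.2, 11.5, 9.7, 10.4, 12.1, 7.9, 10.8, 9.3, 11.0, 8.8]

def bootstrap_ci(data, stat=mean, reps=5000, alpha=0.05):
    """Percentile bootstrap: resample with replacement, compute the
    statistic each time, and read off the interval endpoints."""
    draws = sorted(
        stat(random.choices(data, k=len(data))) for _ in range(reps)
    )
    lo = draws[int(reps * alpha / 2)]
    hi = draws[int(reps * (1 - alpha / 2)) - 1]
    return stat(data), (lo, hi)

point, (lo, hi) = bootstrap_ci(effects)
```

The same loop reruns under alternate samples or specifications by swapping `data` or `stat`, which is how robustness checks stay part of the answer rather than appendix material.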
The deliverable
A short memo, an updated dashboard tile, and a one-paragraph executive read of what the number means for the decision on the table. The technical appendix sits underneath, available to anyone who wants to verify it.
The right method depends on the structure of the question and the data available to answer it. The toolkit below is the working set. Method choice is documented at Stage II and revisited if the data refuses what was assumed.
Marginal & opportunity-cost analysis · Structural modeling · Instrumental variables · Regression discontinuity · Bootstrap inference · Bayesian updating
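Bayesian updating has a closed form in its simplest case. A hypothetical conversion-rate example, assuming a Beta prior over a binomial outcome:

```python
# Beta-binomial updating: a Beta(a, b) prior over a rate, updated with
# observed successes and failures in closed form.
def update_beta(a, b, successes, failures):
    return a + successes, b + failures

def beta_mean(a, b):
    return a / (a + b)

# Hypothetical: start from a weak Beta(2, 2) prior, observe 30 conversions
# in 100 trials, and read off the posterior mean.
a, b = update_beta(2, 2, successes=30, failures=70)
posterior_mean = beta_mean(a, b)  # 32 / 104
```

Each new batch of data repeats the same two-line update, so the estimate sharpens as the agents' data stream accumulates.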
A number that cannot be defended under questioning is not an answer. It is a guess in formal clothing. Every measurement output produced here ships with the inputs, the design choice, the alternative specifications, and the conditions under which the estimate would change sign or magnitude. If the question is contested, the work is built to be examined.
The standard is the one used in regulatory submissions: a colleague trained in the same methods should be able to reproduce the result from the documentation alone.
Analysis runs in R, Python, and Stata, chosen per task. Dashboards and operating systems are built on production-grade web stacks, version-controlled in Git, and documented for handoff. Estimation pipelines are deterministic. A fixed seed, a recorded data snapshot, and a script that reproduces the result on demand.
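A minimal sketch of what deterministic means in practice, with hypothetical names: the run records the snapshot's hash and the seed, and two invocations against the same inputs return identical results.

```python
import hashlib
import json
import random
import tempfile

def run_pipeline(snapshot_path, seed):
    """Deterministic estimation run: hash the data snapshot, seed a local
    RNG, and return everything needed to reproduce the number on demand."""
    with open(snapshot_path, "rb") as f:
        raw = f.read()
    rng = random.Random(seed)            # local RNG, no global state
    data = json.loads(raw)
    estimate = sum(data) / len(data)     # stand-in for the real estimator
    return {
        "seed": seed,
        "snapshot_sha256": hashlib.sha256(raw).hexdigest(),
        "estimate": estimate,
        "first_draw": rng.random(),      # any resampling reads rng only
    }

# Two runs on the same snapshot and seed produce identical output.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump([1.0, 2.0, 3.0], f)
    path = f.name
assert run_pipeline(path, 7) == run_pipeline(path, 7)
```

Because the output carries the snapshot hash and the seed, a re-run that disagrees points immediately at changed data or a changed script, never at hidden randomness.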
Where the engagement format permits, replication packages are delivered alongside the final memo. The client team is left with the ability to re-run the analysis when the data updates, without depending on the firm to do it for them.
See how the methodology is composed into the three engagement formats, or read the capabilities each one supports.