Use difference-in-differences when you need to measure the impact of a specific event or policy change. By comparing the trends of a treatment group against a control group before and after an intervention, you can isolate the effect of the policy from other external factors changing over time.
Understanding the Method
Difference-in-differences works by calculating the difference in outcomes before and after an intervention for the treatment group, and subtracting the same difference observed in a control group. This approach accounts for underlying trends that affect both groups similarly.
In an OLS regression framework, the method estimates the coefficient on the interaction between group assignment and time period. That interaction coefficient is the difference-in-differences estimate of the policy's impact, allowing you to move beyond simple before-and-after comparisons.
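The interaction logic above can be sketched in a few lines. This is a minimal illustration on simulated data, not Lattice's engine: the interaction coefficient from an OLS fit on the saturated 2×2 design equals the difference of the cell-mean differences.

```python
import numpy as np

# Hypothetical 2x2 data: outcome y with treated and post indicators.
# The coefficient on treated * post is the DiD estimate.
rng = np.random.default_rng(0)
n = 400
treated = rng.integers(0, 2, n)          # group assignment (0 = control)
post = rng.integers(0, 2, n)             # time period (0 = pre)
true_effect = 3.0
y = (1.0 + 2.0 * treated + 0.5 * post
     + true_effect * treated * post + rng.normal(0, 1, n))

# Design matrix: intercept, group, period, and their interaction.
X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
did_estimate = beta[3]

# Equivalently, the difference of cell-mean differences.
m = lambda g, t: y[(treated == g) & (post == t)].mean()
did_from_means = (m(1, 1) - m(1, 0)) - (m(0, 1) - m(0, 0))

print(f"OLS interaction: {did_estimate:.2f}, "
      f"cell means: {did_from_means:.2f}")
```

Both routes give the same number, which is why the cell means reported alongside the regression output are a useful cross-check.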
Verifying Your Results
Causal inference is only as strong as the assumptions underlying your data. If you provide a time variable, Lattice automatically runs a post-estimation check for parallel trends.
If the test indicates that trends were not parallel prior to the intervention, the platform will flag this concern. In such cases, the reported impact may be inaccurate, and we suggest reviewing your choice of control group or checking for confounding variables.
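One simple way to reason about such a check is to compare pre-intervention slopes across groups. The sketch below is illustrative only (the threshold and data are hypothetical, and it is not the test Lattice runs): it fits a least-squares slope to each group's pre-period means and flags a large gap.

```python
import numpy as np

rng = np.random.default_rng(1)
periods = np.array([0, 1, 2])            # three pre-intervention periods

def slope(times, values):
    """Least-squares slope of values against times."""
    return np.polyfit(times, values, 1)[0]

# Simulated pre-period group means; the treatment group is given a
# deliberately steeper trend so the check fires.
control_means = 5.0 + 0.2 * periods + rng.normal(0, 0.05, 3)
treated_means = 5.5 + 0.8 * periods + rng.normal(0, 0.05, 3)

gap = slope(periods, treated_means) - slope(periods, control_means)
if abs(gap) > 0.1:   # illustrative threshold, not a formal test
    print(f"Pre-trend slope gap {gap:.2f}: parallel trends look doubtful")
```

When a gap like this appears, the difference-in-differences subtraction no longer removes the underlying trend, which is exactly the situation the platform flags.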
Interpreting Outputs
The analysis returns the ATT along with confidence intervals and p-values that indicate the precision of the estimate. You will also see the cell means for each group in each of the two time periods, which helps you visualize how parallel (or non-parallel) the data actually are.
If your observations are clustered—for example, if individuals are grouped within schools or regions—you can use cluster-robust standard errors so that correlation within those groups does not make your conclusions look more precise than they really are.
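To see why clustering matters, here is a cluster-bootstrap sketch on hypothetical school data. This is one way to obtain cluster-robust uncertainty, not necessarily how Lattice's engine computes it: whole schools are resampled with replacement, so shared within-school shocks are preserved in each replicate.

```python
import numpy as np

rng = np.random.default_rng(2)
n_schools, per_school = 20, 30
school = np.repeat(np.arange(n_schools), per_school)
treated = (school < n_schools // 2).astype(float)   # treatment by school
post = np.tile(np.repeat([0.0, 1.0], per_school // 2), n_schools)
# Shared school-by-period shocks create within-cluster correlation.
shock = rng.normal(0.0, 1.0, (n_schools, 2))[school, post.astype(int)]
y = (1 + treated + 0.5 * post + 2.0 * treated * post
     + shock + rng.normal(0, 1, school.size))

def did(idx):
    """Interaction (DiD) coefficient estimated on the rows in idx."""
    X = np.column_stack([np.ones(idx.size), treated[idx], post[idx],
                         treated[idx] * post[idx]])
    return np.linalg.lstsq(X, y[idx], rcond=None)[0][3]

# Resample whole schools, not individual students.
reps = [did(np.concatenate([np.flatnonzero(school == s)
                            for s in rng.integers(0, n_schools, n_schools)]))
        for _ in range(200)]
cluster_se = float(np.std(reps))
print(f"DiD estimate {did(np.arange(school.size)):.2f}, "
      f"cluster-bootstrap SE {cluster_se:.2f}")
```

A naive standard error that treats students as independent would understate this uncertainty, which is the skew the cluster-robust option guards against.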
1 · Intent → method
An LLM picks causal_did from a fixed catalog.
2 · Method → numbers
Deterministic Python engine runs the math. Same input → same output.
3 · Numbers → plain language
A second LLM translates the result into your domain’s vocabulary.
What does the parallel trends test tell me?
The parallel trends test checks if the treatment and control groups were moving in the same direction before the intervention. If this assumption is violated, the difference-in-differences result may be biased, and you should be cautious about interpreting the effect as purely causal.
Why does Lattice report an ATT?
The ATT (Average Treatment Effect on the Treated) is the primary output of this method. It represents the estimated change in your outcome attributable specifically to the intervention for the group that actually received it.
Tool input schema
Schema for causal_did not exported yet (run pnpm export:registry).