One-way ANOVA tests whether the means of three or more independent groups differ on a continuous outcome. Lattice picks the right post-hoc test, checks variance equality, and returns numbers you can audit later.
When you would reach for one-way ANOVA
You have three or more groups and a single continuous response. The classic question is "do these groups differ on average?" — for example, comparing yield across three suppliers, conversion across four landing pages, or HbA1c reduction across three drug arms. The test does not tell you which pair of groups differs; it answers the prior question of whether any difference exists at all.
If you have only two groups, use a two-sample t-test instead. If your response is ordinal or strongly non-normal, use the Kruskal-Wallis non-parametric alternative.
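Both alternatives are a one-liner in scipy; the group data below is invented purely for illustration:

```python
from scipy import stats

# Invented example data: two comparable groups plus one clearly higher group
a = [5.1, 4.9, 5.3, 5.0]
b = [5.2, 5.0, 5.4, 4.8]
c = [7.9, 8.2, 8.0, 8.1]

t_stat, t_p = stats.ttest_ind(a, b)   # only two groups: two-sample t-test
h_stat, h_p = stats.kruskal(a, b, c)  # ordinal / non-normal: Kruskal-Wallis
```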
What Lattice computes
Behind the scenes, the deterministic engine runs scipy's f_oneway, then computes the effect size (eta squared), confidence intervals, and the residual mean square. Lattice runs Levene's test for variance equality before ANOVA — if it rejects, the engine switches to Welch's ANOVA and notes the substitution explicitly in the result.
The output object contains: statistic (F), p_value, df_between, df_within, eta_squared, levene_p, welch_used, and a groups table with each group's count, mean, and standard deviation. Every value carries a trace_id you can replay months later to get the same numbers.
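The computation can be sketched with scipy. This is an illustrative reimplementation, not Lattice's actual engine; the dictionary keys simply mirror the output fields listed above (the Welch substitution itself is elided — the flag only records whether the fallback would trigger):

```python
import numpy as np
from scipy import stats

def run_anova_sketch(groups, alpha=0.05):
    """Hypothetical sketch of the engine's ANOVA path."""
    levene_stat, levene_p = stats.levene(*groups)   # variance-equality check first
    f_stat, p_value = stats.f_oneway(*groups)

    # Effect size: eta squared = SS_between / SS_total
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = ((all_vals - grand_mean) ** 2).sum()

    k, n = len(groups), len(all_vals)
    return {
        "statistic": f_stat,
        "p_value": p_value,
        "df_between": k - 1,
        "df_within": n - k,
        "eta_squared": ss_between / ss_total,
        "levene_p": levene_p,
        "welch_used": levene_p < alpha,  # real engine switches to Welch here
    }

result = run_anova_sketch([[5.1, 4.9, 5.3], [5.8, 6.1, 5.9], [4.2, 4.5, 4.4]])
```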
1 · Intent → method
An LLM picks svt_run_anova from a fixed catalog.
2 · Method → numbers
Deterministic Python engine runs the math. Same input → same output.
3 · Numbers → plain language
A second LLM translates the result into your domain's vocabulary.
How to phrase your request
You do not need to name the test. Lattice's planner will pick svt_run_anova from prompts like:
- "Compare yield across the three reactor batches."
- "Are landing-page conversion rates different across the four variants?"
- "Test if HbA1c reduction differs by treatment arm."
If your question implies a follow-up — "and which one is best?" — the planner chains in a Tukey post-hoc automatically.
Reading the result
A small p-value (typically below 0.05) means the data give evidence that at least one group mean differs from the others. The eta-squared value tells you how much of the total variance the grouping explains; values around 0.01 are small, 0.06 medium, and 0.14 large by Cohen's conventions. Lattice's plain-language summary expresses both, then suggests a post-hoc step so you can see which specific groups separate.
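The Cohen cutoffs quoted above can be made concrete as a tiny lookup (a hypothetical helper, not part of Lattice's API):

```python
def eta_squared_label(eta_sq):
    """Map eta squared to Cohen's conventional labels: 0.01 small, 0.06 medium, 0.14 large."""
    if eta_sq >= 0.14:
        return "large"
    if eta_sq >= 0.06:
        return "medium"
    if eta_sq >= 0.01:
        return "small"
    return "negligible"
```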
Common mistakes Lattice guards against
The most common slip is running ANOVA on repeated-measures data (the same subjects measured under every condition), which violates the independence assumption and inflates significance. Lattice asks for confirmation when it sees a column that looks like a subject id. The second slip is treating a borderline non-significant result (say, p = 0.07) as evidence of no difference — Lattice's interpreter explains the difference between "no evidence of effect" and "evidence of no effect."
What if my groups have unequal variance?
Lattice runs Levene's test before ANOVA. If variances differ significantly, the engine switches to Welch's ANOVA automatically and notes the substitution in the result.
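For reference, Welch's ANOVA follows the standard Welch (1951) formulas — precision-weighted group means and an adjusted denominator degrees of freedom. This is an illustrative sketch of the fallback, not Lattice's exact code:

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's ANOVA for unequal variances (standard Welch 1951 formulas)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])

    w = n / variances                       # precision weights
    w_sum = w.sum()
    grand_mean = (w * means).sum() / w_sum  # weighted grand mean

    numerator = (w * (means - grand_mean) ** 2).sum() / (k - 1)
    tmp = (((1 - w / w_sum) ** 2) / (n - 1)).sum()
    denominator = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp

    f_stat = numerator / denominator
    df1 = k - 1
    df2 = (k ** 2 - 1) / (3 * tmp)
    p_value = stats.f.sf(f_stat, df1, df2)
    return f_stat, p_value
```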
How do I find which specific groups differ after ANOVA?
Pair the run with svt_posthoc_tukey for all-pairs comparison or svt_posthoc_dunnett if one of your groups is a control. The chain is suggested automatically when ANOVA is significant.
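If you want to see the all-pairs comparison outside Lattice, scipy's tukey_hsd (SciPy ≥ 1.8) computes the same kind of adjusted pairwise p-values that svt_posthoc_tukey reports; the batch data here is invented:

```python
from scipy import stats

# Invented reactor-batch yields; batch_b clearly runs higher
batch_a = [5.1, 4.9, 5.3, 5.0]
batch_b = [6.8, 7.1, 6.9, 7.0]
batch_c = [5.2, 5.0, 5.4, 5.1]

res = stats.tukey_hsd(batch_a, batch_b, batch_c)
# res.pvalue[i][j] holds the adjusted p-value for group i vs group j
```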
Can I trust ANOVA when one group is much smaller than the others?
ANOVA is reasonably robust to unbalanced sizes if variances are equal. With both unequal sizes and unequal variance, Welch's ANOVA (which Lattice falls back to) is the safer choice.
Tool input schema
Schema for svt_run_anova not exported yet (run pnpm export:registry).