When you have performed a series of experiments with repeated measurements, use the lack-of-fit test to check whether your statistical model adequately describes your data. The test determines whether the gap between your experimental observations and the model's predictions is attributable to random noise or to the model missing the underlying trend.
Understanding Model Adequacy
In response surface methodology, the goal is to build a mathematical representation of how your process factors influence a specific response. However, a model that fits the data well on paper might still miss underlying physical trends if the mathematical form is too simple.
The lack-of-fit test provides a quantitative way to check for this. By partitioning the residual error into 'pure error' (derived from repeat runs) and 'lack-of-fit error' (the gap between the model and the actual mean response), Lattice determines if your model is capturing all available information or if it is oversimplifying the reality of your experiment.
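To make the partition concrete, here is a minimal sketch in plain Python. The data points and the linear predictor are hypothetical illustrations, not Lattice internals:

```python
# Sketch: partition residual sum of squares into pure error and lack of fit.
# Hypothetical (x, y) observations; repeated x values form replicate groups.
from collections import defaultdict

data = [(1.0, 2.1), (1.0, 1.9), (2.0, 3.2), (2.0, 3.0), (3.0, 3.8), (3.0, 4.2)]

# Pure error: variation of y around its group mean at each repeated setting
groups = defaultdict(list)
for x, y in data:
    groups[x].append(y)

ss_pure_error = sum(
    sum((y - sum(ys) / len(ys)) ** 2 for y in ys) for ys in groups.values()
)

# Suppose a fitted model predicts y_hat = 1.0 + x (hypothetical linear fit)
def predict(x):
    return 1.0 + x

# Total residual SS; lack-of-fit SS is whatever pure error does not explain
ss_residual = sum((y - predict(x)) ** 2 for x, y in data)
ss_lack_of_fit = ss_residual - ss_pure_error
```

If the model tracked the group means exactly, `ss_lack_of_fit` would be zero and all residual variation would be pure noise.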
Interpreting the Test Results
Lattice performs this test automatically by analyzing your design points. If the p-value is high, it indicates that the model is sufficient, meaning any remaining deviation between your experimental results and the model's prediction is statistically indistinguishable from random noise.
If the p-value is low, the model is flagged as inadequate. This usually suggests that the chosen polynomial degree (for example, a purely linear model) is not flexible enough to represent the process. In that case, investigate whether adding interaction terms or quadratic (squared) terms brings the model closer to your experimental data.
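As a concrete sketch, the decision reduces to an F-ratio of the two mean squares. The sums of squares and run counts below are made-up values for illustration, not Lattice output:

```python
# Hypothetical sums of squares from a small experiment
ss_lof, ss_pe = 0.02, 0.12   # lack-of-fit and pure-error sums of squares
n, m, p = 6, 3, 2            # runs, distinct factor settings, model terms

df_lof = m - p               # degrees of freedom for lack of fit
df_pe = n - m                # degrees of freedom for pure error

ms_lof = ss_lof / df_lof     # mean squares
ms_pe = ss_pe / df_pe
f_stat = ms_lof / ms_pe      # compare against the F(df_lof, df_pe) distribution

# The p-value is the upper-tail probability of that F distribution,
# e.g. scipy.stats.f.sf(f_stat, df_lof, df_pe) in a Python engine.
# A small F-ratio (pure error dominates) gives a high p-value: adequate fit.
```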
The Role of Repeat Measurements
For this test to provide reliable insights, your experimental design must include repeat observations. These could be replicated center points in a Central Composite or Box-Behnken design, or specific runs you chose to duplicate to ensure consistency.
Lattice groups your data based on natural units to identify these replicate sets. Whether you have structured design points or manually added repeats, the platform uses these to estimate the pure error. If no repeat points are detected, the test cannot be performed, and the platform will notify you accordingly.
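A minimal sketch of replicate detection by grouping runs on their factor settings. The run data, factor names, and rounding tolerance are assumptions for illustration, not the platform's actual grouping logic:

```python
# Hypothetical runs; identical factor settings define a replicate group.
runs = [
    {"temp": 150.0, "time": 30.0, "yield": 78.1},
    {"temp": 150.0, "time": 30.0, "yield": 77.6},  # replicate of run 1
    {"temp": 170.0, "time": 30.0, "yield": 81.4},
]

factors = ("temp", "time")
groups = {}
for run in runs:
    # Round to absorb tiny numeric noise in the recorded settings
    key = tuple(round(run[f], 6) for f in factors)
    groups.setdefault(key, []).append(run["yield"])

replicate_sets = {k: v for k, v in groups.items() if len(v) > 1}
has_replicates = bool(replicate_sets)  # if False, the test cannot run
```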
1 · Intent → method
An LLM picks rsm_lack_of_fit_test from a fixed catalog.
2 · Method → numbers
Deterministic Python engine runs the math. Same input → same output.
3 · Numbers → plain language
A second LLM translates the result into your domain’s vocabulary.
Why do I need repeated experiments to run this test?
The lack-of-fit test works by comparing the variation within your repeated points (pure error) to the residuals left over by your model. Without at least one group of repeated measurements, the test cannot mathematically isolate the pure noise from the model's error.
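The degrees-of-freedom argument can be made concrete: pure error carries n − m degrees of freedom (total runs minus distinct factor settings), so a design with no repeats leaves zero and the F-ratio is undefined. A small sketch with hypothetical settings:

```python
def pure_error_df(settings):
    # n - m: total runs minus distinct factor settings
    return len(settings) - len(set(settings))

no_repeats = [(1.0,), (2.0,), (3.0,)]
with_repeats = [(1.0,), (1.0,), (2.0,), (3.0,)]

df_none = pure_error_df(no_repeats)    # 0 -> test cannot be performed
df_some = pure_error_df(with_repeats)  # 1 -> pure error can be estimated
```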
What does a significant p-value mean for my model?
If the lack-of-fit test returns a low p-value, it indicates that your current model is likely inadequate. This suggests that the model fails to capture significant patterns in the data, and you may need to add interaction terms or switch to a higher-order model to improve accuracy.
Tool input schema
Schema for rsm_lack_of_fit_test not exported yet (run pnpm export:registry).