This family provides a systematic framework for planning experiments, ensuring that every run contributes maximum information. When you identify the factors you need to test, Lattice acts as the conductor: first, our LLM interprets your process constraints; then, the deterministic engine executes the chosen design algorithm (such as central composite (CCD), Box-Behnken (BBD), or optimal designs) to generate your test matrix; finally, the LLM presents the trial schedule in plain language. By separating the mathematical design logic from the experimental parameters, we ensure that your testing plan remains mathematically sound, unbiased, and ready for the lab.
When to choose this family
- You need to determine the optimal settings for a process with 2-6 key variables.
- You have limited time or budget and cannot afford a full test of every possible combination.
- You need to navigate physical constraints, such as avoiding dangerous combinations of high temperature and high pressure.
- You want to identify both the primary factor effects and the second-order (curvature) effects shaping your process output.
How the process works
The core objective is to move from 'trial and error' to a structured approach. You define your factors and their boundaries, and the platform generates a design matrix that dictates exactly how to set each variable for each trial.
Whether using geometric templates like CCD or BBD, or algorithm-based optimal designs, the goal is to capture maximum signal about your process while minimizing the number of runs required to reach statistical confidence.
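To make the structure of a geometric template concrete, here is a minimal sketch of how a central composite design is assembled in coded units. This is an illustrative reconstruction, not Lattice's actual engine; the function name and defaults are assumptions.

```python
from itertools import product

def ccd_matrix(k, n_center=3, alpha=None):
    """Build a central composite design in coded units for k factors.

    Runs = 2**k factorial corners + 2*k axial (star) points
    + n_center center replicates. alpha defaults to the rotatable
    value (2**k)**0.25.
    """
    if alpha is None:
        alpha = (2 ** k) ** 0.25  # rotatable CCD
    corners = [list(pt) for pt in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for sign in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = sign
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return corners + axial + center

design = ccd_matrix(k=2)
print(len(design))  # 4 corners + 4 axial + 3 center = 11 runs
```

Eleven runs for two factors, versus dozens for a fine full-factorial grid, is the efficiency the geometric templates buy you: enough points to fit a full quadratic model, with replicated center points to estimate pure error.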
Selecting the right approach
What separates this family from basic data analysis is its proactive nature. While regression helps you understand data you already have, these methods decide what data to collect next. Choosing between, for instance, a Box-Behnken design and an optimal design depends on whether your factor space is unconstrained or restricted by physical limitations.
We distinguish between designs that prioritize model parameter estimation (D-optimal) and those that prioritize prediction accuracy (I-optimal), so you can tailor the experiment to whether you want to understand the 'why' behind your process or tighten the precision of its predicted output.
Common pitfalls
A frequent error is assuming that more runs always equate to better results. The real constraint is the match between model complexity and run budget: specifying more model terms than your available runs can estimate produces designs that are mathematically unstable or impossible to construct at all.
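The arithmetic behind this pitfall is simple to check. A full quadratic model in k factors needs 1 intercept, k linear terms, k squared terms, and k(k-1)/2 two-way interactions, and your design must supply at least that many runs just to fit the model. A quick sanity check (illustrative only):

```python
def quadratic_terms(k):
    """Coefficients in a full quadratic model with k factors:
    intercept + k linear + k squared + k*(k-1)//2 interactions."""
    return 1 + 2 * k + k * (k - 1) // 2

for k in range(2, 7):
    p = quadratic_terms(k)
    print(f"{k} factors -> {p} terms; need at least {p} runs to fit")
```

At six factors the full quadratic model already has 28 terms, so a 20-run budget cannot support it; you would need to drop terms or add runs before any design algorithm can help.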
Another mistake is neglecting to account for process constraints early. If a specific combination of factors creates a safety or quality risk, failing to use a constrained approach—like D-optimal—will result in a plan that cannot be executed in the real world.
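The idea behind a constrained D-optimal design can be sketched in a few lines: remove the forbidden factor combinations from the candidate set, then choose the subset of runs that maximizes det(XᵀX) for your model. The sketch below uses brute-force search over a tiny candidate grid; it is an assumption-laden illustration (real engines use exchange algorithms and larger candidate sets), and all function names are hypothetical.

```python
from itertools import combinations, product

def det(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    a = [row[:] for row in m]
    n, d = len(a), 1.0
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[piv][i]) < 1e-12:
            return 0.0  # singular: the design can't fit the model
        if piv != i:
            a[i], a[piv] = a[piv], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def model_row(t, p):
    # Linear-plus-interaction model: intercept, temp, pressure, temp*pressure.
    return [1.0, t, p, t * p]

# Candidate grid in coded units, minus the forbidden hot-and-high corner.
candidates = [(t, p) for t, p in product([-1, 0, 1], repeat=2)
              if not (t == 1 and p == 1)]

def d_optimal(cands, n_runs):
    """Exhaustively pick the n_runs subset maximizing det(X'X)."""
    best, best_det = None, -1.0
    for subset in combinations(cands, n_runs):
        x = [model_row(t, p) for t, p in subset]
        k = len(x[0])
        xtx = [[sum(row[i] * row[j] for row in x)
                for j in range(k)] for i in range(k)]
        d = det(xtx)
        if d > best_det:
            best, best_det = subset, d
    return best, best_det

plan, score = d_optimal(candidates, n_runs=6)
```

Because the unsafe corner was excluded before optimization, the resulting plan is guaranteed executable; the algorithm then spreads the remaining information-gathering as efficiently as the feasible region allows.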
Frequently asked questions
- How do I know which design type to pick?
- Start by identifying your constraints. If you have no physical limits on variable combinations, standard geometric templates like CCD or BBD are often sufficient. If you have specific 'forbidden' regions or an irregular number of available experimental runs, the Lattice D-optimal or I-optimal tools are better suited to construct a custom design.
- Can Lattice help me if I don't know the exact number of experiments I can run?
- Yes. If you use the D-optimal method, you define the number of runs you can afford, and the algorithm will calculate the most informative set of trials for that specific sample size, rather than forcing you into a rigid, pre-defined experimental structure.