Quality (SPC / Cpk / MSA) provides the formal methods needed to verify that your data, and the processes producing it, are reliable enough for decision-making. Lattice ensures analytical integrity through a structured three-stage architecture. First, our LLM identifies the appropriate statistical tool for your specific goal, such as measuring gauge variation or charting process stability. Second, our deterministic engine executes the calculations using fixed constants (like NIST-standard d2 values) to prevent stochastic artifacts. Finally, the LLM interprets the mathematical results, surfacing specific concern flags, such as Nelson rule violations or inadequate process capability, to provide actionable context rather than raw numbers.
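To make the "deterministic engine" stage concrete, here is a minimal sketch (not Lattice's actual implementation) of computing X̄ control limits from fixed, table-driven constants rather than anything stochastic. The d2 values are the standard bias-correction constants published in references such as the NIST/SEMATECH handbook; the function name `xbar_limits` is illustrative.

```python
# Illustrative sketch: deterministic X-bar chart limits from fixed constants.
# d2 bias-correction constants for subgroup sizes 2-5 (standard SPC tables).
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def xbar_limits(subgroups):
    """Return (LCL, center, UCL) for an X-bar chart built from rational subgroups."""
    n = len(subgroups[0])
    xbars = [sum(s) / n for s in subgroups]
    rbar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
    grand_mean = sum(xbars) / len(xbars)
    sigma_within = rbar / D2[n]            # short-term sigma estimated from R-bar
    margin = 3 * sigma_within / n ** 0.5   # 3-sigma limits on the subgroup mean
    return grand_mean - margin, grand_mean, grand_mean + margin
```

Because every constant is fixed, the same subgroups always yield the same limits, which is the property the deterministic stage exists to guarantee.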
When to choose this family
- You need to determine if your measurement system is consistent enough to provide trustworthy data.
- You want to distinguish between common-cause variation and specific, assignable shifts in your process.
- You are evaluating whether a process is capable of staying within customer-defined specification limits.
- You need to quantify the impact of operator or equipment variation on your final measurements.
The three pillars of quality control
This family relies on a specific sequence: Measurement Systems Analysis (MSA), Statistical Process Control (SPC), and Process Capability (Cpk). MSA uses ANOVA-based Gauge R&R to ensure your measurement system isn't the primary source of noise. Once validated, SPC uses Shewhart X̄-R charts to monitor stability over time, ensuring the process remains in a predictable state.
Only after confirming the system is both precise and stable does Cpk analysis provide a credible assessment of whether your process meets design requirements. This sequence prevents false confidence: capability indices can look healthy even when the process is fundamentally unstable or the measurement equipment is faulty.
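The gating implied by this sequence can be sketched as a simple guard (a hypothetical illustration, not Lattice's internal logic). The thresholds follow common AIAG guidance, under which a %GRR below 10% is acceptable and anything at or above 30% is unacceptable:

```python
# Hypothetical sketch of the MSA -> SPC -> Cpk gating sequence described above.
def capability_is_credible(grr_percent, out_of_control_signals):
    """Gate Cpk analysis on a validated gauge and a stable process."""
    if grr_percent >= 30:
        return False, "measurement_system_concern"  # gauge dominates the variation
    if out_of_control_signals:
        return False, "out_of_control_concern"      # assignable causes present
    return True, None
```

The point is the ordering: a capability index is only computed once neither guard fires.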
Deterministic logic vs. descriptive stats
Unlike standard descriptive libraries that merely calculate sample averages, this family operates on strict engineering assumptions. For instance, our SPC tools use subgrouping logic to separate short-term variation from long-term trends, following NIST and AIAG standards. This ensures that the indices we report—like Cp, Cpk, and Ppk—reflect the actual state of the factory floor.
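The short-term vs. long-term distinction can be illustrated with a minimal sketch (illustrative names and simplified formulas, not Lattice's engine): Cpk uses the within-subgroup sigma estimated from R̄/d2, while Ppk uses the overall standard deviation of all readings.

```python
import statistics

# d2 bias-correction constants for subgroup sizes 2-5 (standard SPC tables).
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def cpk_and_ppk(subgroups, lsl, usl):
    """Contrast short-term (Cpk) and long-term (Ppk) capability indices."""
    n = len(subgroups[0])
    flat = [x for s in subgroups for x in s]
    mean = statistics.fmean(flat)
    rbar = statistics.fmean(max(s) - min(s) for s in subgroups)
    sigma_st = rbar / D2[n]             # within-subgroup sigma (short-term)
    sigma_lt = statistics.stdev(flat)   # overall sigma (long-term)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma_st)
    ppk = min(usl - mean, mean - lsl) / (3 * sigma_lt)
    return cpk, ppk
```

When the two indices diverge sharply, the subgrouping is telling you that long-term drift (or, conversely, within-subgroup noise) dominates the process.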
A common mistake is attempting to calculate Cpk on raw data without checking for normality or stationarity. Lattice's tools are designed to surface non-normality flags automatically, preventing you from drawing conclusions based on inappropriate mathematical models.
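A crude version of such a screen can be sketched in a few lines. This is an illustrative stand-in, not Lattice's actual check: it flags heavy skew or heavy tails via sample moments, where a production tool would apply a formal normality test, and the thresholds are assumed for the example.

```python
import statistics

def normality_concerns(data, skew_limit=1.0, kurt_limit=2.0):
    """Crude pre-Cpk normality screen: flag heavy skew or heavy tails.
    Illustrative thresholds; a real tool would use a formal test."""
    n = len(data)
    mean = statistics.fmean(data)
    sd = statistics.pstdev(data)
    skew = sum((x - mean) ** 3 for x in data) / (n * sd ** 3)
    excess_kurt = sum((x - mean) ** 4 for x in data) / (n * sd ** 4) - 3
    flags = []
    if abs(skew) > skew_limit:
        flags.append("non_normality_concern:skew")
    if abs(excess_kurt) > kurt_limit:
        flags.append("non_normality_concern:kurtosis")
    return flags
```

If either flag fires, a normal-theory Cpk is the wrong mathematical model for the data, which is exactly the situation the automatic flags are meant to catch.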
Frequently asked questions
- Why does Lattice insist on running MSA before Cpk?
- If your measurement system has high error (a high %GRR), your process data is effectively noise. Running Cpk on data from an unvalidated measurement system produces a misleading index that reflects the measurement error, not the true capability of your process.
- What do the 'concern' flags mean in the output?
- These are systematic checks triggered when your data violates core statistical assumptions. For example, if a Nelson rule is triggered on a control chart, the system flags an 'out_of_control_concern' to warn you that your process is being influenced by assignable causes, making any further capability analysis invalid.
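As a concrete illustration of how such a flag might be raised (a hypothetical sketch; the flag name `out_of_control_concern` is from the text above, the function names are assumptions), here is Nelson rule 1, which fires when any point falls more than three sigma from the center line:

```python
# Hypothetical sketch of a Nelson-rule check surfacing a concern flag.
def nelson_rule_1(points, center, sigma):
    """Return indices of points beyond the 3-sigma control limits (Nelson rule 1)."""
    return [i for i, x in enumerate(points) if abs(x - center) > 3 * sigma]

def concern_flags(points, center, sigma):
    """Emit an out-of-control flag when any point violates rule 1."""
    return ["out_of_control_concern"] if nelson_rule_1(points, center, sigma) else []
```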