This method family allows you to decompose chronological data into its fundamental components: trends, cycles, and volatility. Whether you are investigating production yield shifts or user engagement cycles, the workflow relies on Lattice's deterministic architecture. First, our LLM identifies the appropriate statistical tool for your specific data structure. Next, the deterministic engine executes rigorous non-parametric or regression-based calculations, such as Mann-Kendall tests or STL decomposition, to ensure accuracy. Finally, the LLM translates these raw outputs into plain-language summaries, explicitly highlighting when methods reach consistent conclusions and when they diverge, signaling potential data anomalies.
When to choose this family
- You have timestamped data and need to determine if a metric is trending upward, downward, or remaining stationary.
- You suspect cyclical patterns (like weekly or monthly spikes) and want to isolate those signals from the overall trend.
- You need to verify if a change at a specific time point represents a statistically significant shift or just expected variance.
- You require short-term forecasts based on historical patterns using automated ARIMA modeling.
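To illustrate the changepoint use case above: a suspected shift at a known time point can be checked with a two-sample comparison of the data before and after the break. This is a minimal sketch using a normal approximation, not Lattice's actual changepoint procedure, and `shift_zscore` is a hypothetical helper name:

```python
import math

def shift_zscore(before, after):
    """Two-sample z-score for a mean shift at a known breakpoint.
    Illustrative only: assumes reasonably large samples and uses
    a normal approximation for the two-sided p-value."""
    n1, n2 = len(before), len(after)
    m1 = sum(before) / n1
    m2 = sum(after) / n2
    v1 = sum((x - m1) ** 2 for x in before) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in after) / (n2 - 1)
    z = (m2 - m1) / math.sqrt(v1 / n1 + v2 / n2)
    # Two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# A clear level shift: mean ~10 before the breakpoint, mean ~15 after
z, p = shift_zscore([10.1, 9.9, 10.0, 10.2, 9.8] * 4,
                    [15.0, 15.2, 14.9, 15.1, 14.8] * 4)
```

A small p-value here says the shift is larger than the within-segment variance would explain; a p-value near 1 says the "change" is ordinary noise.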
What this family does
Time series methods transform raw chronological data into actionable insights by stripping away noise. The core tools focus on trend detection, seasonality identification, and structural break analysis. By applying dual-method validation, such as running both linear regression and the Mann-Kendall test simultaneously, the platform avoids the pitfall of assuming the data follows a normal distribution.
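The dual-method idea can be sketched in a few lines. This is an illustrative implementation, not Lattice's engine: it computes an OLS slope alongside a Mann-Kendall Z statistic (without tie correction, for brevity) and flags when their signs disagree.

```python
import math
import numpy as np

def mann_kendall_z(y):
    """Mann-Kendall Z statistic (no tie correction, for illustration)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # S = net count of increasing minus decreasing pairs
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

def dual_trend_check(y):
    """Run an OLS slope and Mann-Kendall together; flag disagreement."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    ols_slope = np.polyfit(t, y, 1)[0]
    mk_z = mann_kendall_z(y)
    agree = np.sign(ols_slope) == np.sign(mk_z)
    return ols_slope, mk_z, agree
```

When `agree` is false, the data likely contains outliers or skew that is misleading the parametric fit.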
Beyond description, the tools also provide structural decomposition. By separating your data into trend, seasonal, and residual components, you can see clearly whether a sudden drop in a metric is a seasonal dip or a genuine decline in performance.
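To make the decomposition concrete, here is a simplified classical additive decomposition. It is a sketch of the idea, not the STL algorithm itself, and it assumes an even seasonal period for simplicity:

```python
import numpy as np

def decompose_additive(y, period):
    """Simplified classical additive decomposition (a sketch, not STL):
    trend from a centered moving average, seasonal from period-wise
    means of the detrended series, residual as what remains.
    Assumes an even period; edges of the trend are left as NaN."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    half = period // 2
    trend = np.full(n, np.nan)
    for t in range(half, n - half + 1):
        trend[t] = y[t - half : t + half].mean()  # one full period window
    detrended = y - trend
    seasonal = np.array([np.nanmean(detrended[i::period]) for i in range(period)])
    seasonal -= seasonal.mean()                   # center the seasonal component
    seasonal = np.tile(seasonal, n // period + 1)[:n]
    residual = y - trend - seasonal
    return trend, seasonal, residual
```

On a series with a monthly cycle, a sudden dip that the seasonal component already accounts for will leave the residual flat; a genuine decline shows up in the trend or residual instead.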
How we differ from other approaches
Many platforms default to complex black-box models like deep learning, which often require extensive tuning and are difficult to interpret. Lattice focuses on deterministic, interpretable tools that prioritize clarity over complexity. We favor transparent statistical methods, such as Sen's slope for robust trend estimation, rather than obscuring the math behind deep neural layers.
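Sen's slope is simple enough to state in full: it is the median of the slopes over all pairs of observations. A minimal sketch (the function name is ours for illustration, not a Lattice API):

```python
import numpy as np

def sens_slope(t, y):
    """Sen's slope estimator: the median of the slopes between
    all pairs of points. Robust to outliers, unlike an OLS slope."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i in range(len(y) - 1)
              for j in range(i + 1, len(y))
              if t[j] != t[i]]
    return float(np.median(slopes))
```

Because the median ignores extreme pairwise slopes, a single wild observation barely moves the estimate, whereas an OLS slope would chase it.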
We prioritize data integrity. Our engine enforces strict checks on chronological ordering, missing values, and sample size requirements before running calculations, ensuring you are never provided with a result derived from unreliable data structures.
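The kinds of pre-flight checks described above might look like the following sketch. The thresholds and messages are illustrative assumptions, not Lattice's actual rules:

```python
def validate_series(timestamps, values, min_n=10):
    """Pre-flight integrity checks before any trend calculation.
    Illustrative only: min_n and the messages are assumptions."""
    issues = []
    if len(timestamps) != len(values):
        issues.append("timestamp/value length mismatch")
    if len(values) < min_n:
        issues.append(f"need at least {min_n} observations, got {len(values)}")
    if any(b <= a for a, b in zip(timestamps, timestamps[1:])):
        issues.append("timestamps are not strictly increasing")
    if any(v is None for v in values):
        issues.append("missing values present")
    return issues
```

An empty list means the series passed; any entry in the list is a reason to stop before computing a trend.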
Common mistakes to avoid
A frequent error is ignoring how much history your observations cover. For example, running a seasonality check on a dataset with fewer than two full cycles will produce unreliable results, as the tool lacks sufficient evidence to distinguish a true pattern from random noise.
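The two-full-cycles rule is easy to encode as a guard before running any seasonality check; `min_cycles=2` here is the illustrative threshold from this paragraph, not a documented Lattice constant:

```python
def has_enough_cycles(n_obs, period, min_cycles=2):
    """True if the series spans at least min_cycles full seasonal cycles.
    Illustrative guard; min_cycles=2 is a rule of thumb, not a Lattice constant."""
    return n_obs >= min_cycles * period

# 13 daily points cannot support a weekly (period-7) seasonality check,
# but 28 points cover four full weeks and can.
```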
Another common mistake is treating all time-based data as linear. If your metric behaves according to complex, non-linear growth, relying solely on simple linear regression will lead to poor trend interpretation. Lattice helps mitigate this by providing multi-method feedback, but users should always verify that the chosen method aligns with the underlying data behavior.
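A quick way to see this failure mode: on exponential data, a straight-line fit leaves strongly patterned residuals, while fitting on the log scale recovers the growth rate cleanly. A small numpy sketch with made-up numbers:

```python
import numpy as np

t = np.arange(20, dtype=float)
y = 100 * 1.3 ** t           # exponential growth, 30% per step

# A straight-line fit leaves large, patterned residuals on this data...
lin_resid = y - np.polyval(np.polyfit(t, y, 1), t)

# ...while fitting on the log scale recovers the per-step growth factor
log_slope = np.polyfit(t, np.log(y), 1)[0]
growth_rate = np.exp(log_slope)
```

Here `growth_rate` comes back close to the true factor of 1.3, which the raw linear slope cannot express at all.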
Frequently asked questions
- Why does Lattice run both linear regression and the Mann-Kendall test for trends?
- Real-world data is rarely perfectly normally distributed. Linear regression is fast but can be misled by outliers. By running the non-parametric Mann-Kendall test alongside it, Lattice identifies when the two methods disagree, signaling that your data may be skewed or contain extreme values that warrant closer inspection.
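A small synthetic example shows the disagreement in action: one extreme outlier drags the OLS slope negative while the Mann-Kendall S statistic stays positive, which is exactly the signal worth inspecting. (Illustrative code, not Lattice's implementation.)

```python
import numpy as np

t = np.arange(20, dtype=float)
y = t.copy()
y[0] = 100.0                 # a single extreme outlier at the start

# OLS slope is dragged negative by the outlier
ols_slope = np.polyfit(t, y, 1)[0]

# Mann-Kendall S: net count of increasing minus decreasing pairs,
# which the one outlier cannot overturn
n = len(y)
s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
```

The two methods pointing in opposite directions is the cue to look for extreme values before trusting either trend estimate.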
- Can I use these tools for any time-interval data?
- Yes, but you must ensure your time column is parseable as either a numeric value or a standard date-time format. If your time data is unordered, our engine will sort it automatically; if it contains gaps, we recommend verifying your frequency consistency to ensure the resulting trend and seasonality insights remain accurate.
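If you want to pre-check your data yourself, parsing, sorting, and gap detection take only a few lines of standard-library Python. This is an illustrative sketch with made-up values, not part of Lattice:

```python
from datetime import datetime

raw = [("2024-03-02", 5.0), ("2024-03-01", 4.0), ("2024-03-04", 6.0)]

# Parse ISO-8601 timestamps, then sort chronologically before analysis
rows = sorted((datetime.fromisoformat(ts), v) for ts, v in raw)

# Frequency consistency: are all gaps between consecutive points equal?
gaps = {b[0] - a[0] for a, b in zip(rows, rows[1:])}
uniform = len(gaps) == 1     # False here: 2024-03-03 is missing
```

A non-uniform gap set does not block analysis, but it is worth knowing about before interpreting seasonality results.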