When your data shifts suddenly—due to a product launch, a change in policy, or a technical glitch—time series changepoint detection helps you find exactly when it happened. Use this tool to move beyond simple averages and identify the precise timestamps where your data's behavior fundamentally changed.
Pinpoint Shifts in Your Data
Data rarely follows a smooth, predictable path forever. Whether you are tracking daily active users, manufacturing error rates, or financial performance, external factors often create structural breaks. Time series changepoint detection allows you to automatically locate these transitions without needing to manually inspect every chart.
By identifying these points, you can transform a single long-term dataset into distinct phases. This allows you to compare the 'before' and 'after' states of your operations, making it easier to evaluate the impact of a specific business decision or process change.
How the Analysis Works
The tool uses a deterministic approach to partition your time series into segments. It evaluates candidate breakpoints by calculating the cost of fitting the data within the resulting segments. When introducing a breakpoint lowers this total cost by more than a penalty threshold, the tool flags that time as a changepoint.
You can customize the detection sensitivity based on your needs. For instance, you can choose to detect changes in the average (mean) value, shifts in volatility (variance), or use non-parametric methods for data that doesn't follow a normal distribution. This flexibility lets the tool adapt to the specific characteristics of your metric.
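As a concrete illustration of cost-based detection, the brute-force version tries every split point and keeps the one with the lowest total segment-fitting cost; swapping the cost function changes what kind of shift is detected. The helper names below (best_single_split, sse_cost, variance_cost) are illustrative, not part of the product — a minimal sketch using NumPy:

```python
import numpy as np

def sse_cost(seg):
    # L2 cost: squared deviations from the segment mean (targets mean shifts).
    return float(np.sum((seg - seg.mean()) ** 2))

def variance_cost(seg):
    # Gaussian log-likelihood cost: sensitive to volatility (variance) shifts.
    return len(seg) * np.log(np.var(seg))

def best_single_split(x, cost, min_size=2):
    """Exhaustively evaluate every split point and return the one whose two
    segments have the lowest combined fitting cost."""
    x = np.asarray(x, dtype=float)
    splits = range(min_size, len(x) - min_size + 1)
    return min(splits, key=lambda t: cost(x[:t]) + cost(x[t:]))

# A mean shift is found with the L2 cost...
mean_series = np.array([0.0] * 60 + [3.0] * 40)
print(best_single_split(mean_series, sse_cost))       # split at index 60

# ...while a volatility shift needs the variance-aware cost.
vol_series = np.concatenate([np.tile([0.1, -0.1], 35),
                             np.tile([5.0, -5.0], 35)])
print(best_single_split(vol_series, variance_cost))   # split at index 70
```

The brute-force search is quadratic in practice; it exists here only to make the "cost of fitting the data within segments" idea tangible.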
Understanding the Output
For every changepoint detected, the platform provides a detailed summary. This includes the exact timestamp, the average value of the data before and after the shift, and the magnitude of the change. By comparing the segments, you get a clear, plain-language explanation of how the behavior of your system evolved at that specific moment.
In addition to the raw numbers, the tool calculates a p-value to help you judge how much confidence to place in the detected change. If a shift is marked as significant, that is evidence the change reflects a meaningful deviation in your data rather than ordinary, minor noise.
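The per-changepoint summary can be pictured with a small helper. summarize_changepoint is a hypothetical name, not the platform's API; it simply reproduces the before/after means and magnitude described above:

```python
import numpy as np

def summarize_changepoint(x, cp):
    """Hypothetical helper: summarize the segments either side of a detected
    changepoint at index `cp` -- mean before, mean after, and shift size."""
    before = np.asarray(x[:cp], dtype=float)
    after = np.asarray(x[cp:], dtype=float)
    return {
        "mean_before": float(before.mean()),
        "mean_after": float(after.mean()),
        "magnitude": float(after.mean() - before.mean()),
    }

summary = summarize_changepoint([2.0] * 5 + [8.0] * 5, cp=5)
print(summary)  # {'mean_before': 2.0, 'mean_after': 8.0, 'magnitude': 6.0}
```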
1 · Intent → method
An LLM picks ts_detect_changepoint from a fixed catalog.
2 · Method → numbers
Deterministic Python engine runs the math. Same input → same output.
3 · Numbers → plain language
A second LLM translates the result into your domain’s vocabulary.
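The three stages above can be sketched in miniature. Everything here is a hypothetical stand-in — the catalog entry, the toy detection logic, and the template sentence (the real stages 1 and 3 are LLMs) — but it shows why the middle stage is deterministic:

```python
import numpy as np

# Stage 1 stand-in: a fixed catalog maps user intent to a method name.
CATALOG = {"find when my metric shifted": "ts_detect_changepoint"}

def run_method(name, series):
    """Stage 2 stand-in: deterministic engine. Same input -> same output."""
    if name == "ts_detect_changepoint":
        x = np.asarray(series, dtype=float)
        # Toy detector: best single split by squared-error cost.
        costs = [np.sum((x[:t] - x[:t].mean()) ** 2)
                 + np.sum((x[t:] - x[t:].mean()) ** 2)
                 for t in range(2, len(x) - 1)]
        return {"changepoint": int(np.argmin(costs)) + 2}
    raise KeyError(name)

def explain(result):
    """Stage 3 stand-in: turn numbers into plain language."""
    return f"The series shifted at index {result['changepoint']}."

method = CATALOG["find when my metric shifted"]
result = run_method(method, [1.0] * 10 + [6.0] * 10)
print(explain(result))  # The series shifted at index 10.
```

Because stage 2 contains no sampling and no model calls, rerunning it on the same series always reproduces the same changepoint.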
How does this method decide where a change happens?
It uses an algorithm called PELT (Pruned Exact Linear Time) to scan your data for the points that best partition the timeline. By measuring how the data's statistical properties differ before and after each potential break, it isolates the specific moments where those properties, such as the mean or variance, shift the most.
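PELT's scan can be sketched compactly for the mean-shift case with a squared-error cost. This is an illustrative implementation, not the production engine; in practice an optimized library such as ruptures is the usual choice:

```python
import numpy as np

def pelt_mean_shift(x, penalty):
    """Minimal PELT sketch for mean shifts with an L2 (squared-error) cost."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Prefix sums make each segment cost an O(1) lookup.
    s1 = np.concatenate(([0.0], np.cumsum(x)))
    s2 = np.concatenate(([0.0], np.cumsum(x * x)))

    def seg_cost(a, b):
        # Sum of squared deviations of x[a:b] around its own mean.
        return s2[b] - s2[a] - (s1[b] - s1[a]) ** 2 / (b - a)

    F = np.full(n + 1, np.inf)          # F[t]: optimal cost of x[:t]
    F[0] = -penalty
    last = np.zeros(n + 1, dtype=int)   # last changepoint before t
    candidates = [0]
    for t in range(1, n + 1):
        vals = [F[s] + seg_cost(s, t) + penalty for s in candidates]
        best = int(np.argmin(vals))
        F[t] = vals[best]
        last[t] = candidates[best]
        # Pruning: drop candidates that can never become optimal again.
        candidates = [s for s, v in zip(candidates, vals)
                      if v - penalty <= F[t]]
        candidates.append(t)

    # Backtrack from the end to recover changepoint locations.
    cps, t = [], n
    while t > 0:
        s = last[t]
        if s > 0:
            cps.append(s)
        t = s
    return sorted(cps)

print(pelt_mean_shift([0.0] * 50 + [10.0] * 50, penalty=5.0))  # [50]
```

The penalty term is what keeps the search from splitting at every point: a new changepoint is accepted only when it lowers the segment-fitting cost by more than the penalty.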
What does the 'significant' flag mean in the results?
The significant flag indicates whether the shift at that changepoint passed a statistical test. We use Welch's t-test to compare the data segments before and after the point; if the resulting p-value falls below the significance threshold, the shift is unlikely to have occurred by random chance alone.
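A minimal sketch of that test, assuming SciPy is available: scipy.stats.ttest_ind with equal_var=False performs Welch's variant, which does not assume the two segments share a variance. The synthetic data and the 0.05 threshold here are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
before = rng.normal(loc=10.0, scale=1.0, size=80)  # segment before the break
after = rng.normal(loc=13.0, scale=2.0, size=60)   # segment after the break

# Welch's t-test: equal_var=False drops the equal-variance assumption,
# which suits segments whose volatility may also have changed.
t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)
significant = bool(p_value < 0.05)  # illustrative 5% threshold
print(significant)  # True
```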
Tool input schema
Schema for ts_detect_changepoint not exported yet (run pnpm export:registry).