Methods

Machine Learning

Predictive modeling is for analysts, engineers, and researchers who need to identify patterns in complex, non-linear data. Reach for these tools when you need to forecast outcomes or classify observations, such as predicting customer churn, equipment failure, or demand spikes, where the coefficients of a traditional statistical model fail to capture the underlying signal.

Lattice transforms predictive modeling from an opaque black box into a transparent, audit-ready process. When you submit a request, the LLM first interprets your goal and selects the appropriate model (Random Forest, XGBoost, or Neural Network). The platform then triggers a deterministic engine that executes the math with a locked random seed, ensuring that your results remain identical across repeated runs. Finally, the LLM parses the output, integrating SHAP-based feature attribution to explain why the model made a specific prediction. By enforcing a strict anti-hallucination post-check, Lattice automatically flags risks such as overfitting, data scarcity, and class imbalance. This three-stage architecture ensures that you receive actionable insights grounded in rigorous, verifiable computation rather than probabilistic guesswork.
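
The deterministic stage can be pictured with ordinary scikit-learn code. The sketch below illustrates only the locked-seed idea, not Lattice's actual engine; the seed value, dataset, and model settings are all invented for the example.

```python
# Minimal sketch of deterministic training via a locked random seed.
# Assumes scikit-learn; the seed value (42) and model choice are
# illustrative only, not Lattice internals.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

SEED = 42  # locked seed: every rerun reproduces identical results

X, y = make_classification(n_samples=500, n_features=10, random_state=SEED)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=SEED
)

model = RandomForestClassifier(n_estimators=200, random_state=SEED)
model.fit(X_train, y_train)

# Identical across repeated runs because every source of randomness
# (data generation, split, and forest construction) is seeded.
print(model.score(X_test, y_test))
```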

When to choose this family

How these models uncover patterns

These models identify relationships by partitioning data into segments or optimizing weights across layers. Random Forest builds a collection of decision trees to stabilize predictions, XGBoost refines them iteratively to minimize errors, and Neural Networks learn complex representations through layers of activation functions. All three methods translate raw data into a predictive score while simultaneously calculating feature importance.

Instead of relying on rigid statistical assumptions, these tools treat every input variable as a potential contributor to the final prediction. This allows the system to discover meaningful relationships between variables without requiring you to specify their form in advance.
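
As an illustration, here is a minimal sketch of how a tree ensemble produces a predictive score and a feature-importance ranking at the same time. It uses scikit-learn's impurity-based importances as a stand-in for the richer SHAP attribution Lattice reports, and the feature names are hypothetical.

```python
# Sketch: a tree ensemble scores predictions and ranks input features.
# Impurity-based importances stand in for SHAP attribution here; the
# column names are made up for the example.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=5, random_state=0)
names = ["tenure", "usage", "support_calls", "discount", "region_code"]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Every input is treated as a potential contributor; the model reports
# how much each one actually reduced prediction error.
for name, score in sorted(zip(names, model.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```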

Machine learning versus statistical regression

The primary difference lies in the objective: statistical methods excel at testing specific hypotheses and providing p-values, whereas these predictive tools focus on accuracy and on identifying which inputs drive the output. If you need to know whether a specific variable is 'statistically significant' under classical distributional assumptions, reach for our stats tools. If you need to know which variables matter most for your real-world outcome, this family is the better choice.
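
To make the contrast concrete, the sketch below asks both questions of the same synthetic dataset: statsmodels reports per-coefficient p-values from a linear fit, while a Random Forest ranks the inputs by predictive importance. The data, including the deliberately non-linear term, is invented for illustration.

```python
# Sketch of the two questions side by side: "is this coefficient
# significant?" (statistics) versus "which inputs drive the outcome?"
# (machine learning). Assumes statsmodels and scikit-learn.
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(size=200)  # non-linear term

# Statistical view: hypothesis tests and p-values per linear coefficient.
# The squared effect of the second input is largely invisible here.
ols = sm.OLS(y, sm.add_constant(X)).fit()
print(ols.pvalues)

# Predictive view: relative importance of each input to the outcome.
# The non-linear contributor still registers as important.
rf = RandomForestRegressor(random_state=0).fit(X, y)
print(rf.feature_importances_)
```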

Crucially, these models do not imply causality. They are highly capable of spotting that two variables are associated, but they cannot determine whether one causes the other. Lattice flags this distinction so that users do not mistake predictive correlation for causation.

Common pitfalls to avoid

A frequent error is applying these methods to tiny, clean datasets where a simpler model would suffice. When you provide fewer than 50 observations, the platform will flag a low-data warning because the model will likely struggle to generalize to new data.
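
As a hypothetical sketch of what such a check looks like, the snippet below flags undersized datasets. Only the 50-observation threshold comes from the text above; the function name and the wording of the warning are illustrative.

```python
# Hypothetical sketch of a low-data check; the 50-row threshold is
# from the documentation above, everything else is illustrative.
def check_sample_size(n_rows: int, threshold: int = 50) -> str | None:
    """Return a warning when the dataset is too small to generalize."""
    if n_rows < threshold:
        return (f"low-data warning: only {n_rows} observations; "
                f"the model will likely struggle to generalize")
    return None

print(check_sample_size(38))
```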

Another risk is ignoring class imbalance. If 95% of your records belong to one category, the model can achieve high accuracy simply by guessing the majority class. Lattice monitors this and will suggest metrics such as precision, recall, or the F1 score instead of raw accuracy.
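
The sketch below shows why raw accuracy misleads on a 95/5 split: a degenerate model that always guesses the majority class scores 95% accuracy, while precision, recall, and F1 expose the failure. It assumes scikit-learn, and the data and "model" are illustrative.

```python
# On a 95/5 split, accuracy rewards always guessing the majority class;
# precision, recall, and F1 expose the failure. Assumes scikit-learn.
import numpy as np
from sklearn.metrics import accuracy_score, classification_report

y_true = np.array([0] * 95 + [1] * 5)  # 95% majority class
y_pred = np.zeros(100, dtype=int)      # "model" that only guesses class 0

print(accuracy_score(y_true, y_pred))  # 0.95 -- looks great
print(classification_report(y_true, y_pred, zero_division=0))
# recall and F1 for class 1 are 0.0 -- the minority class is never found
```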

Frequently asked questions

Why does the platform sometimes warn me about 'overfitting'?
Overfitting occurs when a model learns the noise in your training data rather than the underlying pattern. Lattice compares your training score to the test score; if the training performance is significantly higher, it triggers an overfit warning because the model will likely perform poorly on new, unseen data.
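
A minimal sketch of this kind of train-versus-test comparison, assuming scikit-learn; the 0.10 gap threshold is illustrative, not Lattice's actual rule:

```python
# Hypothetical sketch of an overfit check: compare the training score
# with the held-out test score and flag a large gap. The 0.10 threshold
# is illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
train_score = model.score(X_train, y_train)
test_score = model.score(X_test, y_test)

print(f"train={train_score:.2f}, test={test_score:.2f}")
if train_score - test_score > 0.10:
    print("overfit warning: model may perform poorly on unseen data")
```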
How can I trust the model's 'black box' output?
Lattice uses SHAP values to demystify every prediction. Rather than just giving you a number, the platform breaks down the contribution of each input variable, so you can see exactly which features drove a specific prediction. This makes the decision process transparent and explainable.
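
For readers who want to see what this attribution looks like in code, here is a minimal sketch using the open-source shap package with a tree model. The dataset and model are illustrative; this is not Lattice's internal implementation.

```python
# Sketch of SHAP-based attribution for a single prediction. Assumes the
# shap package and scikit-learn; data and model are illustrative.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=4, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # attribution for one row

# One additive contribution per input feature: the base value plus these
# contributions sums to the model's prediction for that row.
for i, contribution in enumerate(shap_values[0]):
    print(f"feature_{i}: {contribution:+.2f}")
```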

Methods in this family