Predictive modeling is a method of forecasting future outcomes by building statistical models from data. It’s one of the premier ways a business can see its path forward and make plans accordingly. While not foolproof, the method tends to produce highly accurate forecasts, which is why it is so commonly used.
What Is Predictive Modeling?
In short, predictive modeling is a statistical technique that uses machine learning and data mining to forecast likely future outcomes with the aid of historical and existing data. It works by analysing current and historical data and projecting what it learns onto a model built to forecast likely outcomes. Predictive modeling can be used to predict just about anything, from TV ratings and a customer’s next purchase to credit risks and corporate earnings.
A predictive model is not fixed; it is validated or revised regularly to incorporate changes in the underlying data. In other words, it’s not a one-and-done prediction. Predictive models make assumptions based on what has happened in the past and what is happening now. If new, incoming data shows changes in what is happening now, the likely future outcome must be recalculated, too. For example, a software company could model historical sales data against marketing expenditures across multiple regions to create a model for future revenue based on the impact of the marketing spend.
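To make that sales-and-marketing example concrete, here is a minimal sketch assuming hypothetical regional figures and a simple linear regression in scikit-learn; a real model would use far more data and likely more variables.

```python
# Minimal sketch of the example above: model historical revenue against
# marketing spend, then forecast revenue for a planned spend.
# All figures are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

spend = np.array([[50.0], [80.0], [120.0], [150.0], [200.0]])  # spend per region, in $000s
revenue = np.array([410.0, 620.0, 900.0, 1150.0, 1500.0])      # revenue per region, in $000s

model = LinearRegression().fit(spend, revenue)

planned_spend = np.array([[175.0]])
print(f"Forecast revenue: ~{model.predict(planned_spend)[0]:,.0f}k")

# As new quarterly data arrives, the model is refit -- the "validated
# or revised regularly" step described above.
```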
Most predictive models work fast and often complete their calculations in real time. That’s why banks and retailers can, for example, calculate the risk of an online mortgage or credit card application and accept or decline the request almost instantly based on that prediction.
Some predictive models are more complex, such as those used in computational biology and quantum computing. Their outputs take longer to compute than a credit card decision, but advances in technological capabilities, including computing power, mean they are produced far more quickly than was once possible.
Top 5 Types of Predictive Models
Fortunately, predictive models don’t have to be created from scratch for every application. Predictive analytics tools use a variety of vetted models and algorithms that can be applied to a wide spread of use cases.
Predictive modeling techniques have been refined over time. As data volumes grow, computing power increases and AI, machine learning and analytics advance, we’re able to do more with these models.
The top five predictive analytics models are:
- Classification model: Considered the simplest model, it categorises data for simple and direct query responses. An example use case would be answering the question “Is this a fraudulent transaction?” (a minimal sketch appears after this list).
- Clustering model: This model nests data together by common attributes. It groups things or people that share characteristics or behaviours so that strategies can be planned for each group at scale. An example is determining the credit risk of a loan applicant based on what other people in the same or a similar situation did in the past.
- Forecast model: A very popular model, it works on anything with a numerical value by learning from historical data. For example, in answering how much lettuce a restaurant should order next week or how many calls a customer support agent should be able to handle per day or week, the system looks back at historical data.
- Outliers model: This model works by analysing abnormal or outlying data points. A bank, for example, might use an outlier model to identify fraud by asking whether a transaction is outside the customer’s normal buying habits or whether an expense in a given category is normal. A $1,000 credit card charge for a washer and dryer at the cardholder’s preferred big box store would not be alarming, but $1,000 spent on designer clothing in a location where the customer has never charged other items might indicate a breached account.
- Time series model: This model evaluates a sequence of data points based on time. For example, the number of stroke patients admitted to the hospital in the last four months is used to predict how many patients the hospital might expect to admit next week, next month or the rest of the year. A single metric measured and compared over time is thus more meaningful than a simple average.
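As a minimal sketch of the classification model’s “Is this a fraudulent transaction?” question, the example below trains a logistic regression on a handful of invented transactions; the features, labels and scikit-learn approach are illustrative assumptions, not a production fraud system.

```python
# Toy classification model: label transactions as fraudulent or not.
# Features and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [amount_usd, hour_of_day, is_new_merchant]; label 1 = fraud.
X = np.array([
    [20.0,   14, 0],
    [35.0,   10, 0],
    [900.0,   3, 1],
    [15.0,   18, 0],
    [1200.0,  2, 1],
    [45.0,   12, 0],
])
y = np.array([0, 0, 1, 0, 1, 0])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new transaction: a large 4 a.m. purchase from a new merchant.
new_tx = np.array([[850.0, 4, 1]])
print("Fraudulent?", bool(clf.predict(new_tx)[0]))
```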
Common Predictive Algorithms
Predictive algorithms use one of two things: machine learning or deep learning. Both are subsets of artificial intelligence (AI). Machine learning (ML) involves structured data, such as spreadsheet or machine data. Deep learning (DL) deals with unstructured data such as video, audio, text, social media posts and images: essentially the things humans communicate with that are not numbers or metric readings.
Some of the more common predictive algorithms are:
- Random Forest: This algorithm combines many unrelated decision trees and can use both classification and regression to sort vast amounts of data.
- Generalised Linear Model (GLM) for Two Values: This algorithm narrows down the list of variables to find the “best fit.” It can work out tipping points, changes in data capture and other influences, such as categorical predictors, to determine the “best fit” outcome, thereby overcoming drawbacks found in other models, such as ordinary linear regression.
- Gradient Boosted Model: This algorithm also uses several combined decision trees, but unlike Random Forest, the trees are related. It builds out one tree at a time, thus enabling the next tree to correct flaws in the previous tree. It’s often used in rankings, such as on search engine outputs.
- K-Means: A popular and fast algorithm, K-Means groups data points by similarities and so is often used for the clustering model (see the sketch after this list). It can quickly render things like personalised retail offers to individuals within a huge group, such as a million or more customers with a similar liking for lined red wool coats.
- Prophet: This algorithm is used in time-series or forecast models for capacity planning, such as for inventory needs, sales quotas and resource allocations. It is highly flexible and can easily accommodate heuristics and an array of useful assumptions.
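Here is a minimal K-Means sketch for the clustering use case mentioned above, grouping customers by two invented features; the cluster count and data are assumptions for demonstration.

```python
# Toy K-Means clustering: group customers by similarity so that one
# offer strategy can be planned per group. Data invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [annual_spend_usd, visits_per_month] for one customer.
customers = np.array([
    [200.0,  1], [250.0,  2], [220.0,  1],   # occasional shoppers
    [1200.0, 8], [1100.0, 9], [1300.0, 7],   # frequent high spenders
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)                          # group assignment per customer

# A new customer is assigned to the nearest existing group.
print(kmeans.predict(np.array([[1150.0, 8]])))
```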
Predictive Modeling and Data Analytics
Predictive modeling is also known as predictive analytics. Generally, the term “predictive modeling” is favoured in academic settings, while “predictive analytics” is the preferred term for commercial applications of predictive modeling.
Successful use of predictive analytics depends heavily on unfettered access to sufficient volumes of accurate, clean and relevant data. While predictive models, such as those using decision trees and k-means clustering, can be extraordinarily complex, the most complex are typically neural networks: the models by which computers are trained to predict outcomes. Machine learning can use a neural network to find correlations in exceptionally large data sets and “learn” to identify patterns within the data.
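As a toy illustration of a neural network finding a pattern, the sketch below trains a small multi-layer perceptron on the XOR pattern, which no linear model can fit; the architecture and scikit-learn choice are assumptions for demonstration only.

```python
# Toy neural network: learn the XOR pattern, which a linear model cannot.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # "exclusive or" of the two inputs

net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=1)
net.fit(X, y)
print(net.predict(X))  # typically [0 1 1 0] once the pattern is learned
```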
Benefits of Predictive Modeling
In a nutshell, predictive analytics reduces the time, effort and cost of forecasting business outcomes. Variables such as environmental factors, competitive intelligence, regulatory changes and market conditions can be factored into the mathematical calculation to render more complete views at relatively low cost.
Examples of specific types of forecasting that can benefit businesses include demand forecasting, headcount planning, churn analysis, analysis of external factors, competitive analysis, fleet and IT hardware maintenance and financial risk.
Challenges of Predictive Modeling
It’s essential to keep predictive analytics focused on producing useful business insights because not everything this technology digs up is useful. Some mined information is of value only in satisfying a curious mind and has few or no business implications. Getting side-tracked is a distraction few businesses can afford.
Also, being able to use more data in predictive modeling is an advantage only up to a point. Too much data can skew the calculation and lead to a meaningless or erroneous outcome. For example, more coats are sold as the outside temperature drops, but only to a point. People do not buy more coats when it’s -20 degrees Fahrenheit than they do when it’s -5 degrees. At a certain point, it is cold enough to spur the purchase of coats, and more frigid temperatures no longer appreciably change that pattern.
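A quick numeric sketch of that saturation effect, with invented figures: a naive straight-line fit keeps extrapolating upward into extreme cold even though actual sales have plateaued.

```python
# Invented coat-sales data: sales rise as temperature falls, then plateau.
import numpy as np

temps_f = np.array([50, 40, 30, 20, 10, 0, -5, -20])  # outside temperature
coats = np.array([10, 25, 40, 60, 75, 80, 81, 81])    # coats sold; note the plateau

slope, intercept = np.polyfit(temps_f, coats, 1)      # naive linear fit
print(f"Linear model at -20F: {slope * -20 + intercept:.0f} coats")  # ~96
print("Actual sales at -20F: 81 coats")               # the linear model overshoots
```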
And with the massive volumes of data involved in predictive modeling, maintaining security and privacy will also be a challenge. Further challenges rest in machine learning’s limitations.
Limitations of Predictive Modeling
According to a McKinsey report, common limitations and their “best fixes” include:
- Errors in data labeling: These can be overcome with reinforcement learning or generative adversarial networks (GANs).
- Shortage of massive data sets needed to train machine learning: A possible fix is “one-shot learning,” wherein a machine learns from a small number of demonstrations rather than from a massive data set.
- The machine’s inability to explain what it did and why: Machines do not “think” or “learn” like humans. Likewise, their computations can be so exceptionally complex that humans have trouble finding, let alone following, the logic. All this makes it difficult for a machine to explain its work, or for humans to do so. Yet model transparency is necessary for a number of reasons, with human safety chief among them. Promising potential fixes are local interpretable model-agnostic explanations (LIME) and attention techniques (a LIME sketch follows this list).
- Generalisability of learning, or rather lack thereof: Unlike humans, machines have difficulty carrying what they’ve learned forward. They have trouble applying it to a new set of circumstances, so whatever a model has learned applies to one use case only. This is largely why we need not worry about the rise of AI overlords anytime soon. For predictive modeling using machine learning to be reusable (that is, useful in more than one use case), a possible fix is transfer learning.
- Bias in data and algorithms: Non-representative data can skew outcomes and lead to the mistreatment of large groups of people. Further, baked-in biases are difficult to find and purge later; in other words, they tend to self-perpetuate. This is a moving target, and no clear fix has yet been identified.
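To show what a LIME explanation looks like in practice, here is a hedged sketch using the third-party lime package with a placeholder model and invented loan data; the feature names and labels are hypothetical assumptions for illustration.

```python
# Hedged LIME sketch: explain one prediction of a placeholder model.
# Requires the third-party packages scikit-learn and lime (pip install lime).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Invented data: 100 loan applicants, 3 features each.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # toy decision rule

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["income", "debt_ratio", "age"],  # hypothetical names
    class_names=["approve", "decline"],
    mode="classification",
)

# Which features pushed this one applicant's prediction, and how hard?
explanation = explainer.explain_instance(X_train[0], model.predict_proba,
                                         num_features=3)
print(explanation.as_list())  # e.g. [("income > ...", 0.31), ...]
```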
The Future of Predictive Modeling
Predictive modeling, also known as predictive analytics, and machine learning are still young and developing technologies, meaning there is much more to come. As techniques, methods, tools and technologies improve, so will the benefits to businesses and societies.
However, these are not technologies that businesses can afford to adopt later, after the tech reaches maturity and all the kinks are worked out. The near-term advantages are simply too strong for a late adopter to overcome and remain competitive.
Our advice: Understand and deploy the technology now and then grow the business benefits alongside subsequent advances in the technologies.
Predictive Modeling in Platforms
For all but the largest companies, reaping the benefits of predictive analytics is most easily achieved by using ERP systems that have the technologies built in and contain pretrained machine learning models. For example, planning, forecasting and budgeting features may provide a statistical model engine to rapidly model multiple scenarios that deal with changing market conditions.
As another example, a supply planning or supply capacity function can similarly predict potentially late deliveries, at-risk purchase or sales orders and other risks or impacts. Alternate suppliers can also be represented on the dashboard, enabling companies to pivot to meet manufacturing or distribution requirements.
Financial modeling and planning and budgeting are key areas to reap the many benefits of using these advanced technologies without overwhelming your team.