How to quickly solve machine learning forecasting problems using Pandas and BigQuery



Time-series forecasting problems are ubiquitous throughout the business world. For example, you may want to predict the probability that some event will happen in the future or forecast how many units of a product you’ll sell over the next six months. Forecasting like this can be posed as a supervised machine learning problem.

Like many machine learning problems, the most time-consuming part of forecasting can be setting up the problem, constructing the input, and feature engineering. Once you have created the features and labels that come out of this process, you are ready to train your model.

A common approach to creating features and labels is to use a sliding window, where the features are historical entries and the label(s) represent entries in the future. As any data scientist who works with time series knows, this sliding window approach can be tricky to get right.

[Animation: a sliding window on an example dataset. Each window represents a feature vector, and the label(s) are one or more points in the future.]
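To make the idea concrete, here is a minimal sketch of the windowing logic on a toy weekly price series (the values and column names are made up for illustration; the reusable function introduced below handles the general case):

import pandas as pd

# Toy weekly price series; values are hypothetical.
prices = pd.Series([100., 102., 101., 105., 107., 110., 108., 112.])

WINDOW = 4   # Historical points per feature vector.
HORIZON = 2  # How many steps past the window the label sits.

rows = []
for start in range(len(prices) - WINDOW - HORIZON + 1):
    features = prices.iloc[start:start + WINDOW].tolist()
    label = prices.iloc[start + WINDOW + HORIZON - 1]
    rows.append(features + [label])

columns = [f'price_t-{WINDOW - i}' for i in range(WINDOW)] + ['label']
windowed = pd.DataFrame(rows, columns=columns)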

Below is a good workflow for tackling forecasting problems:

1. Create features and labels on a subsample of data using Pandas and train an initial model locally
2. Create features and labels on the full dataset using BigQuery
3. Utilize BigQuery ML to build a scalable machine learning model
4. (Advanced) Build a forecasting model using Recurrent Neural Networks in Keras and TensorFlow

In the rest of this blog, we’ll use an example to provide more detail into how to build a forecasting model using the above workflow. (The code is available on AI Hub)

First, train locally

Machine learning is all about running experiments. The faster you can run experiments, the more quickly you can get feedback, and thus the faster you can get to a Minimum Viable Model (MVM). It’s beneficial, then, to first work on a subsample of your dataset and train locally before scaling out your model using the entire dataset.

Let’s build a model to forecast the median housing price week-by-week for New York City. We spun up a Deep Learning VM on Cloud AI Platform and loaded our data from nyc.gov into BigQuery. Our dataset goes back to 2003, but for now let’s just use prices beginning in 2011.

[Figure: median housing price data.]

Since our goal is to forecast future prices, let’s create sliding windows that accumulate historical prices (features) and a future price (label). Our source table contains date and median price:

[Figure: source table with date and median sales price.]

Here is the entire dataset plotted over time:

[Figure: the entire dataset plotted over time.]

To create our features, we’ll pick a historical window size (e.g., one year) that will be used to forecast the median home price six months into the future. To do this, we have implemented a reusable function based on Pandas that allows you to easily generate time-series features and labels. Feel free to use this function on your own dataset.

import time_series

WINDOW_SIZE = 52   # Weeks of history per feature vector.
HORIZON = 4 * 6    # Predict approximately 6 months into the future.
LABELS_SIZE = 1    # The label will be the price for one week.

df = time_series.create_rolling_features_label(sales,
                                               window_size=WINDOW_SIZE,
                                               pred_offset=HORIZON)

# Add date features: year, day of year, month, etc.
df = time_series.add_date_features(df, df.index)

After running create_rolling_features_label, a feature vector of length 52 (plus the date features) is created for each example, representing the features before the prediction date.

df.head()

[Figure: the first rows of the feature DataFrame returned by create_rolling_features_label.]
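The date features themselves are simple to derive. As a rough sketch of what a helper like add_date_features might compute (the exact columns in the library’s version may differ), assuming a DatetimeIndex:

import pandas as pd

def add_date_features_sketch(df, dates):
    # Hypothetical re-implementation for illustration; the library's
    # add_date_features may produce different columns.
    df = df.copy()
    df['year'] = dates.year
    df['month'] = dates.month
    df['day_of_year'] = dates.dayofyear
    return df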

This can be shown with a rolling window:

[Animation: the create_rolling_features_label function creates windows for the features and label. Here the features consist of 52 weeks and the label is a week six months into the future.]

Once we have the features and labels, the next step is to create a training and test set. In time-series problems, it’s important to split them temporally so that you are not leaking future information that would not be available at test time into the trained model.

[Figure: the training set (blue) consists of rows whose label occurs before the split date ('2015-12-30'), while the test set (green) consists of rows whose label falls after it.]
# Features, label.
X = df.drop('label', axis=1)
y = df['label']

# Train/test split, splitting on time.
train_ix = time_series.is_between_dates(y.index,
                                        end='2015-12-30')
test_ix = time_series.is_between_dates(y.index,
                                       start='2015-12-30',
                                       end='2018-12-30 08:00:00')
X_train, y_train = X.iloc[train_ix], y.iloc[train_ix]
X_test, y_test = X.iloc[test_ix], y.iloc[test_ix]

In practice, you may want to scale your data using z-normalization or detrend it to reduce seasonality effects; differencing can also help remove trend information (a sketch of this preprocessing follows the results below). Now that we have features and labels, this becomes a traditional supervised learning problem, and you can use your favorite ML library to train a model. Here is a simple example using sklearn:

from sklearn.ensemble import RandomForestRegressor

# Train model.
cl = RandomForestRegressor(n_estimators=500, max_features='sqrt',
                           random_state=10, criterion='mse')
cl.fit(X_train, y_train)
pred = cl.predict(X_test)

random_forest_metrics = time_series.Metrics(y_test, pred)
random_forest_metrics.report("Forest Model")

Forest Model results
~~~~~~~~~~~~~~~~~~~~
RMSE: 236008.164
MAE: 180427.209
MALR: 0.112
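As promised above, here is a minimal sketch of the detrending and scaling step. It assumes the weekly price columns share a common prefix (the column names here are an assumption about create_rolling_features_label’s output), and it computes the statistics on the training set only:

# Select the historical price columns; the prefix is an assumption about
# how create_rolling_features_label names its output.
price_cols = [c for c in X_train.columns if c.startswith('price')]

# First-order differencing across each window removes a linear trend.
X_train_diff = X_train[price_cols].diff(axis=1).iloc[:, 1:]
X_test_diff = X_test[price_cols].diff(axis=1).iloc[:, 1:]

# z-normalize with training-set statistics only, so no information from
# the test period leaks into the transform.
mu = X_train_diff.to_numpy().mean()
sigma = X_train_diff.to_numpy().std()
X_train_scaled = (X_train_diff - mu) / sigma
X_test_scaled = (X_test_diff - mu) / sigma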

Scale our model

Let’s imagine we want to put our model into production and automatically run it every week, using batch jobs, to get a better idea of future sales. Let’s also imagine we may want to forecast day-by-day instead of week-by-week.

Our data is stored in BigQuery, so let’s use the same logic that we used in Pandas to create features and labels, but instead run it at scale using BigQuery. We have developed a generalized Python function that creates a SQL string that lets you do this with BigQuery:

import scalable_time_series

table_name = 'asl-testing-217717.nyc_real_estate.all_sales'
value_name = 'med_sales_price'
window_size = 52 * 7  # Days of history per feature vector.
horizon = 4 * 6 * 7   # Days; predict approximately 6 months into the future.
labels_size = 1

query_bq_sub_sequences = scalable_time_series.bq_create_rolling_features_label(
    table_name, value_name, window_size, horizon, labels_size)

We pass the table name that contains our data, the value name we are interested in, the window size (the input sequence length), the horizon (how far ahead in time we skip between our features and our labels), and the labels size (the output sequence length). Labels size is equal to 1 here because, for now, we are only modeling sequence-to-one, even though this data pipeline can handle sequence-to-sequence. Feel free to write your own sequence-to-sequence model to take full advantage of the data pipeline!
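For intuition, the generated SQL is along these lines. The sketch below shows a drastically simplified version for window_size=3, horizon=2, and labels_size=1; the real bq_create_rolling_features_label output is more general, and the column names here are assumptions:

# Simplified sketch of the generated SQL; not the library's actual output.
sketch_sql = """
SELECT
  date,
  LAG(med_sales_price, 2) OVER (ORDER BY date) AS price_back_3,
  LAG(med_sales_price, 1) OVER (ORDER BY date) AS price_back_2,
  med_sales_price AS price_back_1,
  LEAD(med_sales_price, 2) OVER (ORDER BY date) AS price_ahead_1
FROM
  `asl-testing-217717.nyc_real_estate.all_sales`
"""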

We can then execute the generated SQL string, query_bq_sub_sequences, in BigQuery. A sample of the output shows that each row is a different sequence. For each sequence, we can see the time ranges of the features and the labels. For the features, the timespan is 52 weeks, which is the window_size, and for the labels it is one day, which is the labels_size.
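A minimal way to run the generated query from Python and pull that sample into pandas, assuming the google-cloud-bigquery client library is installed and authenticated:

from google.cloud import bigquery

client = bigquery.Client()
# Run the generated feature/label query and load the result into pandas.
sample = client.query(query_bq_sub_sequences).to_dataframe()
sample.head()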

[Figure: a sample of the generated sequences with the time ranges of their features and labels.]

Looking at the same sampled rows, we can see how the training data is laid out. We have a column for each timestep of the previous price, starting with the farthest back in time on the left and moving forward. The last column is the label, the price one week ahead.

[Figure: the training data laid out with one column per timestep and the label in the last column.]

Now we have our data, ready for training, in a BigQuery table. Let’s take advantage of BigQuery ML and build a forecasting model using SQL.

[Figure: building the forecasting model using SQL.]

Above, we are creating a linear regression model that uses our 52 past price features to predict our label, price_ahead_1. This will create a BigQuery ML model in our bqml_forecasting dataset.
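The statement in the screenshot is along the lines of the following sketch (the model and table names here are assumptions):

create_model_sql = """
CREATE OR REPLACE MODEL `bqml_forecasting.median_price_model`
OPTIONS(model_type='linear_reg',
        input_label_cols=['price_ahead_1']) AS
SELECT
  *
FROM
  `bqml_forecasting.training_data`
"""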

[Figure: the 52 past price features used by the model.]

We can check how our model performed by calling TRAINING_INFO. This shows the training run index, the iteration index, the training and eval loss at each iteration, the duration of the iteration, and the iteration’s learning rate. Our model is training well, since the eval loss keeps getting smaller with each iteration.
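The call itself is a one-liner (a sketch, with the model name assumed from above):

training_info_sql = """
SELECT
  *
FROM
  ML.TRAINING_INFO(MODEL `bqml_forecasting.median_price_model`)
"""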

[Figure: ML.TRAINING_INFO output showing loss per iteration.]

We can also do an evaluation of our trained model by calling EVALUATE. This will show common evaluation metrics that we can use to compare our model with other models to find the best choice among all of our options.
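A sketch of the evaluation call, optionally passing a held-out table (the table name is an assumption):

evaluate_sql = """
SELECT
  *
FROM
  ML.EVALUATE(MODEL `bqml_forecasting.median_price_model`,
              (SELECT * FROM `bqml_forecasting.eval_data`))
"""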

[Figure: ML.EVALUATE output with common evaluation metrics.]

Lastly, machine learning is all about prediction; the training is just a means to an end. We can get our predictions by querying the trained model, which returns a column with predicted_ prepended to the name of our label.
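A sketch of such a prediction query using ML.PREDICT, which is what prepends predicted_ to the label name (the input table name is an assumption):

predict_sql = """
SELECT
  predicted_price_ahead_1
FROM
  ML.PREDICT(MODEL `bqml_forecasting.median_price_model`,
             (SELECT * FROM `bqml_forecasting.latest_features`))
"""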

Now, let’s imagine that we want to run this model every week. We can easily create a batch job that is automatically executed using a scheduled query.

[Figure: setting up a scheduled query to run the model every week.]

Of course, if we want to build a more custom model, we can use TensorFlow or another machine learning library, while keeping this same data engineering approach to create the features and labels that are read into our custom machine learning model. This technique could possibly improve performance.

[Figure: feeding the engineered features into a custom machine learning model.]

To use an ML framework like TensorFlow, we’ll need to write the model code and also get our data in the right format to be read into our model. We can make a slight modification to the previous query we used for BigQuery ML so that the data will be amenable to the CSV file format.

For this example, imagine you want to build a sequence-to-sequence model in TensorFlow that can handle variable-length features. One approach is to aggregate all the features into a single column named med_sales_price_agg, with the values separated by semicolons. The features (if we have more than just this one feature in the future) and the label are separated by commas.

[Figure: the med_sales_price_agg column format for CSV export.]
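One way to produce such a column in BigQuery is with ARRAY_AGG and ARRAY_TO_STRING. This is a sketch under assumed table and column names, not the exact query from the screenshot:

agg_sql = """
SELECT
  ARRAY_TO_STRING(
    ARRAY_AGG(CAST(med_sales_price AS STRING) ORDER BY date), ';'
  ) AS med_sales_price_agg,
  ANY_VALUE(label_price) AS label
FROM
  `bqml_forecasting.daily_prices`
GROUP BY
  sequence_id
"""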

We’ll execute the query in BigQuery and make a table each for training and evaluation, which then get exported to CSV files in Cloud Storage. The diagram above shows what one of the exported CSV files looks like (at least the header and the first line), with some comments added. When reading the data into our model using tf.data, we will specify the delimiter pattern shown above to correctly parse the data.
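As a sketch of the parsing side with tf.data, assuming each CSV row has the form shown above, an aggregated string column followed by a numeric label (the file path is a placeholder):

import tensorflow as tf

def parse_line(line):
    # Split the comma-separated columns: aggregated features, then label.
    agg, label = tf.io.decode_csv(line, record_defaults=[[''], [0.0]])
    # Split the semicolon-delimited price sequence and convert to floats.
    prices = tf.strings.to_number(tf.strings.split(agg, ';'), tf.float32)
    return prices, label

dataset = (tf.data.TextLineDataset('gs://your-bucket/train.csv')
           .skip(1)  # Skip the CSV header row.
           .map(parse_line))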

Please check out our notebook on AI Hub for an end-to-end example showing how this would work in practice and how to submit a training job on Google Cloud AI Platform. For model serving, the model can be deployed on AI Platform or directly in BigQuery.

Conclusion

That’s it! The workflow we shared will let you quickly and automatically set up any time-series forecasting problem. Of course, this framework can also be adapted for a classification problem, such as using a customer’s historical behavior to predict the probability of churn, or to identify anomalous behavior over time. Regardless of the model you build, these approaches let you quickly build an initial model locally, then scale to the cloud using BigQuery.

Learn more about BigQuery and AI Platform.
