While normally I would suggest applying something like statsmodels.ols on a rolling basis*, your dataset is large (length-1000 windows on 258k rows) and you will run into a memory error that way. Instead, you can use the linear algebra approach to calculating the coefficients and then apply those coefficients to each window of your explanatory variable. For more on this, see A Matrix Formulation of the Multiple Regression Model.
* To see an implementation that uses statsmodels, see a wrapper I created here. An example is here.
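Concretely, within each window the coefficient vector is the ordinary least squares solution

    \hat{\beta} = (X^{\top} X)^{-1} X^{\top} y

where X is the window's matrix of explanatory variables and y is its response vector. The coefs function below is a direct translation of this formula.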
Realize that yhat here is not an n x 1 vector; it is a stack of n x 1 vectors, one set of predictions per rolling 1000-period block. With n observations and a window of length w, you get n - w + 1 windows, so 258,525 rows yield 258,525 - 1,000 + 1 = 257,526 windows, and the shape of your predictions will be (257526, 1000), as shown below.
import numpy as np
import pandas as pd
df = pd.read_csv('input/estimated.csv', names=('x','y'))
def rolling_windows(a, window):
"""Creates rolling-window 'blocks' of length `window` from `a`.
Note that the orientation of rows/columns follows that of pandas.
Example
=======
onedim = np.arange(20)
twodim = onedim.reshape((5,4))
print(twodim)
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]
[16 17 18 19]]
    print(rolling_windows(onedim, 3)[:5])
[[0 1 2]
[1 2 3]
[2 3 4]
[3 4 5]
[4 5 6]]
    print(rolling_windows(twodim, 3)[:5])
[[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
[[ 8 9 10 11]
[12 13 14 15]
[16 17 18 19]]]
"""
    if isinstance(a, (pd.Series, pd.DataFrame)):
a = a.values
if a.ndim == 1:
a = a.reshape(-1, 1)
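    # Build a strided view of `a`: each window shares memory with the
    # original array, so no copies of the data are made.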
shape = (a.shape[0] - window + 1, window) + a.shape[1:]
strides = (a.strides[0],) + a.strides
windows = np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)
return np.squeeze(windows)
def coefs(y, x):
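    # Direct translation of the normal equations, (X'X)^-1 X'y.
    # np.linalg.solve or np.linalg.lstsq would be more numerically
    # stable than explicitly inverting X'X.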
return np.dot(np.linalg.inv(np.dot(x.T, x)), np.dot(x.T, y))
# endog is the response (y); exog holds the explanatory variable(s).
rendog = rolling_windows(df.y.values, 1000)
rexog = rolling_windows(df.drop('y', axis=1).values, 1000)
preds = list()
for endog, exog in zip(rendog, rexog):
    # With a single explanatory column, each squeezed window is 1d;
    # restore the (window, 1) column shape that coefs expects.
    if exog.ndim == 1:
        exog = exog.reshape(-1, 1)
    pred = np.sum(coefs(endog, exog).T * exog, axis=1)
    preds.append(pred)
preds = np.array(preds)
print(preds.shape)
# (257526, 1000)
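As a sanity check, you can confirm on a single window that these coefficients match what statsmodels produces (a minimal sketch, assuming statsmodels is installed; note that neither version adds an intercept):

import statsmodels.api as sm

# Fit OLS on the first window only, mirroring coefs (no intercept).
first_y = rendog[0]
first_x = rexog[0].reshape(-1, 1)
print(np.allclose(sm.OLS(first_y, first_x).fit().params,
                  coefs(first_y, first_x)))  # expect True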
Lastly: have you considered using a Random Forest Classifier here, given that your y variable is discrete?
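If you go that route, a minimal sketch with scikit-learn might look like the following (assuming scikit-learn is installed; the column names simply mirror the DataFrame above):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = df[['x']].values  # explanatory variable(s)
y = df['y'].values    # discrete response

# shuffle=False keeps the chronological order of the series intact.
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # mean accuracy on the held-out tail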