In your case, you're looking at a multi-output regression problem:
- A regression problem (as opposed to classification), since you are trying to predict continuous values rather than a class/state variable/category
- Multi-output since you are trying to predict 6 values for each data point
You can read more in the sklearn documentation on multiclass and multioutput algorithms.
Here I'm going to show you how you can use sklearn.multioutput.MultiOutputRegressor with a sklearn.ensemble.RandomForestRegressor to predict your values.
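To make that concrete before the full example, here is a minimal toy sketch (the data and names below are purely illustrative, not taken from your problem) of the wrap-and-fit pattern used later:
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.multioutput import MultiOutputRegressor

# Toy data: 4 samples, 2 features, 3 target columns (values made up)
X_toy = np.array([[0., 1.], [1., 0.], [1., 1.], [2., 3.]])
y_toy = np.array([[1., 2., 3.], [0., 1., 2.], [1., 3., 4.], [5., 8., 11.]])

# MultiOutputRegressor fits one copy of the base estimator per target column
toy_model = MultiOutputRegressor(LinearRegression()).fit(X_toy, y_toy)
toy_model.predict([[1., 2.]])   # one row with 3 predicted values
The example below uses the same pattern with a RandomForestRegressor as the base estimator.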
Construct some dummy data
import numpy as np
import pandas as pd
from sklearn.datasets import make_regression

# Generate 1000 samples with 6 features and 6 targets
X, y = make_regression(n_samples=1000, n_features=6,
                       n_informative=3, n_targets=6,
                       tail_strength=0.5, noise=0.02,
                       shuffle=True, coef=False, random_state=0)
# Convert to a pandas dataframe like in your example
icols = ['i0','i1','i2','i3','i4','i5']
jcols = ['j0', 'j1', 'j2', 'j3', 'j4', 'j5']
df = pd.concat([pd.DataFrame(X, columns=icols),
                pd.DataFrame(y, columns=jcols)], axis=1)
# Introduce a few np.nans in there
df.loc[0, jcols] = np.nan
df.loc[10, jcols] = np.nan
df.loc[100, jcols] = np.nan
df.head()
Out:
      i0     i1     i2     i3     i4     i5     j0      j1      j2     j3      j4      j5
0  -0.21  -0.18  -0.06   0.27  -0.32   0.00    NaN     NaN     NaN    NaN     NaN     NaN
1   0.65  -2.16   0.46   1.82   0.22  -0.13  33.08   39.85    9.63  13.52   16.72   17.79
2  -0.75  -0.52  -1.08   0.14   1.12  -1.05  -0.96  -96.02   14.37  25.19  -44.90  -77.48
3   0.01   0.62   0.20   0.53   0.35  -0.73   6.09  -12.07  -28.88  10.49    0.96  -35.61
4   0.39  -0.70  -0.55   0.10   1.65  -0.69  83.15   -3.16   93.61  57.44  -17.33   -2.47
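As an optional sanity check (just an aside, not needed for the rest of the flow), you can confirm which rows now have missing targets:
# Rows where all 6 target columns are NaN (should be 0, 10 and 100)
df[df[jcols].isnull().all(axis=1)].index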
Exclude the NaN rows for now, and split the remaining data into 75% train and 25% test
The split is done so that we can validate the model on data it has never seen.
from sklearn.model_selection import train_test_split

# Keep only the rows where all 6 target columns are known
notnans = df[jcols].notnull().all(axis=1)
df_notnans = df[notnans]
# Split into 75% train and 25% test
X_train, X_test, y_train, y_test = train_test_split(df_notnans[icols], df_notnans[jcols],
                                                    train_size=0.75,
                                                    random_state=4)
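Optionally, a quick look at the shapes confirms the split (a simple check, not required): the 3 NaN rows are excluded, so train and test together cover the remaining 997 rows.
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)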
Use a multi-output regression based on a random forest regressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor

# One RandomForestRegressor is fitted per target column
regr_multirf = MultiOutputRegressor(RandomForestRegressor(max_depth=30,
                                                          random_state=0))
# Fit on the train data
regr_multirf.fit(X_train, y_train)
# Check the prediction score on the held-out test data
score = regr_multirf.score(X_test, y_test)
print("The prediction score on the test data is {:.2f}%".format(score*100))
Out: The prediction score on the test data is 96.76%
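The score reported by regr_multirf.score is the coefficient of determination R², averaged uniformly over the 6 targets. If you also want a per-target breakdown, one way to get it (a short sketch using sklearn.metrics.r2_score) is:
from sklearn.metrics import r2_score

# R^2 for each of the 6 target columns separately
r2_score(y_test, regr_multirf.predict(X_test), multioutput='raw_values')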
Predict the rows with NaN targets
# Take the rows with missing targets and fill them with the model's predictions
df_nans = df.loc[~notnans].copy()
df_nans[jcols] = regr_multirf.predict(df_nans[icols])
df_nans
Out:
           i0        i1        i2        i3        i4        i5         j0         j1         j2         j3         j4         j5
0   -0.211620 -0.177927 -0.062205  0.267484 -0.317349  0.000341 -41.254983 -18.197513 -31.029952 -14.749244  -5.990595  -9.296744
10   1.138974 -1.326378  0.123960  0.982841  0.273958  0.414307  46.406351  67.915628  59.750032  15.612843  10.177314  38.226387
100 -0.682390 -1.431414 -0.328235 -0.886463  1.212363 -0.577676  94.971966  -3.724223  65.630692  44.636895 -14.372414  11.947185
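If your end goal is to fill the predictions back into the original dataframe (an assumption on my part, adjust as needed), one final step could be:
# Write the predicted values back into the original dataframe
df.loc[~notnans, jcols] = df_nans[jcols]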