ml

Machine Learning Utility Functions

source

get_model_pipeline

 get_model_pipeline (preprocessor, reg)
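The implementation isn't shown on this page, but a function with this signature typically just chains the two stages with scikit-learn's `Pipeline`. A minimal sketch of that idea (the helper name and step names here are illustrative, not the library's actual code):

```python
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression

# Hypothetical equivalent of get_model_pipeline(preprocessor, reg):
# chain a preprocessing step with a regressor into one estimator.
def make_model_pipeline(preprocessor, reg):
    return Pipeline([("preprocessor", preprocessor), ("reg", reg)])

X, y = make_regression(n_samples=50, n_features=4, random_state=0)
pipe = make_model_pipeline(SimpleImputer(strategy="median"), LinearRegression())
pipe.fit(X, y)
print(round(pipe.score(X, y), 2))
```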

source

get_pipeline_preprocessor

 get_pipeline_preprocessor (cat_cols, num_cols, imputer_strategy='knn',
                            use_catboost_native_cat_features=False,
                            scale_categoric_data=False,
                            scale_numeric_data=False, ohe_min_freq=0.05)

source

silhouette_analysis

 silhouette_analysis (X, range_n_clusters, random_state=None)
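Whatever plotting this helper performs, the underlying computation is the standard silhouette loop over candidate cluster counts. A minimal sketch of that computation using scikit-learn directly (not the helper's actual code):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Four well-separated synthetic clusters.
X, _ = make_blobs(n_samples=300,
                  centers=[[0, 0], [10, 10], [0, 10], [10, 0]],
                  cluster_std=1.0, random_state=42)

# Score each candidate cluster count; higher silhouette = better separation.
scores = {}
for n_clusters in range(2, 7):
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=42).fit_predict(X)
    scores[n_clusters] = silhouette_score(X, labels)

best = max(scores, key=scores.get)
print(best)
```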

source

AutoClassifier

 AutoClassifier (num_cols, cat_cols, target_col, data=None, train=None,
                 test=None, random_st=42, log_target=False,
                 estimator='catboost', imputer_strategy='simple')

AutoClassifier is the classification counterpart of AutoRegressor: given numeric and categorical columns and a target column, it builds the preprocessing pipeline and fits the chosen estimator (CatBoost by default).


source

AutoRegressor

 AutoRegressor (num_cols, cat_cols, target_col, data=None, train=None,
                test=None, random_st=42, log_target=False,
                estimator='catboost', imputer_strategy='simple',
                use_catboost_native_cat_features=False, ohe_min_freq=0.05,
                scale_numeric_data=False, scale_categoric_data=False,
                scale_target=False)

AutoRegressor is a class for performing automated regression tasks, including preprocessing and model fitting. It supports several regression algorithms and allows for easy comparison of their performance on a given dataset. The class provides various methods for model evaluation, feature importance, and visualization.

Example Usage:

 ar = AutoRegressor(num_cols, cat_cols, target_col, data)
 ar.fit_report()


source

KNNRegressor

 KNNRegressor (n_neighbors=5, weights='uniform', algorithm='auto',
               leaf_size=30, p=2, metric='minkowski', metric_params=None,
               n_jobs=None)

Regression based on k-nearest neighbors.

The target is predicted by local interpolation of the targets associated with the nearest neighbors in the training set.

Read more in the scikit-learn User Guide (regression section).

Added in scikit-learn 0.9.

Parameters:

- n_neighbors : int, default=5
  Number of neighbors to use by default for kneighbors queries.
- weights : str, default='uniform'
  Weight function used in prediction. Possible values:
  - 'uniform' : uniform weights. All points in each neighborhood are weighted equally.
  - 'distance' : weight points by the inverse of their distance. In this case, closer neighbors of a query point will have a greater influence than neighbors which are further away.
  - [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights.
- algorithm : str, default='auto'
  Algorithm used to compute the nearest neighbors:
  - 'ball_tree' will use BallTree
  - 'kd_tree' will use KDTree
  - 'brute' will use a brute-force search.
  - 'auto' will attempt to decide the most appropriate algorithm based on the values passed to the fit method.
  Note: fitting on sparse input will override the setting of this parameter, using brute force.
- leaf_size : int, default=30
  Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
- p : int, default=2
  Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
- metric : str, default='minkowski'
  Metric to use for distance computation. The default "minkowski" results in the standard Euclidean distance when p = 2. See the scipy.spatial.distance documentation (https://docs.scipy.org/doc/scipy/reference/spatial.distance.html) and the metrics listed in sklearn.metrics.pairwise.distance_metrics for valid metric values.
  If metric is "precomputed", X is assumed to be a distance matrix and must be square during fit. X may be a sparse graph, in which case only "nonzero" elements may be considered neighbors.
  If metric is a callable function, it takes two arrays representing 1D vectors as inputs and must return one value indicating the distance between those vectors. This works for SciPy's metrics, but is less efficient than passing the metric name as a string.
- metric_params : dict, default=None
  Additional keyword arguments for the metric function.
- n_jobs : int, default=None
  The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context; -1 means using all processors. Doesn't affect the fit method.
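The effect of the weights parameter is easiest to see on a toy example where the query point sits very close to one training point: uniform weighting averages all k neighbor targets equally, while distance weighting pulls the prediction toward the nearest one. Shown here with scikit-learn's plain KNeighborsRegressor:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

X = np.array([[0.0], [1.0], [10.0]])
y = np.array([0.0, 1.0, 10.0])

query = [[0.1]]  # very close to the first training point
uniform = KNeighborsRegressor(n_neighbors=3, weights="uniform").fit(X, y)
distance = KNeighborsRegressor(n_neighbors=3, weights="distance").fit(X, y)

print(uniform.predict(query))   # plain mean of the 3 targets
print(distance.predict(query))  # dominated by the nearest target, 0.0
```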

source

CatBoostRegressorCV

 CatBoostRegressorCV (cv=5, cat_features=None, groups=None, verbose=False,
                      n_bins_stratify=None, add_box_cox_target=False,
                      **reg_args)

Cross-validated CatBoost regressor: fits one CatBoostRegressor per fold and collects per-fold metrics in metrics_ (see the examples below).


source

RegressorCV

 RegressorCV (base_reg, cv=5, groups=None, verbose=False,
              n_bins_stratify=None, add_box_cox_target=False)

Cross-validated wrapper for any scikit-learn-compatible regressor: fits one clone of base_reg per fold and collects per-fold metrics in metrics_.
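The metrics_ table shown in the examples below (one row per fold plus mean/std rows) suggests the usual clone-per-fold pattern. A hedged sketch of that mechanism, not the class's actual code:

```python
import pandas as pd
from sklearn.base import clone
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_percentage_error
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

base_reg = Ridge()
rows = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # One fresh clone of the base regressor per fold.
    model = clone(base_reg).fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rows.append({
        "r2_score": r2_score(y[test_idx], pred),
        "rmse": mean_squared_error(y[test_idx], pred) ** 0.5,
        "mape": mean_absolute_percentage_error(y[test_idx], pred),
    })

metrics_ = pd.DataFrame(rows)
metrics_.loc["mean"] = metrics_.mean()
metrics_.loc["std"] = metrics_.iloc[:5].std()
print(metrics_)
```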


source

RegressorTimeSeriesCV

 RegressorTimeSeriesCV (base_reg, cv=5, verbose=False,
                        catboost_use_eval_set=False,
                        add_box_cox_target=False)

Time-series variant of RegressorCV: folds are generated with an expanding training window so that each test set always follows its training set in time.
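The fold layout in the metrics_ table further down (train sizes 245, 488, 731, 974, 1217 with a fixed test size of 243 on 1460 rows) matches scikit-learn's TimeSeriesSplit with an expanding window. Assuming that's the splitter used, the fold boundaries can be reproduced like this:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

n_samples = 1460  # same length as the Ames dataset used below
X = np.zeros((n_samples, 1))

# Collect (train_size, test_size) for each of the 5 expanding-window folds.
folds = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    folds.append((len(train_idx), len(test_idx)))

print(folds)
# [(245, 243), (488, 243), (731, 243), (974, 243), (1217, 243)]
```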

Examples

Import Regression Dataset for Testing

from sklearn.datasets import fetch_openml
from sklearn.compose import make_column_selector
import numpy as np
import pandas as pd

# Load the Ames Housing dataset
housing = fetch_openml(name="house_prices", as_frame=True)
X = housing['data'].fillna(np.nan)
y = housing['target']
data = pd.concat([X, y], axis=1)

num_cols = make_column_selector(dtype_include=np.number)(X)
cat_cols = make_column_selector(dtype_include=object)(X)

# Fill na in X with most frequent for cat_cols and median for num_cols
X_cat = X[cat_cols].fillna(X[cat_cols].mode().iloc[0])
X_num = X[num_cols].fillna(X[num_cols].median())
y.shape
(1460,)
data.head()
Id MSSubClass MSZoning LotFrontage LotArea Street Alley LotShape LandContour Utilities ... PoolArea PoolQC Fence MiscFeature MiscVal MoSold YrSold SaleType SaleCondition SalePrice
0 1 60 RL 65.0 8450 Pave NaN Reg Lvl AllPub ... 0 NaN NaN NaN 0 2 2008 WD Normal 208500
1 2 20 RL 80.0 9600 Pave NaN Reg Lvl AllPub ... 0 NaN NaN NaN 0 5 2007 WD Normal 181500
2 3 60 RL 68.0 11250 Pave NaN IR1 Lvl AllPub ... 0 NaN NaN NaN 0 9 2008 WD Normal 223500
3 4 70 RL 60.0 9550 Pave NaN IR1 Lvl AllPub ... 0 NaN NaN NaN 0 2 2006 WD Abnorml 140000
4 5 60 RL 84.0 14260 Pave NaN IR1 Lvl AllPub ... 0 NaN NaN NaN 0 12 2008 WD Normal 250000

5 rows × 81 columns

data.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 81 columns):
 #   Column         Non-Null Count  Dtype  
---  ------         --------------  -----  
 0   Id             1460 non-null   int64  
 1   MSSubClass     1460 non-null   int64  
 2   MSZoning       1460 non-null   object 
 3   LotFrontage    1201 non-null   float64
 4   LotArea        1460 non-null   int64  
 5   Street         1460 non-null   object 
 6   Alley          91 non-null     object 
 7   LotShape       1460 non-null   object 
 8   LandContour    1460 non-null   object 
 9   Utilities      1460 non-null   object 
 10  LotConfig      1460 non-null   object 
 11  LandSlope      1460 non-null   object 
 12  Neighborhood   1460 non-null   object 
 13  Condition1     1460 non-null   object 
 14  Condition2     1460 non-null   object 
 15  BldgType       1460 non-null   object 
 16  HouseStyle     1460 non-null   object 
 17  OverallQual    1460 non-null   int64  
 18  OverallCond    1460 non-null   int64  
 19  YearBuilt      1460 non-null   int64  
 20  YearRemodAdd   1460 non-null   int64  
 21  RoofStyle      1460 non-null   object 
 22  RoofMatl       1460 non-null   object 
 23  Exterior1st    1460 non-null   object 
 24  Exterior2nd    1460 non-null   object 
 25  MasVnrType     1452 non-null   object 
 26  MasVnrArea     1452 non-null   float64
 27  ExterQual      1460 non-null   object 
 28  ExterCond      1460 non-null   object 
 29  Foundation     1460 non-null   object 
 30  BsmtQual       1423 non-null   object 
 31  BsmtCond       1423 non-null   object 
 32  BsmtExposure   1422 non-null   object 
 33  BsmtFinType1   1423 non-null   object 
 34  BsmtFinSF1     1460 non-null   int64  
 35  BsmtFinType2   1422 non-null   object 
 36  BsmtFinSF2     1460 non-null   int64  
 37  BsmtUnfSF      1460 non-null   int64  
 38  TotalBsmtSF    1460 non-null   int64  
 39  Heating        1460 non-null   object 
 40  HeatingQC      1460 non-null   object 
 41  CentralAir     1460 non-null   object 
 42  Electrical     1459 non-null   object 
 43  1stFlrSF       1460 non-null   int64  
 44  2ndFlrSF       1460 non-null   int64  
 45  LowQualFinSF   1460 non-null   int64  
 46  GrLivArea      1460 non-null   int64  
 47  BsmtFullBath   1460 non-null   int64  
 48  BsmtHalfBath   1460 non-null   int64  
 49  FullBath       1460 non-null   int64  
 50  HalfBath       1460 non-null   int64  
 51  BedroomAbvGr   1460 non-null   int64  
 52  KitchenAbvGr   1460 non-null   int64  
 53  KitchenQual    1460 non-null   object 
 54  TotRmsAbvGrd   1460 non-null   int64  
 55  Functional     1460 non-null   object 
 56  Fireplaces     1460 non-null   int64  
 57  FireplaceQu    770 non-null    object 
 58  GarageType     1379 non-null   object 
 59  GarageYrBlt    1379 non-null   float64
 60  GarageFinish   1379 non-null   object 
 61  GarageCars     1460 non-null   int64  
 62  GarageArea     1460 non-null   int64  
 63  GarageQual     1379 non-null   object 
 64  GarageCond     1379 non-null   object 
 65  PavedDrive     1460 non-null   object 
 66  WoodDeckSF     1460 non-null   int64  
 67  OpenPorchSF    1460 non-null   int64  
 68  EnclosedPorch  1460 non-null   int64  
 69  3SsnPorch      1460 non-null   int64  
 70  ScreenPorch    1460 non-null   int64  
 71  PoolArea       1460 non-null   int64  
 72  PoolQC         7 non-null      object 
 73  Fence          281 non-null    object 
 74  MiscFeature    54 non-null     object 
 75  MiscVal        1460 non-null   int64  
 76  MoSold         1460 non-null   int64  
 77  YrSold         1460 non-null   int64  
 78  SaleType       1460 non-null   object 
 79  SaleCondition  1460 non-null   object 
 80  SalePrice      1460 non-null   int64  
dtypes: float64(3), int64(35), object(43)
memory usage: 924.0+ KB

AutoRegressor

ar = AutoRegressor(
    num_cols=num_cols,
    cat_cols=cat_cols,
    target_col='SalePrice',
    use_catboost_native_cat_features=True,
    data=data,
)
ar.fit_report()
Imputer strategy: SimpleImputer(strategy='median')
Using estimator <catboost.core.CatBoostRegressor object>
R2 Score: 0.9160567995409361
RMSE: 24249.702308651435
MAPE: 0.08528036637019434
ar.get_feature_importances().head(10).sort_values().plot.barh()
<Axes: >

ar.get_shap()

CatBoostRegressorCV

# Concatenate X_cat and X_num for use with CatBoost's native categorical feature handling
X_catboost = pd.concat([X_cat, X_num], axis=1)
X_catboost.head()
MSZoning Street Alley LotShape LandContour Utilities LotConfig LandSlope Neighborhood Condition1 ... GarageArea WoodDeckSF OpenPorchSF EnclosedPorch 3SsnPorch ScreenPorch PoolArea MiscVal MoSold YrSold
0 RL Pave Grvl Reg Lvl AllPub Inside Gtl CollgCr Norm ... 548 0 61 0 0 0 0 0 2 2008
1 RL Pave Grvl Reg Lvl AllPub FR2 Gtl Veenker Feedr ... 460 298 0 0 0 0 0 0 5 2007
2 RL Pave Grvl IR1 Lvl AllPub Inside Gtl CollgCr Norm ... 608 0 42 0 0 0 0 0 9 2008
3 RL Pave Grvl IR1 Lvl AllPub Corner Gtl Crawfor Norm ... 642 0 35 272 0 0 0 0 2 2006
4 RL Pave Grvl IR1 Lvl AllPub FR2 Gtl NoRidge Norm ... 836 192 84 0 0 0 0 0 12 2008

5 rows × 80 columns

cbcv = CatBoostRegressorCV(
    cat_features=list(range(len(cat_cols)))
)
cbcv.fit(X_catboost, y)
CatBoostRegressorCV(cat_features=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
                                  14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24,
                                  25, 26, 27, 28, 29, ...])
cbcv.score(X_catboost, y)
0.9835889355851075
cbcv.metrics_
r2_score rmse mape
0 0.907499 26636.677034 0.093462
1 0.934493 21104.525601 0.080070
2 0.708497 40130.504754 0.098265
3 0.886393 26708.645472 0.093434
4 0.925741 19701.295826 0.075160
mean 0.872524 26856.329737 0.088078
std 0.093532 8070.804505 0.009906
# Predict the target variable for new data
predictions = cbcv.predict(X_catboost)
# Use CatBoostRegressorCV inside AutoRegressor so that missing values are
# imputed and columns are arranged automatically
cbcv = CatBoostRegressorCV(
    cat_features=list(range(len(cat_cols)))
)

ar = AutoRegressor(
    num_cols=num_cols,
    cat_cols=cat_cols,
    target_col='SalePrice',
    use_catboost_native_cat_features=True,
    data=data,
    estimator=cbcv,
)

ar.fit_report()
Imputer strategy: SimpleImputer(strategy='median')
Using estimator CatBoostRegressorCV(cat_features=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
                                  14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24,
                                  25, 26, 27, 28, 29, ...])
R2 Score: 0.9125046051539697
RMSE: 24757.469207350325
MAPE: 0.08813980327749088

RegressorTimeSeriesCV

from sklearn.ensemble import RandomForestRegressor

# Initialize the RegressorTimeSeriesCV with a base regressor and cross-validation strategy
reg_tscv = RegressorTimeSeriesCV(base_reg=RandomForestRegressor(), cv=5)

# Fit the RegressorTimeSeriesCV to the training data
reg_tscv.fit(X_num, y)
reg_tscv
RegressorTimeSeriesCV(base_reg=RandomForestRegressor())
reg_tscv.metrics_
train_size test_size train_start_index train_end_index test_start_index test_end_index r2_score rmse mape
0 245 243 0 244 245 487 0.870264 28630.258580 0.117311
1 488 243 0 487 488 730 0.794495 40330.053395 0.132850
2 731 243 0 730 731 973 0.862934 27865.628836 0.102215
3 974 243 0 973 974 1216 0.835772 33497.124956 0.109466
4 1217 243 0 1216 1217 1459 0.800384 32331.165167 0.112466
mean 731 243 0 730 731 973 0.832770 32530.846187 0.114862
std 384 0 0 384 384 384 0.034780 4969.407330 0.011450

KNNRegressor

# Initialize the KNNRegressor with specific parameters
knn_reg = KNNRegressor(n_neighbors=3)

# Fit the KNNRegressor to the training data
knn_reg.fit(X_num.values, y)

# Predict the target variable for new data and return the index of the nearest matched neighbor
predictions, nearest_matched_index, neigh_ind = knn_reg.predict(X_num, return_match_index=True, pred_calc='median')
knn_reg.score(X_num, y)
0.8264707348813984
predictions
array([208500., 173000., 223500., ..., 256000., 142125., 147500.])
nearest_matched_index
array([[   0],
       [   1],
       [   2],
       ...,
       [1457],
       [1458],
       [1459]])
neigh_ind
array([[   0,  212,  256],
       [   1,  395,  186],
       [   2,  280,  222],
       ...,
       [1457, 1171, 1328],
       [1458, 1418, 1259],
       [1459, 1424, 1259]])
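The return_match_index / pred_calc behaviour above can be emulated with plain scikit-learn: query the k nearest neighbors, take the median of their targets, and keep the index of the single closest training row. A sketch of the idea, not KNNRegressor's actual implementation:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.array([[0.0], [1.0], [2.0], [10.0]])
y = np.array([0.0, 1.0, 2.0, 10.0])

nn = NearestNeighbors(n_neighbors=3).fit(X)
dist, neigh_ind = nn.kneighbors([[1.1]])

predictions = np.median(y[neigh_ind], axis=1)  # analogue of pred_calc='median'
nearest_matched_index = neigh_ind[:, :1]       # closest training row per query

print(predictions, nearest_matched_index)
```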