sklearn.ensemble.partial_dependence.partial_dependence

sklearn.ensemble.partial_dependence.partial_dependence(gbrt, target_variables, grid=None, X=None, percentiles=(0.05, 0.95), grid_resolution=100)

Partial dependence of target_variables.

Partial dependence plots show the dependence between the joint values of the target_variables and the function represented by gbrt.

Read more in the User Guide.
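Conceptually, the partial dependence at a grid point is the average response of the model when the target features are clamped to that point while the remaining features keep their observed values. The brute-force sketch below illustrates the idea for a single feature; the helper brute_force_pd is hypothetical and for illustration only, and the actual computation in scikit-learn walks the fitted trees directly and may differ in detail.

>>> import numpy as np
>>> def brute_force_pd(model, X, feature, value):
...     """Average decision_function with column `feature` clamped to `value`.
...     For a regressor, use model.predict instead of model.decision_function."""
...     X_mod = np.array(X, dtype=float)
...     X_mod[:, feature] = value
...     return model.decision_function(X_mod).mean()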

Parameters:
gbrt : BaseGradientBoosting

A fitted gradient boosting model.

target_variables : array-like, dtype=int

The target features for which the partial dependency should be computed (at most two features for visual renderings).

grid : array-like, shape=(n_points, len(target_variables))

The grid of target_variables values for which the partial dependency should be evaluated (either grid or X must be specified).

X : array-like, shape=(n_samples, n_features)

The data on which gbrt was trained. It is used to generate a grid for the target_variables. The grid comprises grid_resolution equally spaced points between the two percentiles.

percentiles : (low, high), default=(0.05, 0.95)

The lower and upper percentiles used to create the extreme values for the grid. Only used if X is not None.

grid_resolution : int, default=100

The number of equally spaced points on the grid.
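When X is given rather than grid, the axis of values for each target variable spans grid_resolution equally spaced points between the requested percentiles of that feature. A rough sketch of that construction (make_axis is a hypothetical helper, not the library's internal code):

>>> import numpy as np
>>> def make_axis(values, percentiles=(0.05, 0.95), grid_resolution=100):
...     # extreme values from the percentiles, then an evenly spaced axis between them
...     low, high = np.percentile(values, [100 * p for p in percentiles])
...     return np.linspace(low, high, num=grid_resolution)
>>> make_axis([0.0, 1.0, 2.0, 3.0], percentiles=(0, 1), grid_resolution=3)  # doctest: +SKIP
array([ 0. ,  1.5,  3. ])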

Returns:
pdp : array, shape=(n_classes, n_points)

The partial dependence function evaluated on the grid. For regression and binary classification, n_classes == 1.

axes : seq of ndarray or None

The axes with which the grid has been created, or None if the grid was given explicitly.
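For instance, evaluating a fitted regressor on an explicit grid (a brief sketch in the spirit of the example below) yields a single-row pdp and no axes:

>>> from sklearn.ensemble import GradientBoostingRegressor
>>> from sklearn.ensemble.partial_dependence import partial_dependence
>>> reg = GradientBoostingRegressor(random_state=0).fit(
...     [[0, 0], [1, 1], [2, 2]], [0.0, 1.0, 2.0])
>>> pdp, axes = partial_dependence(reg, [0], grid=[[0.0], [1.0], [2.0]])  # doctest: +SKIP
>>> pdp.shape     # (n_classes, n_points); n_classes == 1 for regression
(1, 3)
>>> axes is None  # no axes are returned when grid is given explicitly
True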

Examples

>>> samples = [[0, 0, 2], [1, 0, 0]]
>>> labels = [0, 1]
>>> from sklearn.ensemble import GradientBoostingClassifier
>>> from sklearn.ensemble.partial_dependence import partial_dependence
>>> gb = GradientBoostingClassifier(random_state=0).fit(samples, labels)
>>> kwargs = dict(X=samples, percentiles=(0, 1), grid_resolution=2)
>>> partial_dependence(gb, [0], **kwargs) # doctest: +SKIP
(array([[-4.52...,  4.52...]]), [array([ 0.,  1.])])
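
A joint partial dependence over two target variables follows the same pattern; a brief sketch reusing gb and kwargs from above (only output shapes are shown):

>>> pdp, axes = partial_dependence(gb, [0, 1], **kwargs)  # doctest: +SKIP
>>> pdp.shape[0]   # still a single row for binary classification
1
>>> len(axes)      # one array of grid values per target variable
2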
