mlflow.models

The mlflow.models module provides an API for saving machine learning models in "flavors" that can be understood by different downstream tools. For the built-in flavors and further details, see MLflow Models.
class mlflow.models.FlavorBackend(config, **kwargs)
Bases: object

Abstract class for flavor backends. Defines the API interface for local model deployment of MLflow model flavors.
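To make the interface concrete, here is a minimal sketch of what a flavor backend looks like. This is a hypothetical stand-in written with Python's abc module, not the real mlflow.models.FlavorBackend; the class name, the toy EchoBackend, and its file-copying "scoring" are illustrative assumptions.

```python
from abc import ABC, abstractmethod

# Hypothetical stand-in mirroring the FlavorBackend interface
# (the real class lives in mlflow.models).
class FlavorBackend(ABC):
    def __init__(self, config, **kwargs):
        self._config = config

    def can_build_image(self):
        # True only if the subclass defines a build_image method.
        return hasattr(self, "build_image")

    @abstractmethod
    def can_score_model(self):
        ...

    @abstractmethod
    def predict(self, model_uri, input_path, output_path,
                content_type, json_format):
        ...

    @abstractmethod
    def serve(self, model_uri, port, host):
        ...

class EchoBackend(FlavorBackend):
    """Toy backend that 'scores' by copying input to output."""
    def can_score_model(self):
        return True

    def predict(self, model_uri, input_path, output_path,
                content_type, json_format):
        with open(input_path) as src, open(output_path, "w") as dst:
            dst.write(src.read())

    def serve(self, model_uri, port, host):
        raise NotImplementedError
```

A concrete backend must implement every abstract method before it can be instantiated; can_build_image has a default implementation, so only backends that ship a Docker build step need to override it.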
can_build_image()
Returns: True if this flavor has a build_image method defined for building a Docker container capable of serving the model, False otherwise.
abstract can_score_model()
Check whether this flavor backend can be deployed in the current environment.
Returns: True if this flavor backend can be applied in the current environment.
abstract predict(model_uri, input_path, output_path, content_type, json_format)
Generate predictions using a saved MLflow model referenced by the given URI. Input and output are read from and written to a file or stdin/stdout.

Parameters:
- model_uri – URI pointing to the MLflow model to be used for scoring.
- input_path – Path to the file with input data. If not specified, data is read from stdin.
- output_path – Path to the file to which output predictions are written. If not specified, data is written to stdout.
- content_type – Specifies the input format. One of {json, csv}.
- json_format – Only applies if content_type == json. Specifies how the input data is encoded in JSON. One of {split, records}, mirroring the behavior of the pandas orient argument. The default is split, which expects dict-like data: {'index' -> [index], 'columns' -> [columns], 'data' -> [values]}, where index is optional. For more information, see https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_json.html
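Since json_format mirrors the pandas orient argument, the expected split encoding can be illustrated directly with pandas (assumed installed here); the column names and values are made up for the example.

```python
import io
import pandas as pd

# json_format="split" mirrors pandas orient="split": a dict with an
# optional "index" key plus "columns" and "data" keys.
payload = '{"columns": ["x", "y"], "data": [[1, 2], [3, 4]]}'
df = pd.read_json(io.StringIO(payload), orient="split")
```

The same payload with orient="records" would instead be a list of row dicts, e.g. [{"x": 1, "y": 2}, {"x": 3, "y": 4}].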
prepare_env(model_uri)
Performs any preparation necessary to predict or serve the model, for example downloading dependencies or initializing a conda environment. After preparation, calling predict or serve should be fast.
abstract serve(model_uri, port, host)
Serve the specified MLflow model locally.

Parameters:
- model_uri – URI pointing to the MLflow model to be used for scoring.
- port – Port to use for the model deployment.
- host – Host to use for the model deployment. Defaults to localhost.
class mlflow.models.Model(artifact_path=None, run_id=None, utc_time_created=None, flavors=None, **kwargs)
Bases: object

An MLflow Model that can support multiple model flavors. Provides APIs for implementing new Model flavors.
add_flavor(name, **params)
Add an entry for how to serve the model in a given format.
classmethod from_dict(model_dict)
Load a model from its YAML representation.
classmethod load(path)
Load a model from its YAML representation.
classmethod log(artifact_path, flavor, registered_model_name=None, **kwargs)
Log model using the supplied flavor module. If no run is active, this method will create a new active run.

Parameters:
- artifact_path – Run-relative path identifying the model.
- flavor – Flavor module to save the model with. The module must have a save_model function that will persist the model as a valid MLflow model.
- registered_model_name – Note: Experimental: This argument may change or be removed in a future release without warning. If given, create a model version under registered_model_name, also creating a registered model if one with the given name does not exist.
- kwargs – Extra args passed to the model flavor.
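The flavor argument is a module that exposes save_model. The sketch below is a toy stand-in built with stdlib only (the toy_flavor namespace, its _save_model helper, and the minimal MLmodel contents are all hypothetical); it shows the delegation pattern, not what Model.log actually writes.

```python
import os
import tempfile
from types import SimpleNamespace

# Hypothetical toy flavor module: any module passed as `flavor` to
# Model.log must expose a save_model function that persists the model
# as a valid MLflow model directory.
def _save_model(path, model=None, **kwargs):
    os.makedirs(path, exist_ok=True)
    # A real flavor writes the model files plus an MLmodel metadata file.
    with open(os.path.join(path, "MLmodel"), "w") as f:
        f.write("flavors:\n  toy: {}\n")

toy_flavor = SimpleNamespace(save_model=_save_model)

# Simplified stand-in for the delegation Model.log performs: persist
# via the flavor's save_model (the real method also logs the directory
# as a run artifact and can register the model).
with tempfile.TemporaryDirectory() as tmp:
    model_path = os.path.join(tmp, "model")
    toy_flavor.save_model(path=model_path, model="my-model")
    saved = sorted(os.listdir(model_path))
```

In practice the flavor module is one of MLflow's built-ins (for example mlflow.sklearn) rather than something hand-rolled like this.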
save(path)
Write the model as a local YAML file.
to_dict()

to_json()

to_yaml(stream=None)
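The to_dict / to_json / from_dict methods form a serialization round trip. A stdlib-only sketch of that pattern, using a made-up dict (the keys shown are an assumption about the general shape, not the exact Model output):

```python
import json

# Hypothetical dict form of a Model; the real to_dict output holds
# fields such as artifact_path and flavors in a similar shape.
model_dict = {
    "artifact_path": "model",
    "flavors": {"python_function": {"loader_module": "mlflow.sklearn"}},
}

# to_json-style serialization, then a from_dict-style rebuild.
as_json = json.dumps(model_dict)
restored = json.loads(as_json)
```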