tf.keras.experimental.WideDeepModel

Wide & Deep Model for regression and classification problems.

Inherits From: Model

tf.keras.experimental.WideDeepModel(
    linear_model, dnn_model, activation=None, **kwargs
)

This model jointly trains a linear model and a DNN model.

Example:

linear_model = LinearModel()
dnn_model = keras.Sequential([keras.layers.Dense(units=64),
                              keras.layers.Dense(units=1)])
combined_model = WideDeepModel(linear_model, dnn_model)
combined_model.compile(optimizer=['sgd', 'adam'], loss='mse', metrics=['mse'])
# define dnn_inputs and linear_inputs as separate numpy arrays or
# a single numpy array if dnn_inputs is same as linear_inputs.
combined_model.fit([linear_inputs, dnn_inputs], y, epochs=epochs)
# or define a single `tf.data.Dataset` that contains a single tensor or
# separate tensors for dnn_inputs and linear_inputs.
dataset = tf.data.Dataset.from_tensors(([linear_inputs, dnn_inputs], y))
combined_model.fit(dataset, epochs=epochs)

Both the linear and DNN models can be pre-compiled and trained separately before joint training:

Example:

linear_model = LinearModel()
linear_model.compile('adagrad', 'mse')
linear_model.fit(linear_inputs, y, epochs=epochs)
dnn_model = keras.Sequential([keras.layers.Dense(units=1)])
dnn_model.compile('rmsprop', 'mse')
dnn_model.fit(dnn_inputs, y, epochs=epochs)
combined_model = WideDeepModel(linear_model, dnn_model)
combined_model.compile(optimizer=['sgd', 'adam'], loss='mse', metrics=['mse'])
combined_model.fit([linear_inputs, dnn_inputs], y, epochs=epochs)

Args:

    linear_model: a premade LinearModel; its output must match the output of the dnn model.
    dnn_model: a tf.keras.Model; its output must match the output of the linear model.
    activation: Activation function. Set it to None to maintain a linear activation.
    **kwargs: The keyword arguments that are passed on to BaseLayer.__init__ (e.g. name).

Attributes:

Methods

compile

compile(
    optimizer='rmsprop', loss=None, metrics=None, loss_weights=None,
    sample_weight_mode=None, weighted_metrics=None, target_tensors=None,
    distribute=None, **kwargs
)

Configures the model for training.
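As a minimal sketch (reusing combined_model from the class example, and assuming the first optimizer in the list updates the linear sub-model and the second updates the DNN sub-model):

combined_model.compile(
    optimizer=[tf.keras.optimizers.SGD(learning_rate=0.01),    # assumed: wide/linear part
               tf.keras.optimizers.Adam(learning_rate=0.001)],  # assumed: deep/DNN part
    loss='mse',
    metrics=['mse'])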

Arguments:

Raises:

evaluate

evaluate(
    x=None, y=None, batch_size=None, verbose=1, sample_weight=None, steps=None,
    callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False
)

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches.

Arguments:

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.
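For example (a sketch reusing the inputs and metrics from the class example), the returned scalars can be labeled via metrics_names:

# linear_inputs, dnn_inputs and y are assumed to be numpy arrays as above.
results = combined_model.evaluate([linear_inputs, dnn_inputs], y, batch_size=32)
print(dict(zip(combined_model.metrics_names, results)))  # e.g. {'loss': ..., 'mse': ...}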

Raises:

evaluate_generator

evaluate_generator(
    generator, steps=None, callbacks=None, max_queue_size=10, workers=1,
    use_multiprocessing=False, verbose=0
)

Evaluates the model on a data generator. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use Model.evaluate, which supports generators.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

fit

fit(
    x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None,
    validation_split=0.0, validation_data=None, shuffle=True, class_weight=None,
    sample_weight=None, initial_epoch=0, steps_per_epoch=None,
    validation_steps=None, validation_freq=1, max_queue_size=10, workers=1,
    use_multiprocessing=False, **kwargs
)

Trains the model for a fixed number of epochs (iterations on a dataset).

Arguments:

Unpacking behavior for iterator-like inputs: A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length-one tuple, effectively treating everything as x.

When yielding dicts, they should still adhere to the top-level tuple structure, e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple, because it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). Given a namedtuple of the form namedtuple("example_tuple", ["y", "x"]), it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form namedtuple("other_tuple", ["x", "y", "z"]), where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result, the data processing code will simply raise a ValueError if it encounters a namedtuple, along with instructions to remedy the issue.
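A brief sketch of these rules with hypothetical toy data:

import numpy as np
import tensorflow as tf

x0 = np.random.rand(8, 3).astype('float32')   # hypothetical features
x1 = np.random.rand(8, 5).astype('float32')
y = np.random.rand(8, 1).astype('float32')
w = np.ones((8,), dtype='float32')

# A length-2 tuple is unpacked as (x, y); a length-3 tuple as (x, y, sample_weight).
# Dict-valued features stay together under x and are matched to inputs by name.
ds_xy = tf.data.Dataset.from_tensor_slices(({'x0': x0, 'x1': x1}, y)).batch(4)
ds_xyw = tf.data.Dataset.from_tensor_slices(({'x0': x0, 'x1': x1}, y, w)).batch(4)
# model.fit(ds_xyw, epochs=1)  # assumes a model with inputs named 'x0' and 'x1'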

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

fit_generator

fit_generator(
    generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None,
    validation_data=None, validation_steps=None, validation_freq=1,
    class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False,
    shuffle=True, initial_epoch=0
)

Fits the model on data yielded batch-by-batch by a Python generator. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use Model.fit, which supports generators.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

get_layer

get_layer(
    name=None, index=None
)

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).
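For example, reusing the Sequential dnn_model from the class example (layer names default to ones Keras generates, e.g. 'dense', unless a name is passed to the layer):

first_dense = dnn_model.get_layer(index=0)
same_layer = dnn_model.get_layer(name=first_dense.name)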

Arguments:

Returns:

A layer instance.

Raises:

load_weights

load_weights(
    filepath, by_name=False, skip_mismatch=False
)

Loads all layer weights, either from a TensorFlow or an HDF5 weight file.

If by_name is False weights are loaded based on the network's topology. This means the architecture should be the same as when the weights were saved. Note that layers that don't have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don't have weights.

If by_name is True, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Only topological loading (by_name=False) is supported when loading weights from the TensorFlow format. Note that topological loading differs slightly between TensorFlow and HDF5 formats for user-defined classes inheriting from tf.keras.Model: HDF5 loads based on a flattened list of weights, while the TensorFlow format loads based on the object-local names of attributes to which layers are assigned in the Model's constructor.
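A minimal fine-tuning sketch of by_name loading, using two hypothetical Sequential models that share one layer name:

base = tf.keras.Sequential([
    tf.keras.layers.Dense(64, name='shared_dense'),
    tf.keras.layers.Dense(1, name='old_head'),
])
base.build((None, 5))
base.save_weights('base_weights.h5')  # HDF5 format inferred from the extension

new = tf.keras.Sequential([
    tf.keras.layers.Dense(64, name='shared_dense'),
    tf.keras.layers.Dense(10, name='new_head'),
])
new.build((None, 5))
# Only 'shared_dense' receives weights; 'new_head' has no match in the file
# and keeps its fresh initialization.
new.load_weights('base_weights.h5', by_name=True)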

Arguments:

Returns:

When loading a weight file in TensorFlow format, returns the same status object as tf.train.Checkpoint.restore. When graph building, restore ops are run automatically as soon as the network is built (on first call for user-defined classes inheriting from Model, immediately if it is already built).

When loading weights in HDF5 format, returns None.

Raises:

predict

predict(
    x, batch_size=None, verbose=0, steps=None, callbacks=None, max_queue_size=10,
    workers=1, use_multiprocessing=False
)

Generates output predictions for the input samples.

Computation is done in batches.

Arguments:

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.
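A short sketch, reusing the input names from the class example:

# Returns a numpy array with one prediction per input sample.
predictions = combined_model.predict([linear_inputs, dnn_inputs], batch_size=32)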

Returns:

Numpy array(s) of predictions.

Raises:

predict_generator

predict_generator(
    generator, steps=None, callbacks=None, max_queue_size=10, workers=1,
    use_multiprocessing=False, verbose=0
)

Generates predictions for the input samples from a data generator. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use Model.predict, which supports generators.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch

predict_on_batch(
    x
)

Returns predictions for a single batch of samples.

Arguments:

Returns:

Numpy array(s) of predictions.

Raises:

reset_metrics

reset_metrics()

Resets the state of metrics.

reset_states

reset_states()

save

save(
    filepath, overwrite=True, include_optimizer=True, save_format=None,
    signatures=None, options=None
)

Saves the model to a TensorFlow SavedModel or a single HDF5 file.

The savefile includes:

- The model architecture, allowing the model to be re-instantiated.
- The model weights.
- The state of the optimizer, allowing training to resume exactly where you left off.

This allows you to save the entirety of the state of a model in a single file.

Saved models can be reinstantiated via keras.models.load_model. The model returned by load_model is a compiled model ready to be used (unless the saved model was never compiled in the first place).

Models built with the Sequential and Functional API can be saved to both the HDF5 and SavedModel formats. Subclassed models can only be saved with the SavedModel format.

Arguments:

Example:

from tensorflow.keras.models import load_model

model.save('my_model.h5')  # creates an HDF5 file 'my_model.h5'
del model  # deletes the existing model

# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
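Since WideDeepModel is a subclassed model, saving the whole combined model requires the SavedModel format; a hedged sketch:

# Saves architecture, weights and optimizer state to a directory.
combined_model.save('wide_deep_saved_model', save_format='tf')
reloaded = tf.keras.models.load_model('wide_deep_saved_model')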

save_weights

save_weights(
    filepath, overwrite=True, save_format=None
)

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:

- layer_names (attribute), a list of strings (ordered names of model layers).
- For every layer, a group named layer.name.
- For every such layer group, a group attribute weight_names, a list of strings (ordered names of weight tensors of the layer).
- For every weight in the layer, a dataset storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model's variables. See the guide to training checkpoints for details on the TensorFlow format.
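A minimal checkpointing sketch in the TensorFlow format (no .h5 extension, so the format defaults to 'tf'):

# Weights saved with save_weights must be restored with load_weights,
# not with tf.train.Checkpoint.restore.
combined_model.save_weights('./checkpoints/wide_deep')
combined_model.load_weights('./checkpoints/wide_deep')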

Arguments:

Raises:

summary

summary(
    line_length=None, positions=None, print_fn=None
)

Prints a string summary of the network.

Arguments:

Raises:

test_on_batch

test_on_batch(
    x, y=None, sample_weight=None, reset_metrics=True
)

Tests the model on a single batch of samples.

Arguments:

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

to_json

to_json(
    **kwargs
)

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Arguments:

Returns:

A JSON string.
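A round-trip sketch using the Sequential dnn_model from the class example (whether a subclassed model such as WideDeepModel supports to_json depends on it implementing get_config):

json_config = dnn_model.to_json()
rebuilt_dnn = tf.keras.models.model_from_json(json_config)  # architecture only, no weights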

to_yaml

to_yaml(
    **kwargs
)

Returns a YAML string containing the network configuration.

To load a network from a YAML save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Arguments:

Returns:

A YAML string.

Raises:

train_on_batch

train_on_batch(
    x, y=None, sample_weight=None, class_weight=None, reset_metrics=True
)

Runs a single gradient update on a single batch of data.
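A hypothetical custom-loop sketch, assuming batch_iter yields ([linear_batch, dnn_batch], y_batch) pairs:

for step, (x_batch, y_batch) in enumerate(batch_iter):
    logs = combined_model.train_on_batch(x_batch, y_batch)
    if step % 100 == 0:
        print(step, dict(zip(combined_model.metrics_names, logs)))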

Arguments:

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises: