tf.lite.Interpreter

Class Interpreter

Aliases:

  • Class tf.contrib.lite.Interpreter
  • Class tf.lite.Interpreter

Defined in tensorflow/lite/python/interpreter.py.

Interpreter interface for TF-Lite models.

__init__

__init__(
    model_path=None,
    model_content=None
)

Constructor.

Args:

  • model_path: Path to the TF-Lite FlatBuffer file.
  • model_content: Content of the model (the serialized FlatBuffer bytes).

Raises:

  • ValueError: If the interpreter could not be created.
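
For example, a minimal construction sketch; "model.tflite" is a placeholder path, and the model is assumed to have already been converted to the TF-Lite format (exactly one of model_path or model_content should be given):

import tensorflow as tf

# "model.tflite" is a placeholder path to an already-converted model.
interpreter = tf.lite.Interpreter(model_path="model.tflite")

# Equivalently, pass the serialized model bytes directly.
with open("model.tflite", "rb") as f:
  interpreter = tf.lite.Interpreter(model_content=f.read())

interpreter.allocate_tensors()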

Methods

tf.lite.Interpreter.allocate_tensors

allocate_tensors()

tf.lite.Interpreter.get_input_details

get_input_details()

Gets model input details.

Returns:

A list of input details.
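
Each entry is a dictionary. As a sketch (assuming the interpreter constructed above), the commonly present keys include "name", "index", "shape", and "dtype":

detail = interpreter.get_input_details()[0]
print(detail["name"])   # graph input name
print(detail["index"])  # tensor index to pass to set_tensor() or tensor()
print(detail["shape"])  # numpy array of input dimensions
print(detail["dtype"])  # numpy dtype expected by set_tensor()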

tf.lite.Interpreter.get_output_details

get_output_details()

Gets model output details.

Returns:

A list of output details.
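
The entries mirror the structure of the input details; the "index" field is what get_tensor() expects. For example, assuming the interpreter above:

for detail in interpreter.get_output_details():
  print(detail["index"], detail["shape"], detail["dtype"])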

tf.lite.Interpreter.get_tensor

get_tensor(tensor_index)

Gets the value of the specified tensor (returns a copy).

If you wish to avoid the copy, use tensor().

Args:

  • tensor_index: Tensor index of the tensor to get. This value can be obtained from the 'index' field in get_output_details.

Returns:

A numpy array.
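
For example, a short sketch assuming the interpreter above has already been fed input and run with invoke(); the returned array is a detached copy, so it is safe to keep:

output_index = interpreter.get_output_details()[0]["index"]
result = interpreter.get_tensor(output_index)  # detached numpy copy
print(result.shape)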

tf.lite.Interpreter.get_tensor_details

get_tensor_details()

Gets tensor details for every tensor with valid tensor details.

Tensors for which the required information is not available are not added to the list. This includes temporary tensors without a name.

Returns:

A list of dictionaries containing tensor information.
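
For example, a sketch assuming the interpreter above; the "index", "name", and "shape" keys shown are the commonly available ones:

for detail in interpreter.get_tensor_details():
  print(detail["index"], detail["name"], detail["shape"])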

tf.lite.Interpreter.invoke

invoke()

Invoke the interpreter.

Be sure to set the input sizes, allocate tensors, and fill in input values before calling this.

Raises:

  • ValueError: If the underlying interpreter fails.
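
A typical end-to-end call sequence, sketched under the assumption that the interpreter constructed above is used and that an all-zeros input of the model's declared shape and dtype is acceptable:

import numpy as np

input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

interpreter.allocate_tensors()
interpreter.set_tensor(input_detail["index"],
                       np.zeros(input_detail["shape"], dtype=input_detail["dtype"]))
interpreter.invoke()
print(interpreter.get_tensor(output_detail["index"]))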

tf.lite.Interpreter.reset_all_variables

reset_all_variables()

tf.lite.Interpreter.resize_tensor_input

resize_tensor_input(
    input_index,
    tensor_size
)

Resizes an input tensor.

Args:

  • input_index: Tensor index of the input to set. This value can be obtained from the 'index' field in get_input_details.
  • tensor_size: The new shape to resize the input to.

Raises:

  • ValueError: If the interpreter could not resize the input tensor.
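
For example, a sketch that grows the batch dimension of the first input, assuming the interpreter above; the shape [4, 224, 224, 3] is a made-up value, and allocate_tensors() must be called again after resizing:

input_index = interpreter.get_input_details()[0]["index"]
interpreter.resize_tensor_input(input_index, [4, 224, 224, 3])  # hypothetical new shape
interpreter.allocate_tensors()  # buffers must be re-allocated after resizing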

tf.lite.Interpreter.set_tensor

set_tensor(
    tensor_index,
    value
)

Sets the value of the input tensor. Note that this copies the data in value.

If you want to avoid copying, you can use the tensor() function to get a numpy buffer pointing to the input buffer in the tflite interpreter.

Args:

  • tensor_index: Tensor index of the tensor to set. This value can be obtained from the 'index' field in get_input_details.
  • value: Value of tensor to set.

Raises:

  • ValueError: If the interpreter could not set the tensor.
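
For example, a sketch assuming the interpreter above; the value must match the shape and dtype reported by get_input_details:

import numpy as np

detail = interpreter.get_input_details()[0]
interpreter.allocate_tensors()
interpreter.set_tensor(detail["index"],
                       np.ones(detail["shape"], dtype=detail["dtype"]))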

tf.lite.Interpreter.tensor

tensor(tensor_index)

Returns a function that gives a numpy view of the current tensor buffer.

This allows reading and writing to the tensor without copies. This more closely mirrors the C++ Interpreter class interface's tensor() member, hence the name. Be careful not to hold these output references through calls to allocate_tensors() and invoke().

Usage:

interpreter.allocate_tensors()
# tensor() returns a callable; calling it yields a fresh numpy view of the buffer.
input = interpreter.tensor(interpreter.get_input_details()[0]["index"])
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
for i in range(10):
  input().fill(3.)  # write into the input buffer in place
  interpreter.invoke()
  print("inference %s" % output())  # read the output buffer in place

Notice how this function avoids making a numpy array directly. This is because it is important not to hold actual numpy views to the data longer than necessary. If you do, then the interpreter can no longer be invoked, because it is possible the interpreter would resize and invalidate the referenced tensors. The NumPy API doesn't allow any mutability of the underlying buffers.

WRONG:

input = interpreter.tensor(interpreter.get_input_details()[0]["index"])()
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])()
interpreter.allocate_tensors()  # This will throw a RuntimeError
for i in range(10):
  input.fill(3.)
  interpreter.invoke()  # This will throw a RuntimeError since input and output still hold views of the internal buffers

Args:

  • tensor_index: Tensor index of the tensor to get. This value can be obtained from the 'index' field in get_output_details.

Returns:

A function that can return a new numpy array pointing to the internal TFLite tensor state at any point. It is safe to hold the function forever, but it is not safe to hold the numpy array forever.