A low-level client representing AWS IoT Analytics:
import boto3
client = boto3.client('iotanalytics')
These are the available methods:
batch_put_message(**kwargs)
Sends messages to a channel.
See also: AWS API Documentation
Request Syntax
response = client.batch_put_message(
channelName='string',
messages=[
{
'messageId': 'string',
'payload': b'bytes'
},
]
)
[REQUIRED]
The name of the channel where the messages are sent.
[REQUIRED]
The list of messages to be sent. Each message has format: '{ "messageId": "string", "payload": "string"}'.
Note the restrictions on the field names of message payloads (data) that you send to AWS IoT Analytics: a field name must begin with an alphabetic character or a single underscore, and may contain only alphanumeric characters and underscores.
For example, {"temp_01": 29} or {"_temp_01": 29} are valid, but {"temp-01": 29}, {"01_temp": 29} or {"__temp_01": 29} are invalid in message payloads.
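As a sketch, the field-name rule can be checked with a regular expression. The pattern below is inferred from the examples above, not taken from an official AWS specification:

```python
import re

# Inferred rule: begin with a letter or a single underscore (not two),
# followed only by letters, digits, or underscores.
FIELD_NAME = re.compile(r"^(?!__)[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_field_name(name):
    return bool(FIELD_NAME.match(name))

# is_valid_field_name("temp_01")   -> True
# is_valid_field_name("_temp_01")  -> True
# is_valid_field_name("temp-01")   -> False
# is_valid_field_name("01_temp")   -> False
# is_valid_field_name("__temp_01") -> False
```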
Information about a message.
The ID you wish to assign to the message. Each "messageId" must be unique within each batch sent.
The payload of the message. This may be a JSON string or a Base64-encoded string representing binary data (in which case you must decode it by means of a pipeline activity).
dict
Response Syntax
{
'batchPutMessageErrorEntries': [
{
'messageId': 'string',
'errorCode': 'string',
'errorMessage': 'string'
},
]
}
Response Structure
(dict) --
batchPutMessageErrorEntries (list) --
A list of any errors encountered when sending the messages to the channel.
(dict) --
Contains information about errors.
messageId (string) --
The ID of the message that caused the error. (See the value corresponding to the "messageId" key in the message object.)
errorCode (string) --
The code associated with the error.
errorMessage (string) --
The message associated with the error.
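Putting the request and response together, a minimal helper might serialize dicts to JSON payloads and report which messages failed. The helper name is an assumption, and `client` is taken to be the `boto3.client('iotanalytics')` created above:

```python
import json

def send_batch(client, channel_name, records):
    """Serialize each dict to a JSON payload and send the batch.

    Returns the messageIds listed in batchPutMessageErrorEntries,
    i.e. the messages that failed.
    """
    messages = [
        {"messageId": str(i), "payload": json.dumps(rec).encode("utf-8")}
        for i, rec in enumerate(records)
    ]
    response = client.batch_put_message(
        channelName=channel_name, messages=messages
    )
    return [e["messageId"] for e in response.get("batchPutMessageErrorEntries", [])]
```

For example, `send_batch(client, 'telemetry_channel', [{"temp_01": 29}])` returns an empty list when every message is accepted.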
can_paginate(operation_name)
Check if an operation can be paginated.
cancel_pipeline_reprocessing(**kwargs)
Cancels the reprocessing of data through the pipeline.
See also: AWS API Documentation
Request Syntax
response = client.cancel_pipeline_reprocessing(
pipelineName='string',
reprocessingId='string'
)
[REQUIRED]
The name of pipeline for which data reprocessing is canceled.
[REQUIRED]
The ID of the reprocessing task (returned by "StartPipelineReprocessing").
dict
Response Syntax
{}
Response Structure
create_channel(**kwargs)
Creates a channel. A channel collects data from an MQTT topic and archives the raw, unprocessed messages before publishing the data to a pipeline.
See also: AWS API Documentation
Request Syntax
response = client.create_channel(
channelName='string',
retentionPeriod={
'unlimited': True|False,
'numberOfDays': 123
},
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
[REQUIRED]
The name of the channel.
How long, in days, message data is kept for the channel.
If true, message data is kept indefinitely.
The number of days that message data is kept. The "unlimited" parameter must be false.
Metadata which can be used to manage the channel.
A set of key/value pairs which are used to manage the resource.
The tag's key.
The tag's value.
dict
Response Syntax
{
'channelName': 'string',
'channelArn': 'string',
'retentionPeriod': {
'unlimited': True|False,
'numberOfDays': 123
}
}
Response Structure
(dict) --
channelName (string) --
The name of the channel.
channelArn (string) --
The ARN of the channel.
retentionPeriod (dict) --
How long, in days, message data is kept for the channel.
unlimited (boolean) --
If true, message data is kept indefinitely.
numberOfDays (integer) --
The number of days that message data is kept. The "unlimited" parameter must be false.
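For instance, a channel with a 90-day retention period might be created as follows. The channel name and tag are illustrative assumptions, and the call itself is left commented because it requires AWS credentials:

```python
# Illustrative parameters; the name and tag are assumptions.
channel_params = {
    "channelName": "telemetry_channel",
    "retentionPeriod": {"unlimited": False, "numberOfDays": 90},
    "tags": [{"key": "environment", "value": "dev"}],
}
# response = client.create_channel(**channel_params)
# response["channelArn"] then identifies the new channel.
```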
create_dataset(**kwargs)
Creates a data set. A data set stores data retrieved from a data store by applying a "queryAction" (a SQL query) or a "containerAction" (executing a containerized application). This operation creates the skeleton of a data set. The data set can be populated manually by calling "CreateDatasetContent" or automatically according to a "trigger" you specify.
See also: AWS API Documentation
Request Syntax
response = client.create_dataset(
datasetName='string',
actions=[
{
'actionName': 'string',
'queryAction': {
'sqlQuery': 'string',
'filters': [
{
'deltaTime': {
'offsetSeconds': 123,
'timeExpression': 'string'
}
},
]
},
'containerAction': {
'image': 'string',
'executionRoleArn': 'string',
'resourceConfiguration': {
'computeType': 'ACU_1'|'ACU_2',
'volumeSizeInGB': 123
},
'variables': [
{
'name': 'string',
'stringValue': 'string',
'doubleValue': 123.0,
'datasetContentVersionValue': {
'datasetName': 'string'
},
'outputFileUriValue': {
'fileName': 'string'
}
},
]
}
},
],
triggers=[
{
'schedule': {
'expression': 'string'
},
'dataset': {
'name': 'string'
}
},
],
contentDeliveryRules=[
{
'entryName': 'string',
'destination': {
'iotEventsDestinationConfiguration': {
'inputName': 'string',
'roleArn': 'string'
}
}
},
],
retentionPeriod={
'unlimited': True|False,
'numberOfDays': 123
},
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
[REQUIRED]
The name of the data set.
[REQUIRED]
A list of actions that create the data set contents.
A "DatasetAction" object that specifies how data set contents are automatically created.
The name of the data set action by which data set contents are automatically created.
An "SqlQueryDatasetAction" object that uses an SQL query to automatically create data set contents.
A SQL query string.
Pre-filters applied to message data.
Information which is used to filter message data, to segregate it according to the time frame in which it arrives.
Used to limit data to that which has arrived since the last execution of the action.
The number of seconds of estimated "in flight" lag time of message data. When you create data set contents using message data from a specified time frame, some message data may still be "in flight" when processing begins, and so will not arrive in time to be processed. Use this field to make allowances for the "in flight" time of your message data, so that data not processed from a previous time frame will be included with the next time frame. Without this, missed message data would be excluded from processing during the next time frame as well, because its timestamp places it within the previous time frame.
An expression by which the time of the message data may be determined. This may be the name of a timestamp field, or a SQL expression which is used to derive the time the message data was generated.
Information which allows the system to run a containerized application in order to create the data set contents. The application must be in a Docker container along with any needed support libraries.
The ARN of the Docker container stored in your account. The Docker container contains an application and needed support libraries and is used to generate data set contents.
The ARN of the role which gives permission to the system to access needed resources in order to run the "containerAction". This includes, at minimum, permission to retrieve the data set contents which are the input to the containerized application.
Configuration of the resource which executes the "containerAction".
The type of the compute resource used to execute the "containerAction". Possible values are: ACU_1 (vCPU=4, memory=16GiB) or ACU_2 (vCPU=8, memory=32GiB).
The size (in GB) of the persistent storage available to the resource instance used to execute the "containerAction" (min: 1, max: 50).
The values of variables used within the context of the execution of the containerized application (basically, parameters passed to the application). Each variable must have a name and a value given by one of "stringValue", "datasetContentVersionValue", or "outputFileUriValue".
An instance of a variable to be passed to the "containerAction" execution. Each variable must have a name and a value given by one of "stringValue", "datasetContentVersionValue", or "outputFileUriValue".
The name of the variable.
The value of the variable as a string.
The value of the variable as a double (numeric).
The value of the variable as a structure that specifies a data set content version.
The name of the data set whose latest contents are used as input to the notebook or application.
The value of the variable as a structure that specifies an output file URI.
The URI of the location where data set contents are stored, usually the URI of a file in an S3 bucket.
A list of triggers. A trigger causes data set contents to be populated at a specified time interval or when another data set's contents are created. The list of triggers can be empty or contain up to five DataSetTrigger objects.
The "DatasetTrigger" that specifies when the data set is automatically updated.
The "Schedule" when the trigger is initiated.
The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the Amazon CloudWatch documentation.
The data set whose content creation triggers the creation of this data set's contents.
The name of the data set whose content generation triggers the new data set content generation.
When data set contents are created, they are delivered to the destinations specified here.
When data set contents are created, they are delivered to the destination specified here.
The name of the data set content delivery rules entry.
The destination to which data set contents are delivered.
Configuration information for delivery of data set contents to AWS IoT Events.
The name of the AWS IoT Events input to which data set contents are delivered.
The ARN of the role which grants AWS IoT Analytics permission to deliver data set contents to an AWS IoT Events input.
[Optional] How long, in days, message data is kept for the data set. If not given or set to null, the latest version of the dataset content plus the latest succeeded version (if they are different) are retained for at most 90 days.
If true, message data is kept indefinitely.
The number of days that message data is kept. The "unlimited" parameter must be false.
Metadata which can be used to manage the data set.
A set of key/value pairs which are used to manage the resource.
The tag's key.
The tag's value.
dict
Response Syntax
{
'datasetName': 'string',
'datasetArn': 'string',
'retentionPeriod': {
'unlimited': True|False,
'numberOfDays': 123
}
}
Response Structure
(dict) --
datasetName (string) --
The name of the data set.
datasetArn (string) --
The ARN of the data set.
retentionPeriod (dict) --
How long, in days, message data is kept for the data set.
unlimited (boolean) --
If true, message data is kept indefinitely.
numberOfDays (integer) --
The number of days that message data is kept. The "unlimited" parameter must be false.
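As a sketch, a SQL-backed data set refreshed on a daily schedule might look like this. The data set name, the query, and the data store name are assumptions; the call is left commented because it requires AWS credentials:

```python
# Illustrative SQL data set; names and query are assumptions.
dataset_params = {
    "datasetName": "daily_temperature",
    "actions": [{
        "actionName": "select_all",
        "queryAction": {"sqlQuery": "SELECT * FROM telemetry_store"},
    }],
    # Re-create the contents every day at noon UTC.
    "triggers": [{"schedule": {"expression": "cron(0 12 * * ? *)"}}],
    "retentionPeriod": {"unlimited": False, "numberOfDays": 30},
}
# response = client.create_dataset(**dataset_params)
```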
create_dataset_content(**kwargs)
Creates the content of a data set by applying a "queryAction" (a SQL query) or a "containerAction" (executing a containerized application).
See also: AWS API Documentation
Request Syntax
response = client.create_dataset_content(
datasetName='string'
)
[REQUIRED]
The name of the data set.
dict
Response Syntax
{
'versionId': 'string'
}
Response Structure
The version ID of the data set contents which are being created.
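A common pattern is to kick off content creation and then poll until the new version is ready. This sketch (the helper name is an assumption) uses the get_dataset_content operation documented later on this page:

```python
import time

def create_and_wait(client, dataset_name, poll_seconds=5):
    """Start content creation, then poll get_dataset_content until done."""
    version_id = client.create_dataset_content(datasetName=dataset_name)["versionId"]
    while True:
        status = client.get_dataset_content(
            datasetName=dataset_name, versionId=version_id
        )["status"]
        if status["state"] in ("SUCCEEDED", "FAILED"):
            return version_id, status
        time.sleep(poll_seconds)
```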
create_datastore(**kwargs)
Creates a data store, which is a repository for messages.
See also: AWS API Documentation
Request Syntax
response = client.create_datastore(
datastoreName='string',
retentionPeriod={
'unlimited': True|False,
'numberOfDays': 123
},
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
[REQUIRED]
The name of the data store.
How long, in days, message data is kept for the data store.
If true, message data is kept indefinitely.
The number of days that message data is kept. The "unlimited" parameter must be false.
Metadata which can be used to manage the data store.
A set of key/value pairs which are used to manage the resource.
The tag's key.
The tag's value.
dict
Response Syntax
{
'datastoreName': 'string',
'datastoreArn': 'string',
'retentionPeriod': {
'unlimited': True|False,
'numberOfDays': 123
}
}
Response Structure
(dict) --
datastoreName (string) --
The name of the data store.
datastoreArn (string) --
The ARN of the data store.
retentionPeriod (dict) --
How long, in days, message data is kept for the data store.
unlimited (boolean) --
If true, message data is kept indefinitely.
numberOfDays (integer) --
The number of days that message data is kept. The "unlimited" parameter must be false.
create_pipeline(**kwargs)
Creates a pipeline. A pipeline consumes messages from one or more channels and allows you to process the messages before storing them in a data store.
See also: AWS API Documentation
Request Syntax
response = client.create_pipeline(
pipelineName='string',
pipelineActivities=[
{
'channel': {
'name': 'string',
'channelName': 'string',
'next': 'string'
},
'lambda': {
'name': 'string',
'lambdaName': 'string',
'batchSize': 123,
'next': 'string'
},
'datastore': {
'name': 'string',
'datastoreName': 'string'
},
'addAttributes': {
'name': 'string',
'attributes': {
'string': 'string'
},
'next': 'string'
},
'removeAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'selectAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'filter': {
'name': 'string',
'filter': 'string',
'next': 'string'
},
'math': {
'name': 'string',
'attribute': 'string',
'math': 'string',
'next': 'string'
},
'deviceRegistryEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
},
'deviceShadowEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
}
},
],
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
[REQUIRED]
The name of the pipeline.
[REQUIRED]
A list of pipeline activities.
The list can contain 1-25 "PipelineActivity" objects. Activities perform transformations on your messages, such as removing, renaming, or adding message attributes; filtering messages based on attribute values; invoking your Lambda functions on messages for advanced processing; or performing mathematical transformations to normalize device data.
An activity that performs a transformation on a message.
Determines the source of the messages to be processed.
The name of the 'channel' activity.
The name of the channel from which the messages are processed.
The next activity in the pipeline.
Runs a Lambda function to modify the message.
The name of the 'lambda' activity.
The name of the Lambda function that is run on the message.
The number of messages passed to the Lambda function for processing.
The AWS Lambda function must be able to process all of these messages within five minutes, which is the maximum timeout duration for Lambda functions.
The next activity in the pipeline.
Specifies where to store the processed message data.
The name of the 'datastore' activity.
The name of the data store where processed messages are stored.
Adds other attributes based on existing attributes in the message.
The name of the 'addAttributes' activity.
A list of 1-50 "AttributeNameMapping" objects that map an existing attribute to a new attribute.
Note
The existing attributes remain in the message, so if you want to remove the originals, use "RemoveAttributeActivity".
The next activity in the pipeline.
Removes attributes from a message.
The name of the 'removeAttributes' activity.
A list of 1-50 attributes to remove from the message.
The next activity in the pipeline.
Creates a new message using only the specified attributes from the original message.
The name of the 'selectAttributes' activity.
A list of the attributes to select from the message.
The next activity in the pipeline.
Filters a message based on its attributes.
The name of the 'filter' activity.
An expression, resembling a SQL WHERE clause, that must return a Boolean value.
The next activity in the pipeline.
Computes an arithmetic expression using the message's attributes and adds it to the message.
The name of the 'math' activity.
The name of the attribute that contains the result of the math operation.
An expression that uses one or more existing attributes and must return an integer value.
The next activity in the pipeline.
Adds data from the AWS IoT device registry to your message.
The name of the 'deviceRegistryEnrich' activity.
The name of the attribute that is added to the message.
The name of the IoT device whose registry information is added to the message.
The ARN of the role that allows access to the device's registry information.
The next activity in the pipeline.
Adds information from the AWS IoT Device Shadows service to a message.
The name of the 'deviceShadowEnrich' activity.
The name of the attribute that is added to the message.
The name of the IoT device whose shadow information is added to the message.
The ARN of the role that allows access to the device's shadow.
The next activity in the pipeline.
Metadata which can be used to manage the pipeline.
A set of key/value pairs which are used to manage the resource.
The tag's key.
The tag's value.
dict
Response Syntax
{
'pipelineName': 'string',
'pipelineArn': 'string'
}
Response Structure
(dict) --
pipelineName (string) --
The name of the pipeline.
pipelineArn (string) --
The ARN of the pipeline.
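A minimal activity list chains a channel source through a filter into a data store; each activity's "next" field names the activity that follows it. All names below are assumptions, and the call is left commented because it requires AWS credentials:

```python
# A three-step pipeline: channel -> filter -> data store.
pipeline_activities = [
    {"channel": {
        "name": "source",
        "channelName": "telemetry_channel",
        "next": "drop_cold",
    }},
    {"filter": {
        "name": "drop_cold",
        "filter": "temp_01 > 0",   # SQL-WHERE-like expression
        "next": "store",
    }},
    {"datastore": {
        "name": "store",
        "datastoreName": "telemetry_store",
    }},
]
# response = client.create_pipeline(
#     pipelineName="telemetry_pipeline",
#     pipelineActivities=pipeline_activities,
# )
```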
delete_channel(**kwargs)
Deletes the specified channel.
See also: AWS API Documentation
Request Syntax
response = client.delete_channel(
channelName='string'
)
[REQUIRED]
The name of the channel to delete.
delete_dataset(**kwargs)
Deletes the specified data set.
You do not have to delete the content of the data set before you perform this operation.
See also: AWS API Documentation
Request Syntax
response = client.delete_dataset(
datasetName='string'
)
[REQUIRED]
The name of the data set to delete.
delete_dataset_content(**kwargs)
Deletes the content of the specified data set.
See also: AWS API Documentation
Request Syntax
response = client.delete_dataset_content(
datasetName='string',
versionId='string'
)
[REQUIRED]
The name of the data set whose content is deleted.
Returns: None
delete_datastore(**kwargs)
Deletes the specified data store.
See also: AWS API Documentation
Request Syntax
response = client.delete_datastore(
datastoreName='string'
)
[REQUIRED]
The name of the data store to delete.
delete_pipeline(**kwargs)
Deletes the specified pipeline.
See also: AWS API Documentation
Request Syntax
response = client.delete_pipeline(
pipelineName='string'
)
[REQUIRED]
The name of the pipeline to delete.
describe_channel(**kwargs)
Retrieves information about a channel.
See also: AWS API Documentation
Request Syntax
response = client.describe_channel(
channelName='string',
includeStatistics=True|False
)
[REQUIRED]
The name of the channel whose information is retrieved.
dict
Response Syntax
{
'channel': {
'name': 'string',
'arn': 'string',
'status': 'CREATING'|'ACTIVE'|'DELETING',
'retentionPeriod': {
'unlimited': True|False,
'numberOfDays': 123
},
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1)
},
'statistics': {
'size': {
'estimatedSizeInBytes': 123.0,
'estimatedOn': datetime(2015, 1, 1)
}
}
}
Response Structure
(dict) --
channel (dict) --
An object that contains information about the channel.
name (string) --
The name of the channel.
arn (string) --
The ARN of the channel.
status (string) --
The status of the channel.
retentionPeriod (dict) --
How long, in days, message data is kept for the channel.
unlimited (boolean) --
If true, message data is kept indefinitely.
numberOfDays (integer) --
The number of days that message data is kept. The "unlimited" parameter must be false.
creationTime (datetime) --
When the channel was created.
lastUpdateTime (datetime) --
When the channel was last updated.
statistics (dict) --
Statistics about the channel. Included if the 'includeStatistics' parameter is set to true in the request.
size (dict) --
The estimated size of the channel.
estimatedSizeInBytes (float) --
The estimated size of the resource in bytes.
estimatedOn (datetime) --
The time when the estimate of the size of the resource was made.
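As a sketch, the statistics block can be used to read a channel's estimated size. The helper name is an assumption; note that the statistics are only present when includeStatistics is true:

```python
def channel_size_bytes(client, channel_name):
    """Return the channel's estimated size in bytes.

    Requires includeStatistics=True, otherwise the response
    carries no 'statistics' key.
    """
    response = client.describe_channel(
        channelName=channel_name, includeStatistics=True
    )
    return response["statistics"]["size"]["estimatedSizeInBytes"]
```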
describe_dataset(**kwargs)
Retrieves information about a data set.
See also: AWS API Documentation
Request Syntax
response = client.describe_dataset(
datasetName='string'
)
[REQUIRED]
The name of the data set whose information is retrieved.
dict
Response Syntax
{
'dataset': {
'name': 'string',
'arn': 'string',
'actions': [
{
'actionName': 'string',
'queryAction': {
'sqlQuery': 'string',
'filters': [
{
'deltaTime': {
'offsetSeconds': 123,
'timeExpression': 'string'
}
},
]
},
'containerAction': {
'image': 'string',
'executionRoleArn': 'string',
'resourceConfiguration': {
'computeType': 'ACU_1'|'ACU_2',
'volumeSizeInGB': 123
},
'variables': [
{
'name': 'string',
'stringValue': 'string',
'doubleValue': 123.0,
'datasetContentVersionValue': {
'datasetName': 'string'
},
'outputFileUriValue': {
'fileName': 'string'
}
},
]
}
},
],
'triggers': [
{
'schedule': {
'expression': 'string'
},
'dataset': {
'name': 'string'
}
},
],
'contentDeliveryRules': [
{
'entryName': 'string',
'destination': {
'iotEventsDestinationConfiguration': {
'inputName': 'string',
'roleArn': 'string'
}
}
},
],
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1),
'retentionPeriod': {
'unlimited': True|False,
'numberOfDays': 123
}
}
}
Response Structure
An object that contains information about the data set.
The name of the data set.
The ARN of the data set.
The "DatasetAction" objects that automatically create the data set contents.
A "DatasetAction" object that specifies how data set contents are automatically created.
The name of the data set action by which data set contents are automatically created.
An "SqlQueryDatasetAction" object that uses an SQL query to automatically create data set contents.
A SQL query string.
Pre-filters applied to message data.
Information which is used to filter message data, to segregate it according to the time frame in which it arrives.
Used to limit data to that which has arrived since the last execution of the action.
The number of seconds of estimated "in flight" lag time of message data. When you create data set contents using message data from a specified time frame, some message data may still be "in flight" when processing begins, and so will not arrive in time to be processed. Use this field to make allowances for the "in flight" time of your message data, so that data not processed from a previous time frame will be included with the next time frame. Without this, missed message data would be excluded from processing during the next time frame as well, because its timestamp places it within the previous time frame.
An expression by which the time of the message data may be determined. This may be the name of a timestamp field, or a SQL expression which is used to derive the time the message data was generated.
Information which allows the system to run a containerized application in order to create the data set contents. The application must be in a Docker container along with any needed support libraries.
The ARN of the Docker container stored in your account. The Docker container contains an application and needed support libraries and is used to generate data set contents.
The ARN of the role which gives permission to the system to access needed resources in order to run the "containerAction". This includes, at minimum, permission to retrieve the data set contents which are the input to the containerized application.
Configuration of the resource which executes the "containerAction".
The type of the compute resource used to execute the "containerAction". Possible values are: ACU_1 (vCPU=4, memory=16GiB) or ACU_2 (vCPU=8, memory=32GiB).
The size (in GB) of the persistent storage available to the resource instance used to execute the "containerAction" (min: 1, max: 50).
The values of variables used within the context of the execution of the containerized application (basically, parameters passed to the application). Each variable must have a name and a value given by one of "stringValue", "datasetContentVersionValue", or "outputFileUriValue".
An instance of a variable to be passed to the "containerAction" execution. Each variable must have a name and a value given by one of "stringValue", "datasetContentVersionValue", or "outputFileUriValue".
The name of the variable.
The value of the variable as a string.
The value of the variable as a double (numeric).
The value of the variable as a structure that specifies a data set content version.
The name of the data set whose latest contents are used as input to the notebook or application.
The value of the variable as a structure that specifies an output file URI.
The URI of the location where data set contents are stored, usually the URI of a file in an S3 bucket.
The "DatasetTrigger" objects that specify when the data set is automatically updated.
The "DatasetTrigger" that specifies when the data set is automatically updated.
The "Schedule" when the trigger is initiated.
The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the Amazon CloudWatch documentation.
The data set whose content creation triggers the creation of this data set's contents.
The name of the data set whose content generation triggers the new data set content generation.
When data set contents are created, they are delivered to the destinations specified here.
When data set contents are created, they are delivered to the destination specified here.
The name of the data set content delivery rules entry.
The destination to which data set contents are delivered.
Configuration information for delivery of data set contents to AWS IoT Events.
The name of the AWS IoT Events input to which data set contents are delivered.
The ARN of the role which grants AWS IoT Analytics permission to deliver data set contents to an AWS IoT Events input.
The status of the data set.
When the data set was created.
The last time the data set was updated.
[Optional] How long, in days, message data is kept for the data set.
If true, message data is kept indefinitely.
The number of days that message data is kept. The "unlimited" parameter must be false.
describe_datastore(**kwargs)
Retrieves information about a data store.
See also: AWS API Documentation
Request Syntax
response = client.describe_datastore(
datastoreName='string',
includeStatistics=True|False
)
[REQUIRED]
The name of the data store whose information is retrieved.
dict
Response Syntax
{
'datastore': {
'name': 'string',
'arn': 'string',
'status': 'CREATING'|'ACTIVE'|'DELETING',
'retentionPeriod': {
'unlimited': True|False,
'numberOfDays': 123
},
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1)
},
'statistics': {
'size': {
'estimatedSizeInBytes': 123.0,
'estimatedOn': datetime(2015, 1, 1)
}
}
}
Response Structure
(dict) --
datastore (dict) --
Information about the data store.
name (string) --
The name of the data store.
arn (string) --
The ARN of the data store.
status (string) --
The status of a data store:
CREATING
The data store is being created.
ACTIVE
The data store has been created and can be used.
DELETING
The data store is being deleted.
retentionPeriod (dict) --
How long, in days, message data is kept for the data store.
unlimited (boolean) --
If true, message data is kept indefinitely.
numberOfDays (integer) --
The number of days that message data is kept. The "unlimited" parameter must be false.
creationTime (datetime) --
When the data store was created.
lastUpdateTime (datetime) --
The last time the data store was updated.
statistics (dict) --
Additional statistical information about the data store. Included if the 'includeStatistics' parameter is set to true in the request.
size (dict) --
The estimated size of the data store.
estimatedSizeInBytes (float) --
The estimated size of the resource in bytes.
estimatedOn (datetime) --
The time when the estimate of the size of the resource was made.
describe_logging_options()
Retrieves the current settings of the AWS IoT Analytics logging options.
See also: AWS API Documentation
Request Syntax
response = client.describe_logging_options()
dict
Response Syntax
{
'loggingOptions': {
'roleArn': 'string',
'level': 'ERROR',
'enabled': True|False
}
}
Response Structure
The current settings of the AWS IoT Analytics logging options.
The ARN of the role that grants permission to AWS IoT Analytics to perform logging.
The logging level. Currently, only "ERROR" is supported.
If true, logging is enabled for AWS IoT Analytics.
describe_pipeline(**kwargs)
Retrieves information about a pipeline.
See also: AWS API Documentation
Request Syntax
response = client.describe_pipeline(
pipelineName='string'
)
[REQUIRED]
The name of the pipeline whose information is retrieved.
dict
Response Syntax
{
'pipeline': {
'name': 'string',
'arn': 'string',
'activities': [
{
'channel': {
'name': 'string',
'channelName': 'string',
'next': 'string'
},
'lambda': {
'name': 'string',
'lambdaName': 'string',
'batchSize': 123,
'next': 'string'
},
'datastore': {
'name': 'string',
'datastoreName': 'string'
},
'addAttributes': {
'name': 'string',
'attributes': {
'string': 'string'
},
'next': 'string'
},
'removeAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'selectAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'filter': {
'name': 'string',
'filter': 'string',
'next': 'string'
},
'math': {
'name': 'string',
'attribute': 'string',
'math': 'string',
'next': 'string'
},
'deviceRegistryEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
},
'deviceShadowEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
}
},
],
'reprocessingSummaries': [
{
'id': 'string',
'status': 'RUNNING'|'SUCCEEDED'|'CANCELLED'|'FAILED',
'creationTime': datetime(2015, 1, 1)
},
],
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1)
}
}
Response Structure
A "Pipeline" object that contains information about the pipeline.
The name of the pipeline.
The ARN of the pipeline.
The activities that perform transformations on the messages.
An activity that performs a transformation on a message.
Determines the source of the messages to be processed.
The name of the 'channel' activity.
The name of the channel from which the messages are processed.
The next activity in the pipeline.
Runs a Lambda function to modify the message.
The name of the 'lambda' activity.
The name of the Lambda function that is run on the message.
The number of messages passed to the Lambda function for processing.
The AWS Lambda function must be able to process all of these messages within five minutes, which is the maximum timeout duration for Lambda functions.
The next activity in the pipeline.
Specifies where to store the processed message data.
The name of the 'datastore' activity.
The name of the data store where processed messages are stored.
Adds other attributes based on existing attributes in the message.
The name of the 'addAttributes' activity.
A list of 1-50 "AttributeNameMapping" objects that map an existing attribute to a new attribute.
Note
The existing attributes remain in the message, so if you want to remove the originals, use "RemoveAttributeActivity".
The next activity in the pipeline.
Removes attributes from a message.
The name of the 'removeAttributes' activity.
A list of 1-50 attributes to remove from the message.
The next activity in the pipeline.
Creates a new message using only the specified attributes from the original message.
The name of the 'selectAttributes' activity.
A list of the attributes to select from the message.
The next activity in the pipeline.
Filters a message based on its attributes.
The name of the 'filter' activity.
An expression, resembling a SQL WHERE clause, that must return a Boolean value.
The next activity in the pipeline.
Computes an arithmetic expression using the message's attributes and adds it to the message.
The name of the 'math' activity.
The name of the attribute that contains the result of the math operation.
An expression that uses one or more existing attributes and must return an integer value.
The next activity in the pipeline.
Adds data from the AWS IoT device registry to your message.
The name of the 'deviceRegistryEnrich' activity.
The name of the attribute that is added to the message.
The name of the IoT device whose registry information is added to the message.
The ARN of the role that allows access to the device's registry information.
The next activity in the pipeline.
Adds information from the AWS IoT Device Shadows service to a message.
The name of the 'deviceShadowEnrich' activity.
The name of the attribute that is added to the message.
The name of the IoT device whose shadow information is added to the message.
The ARN of the role that allows access to the device's shadow.
The next activity in the pipeline.
A summary of information about the pipeline reprocessing.
Information about pipeline reprocessing.
The 'reprocessingId' returned by "StartPipelineReprocessing".
The status of the pipeline reprocessing.
The time the pipeline reprocessing was created.
When the pipeline was created.
The last time the pipeline was updated.
Generate a presigned URL given a client, its method, and arguments
The presigned URL
Retrieves the contents of a data set as pre-signed URIs.
See also: AWS API Documentation
Request Syntax
response = client.get_dataset_content(
datasetName='string',
versionId='string'
)
[REQUIRED]
The name of the data set whose contents are retrieved.
dict
Response Syntax
{
'entries': [
{
'entryName': 'string',
'dataURI': 'string'
},
],
'timestamp': datetime(2015, 1, 1),
'status': {
'state': 'CREATING'|'SUCCEEDED'|'FAILED',
'reason': 'string'
}
}
Response Structure
(dict) --
entries (list) --
A list of "DatasetEntry" objects.
(dict) --
The reference to a data set entry.
entryName (string) --
The name of the data set item.
dataURI (string) --
The pre-signed URI of the data set item.
timestamp (datetime) --
The time when the request was made.
status (dict) --
The status of the data set content.
state (string) --
The state of the data set contents. Can be one of "CREATING", "SUCCEEDED" or "FAILED".
reason (string) --
The reason the data set contents are in this state.
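A minimal sketch of consuming this response: the helper below extracts the pre-signed URIs once the content has reached the "SUCCEEDED" state. The dataset name "my_dataset" in the commented call is a placeholder, and the assumption is that each "dataURI" can be fetched with plain HTTP (no AWS credentials) because it is pre-signed.

```python
def ready_entries(response):
    """Map entryName -> pre-signed URI, but only once the content has SUCCEEDED."""
    if response['status']['state'] != 'SUCCEEDED':
        return {}
    return {e['entryName']: e['dataURI'] for e in response['entries']}

# response = client.get_dataset_content(datasetName='my_dataset', versionId='$LATEST')
# for name, uri in ready_entries(response).items():
#     data = urllib.request.urlopen(uri).read()  # pre-signed, so no signing needed
```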
Create a paginator for an operation.
Returns an object that can wait for some condition.
Retrieves a list of channels.
See also: AWS API Documentation
Request Syntax
response = client.list_channels(
nextToken='string',
maxResults=123
)
The maximum number of results to return in this request.
The default value is 100.
dict
Response Syntax
{
'channelSummaries': [
{
'channelName': 'string',
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1)
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
channelSummaries (list) --
A list of "ChannelSummary" objects.
(dict) --
A summary of information about a channel.
channelName (string) --
The name of the channel.
status (string) --
The status of the channel.
creationTime (datetime) --
When the channel was created.
lastUpdateTime (datetime) --
The last time the channel was updated.
nextToken (string) --
The token to retrieve the next set of results, or null if there are no more results.
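All of the list_* operations below page through results the same way: call again with the returned "nextToken" until it is absent. A generic drain loop might look like the sketch below; the key name 'channelSummaries' in the usage comment matches list_channels, and the same helper works for the other list operations with their respective keys.

```python
def list_all(page_fn, key, **kwargs):
    """Drain a paginated list_* call by following nextToken until it is absent."""
    items, token = [], None
    while True:
        params = dict(kwargs)
        if token:
            params['nextToken'] = token
        page = page_fn(**params)
        items.extend(page.get(key, []))
        token = page.get('nextToken')
        if not token:
            return items

# channels = list_all(client.list_channels, 'channelSummaries', maxResults=100)
```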
Lists information about data set contents that have been created.
See also: AWS API Documentation
Request Syntax
response = client.list_dataset_contents(
datasetName='string',
nextToken='string',
maxResults=123,
scheduledOnOrAfter=datetime(2015, 1, 1),
scheduledBefore=datetime(2015, 1, 1)
)
[REQUIRED]
The name of the data set whose contents information you want to list.
dict
Response Syntax
{
'datasetContentSummaries': [
{
'version': 'string',
'status': {
'state': 'CREATING'|'SUCCEEDED'|'FAILED',
'reason': 'string'
},
'creationTime': datetime(2015, 1, 1),
'scheduleTime': datetime(2015, 1, 1)
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
datasetContentSummaries (list) --
Summary information about data set contents that have been created.
(dict) --
Summary information about data set contents.
version (string) --
The version of the data set contents.
status (dict) --
The status of the data set contents.
state (string) --
The state of the data set contents. Can be one of "CREATING", "SUCCEEDED" or "FAILED".
reason (string) --
The reason the data set contents are in this state.
creationTime (datetime) --
The actual time the creation of the data set contents was started.
scheduleTime (datetime) --
The time the creation of the data set contents was scheduled to start.
nextToken (string) --
The token to retrieve the next set of results, or null if there are no more results.
Retrieves information about data sets.
See also: AWS API Documentation
Request Syntax
response = client.list_datasets(
nextToken='string',
maxResults=123
)
The maximum number of results to return in this request.
The default value is 100.
dict
Response Syntax
{
'datasetSummaries': [
{
'datasetName': 'string',
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1),
'triggers': [
{
'schedule': {
'expression': 'string'
},
'dataset': {
'name': 'string'
}
},
],
'actions': [
{
'actionName': 'string',
'actionType': 'QUERY'|'CONTAINER'
},
]
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
datasetSummaries (list) --
A list of "DatasetSummary" objects.
(dict) --
A summary of information about a data set.
datasetName (string) --
The name of the data set.
status (string) --
The status of the data set.
creationTime (datetime) --
The time the data set was created.
lastUpdateTime (datetime) --
The last time the data set was updated.
triggers (list) --
A list of triggers. A trigger causes data set content to be populated at a specified time interval or when another data set is populated. The list of triggers can be empty or contain up to five DatasetTrigger objects.
(dict) --
The "DatasetTrigger" that specifies when the data set is automatically updated.
schedule (dict) --
The "Schedule" when the trigger is initiated.
expression (string) --
The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the Amazon CloudWatch documentation.
dataset (dict) --
The data set whose content creation triggers the creation of this data set's contents.
name (string) --
The name of the data set whose content generation triggers the new data set content generation.
actions (list) --
A list of "DatasetActionSummary" objects.
(dict) --
actionName (string) --
The name of the action which automatically creates the data set's contents.
actionType (string) --
The type of action by which the data set's contents are automatically created.
nextToken (string) --
The token to retrieve the next set of results, or null if there are no more results.
Retrieves a list of data stores.
See also: AWS API Documentation
Request Syntax
response = client.list_datastores(
nextToken='string',
maxResults=123
)
The maximum number of results to return in this request.
The default value is 100.
dict
Response Syntax
{
'datastoreSummaries': [
{
'datastoreName': 'string',
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1)
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
datastoreSummaries (list) --
A list of "DatastoreSummary" objects.
(dict) --
A summary of information about a data store.
datastoreName (string) --
The name of the data store.
status (string) --
The status of the data store.
creationTime (datetime) --
When the data store was created.
lastUpdateTime (datetime) --
The last time the data store was updated.
nextToken (string) --
The token to retrieve the next set of results, or null if there are no more results.
Retrieves a list of pipelines.
See also: AWS API Documentation
Request Syntax
response = client.list_pipelines(
nextToken='string',
maxResults=123
)
The maximum number of results to return in this request.
The default value is 100.
dict
Response Syntax
{
'pipelineSummaries': [
{
'pipelineName': 'string',
'reprocessingSummaries': [
{
'id': 'string',
'status': 'RUNNING'|'SUCCEEDED'|'CANCELLED'|'FAILED',
'creationTime': datetime(2015, 1, 1)
},
],
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1)
},
],
'nextToken': 'string'
}
Response Structure
(dict) --
pipelineSummaries (list) --
A list of "PipelineSummary" objects.
(dict) --
A summary of information about a pipeline.
pipelineName (string) --
The name of the pipeline.
reprocessingSummaries (list) --
A summary of information about the pipeline reprocessing.
(dict) --
Information about pipeline reprocessing.
id (string) --
The 'reprocessingId' returned by "StartPipelineReprocessing".
status (string) --
The status of the pipeline reprocessing.
creationTime (datetime) --
The time the pipeline reprocessing was created.
creationTime (datetime) --
When the pipeline was created.
lastUpdateTime (datetime) --
When the pipeline was last updated.
nextToken (string) --
The token to retrieve the next set of results, or null if there are no more results.
Lists the tags (metadata) which you have assigned to the resource.
See also: AWS API Documentation
Request Syntax
response = client.list_tags_for_resource(
resourceArn='string'
)
[REQUIRED]
The ARN of the resource whose tags you want to list.
{
'tags': [
{
'key': 'string',
'value': 'string'
},
]
}
Response Structure
The tags (metadata) which you have assigned to the resource.
A set of key/value pairs which are used to manage the resource.
The tag's key.
The tag's value.
Sets or updates the AWS IoT Analytics logging options.
Note that if you update the value of any loggingOptions field, it takes up to one minute for the change to take effect. Also, if you change the policy attached to the role you specified in the roleArn field (for example, to correct an invalid policy) it takes up to 5 minutes for that change to take effect.
See also: AWS API Documentation
Request Syntax
response = client.put_logging_options(
loggingOptions={
'roleArn': 'string',
'level': 'ERROR',
'enabled': True|False
}
)
[REQUIRED]
The new values of the AWS IoT Analytics logging options.
The ARN of the role that grants permission to AWS IoT Analytics to perform logging.
The logging level. Currently, only "ERROR" is supported.
If true, logging is enabled for AWS IoT Analytics.
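Since "ERROR" is the only supported level, a small builder keeps the call hard to get wrong. The role ARN in the commented call is a placeholder; this is a sketch, not a prescribed pattern.

```python
def logging_options(role_arn, enabled=True):
    """Build the loggingOptions structure; "ERROR" is the only supported level."""
    return {'roleArn': role_arn, 'level': 'ERROR', 'enabled': enabled}

# client.put_logging_options(loggingOptions=logging_options(
#     'arn:aws:iam::123456789012:role/my_logging_role'))
```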
Simulates the results of running a pipeline activity on a message payload.
See also: AWS API Documentation
Request Syntax
response = client.run_pipeline_activity(
pipelineActivity={
'channel': {
'name': 'string',
'channelName': 'string',
'next': 'string'
},
'lambda': {
'name': 'string',
'lambdaName': 'string',
'batchSize': 123,
'next': 'string'
},
'datastore': {
'name': 'string',
'datastoreName': 'string'
},
'addAttributes': {
'name': 'string',
'attributes': {
'string': 'string'
},
'next': 'string'
},
'removeAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'selectAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'filter': {
'name': 'string',
'filter': 'string',
'next': 'string'
},
'math': {
'name': 'string',
'attribute': 'string',
'math': 'string',
'next': 'string'
},
'deviceRegistryEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
},
'deviceShadowEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
}
},
payloads=[
b'bytes',
]
)
[REQUIRED]
The pipeline activity that is run. This must not be a 'channel' activity or a 'datastore' activity, because these activities are used in a pipeline only to load the original message and to store the (possibly) transformed message. If a 'lambda' activity is specified, only short-running Lambda functions (those with a timeout of 30 seconds or less) can be used.
Determines the source of the messages to be processed.
The name of the 'channel' activity.
The name of the channel from which the messages are processed.
The next activity in the pipeline.
Runs a Lambda function to modify the message.
The name of the 'lambda' activity.
The name of the Lambda function that is run on the message.
The number of messages passed to the Lambda function for processing.
The AWS Lambda function must be able to process all of these messages within five minutes, which is the maximum timeout duration for Lambda functions.
The next activity in the pipeline.
Specifies where to store the processed message data.
The name of the 'datastore' activity.
The name of the data store where processed messages are stored.
Adds other attributes based on existing attributes in the message.
The name of the 'addAttributes' activity.
A list of 1-50 "AttributeNameMapping" objects that map an existing attribute to a new attribute.
Note
The existing attributes remain in the message, so if you want to remove the originals, use "RemoveAttributeActivity".
The next activity in the pipeline.
Removes attributes from a message.
The name of the 'removeAttributes' activity.
A list of 1-50 attributes to remove from the message.
The next activity in the pipeline.
Creates a new message using only the specified attributes from the original message.
The name of the 'selectAttributes' activity.
A list of the attributes to select from the message.
The next activity in the pipeline.
Filters a message based on its attributes.
The name of the 'filter' activity.
An expression that looks like a SQL WHERE clause that must return a Boolean value.
The next activity in the pipeline.
Computes an arithmetic expression using the message's attributes and adds it to the message.
The name of the 'math' activity.
The name of the attribute that contains the result of the math operation.
An expression that uses one or more existing attributes and must return an integer value.
The next activity in the pipeline.
Adds data from the AWS IoT device registry to your message.
The name of the 'deviceRegistryEnrich' activity.
The name of the attribute that is added to the message.
The name of the IoT device whose registry information is added to the message.
The ARN of the role that allows access to the device's registry information.
The next activity in the pipeline.
Adds information from the AWS IoT Device Shadows service to a message.
The name of the 'deviceShadowEnrich' activity.
The name of the attribute that is added to the message.
The name of the IoT device whose shadow information is added to the message.
The ARN of the role that allows access to the device's shadow.
The next activity in the pipeline.
[REQUIRED]
The sample message payloads on which the pipeline activity is run.
dict
Response Syntax
{
'payloads': [
b'bytes',
],
'logResult': 'string'
}
Response Structure
(dict) --
payloads (list) --
The enriched or transformed sample message payloads as base64-encoded strings. (The results of running the pipeline activity on each input sample message payload, encoded in base64.)
logResult (string) --
In case the pipeline activity fails, the log message that is generated.
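As a sketch of testing a single activity against sample data: the 'math' activity and attribute names below are hypothetical, converting a Fahrenheit reading to Celsius. Payloads are passed as raw bytes; per the response structure above, the transformed payloads come back base64-encoded.

```python
import json

# A hypothetical 'math' activity: adds a temp_c attribute computed from temp_f.
activity = {
    'math': {
        'name': 'to_celsius',
        'attribute': 'temp_c',
        'math': '(temp_f - 32) / 1.8',
    }
}
payloads = [json.dumps({'temp_f': 86}).encode()]

# response = client.run_pipeline_activity(pipelineActivity=activity, payloads=payloads)
# response['payloads'] then holds the transformed sample messages.
```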
Retrieves a sample of messages from the specified channel ingested during the specified timeframe. Up to 10 messages can be retrieved.
See also: AWS API Documentation
Request Syntax
response = client.sample_channel_data(
channelName='string',
maxMessages=123,
startTime=datetime(2015, 1, 1),
endTime=datetime(2015, 1, 1)
)
[REQUIRED]
The name of the channel whose message samples are retrieved.
dict
Response Syntax
{
'payloads': [
b'bytes',
]
}
Response Structure
(dict) --
payloads (list) --
The list of message samples. Each sample message is returned as a base64-encoded string.
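If the sampled messages are JSON documents (an assumption — payload format depends on what the devices publish), decoding them is a one-liner. The channel name in the commented call is a placeholder.

```python
import json

def decode_samples(response):
    """Parse each sampled payload as JSON, assuming the messages are JSON documents."""
    return [json.loads(p) for p in response['payloads']]

# response = client.sample_channel_data(channelName='my_channel', maxMessages=10)
# for msg in decode_samples(response):
#     print(msg)
```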
Starts the reprocessing of raw message data through the pipeline.
See also: AWS API Documentation
Request Syntax
response = client.start_pipeline_reprocessing(
pipelineName='string',
startTime=datetime(2015, 1, 1),
endTime=datetime(2015, 1, 1)
)
[REQUIRED]
The name of the pipeline on which to start reprocessing.
dict
Response Syntax
{
'reprocessingId': 'string'
}
Response Structure
(dict) --
reprocessingId (string) --
The ID of the pipeline reprocessing activity that was started.
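A sketch of reprocessing a recent time window — the pipeline name and window length are placeholders, and the returned ID is what "CancelPipelineReprocessing" (above) expects if the task must be aborted:

```python
from datetime import datetime, timedelta

def reprocess_window(client, pipeline_name, days_back=7):
    """Reprocess the last `days_back` days of raw channel data through a pipeline."""
    end = datetime.utcnow()
    resp = client.start_pipeline_reprocessing(
        pipelineName=pipeline_name,
        startTime=end - timedelta(days=days_back),
        endTime=end,
    )
    # Keep the ID: cancel_pipeline_reprocessing needs it to abort the task.
    return resp['reprocessingId']
```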
Adds to or modifies the tags of the given resource. Tags are metadata which can be used to manage a resource.
See also: AWS API Documentation
Request Syntax
response = client.tag_resource(
resourceArn='string',
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
[REQUIRED]
The ARN of the resource whose tags you want to modify.
[REQUIRED]
The new or modified tags for the resource.
A set of key/value pairs which are used to manage the resource.
The tag's key.
The tag's value.
dict
Response Syntax
{}
Response Structure
Removes the given tags (metadata) from the resource.
See also: AWS API Documentation
Request Syntax
response = client.untag_resource(
resourceArn='string',
tagKeys=[
'string',
]
)
[REQUIRED]
The ARN of the resource whose tags you want to remove.
[REQUIRED]
The keys of those tags which you want to remove.
dict
Response Syntax
{}
Response Structure
Updates the settings of a channel.
See also: AWS API Documentation
Request Syntax
response = client.update_channel(
channelName='string',
retentionPeriod={
'unlimited': True|False,
'numberOfDays': 123
}
)
[REQUIRED]
The name of the channel to be updated.
How long, in days, message data is kept for the channel.
If true, message data is kept indefinitely.
The number of days that message data is kept. The "unlimited" parameter must be false.
None
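The retentionPeriod structure above encodes a constraint — "unlimited" must be false whenever "numberOfDays" is set — so a small builder avoids an invalid combination. The channel name in the commented call is a placeholder; the same structure is used by update_datastore and update_dataset below.

```python
def retention_period(days=None):
    """Retention for a channel or data store: unlimited when days is None,
    otherwise a day count with unlimited explicitly set to False."""
    if days is None:
        return {'unlimited': True}
    return {'unlimited': False, 'numberOfDays': days}

# client.update_channel(channelName='my_channel',
#                       retentionPeriod=retention_period(30))
```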
Updates the settings of a data set.
See also: AWS API Documentation
Request Syntax
response = client.update_dataset(
datasetName='string',
actions=[
{
'actionName': 'string',
'queryAction': {
'sqlQuery': 'string',
'filters': [
{
'deltaTime': {
'offsetSeconds': 123,
'timeExpression': 'string'
}
},
]
},
'containerAction': {
'image': 'string',
'executionRoleArn': 'string',
'resourceConfiguration': {
'computeType': 'ACU_1'|'ACU_2',
'volumeSizeInGB': 123
},
'variables': [
{
'name': 'string',
'stringValue': 'string',
'doubleValue': 123.0,
'datasetContentVersionValue': {
'datasetName': 'string'
},
'outputFileUriValue': {
'fileName': 'string'
}
},
]
}
},
],
triggers=[
{
'schedule': {
'expression': 'string'
},
'dataset': {
'name': 'string'
}
},
],
contentDeliveryRules=[
{
'entryName': 'string',
'destination': {
'iotEventsDestinationConfiguration': {
'inputName': 'string',
'roleArn': 'string'
}
}
},
],
retentionPeriod={
'unlimited': True|False,
'numberOfDays': 123
}
)
[REQUIRED]
The name of the data set to update.
[REQUIRED]
A list of "DatasetAction" objects.
A "DatasetAction" object that specifies how data set contents are automatically created.
The name of the data set action by which data set contents are automatically created.
An "SqlQueryDatasetAction" object that uses an SQL query to automatically create data set contents.
A SQL query string.
Pre-filters applied to message data.
Information which is used to filter message data, to segregate it according to the time frame in which it arrives.
Used to limit data to that which has arrived since the last execution of the action.
The number of seconds of estimated "in flight" lag time of message data. When you create data set contents using message data from a specified time frame, some message data may still be "in flight" when processing begins, and so will not arrive in time to be processed. Use this field to make allowances for the "in flight" time of your message data, so that data not processed from a previous time frame will be included with the next time frame. Without this, missed message data would be excluded from processing during the next time frame as well, because its timestamp places it within the previous time frame.
An expression by which the time of the message data may be determined. This may be the name of a timestamp field, or a SQL expression which is used to derive the time the message data was generated.
Information which allows the system to run a containerized application in order to create the data set contents. The application must be in a Docker container along with any needed support libraries.
The ARN of the Docker container stored in your account. The Docker container contains an application and needed support libraries and is used to generate data set contents.
The ARN of the role which gives permission to the system to access needed resources in order to run the "containerAction". This includes, at minimum, permission to retrieve the data set contents which are the input to the containerized application.
Configuration of the resource which executes the "containerAction".
The type of the compute resource used to execute the "containerAction". Possible values are: ACU_1 (vCPU=4, memory=16GiB) or ACU_2 (vCPU=8, memory=32GiB).
The size (in GB) of the persistent storage available to the resource instance used to execute the "containerAction" (min: 1, max: 50).
The values of variables used within the context of the execution of the containerized application (basically, parameters passed to the application). Each variable must have a name and a value given by one of "stringValue", "datasetContentVersionValue", or "outputFileUriValue".
An instance of a variable to be passed to the "containerAction" execution. Each variable must have a name and a value given by one of "stringValue", "datasetContentVersionValue", or "outputFileUriValue".
The name of the variable.
The value of the variable as a string.
The value of the variable as a double (numeric).
The value of the variable as a structure that specifies a data set content version.
The name of the data set whose latest contents are used as input to the notebook or application.
The value of the variable as a structure that specifies an output file URI.
The URI of the location where data set contents are stored, usually the URI of a file in an S3 bucket.
A list of "DatasetTrigger" objects. The list can be empty or can contain up to five DatasetTrigger objects.
The "DatasetTrigger" that specifies when the data set is automatically updated.
The "Schedule" when the trigger is initiated.
The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the Amazon CloudWatch documentation.
The data set whose content creation triggers the creation of this data set's contents.
The name of the data set whose content generation triggers the new data set content generation.
When data set contents are created they are delivered to destinations specified here.
When data set contents are created they are delivered to the destination specified here.
The name of the data set content delivery rules entry.
The destination to which data set contents are delivered.
Configuration information for delivery of data set contents to AWS IoT Events.
The name of the AWS IoT Events input to which data set contents are delivered.
The ARN of the role which grants AWS IoT Analytics permission to deliver data set contents to an AWS IoT Events input.
How long, in days, message data is kept for the data set.
If true, message data is kept indefinitely.
The number of days that message data is kept. The "unlimited" parameter must be false.
None
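Pulling the pieces above together, a minimal update_dataset request needs one action and, optionally, a trigger. The sketch below builds a request with a single SQL action and a schedule trigger; the dataset name, query, and cron expression are all placeholders.

```python
def sql_dataset_request(dataset_name, sql, cron):
    """Request body for update_dataset: one SQL action plus a schedule trigger."""
    return {
        'datasetName': dataset_name,
        'actions': [{'actionName': 'sqlAction',
                     'queryAction': {'sqlQuery': sql}}],
        'triggers': [{'schedule': {'expression': cron}}],
    }

req = sql_dataset_request('my_dataset',
                          'SELECT * FROM my_datastore',
                          'cron(0 12 * * ? *)')
# client.update_dataset(**req)
```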
Updates the settings of a data store.
See also: AWS API Documentation
Request Syntax
response = client.update_datastore(
datastoreName='string',
retentionPeriod={
'unlimited': True|False,
'numberOfDays': 123
}
)
[REQUIRED]
The name of the data store to be updated.
How long, in days, message data is kept for the data store.
If true, message data is kept indefinitely.
The number of days that message data is kept. The "unlimited" parameter must be false.
None
Updates the settings of a pipeline.
See also: AWS API Documentation
Request Syntax
response = client.update_pipeline(
pipelineName='string',
pipelineActivities=[
{
'channel': {
'name': 'string',
'channelName': 'string',
'next': 'string'
},
'lambda': {
'name': 'string',
'lambdaName': 'string',
'batchSize': 123,
'next': 'string'
},
'datastore': {
'name': 'string',
'datastoreName': 'string'
},
'addAttributes': {
'name': 'string',
'attributes': {
'string': 'string'
},
'next': 'string'
},
'removeAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'selectAttributes': {
'name': 'string',
'attributes': [
'string',
],
'next': 'string'
},
'filter': {
'name': 'string',
'filter': 'string',
'next': 'string'
},
'math': {
'name': 'string',
'attribute': 'string',
'math': 'string',
'next': 'string'
},
'deviceRegistryEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
},
'deviceShadowEnrich': {
'name': 'string',
'attribute': 'string',
'thingName': 'string',
'roleArn': 'string',
'next': 'string'
}
},
]
)
[REQUIRED]
The name of the pipeline to update.
[REQUIRED]
A list of "PipelineActivity" objects.
The list can contain 1-25 PipelineActivity objects. Activities perform transformations on your messages, such as removing, renaming or adding message attributes; filtering messages based on attribute values; invoking your Lambda functions on messages for advanced processing; or performing mathematical transformations to normalize device data.
An activity that performs a transformation on a message.
Determines the source of the messages to be processed.
The name of the 'channel' activity.
The name of the channel from which the messages are processed.
The next activity in the pipeline.
Runs a Lambda function to modify the message.
The name of the 'lambda' activity.
The name of the Lambda function that is run on the message.
The number of messages passed to the Lambda function for processing.
The AWS Lambda function must be able to process all of these messages within five minutes, which is the maximum timeout duration for Lambda functions.
The next activity in the pipeline.
Specifies where to store the processed message data.
The name of the 'datastore' activity.
The name of the data store where processed messages are stored.
Adds other attributes based on existing attributes in the message.
The name of the 'addAttributes' activity.
A list of 1-50 "AttributeNameMapping" objects that map an existing attribute to a new attribute.
Note
The existing attributes remain in the message, so if you want to remove the originals, use "RemoveAttributeActivity".
The next activity in the pipeline.
Removes attributes from a message.
The name of the 'removeAttributes' activity.
A list of 1-50 attributes to remove from the message.
The next activity in the pipeline.
Creates a new message using only the specified attributes from the original message.
The name of the 'selectAttributes' activity.
A list of the attributes to select from the message.
The next activity in the pipeline.
Filters a message based on its attributes.
The name of the 'filter' activity.
An expression that looks like a SQL WHERE clause that must return a Boolean value.
The next activity in the pipeline.
Computes an arithmetic expression using the message's attributes and adds it to the message.
The name of the 'math' activity.
The name of the attribute that contains the result of the math operation.
An expression that uses one or more existing attributes and must return an integer value.
The next activity in the pipeline.
Adds data from the AWS IoT device registry to your message.
The name of the 'deviceRegistryEnrich' activity.
The name of the attribute that is added to the message.
The name of the IoT device whose registry information is added to the message.
The ARN of the role that allows access to the device's registry information.
The next activity in the pipeline.
Adds information from the AWS IoT Device Shadows service to a message.
The name of the 'deviceShadowEnrich' activity.
The name of the attribute that is added to the message.
The name of the IoT device whose shadow information is added to the message.
The ARN of the role that allows access to the device's shadow.
The next activity in the pipeline.
None
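A pipeline is a linked list of activities: each stage's 'next' field names the activity that follows it. The sketch below wires a three-stage pipeline (channel, removeAttributes, datastore); every name in it is a placeholder.

```python
# A minimal three-stage pipeline: channel -> removeAttributes -> datastore.
# Each stage's 'next' must name the following activity.
activities = [
    {'channel': {'name': 'source',
                 'channelName': 'my_channel',
                 'next': 'drop_debug'}},
    {'removeAttributes': {'name': 'drop_debug',
                          'attributes': ['debug_flag'],
                          'next': 'sink'}},
    {'datastore': {'name': 'sink',
                   'datastoreName': 'my_datastore'}},
]
# client.update_pipeline(pipelineName='my_pipeline',
#                        pipelineActivities=activities)
```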
The available paginators are:
paginator = client.get_paginator('list_channels')
Creates an iterator that will paginate through responses from IoTAnalytics.Client.list_channels().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
{
'channelSummaries': [
{
'channelName': 'string',
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1)
},
],
'NextToken': 'string'
}
Response Structure
A list of "ChannelSummary" objects.
A summary of information about a channel.
The name of the channel.
The status of the channel.
When the channel was created.
The last time the channel was updated.
A token to resume pagination.
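Rather than handling NextToken by hand, the paginator can drive the loop. A sketch under the assumption that you simply want every channel summary across all pages:

```python
def iter_channels(client, page_size=50):
    """Yield every ChannelSummary, letting the paginator follow NextToken."""
    paginator = client.get_paginator('list_channels')
    for page in paginator.paginate(PaginationConfig={'PageSize': page_size}):
        yield from page['channelSummaries']
```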
paginator = client.get_paginator('list_dataset_contents')
Creates an iterator that will paginate through responses from IoTAnalytics.Client.list_dataset_contents().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
datasetName='string',
scheduledOnOrAfter=datetime(2015, 1, 1),
scheduledBefore=datetime(2015, 1, 1),
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
[REQUIRED]
The name of the data set whose contents information you want to list.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'datasetContentSummaries': [
{
'version': 'string',
'status': {
'state': 'CREATING'|'SUCCEEDED'|'FAILED',
'reason': 'string'
},
'creationTime': datetime(2015, 1, 1),
'scheduleTime': datetime(2015, 1, 1)
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
datasetContentSummaries (list) --
Summary information about data set contents that have been created.
(dict) --
Summary information about data set contents.
version (string) --
The version of the data set contents.
status (dict) --
The status of the data set contents.
state (string) --
The state of the data set contents. Can be one of "CREATING", "SUCCEEDED" or "FAILED".
reason (string) --
The reason the data set contents are in this state.
creationTime (datetime) --
The actual time the creation of the data set contents was started.
scheduleTime (datetime) --
The time the creation of the data set contents was scheduled to start.
NextToken (string) --
A token to resume pagination.
paginator = client.get_paginator('list_datasets')
Creates an iterator that will paginate through responses from IoTAnalytics.Client.list_datasets().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
{
'datasetSummaries': [
{
'datasetName': 'string',
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1),
'triggers': [
{
'schedule': {
'expression': 'string'
},
'dataset': {
'name': 'string'
}
},
],
'actions': [
{
'actionName': 'string',
'actionType': 'QUERY'|'CONTAINER'
},
]
},
],
'NextToken': 'string'
}
Response Structure
A list of "DatasetSummary" objects.
A summary of information about a data set.
The name of the data set.
The status of the data set.
The time the data set was created.
The last time the data set was updated.
A list of triggers. A trigger causes data set content to be populated at a specified time interval or when another data set is populated. The list of triggers can be empty or contain up to five DatasetTrigger objects.
The "DatasetTrigger" that specifies when the data set is automatically updated.
The "Schedule" when the trigger is initiated.
The expression that defines when to trigger an update. For more information, see Schedule Expressions for Rules in the Amazon CloudWatch documentation.
The data set whose content creation triggers the creation of this data set's contents.
The name of the data set whose content generation triggers the new data set content generation.
A list of "DatasetActionSummary" objects.
The name of the action which automatically creates the data set's contents.
The type of action by which the data set's contents are automatically created.
A token to resume pagination.
paginator = client.get_paginator('list_datastores')
Creates an iterator that will paginate through responses from IoTAnalytics.Client.list_datastores().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
{
'datastoreSummaries': [
{
'datastoreName': 'string',
'status': 'CREATING'|'ACTIVE'|'DELETING',
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1)
},
],
'NextToken': 'string'
}
Response Structure
A list of "DatastoreSummary" objects.
A summary of information about a data store.
The name of the data store.
The status of the data store.
When the data store was created.
The last time the data store was updated.
A token to resume pagination.
paginator = client.get_paginator('list_pipelines')
Creates an iterator that will paginate through responses from IoTAnalytics.Client.list_pipelines().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items, then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
{
'pipelineSummaries': [
{
'pipelineName': 'string',
'reprocessingSummaries': [
{
'id': 'string',
'status': 'RUNNING'|'SUCCEEDED'|'CANCELLED'|'FAILED',
'creationTime': datetime(2015, 1, 1)
},
],
'creationTime': datetime(2015, 1, 1),
'lastUpdateTime': datetime(2015, 1, 1)
},
],
'NextToken': 'string'
}
Response Structure
A list of "PipelineSummary" objects.
A summary of information about a pipeline.
The name of the pipeline.
A summary of information about the pipeline reprocessing.
Information about pipeline reprocessing.
The 'reprocessingId' returned by "StartPipelineReprocessing".
The status of the pipeline reprocessing.
The time the pipeline reprocessing was created.
When the pipeline was created.
When the pipeline was last updated.
A token to resume pagination.